
CCD and CMOS Imaging Sensors

Description

This cluster of papers focuses on advances in CCD and CMOS image sensor technology, including high-speed imaging, low-noise sensor design, photon-counting strategies, dynamic range enhancement, radiation effects, pixel-level ADC integration, temporal noise analysis, logarithmic response sensors, and applications in biomedical imaging.

Keywords

CMOS Image Sensors; High-Speed Imaging; Low-Noise Sensors; Photon Counting; Dynamic Range; Radiation Effects; Pixel-Level ADC; Temporal Noise Analysis; Logarithmic Response; Biomedical Imaging

Preface. DIGITAL STILL CAMERAS AT A GLANCE (Kenji Toyoda): What Is a Digital Still Camera?; History of Digital Still Cameras; Variations of Digital Still Cameras; Basic Structure of Digital Still Cameras; Applications of Digital Still Cameras. OPTICS IN DIGITAL STILL CAMERAS (Takeshi Koyama): Optical System Fundamentals and Standards for Evaluating Optical Performance; Characteristics of DSC Imaging Optics; Important Aspects of Imaging Optics Design for DSCs; DSC Imaging Lens Zoom Types and Their Applications; Conclusion; References. BASICS OF IMAGE SENSORS (Junichi Nakamura): Functions of an Image Sensor; Photodetector in a Pixel; Noise; Photoconversion Characteristics; Array Performance; Optical Format and Pixel Size; CCD Image Sensor vs. CMOS Image Sensor; References. CCD IMAGE SENSORS (Tetsuo Yamada): Basics of CCDs; Structures and Characteristics of CCD Image Sensors; DSC Applications; Future Prospects; References. CMOS IMAGE SENSORS (Isao Takayanagi): Introduction to CMOS Image Sensors; CMOS Active Pixel Technology; Signal Processing and Noise Behavior; CMOS Image Sensors for DSC Applications; Future Prospects of CMOS Image Sensors for DSC Applications; References. EVALUATION OF IMAGE SENSORS (Toyokazu Mizoguchi): What Is Evaluation of Image Sensors?; Evaluation Environment; Evaluation Methods. COLOR THEORY AND ITS APPLICATION TO DIGITAL STILL CAMERAS (Po-Chieh Hung): Color Theory; Camera Spectral Sensitivity; Characterization of a Camera; White Balance; Conversion for Display (Color Management); Summary; References. IMAGE-PROCESSING ALGORITHMS (Kazuhiro Sato): Basic Image-Processing Algorithms; Camera Control Algorithm; Advanced Image Processing: How to Obtain Improved Image Quality; References. IMAGE-PROCESSING ENGINES (Seiichiro Watanabe): Key Characteristics of an Image-Processing Engine; Imaging Engine Architecture Comparison; Analog Front End (AFE); Digital Back End (DBE); Future Design Engines; References. EVALUATION OF IMAGE QUALITY (Hideaki Yoshida): What Is Image Quality?; General Items or Parameters; Detailed Items or Factors; Standards Relating to Image Quality. SOME THOUGHTS ON FUTURE DIGITAL STILL CAMERAS (Eric R. Fossum): The Future of DSC Image Sensors; Some Future Digital Cameras; References.
We report an imaging sensor capable of recording the optical properties of partially polarized light by monolithically integrating aluminum nanowire optical filters with a CCD imaging array. The imaging sensor, composed of 1000 by 1000 imaging elements with 7.4 μm pixel pitch, is covered with an array of pixel-pitch matched nanowire optical filters with four different orientations offset by 45°. The polarization imaging sensor has a signal-to-noise ratio of 45 dB and captures intensity, angle and degree of linear polarization in the visible spectrum at 40 frames per second with 300 mW of power consumption.
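With four analyzer orientations at 0°, 45°, 90°, and 135°, the three reported quantities follow from the linear Stokes parameters. A minimal NumPy sketch under that standard convention (function and variable names are illustrative, not from the paper):

import numpy as np

def linear_stokes(i0, i45, i90, i135):
    # Each orthogonal analyzer pair sums to the total intensity S0,
    # so average the two pairs.
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90                                     # 0/90 difference
    s2 = i45 - i135                                   # 45/135 difference
    dolp = np.hypot(s1, s2) / np.maximum(s0, 1e-12)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                   # angle of linear polarization, rad
    return s0, dolp, aolp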
The charge-coupled device dominates an ever-increasing variety of scientific imaging and spectroscopy applications. Recent experience indicates, however, that the full potential of CCD performance lies well beyond that realized in devices currently available. Test data suggest that major improvements are feasible in spectral response, charge collection, charge transfer, and readout noise. These properties, their measurement in existing CCDs, and their potential for future improvement are discussed in this paper.
An arbitrated address-event imager has been designed and fabricated in a 0.6-μm CMOS process. The imager is composed of 80 × 60 pixels of 32 × 30 μm. The value of the light intensity collected by each photosensitive element is inversely proportional to the pixel's interspike time interval. The readout of each spike is initiated by the individual pixel; therefore, the available output bandwidth is allocated according to pixel output demand. This encoding of light intensities favors brighter pixels, equalizes the number of integrated photons across light intensity, and minimizes power consumption. Tests conducted on the imager showed a large output dynamic range of 180 dB (under bright local illumination) for an individual pixel. The array, on the other hand, produced a dynamic range of 120 dB (under uniform bright illumination and when no lower bound was placed on the update rate per pixel). The dynamic range is 48.9 dB at 30 pixel updates/s. Power consumption is 3.4 mW in uniform indoor light with a mean event rate of 200 kHz, which updates each pixel 41.6 times per second. The imager is capable of updating each pixel 8300 times per second (under bright local illumination).
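Because intensity is encoded as the inverse of the interspike interval, a receiver can reconstruct a pixel's brightness directly from consecutive event timestamps. A sketch under that stated encoding (the constant k is an arbitrary calibration factor):

import numpy as np

def brightness_from_spikes(timestamps_s, k=1.0):
    # Interspike interval encodes light level: I is proportional to 1/dt.
    ts = np.sort(np.asarray(timestamps_s, dtype=float))
    return k / np.diff(ts)     # one brightness estimate per interval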
This paper introduces a new methodology to compute dense visual flow using the precise timings of spikes from an asynchronous event-based retina. Biological retinas, and their artificial counterparts, are totally asynchronous and data-driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework to estimate visual flow from the local properties of events' spatiotemporal space. We show that precise visual flow orientation and amplitude can be estimated using a local differential approach on the surface defined by coactive events. Experimental results are presented; they show that the method copes well with the high data sparseness and temporal resolution of event-based acquisition, which allows the computation of motion flow with microsecond accuracy at very low computational cost.
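The core of the local differential approach can be illustrated by fitting a plane t = ax + by + c to the (x, y, t) coordinates of coactive events in a neighborhood; the slope of the fitted time surface then gives the flow. A simplified reading of the method, not the authors' exact implementation:

import numpy as np

def flow_from_events(events):
    # events: iterable of (x, y, t) tuples from one spatial neighborhood.
    x, y, t = map(np.asarray, zip(*events))
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(A, t.astype(float), rcond=None)
    g2 = a * a + b * b         # squared norm of the time-surface gradient
    if g2 == 0.0:
        return 0.0, 0.0        # flat surface: no measurable motion
    return a / g2, b / g2      # velocity (vx, vy) in pixels per time unit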
We have constructed a large-format mosaic CCD camera for the Sloan Digital Sky Survey. The camera consists of two arrays: a photometric array that uses 30 2048 × 2048 SITe/Tektronix CCDs (24 μm pixels) with an effective imaging area of 720 cm², and an astrometric array that uses 24 400 × 2048 CCDs with the same pixel size, which will allow us to tie bright astrometric standard stars to the objects imaged in the photometric camera. The instrument will be used to carry out photometry essentially simultaneously in five color bands spanning the range accessible to silicon detectors on the ground, in time-delay-and-integrate (TDI) scanning mode. The photometric detectors are arrayed in the focal plane in six columns of five chips each, such that two scans cover a filled stripe 2.5° wide. This paper presents engineering and technical details of the camera.
The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA (304 × 240) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation (PWM) imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field-of-view. Pixels do not rely on external timing signals and independently and asynchronously request access to an (asynchronously arbitrated) output channel when they have new grayscale values to communicate. Pixels that are not stimulated visually do not produce output. The visual information acquired from the scene, temporal contrast and grayscale data, is communicated in the form of asynchronous address-events (AER), with the grayscale values being encoded in inter-event intervals. The pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level. Compression factors depend on scene activity and peak at ~1000 for static scenes. Due to the time-based encoding of the illumination information, very high dynamic range is achieved: intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution. A novel time-domain correlated double sampling (TCDS) method yields array FPN of <0.25% rms. SNR is >56 dB (9.3 bit) for >10 lx illuminance.
An optimal spectrum extraction procedure is described, and examples of its performance with CCD data are presented. The algorithm delivers the maximum possible signal-to-noise ratio while preserving spectrophotometric accuracy. The effects of moderate geometric distortion and of cosmic-ray hits on the spectrum are automatically accounted for. In tests with background-noise limited CCD spectra, optimal extraction offers a 70-percent gain in effective exposure time in comparison with conventional extraction procedures.
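In practice the optimal estimate weights each pixel by P/V, where P is the normalized spatial profile and V the pixel variance, so the flux at each wavelength is f = Σ(PD/V) / Σ(P²/V). A minimal sketch under those definitions (cosmic-ray masking and profile estimation omitted):

import numpy as np

def optimal_extraction(D, V, P):
    # D: sky-subtracted data (spatial x spectral); V: pixel variances;
    # P: spatial profile normalized to sum to 1 along axis 0.
    w = P / V
    flux = (w * D).sum(axis=0) / (w * P).sum(axis=0)
    var = 1.0 / (w * P).sum(axis=0)   # variance of the optimal estimate
    return flux, var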
The Image Reduction and Analysis Facility (IRAF) is a general purpose software system for the reduction and analysis of scientific data. The IRAF system provides a good selection of programs for general image processing and graphics applications, plus a large selection of programs for the reduction and analysis of optical astronomy data. The system also provides a complete modern scientific programming environment, making it straightforward for institutions using IRAF to add their own software to the system. Every effort has been made to make the system as portable and device independent as possible, so that the system may be used on a wide variety of host computers and operating systems with a wide variety of graphics and image display devices.
This paper describes a 128 × 128 pixel CMOS vision sensor. Each pixel independently and in continuous time quantizes local relative intensity changes to generate spike events. These events appear at the output of the sensor as an asynchronous stream of digital pixel addresses. These address-events signify scene reflectance change and have sub-millisecond timing precision. The output data rate depends on the dynamic content of the scene and is typically orders of magnitude lower than those of conventional frame-based imagers. By combining an active continuous-time front-end logarithmic photoreceptor with a self-timed switched-capacitor differencing circuit, the sensor achieves an array mismatch of 2.1% in relative intensity event threshold and a pixel bandwidth of 3 kHz under 1 klux scene illumination. Dynamic range is >120 dB and chip power consumption is 23 mW. Event latency shows weak light dependency with a minimum of 15 μs at >1 klux pixel illumination. The sensor is built in a 0.35 μm 4M2P process. It has 40 × 40 μm² pixels with 9.4% fill factor. By providing high pixel bandwidth, wide dynamic range, and precisely timed sparse digital output, this silicon retina provides an attractive combination of characteristics for low-latency dynamic vision under uncontrolled illumination with low post-processing requirements.
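The pixel's behavior is often summarized by a simple model: an event fires whenever the log intensity moves more than a contrast threshold away from the level memorized at the last event. A sketch of that model for one pixel sampled over time (the threshold value is illustrative, and intensities are assumed positive):

import numpy as np

def dvs_pixel_events(intensity, times, theta=0.15):
    # Emit (t, +1/-1) events when log intensity drifts past the threshold.
    log_i = np.log(np.asarray(intensity, dtype=float))
    ref = log_i[0]                 # memorized level at the last event
    events = []
    for t, li in zip(times, log_i):
        while li - ref > theta:    # brightness increased: ON event(s)
            ref += theta
            events.append((t, +1))
        while ref - li > theta:    # brightness decreased: OFF event(s)
            ref -= theta
            events.append((t, -1))
    return events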
The VideoToolbox is a free collection of two hundred C subroutines for Macintosh computers that calibrates and controls the computer-display interface to create accurately specified visual stimuli. High-level platform-independent languages like MATLAB are best for creating the numbers that describe the desired images. Low-level, computer-specific VideoToolbox routines control the hardware that transforms those numbers into a movie. Transcending the particular computer and language, we discuss the nature of the computer-display interface, and how to calibrate and control it.
The architecture of the edge detector presented is highly pipelined to perform the computations of gradient magnitude and direction for the output image samples. The chip design is based on a 2-μm, double-metal CMOS technology and was implemented using a silicon compiler system in less than 2 man-months. It is designed to operate with a 10-MHz two-phase clock, and it performs approximately 200 × 10⁶ additions/s to provide the required magnitude and direction outputs every clock cycle. The function of the chip has been demonstrated with a prototype system that performs image edge detection in real time.
Image subtraction is a method by which one image is matched against another by using a convolution kernel, so that they can be differenced to detect and measure variable objects. It has been demonstrated that constant optimal-kernel solutions can be derived over small sub-areas of dense stellar fields. Here we generalize the theory to the case of space-varying kernels. In particular, it is shown that the CPU cost required for this new extension of the method is almost the same as for fitting a constant kernel solution. It is also shown that constant flux scaling between the images (constant kernel integral) can be imposed in a simple way. The method is demonstrated with a series of Monte Carlo images. Differential PSF variations and differential rotation between the images are simulated. It is shown that the new method is able to achieve optimal results even in these difficult cases, thereby automatically correcting for these common instrumental problems. It is also demonstrated that the method does not suffer from problems associated with under-sampling of the images. Finally, the method is applied to images taken by the OGLE II collaboration. It is proved that, in comparison to the constant-kernel method, much larger sub-areas of the images can be used for the fit, while still maintaining the same accuracy in the subtracted image. This result is especially important in the case of variables located in low-density fields, like the Huchra lens. Many other useful applications of the method are possible for major astrophysical problems, supernova searches and Cepheid surveys in other galaxies to mention but two, and more will certainly show up, since variability searches are a major issue in astronomy.
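The constant-kernel solution that the space-varying method generalizes is an ordinary linear least-squares problem: each kernel pixel is a free coefficient multiplying a shifted copy of the reference image. A sketch of that constant-kernel case only (correlation convention; the space-varying extension expands each coefficient on polynomials of position):

import numpy as np

def fit_constant_kernel(ref, img, ksize=5):
    # Solve min_k || ref (*) k - img ||^2 over the image interior.
    h = ksize // 2
    target = img[h:-h, h:-h].ravel()
    cols = []
    for dy in range(-h, h + 1):
        for dx in range(-h, h + 1):
            cols.append(ref[h + dy:ref.shape[0] - h + dy,
                            h + dx:ref.shape[1] - h + dx].ravel())
    A = np.stack(cols, axis=1)      # one column per kernel pixel
    k, *_ = np.linalg.lstsq(A, target, rcond=None)
    return k.reshape(ksize, ksize)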
An unsupervised software "robot" that automatically and robustly reduces and analyzes CCD observations of photometric standard stars is described. The robot measures extinction coefficients and other photometric parameters in real time and, more carefully, on the next day. It also reduces and analyzes data from an all-sky 10 μm camera to detect clouds; photometric data taken during cloudy periods are automatically rejected. The robot reports its findings back to observers and data analysts via the World-Wide Web. It can be used to assess photometricity, and to build data on site conditions. The robot's automated and uniform site monitoring represents a minimum standard for any observing site with queue scheduling, a public data archive, or likely participation in any future National Virtual Observatory.
A 352 × 288 pixel CMOS image sensor chip with per-pixel single-slope ADC and dynamic memory in a standard digital 0.18-μm CMOS process is described. The chip performs "snapshot" image acquisition, parallel 8-bit A/D conversion, and digital readout at a continuous rate of 10,000 frames/s or 1 Gpixels/s with power consumption of 50 mW. Each pixel consists of a photogate circuit, a three-stage comparator, and an 8-bit 3T dynamic memory comprising a total of 37 transistors in 9.4 × 9.4 μm with a fill factor of 15%. The photogate quantum efficiency is 13.6%, and the sensor conversion gain is 13.1 μV/e⁻. At 1000 frames/s, measured integral nonlinearity is 0.22% over a 1-V range, rms temporal noise with digital CDS is 0.15%, and rms FPN with digital CDS is 0.027%. When operated at low frame rates, on-chip power management circuits permit complete power-down between each frame conversion and readout. The digitized pixel data is read out over a 64-bit (8-pixel) wide bus operating at 167 MHz, i.e., over 1.33 GB/s. The chip is suitable for general high-speed imaging applications as well as for the implementation of several still and standard video rate applications that benefit from high-speed capture, such as dynamic range enhancement, motion estimation and compensation, and image stabilization.
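A per-pixel single-slope ADC compares the pixel voltage against a shared ramp while a global counter runs; each pixel latches the counter value when the ramp crosses its voltage. A behavioral sketch of one conversion (voltages and resolution are illustrative):

import numpy as np

def single_slope_convert(v_pix, v_ref=1.0, bits=8):
    # Ramp sweeps 0..v_ref in 2**bits steps; the latched code equals the
    # number of steps for which the ramp stays below the pixel voltage.
    v = np.asarray(v_pix, dtype=float)
    steps = 1 << bits
    ramp = (np.arange(steps) + 0.5) * v_ref / steps
    codes = (v[..., np.newaxis] > ramp).sum(axis=-1)
    return np.clip(codes, 0, steps - 1)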
The role of CMOS image sensors has been changing greatly since their birth around the 1960s. Unlike in the past, current CMOS image sensors are becoming competitive with Charge-Coupled Device (CCD) technology. They offer many advantages with respect to CCDs, such as lower power consumption, lower voltage operation, on-chip functionality, and lower cost. Nevertheless, they are still noisier and less sensitive than CCDs. Noise and sensitivity are the key factors in competing with industrial and scientific CCDs. It must also be pointed out that there are several kinds of CMOS image sensors, each designed to satisfy the huge demand in different areas, such as digital photography, industrial vision, medical and space applications, electrostatic sensing, automotive, instrumentation, and 3D vision systems. In the wake of that, a lot of research has been carried out focusing on the problems to be solved, such as sensitivity, noise, power consumption, voltage operation, imaging speed, and dynamic range. In this paper, CMOS image sensors are reviewed, providing information on the latest advances achieved, their applications, the new challenges, and their limitations. In conclusion, the state of the art of CMOS image sensors is presented.
Event-based dynamic vision sensors (DVSs) asynchronously report log intensity changes. Their high dynamic range, sub-ms latency, and sparse output make them useful in applications such as robotics and real-time tracking. However, they discard absolute intensity information, which is useful for object recognition and classification. This paper presents a dynamic and active pixel vision sensor (DAVIS) which addresses this deficiency by outputting asynchronous DVS events and synchronous global shutter frames concurrently. The active pixel sensor (APS) circuits and the DVS circuits within a pixel share a single photodiode. Measurements from a 240 × 180 sensor array of 18.5 μm × 18.5 μm pixels fabricated in a 0.18 μm 6M1P CMOS image sensor (CIS) technology show a dynamic range of 130 dB with 11% contrast detection threshold, minimum 3 μs latency, and 3.5% contrast matching for the DVS pathway, and a 51 dB dynamic range with 0.5% FPN for the APS readout.
The pinned photodiode is the primary photodetector structure used in most CCD and CMOS image sensors. This paper reviews the development, physics, and technology of the pinned photodiode.
Charge-coupled devices (CCDs) are presently the technology of choice for most imaging applications. In the 23 years since their invention in 1970, they have evolved to a sophisticated level of performance. However, as with all technologies, we can be certain that they will be supplanted someday. In this paper, the Active Pixel Sensor (APS) technology is explored as a possible successor to the CCD. An active pixel is defined as a detector array technology that has at least one active transistor within the pixel unit cell. The APS eliminates the need for nearly perfect charge transfer, the Achilles' heel of CCDs. The requirement of perfect charge transfer makes CCDs radiation "soft," difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response. With the active pixel, the signal is driven from the pixel over metallic wires rather than being physically transported in the semiconductor. This paper makes a case for the development of APS technology. The state of the art is reviewed and the application of APS technology to future space-based scientific sensor systems is addressed.
Temporal noise sets the fundamental limit on image sensor performance, especially under low illumination and in video applications. In a CCD image sensor, temporal noise is primarily due to the photodetector shot noise and the output amplifier thermal and 1/f noise. CMOS image sensors suffer from higher noise than CCDs due to the additional pixel and column amplifier transistor thermal and 1/f noise. Noise analysis is further complicated by the time-varying circuit models, the fact that the reset transistor operates in subthreshold during reset, and the nonlinearity of the charge to voltage conversion, which is becoming more pronounced as CMOS technology scales. The paper presents a detailed and rigorous analysis of temporal noise due to thermal and shot noise sources in CMOS active pixel sensor (APS) that takes into consideration these complicating factors. Performing time-domain analysis, instead of the more traditional frequency-domain analysis, we find that the reset noise power due to thermal noise is at most half of its commonly quoted kT/C value. This result is corroborated by several published experimental data including data presented in this paper. The lower reset noise, however, comes at the expense of image lag. We find that alternative reset methods such as overdriving the reset transistor gate or using a pMOS transistor can alleviate lag, but at the expense of doubling the reset noise power. We propose a new reset method that alleviates lag without increasing reset noise.
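The headline result is easy to quantify: with soft (subthreshold) reset the noise power is kT/2C rather than kT/C, i.e. the rms noise drops by a factor of sqrt(2). A worked example with an illustrative 5 fF sense node (the capacitance is an assumption, not a figure from the paper):

import numpy as np

k_B, T, q = 1.380649e-23, 300.0, 1.602176634e-19   # J/K, K, C
C = 5e-15                                  # illustrative sense-node capacitance, F
n_full = np.sqrt(k_B * T * C) / q          # hard reset: kT/C noise charge, e- rms
n_soft = np.sqrt(k_B * T * C / 2) / q      # soft reset: half the noise power
print(f"hard reset: {n_full:.1f} e- rms, soft reset: {n_soft:.1f} e- rms")
# ~28.4 e- rms vs ~20.1 e- rms at room temperature.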
Dark Energy is the dominant constituent of the universe and we have little understanding of it. We describe a new project aimed at measuring the dark energy equation of state parameter, w, to a statistical precision of ~5% with four separate techniques. The survey will image 5000 deg² in the southern sky and collect 300 million galaxies, 30,000 galaxy clusters, and 2000 Type Ia supernovae. The survey will be carried out using a new 3 deg² mosaic camera mounted at the prime focus of the 4m Blanco telescope at CTIO.
We describe the details of the Hubble Space Telescope (HST) Advanced Camera for Surveys / Wide Field Channel (ACS/WFC) observations of the COSMOS field, including the data calibration and processing procedures. We obtained a total of 583 orbits of HST ACS/WFC imaging in the F814W filter, covering a field that is 1.64 square degrees in area, the largest contiguous field ever imaged with HST. The median exposure depth across the field is 2028 seconds (one HST orbit), achieving a limiting point-source depth AB(F814W) = 27.2 (5 sigma). We also present details about the astrometric image registration, distortion removal and image combination using MultiDrizzle, as well as motivating the choice of our final pixel scale (30 milliarcseconds per pixel), based on the requirements for weak lensing science. The final set of images are publicly available through the archive sites at IPAC and STScI, along with further documentation on how they were produced.
Radiative Transfer in a Clumpy Universe: The Colors of High-Redshift Galaxies (Piero Madau). We assess the effects of the stochastic attenuation produced by intervening QSO absorption systems on the broadband colors of galaxies at cosmological distances. We compute the H I opacity of a clumpy universe as a function of redshift, including scattering in resonant lines, such as Lyα, Lyβ, Lyγ, and higher order members, and Lyman-continuum absorption. Both the numerous, optically thin Lyman-α forest clouds and the rarer, optically thick Lyman limit systems are found to contribute to the obscuration of background sources. We study the mean properties of primeval galaxies at high redshift in four broad optical passbands, Un, B, G, and R. Even if young galaxies radiated a significant amount of ionizing photons, the attenuation due to the accumulated photoelectric opacity along the path is so severe that sources beyond z ~ 3 will drop out of the Un image altogether. We also show that the observed B - R color of distant galaxies can be much redder than expected from a stellar population. At z ~ 3.5, the blanketing by discrete absorption lines in the Lyman series is so effective that background galaxies appear, on average, 1 mag fainter in B. By z ~ 4, the observed B magnitude increment due to intergalactic absorption exceeds 2 mag. By modeling the intrinsic UV spectral energy distribution of star-forming galaxies with a stellar population synthesis code, we show that the (B - R)AB ~ 0 criterion for identifying "flat-spectrum," metal-producing galaxies is biased against objects at z > 3. The continuum blanketing from the Lyman series produces a characteristic staircase profile in the transmitted power. We suggest that this cosmic Lyman decrement might be used as a tool to identify high-z galaxies.
We have designed, fabricated, and tested a series of compact CMOS integrated circuits that realize the winner-take-all function. These analog, continuous-time circuits use only O(n) interconnect to perform this function. Two general types of inhibition mediate activity in neural systems: subtractive inhibition, which sets a zero level for the computation, and multiplicative (nonlinear) inhibition, which regulates the gain of the computation. We report a physical realization of general nonlinear inhibition in its extreme form, known as winner-take-all, using the full analog nature of the medium. This circuit has been used successfully as a component in several VLSI sensory systems that perform auditory localization (Lazzaro and Mead, in press) and visual stereopsis (Mahowald and Delbruck, 1988). Winner-take-all circuits with over 170 inputs function correctly in these sensory systems. We have also modified the global winner-take-all circuit, realizing a circuit that computes local nonlinear inhibition; the modified circuit allows multiple winners in the network and is well suited for use in systems that represent a feature space topographically and process several features in parallel.
The effect of illumination of a semiconductor junction is, as is well known, a photovoltage between the two sides of the junction. In this article it will be shown that a nonuniform illumination gives a lateral photovoltage parallel to the junction, in addition to the (transverse) photovoltage mentioned above. A photocell will be described that uses the lateral effect and can detect the position of a light spot to less than 100 Å. By utilizing an associated lens or aperture, one can measure an angular motion smaller than 0.1 second of arc. The output voltage of the cell is a linear function of the position of the light spot, with zero output for the light spot in the center, reversing in sign when the light spot changes from one side to the other of the center position. The linearity is better than 1.5 per cent over a distance of 0.030 inch. The equivalent noise resistance of the cell is equal to its output resistance, approximately 100 ohms. The sensitivity of the cell is approximately 200 microamperes per lumen and its frequency response is about the same as that of junction transistors. The response curve can be shifted by the application of a voltage between the base contacts. This is an electronic equivalent of a mechanical translation of the cell. It is also possible to do the equivalent of "chopping" the light by applying a modulating voltage to the alloyed dot.
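For an idealized one-dimensional lateral-effect cell of length L with base contacts at both ends, the photocurrent divides in proportion to the spot's distance from each contact, so position follows from the normalized current difference. A sketch under that idealization (not the exact device model of the article):

def spot_position(i1, i2, length):
    # i1, i2: photocurrents at the two base contacts.
    # Returns displacement from the cell center, in the units of length:
    # zero at center, sign flipping as the spot crosses the center.
    return 0.5 * length * (i2 - i1) / (i1 + i2)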
A family of CMOS-based active pixel image sensors (APSs) that are inherently compatible with the integration of on-chip signal processing circuitry is reported. The image sensors were fabricated using commercially available 2-μm CMOS processes and both p-well and n-well implementations were explored. The arrays feature random access, 5-V operation and transistor-transistor logic (TTL) compatible control signals. Methods of on-chip suppression of fixed pattern noise to less than 0.1% saturation are demonstrated. The baseline design achieved a pixel size of 40 μm × 40 μm with 26% fill-factor. Array sizes of 28 × 28 elements and 128 × 128 elements have been fabricated and characterized. Typical output conversion gain is 3.7 μV/e⁻ for the p-well devices and 6.5 μV/e⁻ for the n-well devices. Input-referred read noise of 28 e⁻ rms corresponding to a dynamic range of 76 dB was achieved. Characterization of various photogate pixel designs and a photodiode design is reported. Photoresponse variations for different pixel designs are discussed.
The Medipix2 chip is a pixel-detector readout chip consisting of 256 × 256 identical elements, each working in single photon counting mode for positive or negative input charge signals. Each pixel cell contains around 500 transistors and occupies a total surface area of 55 μm × 55 μm. A 20-μm wide octagonal opening connects the detector and the preamplifier input via bump bonding. The preamplifier feedback provides compensation for detector leakage current on a pixel by pixel basis. Two identical pulse height discriminators are used to create a pulse if the preamplifier output falls within a defined energy window. These digital pulses are then counted with a 13-bit pseudorandom counter. The counter logic, based on a shift register, also behaves as the input-output register for the pixel. Each cell also has an 8-bit configuration register which allows masking, test-enabling and 3-bit individual threshold adjust for each discriminator. The chip can be configured in serial mode and read out either serially or in parallel. The chip is designed and manufactured in a 6-metal 0.25-μm CMOS technology. First measurements show an electronic pixel noise of 140 e⁻ root mean square (rms) and an unadjusted threshold variation of around 360 e⁻ rms.
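Two pieces of the pixel are easy to model behaviorally: the energy-window discriminator (a hit counts only if the pulse falls between the two thresholds) and the shift-register-based pseudorandom counter, which is an LFSR. A sketch assuming a maximal-length 13-bit Fibonacci LFSR with taps (13, 4, 3, 1); the actual Medipix2 tap positions are not given in the abstract:

import numpy as np

def lfsr13_step(state):
    # Fibonacci LFSR, taps at bits 13, 4, 3, 1 (one maximal-length choice).
    fb = ((state >> 12) ^ (state >> 3) ^ (state >> 2) ^ state) & 1
    return ((state << 1) | fb) & 0x1FFF

def count_in_window(pulse_heights, e_low, e_high, state=1):
    # Advance the pseudorandom counter once per in-window pulse.
    p = np.asarray(pulse_heights)
    for _ in range(int(((p > e_low) & (p < e_high)).sum())):
        state = lfsr13_step(state)
    return state   # pseudorandom count, decoded offline via a lookup table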
My life has been an interesting voyage. I became an astronomer because I could not imagine living on Earth and not trying to understand how the Universe works. My scientific career has revolved around observing the motions of stars within galaxies and the …
We present the photometric calibration of the HST Advanced Camera for Surveys (ACS). We give here an overview of the performance and calibration of the two CCD cameras, the Wide Field Channel (WFC) and the High Resolution Channel (HRC), and a description of the best techniques for reducing ACS CCD data. On-orbit observations of spectrophotometric standard stars have been used to revise the pre-launch estimate of the instrument response curves to best match predicted and observed count rates. Synthetic photometry has been used to determine zeropoints for all filters in three magnitude systems and to derive interstellar extinction values for the ACS photometric systems. Due to the CCD internal scattering of long wavelength photons, the width of the PSF increases significantly in the near-IR and the aperture correction for photometry with near-IR filters depends on the spectral energy distribution of the source. We provide encircled energy curves and a detailed recipe to correct for the latter effect. Transformations between the ACS photometric systems and the UBVRI and WFPC2 systems are presented. In general, two sets of transformations are available: one based on the observation of two star clusters, the other on synthetic photometry. We discuss the accuracy of these transformations and their sensitivity to details of the spectra being transformed. Initial signs of detector degradation due to the HST radiative environment are already visible. We discuss the impact on the data in terms of dark rate increase, charge transfer inefficiency, and hot pixel population.
CMOS active pixel sensors (APS) have performance competitive with charge-coupled device (CCD) technology, and offer advantages in on-chip functionality, system power reduction, cost, and miniaturization. This paper discusses the requirements for CMOS image sensors and their historical development. CMOS devices and circuits for pixels, the analog signal chain, and on-chip analog-to-digital conversion are reviewed and discussed.
In this article, we provide a basic introduction to CMOS image-sensor technology, design and performance limits and present recent developments and future directions in this area. We also discuss image-sensor operation and describe the most popular CMOS image-sensor architectures. We note the main non-idealities that limit CMOS image sensor performance, and specify several key performance measures. One of the most important advantages of CMOS image sensors over CCDs is the ability to integrate sensing with analog and digital processing down to the pixel level. Finally, we focus on recent developments and future research directions that are enabled by pixel-level processing, the applications of which promise to further improve CMOS image sensor performance and broaden their applicability beyond current markets.
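One of the key performance measures mentioned, SNR, combines photon shot noise with the sensor's own noise sources. A typical textbook-style calculation (all quantities in electrons; the example values are illustrative, not from the article):

import numpy as np

def pixel_snr_db(signal_e, read_noise_e, dark_e=0.0):
    # Shot noise variance equals the collected signal (Poisson statistics);
    # dark charge adds its own shot noise; read noise adds in quadrature.
    noise = np.sqrt(signal_e + dark_e + read_noise_e**2)
    return 20.0 * np.log10(signal_e / noise)

# e.g. 10,000 e- signal with 5 e- read noise gives about 40 dB,
# essentially shot-noise limited.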
The Dark Energy Camera is a new imager with a 2.2° diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
Due to recent advances in digital technologies, and the availability of credible data, an area of artificial intelligence, deep learning, has emerged and has demonstrated its ability and effectiveness in solving complex learning problems not possible before. In particular, convolutional neural networks (CNNs) have demonstrated their effectiveness in image detection and recognition applications. However, they require intensive CPU operations and memory bandwidth that make general CPUs fail to achieve the desired performance levels. Consequently, hardware accelerators that use application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and graphics processing units (GPUs) have been employed to improve the throughput of CNNs. More precisely, FPGAs have recently been adopted for accelerating the implementation of deep learning networks due to their ability to maximize parallelism as well as their energy efficiency. In this paper, we review recent existing techniques for accelerating deep learning networks on FPGAs. We highlight the key features employed by the various techniques for improving the acceleration performance. In addition, we provide recommendations for enhancing the utilization of FPGAs for CNN acceleration. The techniques investigated in this paper represent the recent trends in FPGA-based accelerators of deep learning networks. Thus, this review is expected to direct the future advances on efficient hardware accelerators and to be useful for deep learning researchers.
Convolutional neural networks (CNNs) require numerous computations and external memory accesses. Frequent accesses to off-chip memory cause slow processing and large power dissipation. For real-time object detection with high throughput and power efficiency, this paper presents a Tera-OPS streaming hardware accelerator implementing a you-only-look-once (YOLO) CNN. The parameters of the YOLO CNN are retrained and quantized with the PASCAL VOC data set using binary weight and flexible low-bit activation. The binary weight enables storing the entire network model in block RAMs of a field-programmable gate array (FPGA) to reduce off-chip accesses aggressively and, thereby, achieve significant performance enhancement. In the proposed design, all convolutional layers are fully pipelined for enhanced hardware utilization. The input image is delivered to the accelerator line-by-line. Similarly, the output from the previous layer is transmitted to the next layer line-by-line. The intermediate data are fully reused across layers, thereby eliminating external memory accesses. The decreased dynamic random access memory (DRAM) accesses reduce DRAM power consumption. Furthermore, as the convolutional layers are fully parameterized, it is easy to scale up the network. In this streaming design, each convolution layer is mapped to a dedicated hardware block. Therefore, it outperforms the "one-size-fits-all" designs in both performance and power efficiency. This CNN implemented using VC707 FPGA achieves a throughput of 1.877 tera operations per second (TOPS) at 200 MHz with batch processing while consuming 18.29 W of on-chip power, which shows the best power efficiency compared with the previous research. As for object detection accuracy, it achieves a mean average precision (mAP) of 64.16% for the PASCAL VOC 2007 data set that is only 2.63% lower than the mAP of the same YOLO network with full precision.
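The line-by-line streaming dataflow can be modeled with a small line buffer: each convolution stage holds only the last few input rows and emits an output row as soon as a full window is available, so no frame ever sits in external memory. A simplified single-channel model (the real accelerator uses binary weights and many parallel channels):

import numpy as np

def stream_conv3x3(rows, weights):
    # rows: iterator over 1-D input rows; weights: 3x3 kernel array.
    # Yields one output row per input row once three rows are buffered.
    buf = []
    for row in rows:
        buf.append(np.asarray(row, dtype=float))
        if len(buf) < 3:
            continue
        window = np.stack(buf)             # the three buffered rows
        w = window.shape[1]
        yield np.array([(window[:, c - 1:c + 2] * weights).sum()
                        for c in range(1, w - 1)])
        buf.pop(0)                         # retire the oldest row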
Conventional algorithms for rejecting cosmic rays in single CCD exposures rely on the contrast between cosmic rays and their surroundings and may produce erroneous results if the point-spread function is smaller than the largest cosmic rays. This paper describes a robust algorithm for cosmic-ray rejection, based on a variation of Laplacian edge detection. The algorithm identifies cosmic rays of arbitrary shapes and sizes by the sharpness of their edges and reliably discriminates between poorly sampled point sources and cosmic rays. Examples of its performance are given for spectroscopic and imaging data, including Hubble Space Telescope Wide Field Planetary Camera 2 images.
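A stripped-down version of Laplacian cosmic-ray flagging: convolve with a Laplacian kernel, keep positive responses, and compare them with the expected per-pixel noise; true point sources, being smoother at the kernel scale, score lower. This sketch omits the subsampling and fine-structure tests of the full algorithm:

import numpy as np
from scipy.ndimage import convolve, median_filter

def laplacian_cr_mask(img, gain=1.0, read_noise=5.0, sigclip=5.0):
    # img in ADU; gain in e-/ADU. Cosmic rays have sharp-edged, positive
    # Laplacian responses well above the noise.
    lap_kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    lap = np.clip(convolve(img, lap_kernel), 0.0, None)
    # Noise model from a median-smoothed image: shot noise plus read noise.
    smooth = np.clip(median_filter(img, size=5), 0.0, None)
    noise = np.sqrt(smooth * gain + read_noise**2) / gain
    return lap / noise > sigclip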
Good representations of the passbands for the Johnson-Cousins UBVRI system have been devised by comparing synthetic photometry with actual observations and with standard system magnitudes. Small adjustments have been made to previously published passbands. Users are urged to match these passbands so that better photometry and calibration are ensured. Mismatched B bands are shown to be a major source of recent (U - B) transformation problems. The nature of systematic differences between the natural colors of the most widely used sets of standard star photometry is investigated and suggested CCD filter combinations are discussed.
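Synthetic photometry of the kind used here folds a spectrum through a passband: in the photon-counting convention, m = -2.5 log10( ∫F(λ)S(λ)λ dλ / ∫S(λ)λ dλ ) + zp. A sketch under that convention (the zero point zp and the passband table are whatever the calibration supplies):

import numpy as np

def synthetic_mag(wl, flux, passband, zp=0.0):
    # Photon-counting synthetic magnitude; wl, flux, passband share one grid.
    def integrate(y):   # trapezoidal rule, independent of NumPy version
        return float((0.5 * (y[1:] + y[:-1]) * np.diff(wl)).sum())
    return -2.5 * np.log10(integrate(flux * passband * wl)
                           / integrate(passband * wl)) + zp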
We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often limits the field of interactions of each token. To address this issue, we develop the Cross-Shaped Window self-attention mechanism for computing self-attention in the horizontal and vertical stripes in parallel that form a cross-shaped window, with each stripe obtained by splitting the input feature into stripes of equal width. We provide a mathematical analysis of the effect of the stripe width and vary the stripe width for different layers of the Transformer network, which achieves strong modeling capability while limiting the computation cost. We also introduce Locally-enhanced Positional Encoding (LePE), which handles the local positional information better than existing encoding schemes. LePE naturally supports arbitrary input resolutions, and is thus especially effective and friendly for downstream tasks. Incorporated with these designs and a hierarchical structure, CSWin Transformer demonstrates competitive performance on common vision tasks. Specifically, it achieves 85.4% Top-1 accuracy on ImageNet-1K without any extra training data or labels, 53.9 box AP and 46.4 mask AP on the COCO detection task, and 52.2 mIoU on the ADE20K semantic segmentation task, surpassing the previous state-of-the-art Swin Transformer backbone by +1.2, +2.0, +1.4, and +2.0 respectively under the similar FLOPs setting. By further pretraining on the larger dataset ImageNet-21K, we achieve 87.5% Top-1 accuracy on ImageNet-1K and high segmentation performance on ADE20K with 55.7 mIoU. Code and pretrained models are available at https://github.com/microsoft/CSWin-Transformer
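The cross-shaped window comes from running half the attention heads on horizontal stripes and half on vertical stripes of width sw, in parallel. A sketch of the stripe partition itself (the attention computation and head split are omitted; shapes are illustrative, not the library's API):

import numpy as np

def stripe_partition(x, sw, vertical=False):
    # x: (H, W, C) feature map; returns (num_stripes, tokens_per_stripe, C).
    if vertical:
        x = x.transpose(1, 0, 2)   # vertical stripes = transposed horizontal
    H, W, C = x.shape
    assert H % sw == 0, "side must be divisible by the stripe width"
    return x.reshape(H // sw, sw * W, C)   # each stripe spans the full width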
The remarkable progress of Vision Transformer (ViT) models has significantly advanced performance in computer vision tasks. However, the deployment of ViTs in resource-constrained environments remains a challenge, as the attention computation mechanisms within these models form a significant bottleneck, requiring substantial memory and computational resources. To address this challenge, we introduce TAFP-ViT, a tailored hardware-software co-design framework for Vision Transformers. On the software level, TAFP-ViT leverages a learnable compressor to perform multi-head shared compression on feature maps, and fuses decompression reconstruction, QKV generation and QKV processing together for calculation, thereby greatly reducing memory and computation requirements. Furthermore, TAFP-ViT combines dynamic inter-layer token pruning to eliminate unimportant tokens and hardware-friendly intra-block row pruning to diminish redundant computations. The proposed software design converts the calculations before and after SoftMax into dense and sparse triple matrix multiplication (TMM) forms respectively. On the hardware level, TAFP-ViT proposes a configurable systolic array (SA) to efficiently adapt to the QKV fusion computation pattern. The SA has flexible PE units that can effectively support general matrix multiplication (GEMM), dense and sparse TMM. The TMM and flexible dataflows allow TAFP-ViT to avoid handling transpositions and storing intermediate computation results, greatly enhancing computational efficiency. Besides, TAFP-ViT innovatively designs a Top-k engine to support dynamic pruning on the fly with high throughput and low resource consumption. Experiments show that the proposed TAFP-ViT achieves remarkable speedups of 123.91×, 29.5×, and 3.01–20.65× compared to conventional CPUs, GPUs, and previous state-of-the-art works, respectively. Additionally, TAFP-ViT reaches a throughput of up to 731.5 GOP/s and an impressive energy efficiency of 77.9 GOPS/W.
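The dynamic inter-layer token pruning reduces to a top-k selection on per-token importance scores. A minimal sketch of that step (how the scores are produced is the paper's contribution and is not reproduced here):

import numpy as np

def prune_tokens(tokens, scores, keep):
    # tokens: (N, C) array; scores: (N,) importance; keep: tokens retained.
    idx = np.argsort(scores)[-keep:]   # indices of the top-k scores
    return tokens[np.sort(idx)]        # preserve original token order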
Alexandra Denisa BORDIANU, Anamaria Flori MITESCU | Review of the Air Force Academy / Revista Academiei Forţelor Aeriene "Henri Coandă"
This paper explores the importance of optimizing FPGA accelerators in the context of energy efficiency and enhanced performance for processing convolutional neural networks (CNNs) in resource-limited environments. A reconfigurable RTL-level accelerator for CNN-based object detection systems is proposed, focusing on hardware and power consumption optimization techniques. Various aspects are presented, such as the importance of CNNs in artificial vision, their fundamental structure, and their acceleration on FPGA-SoC devices. Additionally, the benefits of integrating FPGAs with SoCs and the design requirements to achieve optimal performance and energy efficiency are discussed. This research highlights the significance of innovative approaches to attaining the desired energy efficiency and performance in resource-constrained environments, such as mobile devices, IoT, and electric vehicles.
A. Vani | International Journal for Research in Applied Science and Engineering Technology
Edge detection is a critical operation in image processing, widely used in fields such as computer vision, robotics, medical imaging, and object recognition. The Sobel operator, known for its simplicity and effectiveness, computes the gradient of pixel intensities to identify edges within an image. Traditional software-based implementations, while functional, often struggle with real-time processing requirements. Here the Sobel algorithm is implemented in Verilog HDL, applying 3×3 convolution kernels to compute both horizontal and vertical gradients. These gradients are combined to produce edge magnitudes that highlight the boundaries within the image. The FPGA implementation is developed and tested using the Xilinx Vivado Design Suite. The design is simulated and verified for functional correctness, with results compared to a software-based Python implementation.
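A software reference of the kind used for such verification: apply the two 3×3 Sobel kernels, then combine the horizontal and vertical gradients into magnitude and direction (a hardware design would use fixed-point versions of the same arithmetic; this is not the paper's exact test bench):

import numpy as np
from scipy.ndimage import convolve

def sobel_edges(img):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                          # vertical-gradient kernel
    gx = convolve(img.astype(float), kx)
    gy = convolve(img.astype(float), ky)
    return np.hypot(gx, gy), np.arctan2(gy, gx)   # magnitude, direction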
Vision Transformers show important results in the current Deep Learning technological landscape, being able to approach complex and dense tasks, for instance, Monocular Depth Estimation. However, in the transformer architecture, the attention module introduces a quadratic cost concerning the processed tokens. In dense Monocular Depth Estimation tasks, the inherently high computational complexity results in slow inference and poses significant challenges, particularly in resource-constrained onboard applications. To mitigate this issue, efficient attention modules have been developed. In this paper, we leverage these techniques to reduce the computational cost of networks designed for Monocular Depth Estimation, to reach an optimal trade-off between the quality of the results and inference speed. More specifically, optimization has been applied not only to the entire network but also independently to the encoder and decoder to assess the model's sensitivity to these modifications. Additionally, this paper introduces the use of the Pareto Frontier as an analytic method to get the optimal trade-off between the two objectives of quality and inference time. The results indicate that various optimised networks achieve performance comparable to, and in some cases surpass, their respective baselines, while significantly enhancing inference speed.
Dongwei Xuan, Ruiyang Zhang, Jiajun Qin +7 more | Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
Clara Plasse, D. Götz, A. Meuris +4 more | Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
This paper presents an 11-bit successive approximation register (SAR) analog-to-digital converter (ADC) designed for high-performance integrating-type X-ray pixel detectors at X-ray Free Electron Laser (XFEL) facilities. The ADC employs a split capacitor array to reduce power consumption and chip area, making it suitable for large-scale integration within a pixel readout chip. The proposed ADC design enables the development of a new digital readout architecture capable of achieving both high frame rates and a wide dynamic range. The ADC was designed and implemented in a prototype chip using a 130 nm CMOS process. The core circuit occupies an area of 0.026 mm². The measured effective number of bits (ENOB) reaches 10.36 bits with core circuit power consumption of around 53 μW at 2 MS/s using a 1.2 V supply.
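The SAR conversion is a bitwise binary search: starting from the MSB, each trial bit is kept only if the DAC output (realized here by the split capacitor array) stays at or below the sampled input. A behavioral sketch with an ideal DAC (resolution and supply follow the figures quoted above; the capacitor-array details are not modeled):

def sar_convert(v_in, v_ref=1.2, bits=11):
    # One successive-approximation conversion with an ideal binary DAC.
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)                   # tentatively set next bit
        if v_in >= trial * v_ref / (1 << bits):   # comparator decision
            code = trial                          # keep the bit
    return code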
CERN's strategic R&D programme on technologies for future experiments recently started investigating the TPSCo 65 nm ISC CMOS imaging process for monolithic active pixel sensors for application in high energy physics. In collaboration with the ALICE experiment and other institutes, several prototypes demonstrated excellent performance, qualifying the technology. The Hybrid-to-Monolithic (H2M), a new test chip produced in the same process but with a larger pixel pitch than previous prototypes, exhibits an unexpected asymmetric efficiency pattern. This contribution describes a simulation procedure combining TCAD, Monte Carlo and circuit simulations to model and understand this effect. It proved able to reproduce measurement results and attribute the asymmetric efficiency drop to slow charge collection caused by low-amplitude potential wells created by the circuit layout, which impact efficiency via ballistic deficit.
Y. He, Rafael Ballabriga Sune, E. Buschmann +7 more | Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
M. Babeluk, D. Auguste, M. Barbero +7 more | Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment