CMOS (Complementary Metal Oxide Semiconductor)
CMOS stands for “Complementary Metal Oxide Semiconductor.” It is a technology used to produce integrated circuits. CMOS circuits are found in several types of electronic components, including microprocessors, memory chips, and digital camera image sensors.
The “MOS” in CMOS refers to the transistors in a CMOS component, called MOSFETs (metal oxide semiconductor field-effect transistors). The “metal” part of the name is a bit misleading, as modern MOSFETs often use polysilicon instead of aluminum as the gate material. Each MOSFET includes two terminals (“source” and “drain”) and a gate, which is insulated from the body of the transistor. When enough voltage is applied between the gate and body, current can flow between the source and drain terminals.
The “complementary” part of CMOS refers to the two different types of semiconductors each transistor contains — N-type and P-type. N-type semiconductors have a greater concentration of electrons than holes, or places where an electron could exist. P-type semiconductors have a greater concentration of holes than electrons. These two semiconductors work together and may form logic gates, depending on how the circuit is designed.
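To make the complementary behavior concrete, the following sketch models the simplest CMOS logic gate, an inverter, in Python. The function and its behavior are an illustrative model, not a circuit simulation: the PMOS network conducts when the input is low, the NMOS network when it is high, so exactly one of them drives the output at any time.

```python
def cmos_inverter(vin_high):
    """Model of a CMOS inverter.

    The P-type transistor (PMOS) conducts when the input is low and pulls
    the output up to the supply voltage; the N-type transistor (NMOS)
    conducts when the input is high and pulls the output down to ground.
    """
    pmos_on = not vin_high   # PMOS network: on for a low input
    nmos_on = vin_high       # NMOS network: on for a high input
    return pmos_on and not nmos_on  # output is high only via the PMOS path

print(cmos_inverter(False))  # True  (low in -> high out)
print(cmos_inverter(True))   # False (high in -> low out)
```

Because only one of the two networks conducts in each steady state, an ideal CMOS gate draws almost no static current — a key reason for the technology's low power consumption.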
Image sensors operate on the photoelectric effect, converting photons into electrical charge. Unlike CCD (charge-coupled device) sensors, CMOS (complementary metal-oxide semiconductor) sensors convert charge into voltage directly within each pixel. Voltage amplification and quantization then produce the digital output value.
Present-day CMOS sensors excel at high frame rates and outstanding image quality. In high-performance industrial cameras they enable precise image evaluation. In most applications, this ever-advancing technology has replaced CCD sensors.
The following illustration explains the basic functionality and main features of CMOS sensors.
1) Full-well capacity [e–] and saturation capacity [e–]
Think of a pixel as a well, and of the full-well capacity as the maximum number of electrons that can be stored in that well. This corresponds to the maximum number of photons that would generate those electrons (saturation irradiance). The saturation capacity actually used to characterize a camera is measured differently, directly from camera images, and is typically smaller than the full-well capacity. This difference can cause confusion when comparing image sensor data with camera data. A high saturation capacity allows for longer exposure times. An over-exposed pixel is set to the maximum DN and contains no useful information.
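The well analogy can be sketched in a few lines of Python. The full-well capacity and quantum efficiency below are hypothetical example values; the point is that once the well is full, additional photons are simply clipped and add no information.

```python
FULL_WELL_E = 10000  # hypothetical full-well capacity [e-]

def collect(photons, qe=0.6):
    """Convert incident photons to stored electrons, clipped at full well.

    qe is an assumed quantum efficiency (fraction of photons converted).
    """
    electrons = photons * qe
    return min(electrons, FULL_WELL_E)

print(collect(5000))   # 3000.0 e-, below saturation
print(collect(20000))  # would be 12000 e-, clipped to 10000
```

Any pixel driven past the full well reads the same maximum value, which is why over-exposed regions appear as featureless white areas.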
2) Absolute sensitivity threshold (AST) [e–]
The absolute sensitivity threshold describes the lowest number of photons (minimum detectable irradiation) at which the camera can differentiate useful image information from noise. This means that the lower the threshold, the more sensitive the camera. You should take the AST into account in very low-light applications. It is more meaningful than the QE alone, as the AST combines QE, dark noise, and shot noise (which is caused by the quantum nature of photons). The value is determined at the point where the SNR equals 1 (the signal is as large as the noise).
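The SNR = 1 condition can be solved in closed form under a simplified noise model (shot noise plus dark noise only, ignoring quantization). The QE and dark noise values below are assumptions for illustration, not data for any specific sensor.

```python
import math

def snr(photons, qe, dark_noise_e):
    """Simplified SNR model: signal = qe * photons [e-],
    noise = sqrt(shot variance + dark variance), where the shot-noise
    variance equals the signal (Poisson statistics)."""
    signal = qe * photons
    noise = math.sqrt(signal + dark_noise_e ** 2)
    return signal / noise

def absolute_sensitivity_threshold(qe, dark_noise_e):
    """Photon count at which SNR == 1 under the model above.

    Setting qe*p = sqrt(qe*p + dark^2) and solving the quadratic gives
    qe*p = (1 + sqrt(1 + 4*dark^2)) / 2.
    """
    electrons = (1 + math.sqrt(1 + 4 * dark_noise_e ** 2)) / 2
    return electrons / qe

ast = absolute_sensitivity_threshold(qe=0.6, dark_noise_e=2.0)
print(round(snr(ast, 0.6, 2.0), 6))  # 1.0 by construction
```

The formula makes the trade-off visible: halving the dark noise roughly halves the threshold, and a higher QE lowers it further.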
3) Temporal dark noise [e–]
Even if the sensor is not illuminated, each pixel shows a (dark) signal. With increasing exposure time and temperature, electrons are generated in each pixel even without light. This dark signal varies, and that variation is called dark noise (measured in electrons). A lower dark noise is preferred for most applications. The dark noise, together with the photon shot noise and the quantization noise, describes the noise of the camera.
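A small simulation illustrates how a fluctuating dark signal arises. The generation rate used here is a made-up constant (in a real sensor it rises steeply with temperature), and the Gaussian draw is an approximation of the underlying Poisson statistics.

```python
import random

def dark_signal_e(exposure_s, rate_e_per_s=5.0, rng=None):
    """Simulated dark signal of one pixel [e-].

    Electrons accumulate with exposure time even without light; the
    mean grows linearly, and the shot-like fluctuation around it is
    the dark noise. rate_e_per_s is a hypothetical value.
    """
    rng = rng or random.Random()
    mean = rate_e_per_s * exposure_s
    return max(0.0, rng.gauss(mean, mean ** 0.5))

rng = random.Random(42)
samples = [dark_signal_e(1.0, rng=rng) for _ in range(10000)]
mean_dark = sum(samples) / len(samples)
print(round(mean_dark, 1))  # close to the expected 5 e- mean
```

Averaging many readouts recovers the mean dark signal (which can be subtracted), but the pixel-to-pixel fluctuation remains and sets a floor on the detectable signal.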
4) Dynamic range [dB]
The dynamic range (DR) is the ratio between the saturation irradiance and the minimum detectable irradiation, measured in dB. Cameras with a high DR can deliver detailed image information for dark and bright areas in a single image at the same time. A high DR is therefore especially important in applications with both dark and bright areas in one image, or with rapidly changing light conditions.
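Expressed in electrons, the ratio can be computed directly; the saturation capacity and threshold below are hypothetical example values.

```python
import math

def dynamic_range_db(saturation_capacity_e, threshold_e):
    """Dynamic range in dB from the ratio of saturation capacity to the
    minimum detectable signal, both in electrons (assumed values)."""
    return 20 * math.log10(saturation_capacity_e / threshold_e)

print(round(dynamic_range_db(10000, 2.5), 1))  # 72.0 dB
```

A ratio of 4000:1 corresponds to about 72 dB; every extra 20 dB means a tenfold wider usable irradiance range.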
Physical processes within a sensor / camera
5) Quantum efficiency [%]
An image sensor converts photons into electrons. The conversion ratio, the quantum efficiency (QE), depends on the wavelength. The more photons are converted into electrons, the more sensitive the sensor is to light, and the more information can be obtained from the image. The values measured in a camera can differ from image sensor supplier data, as a camera might use a cover glass or filters.
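The wavelength dependence can be sketched as a lookup: the QE values per wavelength below are invented for illustration, not measurements of any real sensor.

```python
def electrons_from_photons(photons, qe):
    """Electrons generated from a given photon count at one wavelength."""
    return photons * qe

# Hypothetical QE curve sampled at a few wavelengths {nm: QE}
qe_curve = {450: 0.45, 550: 0.62, 650: 0.55}

print(electrons_from_photons(1000, qe_curve[550]))  # 620.0 e-
```

The same 1000 photons would yield only 450 electrons at 450 nm under this assumed curve, which is why sensor sensitivity must always be stated per wavelength.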
6) Maximum Signal-to-Noise Ratio (SNRmax) [dB]
The signal-to-noise ratio (SNR) is the ratio between the grey value (corrected for the dark value) and the signal noise, often measured in dB. The SNR depends mainly on the QE and the dark noise, and it increases with the number of photons. The maximum SNR (SNRmax) is reached at the saturation irradiance.
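At saturation the photon shot noise dominates, so under a simplified model (dark and quantization noise neglected) SNRmax is approximately the square root of the saturation capacity. The capacity used below is an assumed example value.

```python
import math

def snr_max_db(saturation_capacity_e):
    """Approximate SNRmax in dB, assuming shot noise dominates at
    saturation: SNRmax ~ sqrt(saturation capacity)."""
    return 20 * math.log10(math.sqrt(saturation_capacity_e))

print(round(snr_max_db(10000), 1))  # 40.0 dB
```

This shows why a larger saturation capacity directly improves the best achievable SNR: quadrupling the capacity gains about 6 dB.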
7) K-Factor (Digital Number DN/e–)
A camera converts the electrons (e–) from the image sensor into digital numbers (DN). This conversion is described by the overall system gain K, measured in digital numbers (DN) per electron (e–). With K in DN/e–, 1/K electrons are required to increase the grey level by 1 DN. The K-Factor depends on the camera design. A slightly increased K-Factor may improve
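The conversion itself is a linear scaling clipped at the output range. The gain and bit depth below are assumed example values, not a specific camera's specification.

```python
def to_digital_number(electrons, k_dn_per_e=0.25, max_dn=255):
    """Convert an electron count to a digital number using the overall
    system gain K [DN/e-]. With K = 0.25 DN/e-, 1/K = 4 electrons raise
    the grey level by 1 DN. Output is clipped to the 8-bit range.
    """
    return min(int(electrons * k_dn_per_e), max_dn)

print(to_digital_number(400))   # 100 DN
print(to_digital_number(2000))  # clipped to 255 DN
```

The clipping step is the digital counterpart of the full-well saturation described earlier: once the maximum DN is reached, additional electrons carry no information in the output image.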
CMOS image sensors are used in imaging devices such as digital cameras, medical imaging equipment, camera modules, and night vision tools such as thermal imaging devices, as well as alongside radar and sonar systems. With these significant features, the applications of CMOS image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, and active/passive range finders.