CCD vs CMOS

Reconstructing real scenes using CCD and CMOS technology: A technical dissertation.

“Make yourself comfortable and indulge in a freshly baked scone while I take you on a journey of enlightenment. By the end, you will have unlocked the true potential of your camera’s output and gained a deeper understanding of its capabilities.”

This article delves into a particularly contentious matter in the field of photography: the debate surrounding CCD colours versus CMOS colours.
The conventional wisdom – that CCD colours are superior to their CMOS counterparts – has been fiercely debated on various forums. However, the assertion lacks the technical basis and empirical evidence needed to qualify it as an absolute and indisputable truth. In the interest of shedding light on the issue and presenting a more informed perspective, we delve deeper into the subject.

Before commencing our analysis, it is worth examining the claim through a lexical lens. The word “best” presupposes a standard of quality, yet here the judgment is merely deduced from the observation of photographs obtained from particular cameras or raw converters. The more appropriate term would therefore be “pleasant.” It is often argued that CCD-based camera systems produce more appealing colour palettes, crediting the technological variance between CCD and CMOS. But can this variance be attributed solely to the difference between the two technologies? Put simply, no. Explaining why, however, requires a more intricate account.

The basic task of a digital sensor

Both CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor) sensors are based on the photoelectric principle, in which a photon incident on an atom of a metal or semiconductor frees an electron.
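A quick back-of-the-envelope calculation shows how far this principle carries silicon beyond visible light: a photon can free an electron only if its energy h·c/λ exceeds the band gap of silicon, roughly 1.12 eV at room temperature (the standard textbook value). A minimal sketch in Python:

```python
# Longest wavelength silicon can detect: a photon frees an electron only
# if its energy h*c/lambda exceeds the band gap (~1.12 eV, textbook value).
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

E_gap = 1.12 * eV
cutoff = h * c / E_gap                      # longest usable wavelength, metres
print(f"cutoff = {cutoff * 1e9:.0f} nm")    # ~1107 nm, well into the near infrared
```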


Both technologies employ suitably refined and doped silicon (Si), a semiconductor well suited to the construction of image sensors:

In practice, the task is to convert the incident photons into electrons that can be collected to form an electric charge proportional to the intensity of the exposure. Any sensor based on this physical effect behaves in an ideally linear way: doubling the incident photons doubles the electric charge collected, which is subsequently converted into a digital value by the A/D converter.
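This linear behaviour is easy to sketch numerically. Below is a minimal, idealised pixel model; the quantum efficiency, full-well capacity and bit depth are invented illustrative values, not figures from any real CCD or CMOS sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

def expose(photons, qe=0.5, full_well=40_000, bits=12):
    """Idealised pixel: expected photon count in, digital value out."""
    electrons = rng.poisson(photons * qe)        # photoelectric conversion (with shot noise)
    electrons = min(electrons, full_well)        # the charge well saturates
    return round(electrons / full_well * (2**bits - 1))  # linear A/D conversion

print(expose(10_000), expose(20_000))  # twice the photons -> roughly twice the value
```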
Even in its most basic form, despite its simple construction, the sensor is sensitive well beyond the range of light perceptible to humans.
Here is a typical curve:

As we can see, it extends well beyond the canonical 380–730 nm range.

Moreover, the sensor provides only a charge intensity value: by itself it detects no colour variation. If the objective is to replicate reality as perceived by human vision, it becomes crucial to restrict the sensor’s sensitivity range by adding Near Ultraviolet (NUV) and Near Infrared (NIR) blocking filters.
Below these filters sits a matrix of colour filters, typically in a Bayer scheme:
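As a concrete illustration of the Bayer scheme, here is a small Python sketch showing how each photosite records a single channel (an RGGB layout is assumed here; real cameras vary):

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample one channel per photosite from an (H, W, 3) image, RGGB layout."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red   on even rows / even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green on even rows / odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green on odd rows  / even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue  on odd rows  / odd columns
    return mosaic  # the missing two thirds are later interpolated (demosaicing)
```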

Spectral Sensitivity Functions (SSFs):

Now that the sensor has been limited to the visible range and equipped with a colour matrix, we have three distinct RGB curves. Ideally, these curves should be superimposable on those of the CIE standard observer:


Here we see the CIE 1931 2° standard observer curves, the first set proposed by the Commission Internationale de l’Eclairage, in 1931.
If the sensor curves matched these, the camera would see exactly as we do, making the characterization phase superfluous. In practice, however, it is not possible to obtain SSFs identical to those of the standard observer.
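One way to quantify how far a camera is from seeing “exactly like us” is the Luther-Ives condition: the SSFs would have to be an exact linear combination of the colour-matching functions. A least-squares sketch, assuming both sets of curves are sampled on the same wavelength grid (the array names are placeholders, not real data):

```python
import numpy as np

def luther_residual(ssfs, cmfs):
    """RMS residual of the best 3x3 map from camera SSFs (N, 3) to CMFs (N, 3)."""
    matrix, *_ = np.linalg.lstsq(ssfs, cmfs, rcond=None)  # best-fit RGB -> XYZ matrix
    return np.sqrt(np.mean((ssfs @ matrix - cmfs) ** 2))  # zero only for an ideal sensor
```

A residual of zero would make characterization superfluous; real cameras always leave a remainder.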
An example of the real SSFs of a camera:

As a result, a camera does not, by itself, return realistic colours. The raw data in a RAW file need a characterization, and that is the role of the camera profile.
Each sensor, CCD or CMOS, has its own specific SSF curves, the result of the combination of all layers: the NIR and NUV filters, the colour matrix, and the native sensitivity of the silicon, which depends on the production process. The variability between sensors can be slight or very significant, but already at this stage it is practically impossible to attribute it to the difference between CCD and CMOS.
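As a first approximation, this “combination of all layers” is a pointwise product of curves. A sketch with invented placeholder curves, purely to show the structure:

```python
import numpy as np

wl = np.arange(380, 731, 5.0)                      # wavelength grid, nm

t_nir_cut  = 1 / (1 + np.exp((wl - 680) / 10))     # NIR blocking filter (roll-off)
t_nuv_cut  = 1 / (1 + np.exp((400 - wl) / 10))     # NUV blocking filter (roll-on)
t_cfa_red  = np.exp(-((wl - 600) / 50) ** 2)       # red patch of the colour matrix
qe_silicon = np.clip((wl - 350) / 500, 0, 1)       # native sensitivity of the Si

ssf_red = t_nir_cut * t_nuv_cut * t_cfa_red * qe_silicon   # effective red SSF
```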
In the field of image sensors, the efficiency of the SSFs is a crucial factor. A key question is whether CCD sensors can offer more efficient SSFs than CMOS sensors, ultimately yielding a richer and more informative colour signal. The answer is complex, because there is no one-size-fits-all outcome: some CCD sensors undoubtedly have better SSF efficiency than particular CMOS sensors, but the reverse also occurs. So what determines which SSF is better than another?

The signal separation capability

In photographic technology, the characterization phase holds significant importance precisely because cameras cannot render colours exactly as we do. Roses may appear red, the grass green, and the sky blue, but these are not true representations of those colours. Manufacturers do not design their sensors so that the raw colour rendering approaches reality; instead, they pursue the maximum possible separation capability. This refers to the sensor’s ability to register the difference between two spectral inputs that would produce two closely spaced XYZ triplets in the tristimulus space defined by the standard observer. The objective of maximizing this ability over a vast area of the human locus, and for a range of illuminants, is to distinguish between two objects of extremely similar colour. This capability is fundamental: it provides the raw information necessary for an accurate reproduction of reality. An efficient sensor therefore aims to enhance this capability for different illuminants, primarily sunlight.
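The separation capability lends itself to a simple numerical sketch: expose two nearly identical reflectance spectra through the SSFs and measure how far apart the raw triplets land in a 16-bit encoding. The shapes below are assumptions about the data layout; this is not the software actually used for the tests in this article:

```python
import numpy as np

def separation(refl_a, refl_b, illuminant, ssfs):
    """Distance in 16-bit raw space between two (N,) reflectance spectra,
    given an (N,) illuminant and (N, 3) SSFs on the same wavelength grid."""
    white = ssfs.T @ illuminant                    # a perfect diffuser sets the exposure scale
    a = (ssfs.T @ (refl_a * illuminant)) / white.max() * 65535
    b = (ssfs.T @ (refl_b * illuminant)) / white.max() * 65535
    return np.linalg.norm(a - b)                   # ~0 means the two colours merge in raw
```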


Our Test

With these premises established, we come to the most interesting part: comparing two cameras experimentally, one CCD and one CMOS.
We chose the Nikon D200 (CCD) and the Nikon D700 (CMOS) because they share the same manufacturer and are sufficiently close in time. In this way, we isolate the CCD vs CMOS variable as much as possible.
All comparison tests are conducted on the spectral models of the cameras to rule out any possible disturbance.

-SSF curves:
First, let’s compare the SSFs of the two cameras the test is based on:


There are design similarities, but also significant differences.

-Rendering of a virtual SG target:
Virtually exposing a multispectral image of a ColorChecker SG through their respective SSFs, we obtain for D65 (daylight, 6504 K):

Under this illuminant, the cameras offer very similar output with minimal differences.
The sigma stands at 0.50 with a maximum error of 2.38 DeltaE 2000.

For StdA (Tungsten Bulb 2856K):

Even under the artificial light of tungsten, the raw output of the cameras is similar, with a sigma of 0.62 and a maximum error of 2.74 DeltaE 2000.
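For readers who want to reproduce this kind of statistic, the per-patch errors can be computed with the colour-science package, assuming the reference and camera renderings are available as CIELab arrays of shape (patches, 3), and assuming “sigma” denotes the standard deviation of the per-patch errors:

```python
import numpy as np
import colour   # the colour-science package

def report(lab_reference, lab_camera):
    """Print sigma and maximum per-patch Delta E 2000 (placeholder inputs)."""
    de = colour.delta_E(lab_reference, lab_camera, method="CIE 2000")
    print(f"sigma = {np.std(de):.2f}, max = {np.max(de):.2f} DeltaE 2000")
```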

-Separation performance:
To analyze the signal separation capabilities of the two cameras, we use a synthetic spectral target covering almost the entire human locus; only a small region near the purple axis is ignored, as the signal from any camera there would be too low.

There are more than ten thousand unique spectral samples of the reflectance class, and they are therefore tied to the illuminant we choose. Here in D50:

The heat-map in the u’v’ diagram represents the intensity of variation of the output, in 16-bit encoding, for each sample per variation of 1 dCh (Delta Chromaticity): from zero (no separation capability) to 300 (maximum separation capability). The graph shows the AdobeRGB gamut (red triangle) and, as a reference, the Pointer gamut (irregular perimeter), which contains all real objects observable in reflection.
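For reference, the u’v’ coordinates of the diagram follow directly from the CIE 1976 UCS formulas; a minimal conversion:

```python
import numpy as np

def xyz_to_uv_prime(xyz):
    """CIE 1976 u'v' chromaticity from an (..., 3) array of XYZ values."""
    X, Y, Z = np.moveaxis(np.asarray(xyz, dtype=float), -1, 0)
    denom = X + 15 * Y + 3 * Z
    return 4 * X / denom, 9 * Y / denom

u, v = xyz_to_uv_prime([95.047, 100.0, 108.883])   # D65 white point
print(f"u' = {u:.4f}, v' = {v:.4f}")               # 0.1978, 0.4683
```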
As we can see, the performances are very similar and, on a practical level, completely equivalent. The design similarity of the two sensors is evident: although they belong to two different technologies, the test shows comparable results under the D50 illuminant.
For StdA we get:

Under tungsten light there are greater differences, but it would be difficult to say which of the two sensors does better. Once again the performances are superimposable and the practical differences are negligible. We can therefore conclude that the difference between CCD and CMOS does not, by itself, involve substantial changes to the output. Other variables lead to the difference in rendering observed in photographs from two cameras, be they CCD and CMOS or of the same technology.

Final test:
For further proof of our conclusions, we adopt a multispectral image of a scene measured in the laboratory and show the results of the scene-referred reconstruction with Cobalt profiling. The scene is defined from 400 to 700 nm in 10 nm steps.
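For the standard-observer reference images below, the computation amounts to integrating the spectral scene against the chosen illuminant and the CIE 1931 2° colour-matching functions. A sketch, with assumed array shapes (31 bands for 400–700 nm at 10 nm):

```python
import numpy as np

def scene_to_xyz(scene, illum, cmfs):
    """scene: (H, W, 31) reflectance, illum: (31,), cmfs: (31, 3) -> (H, W, 3) XYZ."""
    radiance = scene * illum              # per-band radiance of each pixel
    xyz = radiance @ cmfs                 # integrate against the colour-matching functions
    k = 100.0 / (illum @ cmfs[:, 1])      # normalise the white point to Y = 100
    return xyz * k
```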

Calculation of the image under D65 for the CIE 1931 2° standard observer:

Rendering D200 with a profile for D65:

Rendering D700 with a profile for D65:

Calculation of the image under StdA for the CIE 1931 2° standard observer:

Rendering D200 with a profile for StdA:

Rendering D700 with a profile for StdA:

Final Conclusions:

When comparing CCD and CMOS technologies in equally well-engineered designs, it is difficult to quantify any effect attributable to the technology itself. The signal separation capability depends primarily on the overall quality of the sensor, not on which of the two technologies it uses. In the cameras examined, the hardware performance in colour discrimination was similar; only minimal distinctions could be detected, and those only in laboratory testing. The choice between CCD and CMOS is only one of many factors that determine sensor quality, and several other elements are worth considering to ensure the chosen option aligns with a project’s specific requirements.
In normal use, the difference between the look of a CCD image and a CMOS image is attributable to other factors:

– When the camera was released
– The characterization of the sensor
– The technology of the camera profile
– The colour correction added to the profile itself

In the Adobe profiling tutorial, it was demonstrated that the most significant factor influencing the difference in camera output is the characterization of the raw data. However, when cameras are properly characterized and profiled with equal precision and technology, it becomes extremely difficult to discern differences in the scene-referred reconstruction of the captured image, as the final test confirms.
