
In all my recent posts, it has become evident that choosing the proper camera sensor is not a trivial task and may drastically affect a spectrometer’s performance. This is not unique to spectroscopy: basically all fields hit the same noise and accessible dynamic range limits (fluorescence microscopy is a good example of that).

It is therefore important to be aware of camera performance in the design process, so as to select the sensor that maximizes the quality of the measurement to be done. It is of similar importance to have a good understanding of the camera noise model, to anticipate problems and limitations at the very early stages of the design.

In this post, I present a low-cost, Do-It-Yourself (DIY), test bench setup that closely follows the currently recognized standard on camera performance evaluation. As always, the setup is proposed as open-hardware under a CERN OHL v2 license and can be downloaded [∞] here for free :)

The standard that was used to design the setup is the [∞] EMVA 1288 standard. Although I followed the document in detail, I cannot describe the setup as EMVA 1288 compliant because that would imply paying an annual fee to the EMVA committee. This is a very classical approach in the industry (nothing is ever free!), and we should already be thankful that the standard itself is available for download free of charge, which is uncommon compared to ISO standards that can cost up to thousands of dollars.

That being said, let’s dig a bit into the camera model introduced by the standard. I will refer here to the **linear model**, which works for industrial machine-vision cameras and linear CCD/CMOS arrays. It does not apply to cameras that have incorporated gamma curves. Note that some cameras, like FLIR cameras based on the 20 Mpx IMX183 sensor, used to have gamma enabled by default – it is your responsibility to disable all on-board processing (gamma, sharpening, dark compensation etc.) before evaluating the camera performance. Also, all the computations here require working with the highest possible number of bits, at the very least 10 bpp. They will not work for cameras that do not offer high-bpp outputs.

The linear model is shown in Figure 1. A given number of photons, *µ_{p}*, is converted into *µ_{e}* electrons through the **quantum efficiency** of the sensor, *η*. A number of thermal electrons, *µ_{d}*, is generated in the process, leading to what is called **dark noise**, a signal that occurs even in the absence of light. The total number of generated electrons is converted by the camera into count units using an *N*-bit analog-to-digital converter (ADC). Note that counts may interchangeably be called **gray levels** in the context of a camera, but it is less likely to meet the name “counts” when talking about linear sensors. The ratio between the number of electrons and the counts is called the **system gain**, *K*. The ratio between the number of photons and the generated counts in the absence of dark signal is called the **responsivity**, *R*, and is equal to the product of the system gain and the quantum efficiency, *R = η·K*.

To measure the performance of a camera or linear sensor, the EMVA standard recommends illuminating the camera sensor evenly using a homogeneous light source of diameter *D* placed at a distance of 8×*D* from the sensor (an f/8 illumination). It also recommends that the diameter *D* be at least equal to the camera sensor size to avoid fall-off of the illumination across the sensor.

Based on these premises, I made the setup of Figure 2. The camera under test is held at some distance from the light source using a Thorlabs CML25 C-mount extension tube.

The illumination, shown in detail in Figure 3, is made using a home-made integration sphere and a narrow-band LED. I had 365 nm and 625 nm LEDs available, the latter being a good fit to evaluate cameras in the region used by our [∞] OpenRAMAN spectrometer with a green excitation laser. It is possible to use any other LED provided it has enough output power, because integration spheres tend to attenuate the signal by several orders of magnitude.

Because commercial integration spheres are expensive, typically in the four-digit dollar range, I chose to make one myself using my Bambulab X1C 3D printer. The resulting work is shown in Figure 4 and was printed using a two-color printing technique such that the interior was printed in white matte PLA filament and the exterior in standard black PLA filament. I also tested printing the whole part in white matte PLA and spray-painting the exterior in black. Both versions showed very similar results with a 2.8% homogeneity on a 3.5×3.5 mm² ROI (the standard recommends a maximum of 3% inhomogeneity). The two-color version was much slower to print and had nasty “hairs” inside the sphere that required some cleaning. The spray-painted version required more manual work but was still very satisfactory (probably more so than the two-color print IMHO). The results were, in general, surprisingly good for such cheap ($5?) pieces of plastic. I believe (but it remains to be tested) that the performance is partly due to the subsurface scattering properties of the white matte PLA, because the whole part glows in red when illuminated – which is why I added a metal pinhole on top of it and colored the outside in black to avoid intrusion of daylight into the system.

Some comments need to be made about the pinhole in Figure 3. It results from a series of compromises to keep the system as DIY as possible. For reasons detailed below, the camera is evaluated on a 3.5×3.5 mm² ROI, giving a diagonal of 4.95 mm. The distance to the sensor required for a 4.95 mm pinhole was not easily achievable using standard Thorlabs parts and would have required a lathe to cut a C-mount extension tube to the proper length. Instead, I chose a CML25 extension tube, which requires a slightly smaller pinhole of 4.6 mm to match the f/8 illumination recommended by the standard. The opening in the integration sphere is a bit larger (5.50 mm) to avoid misalignment effects when gluing the pinhole and vignetting of the illumination. Larger apertures are not recommended because they would decrease the integration sphere efficiency, unless you increase the sphere diameter accordingly.

Last but not least, there remains the question of the calibration of the system’s luminance. Knowing how many photons reach the camera sensor is important for the computation of the quantum efficiency and of the absolute sensitivity threshold of the camera. Note that calibration is not strictly required: it is possible to compare cameras relative to each other in arbitrary units, provided the illumination is kept the same between the two measurements.

The calibration process is shown in Figure 5. It uses a Thorlabs DET36A detector (I had one around and it was convenient for the job), which features a 3.6×3.6 mm² photodiode. The photodiode must be placed at the exact same position as the camera sensor in the measurement setup of Figure 2, which is done using an SM1L10 tube in conjunction with a custom spacer. Although the drawings I provide recommend a brass spacer, I 3D-printed mine using my Bambulab X1C, which is probably enough for what we do. I measured the signal using my [»] homemade photodiode readout setup at maximum amplification, taking care to subtract the dark signal of the DET36A by repeating the measurement with the LED switched off. I converted the current reading into W/mm² using Thorlabs’ default responsivity curve of the DET36A and the area of the sensor (12.96 mm²). More on that later.
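As a sketch of that conversion, assuming a responsivity of roughly 0.38 A/W at 625 nm (check Thorlabs’ actual curve for your unit – this number and the current readings below are illustration values, not my calibration data):

```python
PHOTODIODE_AREA_MM2 = 3.6 * 3.6  # DET36A active area, mm^2

def irradiance_from_current(photocurrent_a, dark_current_a, responsivity_a_per_w,
                            area_mm2=PHOTODIODE_AREA_MM2):
    """Convert a photodiode current reading into an irradiance in W/mm^2.

    The dark reading (LED off) is subtracted first, the responsivity curve then
    converts amps to watts, and the active area normalizes to W/mm^2.
    """
    optical_power_w = (photocurrent_a - dark_current_a) / responsivity_a_per_w
    return optical_power_w / area_mm2

# Illustrative readings: 250 nA with the LED on, 3 nA dark, 0.38 A/W at 625 nm
e = irradiance_from_current(250e-9, 3e-9, 0.38)  # ~5e-8 W/mm^2, i.e. ~50 nW/mm^2
```

With these made-up numbers the result lands around 50 nW/mm², the order of magnitude I report later for the actual setup.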

With my newly-calibrated camera test bench setup, I decided to measure the performance of the cheap [∞] Daheng camera I used to lower the cost of the OpenRAMAN spectrometer. The camera is based on a Sony IMX273 sensor, and I will compare the results obtained with the DIY setup to those provided by a commercial company using the same sensor at the end of the post.

To evaluate the various parameters, we have to measure the camera gray-level values at different photon exposures. The easiest way to achieve this is to keep the illumination constant and to vary the exposure time in the camera. From the different measurements, it will be possible to retrieve the system gain and responsivity of the system as well as the linearity error of the camera (*i.e.* its deviation from the assumed linear model). As an order of magnitude, my setup was around 50 nW/mm² and required exposure times below 20 ms on the IMX273 sensor.

All required equations are given in the EMVA 1288 document, and I will retain here only the important ones. Readers willing to measure cameras themselves are invited to read the standard for a proper implementation of the algorithms.

From the luminance of the system, *L*, given in W/m² and obtained from the calibration, it is possible to compute the number of photons reaching a pixel for a given exposure time, *t*:

*µ_{p} = λ·L·A·t / (h·c)*

with *λ* the wavelength of the illumination, *L* the luminance, *A* the area of a pixel, *t* the exposure time, *h* the Planck constant (6.6260755×10^{-34} J·s) and *c* the speed of light (2.99792458×10^{8} m/s).

The analysis therefore assumes that the illumination is monochromatic, or close to it. I did not evaluate the effect of the bandwidth of the LED, but it was assumed to be negligible.
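The photon-count formula is straightforward to sketch in code; the irradiance and pixel size below are illustrative values, roughly matching the ~50 nW/mm² and 3.45 µm figures mentioned in this post:

```python
H = 6.6260755e-34   # Planck constant, J.s
C = 2.99792458e8    # speed of light, m/s

def photons_per_pixel(wavelength_m, irradiance_w_m2, pixel_area_m2, exposure_s):
    """Number of photons mu_p collected by one pixel: lambda*L*A*t / (h*c)."""
    energy_j = irradiance_w_m2 * pixel_area_m2 * exposure_s  # optical energy on the pixel
    photon_energy_j = H * C / wavelength_m                   # energy of one photon
    return energy_j / photon_energy_j

# 625 nm LED, ~50 nW/mm^2 (= 5e-2 W/m^2), 3.45 um IMX273 pixel, 1 ms exposure
mu_p = photons_per_pixel(625e-9, 5e-2, (3.45e-6) ** 2, 1e-3)  # ~1900 photons
```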

When presenting the linear model, I said that the total number of photons, *µ_{p}*, is transformed into a number of electrons, *µ_{e}*, through the efficiency of the sensor, *η*, to which we also need to add a given number of thermally generated electrons, *µ_{d}*. The total number of electrons is then transformed into a gray count, *µ_{c}*, through the system gain, *K*. The model is therefore expressed as

*µ_{c} = K·(η·µ_{p} + µ_{d}) = K·η·µ_{p} + µ_{c}^{*}*

where *µ_{c}^{*} = K·µ_{d}* is the dark signal count, introduced for notation convenience.

The **responsivity** of the system, *R*, is obtained by measuring how the mean gray level, corrected by the dark signal, varies as a function of exposure time:

*µ_{c} − µ_{c}^{*} = R·µ_{p}*

with

*R = K·η*

The value of R can be found from a linear fit of the mean gray value evolution with exposure time. The experimental results are shown in Figure 6 for the Daheng camera with 3.45 µm pixels of the IMX273 sensor. The mean gray level is obtained, as the name suggests, by computing the mean gray value at the given exposure time. It also needs to be corrected by the mean gray value obtained at the same exposure time with the illumination switched off to correct for the dark signal count, *µ _{c}^{*}*. The standard also recommends making the fit on the region between 10% and 70% of the saturation level of the camera (more on that below). Note that the plot here is labeled in terms of number of photons, but these are computed through the formula given above.
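A minimal sketch of that fit on synthetic data (the responsivity, dark offset and 10-bit saturation below are made up for illustration; the 10%–70% restriction follows the standard’s recommendation):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b, returning (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def responsivity(mu_p, mu_c, mu_c_dark, saturation):
    """Slope R of (mu_c - mu_c_dark) vs mu_p, fitted on the 10-70% range only."""
    signal = [c - d for c, d in zip(mu_c, mu_c_dark)]
    pts = [(p, s) for p, s in zip(mu_p, signal)
           if 0.1 * saturation <= s <= 0.7 * saturation]
    xs, ys = zip(*pts)
    slope, _ = linear_fit(xs, ys)
    return slope

# Synthetic camera with R = K*eta = 0.12 count/photon and a 50-count dark offset
photons = [i * 1000.0 for i in range(1, 11)]
gray = [min(0.12 * p + 50.0, 1023.0) for p in photons]  # clips at 10-bit saturation
dark = [50.0] * len(photons)
R = responsivity(photons, gray, dark, saturation=1023.0)  # recovers 0.12
```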

From the responsivity measurement, it is already possible to compute the linearity error of the sensor, which is nothing but the deviation of the data of Figure 6 from the fitted line. The deviation can be computed in both gray-level counts and in percent, the standard favoring the latter option. The results for the Daheng camera are shown in Figure 7. While for most spectroscopy work a linearity error of up to 1% is without consequence (unless you are aiming at extremely precise quantitative analysis), some techniques are very sensitive to this effect, and it can be a deal breaker in the camera selection process. A typical example is interferometry and other fringe-based techniques, where non-linearities can leave scars in the demodulation process.
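The percentage deviation can be sketched as follows (synthetic slope, offset and readings for illustration only):

```python
def linearity_error_percent(mu_p, signal, slope, offset):
    """Deviation of each measured point from the fitted line, in percent of the fit."""
    return [100.0 * (s - (slope * p + offset)) / (slope * p + offset)
            for p, s in zip(mu_p, signal)]

# Two synthetic points, each deviating by 1% from a fit of slope 0.1
errs = linearity_error_percent([100.0, 200.0], [10.1, 19.8], 0.1, 0.0)
```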

The quantum efficiency is obtained from the responsivity and the system gain, *K*. To obtain the system gain, the standard uses the variance (noise) of the gray-level signal as a second observable. The variance propagates through the same gain as the photons-to-counts conversion, except it also needs to take into account the quantization noise, *σ²_{q}*, introduced by the ADC. The EMVA standard proposes to use *σ²_{q} = 1/12* in count units. The count variance, *σ²_{c}*, is therefore

*σ²_{c} = K²·(σ²_{e} + σ²_{d}) + σ²_{q}*

where *σ²_{e}* is the variance in photo-electrons, *σ²_{d}* is the variance of the dark signal and *σ²_{q}* is the variance due to the quantization of the ADC as mentioned.

The statistical distribution of the photo-electrons inherits that of the photons, which follows a Poisson law. We therefore have

*σ²_{e} = µ_{e}*

and since

*µ_{e} = (µ_{c} − µ_{c}^{*}) / K*

we can derive that

*K²·σ²_{e} = K·(µ_{c} − µ_{c}^{*})*

leading to the linear model

*σ²_{c} = K·(µ_{c} − µ_{c}^{*}) + σ²_{c,dark}*

with *σ²_{c,dark} = K²·σ²_{d} + σ²_{q}* the variance measured in the absence of light.

Measuring the variance in gray-level counts would require averaging individual variances obtained through many measurements. The EMVA standard however notes that all pixels are assumed to behave similarly; we can therefore take only two images and obtain the variance as half the mean squared difference between the two images. Since the offset can be obtained from a measurement of the variance in the absence of light, we can fit the results of Figure 9 through a pure gain model using

*σ²_{c} − σ²_{c,dark} = K·(µ_{c} − µ_{c}^{*})*

because

*σ²_{c,dark} = K²·σ²_{d} + σ²_{q}*

Again, the fit is performed on the exposures comprised between 10% and 70% of the saturation of the sensor.
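The two-image variance trick and the gain fit can be sketched as follows (synthetic data, not my measurement values; a gain of 0.3 count/e⁻ and a dark variance of 2 counts² are arbitrary):

```python
def frame_pair_variance(img_a, img_b):
    """Temporal variance estimated from two frames of the same scene:
    half the mean squared difference, assuming all pixels behave identically."""
    n = len(img_a)
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / (2.0 * n)

def system_gain(mean_signal, variance, var_dark, saturation):
    """Slope K of (variance - dark variance) vs mean signal, on the 10-70% range."""
    pts = [(m, v - var_dark) for m, v in zip(mean_signal, variance)
           if 0.1 * saturation <= m <= 0.7 * saturation]
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    return sum((x - mx) * (y - my) for x, y in pts) / \
        sum((x - mx) ** 2 for x, _ in pts)

# Two flat frames differing only by noise
v = frame_pair_variance([10, 12, 10, 12], [12, 10, 12, 10])  # 2.0 counts^2

# Synthetic photon-transfer data with K = 0.3 and a dark variance of 2 counts^2
means = [100.0, 200.0, 300.0, 400.0, 500.0]
variances = [0.3 * m + 2.0 for m in means]
K = system_gain(means, variances, var_dark=2.0, saturation=1000.0)  # recovers 0.3
```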

Talking about saturation: the **saturation capacity** of the sensor corresponds to the value that maximizes the variance of the signal. Indeed, once the saturation threshold is reached, compression occurs, which clips the gray values to a maximum and therefore artificially limits the measured variance. Measuring saturation is therefore the first operation that must be done on the data and is shown here in Figure 8. The saturation can be expressed in terms of exposure time during the experiment but is more meaningful when expressed in number of electrons. To give a precise reading of the saturation, small exposure steps must be taken. Here, I used relatively coarse steps because the images were recorded manually, and it was a bit cumbersome to use the 50 steps recommended by the standard.
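Finding the saturation point from the variance curve is then a short sketch (the exposure and variance lists are arbitrary illustration values):

```python
def saturation_point(exposures, variances):
    """Return the exposure at which the measured variance is maximal.

    Past saturation, clipping compresses the gray values and the variance
    drops, so the variance peak marks the saturation capacity."""
    best = max(range(len(variances)), key=lambda i: variances[i])
    return exposures[best]

# Variance grows with exposure, peaks, then collapses as clipping sets in
t_sat = saturation_point([1, 2, 3, 4, 5], [10, 20, 30, 25, 5])  # peak at t = 3
```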

The saturation capacity of the sensor, given in electrons, gives a direct indication of the maximum achievable **signal-to-noise ratio** (SNR) of that sensor. The total SNR at a given exposure is

*SNR = η·µ_{p} / √(σ²_{d} + σ²_{q}/K² + η·µ_{p})*

When the number of photo-electrons is large enough that the shot noise dominates the dark noise and the quantization noise, the equation simplifies to

*SNR ≈ √(η·µ_{p})*

The maximum achievable SNR is therefore given by the square root of the saturation capacity expressed in electrons. Not all cameras are equal in that regard, and the maximum achievable SNR is not necessarily higher in more expensive cameras. In fact, I had the occasion to compare a cheap IMX183 rolling-shutter camera to a more expensive global-shutter model of a different brand and obtained better results with the cheap camera. When we explained to the salesman of the second (more expensive) camera that we were looking for a camera with a high achievable SNR, that the cheaper camera performed better in that regard, and therefore politely declined the offer for his expensive camera, he replied in a blatantly arrogant way that we were “comparing a Rolls-Royce to a Lada”, betraying his poor understanding of camera technology. The take-away from this experience: don’t be fooled by the price tag into thinking that a camera ought to be better just because it is more expensive!
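Both SNR expressions can be sketched numerically (the noise figures below are arbitrary; note how the full model converges to the shot-noise limit at high photon counts):

```python
import math

def snr(eta, mu_p, sigma_d, sigma_q, K):
    """Full SNR model: signal over shot + dark + quantization noise (in electrons)."""
    return eta * mu_p / math.sqrt(sigma_d ** 2 + (sigma_q / K) ** 2 + eta * mu_p)

def max_snr_db(saturation_electrons):
    """Maximum achievable SNR: sqrt of the saturation capacity, here in dB."""
    return 20.0 * math.log10(math.sqrt(saturation_electrons))

# At 1e6 photo-electrons, the full model is within 0.1 of sqrt(1e6) = 1000
s = snr(eta=1.0, mu_p=1e6, sigma_d=5.0, sigma_q=0.289, K=0.1)

# A 10,000 e- saturation capacity caps the SNR at sqrt(10000) = 100, i.e. 40 dB
db = max_snr_db(10000.0)
```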

We can therefore ask ourselves what justifies the higher price tag of the other camera. One of the reasons some cameras are more expensive than others, even if they do not achieve as high a maximum SNR, is their low-light performance, governed by the dark noise. While some systems are driven by the maximum achievable SNR, many others are driven by a different aspect. Good examples are spectrometers, which are ultimately driven by their [»] dynamic range, the ratio of the highest peak to the surrounding noise of the baseline. A spectrometer with a good dynamic range will offer more readable peaks than a spectrometer with a low one (see [»] here for examples).

The **dynamic range**, *DR*, is obtained as the ratio of the maximum measurable signal to the minimum detectable one, the latter being set by the dark noise. It therefore requires a precise evaluation of the noise due to the dark signal, *σ_{d}*.

The linear model of the EMVA 1288 standard divides dark noise into a part that does not change with exposure time and a part that does. While the former is typically composed of many contributions grouped into a single term, *σ_{d,0}*, the latter is directly linked to thermal electrons, which follow a Poisson distribution too, leading to the expression

*σ²_{d} = σ²_{d,0} + µ_{I}·t*

where *µ_{I}* is the dark current, expressed in e^{-}/px·s, *t* is the exposure time, and *σ_{d,0}* is the non-time-dependent part.

The dark current itself varies with temperature, and the standard proposes a simplified model that takes into account a temperature-doubling effect:

*µ_{I}(T) = µ_{I,0}·2^{(T−T_{0})/T_{d}}*

where *µ_{I,0}* is the dark current at a reference temperature *T_{0}* and *T_{d}* is the doubling temperature.

Because of the thermal variation of the dark current, the measurement requires having the camera in a thermal steady state. Any reported measurement of dark current should also include, at the very least, the temperature at which it was measured and, at best, the doubling temperature. Here, I did not have the required apparatus to change the camera temperature, so I let the camera reach steady state by letting it run for half an hour in a 20°C ambience.
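The doubling model is a one-liner to sketch (the 7 °C doubling temperature below is an arbitrary illustration value, not a measured one):

```python
def dark_current(mu_i0, temp_c, ref_temp_c, doubling_temp_c):
    """Dark current at temperature T: mu_I0 * 2**((T - T0) / Td)."""
    return mu_i0 * 2.0 ** ((temp_c - ref_temp_c) / doubling_temp_c)

# 2 e-/px.s at a 20 degC reference, doubling every 7 degC: 4 e-/px.s at 27 degC
mu_i_27 = dark_current(2.0, 27.0, 20.0, 7.0)
mu_i_20 = dark_current(2.0, 20.0, 20.0, 7.0)  # unchanged at the reference
```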

Note also that most cameras tend to compensate for the exposure-dependent part of the dark signal in the mean gray levels, *µ_{d}*. A measurement of *µ_{d}* is therefore seldom useful to evaluate the dark current, *µ_{I}*. It is however impossible to compensate noise effects in a reliable way, and the variance equation is therefore a safer place to evaluate *µ_{I}*.

Following the standard, and taking good care of the note here-above, a linear model was fitted on the gray-level variance of measurements taken in the absence of light against exposure time. The results are shown in Figure 10.

The dark current is directly given by the slope of the fitted line divided by the squared system gain, *K²*, with the result expressed in e^{-}/px·s (note: in Figure 10 the scale is in µs, which therefore requires a 10^{6} correction to obtain e^{-}/px·s). The **temporal dark noise** is obtained by removing the quantization effect from the offset of the fitted line and dividing by the system gain:

*σ_{d,0} = √(offset − σ²_{q}) / K*
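A sketch of that extraction, assuming the dark-frame variance was fitted in counts² against exposure time in seconds (the slope, offset and gain values are synthetic; per the linear model, the variance slope is *K²·µ_{I}* and the offset is *K²·σ²_{d,0} + σ²_{q}*):

```python
import math

SIGMA_Q2 = 1.0 / 12.0  # ADC quantization noise variance, counts^2

def dark_parameters(slope_counts2_per_s, offset_counts2, K):
    """Dark current (e-/px.s) and temporal dark noise (e-) from a linear fit of
    the dark-frame gray-level variance against exposure time:
        variance(t) = slope * t + offset   (all in counts)."""
    mu_i = slope_counts2_per_s / K ** 2                       # slope = K^2 * mu_I
    sigma_d0 = math.sqrt(offset_counts2 - SIGMA_Q2) / K       # offset = K^2*s0^2 + 1/12
    return mu_i, sigma_d0

# Synthetic fit of a camera with K = 0.3, mu_I = 20 e-/px.s, sigma_d0 = 5 e-
mu_i, sigma_d0 = dark_parameters(1.8, 2.25 + 1.0 / 12.0, 0.3)
```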

The offset also plays a particular role in the computation of the absolute sensitivity threshold, *µ_{p,min}*, which is the minimum number of photons required to reach an SNR of 1 in the absence of dark current (the derivation of the formula can be found in the EMVA standard document):

*µ_{p,min} = (1/η)·(1/2 + √(σ²_{d} + σ²_{q}/K² + 1/4))*

The latter value is used to compute the **dynamic range**, *DR*, usually expressed in dB (*20·log_{10}(DR)*) or eventually in bits (*log_{2}(DR)*):

*DR = µ_{p,sat} / µ_{p,min}*

Comparing two cameras for either dynamic range or low-light performance is therefore a complicated task because it involves the evolution of dark current with both exposure time and temperature. The standard reporting only offers the dynamic range or absolute sensitivity threshold at a virtual zero exposure time (which is seldom useful), and does not require camera evaluators to report the dark current doubling temperature because it calls for more complex instrumentation. The problem of temperature dependence has practical implications far beyond exotic experiments: not all cameras heat up the same way! A striking observation when operating Daheng (or Baumer) cameras is that they heat up quite a lot. The consequence is that they reach a steady-state temperature above that of FLIR cameras in the same ambient conditions, resulting in different operating conditions with regard to dark current.
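Both quantities can be sketched as follows (illustrative values only; as a sanity check, a noiseless unity-gain sensor needs exactly one photon to reach SNR = 1, since shot noise alone gives *η·µ_{p} = √(η·µ_{p})* at *µ_{p} = 1/η*):

```python
import math

def absolute_sensitivity_threshold(eta, sigma_d, sigma_q, K):
    """Minimum photon count for SNR = 1 (no dark current term):
    mu_p_min = (1/eta) * (1/2 + sqrt(sigma_d^2 + sigma_q^2/K^2 + 1/4))."""
    return (0.5 + math.sqrt(sigma_d ** 2 + (sigma_q / K) ** 2 + 0.25)) / eta

def dynamic_range(mu_p_sat, mu_p_min):
    """Dynamic range as a plain ratio, in dB and in bits."""
    dr = mu_p_sat / mu_p_min
    return dr, 20.0 * math.log10(dr), math.log2(dr)

# Sanity check: noiseless sensor, eta = 1, K = 1 -> threshold of exactly 1 photon
mu_p_min = absolute_sensitivity_threshold(1.0, 0.0, 0.0, 1.0)

# Saturation of 1000 photons with a 10-photon threshold -> DR 100:1 = 40 dB
dr, db, bits = dynamic_range(1000.0, 10.0)
```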

Also, in spectroscopy we are typically interested in long exposure times, sometimes above 10 seconds, which multiplies the effect of dark current on the total dark noise. Having a camera that performs well at short exposure times (low absolute sensitivity threshold or high reported dynamic range) does not necessarily guarantee good long-exposure performance, as dark current has to be taken into account and corrected for the temperature at which you intend to operate the camera. This also explains why cooled sensors are so common in fields such as spectroscopy, fluorescence microscopy and astronomy.

I also ran into inconsistencies when exploring dark current over larger exposure ranges. Between 0.1 and 1 second, I got a perfectly linear model (with the exception of two odd points, which is already an interesting observation since I had none in the measurements of Figure 10) with a similar offset but a different dark current (1.89 count/px.s instead of 2.55 count/px.s). This is illustrated in Figure 11a. Things got even worse as I kept increasing the exposure time, as can be seen in Figure 11b. While I can possibly explain the results of Figure 11a vs Figure 10 by some warming-up effects, I have no explanation for the results of Figure 11b. The only thing I can add is that I had to enable a special option in the camera (“remove parameter limit”) to go above 1 s exposure time, but I have no idea what this implies for the camera hardware or logic itself.

Now that we have collected some measurements, it is time to check if they are correct. Since Daheng does not publish EMVA reports (or I did not find them), I had to look for data provided by other camera manufacturers using the same sensor. This is suboptimal because even if some properties are expected to remain constant as they depend on the sensor (*e.g.* quantum efficiency), others may depend on the specific hardware designed by the camera manufacturer to interface with the sensor. Among all camera manufacturers, I found that the data provided by [∞] Lucid was the most complete and used it as comparison data.

Results are given in Figure 12. I got excellent agreement for the saturation capacity, maximum achievable SNR, gain and temporal dark noise. The measurements of quantum efficiency and absolute sensitivity threshold are 20% off (since the absolute sensitivity depends on the quantum efficiency, it is normal that a 20% error in the former is repeated in the latter). This is also reflected in the dynamic range, although the difference is not 20% in this case due to the logarithmic scale. The linearity error is of about the same order of magnitude, maybe a touch larger than Lucid’s. Concerning the dark current, it is difficult to tell because I have two different measurements depending on the exposure-time range (see Figure 10 and Figure 11a), which are either 10% or 49% off. I don’t have an explanation yet, but it may be related to thermal effects, as I found Daheng cameras to run very hot compared to the FLIR or Basler cameras I used to work with.

The results are therefore not so bad for such a low-cost setup, and most problems could probably be resolved by better calibration. Although the uncertainty on the response of the photodiode cannot by itself explain a 20% discrepancy, some improvements can certainly be made in that direction. A more precise evaluation of the quantum efficiency will *de facto* yield closer values for the absolute sensitivity threshold as well. This leaves only the problem of dark current, which is in itself tricky. Future work should therefore try to tackle this and also fill in the remaining pieces of data by measuring the dark current doubling temperature, which is seldom given by camera manufacturers. Having a DIY system that could measure this parameter while providing good agreement on all the other standard parameters would be a very nice achievement!

If you wish to help me build such a system, consider donating to my [∞] Patreon :) Together, we can make open hardware beat the market giants ^__^’ don’t hesitate to share your comments on the [∞] community board as well to let me know!

I would therefore like to give a big thanks to **Young**, **Naif**, **Sebastian**, **Alex**, **Stephen**, **James**, **Lilith**, **Jesse**, **Jon**, **Cory**, **Karel**, **Sivaraman**, **Themulticaster**, **Tayyab**, **David**, **Marcel**, **Michael**, **Shaun**, **Kirk**, **Dennis**, **M** and **Natan** who have supported this post through [∞] Patreon and made such an extraordinary adventure possible.

**You may also like:**

[»] Dynamic Range Analysis of a 350-700nm Spectrometer

[»] Choosing the Best Camera Sensor for Spectroscopy