During my experiments with spectroscopy and my Thorlabs camera, I ran into sensitivity issues and wondered if I could solve the problem by using a high-quality photodiode instead of the camera's CMOS sensor. The idea makes sense because a photodiode allows longer integration times and more accurate amplification circuits. Also, the Thorlabs camera I have is limited to a 1 second maximum exposure, and its electronics have to manage a lot of pixels, which is not mandatory for spectroscopy if you rotate the grating.
On the other hand, before buying an expensive amplified photodiode system (starting price is 300 EUR ex-VAT…), I wanted to know how sensitive a CMOS pixel actually is. So I decided to run a very simple experiment using neutral density filters with a total optical density of 6 (letting only 0.0001% of the light through) and my 4.5 mW green Nd:YAG laser. The concept is relatively simple: attach the OD6 filters in front of the camera, send the laser beam onto the camera sensor, set the exposure time to maximum, record an image, and repeat the process with the laser switched off to get a black-level image. Knowing the power of the laser and the overall excitation of the pixels, we can derive some sensitivity figures.
The black-level image had a sum of gray levels of 766,740, and 99% of the data was within 1.5 gray levels. The laser image had a sum of gray levels of 29,530,337. Dividing the laser power attenuated by the filters by the difference in gray levels, and multiplying by the noise floor (1.5 gray levels), we get an approximation of the noise equivalent power (NEP) of one pixel: about 0.1 fW (10⁻¹⁶ W!). That is the very minimum light level my CMOS sensor can detect with the exposure set to maximum; or at least an order of magnitude of that level.
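For the curious, here is a short sketch of that computation with the figures given above (the variable names are mine, not from the experiment):

```python
# Rough per-pixel NEP estimate from the measured gray levels.
laser_power_W = 4.5e-3        # 4.5 mW Nd:YAG laser
od_transmission = 1e-6        # OD6 filter stack passes 10^-6 of the light
black_sum = 766_740           # sum of gray levels, laser off
laser_sum = 29_530_337        # sum of gray levels, laser on
noise_floor_gl = 1.5          # 99% of the dark data within 1.5 gray levels

# Power actually reaching the sensor after the filters (4.5 nW):
incident_power_W = laser_power_W * od_transmission

# Watts per gray level, scaled by the noise floor to get the smallest
# detectable power change on a single pixel:
watts_per_gray_level = incident_power_W / (laser_sum - black_sum)
nep_W = watts_per_gray_level * noise_floor_gl

print(f"NEP ≈ {nep_W:.1e} W")  # on the order of 1e-16 W, i.e. ~0.1 fW
```

The exact value depends on how you round, but it lands squarely in the 10⁻¹⁶ W range quoted in the text.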
Let us now compare this to the NEP of the Thorlabs FDS100 photodiode, which is 12 fW/√Hz. As the exposure time of the camera was set to 1 second (roughly a 1 Hz bandwidth), we can estimate that the CMOS-based spectrophotometer is about 120 times more sensitive than the photodiode. Obviously, this requires focusing the light onto one single pixel and using the same integration time (1 second). However, we have seen in a [»] previous post how to focus light on 2 px diameter disks, and even then it would still require about 15 minutes of photodiode integration time to reach the same detectivity. Clearly, the photodiode is outperformed here by the CMOS sensor.
But how can that be? The answer lies in the size of the detector element. When you compare the datasheets of photodiodes, you can see that the noise level increases as the size of the photosensitive element increases. And because one of our CMOS pixels is much smaller than our photodiode element (13 mm²), it has a smaller noise floor and thus a better sensitivity. However, the detectivity (the square root of the photosensitive area divided by the NEP) of the photodiode is actually much better than the detectivity of the CMOS pixel, meaning that the photodiode actually does its job better, although it may have, as in this case, a higher overall noise due to the size of its photosensitive element.
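To put a rough number on that, here is the detectivity ratio computed with the definition above. The pixel pitch is not stated in the post, so the 5 µm figure below is purely my assumption (typical for CMOS sensors of this class):

```python
import math

# Detectivity D = sqrt(area) / NEP, per the definition in the text.
pixel_area_cm2 = (5e-4) ** 2    # assumed 5 µm pixel, expressed in cm^2
fds100_area_cm2 = 0.13          # 13 mm^2 active area

cmos_nep_W = 1e-16              # ~0.1 fW per-pixel estimate
fds100_nep_W = 12e-15           # 12 fW at ~1 Hz bandwidth

d_cmos = math.sqrt(pixel_area_cm2) / cmos_nep_W
d_fds100 = math.sqrt(fds100_area_cm2) / fds100_nep_W
print(d_fds100 / d_cmos)        # several times better for the photodiode
```

With these assumed numbers the photodiode comes out a handful of times better per unit area, which is why "smaller element" beats "better detector" in this experiment.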
The consequence of all this is that we cannot beat our CMOS sensor in terms of sensitivity for spectrophotometric applications unless we buy a really, really expensive sensor. Still, the story ends well because this simple experiment just saved me a lot of wasted money! And that's always good news when you are a starving scientist ;-)