**[»] Tutorials** and **[»] Optics**.

In previous posts I mentioned that [»] image simulations, despite being a great visual communication tool, are not suitable for system analysis, mainly because they are not quantitative enough about the system aberrations. I then introduced a more quantitative way to assess imaging system quality through the [»] geometrical spot size, but it lacked a complete description of the diffractive effects of light, which is why I then introduced the concept of [»] diffractive point spread functions.

The point spread function can be seen as the impulse response of the optical system. It shows how an infinitely small source point (possibly in the form of a collimated beam) is spread on the detector surface. An example of a PSF is given in Figure 1 with the full image on the left and a cross section on the right. The amplitude of the plot does not really matter, and different normalization rules are often applied, such as normalizing by the integral of either the plot or the cross section. You can review the various normalization methods [»] in our previous post.

This however still lacks some of the quantitative aspects we were looking for. We can compare the PSFs of two systems and visually tell if one seems better than the other (a tighter PSF means better resolution), but we don't have a single number that we can use for automated analysis, such as the [»] rms of the geometrical spot size that we introduced earlier. This is exactly what I am going to address in this post.

Just like we computed the rms value of the geometrical spot size, we can extract information from the PSF of Figure 1 through simple mathematical operations. We could compute the rms of the PSF, but in practice this is rarely done. Most optical design software instead relies on the concept of **encircled energy**, which is the ratio of the energy (integral of the PSF) within a given disk to the total energy of the PSF. A tighter PSF (sharper spot) will have the same energy contained in a smaller disk than a broader PSF coming from a poorer design. Similarly, the encircled energy of a sharp PSF at a given disk size will be higher than the encircled energy at that same disk size for the poorer design.

There are then two ways to quantify encircled energy: either you give the energy ratio at a given disk size, or you give the disk radius that contains a given energy ratio. The exact energy ratio is up to you and can be 50%, 67%, 90%, 95%, etc.

The encircled energy plot of the PSF of Figure 1 is given in Figure 2. It reaches 50% at 4.3 µm, 67% at 10.6 µm and 95% at 18.0 µm. Mathematically, it was obtained by discrete integration of the cross section of Figure 1.
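As a sketch, this discrete integration is only a few lines of numpy. Everything below is made up for illustration (a Gaussian stand-in for the PSF of Figure 1, with an assumed sigma, grid size and sample spacing), not the actual system data:

```python
import numpy as np

# Hypothetical Gaussian stand-in for the PSF of Figure 1 (units in µm).
n, pitch = 256, 0.25                      # grid size and sample spacing (assumed)
y, x = np.mgrid[:n, :n] * pitch
x -= x.mean()
y -= y.mean()
r = np.hypot(x, y)                        # radial distance of each pixel
psf = np.exp(-r**2 / (2 * 3.0**2))        # sigma = 3 µm (assumed)

def encircled_energy(psf, r, radius):
    """Fraction of the total PSF energy inside a disk of the given radius."""
    return psf[r <= radius].sum() / psf.sum()

def radius_for_energy(psf, r, fraction):
    """Smallest sampled radius whose disk contains `fraction` of the energy."""
    order = np.argsort(r, axis=None)      # sort pixels by radius...
    cumulative = np.cumsum(psf.ravel()[order]) / psf.sum()  # ...and accumulate
    return r.ravel()[order][np.searchsorted(cumulative, fraction)]

r50 = radius_for_energy(psf, r, 0.50)     # disk radius holding 50 % of the energy
```

Both conventions from above are covered: `encircled_energy` gives the ratio at a given disk size, `radius_for_energy` the disk size at a given ratio.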

Encircled energy is a great way to compare two different PSFs, but it does not answer one of the fundamental questions of optical design: what is the smallest feature that the optical system can resolve? This is known as the **resolution power** of the system, and it too is entirely described by the PSF.

From the look of Figure 1, we can already state that we cannot resolve features much smaller than the central peak width, which is about 5 µm. On the other hand, we expect the resolution to be better than 36 µm since we know from Figure 2 that almost all of the PSF energy is contained within a 36 µm diameter disk.

The usual thought experiment to address resolution power consists of placing two nearby light-emitting spots and watching at which separation distance we can observe two peaks on the detector. The results for the PSF of Figure 1 are shown in Figure 3 for distances of 1 µm, 2 µm, 3 µm and 4 µm.

We can see that the two peaks start to resolve at 3 µm, and we can therefore say that the resolution power of the system is about 3 µm. The exact value depends on how you define the term *resolve*. Some people will look for complete separation, others for a 20% dip, and others will say it is the point where a dip just starts forming but is not visible yet (this corresponds to the vanishing of the second derivative).
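The thought experiment is easy to reproduce numerically. Below is a minimal sketch that assumes a 1-D Gaussian cross-section for the PSF (the sigma value is arbitrary, not the actual PSF of Figure 1) and tests whether a dip forms between the two image peaks:

```python
import numpy as np

x = np.arange(-20.0, 20.0, 0.05)      # detector coordinate in µm
sigma = 1.0                           # assumed PSF width parameter, µm

def psf_1d(x, center):
    """Hypothetical Gaussian PSF cross-section centered at `center`."""
    return np.exp(-(x - center)**2 / (2 * sigma**2))

def has_dip(separation):
    """True if two points `separation` µm apart image as two peaks with a dip."""
    image = psf_1d(x, -separation / 2) + psf_1d(x, +separation / 2)
    midpoint = image[np.argmin(np.abs(x))]   # sample closest to the midpoint
    return midpoint < image.max()            # dip at the midpoint => resolved
```

With a Gaussian profile the dip appears for separations larger than 2σ, which is consistent with the second-derivative criterion mentioned above.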

This problem has been studied extensively for systems with no aberrations, leading to the various diffraction-limited resolution criteria. The most accepted one is the [∞] Rayleigh criterion, which relates the resolution of the system to the wavelength of the light and the system f-number. While these criteria are useful to evaluate the maximum achievable resolution of a given system (which is why I'm mentioning them here), they are of no help to compute the actual resolution of your specific system because it is usually not diffraction limited (and if it is, you may want to let more light pass through, as I explained [»] here).
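For reference, the Rayleigh criterion places the limit at the first zero of the Airy pattern, which on the detector sits at 1.22·λ·N from the peak for a system of f-number N. A hypothetical helper (the example wavelength and f-number are arbitrary):

```python
def rayleigh_resolution(wavelength_um, f_number):
    """Rayleigh-criterion separation on the detector, in µm:
    the first zero of the Airy pattern at 1.22 * lambda * N."""
    return 1.22 * wavelength_um * f_number

# e.g. green light (0.55 µm) through an f/4 system:
d = rayleigh_resolution(0.55, 4)   # ≈ 2.7 µm
```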

The method used in Figure 3 is fine as a thought experiment, but the objects we observe do not consist of only two nearby emitting points; they are a collection of individual points, some of which are farther away than others. Each of these points contributes to the final image that we observe. Mathematically, this corresponds to the convolution of the input image with the PSF. This assumes the PSF remains constant over the field we are studying, which is fine for local analysis but usually does not hold over the complete field of view, so you need to repeat the analysis at different field positions.
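As an illustration, here is a minimal numpy-only sketch of image formation as a convolution with a shift-invariant PSF (the scene and the Gaussian PSF are made up; real data would come from your own system):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                 # hypothetical input image
k = np.arange(-8, 9)
psf = np.exp(-(k[None, :]**2 + k[:, None]**2) / (2 * 2.0**2))
psf /= psf.sum()                             # unit energy: brightness preserved

# Convolve via zero-padded FFTs (product in Fourier space, see below why),
# then crop back to the scene size ('same' convolution).
shape = [s + p - 1 for s, p in zip(scene.shape, psf.shape)]
full = np.real(np.fft.ifft2(np.fft.fft2(scene, shape) * np.fft.fft2(psf, shape)))
start = [(p - 1) // 2 for p in psf.shape]
blurred = full[start[0]:start[0] + scene.shape[0],
               start[1]:start[1] + scene.shape[1]]
```

The blurred image has the same size as the scene and, as expected from a convolution with a unit-energy PSF, a lower contrast (smaller pixel-to-pixel variance) than the input.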

At this point, we could simulate an image by explicitly computing the convolution between an input image and the PSF, but we would fall into the same trap as with [»] geometrical image simulations, which lack quantitative information about the system performance. In fact, when we computed these geometrical simulations, we applied some sort of convolution, except that the (geometrical) PSF was changing slightly for every point. What we will do here is different.

We know that the convolution of two functions can be reduced to a product in the [∞] Fourier space. Furthermore, system theory shows that we can study systems through the gain and phase of the Fourier transform of the system impulse response. Since the PSF can be seen as the impulse response of the optical system, we can apply the same methodology here and focus our attention on the Fourier transform of the PSF.
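The convolution theorem is easy to verify numerically: the circular convolution of two arrays (the flavor that pairs with the discrete Fourier transform) matches the inverse transform of the product of their transforms. The signals below are arbitrary, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.random(128), rng.random(128)

# Circular convolution computed directly from its definition:
# (a * b)[k] = sum_n a[n] * b[(k - n) mod N]
direct = np.array([np.sum(a * np.roll(b[::-1], k + 1)) for k in range(128)])

# ...and computed as a pointwise product in Fourier space.
via_fft = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
```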

More specifically, the gain function tells us how strongly the details of the input are attenuated. The input image will have features of different sizes, and each of these will be attenuated differently. The finest features will disappear first and the largest features will usually disappear last. If we were to input a sine wave of bright and dark areas at a given period, the gain function at that frequency would tell how damped the amplitude of the sine wave becomes as it is imaged by the system.

In optics, the gain function is called the **Modulation Transfer Function (MTF)** and is defined over spatial frequencies, usually expressed in mm^{-1}, equivalently called lp/mm (line pairs per millimeter). This naming convention comes from the time when black/transparent stripe targets were used, which are __not__ sine waves and therefore have a different transfer function. The name lp/mm remains even though we are actually referring to sine wave periods, but it is always good to check whether the person you are talking to means sine waves or these black/transparent stripe targets, as the latter are still used experimentally.

Mathematically, the MTF is obtained as the modulus of the Fourier transform of the PSF and is normalized such that *MTF(0) = 1*. The MTF of the PSF of Figure 1 is given in Figure 4.
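In code, this is a one-liner on top of a Fourier transform. The sketch below uses an assumed 1-D Gaussian PSF (not the PSF of Figure 1) sampled every 0.5 µm; expressing the pitch in mm makes the frequency axis come out directly in lp/mm, and a 30 % contrast criterion is extracted at the end:

```python
import numpy as np

pitch_mm = 0.5e-3                              # 0.5 µm sample spacing, in mm
x = (np.arange(1024) - 512) * pitch_mm
psf = np.exp(-x**2 / (2 * (3e-3)**2))          # assumed Gaussian PSF, sigma = 3 µm

otf = np.fft.rfft(psf)                         # optical transfer function
mtf = np.abs(otf) / np.abs(otf[0])             # modulus, normalized so MTF(0) = 1
freqs = np.fft.rfftfreq(x.size, d=pitch_mm)    # spatial frequencies in lp/mm

f30 = freqs[np.argmax(mtf < 0.3)]              # first frequency below 30 % contrast
period_um = 1e3 / f30                          # corresponding feature period in µm
```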

It is customary to also include the MTF of the corresponding diffraction-limited system in the results because it gives an indication of the effects of aberrations on the imaging transfer properties. Once again, however, these plots contain a lot of information and we need some way to extract a single number from them. Resolution is then often described as the frequency at which the contrast drops below a given amount, usually 30%. In Figure 4 this corresponds to 34 lp/mm, which represents features of 29 µm. Another important quantity is the frequency at which the MTF drops to zero for all succeeding frequencies, called the cut-off frequency of the system. Whereas it is still possible to observe features smaller than 29 µm in Figure 4, although they will be very dim, it is impossible to observe any details above the cut-off frequency since they have zero modulation, that is, they do not transfer through the system.

The exact value you use to characterize your system is up to you (50% modulation, 30% modulation, cut-off…) but in all cases you need to specify the criterion you selected when you present your resolution figures. Giving the cut-off will obviously yield a very high resolution, but users will usually find it dishonest because they will, by definition, never observe any features of that size. On the other hand, selecting too restrictive a modulation threshold (like 50% modulation) will give you lower resolution figures and make your system look worse than your competitors' even if you achieve the same resolution as they do. It can also make the life of users more difficult when they have to compare datasheets that all use different criteria to express resolution.

Experimentally speaking, you will find several resolution test targets available from optical suppliers like [∞] Thorlabs. I have generated simulations of the most important ones, including Ronchi targets (Figure 5), which are the famous black/transparent stripes, the USAF 1951 target (Figure 6), which is widely used to evaluate a system's resolution, the slanted edge target (Figure 7), which can be used to derive the full MTF of the system, and the Siemens star target (Figure 7) which is, in my experience, the least used of the list. All the images were obtained by convolution with the system PSF of Figure 1. You can observe the drop of modulation/contrast in the cross-sections of Figure 5.

Finally, it is important to realize that all the different quantifiers studied here only form a toolbox that you can use to analyze your system. Specific applications will have specific attributes to evaluate the quality of their systems. For instance, spectroscopy disperses information along one axis, and you can adapt the concept of encircled energy to integrate along one axis rather than along the radius. Similarly, in a previous post I did [»] an in-depth study of the convolution of the spectroscope PSFs with a slit to study actual spectral resolution. Optical design software like ZEMAX will only give you the most common analysis tools (encircled energy, PSF, MTF…) but you are the designer, and it is part of your job to identify the best way to characterize your system quality so you can optimize it efficiently. Off-the-shelf software is usually relatively poor in that regard, and this is something I would like to address in the #DevOptical software through enhanced modularity.

That is all for today! In the next post we will dig into extremely exciting stuff which is at the very core of optical design – the third order aberration theory!

I would like to give a big thanks to **James**, **Lilith**, **Cam**, **Samuel**, **Themulticaster**, **Sivaraman**, **Vaclav**, **Arif** and **Jesse** who have supported this post through [∞] Patreon. I take this occasion to invite you to donate through Patreon, even as little as $1. I cannot stress it enough: you can really help me to post more content and make more experiments! The device presented in this post was paid for 100% through the money collected on Patreon!

**You may also like:**

[»] Resolution in Spectroscopy

[»] 400-800 nm Spectrometer Performances

[»] Camera Lenses as Microscopy Objectives