Published: 2021-12-04 | Category: [»] Optics.

Recently, we have seen how geometrical raytracing can be used to [»] simulate how images transform through optical systems. That method, however, lacked quantitative information, and we further refined our presentation to yield the [»] RMS spot size and spot diagrams of the optical system. Both of these methods neglect the diffractive aspects of the optical system, which we will address in this post. I will use the same system as in my previous post to illustrate the computations. It is shown in Figure 1.

Figure 1 – Sample optical system used here

If we take the RMS spot diagram of the [»] last post and accumulate the rays into a bitmap placed on the image plane, we obtain the impulse response of the system, which in optics is called the Point Spread Function (PSF). It shows how a single source point is spread as it transfers through the system.

If we were to compute a PSF based on a purely geometrical approach for the on-axis field of Figure 1 (blue rays), we would obtain the results of Figure 2. The 2D image of the PSF is given on the left-hand side while a transversal cut is shown on the right-hand side of the figure. PSFs are always normalized before being displayed, so the intensity of the plot does not matter at this stage of the analysis, but we will come back to this later.

Figure 2 – Geometrical PSF

If you were to measure the PSF of the real system, however, you would likely obtain something much closer to Figure 3.

Figure 3 – Rayleigh-Sommerfeld Diffractive PSF

The results of Figure 3 include the effects of diffraction and require a bit more computation than the simple geometrical raytracing used for Figure 2. Note how the diffractive PSF displays annular patterns which are clearly not predicted by geometrical raytracing alone. The PSF itself also appears much broader, which ultimately results in a lower image resolution.

So, what is this diffraction effect that we are talking about?

Sommerfeld (1868-1951) described diffraction as “any deviation of light rays from rectilinear paths which cannot be interpreted as reflection or refraction”. Historically, the first observation of diffraction is credited to Grimaldi, who in 1665 described the pattern observed when light traverses a small aperture. In 1678, Christiaan Huygens proposed a model in which light travels as waves, for which Thomas Young provided evidence in 1801 through the observation of interference patterns in an experiment involving light passing through two small slits. A mathematically detailed formulation of the theory was then proposed by Augustin Fresnel in 1818, which was later extended by other physicists including Sommerfeld, from whose work we derived the results of Figure 3. It is important to understand that the behaviour of light can be extremely complex, and simplifying hypotheses are introduced in the various models. With time, more and more sophisticated models were elaborated, but we will stick here to scalar diffraction theory because it is the one commonly included in optical design software simulation tools.

In the early Huygens model, light propagates through space as waves (just like sound does). Each point source of light emits spherical waves, and every point on these spherical wavelets can be seen as a secondary source of spherical waves itself. As the spherical wave of the source point hits an aperture, some of the secondary sources are blocked while others pass through, as represented in Figure 4.

Figure 4 – Huygens model of point source with aperture

As a consequence of Figure 4, some of the light extends to regions that cannot be reached by rectilinear paths from the source through the edges of the aperture, giving a qualitative explanation of what is actually observed when light traverses apertures. It however lacks a quantitative approach, for which we need to move to the work of Fresnel and his successors. Since these models can be relatively complex and require mathematical tools that I haven’t used since I was a student (like Green functions), I will stick here to a soft approach.

We now know, through Maxwell’s theory of electromagnetism, that light does indeed exist as waves, for which we can express the phase as being proportional to k·r, where r is the distance travelled by the wave and k is the wavenumber, which relates to the wavelength of the light, λ, by k=2π/λ. The wave at any location in space can be characterized by a complex number with an amplitude A and a phase φ:

$$E = A\,e^{i\varphi}$$
Furthermore, the energy observed by a sensor (human eye, CCD, photodiode etc.) is linked to the power density of the wave, which is proportional to the squared magnitude of this complex number, obtained by multiplying it by its conjugate:

$$I = E\,E^{*} = A^{2}$$
This has important consequences, because if the total field at a detector is given by the superposition of two waves of phases φ1 and φ2 and respective amplitudes A1 and A2, we get an observed energy of

$$I = \left|A_1 e^{i\varphi_1} + A_2 e^{i\varphi_2}\right|^{2} = A_1^{2} + A_2^{2} + 2 A_1 A_2 \cos(\varphi_1 - \varphi_2)$$

whereas the energies of the fields taken alone are

$$I_1 = A_1^{2}$$

and

$$I_2 = A_2^{2}$$
The observation of two waves reaching a detector therefore produces results that cannot be predicted from the observation of the constituent waves alone. This is at the origin of the ring pattern of Figure 3, but it also explains many other experimental results, such as those obtained with diffraction gratings. Furthermore, the observation directly depends on the relative phase of the two waves, which cannot be accessed by observing the waves separately.

To generate the PSF at the image plane, we can use the Huygens model of secondary wavelets at the aperture and compute the superposition at each point of the image plane. The Huygens principle is however only a qualitative description, and we need to use something like the Rayleigh-Sommerfeld formula for a quantitative evaluation:

$$E(P_1) = \frac{1}{i\lambda} \iint_{A} E(P_0)\, \frac{e^{ikr}}{r}\, \cos\theta \; dA$$

where A is the surface of the aperture, r=P1−P0 is the vector going from the current aperture point P0 to the image point P1 (with r its length), and θ is the angle between r and the surface normal of the aperture.

As previously, we have

$$I(P_1) = E(P_1)\,E^{*}(P_1) = \left|E(P_1)\right|^{2}$$
It is worth noting that this formula applies to planar apertures only, and to apertures that are much larger than the wavelength of the light considered. A more in-depth derivation of the formula and its limitations can be found in “Introduction to Fourier Optics” by Joseph W. Goodman (McGraw-Hill, 1996).

In practice, we will sample the exit pupil using a grid of M×M points and, for each point of the aperture, compute the field produced by a spherical wave as observed on an N×N grid at the image plane, using the formula here-above (with the integral replaced by a discrete sum). We then sum these field maps (in practice, we use a single accumulator map) and compute the squared magnitude to produce a PSF image. The PSF is then normalized, as we will detail below.

You may ask how to evaluate E at the exit pupil position, as the formula may look like a recursive law (E at some position giving E at some other position). To solve this problem, we will use the hypothesis that diffractive effects are negligible at the aperture stop of the system, and therefore also at the images of the aperture stop (the entrance and exit pupils). This is a valid hypothesis for most systems, but it is not always true. For instance, I personally met the case of a concave-grating spectrometer setup illuminated by a very narrow slit, where the slit introduced strong diffractive effects on the grating, which was also the aperture stop of the system, and we had to take these effects into account to predict the exact output of the system.

Since we assume we can neglect diffraction up to the exit pupil, we are able to evaluate the phase in the exit pupil by tracing rays through the whole system and multiplying their optical path length by k, the wavenumber. The optical path length of a ray is the geometrical path length it follows multiplied by the refractive index of the material it traverses. When computing the optical path length in the exit pupil, you have to trace rays from the object to the image plane and then backward to the exit pupil plane.
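To make the procedure more concrete, here is a minimal sketch of the direct summation in Python/NumPy. The function name, argument layout and sampling are assumptions of this illustration, not the actual #DevOptical code; the pupil field is assumed to be pre-computed from the traced optical path lengths as described above.

```python
import numpy as np

def rayleigh_sommerfeld_psf(pupil_pts, E_pupil, normal, img_pts, wavelength):
    """Direct Rayleigh-Sommerfeld summation (O(M^2 N^2) complexity).

    pupil_pts  -- (M*M, 3) sample positions on the exit pupil plane [m]
    E_pupil    -- (M*M,) complex field at those samples (0 where blocked)
    normal     -- (3,) unit normal of the pupil plane, pointing to the image
    img_pts    -- (N*N, 3) sample positions on the image plane [m]
    wavelength -- wavelength of the light [m]
    """
    k = 2.0 * np.pi / wavelength
    E_img = np.zeros(len(img_pts), dtype=complex)  # single accumulator map
    for P0, E0 in zip(pupil_pts, E_pupil):
        if E0 == 0:
            continue                               # blocked by the aperture
        r_vec = img_pts - P0                       # r = P1 - P0
        r = np.linalg.norm(r_vec, axis=1)
        cos_theta = (r_vec @ normal) / r           # obliquity factor cos(theta)
        E_img += E0 * np.exp(1j * k * r) / r * cos_theta
    E_img /= 1j * wavelength                       # 1/(i*lambda) prefactor
    return np.abs(E_img) ** 2                      # PSF before normalization
```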

The PSF you obtain using the Rayleigh-Sommerfeld formula will have a magnitude that has no real signification. First, since we are dealing with the interception of spherical waves by a plane, a great deal of the energy of the initial wave is lost. Also, when taking the intensity of the field (squared magnitude), we only get a signal proportional to the power density, not the actual power density value. As a consequence, we need to normalize the PSF by some means to give a meaning to it.

There are two normalizations often used: either you normalize the PSF by its maximum value or you normalize it by its integrated value. In the former case the peak has magnitude 1, and in the latter the total energy of the PSF is 1. When doing energy normalization, be careful that you have to normalize the 2D map and the 1D profile separately. If you normalize in 2D and then extract a cross section, the integral of the cross section will not be 1 and you will typically see much lower values than you intended.
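As an illustration, the two normalizations could look like this in NumPy (a sketch; treating the pixel size as the integration measure is my own convention here):

```python
import numpy as np

def normalize_peak(psf):
    """Scale the PSF so its peak has magnitude 1."""
    return psf / psf.max()

def normalize_energy_2d(psf, pixel_size):
    """Scale the 2D PSF so it integrates to a total energy of 1."""
    return psf / (psf.sum() * pixel_size ** 2)

def normalize_energy_1d(profile, pixel_size):
    """Normalize a 1D cross section separately so it integrates to 1."""
    return profile / (profile.sum() * pixel_size)
```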

The Rayleigh-Sommerfeld formula usually yields the most accurate data reported by optical design software such as ZEMAX OpticStudio. However, it is relatively slow to compute due to its O(M²N²) complexity and is therefore not used often in practice, as there exist much faster approximations.

To understand what these approximations are, we first need to digress a bit.

If a source point produces an expanding spherical wave, we can assume that a converging spherical wave would produce a perfect image point. That is, for a perfect imaging system, we can expect the term E(P0) in the Rayleigh-Sommerfeld formula to be given by the formula of a perfect spherical wave centered on the image point of the system. Aberrations in the optical system will then introduce some phase lag (either positive or negative) relative to this spherical-wave-induced phase. We can therefore decompose E(P0) as

$$E(P_0) = E_S(P_0)\, e^{i\Delta\varphi}$$

where ES(P0) is the field that would be expected for the pure spherical wave and ∆φ is the accumulated phase lag.

It is therefore possible to map the phase lags for each position of the exit pupil to characterize the amount of aberrations in an optical system. The phase lags are obtained by tracing rays through the entire optical system to the image plane and then backward to the reference sphere, which is centered on the image point and whose radius is equal to the distance from the image plane to the exit pupil. When expressed in wave units, that is ∆φ/2π, we refer to this map as the Wavefront Error Map of the system. The higher the wavefront error, the more aberrated the system.
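A sketch of how such a map could be assembled from traced rays; the array names and the use of the chief ray as the zero reference are assumptions of this example:

```python
import numpy as np

def wavefront_error_map(opl, opl_chief, wavelength):
    """Convert optical path lengths to a wavefront error map in wave units.

    opl       -- (M, M) optical path length of each pupil ray traced to the
                 reference sphere [m] (NaN where the aperture blocks the ray)
    opl_chief -- optical path length of the chief ray, used as zero reference
    """
    delta_phi = (2.0 * np.pi / wavelength) * (opl - opl_chief)  # phase lag
    return delta_phi / (2.0 * np.pi)  # wavefront error, in waves
```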

Fraunhofer, building on the theory laid down by Fresnel, showed that under the paraxial approximation (i.e. for rays close to the optical axis), the Rayleigh-Sommerfeld formula reduces to an inverse Fourier transform with an extra term in front of it:

$$E(P_1) = C\; \mathcal{F}^{-1}\!\left\{ A\, e^{i\Delta\varphi} \right\}$$

where A mainly codes for the aperture (0 or 1) but can also take any other value (dimmed rays etc.), and C is the extra term in front, which disappears once the PSF is normalized.

The PSF is still computed as previously using

$$\mathrm{PSF} = \left|E(P_1)\right|^{2}$$

with the extra normalization applied afterwards.
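Putting the pieces together, the FFT PSF could be sketched as follows; the aperture map A, the wavefront error map in waves, and the fftshift/ifftshift centering conventions are assumptions of this example:

```python
import numpy as np

def fft_psf(A, wfe_waves):
    """FFT PSF under the paraxial approximation.

    A         -- (M, M) aperture transmission map (0/1, or dimmed values)
    wfe_waves -- (M, M) wavefront error map in wave units (delta_phi / 2*pi)
    """
    pupil = A * np.exp(1j * 2.0 * np.pi * wfe_waves)   # A * exp(i*delta_phi)
    field = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2                           # squared magnitude
    return psf / psf.max()                             # peak normalization
```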

Since there are very powerful methods to efficiently compute the Fourier transform, known as the Fast Fourier Transform (FFT), it is much faster to compute the PSF this way than by using the Rayleigh-Sommerfeld integral as we did previously. The method is however only valid under the paraxial approximation, and systems with too small an f-number will not produce a correct PSF using this method. ZEMAX OpticStudio recommends trying both the FFT and the Rayleigh-Sommerfeld integral and accepting the FFT result only if the two results are marginally different. Also, the FFT PSF method only works for apertures and image planes perpendicular to the optical axis, and for image planes not far from the position that gives a good image (in focus). Slanted or curved detectors require direct usage of the Rayleigh-Sommerfeld integral, as do systems where you wish to observe the results far from the focus position.

The PSF of the system of Figure 1 was computed using the FFT method and the results are shown in Figure 5. The results are very close to those of Figure 3. The ripples in the 2D profile are due to a low sampling of the pupil, but it is possible to improve on this by using larger maps when computing the Fourier transform.

Figure 5 – FFT Diffractive PSF

Last but not least, when computing the results of Figure 3 (and also Figure 2), we chose both the aperture discretization (M×M) and the image plane discretization (N×N), as well as the pixel size. When computing the results of Figure 5, we only chose the discretization of the aperture (M×M), as the pixel size and image plane discretization are directly dictated by the formula.

In the case of the Rayleigh-Sommerfeld formula, it can be shown that a minimal pixel size of λ/4 is enough for most computations, although any suitable value can be used. When no value is given, OpticStudio uses the following formula to compute the pixel size

$$p = \lambda\, F_{\#}\, \frac{M-1}{M}$$
with λ the wavelength, F# the working f-number of the system and M the pupil grid sampling.

You can obviously tune this formula to your needs, as it tends to give relatively coarse pixel sizes for my personal taste.

Concerning the FFT PSF formula, the PSF map discretization will be the same as the aperture discretization (M×M). The pixel size of the PSF is given by

$$p = \lambda\, F_{\#}\, \frac{M-1}{M}$$

and you therefore have to live with whatever value you get for your wavelength and working f-number, since (M-1)/M quickly gets very close to 1, whichever value you use for M.

There is however a trick to change the pixel size. If you zero-pad the wavefront to a map of N×N pixels, the output pixel size becomes

$$p = \lambda\, F_{\#}\, \frac{M-1}{N}$$
because we have multiplied the previous pixel size by the factor M/N due to the zero-padding process.

We can use this formula to achieve a given pixel size and overall map size. Imagine you want the pixel size to be some target value q and you want the PSF image size to be qN×qN (N×N sampling with pixels of size q). You can achieve this using a pupil sampling M×M of

$$M = \operatorname{round}\!\left(1 + \frac{qN}{\lambda\, F_{\#}}\right)$$
The actual pixel size p will be slightly different from the target value, but close to it. That is how I generated the results of Figure 5 to have a similar output (pixel and overall dimensions) to the results of Figure 2 and Figure 3. The actual values differ by a touch though, because the map is 65×65 µm instead of 64×64 µm, but there isn’t much we can do about this.
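As an illustration of the full trick, here is a sketch that picks M for a target pixel size q, zero-pads the pupil to N×N, and reports the actual pixel size obtained. All numerical values are illustrative, not the ones used for Figure 5, and the flat pupil stands in for a real wavefront map:

```python
import numpy as np

wavelength = 0.55e-6   # [m]
f_number = 4.0         # working f-number of the system
q = 1.0e-6             # target pixel size [m]
N = 64                 # output map sampling (N x N)

# Pupil sampling from the formula above: M = round(1 + q*N / (lambda * F#))
M = int(round(1.0 + q * N / (wavelength * f_number)))
p = wavelength * f_number * (M - 1) / N   # actual pixel size after padding

pupil = np.ones((M, M), dtype=complex)    # placeholder: aberration-free pupil
padded = np.zeros((N, N), dtype=complex)
padded[:M, :M] = pupil                    # zero-pad; only |E|^2 matters, so
                                          # the pupil position is irrelevant
psf = np.abs(np.fft.fftshift(np.fft.ifft2(padded))) ** 2
print(f"M={M}, actual pixel size p={p * 1e6:.3f} um, "
      f"map width {p * N * 1e6:.1f} um")
```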

That is all for today!

In future posts we will dig a little deeper into the relation between aberrations and wavefront error maps, and we will then see various ways to analyse the PSFs obtained using the methods detailed in this post. It may not look like it, but today's results are a real big step in the #DevOptical series, and it took me at the very least a good hundred hours to fully develop and validate all the necessary code to generate the PSF using both the Rayleigh-Sommerfeld and the FFT approaches.

I would like to give a big thanks to James, Naif, Lilith, Cam, Samuel, Themulticaster, Sivaraman, Vaclav and Arif who have supported this post through [∞] Patreon. I also take the occasion to invite you to donate through Patreon, even as little as $1. I cannot stress it enough: you can really help me to post more content and make more experiments!


You may also like:

[»] #DevOptical Part 3: Aperture STOP and Pupils

[»] #DevOptical Part 9: Geometrical Image Simulation

[»] #DevOptical Part 14: Third-Order Aberration Theory

[»] Custom 5x Plan Objective from Stock Elements

[»] Achieving High-Performance Spectroscopy