Published: 2021-10-17 | Categories: [»] Tutorials and [»] Optics.

Now that we have all the equations necessary to [»] trace real rays using both the refraction and the reflection law on standard optical surfaces, we can start investigating image quality assessment techniques.

In 2017 I published a post about [»] optical aberrations where I showed that even perfectly manufactured lenses do not produce perfect spot images. As soon as we depart from the paraxial condition, rays will intercept the image plane at different positions depending on where they started (on-axis or off-axis) and where they intersected the [»] aperture stop.

I have spent a long time thinking about the best way to introduce you to aberration theory, and I came to the conclusion that starting with image simulation was the most intuitive approach. Since I already talked about aberrations in a [»] former post, I encourage you to read it as well to see what actually happens to rays as they go through the optical system.

In geometrical image analysis, we simulate how a given picture will be transformed by an optical system. It does not account for diffractive effects, which I will present later, but it already gives you a relatively close approximation of what you would actually get if you performed the experiment in a lab.

To illustrate, I chose the system of Figure 1, which is made of two plano-convex lenses separated by an aperture stop. The lenses both have a 50 mm focal length and the aperture stop is 5 mm in diameter. We will simulate an image that is 25 mm high, which is as large as the diameter of the lenses.

Figure 1 – A simple optical system

To simulate how the image is transformed by the system, we will place two grids: one at the input and one at the output of the system. The input grid will be filled with the image we would like to render and the output grid will receive the traced rays. Usually, the relative size ratio between the two grids will be chosen equal to the magnification of the system, but you are free to choose any output grid size you like. You can also choose any number of cells in the output grid, but a coarser grid will give you a lower resolution (think of an image that would have, say, only 64×64 pixels compared to a 512×512 one). Take care not to oversample either, as you will always be limited by the definition of the input image anyway.

Here I chose a 512×512 pixel image for the input grid with a height of 25 mm (the image size I would like to simulate), and an identical output grid since the system magnification is 1:1. If the magnification had been 1:2, I would probably have set the output height to 50 mm and kept the 512×512 sampling.
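As a minimal sketch, the two grids could be declared as below. This assumes NumPy, and all names are mine rather than taken from an actual implementation:

```python
import numpy as np

N_PIXELS = 512            # 512×512 sampling for both grids
INPUT_HEIGHT_MM = 25.0    # the image size we want to simulate
OUTPUT_HEIGHT_MM = 25.0   # identical since the magnification is 1:1
                          # (would be 50.0 for a 1:2 magnification)

# Physical coordinates of the pixel centers across the input plane.
pitch = INPUT_HEIGHT_MM / N_PIXELS
centers = (np.arange(N_PIXELS) + 0.5) * pitch - INPUT_HEIGHT_MM / 2

# The output grid accumulates the intensity carried by the traced rays.
output_grid = np.zeros((N_PIXELS, N_PIXELS))
```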

For each cell (pixel) of the input image, we will trace rays through the system and record where they intersect the image plane. One ray is not enough, however, as the final position also depends on where the ray crosses the aperture stop. If we send “N” rays, we will divide the input pixel intensity by N and add that share to the output pixels where the rays arrive. This holds for grayscale, single-wavelength images. For colour images, we further need to send rays of different wavelengths and assign them to the red, green and blue channels of the image. I will come back to this at the end of the post.
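A rough sketch of this procedure is given below. Here `trace_ray` is a hypothetical stand-in for the real-ray tracer of the previous posts, and `pupil_points` comes from the aperture sampling described next:

```python
def scatter_pixel(input_image, output_grid, ix, iy, pupil_points, trace_ray):
    """Spread the intensity of input pixel (ix, iy) over the output grid.

    trace_ray(ix, iy, p) is assumed to trace one ray from the input pixel
    through pupil point p and return the output pixel (ox, oy) it hits,
    or None if the ray is vignetted along the way.
    """
    share = input_image[iy, ix] / len(pupil_points)  # divide intensity by N
    for p in pupil_points:
        hit = trace_ray(ix, iy, p)
        if hit is None:
            continue
        ox, oy = hit
        if 0 <= ox < output_grid.shape[1] and 0 <= oy < output_grid.shape[0]:
            output_grid[oy, ox] += share  # deposit the ray's share
```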

To launch the “N” rays, we will sample the aperture plane at different positions. The easiest method is to use a square grid and to limit the valid area to a disk. We orient each ray of the input plane such that it passes through the desired position in the aperture plane using the [»] ray aiming technique. Alternatively, you can compute the entrance pupil and sample it directly, but this will fail for fast systems, which exhibit stronger pupil aberrations.
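For the system of Figure 1 (5 mm stop diameter), the square-grid sampling could look like this; again a sketch, with names of my own choosing:

```python
import numpy as np

def sample_aperture(n=33, radius_mm=2.5):
    """Sample the aperture stop on an n×n square grid, keeping only the
    points that fall inside the stop's disk (2.5 mm radius here)."""
    axis = np.linspace(-radius_mm, radius_mm, n)
    yy, zz = np.meshgrid(axis, axis)
    inside = yy**2 + zz**2 <= radius_mm**2
    return np.column_stack((yy[inside], zz[inside]))

pupil_points = sample_aperture()  # keeps about π/4 of the 33×33 points
```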

After applying the algorithm to the [∞] Lenna picture using a 33×33 sampling of the entrance pupil, I obtained the results of Figure 2. Note the loss of sharpness as the image goes through the optical system.

Figure 2 – Result of the geometrical image simulation process

Let us analyse the results of Figure 2. You can skip this section if you already watched our [»] former video.

The first thing we notice in the raytraced image is the loss of sharpness as we move away from the center of the image. This is typical of almost all optical systems, as the central part of the image is always closer to the paraxial condition and therefore has better overall quality. Coma, astigmatism and field curvature are typical aberrations that induce blurry edges in a picture.

On the other hand, if you look closely, you will see that the fur of her hat is more blurred than the rest. This is because the system of Figure 1 is far from achromatic, and blue light usually goes out of focus faster than the other wavelengths. This is a clear example of what chromatic aberrations induce in a picture.

Finally, if you pay attention to details near the center of the image, such as the model’s eyes, you will see that they have lost a bit of sharpness even though we are in the central region of the image, where coma, astigmatism and field curvature usually become insignificant. Even if chromatic aberrations might explain part of the loss of sharpness, this is also typical of spherical aberration, which depends only on the size of the aperture and appears everywhere in the image. We could have reduced the blurriness by decreasing the aperture size, but this would translate into less light going through our system.

Clearly, if we had to perform this experiment, it would be better to limit the field of view to about 10 mm in diameter rather than the full 25×25 mm² used here.

We can do many more things using geometrical image simulation and some are illustrated in Figure 3. It is a convenient way for a student to experiment with lens systems without going to the lab.

Figure 3 – Example of image simulation as we modify the optical system

Although image simulation may seem like the ultimate tool to analyse optical systems, since you directly visualize the output of the system, it is actually only rarely used in practice.

One of the major issues with image simulation is that it is horribly slow. Even though I ran the example of Figure 2 on an NVIDIA RTX A4000 card with 6,144 CUDA cores, the simulation took about 5 minutes to complete. This is due to the number of rays that need to be traced: a 512×512 image grid times a 33×33 pupil grid for 3 colours gives 512 × 512 × 33 × 33 × 3 ≈ 856 million rays, each of which must be traced surface by surface. This kind of speed is incompatible with optimization processes, where we would like to test as many configurations as possible when trying to figure out the best radii of curvature, glass selection or thicknesses for a given system.

Also, the output of the simulation is merely qualitative: we see what the image looks like, but we have no overall figure of merit that tells us whether one solution is better than another. This is again a big problem when we need to optimize a given system.

Finally, the algorithm presented here only accounts for geometrical aberrations and does not account for diffraction effects. This means it is valid only for highly aberrated systems, where diffractive aspects can be neglected. While it is possible to derive an algorithm which includes diffraction, it adds even more computation. Also, high-quality systems would require extremely fine input/output grids to account for the increased resolution.

All of that put together, image simulation is a tool that is mostly used for communication rather than for quantitative analysis. Professionally speaking, I have only used (and have only seen used) image simulation to illustrate reports and presentations for clients, because it usually speaks for itself, as opposed to bunches of numbers such as rms wavefront errors at different field positions.

Still, image simulation is one of the tools usually offered by optical design software and it is an important asset to have in our program. I would therefore like to make some additional comments about the implementation of the algorithm.

In my previous description of the way to generate the image, I said that the input image was sampled pixel by pixel and rays traced to the output plane. It is actually more efficient to start from the output plane and trace rays back to the input plane. In the first procedure we end up with 1 read operation (the input pixel) and “N” write operations (the output pixels), while in the second procedure we have “N” read operations (the input pixels) and only 1 write operation (the output pixel). Writing one pixel at a time removes write contention and so enables increased parallelism, which results in a non-negligible speed-up considering the time required to simulate an image.
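A sketch of this reverse (gather) variant, mirroring the earlier scatter version and again using hypothetical helper names:

```python
def gather_pixel(input_image, ox, oy, pupil_points, reverse_trace_ray):
    """Compute one output pixel by tracing rays backwards to the input.

    reverse_trace_ray(ox, oy, p) is assumed to trace a ray from the
    output pixel through pupil point p back to the input plane and
    return the input pixel (ix, iy) it maps to, or None if vignetted.
    """
    total = 0.0
    for p in pupil_points:            # “N” read operations...
        src = reverse_trace_ray(ox, oy, p)
        if src is not None:
            ix, iy = src
            total += input_image[iy, ix]
    return total / len(pupil_points)  # ...and a single write by the caller
```

Since each output pixel is now written exactly once, all pixels can be rendered in parallel without atomic additions, which is what makes this variant attractive on a GPU.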

Also, I only briefly described how to deal with polychromatic systems. Ideally, we would need to know the exact wavelength distribution of the input image and trace many different wavelengths before integrating them using the detector response (eye or camera). [∞] Wikipedia has some pages about spectral distributions where you can see example graphs. The problem when starting from a computer image is that we only have RGB channels and no way to know what the initial wavelength distribution of the image was. In the example of Figure 2, I simply used 450 nm for the blue channel, 550 nm for the green channel and 650 nm for the red channel, which is a very loose approximation. A better way, although untested, might be to sample wavelengths uniformly from the spectral distribution function of the camera and weight the results as we recombine the final RGB image.
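In code, the loose three-wavelength approximation amounts to running the monochromatic simulation once per channel; a sketch, where `simulate_channel` is a hypothetical wrapper around the whole tracing process at a given wavelength:

```python
import numpy as np

# Loose approximation used for Figure 2: one wavelength per channel.
CHANNEL_WAVELENGTHS_NM = (650.0, 550.0, 450.0)  # red, green, blue

def simulate_rgb(input_rgb, simulate_channel):
    """Run the grayscale simulation once per channel and stack the
    results back into an RGB image. The refractive indices, and hence
    the traced rays, depend on the wavelength passed in."""
    planes = [simulate_channel(input_rgb[..., c], wl)
              for c, wl in enumerate(CHANNEL_WAVELENGTHS_NM)]
    return np.dstack(planes)
```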

That is all for today! I hope you enjoyed learning about image simulation processes. I will come back shortly with how geometrical aberrations are quantified in the traditional optical design process and then move on to diffraction and third-order aberration theory. Stay tuned for the next parts!

I would like to give a big thanks to James, Daniel, Naif, Lilith, Cam, Samuel, Themulticaster, Sivaraman and Arif who have supported this post through [∞] Patreon. I also take the occasion to invite you to donate through Patreon, even as little as $1. I cannot stress it enough: you can really help me post more content and make more experiments!


You may also like:

[»] #DevOptical Part 3: Aperture STOP and Pupils

[»] #DevOptical Part 11: The Diffractive PSFs

[»] #DevOptical Part 14: Third-Order Aberration Theory

[»] #DevOptical Part 10: RMS Spot Size

[»] #DevOptical Part 19: A Quick Focus Algorithm