Published: 2021-09-11 | Categories: [»] Tutorials and [»] Optics.
Introduction

Now that we have set the basis for [»] transforming points and directions in 3D space, we can tackle the exquisite task of optical raytracing.

Raytracing is at the core of every optical design software and makes it possible to compute all the important quantities related to image quality, including (but not limited to) spot and fan diagrams, wavefront error analysis, MTF, through-focus MTF, PSF, image simulation etc. I will develop all these terms in later articles; for now, we will focus on how to perform real raytracing. I’m using the word “real” here in opposition to [»] paraxial raytracing that we already covered in some depth in the previous posts.

We will assume that we have a set of surfaces or bodies (e.g. spheres, cylinders, planes…) already positioned in 3D space with their [»] local space coordinate systems. One surface, typically a plane (but not restricted to one), will emit rays that transfer from surface to surface and undergo effects such as refraction, reflection, scattering, diffraction etc. Data are then collected on image surfaces, also typically planes (but not restricted to), to assess the image formation properties listed previously.

Sequential vs. Non-Sequential Raytracing

Optical design software makes one more distinction between two types of raytracing: sequential and non-sequential raytracing.

In non-sequential raytracing, rays are launched into 3D space just as they would propagate in the real world. There is no preferred direction and rays will interact with any surface they intersect. After intersecting a surface, the main ray will usually refract, but child rays are created to account for reflection on the surface or for its scattering properties. The process stops when the rays no longer intercept any surface, which would never happen if you placed them inside a closed box, as they would scatter indefinitely from one face of the box to the other. To avoid this, rays are associated with an energy/intensity level and are discarded when this level drops below some threshold value.

Non-sequential raytracing is particularly useful to study stray light and ghosting effects in optical systems but it tends to generate a lot of computations. It also makes the standard tools for image analysis, such as wavefront error, difficult to apply. For this reason, optical design software introduces the concept of sequential raytracing. Most optical design jobs are actually done with this latter type of raytracing and the non-sequential mode is only used to assess robustness to ghosting and stray light. Many designers don’t bother with non-sequential raytracing at all and apply it only for extremely demanding applications such as the deployment of optics in space, where the budget allows the extra hours required for these subsequent analyses.

In sequential raytracing, bodies are split into surface contributions which are added to an ordered list. The list always starts with the object surface and ends with the image surface. Rays traverse the list from left to right and never go backward in the list. This means that if you launch N rays at the object surface, you will end up with a maximum of N rays at the image surface. I say “a maximum of” because some rays may be clipped (vignetted) when they intercept outside of the clear aperture of a surface or undergo total internal reflection.

It is common to represent a bunch of rays as a table where each row represents a ray and each column represents the state of the ray at the i-th surface of the system. The state of the ray usually consists of the ray position and direction (before refraction or reflection) at the surface interception in either world or local space coordinates, a flag to tell if the ray is vignetted or not, and possibly other properties such as the normal of the surface at the interception position.
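As a minimal sketch (in Python with numpy; the field names are my own choice, not those of any particular software), such a ray state could look like:

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class RayState:
    """State of a ray at a given surface of the sequential list."""
    position: np.ndarray                  # interception point (x, y, z)
    direction: np.ndarray                 # direction vector before refraction/reflection
    vignetted: bool = False               # True once the ray has been clipped
    normal: Optional[np.ndarray] = None   # surface normal at the interception

# A raytrace result is then a table: one row per ray,
# one RayState per surface of the sequential list.
```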

Note that I said rays do not go backward in the list in sequential raytracing, but I did not say that they cannot go backward in 3D space. You have to understand sequential raytracing as a system directing the rays from one surface to the next, forgetting about all other surfaces in the system. Most of the time, in refractive systems, rays will also travel through space from left to right. It is only when using reflective surfaces, like in Cassegrain telescopes, that you will see rays bouncing back and forth, but don’t be fooled: it is still sequential raytracing!

Ray/Surface Interception

One of the major jobs of raytracing software is to compute the interception position of a ray with a surface. The surfaces can be of any type but are usually smooth and described by analytical formulas, although this is not an explicit requirement. 99% of the job also restricts to one surface type, the standard conic section:

$$z = \frac{c\,r^2}{1+\sqrt{1-(1+k)\,c^2 r^2}}$$

where z is the sag of the surface, r the radial coordinate, c the curvature of the surface and k the conic constant. Note that this equation is expressed in the local space coordinates of the body where z is the forward (optical) axis.

When the curvature is zero, the surface decays into a plane. When the conic constant is zero, the surface decays into a sphere.

I personally like to make the distinction between the three cases: plane, sphere and standard conic section. This allows for more flexibility when introducing the concept of Constructive Solid Geometry (CSG). It is also easier to introduce the concepts of raytracing with planes and spheres first.

To find the interception between a ray and a surface, we first need to express the surface as a function

$$F(x, y, z) = 0$$

where x, y and z describe all the points on the surface.

We also know that the rays have a starting position P and a direction v [»] defining a line of equation

$$\vec{P}' = \vec{P} + \xi\,\vec{v}$$

or, when splitting into the components,

$$x = x_0 + \xi v_x,\quad y = y_0 + \xi v_y,\quad z = z_0 + \xi v_z$$

We usually restrict to

$$\xi \ge 0$$

since photons do not travel backward from their direction vector, but this is not a strict rule.

The interception position corresponds to when the ray components match the surface equation:

$$F\left(x_0 + \xi v_x,\; y_0 + \xi v_y,\; z_0 + \xi v_z\right) = 0$$

Since $(x_0, y_0, z_0)$ and $(v_x, v_y, v_z)$ are given, we need to solve for ξ. Once we have obtained ξ, we can compute the point coordinates from the line equation.

As explained in the [»] previous post, it is usually easier to solve the problem in the local space coordinates of the surface body, and the first step is to project the ray from world space to the surface local space. Once the interception position has been found in the local space coordinate system, it can be projected back to world space after having applied the surface effect, if any (e.g. refraction, reflection etc.). From the new ray coordinates, we then find the interception with the next surface, and so on.

Let’s now analyse how the equation is solved for the different bodies we have met (plane, sphere and standard conic section).

Interception with a plane

When using c=0 in the conic section equation we get

$$z = 0$$

which is the equation of a plane perpendicular to the z axis and located at the origin of the coordinate system.

The resolution is immediate because we have

$$z_0 + \xi v_z = 0$$

and so

$$\xi = -\frac{z_0}{v_z}$$

leading to

$$x = x_0 - z_0\frac{v_x}{v_z},\quad y = y_0 - z_0\frac{v_y}{v_z}$$

The interception coordinate is therefore

$$\left(x_0 - z_0\frac{v_x}{v_z},\; y_0 - z_0\frac{v_y}{v_z},\; 0\right)$$

The normal of the plane is always

$$\vec{n} = (0, 0, -1)$$

although we could have used the opposite vector. It is important to keep the same convention for the orientation of the normal as it will play an important role when computing the refraction of rays.

We see that the solution exists only when

$$v_z \ne 0$$

which would fail for a ray travelling parallel to the surface.

It is extremely important to check for interception conditions because they will lead to setting the vignetting flag of the ray. Rays that do not intercept a surface, or intercept it outside of the clear aperture area, shall not be traced to the next surface.
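Here is a hedged sketch of what the plane interception could look like in code, following the equations above (Python/numpy; the function name and return convention are my own):

```python
import numpy as np

def intercept_plane(P, v, eps=1e-12):
    """Intercept a ray (origin P, direction v) with the plane z = 0 in the
    local space of the surface.

    Returns (point, normal) or None so the caller can set the vignetting flag.
    """
    if abs(v[2]) < eps:
        return None                        # ray travels parallel to the plane
    xi = -P[2] / v[2]
    if xi < 0.0:
        return None                        # interception behind the ray origin
    point = P + xi * v
    normal = np.array([0.0, 0.0, -1.0])    # fixed orientation convention
    return point, normal
```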

Interception with a canted plane

It is possible to generalize the previous equations to a canted plane defined by the equation

$$\vec{n}\cdot\vec{p} + d = 0$$

or

$$n_x x + n_y y + n_z z + d = 0$$

with the surface normal

$$\vec{n} = (n_x, n_y, n_z)$$

and d the offset to the origin.

Solving the system implies

$$\vec{n}\cdot\left(\vec{P} + \xi\,\vec{v}\right) + d = 0$$

with

$$\xi = -\frac{\vec{n}\cdot\vec{P} + d}{\vec{n}\cdot\vec{v}}$$

The condition for the interception to occur is still the same, i.e. that the ray does not travel parallel to the plane (perpendicular to the normal):

$$\vec{n}\cdot\vec{v} \ne 0$$

You can check that setting n=(0,0,-1) and d=0 will give you the equation of the previous section.
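The canted plane case translates almost directly into code; a minimal sketch under the same conventions as before:

```python
import numpy as np

def intercept_canted_plane(P, v, n, d, eps=1e-12):
    """Intercept a ray (origin P, direction v) with the plane n.p + d = 0."""
    denom = np.dot(n, v)
    if abs(denom) < eps:
        return None                        # ray travels parallel to the plane
    xi = -(np.dot(n, P) + d) / denom
    if xi < 0.0:
        return None                        # interception behind the ray origin
    return P + xi * v, n                   # the normal is constant over the plane
```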

Interception with a sphere

The common expression of a sphere centered at coordinates (0,0,0) is

$$x^2 + y^2 + z^2 = R^2$$

with R the radius of the sphere.

The interception problem now becomes

$$(x_0+\xi v_x)^2 + (y_0+\xi v_y)^2 + (z_0+\xi v_z)^2 = R^2$$

It is however easier to consider a sphere centered at the coordinates (0,0,R) such that the point (0,0,0) is a solution to the problem:

$$x^2 + y^2 + (z-R)^2 = R^2$$

Then, using

$$x = x_0 + \xi v_x,\quad y = y_0 + \xi v_y,\quad z = z_0 + \xi v_z$$

we can develop the equation to

$$(x_0+\xi v_x)^2 + (y_0+\xi v_y)^2 + (z_0 - R + \xi v_z)^2 = R^2$$

We can re-arrange this equation into

$$A\xi^2 + B\xi + C = 0$$

with

$$A = v_x^2+v_y^2+v_z^2,\quad B = 2\left(x_0 v_x + y_0 v_y + (z_0-R)\,v_z\right),\quad C = x_0^2+y_0^2+(z_0-R)^2 - R^2$$

that we can also shorten using vector notation

$$A = \vec{v}\cdot\vec{v},\quad B = 2\,\vec{p}\cdot\vec{v},\quad C = \vec{p}\cdot\vec{p} - R^2$$

with p = (x₀, y₀, z₀−R).

This system can be solved only if

$$D = B^2 - 4AC \ge 0$$

and the solutions are

$$\xi = \frac{-B \pm \sqrt{D}}{2A}$$

If D=0, we have only one solution, which is −B/2A.

The case A=0 shall be tested too but does not correspond to a healthy ray because it only occurs when the ray has a null direction vector.

When doing CSG it is important to keep both interception coordinates, but with raytracing we are only interested in keeping one of the two points. When the surface is convex (R>0) we want to keep the first point of interception (smaller ξ) but when the surface is concave (R<0) we want to keep the second point of interception (larger ξ):

$$\xi = \begin{cases} \min(\xi_1, \xi_2) & \text{if } R > 0\\ \max(\xi_1, \xi_2) & \text{if } R < 0 \end{cases}$$

Finally, the surface normal is obtained from the vector going from the sphere center to the interception coordinates, divided by the radius:

$$\vec{n} = \frac{(x,\; y,\; z-R)}{R}$$

To avoid errors, it is important to use the sphere equations only when R ≠ 0.
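Putting the sphere equations together, a minimal sketch could be:

```python
import numpy as np

def intercept_sphere(P, v, R, eps=1e-12):
    """Intercept a ray (origin P, direction v) with a sphere of radius R
    centered at (0, 0, R) in local space.

    Returns (point, normal) or None when no interception exists.
    Assumes R != 0; use the plane routine when the curvature is zero.
    """
    center = np.array([0.0, 0.0, R])
    p = P - center                         # ray origin relative to the sphere center
    A = np.dot(v, v)
    B = 2.0 * np.dot(p, v)
    C = np.dot(p, p) - R * R
    if A < eps:
        return None                        # degenerate ray with a null direction
    D = B * B - 4.0 * A * C
    if D < 0.0:
        return None                        # the ray misses the sphere
    sq = np.sqrt(D)
    xi1, xi2 = (-B - sq) / (2.0 * A), (-B + sq) / (2.0 * A)
    xi = min(xi1, xi2) if R > 0 else max(xi1, xi2)  # convex: first hit; concave: second
    point = P + xi * v
    normal = (point - center) / R          # normal from the sphere center, unit length
    return point, normal
```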

Interception with the standard conic section surface

Optical design software does not bother dividing into the different cases (sphere, parabola, plane, ellipse…) and uses the standard conic section of equation

$$z = \frac{c\,r^2}{1+\sqrt{1-(1+k)\,c^2 r^2}}$$

with

$$r^2 = x^2 + y^2$$

Solving this equation for interception with a ray requires a bit of re-arrangement though:

$$z\left(1+\sqrt{1-(1+k)c^2r^2}\right) = c\,r^2$$

and so

$$c\,r^2 - 2z + c(1+k)z^2 = 0$$

to which we can inject the ray equations

$$x = x_0 + \xi v_x,\quad y = y_0 + \xi v_y,\quad z = z_0 + \xi v_z$$

leading to the second order equation

$$c\left((x_0+\xi v_x)^2 + (y_0+\xi v_y)^2\right) - 2(z_0+\xi v_z) + c(1+k)(z_0+\xi v_z)^2 = 0$$

that we can rearrange into

$$A\xi^2 + B\xi + C = 0$$

with

$$A = c\left(v_x^2+v_y^2\right) + c(1+k)v_z^2$$
$$B = 2c\left(x_0 v_x + y_0 v_y\right) + 2c(1+k)z_0 v_z - 2v_z$$
$$C = c\left(x_0^2+y_0^2\right) + c(1+k)z_0^2 - 2z_0$$

For which a solution only exists if

$$D = B^2 - 4AC \ge 0$$

with the solution being

$$\xi = \frac{-B \pm \sqrt{D}}{2A}$$

And, as for the sphere, we keep the smaller root when the surface is convex (c>0) and the larger root when it is concave (c<0). Except that this time the case c=0 is a valid solution (plane): A then vanishes and the equation becomes linear, Bξ + C = 0, which gives back the plane solution ξ = −z₀/v_z.

The normal of the surface is obtained using the gradient of the sag at the interception position

$$\vec{n} = \left(\frac{\partial z}{\partial x},\; \frac{\partial z}{\partial y},\; -1\right)$$

that we will often want to normalize for the further operations.

Since the system is symmetric, the expressions for dz/dx and dz/dy will be similar and I will only derive the dz/dx case. If you are not familiar with derivation operations, here is a reminder of the rules required for this post:

$$\frac{d}{dx}f\!\left(g(x)\right) = f'\!\left(g(x)\right)\,g'(x),\qquad \frac{d}{dx}\sqrt{u} = \frac{u'}{2\sqrt{u}},\qquad \left(\frac{u}{v}\right)' = \frac{u'v - u\,v'}{v^2}$$

The first thing is to express the derivative in relation to the formula using the radial coordinates:

$$\frac{\partial z}{\partial x} = \frac{dz}{dr}\,\frac{\partial r}{\partial x}$$

with

$$\frac{\partial r}{\partial x} = \frac{x}{r}$$

Concerning dz/dr we have

$$\frac{dz}{dr} = \frac{d}{dr}\left(\frac{c\,r^2}{1+\sqrt{1-(1+k)c^2r^2}}\right)$$

Applying the rules to the standard conic section surface we get

$$\frac{dz}{dr} = \frac{2cr\left(1+\sqrt{1-(1+k)c^2r^2}\right) + c\,r^2\,\dfrac{(1+k)c^2r}{\sqrt{1-(1+k)c^2r^2}}}{\left(1+\sqrt{1-(1+k)c^2r^2}\right)^2}$$

Posing

$$s = \sqrt{1-(1+k)c^2r^2}$$

we can simplify the equations to (using $(1+k)c^2r^2 = 1 - s^2$)

$$\frac{dz}{dr} = \frac{c\,r\left(2s(1+s) + (1+k)c^2r^2\right)}{s\,(1+s)^2} = \frac{c\,r\,(1+s)^2}{s\,(1+s)^2}$$

and

$$\frac{dz}{dr} = \frac{c\,r}{s}$$

Then,

$$\frac{\partial z}{\partial x} = \frac{c\,x}{\sqrt{1-(1+k)c^2r^2}}$$

for which a solution only exists if

$$1-(1+k)c^2r^2 > 0$$

meaning that

$$r^2 < \frac{1}{(1+k)c^2}$$

One of the nice features of these equations is that they involve only r², which limits the number of square roots, which are computationally expensive.
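A sketch of the standard conic section interception, assembling the quadratic coefficients and the gradient normal derived above (again Python/numpy, with my own naming):

```python
import numpy as np

def intercept_conic(P, v, c, k, eps=1e-12):
    """Intercept a ray with the standard conic section surface in local space,
    using the rearranged implicit form c*r^2 - 2z + c*(1+k)*z^2 = 0.

    Returns (point, unit_normal) or None when no interception exists.
    """
    x0, y0, z0 = P
    vx, vy, vz = v
    A = c * (vx * vx + vy * vy) + c * (1 + k) * vz * vz
    B = 2 * c * (x0 * vx + y0 * vy) + 2 * c * (1 + k) * z0 * vz - 2 * vz
    C = c * (x0 * x0 + y0 * y0) + c * (1 + k) * z0 * z0 - 2 * z0
    if abs(A) < eps:                       # c = 0 decays into the plane case
        if abs(B) < eps:
            return None
        xi = -C / B
    else:
        D = B * B - 4 * A * C
        if D < 0:
            return None                    # the ray misses the surface
        sq = np.sqrt(D)
        xi1, xi2 = (-B - sq) / (2 * A), (-B + sq) / (2 * A)
        xi = min(xi1, xi2) if c > 0 else max(xi1, xi2)
    point = P + xi * v
    r2 = point[0] ** 2 + point[1] ** 2
    s2 = 1 - (1 + k) * c * c * r2
    if s2 <= 0:
        return None                        # outside the domain of the gradient
    s = np.sqrt(s2)
    normal = np.array([c * point[0] / s, c * point[1] / s, -1.0])
    return point, normal / np.linalg.norm(normal)
```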

Surface with no analytical solutions

Not all surfaces have an analytical solution to raytracing. For instance, aspheric lenses are described by an equation of the form:

$$z = \frac{c\,r^2}{1+\sqrt{1-(1+k)\,c^2 r^2}} + \sum_{i=1}^{\infty} a_i\, r^{2i}$$

where the a_i are the aspheric coefficients.

The last equation is known as an even asphere in optical design software because it only has even powers of r. In practice, the sum is not infinite and manufacturers will generally consider power terms of order 10 or below. The a₁ term (r²) is usually skipped during the optimization process because it interferes with the curvature term of the lens, but manufacturers can accept a non-null a₁ term if you ask them. There are other lens types with equations having no analytical solution to raytracing such as the odd asphere, the Zernike lens or the Chebyshev lenses.

In such cases, there is no other solution but to use an iterative search method to solve the interception problem. Such methods are however slow and are not guaranteed to converge to the solution. They may also fail during convergence even if a solution exists.

In most cases, these lenses are deviations of the standard conic surface with a small departure represented by some polynomial. The interception with the standard conic section is therefore a good starting approximation for the rest of the process. This is not always true, however, and some lenses can have large deviations from the standard conic section. Toroidal lenses are a good illustration of this because they have different curvatures along their x and y axes and can therefore have a large sag departure along their two principal axes.

The general idea is to solve the surface equation

$$g(\xi) = F\left(x_0+\xi v_x,\; y_0+\xi v_y,\; z_0+\xi v_z\right) = 0$$

for the parameter ξ.

Among the methods that can be used are dichotomy search, simplex and Newton-Raphson-based methods (non-exhaustive list!). I have personally used the latter successfully and will quickly describe how to use it.

Starting from an approximation of the interception coordinate ξ₀, the following algorithm is applied iteratively until it converges to the solution:

$$\xi_{i+1} = \xi_i - k\,\frac{g(\xi_i)}{g'(\xi_i)}$$

where 0<k≤1 is a damping constant to avoid oscillating around the solution.

The derivative can be either computed from the surface equation or from an estimate. There are many methods to compute an estimate of the derivative, including using the finite difference between g(ξᵢ) and g(ξᵢ₋₁), or fitting a polynomial to the last N points g(ξᵢ) and differentiating the polynomial analytically. I only have experience with the direct analytical derivative of the function. I recommend using k=0.1, which has given me satisfactory results in the past for even aspheric lenses when using the standard conic section as the approximation of the interception coordinates.
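A minimal sketch of the damped Newton-Raphson iteration described above (the convergence threshold and iteration cap are my own illustrative choices):

```python
def intercept_iterative(g, dg, xi0, k=0.1, tol=1e-12, max_iter=1000):
    """Damped Newton-Raphson search for a root of g(xi) = 0.

    g is the surface equation evaluated along the ray, dg its analytical
    derivative, and xi0 a starting approximation (e.g. the standard conic
    interception). k is the damping constant in (0, 1].
    """
    xi = xi0
    for _ in range(max_iter):
        gi = g(xi)
        if abs(gi) < tol:
            return xi                      # converged on the surface
        di = dg(xi)
        if di == 0.0:
            return None                    # flat derivative: cannot step further
        xi -= k * gi / di                  # damped Newton-Raphson update
    return None                            # failed to converge within max_iter
```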

Once the interception position is found, it is still required to find the surface normal. For this, we use the same equation that we used for the standard conic section:

$$\vec{n} = \left(\frac{\partial z}{\partial x},\; \frac{\partial z}{\partial y},\; -1\right)$$

which requires computing the derivative of the sag equation in both x and y.

It would be nice to skip such lengthy computations, but aspheric lenses are now part of traditional catalogs as well, so it is important to support them.

Apertures

Once the interception position with the surface has been found (if it exists), it is important to check if it lies in the clear aperture of the lens.

To understand why apertures are necessary in the computation, imagine a plane surface. We know that it is possible to find the interception with that surface as long as the ray direction is not perpendicular to the surface normal. But it is still possible to intercept the surface very far away for rays that are almost parallel to the surface! Since it is not possible (nor desirable) to make optics the size of a Jupiter moon, we have to restrict the valid interception points to a localized region of space.

Most apertures are circular but you can use square apertures as well, or even star-shaped apertures. You can even make an aperture that has a blocking portion in the center if you like! Also, the aperture does not necessarily need to be centred on the optical axis of the lens.

The first step is therefore to [»] transform the interception position into the aperture local space coordinate system. Most of the time this will be the identity transform, though.

Once in the local space of the aperture, you can use an analytical expression to determine if the ray is within the aperture. For instance, here are a few apertures types:

Circular aperture

$$x^2 + y^2 \le R^2$$

Circular obscuration

$$x^2 + y^2 \ge R^2$$

Rectangular aperture

$$|x| \le \frac{w}{2},\quad |y| \le \frac{h}{2}$$
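In code, these tests are one-liners; a minimal sketch (the width/height parametrization of the rectangle is my own choice):

```python
def in_circular_aperture(x, y, radius):
    """True when the local-space interception (x, y) passes the aperture."""
    return x * x + y * y <= radius * radius

def in_circular_obscuration(x, y, radius):
    """True when the interception falls outside the central blocking disk."""
    return x * x + y * y >= radius * radius

def in_rectangular_aperture(x, y, width, height):
    """True when the interception falls inside a width-by-height rectangle."""
    return abs(x) <= 0.5 * width and abs(y) <= 0.5 * height
```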

Apertures can be combined, just like we will do with Constructive Solid Geometry, to achieve more complex effects, but this goes beyond the scope of this post. If a ray is clipped by an aperture, it shall be marked as such in its vignetting flag and no longer raytraced to the other surfaces.

One handy use of apertures, when programming, is that you can create a special aperture type that never blocks rays but records the ray footprint, such that you can obtain the effective lens diameter required in the usage conditions.

Surface Effects

Once the ray has intercepted the surface and has passed the aperture check, it may experience some of the following effects: refraction across the surface, reflection on the surface, diffraction (as with gratings), scattering etc.

In non-sequential raytracing, multiple effects can be combined. For instance, a ray intercepting a glass surface will mostly refract but a part of it will also be reflected according to the Fresnel equations. A ray intercepting a grating will be split into the many diffraction orders and scattering will throw rays all over the place.

In sequential raytracing however, only one effect is considered. We will consider only complete refraction, reflection, a single diffraction order etc. Note that in the case of refraction, a Total Internal Reflection (TIR) condition is considered a failure in sequential raytracing while it is converted to a pure reflection in non-sequential raytracing.

Here, I will consider only reflection and refraction. I will discuss the other types of effects in later posts when I have had some time to experiment with them first (in particular scattering).

Refraction

To find the refraction law, we will first split the ray directions onto the orthonormal basis formed by n and x, n being the normal vector of the surface, considered as facing the ray, and x a unit vector tangent to the surface in the plane of incidence. I will also consider that the incoming direction vector has been normalized (length is 1).

The situation is displayed in Figure 1.

Figure 1 – Refraction of a ray

Under these conditions we have

$$\vec{v}_1 = -\cos\alpha_1\,\vec{n} + \sin\alpha_1\,\vec{x}$$

and

$$\vec{v}_2 = -\cos\alpha_2\,\vec{n} + \sin\alpha_2\,\vec{x}$$

where α₁ and α₂ are the angles of the ray with the normal before and after refraction.

We find

$$\sin\alpha_1\,\vec{x} = \vec{v}_1 + \cos\alpha_1\,\vec{n}$$

and therefore

$$\vec{v}_2 = \frac{\sin\alpha_2}{\sin\alpha_1}\left(\vec{v}_1 + \cos\alpha_1\,\vec{n}\right) - \cos\alpha_2\,\vec{n}$$

We also know that a ray coming from a material of refractive index n₁ to a material of refractive index n₂ will follow the Snell-Descartes law of refraction:

$$n_1 \sin\alpha_1 = n_2 \sin\alpha_2$$

and so

$$\vec{v}_2 = \frac{n_1}{n_2}\left(\vec{v}_1 + \cos\alpha_1\,\vec{n}\right) - \cos\alpha_2\,\vec{n}$$

The cosine of the incoming angle can be obtained from the dot product of the incoming vector with the normal (I reverted the sign due to the orientation of the normal)

$$\cos\alpha_1 = -\vec{v}_1\cdot\vec{n}$$

The cosine of the outgoing angle can be obtained using a small trick:

$$\cos\alpha_2 = \sqrt{1 - \sin^2\alpha_2} = \sqrt{1-\left(\frac{n_1}{n_2}\right)^2\left(1-\cos^2\alpha_1\right)}$$

It is important to check whether

$$\left(\frac{n_1}{n_2}\right)^2\left(1-\cos^2\alpha_1\right) > 1$$

which would indicate a Total Internal Reflection (TIR) event.

One advantage of the trick used to compute the cosines of the angles is that it does not rely on expensive trigonometric function evaluations but rather on the simpler square root.

The final formula is therefore

$$\vec{v}_2 = \frac{n_1}{n_2}\,\vec{v}_1 + \left(\frac{n_1}{n_2}\cos\alpha_1 - \cos\alpha_2\right)\vec{n}$$

If TIR occurs, we can either drop the ray or apply the reflection formula.
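Putting the refraction equations together, including the TIR check, a minimal sketch could be:

```python
import numpy as np

def refract(v, n, n1, n2):
    """Refract the unit direction v across a surface with unit normal n
    (oriented to face the incoming ray), from index n1 into index n2.

    Returns the refracted unit vector, or None on total internal reflection.
    """
    eta = n1 / n2
    cos1 = -np.dot(v, n)                   # sign reverted: the normal faces the ray
    sin2_sq = eta * eta * (1.0 - cos1 * cos1)
    if sin2_sq > 1.0:
        return None                        # total internal reflection (TIR)
    cos2 = np.sqrt(1.0 - sin2_sq)          # cheap square root, no trigonometry
    return eta * v + (eta * cos1 - cos2) * n
```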

Reflection

Reflection typically occurs on metallic surfaces such as silver or aluminium mirrors but will also happen on dielectric coatings or during TIR events. Reflection of a ray is illustrated in Figure 2.

Figure 2 – Reflection of a ray

Decomposing the ray as before, we get

$$\vec{v}_1 = -\cos\alpha\,\vec{n} + \sin\alpha\,\vec{x}$$

and

$$\vec{v}_2 = \cos\alpha\,\vec{n} + \sin\alpha\,\vec{x}$$

Therefore,

$$\vec{v}_2 - \vec{v}_1 = 2\cos\alpha\,\vec{n}$$

and

$$\vec{v}_2 = \vec{v}_1 + 2\cos\alpha\,\vec{n}$$

with

$$\cos\alpha = -\vec{v}_1\cdot\vec{n}$$

we obtain the reflection formula

$$\vec{v}_2 = \vec{v}_1 - 2\left(\vec{v}_1\cdot\vec{n}\right)\vec{n}$$
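In code, the reflection formula is a one-liner; a minimal sketch:

```python
import numpy as np

def reflect(v, n):
    """Reflect the direction v on a surface with unit normal n."""
    return v - 2.0 * np.dot(v, n) * n
```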

Ray Aiming and Normalized Pupil Coordinates

There is one more important note that I need to make about how ZEMAX OpticStudio works when launching rays. I am not familiar with the other optical design packages (namely CodeV and OSLO) but I guess they have similar modes of operation.

OpticStudio refers to rays in normalized coordinates relative to the STOP and the FOV. So, a ray starting at the edge of the X FOV would have the coordinates (+1,0) or (−1,0), whatever the actual FOV is. The software then converts this normalized coordinate into a physical one to launch the actual ray. The same thing occurs with the direction. Instead of specifying the direction vector itself, we give the intersection coordinate with the STOP surface, as a normalized coordinate as well. Once we know where the ray starts and where it passes through the STOP surface, we have entirely characterized its path.

Now comes the question of how to compute the starting direction of the ray to launch it. Computing the position was simple but computing the direction from the STOP coordinates is trickier. In OpticStudio this is referred to as Ray Aiming and the software allows three modes of operation: none (off), paraxial ray and real ray. I will first describe what these modes are before I explain how I do it in the #DevOptical software.

When ray aiming is set to “none” in OpticStudio, the software uses the paraxial entrance pupil of the system. The position and diameter are computed like we did in our [»] previous post (you could use the full raytracing engine too, although it makes little sense given the extra computations). The position is obtained by multiplying the normalized coordinate by the paraxial radius of the entrance pupil. From the position of the ray in the entrance pupil and the starting position, we can compute the initial direction vector. This is relatively fast but assumes there is a one-to-one relationship between the actual STOP position and the paraxial entrance pupil. This is obviously only true very close to the optical axis, where the paraxial condition is respected, and this mode of operation quickly generates incorrect results as the size of the STOP increases.

Then there are the paraxial and real ray aiming modes. It is interesting to note that these modes produce the exact same results if you configure OpticStudio to have the actual STOP aperture limiting the rays in the system (more on this below). For OpticStudio users, this is the “floating aperture” mode in the aperture settings. When ray aiming is active, the system will correct the starting position until the ray physically passes at the given normalized coordinates of the STOP surface. This requires an iterative search for the correct position. In practice, we compute the actual STOP intersection position and multiply the starting radial position by the ratio between the desired STOP radial position and the observed radial position at iteration #i. The process is repeated until the ray passes through the STOP at the desired coordinates within some threshold (e.g. 10⁻¹² m or less).
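A minimal sketch of this iterative correction (the trace_to_stop callback is a hypothetical placeholder for the full raytrace through the system):

```python
def aim_ray(trace_to_stop, r_start, r_target, tol=1e-12, max_iter=100):
    """Iteratively scale the starting radial position until the real ray
    crosses the STOP at the desired radial coordinate.

    trace_to_stop traces the ray through the system and returns the observed
    radial position on the STOP surface for a given starting radial position.
    """
    r = float(r_start)
    for _ in range(max_iter):
        observed = trace_to_stop(r)
        if abs(observed - r_target) < tol:
            return r                       # the ray passes the STOP as requested
        if observed == 0.0:
            return None                    # cannot form the correction ratio
        r *= r_target / observed           # scale by the desired/observed ratio
    return r                               # best effort after max_iter iterations
```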

That was the “floating aperture” mode. OpticStudio also offers another aperture mode where you specify the entrance pupil diameter, from which the software infers the size of the STOP. When ray aiming is set to “paraxial”, a paraxial raytrace is used to compute the STOP size from the entrance pupil diameter. When ray aiming is set to “real”, real rays are used to compute the STOP size. What is fishy with this is that when you go to the “system reports” analysis tab and look at the entrance pupil diameter, you will always see the paraxial entrance pupil, which may then differ from the one you selected in the menu!

To avoid this kind of trouble, I always work in the “floating aperture” mode and specify the size of the STOP myself. I then use ray aiming to compute the correct direction of the ray. It requires a bit more computation and will not work with systems having central obscurations, but this is the price to pay to avoid the mess of multiple modes of operation, each giving a slightly different answer. I also believe that, in the spirit of this #DevOptical series, having a single option that covers 99% of the use-cases is better. The 1% left can then be dealt with using more specialized software like OpticStudio or CodeV.

Ray aiming will have important consequences when we start discussing wavefront error maps, PSF and MTF. But I’m leaving this for later.

Final Words

This was a pretty long post and I’m really glad to be able to release it. If you have followed the website for a while, you will remember that I already developed raytracing code in 2017 (4 years ago!) when discussing [»] a custom 5x microscopy objective.

Everything that is here has been thoroughly tested and validated against commercial optical design software. It is, to my knowledge, the most complete tutorial on the topic you will find online for free. I struggled a lot implementing all these algorithms and I hope this will help other people as well.

The content of this post will also be necessary to develop extremely important image quality quantifiers including spot and fan diagrams, the modulation transfer function, the impulse response etc. It will also be the occasion to discuss wavefronts, diffraction effects and third-order aberration theory. It will also serve as the basis for Monte-Carlo tolerance analysis much later.

The next post will be about spot diagrams, which are, to me, the most straightforward way to introduce people to optical aberrations once they know the concepts of raytracing. I will then move to wavefront and diffraction theory before going to the concepts of Seidel aberrations.

I’m currently working on more advanced concepts for the series that I will reveal later. I already have some nice stuff working for Seidel aberrations and paraxial tolerance analysis. This summer I also started working on a video tutorial on how to assemble the OpenRAMAN spectrometer but it’s taking me more time than expected. I bought a BlackMagic Pocket 6K camera for the occasion but I took a lens that’s a bit too narrow for the place I’m recording the videos (I took a 35 mm lens but a 24 mm would have been better suited… too bad!).

I would like to give a big thanks to James, Daniel, Naif, Lilith, Cam, Samuel, Themulticaster, Sivaraman and Arif who have supported this post through [∞] Patreon. I also take the occasion to invite you to donate through Patreon, even as little as $1. I cannot stress it enough: you can really help me to post more content and make more experiments!