Heterodyning is a powerful technique that lets you study signals with frequencies well beyond the reach of your data logger. For example, it lets you study radio-frequency signals of a few hundred MHz with a humble [»] 5 kHz oscilloscope. Heterodyne techniques also enable you to play with frequencies by summing and subtracting two sources, or by doubling an input frequency.
The overall concept is actually very straightforward and is best explained with a sine input signal of frequency fsig. To fix ideas once and for all, let’s say that fsig is 11 kHz. Because our [»] 5 kHz oscilloscope cannot analyze signals with frequencies above 2.5 kHz (from the Shannon sampling theorem), the signal is well beyond our reach. Right? Well… not with heterodyning!
In heterodyne systems, we will “mix” the signal of interest with a master sine source fosc using an analog product operation. The frequency fosc is chosen such that we can study anything that lies between fosc-fbw and fosc+fbw, where fbw is the bandwidth of our acquisition system (2.5 kHz in our example). I have chosen an fosc of 10 kHz for the example here.
We then have our two signals (amplitudes are assumed to be 1 V to keep the notation simple):
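Written out, with the 1 V amplitudes left implicit, they are:

```latex
v_{sig}(t) = \sin(2\pi f_{sig} t), \qquad v_{osc}(t) = \sin(2\pi f_{osc} t)
```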
Computing the product of these signals gives:
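Using the product-to-sum trigonometric identity:

```latex
v_{sig}(t)\, v_{osc}(t) = \frac{1}{2}\Big[\cos\big(2\pi (f_{sig}-f_{osc}) t\big) - \cos\big(2\pi (f_{sig}+f_{osc}) t\big)\Big]
```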
which corresponds to the sum of two new signals of frequencies fosc-fsig and fosc+fsig. If you don’t believe me, just have a look at the end of the post, where I have added proofs of these formulas ;-)
In our example, we then end up with frequencies of 1 kHz (11 kHz – 10 kHz) and 21 kHz (11 kHz + 10 kHz). If we filter out the higher frequency using a low-pass filter, we are left with the sole 1 kHz signal, which is well within the reach of our humble home-built data logger.
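The whole chain can be sketched numerically with Python/NumPy; this is my own stand-in for the analog mixer and low-pass filter, using the frequencies of the example above:

```python
import numpy as np

# Heterodyne the 11 kHz example down to 1 kHz.
fs = 1_000_000                      # simulation sample rate, fast enough for every term
t = np.arange(0, 0.1, 1 / fs)       # 0.1 s of data

f_sig, f_osc = 11_000, 10_000
mixed = np.sin(2 * np.pi * f_sig * t) * np.sin(2 * np.pi * f_osc * t)

# Idealized low-pass: zero every FFT bin above the scope bandwidth (2.5 kHz).
spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
spectrum[freqs > 2500] = 0
filtered = np.fft.irfft(spectrum)

# Only the difference term |f_sig - f_osc| = 1 kHz survives.
peak = freqs[np.argmax(np.abs(np.fft.rfft(filtered)))]
print(peak)  # -> 1000.0
```

Note that the low-pass here is an ideal brick-wall filter applied in the frequency domain; a real analog filter would only attenuate the 21 kHz term, not remove it entirely.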
And because a signal rarely consists of only one frequency component, we can do the very same for any frequency f contained in the signal. As a consequence, mixing a signal that has a spectral distribution W(f) will produce a downshifted copy (W’) and an upshifted copy (W”) of that distribution around the master fosc frequency: W’(f)=W(f+fosc) and W”(f)=W(f-fosc). By filtering out all the components above fbw (fbw << fosc), we keep only W’ and get rid of W”.
One of the biggest issues with this approach is that you cannot discriminate positive frequencies from negative ones by only looking at the frequency spectrum on your computer. So when we record a downscaled frequency of 1 kHz, we cannot really tell if it is +1 kHz or -1 kHz. When considering signals in “normal” conditions (i.e. not heterodyned!), this is not much of a problem because the only difference would be in the phase, not in the magnitude; but with heterodyne systems this becomes much more troublesome. Imagine you have a signal at 11 kHz and an oscillator at 10 kHz. We have seen that it will yield a downscaled frequency of 1 kHz. Now do the same with a signal at 9 kHz. What do you get? -1 kHz! And there is the problem: when checking the frequency spectrum of the signal, there is no way to tell the 9 kHz signal from the 11 kHz one. Worse, if your system contains signals at both 9 kHz and 11 kHz, they will sum up in the Fourier spectrum!
One way to solve the problem is to use tuneable low-pass and high-pass filters with a very sharp cut-off set at the oscillator frequency. This allows you to divide the experiment into two stages: on one channel you analyze everything below the oscillator frequency, and on a second channel you analyze everything above it. You then recompose the full spectrum by summing both parts. However, neither extremely sharp nor tuneable filters are easy to build, contrary to tuneable sine sources.
To overcome this I have set up a trick. It is relatively simple but will only work in very specific cases, where the signal bandwidth is relatively small. The idea is to record two spectra using two slightly different oscillator frequencies and to correlate the resulting spectra. Imagine that we have a signal at 11 kHz and two oscillators: one at 10 kHz and the other at 10.5 kHz. From the first spectrum, we get a ∆f of 1 kHz, so we know that the signal is either at 9 kHz (fosc1-∆f) or 11 kHz (fosc1+∆f). From the second oscillator, we get a ∆f of 0.5 kHz, so we know the signal is either at 10 kHz (fosc2-∆f) or at 11 kHz (fosc2+∆f). By correlating the two spectra, we can rule out the 9 kHz and 10 kHz peaks, and we know the signal is at 11 kHz since both spectra agree on that one.
The example works fine because the signal has a single frequency component. However, as the signal gets more complex, we no longer have clear peaks but a spread function of frequency, W(f). As a consequence, it is not easy to get such clean correlations between the spectra Wosc1(f) and Wosc2(f). One way to implement the trick is to take the square root of the product of both spectra:
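In symbols (Wcorr being my notation for the correlated estimate):

```latex
W_{corr}(f) = \sqrt{W_{osc1}(f)\, W_{osc2}(f)}
```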
This works well on the condition that W(f) has a limited bandwidth located away from fosc1 and fosc2 but still within the range [fosc2-fbw, fosc1+fbw], with fbw the bandwidth of the acquisition system (fosc1<fosc2). This implies that |fosc1-fosc2|<<fbw to keep the working range as large as possible. However, as the gap between fosc1 and fosc2 gets too small, the result becomes less satisfactory. It then requires some careful tuning!
Note that once you have identified the actual location of the signal in the frequency domain, you can set the oscillator either just above or just below the signal bandwidth boundaries and record a cleaner spectrum using only one oscillator.
I will now cover some results obtained from both actual tests and simulations. I have also included a detailed mathematical description of the heterodyne process if you want to dig further into the topic!
I have done some tests and simulations using both Matlab/Simulink and a Protek 9205 frequency generator mixed with a [»] homemade oscillator using the 1 MHz analog multiplier IC AD633. I will first cover the results obtained with the frequency generator and then go through the simulations. All the data from the tests were collected using the FFT function of our humble [»] 5 kHz oscilloscope.
The frequency generator was set to a frequency around 800 Hz and yielded the spectrum of Figure 1, while the homemade oscillator generated the spectrum of Figure 2. Not so surprisingly, the Protek spectrum is much cleaner than that of our homemade sine wave generator, which has some distortion and a stronger dc offset.
When the homemade oscillator is mixed with itself using the AD633 analog multiplier, the results of Figure 3 are obtained. The product spectrum is shown in black and the original spectrum is overlaid in dashed grey. You can see a clear peak at 482 Hz, which is twice the original dominant peak, just as expected. However, if you look closely, you will see that some signal remains at 241 Hz due to the dc term in the original signal.
If we write down the equations for a signal with a dc term we get:
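With v(t) = Vdc + Vac·sin(2πft), squaring gives:

```latex
v^2(t) = \underbrace{V_{dc}^2 + \frac{V_{ac}^2}{2}}_{dc} + \underbrace{2\, V_{dc} V_{ac} \sin(2\pi f t)}_{f} - \underbrace{\frac{V_{ac}^2}{2}\cos(4\pi f t)}_{2f}
```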
and so we have three components in the signal: one at 2f (as expected) with an amplitude of half Vac squared, one at f with an amplitude of twice Vdc times Vac, and a dc component of amplitude Vdc squared plus half Vac squared. To keep the f term as low as possible, we have to cancel as much dc as possible. On the other hand, it is impossible to obtain an output with zero dc term: it is a direct consequence of the heterodyne formulas, where two components are created, one at the sum of the input frequencies and one at their difference. As the input frequencies are the same, we end up with twice the input frequency and a dc term of the same amplitude. If you don’t need the dc term, you will have to high-pass filter it out.
If we decide to mix the homemade oscillator with the Protek one, we obtain the result of Figure 4. Just as for Figure 3, I have plotted the resulting spectrum in black and overlaid it with the original spectrum of the Protek generator in dashed grey. The original 778 Hz is correctly split into the down-shifted signal at 537 Hz (778 Hz – 241 Hz) and the up-shifted signal at 1019 Hz (778 Hz + 241 Hz). You will also find a dc term, due to the product of the two inputs’ dc components, and signals at 241 Hz and 778 Hz, each resulting from the ac term of one source multiplied by the dc term of the other.
The results of Figure 4 are actually pretty good and would make a fine textbook illustration. However, as we have seen, real-world signals are far from being as simple as two pure sine waves. To go further, I decided to run a few simulations using Matlab/Simulink. You may ask why I switched to simulations despite the promising results of Figure 4? Well, the answer is easy: most of the trials I ran involved more than two sources or more than one analog multiplier unit. As I only had the Protek generator, the homemade oscillator built from an earlier experiment, and a single AD633, I decided to run a first trial in simulation before attempting anything with real circuitry. Since the results were not as promising as I would have liked, I did not implement them with real sources. However, I promise to post an update as soon as I get something nice working!
The first thing we need is to create some kind of real-world-shaped signal. To do so, I have taken a sine source of 900 Hz and multiplied it by a pulse generator set to 1 Hz with a 10% duty cycle. This produces fragments of sine waves, which has the effect of broadening the spike around 900 Hz in the frequency spectrum. It is still relatively artificial but has the advantage of spreading across a larger span of frequencies than our perfect spikes made of ideal sources.
Once we have our source working, we have to ask Matlab to compute its spectrum for us. To do so, first insert a zero-order hold block with a sampling time of 0.1 ms, just as in a real data logging system. Then insert a buffer block to tell Matlab to store the n last elements. Set the buffer size to 10000 so that we collect ten thousand samples. With this sampling rate and buffer size, we will scan a spectrum of 0–5 kHz with a resolution of 1 Hz. Finally, put an FFT block (set its size to 10000, like the buffer) followed by a magnitude block to get the power spectrum (the output of the FFT is a complex number and we are interested in its magnitude). The results can be plotted on screen using the Vector Scope block. Please refer to Figure 5 to check that your model is correctly built.
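If you don’t have Simulink at hand, the same measurement chain can be sketched in a few lines of Python/NumPy (the block names are Simulink’s, the translation is my own):

```python
import numpy as np

# NumPy equivalent of the Simulink chain: 0.1 ms zero-order hold,
# 10000-sample buffer, FFT, magnitude.
fs = 10_000                # 0.1 ms sampling time
n = 10_000                 # buffer size -> 1 Hz resolution over 0-5 kHz
t = np.arange(n) / fs      # 1 s of samples

# Test source: a 900 Hz sine gated by a 1 Hz pulse with 10% duty cycle.
gate = (t % 1.0) < 0.1
signal = np.sin(2 * np.pi * 900 * t) * gate

# Magnitude of the FFT, i.e. the FFT block followed by the magnitude block.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, 1 / fs)
print(freqs[np.argmax(spectrum)])  # dominant peak near 900 Hz, broadened by the gating
```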
If you get everything right, you should obtain the results of Figure 6. This will be our comparison spectrum for later operations.
We will now mix this signal with a 1 kHz sine wave, which will represent our reference oscillator. Modify your model according to Figure 7. The low-pass filter is a subsystem comprising a cascade (8 or 10 stages) of single-pole low-passes with transfer function 1 / (s/(2πfp) + 1).
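Here is a rough NumPy translation of this mixing stage, with the filter subsystem approximated by eight discrete single-pole stages (the discretisation is my own choice):

```python
import numpy as np

fs, n = 10_000, 10_000
t = np.arange(n) / fs

# Same gated 900 Hz source as before, mixed with the 1 kHz reference.
signal = np.sin(2 * np.pi * 900 * t) * ((t % 1.0) < 0.1)
mixed = signal * np.sin(2 * np.pi * 1000 * t)

def one_pole(x, fc, fs):
    # Discrete stand-in for a single-pole RC low-pass with cut-off fc.
    alpha = (2 * np.pi * fc / fs) / (1 + 2 * np.pi * fc / fs)
    y, acc = np.empty_like(x), 0.0
    for i, v in enumerate(x):
        acc += alpha * (v - acc)
        y[i] = acc
    return y

# Cascade of 8 stages, mimicking the low-pass subsystem of the model.
filtered = mixed
for _ in range(8):
    filtered = one_pole(filtered, 1000, fs)

freqs = np.fft.rfftfreq(n, 1 / fs)
spectrum = np.abs(np.fft.rfft(filtered))
print(freqs[np.argmax(spectrum)])  # downshifted peak near 100 Hz (= 1000 - 900)
```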
We obtain the result of Figure 8, just as predicted by theory. Note, however, that the spectrum is distorted around the origin because of the “reflection” of components above 1 kHz that roll back from negative to positive frequencies. This has consequences when analyzing the spectrum data because it no longer represents the original spectrum of Figure 6 as closely as we would like.
Now comes the tricky part. In real-world operation, we would filter everything above the reference oscillator, acquire the signal with our oscilloscope, plot the result and shift all the frequencies by fosc. Actually, we would even plot the sum of two graphs to account for the positive/negative frequency uncertainty issue. In our example, we would get something looking like two peaks at 900 Hz and 1100 Hz with some spreading in between. It is possible to simulate this in Matlab, but there is an easier way to deal with it.
The trick is to first filter out the up-shifted peak as we normally do, but then to remix the resulting signal with the reference oscillator. This produces the same result as the spectrum post-processing would. Actually, it is the exact idea of the post-processing operation: to remap the signal to its original frequency domain. Your model should now look like the one of Figure 9.
The resulting spectrum is then given in Figure 10. It is exactly what we expected, with the two peaks and the spreading. Note that the roll-back effect noticed in Figure 8 still applies here because we have only shifted the spectrum back to its original frequency span.
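The filter-then-remix trick can be checked numerically; this sketch reuses the same gated source and the same discrete filter approximation as above:

```python
import numpy as np

fs, n = 10_000, 10_000
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 900 * t) * ((t % 1.0) < 0.1)
osc = np.sin(2 * np.pi * 1000 * t)

def cascade_lowpass(x, fc, fs, stages=8):
    # Eight discrete single-pole stages, as in the filter subsystem.
    alpha = (2 * np.pi * fc / fs) / (1 + 2 * np.pi * fc / fs)
    for _ in range(stages):
        y, acc = np.empty_like(x), 0.0
        for i, v in enumerate(x):
            acc += alpha * (v - acc)
            y[i] = acc
        x = y
    return x

# Mix, keep the downshifted copy, then remix with the same oscillator:
# the band folds back around f_osc, giving peaks at 900 Hz and 1100 Hz.
remixed = cascade_lowpass(signal * osc, 1000, fs) * osc

freqs = np.fft.rfftfreq(n, 1 / fs)
spectrum = np.abs(np.fft.rfft(remixed))
below = freqs < 1000
f_lo = freqs[below][np.argmax(spectrum[below])]
f_hi = freqs[~below][np.argmax(spectrum[~below])]
print(f_lo, f_hi)  # the two mirrored peaks, near 900 Hz and 1100 Hz
```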
We may do the same with a second reference oscillator at 1.5 kHz. The resulting spectrum is given in Figure 11. Note that this spectrum looks more similar to the original one (compare with Figure 6) due to the greater distance from the oscillator, which makes the distortion around zero less marked.
In the introduction, I explained that a trick to resolve the positive/negative uncertainty is to correlate the peaks obtained from two distinct oscillator references. Here, we clearly see that the peak at 900 Hz is well correlated between Figure 10 and Figure 11. To perform this correlation, I have taken the square root of the product of the spectra. Please refer to Figure 12 to connect your model correctly.
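Here is a numerical sketch of the correlation step, with the same approximations as before (the helper name heterodyne_spectrum is mine):

```python
import numpy as np

fs, n = 10_000, 10_000
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 900 * t) * ((t % 1.0) < 0.1)

def heterodyne_spectrum(signal, f_osc, fs, stages=8):
    # Mix, low-pass (cascade of single-pole stages at f_osc), remix.
    t = np.arange(len(signal)) / fs
    osc = np.sin(2 * np.pi * f_osc * t)
    x = signal * osc
    alpha = (2 * np.pi * f_osc / fs) / (1 + 2 * np.pi * f_osc / fs)
    for _ in range(stages):
        y, acc = np.empty_like(x), 0.0
        for i, v in enumerate(x):
            acc += alpha * (v - acc)
            y[i] = acc
        x = y
    return np.abs(np.fft.rfft(x * osc))

# Each reference produces the true peak (900 Hz) plus a mirror image.
w1 = heterodyne_spectrum(signal, 1000, fs)  # mirror at 1100 Hz
w2 = heterodyne_spectrum(signal, 1500, fs)  # mirror at 2100 Hz
combined = np.sqrt(w1 * w2)                 # only the common 900 Hz peak survives

freqs = np.fft.rfftfreq(n, 1 / fs)
print(freqs[np.argmax(combined)])  # near 900 Hz
```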
The results are given in Figure 13 and ought to be compared to the original spectrum shown in Figure 6.
The uncertainty between positive and negative frequencies has been lifted, but the recovered spectrum is far from perfect. More simulations show that the results get better as the oscillators move away from the signal frequency span. However, because the overall bandwidth of the acquisition system is limited by the technology used (in our case, 2.5 kHz), we cannot push the oscillators too far from the actual signal range… As stated previously, a good compromise is to first locate where the signal extends and then select an oscillator reference placed just at the boundary. In our case, a reference at 1.5 kHz or even at 2.0 kHz will yield satisfactory results.
Finally, please note that the technique described here applies to any signal located to the left or right of, or between, the two oscillators. In our example, the signal was at lower frequencies than our oscillators, but we could have chosen a signal of 2 kHz or 1.1 kHz as well.
In this section you will find detailed mathematical derivations linked to heterodyne systems. It is not mandatory, so skip it if you don’t like maths :-)
Earlier I used trigonometric identities to compute the product of two sine waves. It is the very formula taught in high schools all over the world, one that every student must have learned by heart. However, you may wonder how to prove this identity. To do so, we first need the Euler identity (j2=-1):
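That identity states:

```latex
e^{j\theta} = \cos\theta + j\sin\theta
```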
If you doubt this equation too, you can derive it from the Maclaurin series of exp(jθ), sin(θ) and cos(θ):
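The three series are (group the even and odd powers of jθ to recover cos θ + j sin θ):

```latex
e^{j\theta} = \sum_{n=0}^{\infty}\frac{(j\theta)^n}{n!}, \qquad
\cos\theta = \sum_{k=0}^{\infty}(-1)^k\frac{\theta^{2k}}{(2k)!}, \qquad
\sin\theta = \sum_{k=0}^{\infty}(-1)^k\frac{\theta^{2k+1}}{(2k+1)!}
```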
By rearranging the Euler formula we get:
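Solving the Euler formula for the sine and expanding the product:

```latex
\sin\theta = \frac{e^{j\theta}-e^{-j\theta}}{2j}
\;\Rightarrow\;
\sin a \,\sin b = -\frac{1}{4}\left(e^{j(a+b)} - e^{j(a-b)} - e^{-j(a-b)} + e^{-j(a+b)}\right)
= \frac{1}{2}\big[\cos(a-b) - \cos(a+b)\big]
```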
which is precisely the formula used above!
It is also possible to get an understanding of heterodyne systems by taking a look at the frequency domain using the Fourier transforms. Let then be a signal w(t) and its Fourier transform W(f) linked by the relations:
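Using the symmetric convention with the frequency f in Hz:

```latex
W(f) = \int_{-\infty}^{+\infty} w(t)\, e^{-2\pi j f t}\, dt, \qquad
w(t) = \int_{-\infty}^{+\infty} W(f)\, e^{+2\pi j f t}\, df
```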
Therefore, studying the frequency spectrum of the product of signal and a sine wave becomes:
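Expanding the sine with the Euler formula and using the frequency-shift property of the transform:

```latex
\int_{-\infty}^{+\infty} w(t)\sin(2\pi f_{osc} t)\, e^{-2\pi j f t}\, dt
= \frac{1}{2j}\big[W(f - f_{osc}) - W(f + f_{osc})\big]
= \frac{j}{2}\big[W(f') - W(f'')\big]
```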
with f’=f+fosc and f”=f-fosc.
which are the downshifted and upshifted spectra, respectively.
In the case where we multiply the signal by a more complex oscillator source, the previous maths still applies, but we have to consider all the frequencies that compose the oscillator signal. For example, if the oscillator is composed of two frequencies fosc1 and fosc2, we would get:
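Assuming an oscillator osc(t) = sin(2π fosc1 t) + sin(2π fosc2 t), linearity of the transform gives:

```latex
\frac{j}{2}\big[W(f + f_{osc1}) - W(f - f_{osc1}) + W(f + f_{osc2}) - W(f - f_{osc2})\big]
```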
The example generalises to any frequency composition:
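Writing the oscillator from its spectrum OSC(f):

```latex
osc(t) = \int_{-\infty}^{+\infty} OSC(f)\, e^{+2\pi j f t}\, df
```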
which can be read as an infinite sum over all frequencies f and is nothing but the inverse Fourier transform of the OSC(f) spectrum. As I probably hate maths even more than you do, I will just state that the Fourier transform of the product corresponds to the convolution of both spectra:
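The convolution theorem reads:

```latex
\mathcal{F}\{w(t)\,osc(t)\}(f) = (W * OSC)(f) = \int_{-\infty}^{+\infty} W(\varphi)\, OSC(f - \varphi)\, d\varphi
```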
To get a better understanding of this ugly formula, we will first apply it to a pure sine wave to see exactly what it does.
We know that
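the Fourier transform of a pure sine is a pair of Dirac peaks:

```latex
\mathcal{F}\{\sin(2\pi f_{osc} t)\}(f) = \frac{j}{2}\big[\delta(f + f_{osc}) - \delta(f - f_{osc})\big]
```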
with δ(…) the Dirac function:
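defined (together with its sifting property) by:

```latex
\delta(x) = 0 \;\;\forall x \neq 0, \qquad \int_{-\infty}^{+\infty} g(x)\,\delta(x - a)\, dx = g(a)
```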
which, in frequency space, corresponds to two peaks of magnitude j/2 and –j/2, respectively located at –fosc and +fosc.
If we apply the convolution of this spectrum with W(f) (the signal spectrum), we may simplify the integral because the Dirac function is null everywhere except at its origin:
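The deltas pick out the shifted copies of the spectrum:

```latex
(W * OSC)(f) = \frac{j}{2}\int_{-\infty}^{+\infty} W(\varphi)\big[\delta(f - \varphi + f_{osc}) - \delta(f - \varphi - f_{osc})\big]\, d\varphi
= \frac{j}{2}\big[W(f + f_{osc}) - W(f - f_{osc})\big]
```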
which is exactly the result obtained in the previous section.
You may then understand heterodyning as an operator that copies the whole signal frequency spectrum onto each individual frequency component of the oscillator source. The resulting spectrum may be of little interest with complex oscillator signals but is more helpful with signals that present characteristic distortions, such as square waves.
The Fourier spectrum of a square wave oscillating between -1 and +1 at a frequency f0 displays a fundamental peak at f0 followed by harmonics at 3f0, 5f0, 7f0, etc. As a consequence of the convolution theorem, applying the heterodyne technique with a square wave oscillator will shift the signal spectrum not only around the fundamental f0 but also around 3f0, 5f0, 7f0, etc.