Spectrum Analyzer
Introduction
At the most basic level, the spectrum analyzer can be described as a frequency-selective, peak-responding voltmeter calibrated to display the rms value of a sine wave. The spectrum analyzer is not a power meter, even though it can be used to display power directly. As long as we know some value of a sine wave (for example, peak or average) and know the resistance across which we measure this value, we can calibrate our voltmeter to indicate power.
With the advent of digital technology, modern spectrum analyzers have been given many more capabilities.
Fourier theory tells us any time-domain electrical phenomenon is made up of one or more sine waves of appropriate frequency, amplitude, and phase.
In other words, we can transform a time-domain signal into its frequency domain equivalent. Measurements in the frequency domain tell us how much energy is present at each particular frequency.
With proper filtering, a waveform such as the one shown in the figure can be decomposed into separate sinusoidal waves, or spectral components, which we can then evaluate independently.
Each sine wave is characterized by its amplitude and phase. If the signal that we wish to analyze is periodic, as in our case here, Fourier says that the constituent sine waves are separated in the frequency domain by 1/T, where T is the period of the signal.
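This result is easy to demonstrate numerically. The sketch below (signal parameters are invented for the example) builds a periodic signal with period T = 1 ms and shows that its spectrum contains energy only at multiples of 1/T = 1 kHz:

```python
import numpy as np

# Illustrative sketch: a signal with period T = 1 ms, sampled over
# exactly 10 periods, has spectral components only at multiples of
# 1/T = 1 kHz.
T = 1e-3                 # signal period (s)
fs = 100e3               # sample rate (Hz)
n = 1000                 # 10 ms of data = 10 periods
t = np.arange(n) / fs

# Periodic signal: 1 kHz fundamental plus a 3 kHz third harmonic
x = np.sin(2 * np.pi * 1e3 * t) + 0.3 * np.sin(2 * np.pi * 3e3 * t)

spectrum = 2 * np.abs(np.fft.rfft(x)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)

# Only bins at multiples of 1/T carry significant energy
peaks = freqs[spectrum > 0.1]
print(peaks)  # -> [1000. 3000.]
```

Because the observation window spans an integer number of periods, the components land exactly on FFT bins spaced 1/T apart.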
Some measurements require that we preserve complete information about the signal frequency, amplitude and phase. This type of signal analysis is called vector signal analysis.
Modern spectrum analyzers are capable of performing a wide variety of vector signal measurements.
However, another large group of measurements can be made without knowing the phase relationships among the sinusoidal components. This type of signal analysis is called spectrum analysis.
Theoretically, to make the transformation from the time domain to the frequency domain, the signal must be evaluated over all time, that is, over infinity.
However, in practice, we always use a finite time period when making a measurement.
Fourier transformations can also be made from the frequency to the time domain. This case also theoretically requires the evaluation of all spectral components over frequencies to infinity.
In reality, making measurements in a finite bandwidth that captures most of the signal energy produces acceptable results. When performing a Fourier transformation on frequency domain data, the phase of the individual components is indeed critical: For example, a square wave transformed to the frequency domain and back again could turn into a sawtooth wave if phase were not preserved.
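The importance of phase in the inverse transformation can be shown with a small experiment (waveform parameters are invented for the example): keep the magnitude spectrum of a square wave but scramble its phases, and the reconstructed waveform is no longer square.

```python
import numpy as np

# Sketch: scramble the phases of a square wave's spectrum while keeping
# the magnitudes; the reconstructed waveform is no longer square.
N = 1024
t = np.arange(N)
square = np.sign(np.sin(2 * np.pi * 8 * t / N))  # 8 cycles of a square wave

X = np.fft.rfft(square)
rng = np.random.default_rng(0)
random_phase = rng.uniform(-np.pi, np.pi, X.size)

# Same magnitude spectrum, random phases -> a very different waveform
y = np.fft.irfft(np.abs(X) * np.exp(1j * random_phase), n=N)
print(np.allclose(y, square))  # -> False: phase matters
```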
What is a spectrum
A spectrum is a collection of sine waves that, when combined properly, produce the time-domain signal under examination. Suppose that we were hoping to see a sine wave. Although the waveform certainly shows us that the signal is not a pure sinusoid, it does not give us a definitive indication of the reason why.
The frequency domain display plots the amplitude versus the frequency of each sine wave in the spectrum. As shown, the spectrum in this case comprises just two sine waves.
We now know why our original waveform was not a pure sine wave. It contained a second sine wave, the second harmonic in this case.
Does this mean we have no need to perform time-domain measurements? Not at all. The time domain is better for many measurements, and some can be made only in the time domain.
For example, pure time-domain measurements include pulse rise and fall times, overshoot, and ringing.
Engineers and technicians are also very concerned about distortion of the message modulated onto a carrier.
Third-order intermodulation (two tones of a complex signal modulating each other) can be particularly troublesome because the distortion components can fall within the band of interest and so will not be filtered away.
Transmitters and other intentional radiators can often be required to operate at closely spaced adjacent frequencies.
A key performance measure for the power amplifiers and other components used in these systems is the amount of signal energy that spills over into adjacent channels and causes interference.
Electromagnetic interference (EMI) is a term applied to unwanted emissions from both intentional and unintentional radiators. Here, the concern is that these unwanted emissions, either radiated or conducted (through the power lines or other interconnecting wires), might impair the operation of other systems. Almost anyone designing or manufacturing electrical or electronic products must test for emission levels versus frequency according to regulations set by various government agencies or industry-standard bodies.
Types of measurements
Common spectrum analyzer measurements include frequency, power, modulation, distortion, and noise. Understanding the spectral content of a signal is important, especially in systems with limited bandwidth. Transmitted power is another key measurement. Too little power may mean the signal cannot reach its intended destination. Too much power may drain batteries rapidly, create distortion, and cause excessively high operating temperatures. Measuring the quality of the modulation is important for making sure a system is working properly and that the information is being correctly transmitted by the system.
Tests such as modulation degree, sideband amplitude, modulation quality, and occupied bandwidth are examples of common analog modulation measurements. Digital modulation metrics include error vector magnitude (EVM), IQ imbalance, phase error versus time, and a variety of other measurements.
In communications, measuring distortion is critical for both the receiver and transmitter.
Excessive harmonic distortion at the output of a transmitter can interfere with other communication bands. The pre-amplification stages in a receiver must be free of intermodulation distortion to prevent signal crosstalk. An example is the intermodulation of cable TV carriers as they move down the trunk of the distribution system and distort other channels on the same cable. Common distortion measurements include intermodulation, harmonics, and spurious emissions.
Noise is often the signal you want to measure. Any active circuit or device will generate excess noise. Tests such as noise figure and signal-to-noise ratio (SNR) are important for characterizing the performance of a device and its contribution to overall system performance.
Heterodyne means to mix, that is, to translate frequency, and super refers to super-audio frequencies, or frequencies above the audio range. We see that an input signal passes through an attenuator, then through a low-pass filter (later we shall see why the filter is here) to a mixer, where it mixes with a signal from the local oscillator (LO). Because the mixer is a non-linear device, its output includes not only the two original signals, but also their harmonics and the sums and differences of the original frequencies and their harmonics. If any of the mixed signals falls within the passband of the intermediate-frequency (IF) filter, it is further processed (amplified and perhaps compressed on a logarithmic scale), essentially rectified by the envelope detector, digitized, and displayed. A ramp generator creates the horizontal movement across the display from left to right; the ramp also tunes the LO so that its frequency changes in proportion to the ramp voltage. If you are familiar with superheterodyne AM radios, the type that receive ordinary AM broadcast signals, you will note a strong similarity between them and the spectrum analyzer. The differences are that the output of a spectrum analyzer is a display instead of a speaker, and the local oscillator is tuned electronically rather than by a front-panel knob.
RF Attenuator
The first part of our analyzer is the RF input attenuator. Its purpose is to ensure the signal enters the mixer at the optimum level to prevent overload, gain compression, and distortion. Because attenuation is a protective circuit for the analyzer, it is usually set automatically, based on the reference level.
However, manual selection of attenuation is also available in steps of 10, 5, 2, or even 1 dB.
An example of an attenuator circuit with a maximum attenuation of 70 dB in increments of 2 dB. The blocking capacitor is used to prevent the analyzer from being damaged by a DC signal or a DC offset of the signal.
Unfortunately, it also attenuates low frequency signals and increases the minimum useable start frequency of the analyzer to 100 Hz for some analyzers, 9 kHz for others.
We need to pick an LO frequency and an IF that will create an analyzer with the desired tuning range. Let's assume that we want a tuning range from 0 to 3 GHz. We then need to choose the IF. Let's try a 1 GHz IF. Since this frequency is within our desired tuning range, we could have an input signal at 1 GHz. Since the output of a mixer also includes the original input signals, an input signal at 1 GHz would give us a constant output from the mixer at the IF. The 1 GHz signal would thus pass through the system and give us a constant amplitude response on the display regardless of the tuning of the LO. The result would be a hole in the frequency range at which we could not properly examine signals because the amplitude response would be independent of the LO frequency. Therefore, a 1 GHz IF will not work.
So we shall choose, instead, an IF that is above the highest frequency to which we wish to tune. In spectrum analyzers that can tune to 3 GHz, the IF chosen is about 3.9 GHz. Remember that we want to tune from 0 Hz to 3 GHz. (Actually from some low frequency, because we cannot view a 0 Hz signal with this architecture.) If we start the LO at the IF (LO minus IF = 0 Hz) and tune it upward from there to 3 GHz above the IF, then we can cover the tuning range with the LO minus IF mixing product. Using this information, we can generate a tuning equation:
fsig = fLO − fIF
where fsig = signal frequency, fLO = local oscillator frequency, and fIF = intermediate frequency (IF)
If we wanted to determine the LO frequency needed to tune the analyzer to a low-, mid-, or high-frequency signal (say, 1 kHz, 1.5 GHz, or 3 GHz), we would first restate the tuning equation in terms of fLO: fLO = fsig + fIF. Then we would plug the signal and IF values into the tuning equation: fLO = 1 kHz + 3.9 GHz = 3.900001 GHz, fLO = 1.5 GHz + 3.9 GHz = 5.4 GHz, or fLO = 3 GHz + 3.9 GHz = 6.9 GHz.
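This tuning calculation can be expressed as a small helper function (a sketch; the function name is ours, and the 3.9 GHz IF is the value used in the text):

```python
# Hypothetical helper for the restated tuning equation f_LO = f_sig + f_IF,
# using the 3.9 GHz first IF from the text.
F_IF = 3.9e9  # first IF (Hz)

def lo_frequency(f_sig_hz: float) -> float:
    """LO frequency needed to tune the analyzer to f_sig."""
    return f_sig_hz + F_IF

for f_sig in (1e3, 1.5e9, 3e9):
    print(f"{f_sig:>14.0f} Hz -> LO = {lo_frequency(f_sig) / 1e9:.6f} GHz")
# -> 3.900001, 5.400000, and 6.900000 GHz
```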
In this figure, fLO is not quite high enough to cause the fLO − fsig mixing product to fall in the IF passband, so there is no response on the display.
If we adjust the ramp generator to tune the LO higher, however, this mixing product will fall in the IF passband at some point on the ramp (sweep), and we shall see a response on the display. Since the ramp generator controls both the horizontal position of the trace on the display and the LO frequency, we can now calibrate the horizontal axis of the display in terms of the input signal frequency.
What happens if the frequency of the input signal is 8.2 GHz? As the LO tunes through its 3.9 to 7.0 GHz range, it reaches a frequency (4.3 GHz) at which it is the IF away from the 8.2 GHz input signal. At this frequency we have a mixing product that is equal to the IF, creating a response on the display. In other words, the tuning equation could just as easily have been: fsig = fLO + fIF. This equation says that the architecture of Figure 2-1 could also result in a tuning range from 7.8 to 10.9 GHz, but only if we allow signals in that range to reach the mixer. The job of the input low-pass filter is to prevent these higher frequencies from getting to the mixer. We also want to keep signals at the intermediate frequency itself from reaching the mixer, so the low-pass filter must do a good job of attenuating signals at 3.9 GHz, as well as in the range from 7.8 to 10.9 GHz.
In summary, we can say that for a single-band RF spectrum analyzer, we would choose an IF above the highest frequency of the tuning range.
We would make the LO tunable from the IF to the IF plus the upper limit of the tuning range and include a low-pass filter in front of the mixer that cuts off below the IF. To separate closely spaced signals, some spectrum analyzers have IF bandwidths as narrow as 1 kHz, 10 Hz, or even 1 Hz.
Such narrow filters are difficult to achieve at a center frequency of 3.9 GHz. So we must add additional mixing stages, typically two to four stages, to down-convert from the first to the final IF.
A possible IF chain based on the architecture of a typical spectrum analyzer is shown below. The full tuning equation for this analyzer is: fsig = fLO1 − (fLO2 + fLO3 + ffinal IF). However, fLO2 + fLO3 + ffinal IF = 3.6 GHz + 300 MHz + 21.4 MHz = 3.9214 GHz, the first IF.
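A quick numerical check of the IF-chain arithmetic, using the LO and final-IF values quoted above:

```python
# Check that the cascaded LOs and final IF in this IF chain add up to
# the 3.9214 GHz first IF quoted in the text.
f_lo2, f_lo3, f_final_if = 3.6e9, 300e6, 21.4e6  # Hz
first_if = f_lo2 + f_lo3 + f_final_if
print(first_if / 1e9)  # -> 3.9214 (GHz)
```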
Although only passive filters are shown in the diagram, the actual implementation includes amplification in the narrower IF stages. The final IF section contains additional components, such as logarithmic amplifiers or analog to digital converters, depending on the design of the particular analyzer.
Most RF spectrum analyzers allow an LO frequency as low as, and even below, the first IF. Because there is finite isolation between the LO and IF ports of the mixer, the LO appears at the mixer output. When the LO equals the IF, the LO signal itself is processed by the system and appears as a response on the display, as if it were an input signal at 0 Hz. This response, called LO feedthrough, can mask very low frequency signals, so not all analyzers allow the display range to include 0 Hz.
IF gain
The next component of the block diagram is a variable gain amplifier. It is used to adjust the vertical position of signals on the display without affecting the signal level at the input mixer. When the IF gain is changed, the value of the reference level is changed accordingly to retain the correct indicated value for the displayed signals. Generally, we do not want the reference level to change when we change the input attenuator, so the settings of the input attenuator and the IF gain are coupled together. A change in input attenuation will automatically change the IF gain to offset the effect of the change in input attenuation, thereby keeping the signal at a constant position on the display.
Analog filters
Frequency resolution is the ability of a spectrum analyzer to separate two input sinusoids into distinct responses. Fourier tells us that a sine wave signal only has energy at one frequency, so we shouldn't have any resolution problems. Two signals, no matter how close in frequency, should appear as two lines on the display. But a closer look at our superheterodyne receiver shows why signal responses have a definite width on the display. The output of a mixer includes the sum and difference products plus the two original signals (input and LO). A bandpass filter determines the intermediate frequency, and this filter selects the desired mixing product and rejects all other signals. Because the input signal is fixed and the local oscillator is swept, the products from the mixer are also swept.
If a mixing product happens to sweep past the IF, the characteristic shape of the bandpass filter is traced on the display. The narrowest filter in the chain determines the overall displayed bandwidth, and in the architecture of Figure 2-5, this filter is in the 21.4 MHz IF.
So two signals must be far enough apart, or else the traces they make will fall on top of each other and look like only one response. Fortunately, spectrum analyzers have selectable resolution (IF) filters, so it is usually possible to select one narrow enough to resolve closely spaced signals.
Data sheets describe the ability to resolve signals by listing the 3 dB bandwidths of the available IF filters.
This number tells us how close together equal-amplitude sinusoids can be and still be resolved. In this case, there will be about a 3 dB dip between the two peaks traced out by these signals.
The signals can be closer together before their traces merge completely, but the 3 dB bandwidth is a good rule of thumb for resolution of equal-amplitude signals.
Two equal-amplitude sinusoids separated by the 3 dB BW of the selected IF filter can be resolved
Sometimes, we are dealing with sinusoids that are not equal in amplitude. The smaller sinusoid can actually be lost under the skirt of the response traced out by the larger. The top trace looks like a single signal, but in fact represents two signals: one at 300 MHz (0 dBm) and another at 300.005 MHz (−30 dBm). The lower trace shows the display after the 300 MHz signal is removed.
Another specification is listed for the resolution filters: bandwidth selectivity (or selectivity or shape factor). Bandwidth selectivity helps determine the resolving power for unequal sinusoids. Bandwidth selectivity is generally specified as the ratio of the 60 dB bandwidth to the 3 dB bandwidth, as shown below. If the analog filters are a four-pole, synchronously tuned design with a nearly Gaussian shape, they exhibit a bandwidth selectivity of about 12.7:1.
Example: what resolution bandwidth must we choose to resolve signals that differ by 4 kHz and 30 dB, assuming 12.7:1 bandwidth selectivity? Since we are concerned with rejection of the larger signal when the analyzer is tuned to the smaller signal, we need to consider not the full bandwidth, but the frequency difference from the filter center frequency to the skirt. To determine how far down the filter skirt is at a given offset, we use the following equation:

H(Δf) = 10 N log10[(Δf/f0)² + 1]

where H(Δf) is the filter skirt rejection in dB, N is the number of filter poles, Δf is the frequency offset from the center in Hz, and f0 is given by:

f0 = RBW / (2 sqrt(2^(1/N) − 1))
For our example, N = 4 and Δf = 4000 Hz. Let's begin by trying the 3 kHz RBW filter. First, we compute f0 = 3448.44 Hz.
Now we can determine the filter rejection at a 4 kHz offset: H(4000) = 14.8 dB. This is not enough to allow us to see the smaller signal, which is 30 dB down. Let's determine H(Δf) again using a 1 kHz filter: f0 = 1149.48 Hz, so H(4000) = 44.7 dB. Thus, the 1 kHz resolution bandwidth filter does resolve the smaller signal.
The 3 kHz filter (top trace) does not resolve smaller signal; reducing the resolution bandwidth to 1 kHz (bottom trace) does.
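The worked example can be checked numerically. This sketch implements the skirt-rejection model for an N-pole, synchronously tuned filter, H = 10 N log10[(Δf/f0)² + 1] with f0 = RBW/(2 sqrt(2^(1/N) − 1)); the function name is ours:

```python
import math

# Filter-skirt rejection for an N-pole, synchronously tuned filter
# (N = 4 in the worked example).
def skirt_rejection_db(delta_f_hz: float, rbw_hz: float, n_poles: int = 4) -> float:
    f0 = rbw_hz / (2 * math.sqrt(2 ** (1 / n_poles) - 1))
    return 10 * n_poles * math.log10((delta_f_hz / f0) ** 2 + 1)

print(round(skirt_rejection_db(4000, 3000), 1))  # -> 14.8 (3 kHz RBW: insufficient)
print(round(skirt_rejection_db(4000, 1000), 1))  # -> 44.7 (1 kHz RBW: > 30 dB)
```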
Digital filters
Some spectrum analyzers use digital techniques to realize their resolution bandwidth filters.
Digital filters can provide important benefits, such as dramatically improved bandwidth selectivity.
Some spectrum analyzers implement all resolution bandwidths digitally. Other analyzers take a hybrid approach, using analog filters for the wider bandwidths and digital filters for bandwidths of 300 Hz and below.
Residual FM
Filter bandwidth is not the only factor that affects the resolution of a spectrum analyzer. The stability of the LOs in the analyzer, particularly the first LO, also affects resolution. The first LO is typically a YIG-tuned oscillator (tuning somewhere in the 3 to 7 GHz range). In early spectrum analyzer designs, these oscillators had residual FM of 1 kHz or more. This instability was transferred to any mixing products resulting from the LO and incoming signals, and it was not possible to determine whether the input signal or the LO was the source of this instability. The minimum resolution bandwidth is determined, at least in part, by the stability of the first LO. Analyzers where no steps are taken to improve upon the inherent residual FM of the YIG oscillators typically have a minimum bandwidth of 1 kHz. In contrast, modern analyzers (Agilent designs, for example) have residual FM of 1 to 4 Hz. This allows bandwidths as low as 1 Hz, so any instability we see on a spectrum analyzer today is due mostly to the incoming signal.
Phase Noise
Even though we may not be able to see the actual frequency jitter of a spectrum analyzer LO system, there is still a manifestation of the LO frequency or phase instability that can be observed. This is known as phase noise (sometimes called sideband noise). No oscillator is perfectly stable. All are frequency or phase modulated by random noise to some extent. As previously noted, any instability in the LO is transferred to any mixing products resulting from the LO and input signals. So the LO phase-noise modulation sidebands appear around any spectral component on the display that is far enough above the broadband noise floor of the system.
Phase noise is displayed only when a signal is displayed far enough above the system noise floor
The amplitude difference between a displayed spectral component and the phase noise is a function of the stability of the LO. The more stable the LO, the farther down the phase noise. The amplitude difference is also a function of the resolution bandwidth. If we reduce the resolution bandwidth by a factor of ten, the level of the displayed phase noise decreases by 10 dB. The shape of the phase noise spectrum is a function of analyzer design, in particular, the sophistication of the phase lock loops employed to stabilize the LO. In some analyzers, the phase noise is a relatively flat pedestal out to the bandwidth of the stabilizing loop. In others, the phase noise may fall away as a function of frequency offset from the signal. Phase noise is specified in terms of dBc (dB relative to a carrier) and normalized to a 1 Hz noise power bandwidth. It is sometimes specified at specific frequency offsets. At other times, a curve is given to show the phase noise characteristics over a range of offsets.
Generally, we can see the inherent phase noise of a spectrum analyzer only in the narrower resolution filters, when it obscures the lower skirts of these filters. The use of digital filters does not change this effect. For wider filters, the phase noise is hidden under the filter skirt, just as in the case of two unequal sinusoids discussed earlier. In any case, phase noise becomes the ultimate limitation in an analyzer's ability to resolve signals of unequal amplitude. We may have determined that we can resolve two signals based on the 3 dB bandwidth and selectivity, only to find that the phase noise covers up the smaller signal.
Sweep time
Analog resolution filters: If resolution were the only criterion on which we judged a spectrum analyzer, we might design our analyzer with the narrowest possible resolution (IF) filter and let it go at that. But resolution affects sweep time, and we care very much about sweep time because it directly affects how long it takes to complete a measurement. Resolution comes into play because the IF filters are band-limited circuits that require finite times to charge and discharge. If the mixing products are swept through them too quickly, there will be a loss of displayed amplitude.
Sweeping an analyzer too fast causes a drop in displayed amplitude and a shift in indicated frequency
If we think about how long a mixing product stays in the passband of the IF filter, that time is directly proportional to bandwidth and inversely proportional to the sweep in Hz per unit time: Time in passband = RBW/(Span/ST) = (RBW)(ST)/Span, where ST is sweep time. On the other hand, the rise time of a filter is inversely proportional to its bandwidth, and if we include a constant of proportionality, k, then: Rise time = k/RBW. If we make the terms equal and solve for sweep time, we have: k/RBW = (RBW)(ST)/Span, so ST = k (Span)/(RBW)².
The value of k is in the 2 to 3 range for the synchronously-tuned, near-Gaussian filters used in many analyzers. The important message here is that a change in resolution has a dramatic effect on sweep time. Most analyzers provide values in a 1, 3, 10 sequence or in ratios roughly equaling the square root of 10. So sweep time is affected by a factor of about 10 with each step in resolution. Spectrum analyzers automatically couple sweep time to the span and resolution bandwidth settings. Sweep time is adjusted to maintain a calibrated display. If a sweep time longer than the maximum available is called for, the analyzer indicates that the display is uncalibrated with a Meas Uncal message in the upper-right part of the graticule. We are allowed to override the automatic setting and set sweep time manually if the need arises.
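The ST = k·Span/RBW² relationship can be tried with illustrative numbers (the function name and the choice of k = 2.5 are ours):

```python
# Sweep-time estimate ST = k * Span / RBW^2, with k between 2 and 3 for
# the synchronously tuned, near-Gaussian analog filters discussed above.
def sweep_time_s(span_hz: float, rbw_hz: float, k: float = 2.5) -> float:
    return k * span_hz / rbw_hz ** 2

# Narrowing RBW by 10x lengthens the sweep by 100x:
print(sweep_time_s(1e6, 10e3))  # 1 MHz span, 10 kHz RBW -> 0.025 s
print(sweep_time_s(1e6, 1e3))   # 1 MHz span,  1 kHz RBW -> 2.5 s
```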
Digital resolution filters: The digital resolution filters used in Agilent spectrum analyzers have an effect on sweep time that is different from the effects we've just discussed for analog filters. For swept analysis, the speed of digitally implemented filters can show a 2 to 4 times improvement. FFT-based digital filters show an even greater difference. This difference occurs because the signal being analyzed is processed in frequency blocks, depending upon the particular analyzer. For example, if the frequency block was 1 kHz, then when we select a 10 Hz resolution bandwidth, the analyzer is in effect simultaneously processing the data in each 1 kHz block through 100 contiguous 10 Hz filters. If the digital processing were instantaneous, we would expect sweep time to be reduced by a factor of 100. In practice, the reduction factor is less, but is still significant.
Problem: A spectrum analyzer uses a synthesized LO capable of generating frequencies from 1 to 10 GHz (the full range need not be used). The IF is centered at 800 MHz. The mixer can operate in fundamental or second-harmonic mode. What maximum range of RF input frequencies can be covered, and what should be the approximate bandwidth and center frequency range of the preselector?
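One way to begin exploring this problem (a sketch, not a full solution) is to enumerate the input ranges reachable via fsig = n·fLO ± fIF for the fundamental (n = 1) and second-harmonic (n = 2) mixing modes:

```python
# Exploratory sketch for the problem above: list the input ranges
# reachable via f_sig = n*f_LO +/- f_IF for n = 1 and n = 2.
f_if = 0.8                  # IF center, GHz
lo_min, lo_max = 1.0, 10.0  # LO tuning range, GHz

ranges = []
for n in (1, 2):
    for sign in (-1, +1):
        lo, hi = n * lo_min + sign * f_if, n * lo_max + sign * f_if
        ranges.append((lo, hi))
        print(f"n={n}, sign={sign:+d}: {lo:.1f} to {hi:.1f} GHz")

print(min(r[0] for r in ranges), "to", max(r[1] for r in ranges), "GHz overall")
```

The overlapping ranges suggest how wide the total coverage can be; deciding which response is the intended one at each LO setting, and hence the preselector's tracking range and bandwidth, is the remainder of the exercise.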
Envelope detector: Spectrum analyzers typically convert the IF signal to video with an envelope detector. In its simplest form, an envelope detector consists of a diode, resistive load and low-pass filter, as shown below. The output of the IF chain in this example, an amplitude modulated sine wave, is applied to the detector. The response of the detector follows the changes in the envelope of the IF signal, but not the instantaneous value of the IF sine wave itself.
For most measurements, we choose a resolution bandwidth narrow enough to resolve the individual spectral components of the input signal. If we fix the frequency of the LO so that our analyzer is tuned to one of the spectral components of the signal, the output of the IF is a steady sine wave with a constant peak value. The output of the envelope detector will then be a constant (dc) voltage, and there is no variation for the detector to follow.
However, there are times when we deliberately choose a resolution bandwidth wide enough to include two or more spectral components. At other times, we have no choice: the spectral components are closer in frequency than our narrowest bandwidth. Assuming only two spectral components within the passband, we have two sine waves interacting to create a beat note, and the envelope of the IF signal varies, as shown, as the phase between the two sine waves varies.
The width of the resolution (IF) filter determines the maximum rate at which the envelope of the IF signal can change. This bandwidth determines how far apart two input sinusoids can be so that after the mixing process they will both be within the filter at the same time. Lets assume a 21.4 MHz final IF and a 100 kHz bandwidth. Two input signals separated by 100 kHz would produce mixing products of 21.35 and 21.45 MHz and would meet the criterion. The detector must be able to follow the changes in the envelope created by these two signals but not the 21.4 MHz IF signal itself.
The envelope detector is what makes the spectrum analyzer a voltmeter. Let's duplicate the situation above and have two equal-amplitude signals in the passband of the IF at the same time. A power meter would indicate a power level 3 dB above either signal, that is, the total power of the two. Assume that the two signals are close enough so that, with the analyzer tuned halfway between them, there is negligible attenuation due to the roll-off of the filter (assume the filter is perfectly rectangular). Then the analyzer display will vary between a value that is twice the voltage of either (6 dB greater) and zero (minus infinity on the log scale). We must remember that the two signals are sine waves (vectors) at different frequencies, and so they continually change in phase with respect to each other: at some time they add exactly in phase; at another, exactly out of phase. So the envelope detector follows the changing amplitude values of the peaks of the signal from the IF chain but not the instantaneous values, resulting in the loss of phase information. This gives the analyzer its voltmeter characteristics.
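This beat behavior is easy to simulate (frequencies here are scaled down and invented for the example): two equal-amplitude tones produce an envelope that swings between twice the single-tone voltage (+6 dB) and zero.

```python
import numpy as np

# Simulation sketch: two equal-amplitude tones inside the IF passband
# beat against each other at their difference frequency.
fs = 1e6                              # sample rate, Hz
t = np.arange(10_000) / fs            # 10 ms of data
f1, f2 = 100e3, 101e3                 # two tones 1 kHz apart
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Crude envelope: peak magnitude over each carrier cycle (10 samples)
env = np.abs(x).reshape(-1, 10).max(axis=1)
print(env.max())  # close to 2.0: the tones add in phase (+6 dB)
print(env.min())  # close to 0.0: the tones cancel once per beat period
```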
Displays
Up until the mid-1970s, spectrum analyzers were purely analog. The displayed trace presented a continuous indication of the signal envelope, and no information was lost. However, analog displays had drawbacks. The major problem was in handling the long sweep times required for narrow resolution bandwidths. In the extreme case, the display became a spot that moved slowly across the cathode ray tube (CRT), with no real trace on the display. So a meaningful display was not possible with the longer sweep times.
Later came the variable-persistence storage CRT, in which we could adjust the fade rate of the display. When properly adjusted, the old trace would just fade out at the point where the new trace was updating the display. This display was continuous, had no flicker, and avoided confusing overwrites. It worked quite well, but the intensity and the fade rate had to be readjusted for each new measurement situation. When digital circuitry became affordable in the mid-1970s, it was quickly put to use in spectrum analyzers. Once a trace had been digitized and put into memory, it was permanently available for display. It became an easy matter to update the display at a flicker-free rate without blooming or fading. The data in memory was updated at the sweep rate, and since the contents of memory were written to the display at a flicker-free rate, we could follow the updating as the analyzer swept through its selected frequency span just as we could with analog systems.
Detector types: With digital displays, we had to decide what value should be displayed for each display data point. No matter how many data points we use across the display, each point must represent what has occurred over some frequency range and, although we usually do not think in terms of time when dealing with a spectrum analyzer, over some time interval. It is as if the data for each interval is thrown into a bucket and we apply whatever math is necessary to extract the desired bit of information from our input signal. This datum is put into memory and written to the display. This provides great flexibility.
Each bucket contains data from a span and time frame that is determined by these equations: Frequency: bucket width = span/(trace points - 1) Time: bucket width = sweep time/(trace points - 1) The sampling rates are different for various instruments, but greater accuracy is obtained from decreasing the span and/or increasing the sweep time since the number of samples per bucket will increase in either case. Even in analyzers with digital IFs, sample rates and interpolation behaviors are designed to be the equivalent of continuous-time processing.
Each of the 101 trace points (buckets) covers a 1 MHz frequency span and a 0.1 millisecond time span
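The bucket-width equations can be checked against the figure's numbers; the 100 MHz span and 10 ms sweep time below are the values implied by the caption:

```python
# Bucket widths for 101 trace points, per the equations in the text:
#   freq bucket = span / (trace points - 1)
#   time bucket = sweep time / (trace points - 1)
span_hz = 100e6
sweep_time = 10e-3  # s
trace_points = 101

freq_bucket = span_hz / (trace_points - 1)      # Hz per bucket
time_bucket = sweep_time / (trace_points - 1)   # s per bucket
print(freq_bucket)  # -> 1000000.0 (1 MHz)
print(time_bucket)  # -> 0.0001 (0.1 ms)
```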
The bucket concept is important, as it will help us differentiate the six detector types:
Sample
Positive peak (also simply called peak)
Negative peak
Normal
Average
Quasi-peak
Sample detection: Let us simply select the data point as the instantaneous level at the center of each bucket. This is the sample detection mode. To give the trace a continuous look, we design a system that draws vectors between the points. Of course, the more points there are in the trace, the better the replication of the analog signal will be. The number of available display points can vary for different analyzers. For most spectrum analyzers, the number of display points for frequency domain traces can be set from a minimum of 101 points to a maximum of 8192 points. More points do indeed get us closer to the analog signal.
While the sample detection mode does a good job of indicating the randomness of noise, it is not a good mode for analyzing sinusoidal signals. If we were to look at a 100 MHz comb, we might set it to span from 0 to 26.5 GHz. Even with 1,001 display points, each display point represents a span (bucket) of 26.5 MHz. This is far wider than the maximum 5 MHz resolution bandwidth.
As a result, the true amplitude of a comb tooth is shown only if its mixing product happens to fall at the center of the IF when the sample is taken.
Therefore, sample detection does not catch all the signals, nor does it necessarily reflect the true peak values of the displayed signals. When resolution bandwidth is narrower than the sample interval (i.e., the bucket width), the sample mode can give erroneous results.
The actual comb over a 500 MHz span using peak (positive) detection
Peak (positive) detection

One way to ensure that all sinusoids are reported at their true amplitudes is to display the maximum value encountered in each bucket. This is the positive peak detection mode, or peak. Peak is the default mode offered on many spectrum analyzers because it ensures that no sinusoid is missed, regardless of the ratio between resolution bandwidth and bucket width. However, unlike sample mode, peak does not give a good representation of random noise because it displays only the maximum value in each bucket and ignores the true randomness of the noise. So spectrum analyzers that use peak detection as their primary mode generally also offer the sample mode as an alternative.
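The three simplest detectors can be sketched in a few lines of Python. Each function reduces the list of amplitude samples collected in one bucket to the single value that gets plotted; this is a simplified model for illustration, not any particular instrument's implementation:

```python
def sample_detect(bucket):
    # keep the instantaneous value at the center of the bucket
    return bucket[len(bucket) // 2]

def positive_peak_detect(bucket):
    # keep the maximum value encountered in the bucket
    return max(bucket)

def negative_peak_detect(bucket):
    # keep the minimum value encountered in the bucket
    return min(bucket)

samples = [1, 4, 3, 2, 0]          # amplitude samples in one bucket
print(sample_detect(samples))      # 3 (center sample)
print(positive_peak_detect(samples))  # 4
print(negative_peak_detect(samples))  # 0
```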
Normal mode
Sample mode
What happens when a sinusoidal signal is encountered? We know that as a mixing product is swept past the IF filter, an analyzer traces out the shape of the filter on the display. If the filter shape is spread over many display points, then we encounter a situation in which the displayed signal only rises as the mixing product approaches the center frequency of the filter and only falls as the mixing product moves away from the filter center frequency. In either of these cases, the pos-peak and neg-peak detectors sense an amplitude change in only one direction, and, according to the normal detection algorithm, the maximum value in each bucket is displayed.
Normal detection displays maximum values in buckets where signal only rises or only falls
What happens when the resolution bandwidth is narrow, relative to a bucket? The signal will both rise and fall during the bucket. If the bucket happens to be an odd-numbered one, all is well. The maximum value encountered in the bucket is simply plotted as the next data point. However, if the bucket is even-numbered, then the minimum value in the bucket is plotted. Depending on the ratio of resolution bandwidth to bucket width, the minimum value can differ from the true peak value (the one we want displayed) by a little or a lot. In the extreme, when the bucket is much wider than the resolution bandwidth, the difference between the maximum and minimum values encountered in the bucket is the full difference between the peak signal value and the noise.
See bucket 6. The peak value of the previous bucket is always compared to that of the current bucket. The greater of the two values is displayed if the bucket number is odd as depicted in bucket 7. The signal peak actually occurs in bucket 6 but is not displayed until bucket 7.
If the signal only rises or only falls within a bucket, the peak is displayed. This process may cause a maximum value to be displayed one data point too far to the right, but the offset is usually only a small percentage of the span. Some spectrum analyzers compensate for this potential effect by moving the LO start and stop frequencies.
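The alternating max/min rule described above can be sketched as follows. This is a much-simplified model of the normal ("rosenfell") algorithm, assuming the signal both rises and falls within every bucket; real instruments handle the rise-only and fall-only cases as well:

```python
def normal_detect(buckets):
    # even-indexed buckets plot the minimum but remember the maximum;
    # odd-indexed buckets plot the larger of their own maximum and the
    # peak carried over from the previous bucket, so no peak is lost
    out = []
    carry = float("-inf")
    for i, samples in enumerate(buckets):
        hi, lo = max(samples), min(samples)
        if i % 2 == 0:
            out.append(lo)
            carry = hi
        else:
            out.append(max(hi, carry))
            carry = float("-inf")
    return out

# the peak (5) occurs in the first bucket but is displayed in the second,
# one data point to the right, as described in the text
print(normal_detect([[1, 5, 2], [0, 3, 1]]))  # [1, 5]
```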
Another type of error occurs when two peaks are displayed although only one actually exists (figure below). The outline of the single peak can be confirmed by using peak detection with a wider RBW. In summary, peak detection is best for locating CW signals well out of the noise, sample is best for looking at noise, and normal is best for viewing signals and noise.
Normal detection shows two peaks when actually only one peak exists
Average detection
Although modern digital modulation schemes have noise-like characteristics, sample detection does not always provide us with the information we need. For instance, when taking a channel power measurement on a W-CDMA signal, integration of the rms values is required. This measurement involves summing power across a range of analyzer frequency buckets. Sample detection does not provide this. While spectrum analyzers typically collect amplitude data many times in each bucket, sample detection keeps only one of those values and throws away the rest. On the other hand, an averaging detector uses all the data values collected within the time (and frequency) interval of a bucket. Once we have digitized the data, and knowing the circumstances under which they were digitized, we can manipulate the data in a variety of ways to achieve the desired results.
Some spectrum analyzers refer to the averaging detector as an rms detector when it averages power (based on the root mean square of voltage). Many analyzers have an average detector that can average the power, voltage, or log of the signal by including a separate control to select the averaging type:
Power (rms) averaging averages the rms levels: the rms voltage is the square root of the mean of the squares of the voltage data measured during the bucket interval, and the square of this rms voltage divided by the characteristic input impedance of the spectrum analyzer (normally 50 ohms) gives the power. Power averaging calculates the true average power and is best for measuring the power of complex signals.

Voltage averaging averages the linear voltage data of the envelope signal measured during the bucket interval. It is often used in EMI testing for measuring narrowband signals. Voltage averaging is also useful for observing the rise and fall behavior of AM or pulse-modulated signals such as radar and TDMA transmitters.
Log-power (video) averaging averages the logarithmic amplitude values (dB) of the envelope signal measured during the bucket interval. Log power averaging is best for observing sinusoidal signals, especially those near noise. Thus, using the average detector with the averaging type set to power provides true average power based upon rms voltage, while the average detector with the averaging type set to voltage acts as a general-purpose average detector. The average detector with the averaging type set to log has no other equivalent.
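The three averaging types can be illustrated with a short sketch that reduces a list of envelope-voltage samples from one bucket in three different ways (the function names are illustrative, not instrument commands):

```python
import math

def power_avg_dbm(volts, r_ohms=50.0):
    # true average power: mean of v^2 over R, expressed in dBm
    p_watts = sum(v * v for v in volts) / len(volts) / r_ohms
    return 10 * math.log10(p_watts / 1e-3)

def voltage_avg(volts):
    # linear average of the envelope voltage
    return sum(volts) / len(volts)

def log_avg_dbm(volts, r_ohms=50.0):
    # average of the per-sample logarithmic (dBm) values
    dbm = [10 * math.log10((v * v / r_ohms) / 1e-3) for v in volts]
    return sum(dbm) / len(dbm)
```

For a constant envelope all three agree on the signal level; for a fluctuating (noise-like) envelope the log average reads lower than the power average, which is why only power averaging gives the true average power of complex signals.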
Average detection is an improvement over using sample detection for the determination of power. Sample detection requires multiple sweeps to collect enough data points to give us accurate average power information. Average detection changes channel power measurements from a summation over a range of buckets into an integration over the time interval representing a range of frequencies in a swept analyzer. In a fast Fourier transform (FFT) analyzer, the summation used for channel power measurements changes from a summation over display buckets to a summation over FFT bins. In both the swept and FFT cases, the integration captures all the power information available, rather than just that which is sampled by the sample detector. As a result, the average detector yields a result with lower variance for the same measurement time. In swept analysis, it also allows the convenience of reducing variance simply by extending the sweep time.
EMI detectors: average and quasi-peak detection

An important application of average detection is characterizing devices for electromagnetic interference (EMI). In this case, voltage averaging is used for measurement of narrowband signals that might be masked by the presence of broadband impulsive noise. The average detection used in EMI instruments takes an envelope-detected signal and passes it through a low-pass filter with a bandwidth much less than the RBW. The filter integrates (averages) the higher-frequency components such as noise. To perform this type of detection in an older spectrum analyzer that doesn't have a built-in voltage averaging detector function, set the analyzer in linear mode and select a video filter with a cut-off frequency below the lowest PRF of the measured signal.
Quasi-peak detectors (QPD) are also used in EMI testing. QPD is a weighted form of peak detection. The measured value of the QPD drops as the repetition rate of the measured signal decreases. Thus, an impulsive signal with a given peak amplitude and a 10 Hz pulse repetition rate will have a lower quasi-peak value than a signal with the same peak amplitude but having a 1 kHz repetition rate. QPD is a way of measuring and quantifying the annoyance factor of a signal. Imagine listening to a radio station suffering from interference. If you hear an occasional pop caused by noise once every few seconds, you can still listen to the program without too much trouble. However, if that same amplitude pop occurs 60 times per second, it becomes extremely annoying, making the radio program intolerable to listen to.
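The repetition-rate weighting can be modeled with a simple charge/discharge detector: fast attack, slow decay. The time constants below resemble CISPR-style values but are assumptions chosen for illustration, not a compliant QPD implementation:

```python
def quasi_peak(envelope, dt, tc_charge=1e-3, tc_discharge=160e-3):
    # one-pole charge/discharge model of a quasi-peak detector
    # (time constants are assumed illustrative values)
    out = 0.0
    trace = []
    for v in envelope:
        if v > out:
            out += (v - out) * (dt / tc_charge)   # charge quickly toward the peak
        else:
            out -= out * (dt / tc_discharge)      # bleed off slowly between pulses
        trace.append(out)
    return trace

dt = 1e-4  # 0.1 ms sample spacing
# two pulse trains with identical peak amplitude but different rates
fast = [1.0 if i % 10 == 0 else 0.0 for i in range(20000)]    # 1 kHz rate
slow = [1.0 if i % 1000 == 0 else 0.0 for i in range(20000)]  # 10 Hz rate
```

Running both trains through the detector shows the behavior described above: the 1 kHz train settles to a much higher quasi-peak reading than the 10 Hz train, mirroring the annoyance-factor weighting.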
Time gating
Time-gated spectrum analysis allows you to obtain spectral information about signals occupying the same part of the frequency spectrum that are separated in the time domain. Using an external trigger signal to coordinate the separation of these signals, you can perform the following operations:
Measure any one of several signals separated in time; for example, you can separate the spectra of two radios time-sharing a single frequency
Measure the spectrum of a signal in one time slot of a TDMA system
Exclude the spectrum of interfering signals, such as periodic pulse edge transients that exist for only a limited time
In some cases, time-gating capability enables you to perform measurements that would otherwise be very difficult, if not impossible. For example, consider the following figure, which shows a simplified digital mobile-radio signal in which two radios, #1 and #2, are time-sharing a single frequency channel. Each radio transmits a single 1 ms burst and then shuts off while the other radio transmits for 1 ms. The challenge is to measure the unique frequency spectrum of each transmitter.
Unfortunately, a traditional spectrum analyzer cannot do that; it simply shows the combined spectrum.
Using the time-gate capability and an external trigger signal, you can see the spectrum of just radio #1 (or radio #2 if you wished) and identify it as the source of the spurious signal:
If you are fortunate enough to have a gating signal that is only true during the period of interest, then you can use level gating as shown.
However, in many cases the gating signal will not perfectly coincide with the time we want to measure the spectrum. Therefore, a more flexible approach is to use edge triggering in conjunction with a specified gate delay and gate length to precisely define the time period in which to measure the signal.
Consider the GSM signal with eight time slots. Each burst is 0.577 ms and the full frame is 4.615 ms. We may be interested in the spectrum of the signal during a specific time slot. For the purposes of this example, let's assume that we are using only two of the eight available time slots.
A TDMA format signal (in this case, GSM) with eight time slots
When we look at this signal in the frequency domain, we observe an unwanted spurious signal present in the spectrum. In order to troubleshoot the problem and find the source of this interfering signal, we need to determine the time slot in which it is occurring. If we wish to look at time slot 2, we set up the gate to trigger on the rising edge of burst 0, then specify a gate delay of 1.3 ms and a gate length of 0.3 ms.
The gate delay assures that we measure the spectrum of time slot 2 only while the burst is fully on. Note that the gate delay value is carefully selected to avoid the rising edge of the burst, since we want to allow time for the RBW-filtered signal to settle out before we make a measurement. Similarly, the gate length is chosen to avoid the falling edge of the burst. Looking at the spectrum of time slot 2 reveals that the spurious signal is NOT caused by this burst.
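A quick arithmetic check confirms that the chosen gate sits entirely inside time slot 2. The 0.577 ms burst duration and the gate settings are taken from the example above; the slot boundaries are computed from the trigger on burst 0's rising edge:

```python
SLOT_MS = 0.577          # duration of one GSM burst
GATE_DELAY_MS = 1.3      # gate delay from the example
GATE_LENGTH_MS = 0.3     # gate length from the example

slot2_start = 2 * SLOT_MS   # 1.154 ms after the trigger
slot2_end = 3 * SLOT_MS     # 1.731 ms after the trigger

gate_start = GATE_DELAY_MS
gate_end = GATE_DELAY_MS + GATE_LENGTH_MS

# the gate must open after the burst's rising edge has settled
# and close before its falling edge
inside = slot2_start < gate_start and gate_end < slot2_end
print(inside)  # True
```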
There are three common methods used to perform time gating:

Gated FFT
Gated video
Gated sweep
Specifications
What do you need to know about a spectrum analyzer in order to make sure you choose one that will make your particular measurements, and make them adequately? Very basically, you need to know: What is the frequency range? What is the amplitude range (maximum input and sensitivity)? To what level can I measure the difference between two signals, both in amplitude (dynamic range) and frequency (resolution)? And how accurate are my measurements once I've made them?
Specifications: Frequency Range
The first and foremost specification you want to know is the frequency range of the analyzer. Not only do you want a spectrum analyzer that will cover the fundamental frequencies of your application, but don't forget harmonics or spurious signals on the high end, or baseband and IF on the low end.
An example of needing higher frequency capability is in wireless communications. Some of the cellular standards require that you measure out to the tenth harmonic of your system. If you're working at 900 MHz, that means you need an analyzer that has a high frequency of 10 * 900 MHz = 9 GHz. Also, although we are talking about RF analyzers, you want it to be able to measure your lower frequency baseband and IF signals as well.
Specifications: Accuracy
The second area to understand is accuracy; how accurate will my results be in both frequency and amplitude? When talking about accuracy specifications, it is important to understand that there is both an absolute accuracy specification, and a relative accuracy specification.
The absolute measurement is made with a single marker. For example, the frequency and power level of a carrier for distortion measurements is an absolute measurement. The relative measurement is made with the relative, or delta, marker. Examples include modulation frequencies, channel spacing, pulse repetition frequencies, and offset frequencies relative to the carrier. Relative measurements are more accurate than absolute measurements.
There are two major design categories of modern spectrum analyzers: synthesized and free-running. In a synthesized analyzer, some or all of the oscillators are phase-locked to a single, traceable reference oscillator. These analyzers have typical accuracies on the order of a few hundred hertz. This design method provides the ultimate in performance, with commensurate complexity and cost.
Span error is often split into two specs, based on the fact that many spectrum analyzers are fully synthesized for small spans but are open-loop tuned for larger spans. (The slide shows only one span specification.) RBW error can be appreciable in some spectrum analyzers, especially for larger RBW settings, but in most cases it is much smaller than the span error.
If we're measuring a signal at 2 GHz, using a 400 kHz span and a 3 kHz RBW, we can determine our frequency accuracy as follows: Frequency reference accuracy is calculated by adding up the sources of error shown (all of which can be found on the datasheet):
freq ref accuracy = 1.0 x 10^-7 (aging) + 0.1 x 10^-7 (temp stability) + 0.1 x 10^-7 (settability) + 0.1 x 10^-7 (15-minute warm-up) = 1.3 x 10^-7/yr ref error
Therefore, our frequency accuracy is:

(2 x 10^9 Hz) x (1.3 x 10^-7/yr) = 260 Hz
1% of 400 kHz span = 4000 Hz
15% of 3 kHz RBW = 450 Hz
residual error = 10 Hz

Total = ±4720 Hz
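The error budget for this example can be totaled in a few lines. The spec values are the ones stated above (1% of span and 15% of RBW are worst-case contributions from the datasheet):

```python
freq_hz = 2e9              # carrier frequency
ref_error = 1.3e-7         # frequency reference accuracy per year
span_hz, rbw_hz = 400e3, 3e3

terms_hz = {
    "reference": freq_hz * ref_error,  # 260 Hz
    "span (1%)": 0.01 * span_hz,       # 4000 Hz
    "RBW (15%)": 0.15 * rbw_hz,        # 450 Hz
    "residual": 10.0,                  # 10 Hz
}
total_hz = sum(terms_hz.values())
print(round(total_hz))  # ≈ 4720 Hz worst-case uncertainty
```

Note that the span term dominates: improving the reference oscillator would barely change the total for this wide-span, wide-RBW setup.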
Most spectrum analyzers are specified in terms of both absolute and relative amplitude accuracy. When we make relative measurements on an incoming signal, we use some part of the signal as a reference. For example, when we make second-harmonic distortion measurements, we use the fundamental of the signal as our reference. Absolute values do not come into play; we are interested only in how the second harmonic differs in amplitude from the fundamental.
Display fidelity
Frequency response
RF input attenuator
Reference level
Resolution bandwidth
CRT scaling
Display fidelity and frequency response will directly affect the amplitude accuracy.
The other four items, on the other hand, involve control changes made during the course of a measurement, and therefore affect accuracy only when changed. In other words, if only the frequency control is changed when making the relative measurement, these four uncertainties drop out. If they are changed, however, their uncertainties will further degrade accuracy.
The frequency response, or flatness, of the spectrum analyzer also plays a part in relative amplitude uncertainties and is frequency-range dependent. A low-frequency RF analyzer might have a frequency response of ±0.5 dB. On the other hand, a microwave spectrum analyzer tuning in the 20 GHz range could well have a frequency response in excess of ±4 dB.
The specification assumes the worst-case situation, where the frequency response varies over the full amplitude range, in this case plus 1 dB and minus 1 dB. The uncertainty between two signals in the same band (the spectrum analyzer's frequency range is actually split into several bands) is 2 x 1 dB = 2 dB, since the amplitude uncertainty at each signal's position could fall on the + and - extremes of the specification window.
Because an RF input attenuator must operate over the entire frequency range of the analyzer, its step accuracy, like frequency response, is a function of frequency. At low RF frequencies, we expect the attenuator to be quite good; at 20 GHz, not as good. The IF gain (or reference level control) has uncertainties as well, but should be more accurate than the input attenuator because it operates at only one frequency. Since different filters have different insertion losses, changing the RBW can also degrade accuracy. Finally, changing display scaling from say, 10 dB/div to 1 dB/div or to linear may also introduce uncertainty in the amplitude measurement.
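To see how these terms combine, here is a worst-case tally. The dB figures below are hypothetical values invented for illustration, not specs of any particular instrument:

```python
# hypothetical worst-case uncertainties, in dB (illustrative values only)
uncertainties_db = {
    "frequency response": 1.0,
    "input attenuator switching": 0.5,
    "reference level (IF gain)": 0.3,
    "RBW switching": 0.2,
    "display scale fidelity": 0.85,
}

# worst case assumes every error lands at its limit with the same sign
worst_case_db = sum(uncertainties_db.values())
print(round(worst_case_db, 2))  # 2.85
```

The sum illustrates the point made in the text: each control left unchanged during a measurement removes its term from this total.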
Since our unknown signal to be measured is at a different frequency, we must change the frequency control. Since it is at a different amplitude, we may change reference level to bring it to the reference level, for best accuracy. Hence, absolute amplitude accuracy depends on calibrator accuracy, frequency response, and reference level uncertainty (also known as IF gain uncertainty).
There are some things that you can do to improve the situation. First of all, you should know the specifications for your particular spectrum analyzer. These specs may be good enough over the range in which you are making your measurement. Also, before taking any data, you can step through a measurement to see if any controls can be left unchanged. If so, all uncertainties associated with changing these controls drop out. You may be able to trade off reference level against display fidelity, using whichever is more accurate and eliminating the other as an uncertainty factor. If you have a more accurate calibrator, or one closer to the frequency of interest, you may wish to use that in lieu of the built-in calibrator.
Most analyzers available today have self-calibration routines which may be manual or automatic. These routines generate error-coefficients (for example, amplitude changes versus resolution bandwidth) that the analyzer uses later to correct measured data. As a result, these self-calibration routines allow us to make good amplitude measurements with a spectrum analyzer and give us more freedom to change controls during the course of a measurement.
Specifications: Resolution
Resolution is an important specification when you are trying to measure signals that are close together and want to be able to distinguish them from each other. We saw that the IF filter bandwidth is also known as the resolution bandwidth (RBW). This is because it is the IF filter bandwidth and shape that determines the resolvability between signals. In addition to filter bandwidth, the selectivity, filter type, residual FM, and noise sidebands are factors to consider in determining useful resolution.
One of the first things to note is that a signal cannot be displayed as an infinitely narrow line. It has some width associated with it. This shape is the analyzer's tracing of its own IF filter shape as it tunes past a signal. Thus, if we change the filter bandwidth, we change the width of the displayed response. Datasheets specify the 3 dB or 6 dB bandwidth. This concept enforces the idea that it is the IF filter bandwidth and shape that determines the resolvability between signals.
When measuring two signals of equal-amplitude, the value of the selected RBW tells us how close together they can be and still be distinguishable from one another (by a 3 dB 'dip'). For example, if two signals are 10 kHz apart, a 10 kHz RBW easily separates the responses. However, with wider RBWs, the two signals may appear as one. In general then, two equal-amplitude signals can be resolved if their separation is greater than or equal to the 3 dB bandwidth of the selected resolution bandwidth filter. NOTE: Since the two signals interact when both are present within the RBW, you should use a Video BW about 10 times smaller than the Res BW to smooth the responses.
For example, say we are doing a two-tone test where the signals are separated by 10 kHz. With a 10 kHz RBW, resolution of the equal amplitude tones is not a problem, as we have seen. But the distortion products, which can be 50 dB down and 10 kHz away, could be buried.
Let's try a 3 kHz RBW which has a selectivity of 15:1. The filter width 60 dB down is 45 kHz (15 x 3 kHz), and therefore, distortion will be hidden under the skirt of the response of the test tone. If we switch to a narrower filter (for example, a 1 kHz filter) the 60 dB bandwidth is 15 kHz (15 x 1 kHz), and the distortion products are easily visible (because one-half of the 60 dB bandwidth is 7.5 kHz, which is less than the separation of the sidebands). So our required RBW for the measurement must be 1 kHz.
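The selectivity arithmetic above generalizes to a one-line test. This is a sketch: "visible" here simply means the offset clears half the 60 dB bandwidth, as in the example:

```python
def distortion_visible(offset_hz, rbw_hz, selectivity=15.0):
    # a small signal ~60 dB down is visible only if its offset from the
    # large signal exceeds half the 60 dB bandwidth of the RBW filter
    bw_60db_hz = selectivity * rbw_hz
    return offset_hz >= bw_60db_hz / 2

print(distortion_visible(10e3, 3e3))  # False: hidden under the 45 kHz skirt
print(distortion_visible(10e3, 1e3))  # True: 7.5 kHz half-width < 10 kHz offset
```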
This tells us then, that two signals unequal in amplitude by 60 dB must be separated by at least one half the 60 dB bandwidth to resolve the smaller signal. Hence, selectivity is key in determining the resolution of unequal amplitude signals.
Another factor affecting resolution is the frequency stability of the spectrum analyzer's local oscillator. This inherent short-term frequency instability of an oscillator is referred to as residual FM. If the spectrum analyzer's RBW is less than the peak-to-peak FM, this residual FM can be seen, and it looks as if the signal has been "smeared". You cannot tell whether the signal or the LO is the source of the instability. This "smearing" also means that two signals spaced within the specified residual FM cannot be resolved.
This means that the spectrum analyzer's residual FM dictates the minimum resolution bandwidth allowable, which in turn determines the minimum spacing of equal amplitude signals. Phase locking the LOs to a reference reduces the residual FM and reduces the minimum allowable RBW. Higher performance spectrum analyzers are more expensive because they have better phase locking schemes with lower residual FM and smaller minimum RBWs.
Phase noise is specified in terms of dBc, or dB relative to a carrier, and is displayed only when the signal is far enough above the system noise floor. It becomes the ultimate limitation in an analyzer's ability to resolve signals of unequal amplitude. The figure on the previous slide shows that although we may have determined that we should be able to resolve two signals based on the 3 dB bandwidth and selectivity, we find that the phase noise actually covers up the smaller signal. Noise sideband specifications are typically normalized to a 1 Hz RBW. Therefore, if we need to measure a signal 50 dB down from a carrier at a 10 kHz offset in a 1 kHz RBW, we need a phase noise spec of -80 dBc/1 Hz RBW at 10 kHz offset. Note: -50 dBc in a 1 kHz RBW can be normalized to a 1 Hz RBW using the following equation: (-50 dBc - [10*log(1 kHz/1 Hz)]) = (-50 - [30]) = -80 dBc.
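The normalization in that note is a one-liner worth keeping handy (the function name is illustrative):

```python
import math

def to_dbc_per_hz(dbc, rbw_hz):
    # normalize a noise sideband level measured in a given RBW
    # to the standard 1 Hz bandwidth
    return dbc - 10 * math.log10(rbw_hz)

print(to_dbc_per_hz(-50, 1e3))  # -80.0 dBc/Hz
```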
Spectrum analyzers have auto-coupled sweep time, which automatically chooses the fastest allowable sweep time based upon the selected span, RBW, and VBW.
If the sweep time chosen manually is too fast, a message is displayed on the screen. Spectrum analyzers usually have a 1-10 or a 1-3-10 sequence of RBWs; some even have 10% steps. More RBWs are better because this allows choosing just enough resolution to make the measurement at the fastest possible sweep time. For example, if 1 kHz resolution (1 sec sweep time) is not enough resolution, a 1-3-10 sequence analyzer can make the measurement in a 300 Hz Res BW (10 sec sweep time), whereas the 1-10 sequence analyzer must use a 100 Hz Res BW (100 sec sweep time)!
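The sweep-time penalties in that example follow from the classic swept-analyzer relation, sweep time ≈ k · span / RBW², where k is typically 2 to 3 (the value of 2.5 below is an assumption):

```python
def sweep_time_s(span_hz, rbw_hz, k=2.5):
    # minimum sweep time for the IF filter to respond fully while sweeping
    return k * span_hz / rbw_hz ** 2

base = sweep_time_s(1e6, 1e3)            # 1 kHz RBW over a 1 MHz span
print(sweep_time_s(1e6, 300) / base)     # ≈ 11.1: ~10x slower at 300 Hz
print(sweep_time_s(1e6, 100) / base)     # 100.0: 100x slower at 100 Hz
```

The inverse-square dependence is why a single step down in RBW costs roughly a factor of ten in sweep time.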
Specifications: Sensitivity/DANL
A Spectrum Analyzer Generates and Amplifies Noise Just Like Any Active Circuit
One of the primary uses of a spectrum analyzer is to search out and measure low-level signals. The sensitivity of any receiver is an indication of how well it can measure small signals. A perfect receiver would add no additional noise to the natural amount of thermal noise present in all electronic systems, represented by kTB (k = Boltzmann's constant, T = temperature, and B = bandwidth). In practice, all receivers, including spectrum analyzers, add some amount of internally generated noise.
Spectrum analyzers usually characterize this by specifying the displayed average noise level (DANL) in dBm, with the smallest RBW setting. DANL is just another term for the noise floor of the instrument given a particular bandwidth. It represents the best-case sensitivity of the spectrum analyzer, and is the ultimate limitation in making measurements on small signals.
An input signal below this noise level cannot be detected. Generally, sensitivity is on the order of -90 dBm to -145 dBm.
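The kTB floor mentioned above is easy to compute; at room temperature (290 K assumed here) it works out to the familiar -174 dBm in a 1 Hz bandwidth, and scales with 10·log of the bandwidth:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def ktb_dbm(bandwidth_hz, temp_k=290.0):
    # thermal noise power kTB, expressed in dBm
    p_watts = K_BOLTZMANN * temp_k * bandwidth_hz
    return 10 * math.log10(p_watts / 1e-3)

print(round(ktb_dbm(1), 1))    # -174.0 dBm in a 1 Hz bandwidth
print(round(ktb_dbm(1e3), 1))  # -144.0 dBm in a 1 kHz RBW
```

Any DANL spec can be compared against this floor: the difference between the two is, in effect, the analyzer's noise figure in that bandwidth.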
It is important to know the sensitivity capability of your analyzer in order to determine if it will adequately measure your low-level signals.