The Sampling Theorem

1 Review of Fourier Transforms
The Fourier transform of a waveform u(t) is defined as

U(f) = \int_{-\infty}^{\infty} u(t)\, e^{-j2\pi f t}\, dt.    (1)
This is a mapping from one function to another function. The first function is the waveform u(t), a mapping R → R. As a waveform, it maps time (in units of seconds) into real-valued amplitudes. The same definition can be used if u(t) is a complex-valued function of time, i.e., a mapping R → C. Later, when we study modulation, we often consider the Fourier transform for complex-valued functions. For now we can restrict our attention to real-valued functions, but will often write equations in such a way as to make the generalization to complex-valued functions easy. For example, we use the notation |u(t)|^2 to denote u^2(t) for real functions, but more generally to denote u(t)u^*(t).
For the most part, we restrict our attention to finite-energy functions, i.e., functions for which the integral \int |u(t)|^2\, dt exists with a finite value. There are some very common
functions that are not finite-energy, including any non-zero constant function and any
sinusoid. Impulse functions are also not included as finite-energy functions (in fact, impulses are not really functions at all, since, for example, the property that a unit impulse has an integral equal to 1 does not follow from the value of the function at any t). Even if one ignores the fact that impulses are not really functions, the energy of an impulse in any reasonable sense is infinite.
In what follows, we explicitly define a waveform as a finite-energy function. Obviously
sinusoids and other infinite energy functions are very important for many types of applications, and generalized functions such as impulses are similarly important. However,
the properties of waveforms that we will rely on most strongly here are not shared by infinite-energy functions. Note that physical waveforms neither persist forever nor take on infinite values, so we are not restricting the set of physical waveforms of interest, but only the set of models for those waveforms.
There is a very important theorem, due to Plancherel, which states that if u(t) is finite-energy, then the integral U(f) in (1) exists for each f, and U(f) satisfies the inverse Fourier transform,

u(t) = \int_{-\infty}^{\infty} U(f)\, e^{j2\pi f t}\, df.    (2)

More precisely, the inverse transform must be interpreted as a limit in energy: defining the finite-bandwidth inverse transform

u_B(t) = \int_{-B}^{B} U(f)\, e^{j2\pi f t}\, df,    (3)

we have

\lim_{B \to \infty} \int_{-\infty}^{\infty} |u(t) - u_B(t)|^2\, dt = 0.    (4)
This says that the energy difference between u(t) and the finite bandwidth transforms in
(3) goes to zero with increasing B. In essence this allows u(t) to differ from the inverse
transform of its transform at isolated points. For example, if u(t) is discontinuous at a
number of points, we would not expect the inverse of its transform to converge to u(t) at
those points. In fact, changing u(t) at a single point cannot change its Fourier transform.
The Fourier transform itself in (1) must be interpreted in the same way. Define the finite-duration transform

U_A(f) = \int_{-A}^{A} u(t)\, e^{-j2\pi f t}\, dt.    (5)

Then

\lim_{A \to \infty} \int_{-\infty}^{\infty} |U(f) - U_A(f)|^2\, df = 0.    (6)
A final part of this result is that the energy in u(t) is the same as the energy in U(f), i.e.,

\int_{-\infty}^{\infty} |u(t)|^2\, dt = \int_{-\infty}^{\infty} |U(f)|^2\, df.    (7)
Thus, every finite-energy function has a finite-energy transform with the same energy,
and these transforms satisfy the convergence properties in (4) and (6).
These complicated-sounding conditions say that for finite-energy functions, we can approximate u(t) by truncating the waveform to very large limits (-A, A). The Fourier transform will not be substantially changed for A sufficiently large. Similarly, we can truncate U(f) within very large frequency limits and the time function u(t) will not be substantially changed.
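The energy relation (7) is easy to check numerically. The sketch below is a minimal illustration assuming numpy; the Gaussian test pulse, window, and step size are my choices, not from the text. It approximates U(f) by a scaled FFT and compares the two energies.

```python
import numpy as np

# Numerical sketch of the energy relation (7), using the Gaussian pulse
# u(t) = exp(-pi t^2) as an illustrative finite-energy waveform. The FFT,
# scaled by dt, serves as a Riemann-sum approximation of the integral (1);
# the time-origin offset only affects the phase of U(f), not |U(f)|.

dt = 0.01
t = np.arange(-50, 50, dt)          # window wide enough for u to decay
u = np.exp(-np.pi * t**2)

U = np.fft.fft(u) * dt              # approximate Fourier transform values
f = np.fft.fftfreq(len(t), d=dt)    # corresponding frequency grid
df = f[1] - f[0]

energy_time = np.sum(np.abs(u)**2) * dt
energy_freq = np.sum(np.abs(U)**2) * df
print(energy_time, energy_freq)     # both approximate 1/sqrt(2)
```

For this pulse the exact energy is \int e^{-2\pi t^2}\, dt = 1/\sqrt{2}, so both printed numbers should be close to 0.7071.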
Some useful Fourier transform pairs, valid for any finite-energy waveforms u(t) and v(t), are:

u^*(-t) \;\leftrightarrow\; U^*(f)    (8)

u(t - \tau) \;\leftrightarrow\; e^{-j2\pi f \tau}\, U(f)    (9)

u(t/T) \;\leftrightarrow\; T\, U(fT)    (10)

\mathrm{sinc}(t) = \frac{\sin(\pi t)}{\pi t} \;\leftrightarrow\; \mathrm{rect}(f) = \begin{cases} 1 & \text{for } |f| \le 1/2 \\ 0 & \text{for } |f| > 1/2 \end{cases}    (11)

\int u(\tau)\, v(t - \tau)\, d\tau \;\leftrightarrow\; U(f)\, V(f)    (12)

\int u(\tau)\, v^*(\tau - t)\, d\tau \;\leftrightarrow\; U(f)\, V^*(f)    (13)
Equations (8)-(10) follow directly from (1) by substitution and changing the variable of integration. Equation (11) follows from the inverse transform theorem (2). The convolution pair (12) follows by Fourier transforming the left side to get \int\!\!\int u(\tau)\, v(t-\tau)\, e^{-j2\pi f t}\, d\tau\, dt; factoring e^{-j2\pi f t} as e^{-j2\pi f \tau}\, e^{-j2\pi f (t-\tau)} and separating the integrals yields the right side. Equation (13) gives the transform of the cross-correlation of u(t) and v(t); it follows by using (8) in (12).
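The convolution pair (12) has an exact discrete analogue, which gives a quick numerical check. The sketch below (assuming numpy; the random test vectors are arbitrary choices) compares direct convolution with multiplication of zero-padded DFTs.

```python
import numpy as np

# Discrete check of the convolution pair (12): the DFT of u * v equals
# the product of the DFTs, provided both are zero-padded to the full
# length of the linear convolution (to avoid circular wrap-around).

rng = np.random.default_rng(0)
u = rng.standard_normal(64)
v = rng.standard_normal(64)

n = len(u) + len(v) - 1                 # length of the linear convolution
direct = np.convolve(u, v)              # time-domain convolution
via_fft = np.fft.ifft(np.fft.fft(u, n) * np.fft.fft(v, n)).real

max_diff = np.max(np.abs(direct - via_fft))
print(max_diff)                         # agreement up to rounding error
```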
Two useful special cases of any Fourier transform pair are:

u(0) = \int_{-\infty}^{\infty} U(f)\, df,    (14)

U(0) = \int_{-\infty}^{\infty} u(t)\, dt.    (15)
These are useful sanity checks on any Fourier transform pair, and in particular are often the best way to compute or check multiplicative constants. They are also useful for deriving various properties of Fourier transforms. The most useful of these is Parseval's theorem, which follows from applying (14) to (13):
\int_{-\infty}^{\infty} u(t)\, v^*(t)\, dt = \int_{-\infty}^{\infty} U(f)\, V^*(f)\, df.    (16)
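The sanity checks (14) and (15) can be exercised on the known pair (11), sinc(t) ↔ rect(f). The sketch below (assuming numpy; the integration window and step are illustrative choices) verifies (15) numerically: the integral of sinc(t) should equal U(0) = rect(0) = 1.

```python
import numpy as np

# Checking (15) on the pair (11): U(0) = rect(0) = 1 should equal the
# integral of sinc(t). The slowly decaying sinc tails make the numerical
# integral converge slowly, so a wide window is used.

dt = 0.001
t = np.arange(-2000, 2000, dt)
u = np.sinc(t)                    # numpy's sinc(t) = sin(pi t) / (pi t)

integral_u = np.sum(u) * dt       # approximates U(0) via (15)
print(integral_u)                 # close to 1
```

The companion check (14) is immediate for this pair, since u(0) = sinc(0) = 1 and rect(f) integrates to 1.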
Two waveforms u(t) and v(t) are said to be orthogonal if \int u(t)\, v^*(t)\, dt = 0. From (16), we see that u(t) and v(t) are orthogonal if and only if U(f) and V(f) are orthogonal; i.e.,

\int_{-\infty}^{\infty} u(t)\, v^*(t)\, dt = 0 \iff \int_{-\infty}^{\infty} U(f)\, V^*(f)\, df = 0.    (17)
Finally, we note that there is almost complete symmetry between time-domain and frequency-domain functions in Fourier transform theory, apart from the difference in sign in (1) and (2). Indeed, if u(t) ↔ U(f) is a Fourier transform pair, then so is U(t) ↔ u(−f). Because of this time-frequency symmetry, we may translate any time-domain result to the frequency domain with the appropriate sign changes, and vice versa.
2 The Sampling Theorem

The Sampling Theorem shows that a continuous-time band-limited signal may be represented perfectly by its samples at uniform intervals of T seconds if T is small enough. In other words, the continuous-time signal may be reconstructed perfectly from its samples; sampling at a high enough rate is information-lossless.
An L2 waveform u(t) will be said to be band-limited to a frequency W Hz if its Fourier
transform satisfies U (f ) = 0 for |f | > W . The Sampling Theorem is then as follows:
Theorem 2.1 (Sampling Theorem) Let u(t) be a waveform (i.e., a finite-energy function) that is band-limited to W Hz. Then u(t) at all times t can be recovered from its samples {u(nT), n ∈ Z} at integer multiples of T = 1/(2W) sec, as follows:

u(t) = \sum_{n=-\infty}^{\infty} u(nT)\, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right).    (18)
To interpret (18), note the Fourier transform pair

\mathrm{sinc}\!\left(\frac{t - nT}{T}\right) \;\leftrightarrow\; \begin{cases} T\, e^{-j2\pi f nT} & \text{for } |f| < 1/(2T) \\ 0 & \text{for } |f| > 1/(2T) \end{cases}    (19)

which follows from the pairs (9), (10), and (11).
From this, we see that the right side of (18) is in fact band-limited to W .
We can interpret the right side of (18) as follows. Suppose we start with a real sequence ..., u(−T), u(0), u(T), u(2T), .... If we multiply u(nT) by sinc((t − nT)/T), then the resulting waveform is band-limited to W, has the value u(nT) at t = nT, and is 0 at t = jT for each integer j ≠ n. Thus, when we add the waveforms u(nT) sinc((t − nT)/T) over all n, the sum u(t) is band-limited to W and has value u(nT) at time t = nT for each integer n ∈ Z.
Thus we have shown that the right side of (18) is a waveform that is band-limited to W and has the values ..., u(−T), u(0), u(T), u(2T), ... at times t = nT. However, this is not the Sampling Theorem; rather, it is a sort of modulation theorem, showing that from a sequence {u(nT)} we can create a band-limited waveform having those values at times t = nT. We have not shown that we can go the other way, i.e., start with an arbitrary finite-energy band-limited waveform and represent it by its samples.
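The interpolation formula (18) can be tried numerically. In the sketch below (assuming numpy; the two-tone test signal, the truncation of the infinite sum, and the grids are illustrative choices), a signal band-limited to W = 4 Hz is sampled at T = 1/(2W) and rebuilt from a long but finite window of samples.

```python
import numpy as np

# Numerical sketch of (18): reconstruct a band-limited signal from its
# samples u(nT), T = 1/(2W). The infinite sum is truncated to |n| <= 400,
# which leaves a small residual error from the discarded sinc tails.

W = 4.0
T = 1 / (2 * W)

def u(t):
    # two tones at 1.5 Hz and 3 Hz, both below W = 4 Hz
    return np.cos(2*np.pi*1.5*t) + 0.5*np.sin(2*np.pi*3.0*t)

n = np.arange(-400, 401)                    # truncated sample index range
t_dense = np.linspace(-1.0, 1.0, 201)       # reconstruction grid

# u_hat(t) = sum_n u(nT) sinc((t - nT)/T), evaluated with broadcasting
u_hat = np.sum(u(n*T)[None, :] *
               np.sinc((t_dense[:, None] - n[None, :]*T) / T), axis=1)

err = np.max(np.abs(u_hat - u(t_dense)))
print(err)                                   # small truncation error
```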
We postpone this proof until reviewing the Fourier series expansion of a time-limited waveform. We assume the following result is familiar to you.
Proposition 1 (Time-domain Fourier series expansion) If u(t) is a waveform that is time-limited to some interval [\tau, \tau + T] of length T, then u(t) may be expanded as

u(t) = \sum_{n=-\infty}^{\infty} u_n\, e^{j2\pi n t/T}, \qquad \tau \le t \le \tau + T,    (20)

where the coefficients are given by

u_n = \frac{1}{T} \int_{\tau}^{\tau+T} u(t)\, e^{-j2\pi n t/T}\, dt.    (21)
Note that the Fourier series bears a close resemblance to the Fourier transform. The Fourier transform expresses an arbitrary (non-time-limited) waveform by a function R → C. In contrast, the Fourier series represents a time-limited waveform by a sequence of values, Z → C. Comparing (21) with (1), we see that U(n/T) = T u_n.
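Proposition 1 is easy to exercise numerically. The sketch below (assuming numpy; the smooth test function, interval, and truncation level are my choices, not from the text) computes the coefficients (21) by quadrature and compares the partial sum of (20) with the original waveform.

```python
import numpy as np

# Numerical sketch of (20)-(21) on [0, T]: compute Fourier coefficients
# u_n by a Riemann sum and rebuild u(t) from the truncated series. The
# test function is smooth with matching endpoint values, so the truncated
# series converges very quickly.

T = 2.0
t = np.linspace(0.0, T, 4000, endpoint=False)
dt = t[1] - t[0]
u = np.exp(np.cos(2*np.pi*t/T))

N = 20                                     # series truncated to |n| <= N
ns = np.arange(-N, N + 1)
# u_n = (1/T) * integral of u(t) exp(-j 2 pi n t / T) dt, as in (21)
coeffs = np.array([np.sum(u * np.exp(-2j*np.pi*k*t/T)) * dt / T for k in ns])

# Partial sum of (20)
u_rec = sum(c * np.exp(2j*np.pi*k*t/T) for k, c in zip(ns, coeffs)).real

err = np.max(np.abs(u_rec - u))
print(err)                                 # tiny for this smooth choice
```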
By time-frequency symmetry, we have equally a Fourier series expansion of a band-limited
Fourier transform, as follows:
Proposition 2 (Frequency-domain Fourier series expansion) If U(f) is the Fourier transform of a waveform that is band-limited to −W ≤ f ≤ W, then U(f) may be written as

U(f) = \sum_{n=-\infty}^{\infty} u_n\, e^{-j2\pi n f T}, \qquad -W \le f \le W,    (22)

where T = 1/(2W). Taking the inverse transform of (22) term by term, using the pair (19), yields

u(t) = \sum_{n=-\infty}^{\infty} \frac{u_n}{T}\, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right).    (23)
By evaluating both sides at t = nT , we see that u(nT ) = un /T . This completes the proof
of the Sampling Theorem, and shows that the sampling theorem is in fact just a form of
the Fourier series applied in the frequency domain.
3 Sampling a waveform

The energy in a band-limited waveform is determined by its samples:

\int_{-\infty}^{\infty} |u(t)|^2\, dt = T \sum_{n=-\infty}^{\infty} |u(nT)|^2.    (24)

Now suppose that the samples u(nT) are quantized (or otherwise altered) into values v(nT), and that a waveform is rebuilt from these values by the same sinc expansion:

v(t) = \sum_{n=-\infty}^{\infty} v(nT)\, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right).    (25)
It then follows that

\int_{-\infty}^{\infty} |v(t) - u(t)|^2\, dt = T \sum_{n=-\infty}^{\infty} |v(nT) - u(nT)|^2.    (26)
In other words, the mean square error in the quantization directly gives the mean square error in the waveform representation. This is one of the major reasons why mean square error is such a convenient measure of distortion: it carries over directly from samples to waveforms.
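Equation (26) can be checked numerically. In the sketch below (assuming numpy; the random decaying samples, the uniform quantizer step, and the integration grid are illustrative choices), the samples are quantized, both waveforms are rebuilt by sinc interpolation, and the waveform error energy is compared with T times the summed squared sample errors.

```python
import numpy as np

# Numerical sketch of (26): the energy of v(t) - u(t) should equal
# T times the summed squared sample errors. Both waveforms are built
# from their samples via the sinc expansion; finite grids introduce a
# small residual mismatch.

T = 0.5
n = np.arange(-60, 61)
rng = np.random.default_rng(1)
u_samples = rng.standard_normal(len(n)) * np.exp(-(n*T)**2 / 20)

step = 0.25
v_samples = step * np.round(u_samples / step)   # uniform quantizer

dt = 0.01
t = np.arange(-60.0, 60.0, dt)
phi = np.sinc((t[:, None] - n[None, :]*T) / T)  # sinc((t - nT)/T)
diff = phi @ (v_samples - u_samples)            # v(t) - u(t)

lhs = np.sum(diff**2) * dt                      # energy of v - u
rhs = T * np.sum((v_samples - u_samples)**2)
print(lhs, rhs)                                 # approximately equal
```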
The Sampling Theorem is very convenient for sampling band-limited waveforms. However, most source waveforms are not quite band-limited. This leads to first filtering the source waveform to band-limit it and then sampling. This adds another source of mean squared error, since the part of the waveform that is out of band is represented by 0.
There is another rather peculiar mathematical issue with the Sampling Theorem. The sinc function is non-zero at all non-integer times. Thus recreating the waveform at the receiver from a set of samples requires infinite delay. Practically, of course, these sinc functions can be truncated, but then the resulting waveform is no longer band-limited. There is a similar problem at the source: a truly band-limited waveform cannot be generated in real time; it would have to be created starting at time −∞. Again, this is not a practical problem, but it says that band-limited functions are only approximations, so the clean result of the Sampling Theorem is not quite as nice as it seems at first.
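The slow 1/|t| decay of the sinc tails can be seen numerically. The sketch below (assuming numpy; the single-tone test signal and the window sizes are illustrative choices) reconstructs u(t) at one instant from only the 2K+1 nearest samples and prints the truncation error as the window grows.

```python
import numpy as np

# Illustration of the truncation issue: reconstructing u(t0) from a
# finite window of samples. Because the sinc tails decay only like
# 1/|t|, the truncation error shrinks slowly as the window grows.

W = 4.0
T = 1 / (2 * W)

def u(t):
    return np.cos(2*np.pi*3.1*t)       # a single tone below W

t0 = 0.3                               # reconstruction instant
errs = []
for K in (10, 100, 1000):
    n = np.arange(-K, K + 1)
    est = np.sum(u(n*T) * np.sinc((t0 - n*T) / T))
    errs.append(abs(est - u(t0)))
    print(K, errs[-1])
```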
Another approach to a more general form of sampling is to use the time-domain Fourier
series over intervals of some arbitrary duration T . Then an arbitrary waveform can be
represented by the coefficients of its Fourier series, recalculated for each interval T . The
Fourier series within each interval must be truncated in order to represent each interval by
a finite number of samples, but this is often preferable to the truncation inherent in using the Sampling Theorem because of the nature of the source. For example, the structure of voice is such that Fourier coefficients, recalculated for each time interval, often provide a more convenient discrete-time set of samples than sampling at a much higher rate.
4 Orthogonal expansions

Recall that two functions v(t) and w(t) are orthogonal if \int v(t)\, w^*(t)\, dt = 0. In this section, we interpret the Fourier series of a time-limited function u(t) as a linear combination of orthogonal functions. We then illustrate some of the consequences of this orthogonal expansion. We next show that the Sampling Theorem is also an expansion in terms of orthogonal functions.
4.1 The Fourier series as an orthogonal expansion

The Fourier series expansion (20) can be written as

u(t) = \sum_{n=-\infty}^{\infty} u_n\, \phi_n(t),    (27)

where \phi_n(t) = e^{j2\pi n t/T} for \tau \le t \le \tau + T and \phi_n(t) = 0 elsewhere, and where

u_n = \frac{1}{T} \int u(t)\, \phi_n^*(t)\, dt.    (28)
The functions \phi_n(t) are orthogonal:

\int \phi_n(t)\, \phi_j^*(t)\, dt = \int_{\tau}^{\tau+T} e^{j2\pi (n-j) t/T}\, dt = \begin{cases} 0 & \text{for } n \ne j \\ T & \text{for } n = j \end{cases}    (29)
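The orthogonality relation (29) is easy to confirm numerically. The sketch below (assuming numpy; T, τ, and the specific indices are arbitrary choices) approximates the inner-product integrals with Riemann sums.

```python
import numpy as np

# Numerical check of (29): the exponentials phi_n(t) = exp(j 2 pi n t / T)
# on [tau, tau + T] are orthogonal, each with energy T.

T, tau = 1.5, 0.3
t = np.linspace(tau, tau + T, 200000, endpoint=False)
dt = t[1] - t[0]

def phi(n):
    return np.exp(2j*np.pi*n*t/T)

inner_35 = np.sum(phi(3) * np.conj(phi(5))) * dt   # n != j: ~0
inner_44 = np.sum(phi(4) * np.conj(phi(4))) * dt   # n == j: ~T
print(inner_35, inner_44)
```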
where we have used the fact that the integral is over an integer number of cycles. We
next show that (28) is a consequence of (29). In particular, assume that (27) is valid for some set of coefficients {u_n}. Then the integral \int u(t)\, \phi_j^*(t)\, dt can be calculated as follows:

\int u(t)\, \phi_j^*(t)\, dt = \sum_{n=-\infty}^{\infty} u_n \int \phi_n(t)\, \phi_j^*(t)\, dt = u_j T.    (30)

Dividing by T gives (28).
Next, the energy in u(t) can be expressed in terms of the coefficients:

\int |u(t)|^2\, dt = \int u(t)\, u^*(t)\, dt = \int u(t) \Big( \sum_{n=-\infty}^{\infty} u_n^*\, \phi_n^*(t) \Big)\, dt = \sum_{n=-\infty}^{\infty} u_n^* \int u(t)\, \phi_n^*(t)\, dt = \sum_{n=-\infty}^{\infty} u_n^*\, T u_n = T \sum_{n=-\infty}^{\infty} |u_n|^2.    (31)
Assume that the samples {u_n} are quantized into points {v_n}, which are then encoded, transmitted, decoded, and converted into the waveform v(t) = \sum_n v_n\, \phi_n(t). Then, from (31), the squared error is

\int |u(t) - v(t)|^2\, dt = T \sum_{n=-\infty}^{\infty} |u_n - v_n|^2.    (32)
In summary, we have shown that the Fourier series is an orthogonal expansion. If u(t) = \sum_n u_n\, \phi_n(t), where {\phi_n(t)} are orthogonal functions each with energy T, then

- u_n = (1/T) \int u(t)\, \phi_n^*(t)\, dt;
- u(t) has energy equal to T \sum_n |u_n|^2;
- if v(t) = \sum_n v_n\, \phi_n(t), then the energy in u(t) - v(t) is T \sum_n |u_n - v_n|^2.
4.2 The Sampling Theorem as an orthogonal expansion

The sampling expansion (18) can likewise be written as

u(t) = \sum_{n=-\infty}^{\infty} u_n\, \phi_n(t),    (33)

where u_n = u(nT) and \phi_n(t) = \mathrm{sinc}((t - nT)/T). We now show that {\phi_n(t)} is an orthogonal set of functions. Recall that two functions are orthogonal if and only if their Fourier transforms are orthogonal. From (19), we have the Fourier transform pair

\phi_n(t) = \mathrm{sinc}\!\left(\frac{t - nT}{T}\right) \;\leftrightarrow\; \Phi_n(f) = \begin{cases} T\, e^{-j2\pi n f T} & \text{for } |f| \le W \\ 0 & \text{for } |f| > W \end{cases}
To show that the Fourier transforms are orthogonal, write

\int \Phi_n(f)\, \Phi_m^*(f)\, df = \int_{-W}^{W} T^2\, e^{j2\pi (m-n) f T}\, df.    (34)

For m \ne n, this is the integral of a sinusoid over an integer number of cycles, which is therefore 0. Thus {\Phi_n(f)} is a set of orthogonal functions. For m = n, the integral in (34) is 2WT^2 = T. It follows that the functions {\phi_n(t)} are also orthogonal. From Parseval, each has energy T, i.e.,

\int \phi_n(t)\, \phi_m^*(t)\, dt = \begin{cases} 0 & \text{for } m \ne n \\ T & \text{for } m = n \end{cases}    (35)
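Relation (35) can also be checked directly in the time domain. The sketch below (assuming numpy; T, the window, and the indices are illustrative choices) integrates products of shifted sincs over a long but finite window, so a small tail residual remains.

```python
import numpy as np

# Numerical check of (35): the shifted sincs phi_n(t) = sinc((t - nT)/T)
# are orthogonal, each with energy T. The integral is truncated to a
# long window; the 1/t^2 decay of the products keeps the residual small.

T = 0.5
dt = 0.01
t = np.arange(-400.0, 400.0, dt)

def phi(n):
    return np.sinc((t - n*T) / T)

inner_01 = np.sum(phi(0) * phi(1)) * dt   # m != n: ~0
inner_00 = np.sum(phi(0) * phi(0)) * dt   # m == n: ~T
print(inner_01, inner_00)
```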
As with the Fourier series, we next use (35) to evaluate the coefficients u_n in (33):

\int u(t)\, \phi_m^*(t)\, dt = \sum_{n=-\infty}^{\infty} u_n \int \phi_n(t)\, \phi_m^*(t)\, dt = u_m T.    (36)

This gives us an alternative expression for the coefficients: u_n = u(nT) = (1/T) \int u(t)\, \phi_n^*(t)\, dt. We can also use (36) to derive (24). As with the Fourier coefficients, we substitute (33) and then use (36):
\int |u(t)|^2\, dt = \int u(t)\, u^*(t)\, dt = \int u(t) \Big( \sum_{n=-\infty}^{\infty} u_n^*\, \phi_n^*(t) \Big)\, dt = \sum_{n=-\infty}^{\infty} u_n^* \int u(t)\, \phi_n^*(t)\, dt = \sum_{n=-\infty}^{\infty} u_n^*\, T u_n = T \sum_{n=-\infty}^{\infty} |u_n|^2.    (37)
In summary, we have shown that the Sampling Theorem is an orthogonal expansion. If u(t) = \sum_n u_n\, \phi_n(t), where {\phi_n(t)} are orthogonal functions each with energy T, then

- u_n = (1/T) \int u(t)\, \phi_n^*(t)\, dt;
- u(t) has energy equal to T \sum_n |u_n|^2.