

5. The Sampling Theorem


R.G. Gallager
You have probably already been exposed to the Sampling Theorem, which says that
if a waveform is bandwidth-limited to W Hz, then it can be represented exactly by its
samples if they are taken at a rate of 2W samples/second or more. Here we will review the
Sampling Theorem. We will interpret the Sampling Theorem as an orthonormal expansion
of bandlimited waveforms. The main reason for discussing the Sampling Theorem is to
make concrete the notion that a bandlimited signal of bandwidth W and duration T has
roughly 2W T degrees of freedom for T large.

Review of Fourier transforms

The Fourier transform of a waveform u(t) is defined as

U(f) = \int_{-\infty}^{\infty} u(t) e^{-j2\pi ft} \, dt.        (1)

This is a mapping from one function to another. The first function is the waveform u(t), a mapping \mathbb{R} \to \mathbb{R}; as a waveform, it maps time (in units of seconds) into real-valued amplitudes. The same definition can be used if u(t) is a complex-valued function of time, i.e., a mapping \mathbb{R} \to \mathbb{C}. Later, when we study modulation, we often consider the Fourier transform for complex-valued functions. For now we can restrict our attention to real-valued functions, but we will often write equations in such a way as to make the generalization to complex-valued functions easy. For example, we use the notation |u(t)|^2 to denote u^2(t) for real functions, but more generally to denote u(t)u^*(t).
For the most part, we restrict our attention to finite-energy functions, i.e., functions for which the integral \int |u(t)|^2 \, dt exists with a finite value. There are some very common functions that are not finite-energy, including any non-zero constant function and any sinusoid. Impulse functions are also not included as finite-energy functions (in fact, impulses are not really functions at all since, for example, the property that a unit impulse has an integral equal to 1 does not follow from the value of the function at all t). Even if one ignores the fact that impulses are not really functions, the energy of an impulse in any reasonable sense is infinite.
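As a quick sanity check on the definition (1), the Fourier integral can be approximated numerically by a Riemann sum. The sketch below (using numpy; the Gaussian test pulse and grid sizes are illustrative choices, not from the text) compares the numerical transform of u(t) = e^{-\pi t^2} against its known closed-form transform U(f) = e^{-\pi f^2}:

```python
import numpy as np

# Riemann-sum approximation of (1) for a Gaussian pulse, whose transform is
# known exactly: u(t) = exp(-pi t^2)  <->  U(f) = exp(-pi f^2).
dt = 1e-3
t = np.arange(-10, 10, dt)
u = np.exp(-np.pi * t**2)

def U_num(f):
    # approximate U(f) = integral of u(t) exp(-j 2 pi f t) dt
    return np.sum(u * np.exp(-2j * np.pi * f * t)) * dt

for f in [0.0, 0.5, 1.0]:
    print(f, U_num(f), np.exp(-np.pi * f**2))  # numeric vs exact; should agree closely
```

Because the Gaussian decays so fast, truncating the integral to (-10, 10) changes essentially nothing, which already illustrates the point made below about finite-energy functions being essentially time- and frequency-constrained.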
In what follows, we explicitly define a waveform as a finite-energy function. Obviously
sinusoids and other infinite-energy functions are very important for many types of applications, and generalized functions such as impulses are similarly important. However, the properties of waveforms that we will rely on most strongly here are not shared by
infinite-energy functions. It should be noticed that physical waveforms do not persist
forever, and do not take on infinite values, so we are not restricting the set of physical
waveforms of interest, but only the set of models for those waveforms.
There is a very important theorem, due to Plancherel, which states that if u(t) is finite-energy, then the integral U(f) in (1) exists for each f, and U(f) satisfies the inverse Fourier transform,

u(t) = \int_{-\infty}^{\infty} U(f) e^{j2\pi ft} \, df.        (2)

To be more precise, define u_B(t) as

u_B(t) = \int_{-B}^{B} U(f) e^{j2\pi ft} \, df.        (3)

In the context of Plancherel's theorem, (2) is shorthand for

\lim_{B \to \infty} \int |u(t) - u_B(t)|^2 \, dt = 0.        (4)

This says that the energy difference between u(t) and the finite bandwidth transforms in
(3) goes to zero with increasing B. In essence this allows u(t) to differ from the inverse
transform of its transform at isolated points. For example, if u(t) is discontinuous at a
number of points, we would not expect the inverse of its transform to converge to u(t) at
those points. In fact, changing u(t) at a single point cannot change its Fourier transform.
The Fourier transform itself in (1) must be interpreted in the same way. Define

U_A(f) = \int_{-A}^{A} u(t) e^{-j2\pi ft} \, dt.        (5)

Then, according to Plancherel's theorem, there is a function U(f) such that

\lim_{A \to \infty} \int |U(f) - U_A(f)|^2 \, df = 0.        (6)

A final part of this result is that the energy in u(t) is the same as the energy in U(f), i.e.,

\int |u(t)|^2 \, dt = \int |U(f)|^2 \, df.        (7)

Thus, every finite-energy function has a finite-energy transform with the same energy, and these transforms satisfy the convergence properties in (4) and (6).
These complicated-sounding conditions say that for finite-energy functions, we can approximate u(t) by truncating the waveform to very large limits (-A, A). The Fourier transform will not be substantially changed for A sufficiently large. Similarly, we can truncate U(f) within very large frequency limits and the time function u(t) will not be substantially changed. In other words, finite-energy functions are essentially constrained within sufficiently large time and frequency limits. This is important for engineering, since when we model a function, we don't much care what it does before the device in question was built or after it is discarded. We also don't much care what happens in frequency ranges beyond those of interest. Thus we prefer models in which these extraneous
effects can be ignored. One problem with Fourier transforms is that parts of the function
outside the time range of interest can have an important effect inside the frequency range
of interest, and vice-versa. Restricting ourselves to finite-energy models avoids the worst
of these problems.
We will give more interpretations of what these conditions mean later. One should not
view sinusoids, impulses, and other infinite-energy functions as second class citizens in the
function world. They are very useful in studying linear filters, for example. Unfortunately,
they are usually poor models for the output of analog sources.
The above theorem is a far more mathematical statement than engineers are used to,
and, in fact, we have not even stated it with perfect precision. However, we hope to
subsequently convince the reader that waveforms, i.e., finite-energy functions, are the
appropriate kinds of functions to study for both analog sources and for channel inputs
and outputs. We also hope to convince the reader that the above theorem is meaningful
in both an intuitive and an engineering sense. You will often see finite-energy functions
referred to as L2 functions.
We next review a few properties of Fourier transforms. In the following short table, u(t) and U(f) are a Fourier transform pair, written u(t) \leftrightarrow U(f).

    waveform                                  Fourier transform

    u^*(t)                 \leftrightarrow    U^*(-f)                                   (8)
    u(t - \tau)            \leftrightarrow    e^{-j2\pi f\tau} U(f)                     (9)
    u(t/T)                 \leftrightarrow    T U(fT)                                   (10)
    \mathrm{sinc}(t) = \frac{\sin(\pi t)}{\pi t}   \leftrightarrow   \mathrm{rect}(f) = \begin{cases} 1 & \text{for } |f| \le 1/2 \\ 0 & \text{for } |f| > 1/2 \end{cases}    (11)
    \int u(\tau) v(t - \tau) \, d\tau      \leftrightarrow    U(f) V(f)                (12)
    \int u(\tau) v^*(\tau - t) \, d\tau    \leftrightarrow    U(f) V^*(f)              (13)

Equations (8)-(10) follow directly from (1) by substitution and changing the variable of integration. Equation (11) follows from the inverse transform theorem (2). The convolution theorem (12) follows by Fourier transforming the left side to get \int\int u(\tau) v(t - \tau) e^{-j2\pi ft} \, d\tau \, dt; multiplying and dividing by e^{-j2\pi f\tau} yields the right side. Equation (13) gives the transform of the cross-correlation of u(t) and v(t); it follows by using (8) in (12).
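These table entries are easy to verify numerically. As a sketch (again with a Gaussian test pulse and a Riemann sum of (1); the specific pulse and shift are illustrative), here is a check of the time-shift rule (9):

```python
import numpy as np

# Check the time-shift rule (9): u(t - tau) <-> exp(-j 2 pi f tau) U(f),
# using u(t) = exp(-pi t^2) and a direct Riemann sum of the Fourier integral.
dt = 1e-3
t = np.arange(-10, 10, dt)
tau, f = 0.7, 1.3

def fourier(x, f):
    # Riemann-sum approximation of the Fourier integral at frequency f
    return np.sum(x * np.exp(-2j * np.pi * f * t)) * dt

U = fourier(np.exp(-np.pi * t**2), f)                   # transform of u(t)
U_shifted = fourier(np.exp(-np.pi * (t - tau)**2), f)   # transform of u(t - tau)
print(abs(U_shifted - np.exp(-2j * np.pi * f * tau) * U))  # should be ~0
```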
Two useful special cases of any Fourier transform pair are:

u(0) = \int_{-\infty}^{\infty} U(f) \, df        (14)

U(0) = \int_{-\infty}^{\infty} u(t) \, dt        (15)

These are useful sanity checks on any Fourier transform pair, and in particular are often the best way to compute or check multiplicative constants. They are also useful for deriving various properties of Fourier transforms. The most useful of these is Parseval's theorem, which follows from applying (14) to (13):

\int u(t) v^*(t) \, dt = \int U(f) V^*(f) \, df.        (16)
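Parseval's theorem (16) can be checked numerically with the FFT. The sketch below is illustrative only: the two finite-energy test pulses and the grid sizes are arbitrary choices, and the FFT of the sampled pulse approximates the continuous transform up to a common phase factor from the time origin, which cancels in the product U V^*:

```python
import numpy as np

# Numerical check of Parseval's theorem (16): the time-domain and
# frequency-domain inner products agree.
N, dt = 2**14, 1e-3
t = (np.arange(N) - N // 2) * dt
u = np.exp(-np.pi * t**2)                                   # a Gaussian pulse
v = np.exp(-np.pi * (t - 0.5)**2) * np.cos(2 * np.pi * 3 * t)  # a second test pulse

# Sampled continuous transforms on the FFT frequency grid: U(f_k) ~ dt * FFT
U = dt * np.fft.fft(u)
V = dt * np.fft.fft(v)
df = 1.0 / (N * dt)

time_side = np.sum(u * np.conj(v)) * dt    # integral of u(t) v*(t) dt
freq_side = np.sum(U * np.conj(V)) * df    # integral of U(f) V*(f) df
print(abs(time_side - freq_side))          # ~0 up to floating-point rounding
```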

As a corollary, replacing v(t) by u(t) in (16), we derive (7), i.e.,

\int |u(t)|^2 \, dt = \int |U(f)|^2 \, df.

For another corollary, define two waveforms u(t) and v(t) to be orthogonal if \int u(t) v^*(t) \, dt = 0. From (16), we see that u(t) and v(t) are orthogonal if and only if U(f) and V(f) are orthogonal; i.e.,

\int u(t) v^*(t) \, dt = 0 \quad \text{if and only if} \quad \int U(f) V^*(f) \, df = 0.        (17)

Finally, we note that there is almost complete symmetry between time-domain and frequency-domain functions in Fourier transform theory, apart from a difference in sign in (1) and (2). Indeed, if u(t) \leftrightarrow U(f) is a Fourier transform pair, then so is U(t) \leftrightarrow u(-f). Because of this time-frequency symmetry, we may translate any time-domain result to the frequency domain with the appropriate sign changes, and vice versa.

The Sampling Theorem

The Sampling Theorem shows that a continuous-time band-limited signal may be represented perfectly by its samples at uniform intervals of T seconds if T is small enough. In
other words, the continuous-time signal may be reconstructed perfectly from its samples;
sampling at a high enough rate is information-lossless.
An L2 waveform u(t) will be said to be band-limited to a frequency W Hz if its Fourier
transform satisfies U (f ) = 0 for |f | > W . The Sampling Theorem is then as follows:
Theorem 2.1 (Sampling Theorem) Let u(t) be a waveform (i.e., a finite-energy function) that is band-limited to W Hz. Then u(t) can be recovered at all times t from its samples {u(nT), n \in \mathbb{Z}} at integer multiples of T = 1/(2W) sec, as follows:

u(t) = \sum_{n=-\infty}^{\infty} u(nT) \, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right).        (18)

The sinc function, \mathrm{sinc}(t) = \sin(\pi t)/(\pi t), is illustrated in the following figure:

[Figure: the sinc pulse \mathrm{sinc}(t), equal to 1 at t = 0, zero at every nonzero integer, and decaying in amplitude as 1/(\pi t).]

We see that sinc(t/T) has the value 1 at t = 0 and 0 at t = nT for all integers n \ne 0. It is non-zero at all times other than these multiples of T. Also, from (10) and (11), note that the Fourier transform of sinc(t/T) is equal to T for |f| < W and 0 for |f| > W, where W = 1/(2T). More generally, applying the shifting rule of (9), we have

\mathrm{sinc}\!\left(\frac{t - nT}{T}\right) \;\leftrightarrow\; \begin{cases} T e^{-j2\pi f nT} & \text{for } |f| < 1/(2T) \\ 0 & \text{for } |f| > 1/(2T) \end{cases}        (19)

From this, we see that the right side of (18) is in fact band-limited to W.
We can interpret the right side of (18) as follows. Suppose we start with a real sequence . . . , u(-T), u(0), u(T), u(2T), . . . . If we multiply u(nT) by sinc((t - nT)/T), then the resulting waveform is band-limited to W and has the value u(nT) at t = nT and 0 at t = jT for each integer j \ne n. Thus, when we add the waveforms u(nT) sinc((t - nT)/T) for all n, the sum u(t) is band-limited to W and has value u(nT) at time t = nT for each integer n \in \mathbb{Z}.
Thus we have shown that the right side of (18) is a waveform that is band-limited to W and has the values . . . , u(-T), u(0), u(T), u(2T), . . . at times t = nT. However, this is not the Sampling Theorem; rather, it is a sort of modulation theorem, showing that from a sequence {u(nT)} we can create a band-limited waveform having those values at times t = nT. We have not shown that we can go the other way, i.e., start with an arbitrary finite-energy band-limited waveform and represent it by its samples.
We postpone this proof until reviewing the Fourier series expansion of a time-limited waveform. We assume the following result is familiar to you.
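The interpolation formula (18) is easy to try numerically. The sketch below (illustrative choices throughout: u(t) = sinc^2(t) is a convenient band-limited test waveform with W = 1, and the infinite sum is truncated) reconstructs u(t) at off-grid times from its samples:

```python
import numpy as np

# Sketch of the interpolation formula (18). u(t) = sinc^2(t) is band-limited
# to W = 1 Hz (its transform is a triangle on |f| <= 1), so T = 1/(2W) = 0.5 s.
W = 1.0
T = 1.0 / (2 * W)
u = lambda t: np.sinc(t)**2          # np.sinc(x) = sin(pi x)/(pi x)

n = np.arange(-400, 401)             # truncate the (infinite) sum in (18)
samples = u(n * T)

def reconstruct(t):
    # right side of (18): sum of u(nT) sinc((t - nT)/T)
    return np.sum(samples * np.sinc((t - n * T) / T))

for t in [0.0, 0.37, 1.9]:
    print(t, u(t), reconstruct(t))   # the two columns should agree closely
```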
Proposition 1 (Time-domain Fourier series expansion) If u(t) is a waveform that is time-limited to some interval [\tau, \tau + T] of length T, then u(t) may be expanded as

u(t) = \sum_{n=-\infty}^{\infty} u_n e^{j2\pi nt/T}, \qquad \tau \le t \le \tau + T,        (20)

where the Fourier series coefficients u_n are given by

u_n = \frac{1}{T} \int_{\tau}^{\tau+T} u(t) e^{-j2\pi nt/T} \, dt.        (21)

Note that the Fourier series bears a close resemblance to the Fourier transform. The Fourier transform expresses an arbitrary (non-time-limited) waveform by a function \mathbb{R} \to \mathbb{C}. In contrast, the Fourier series represents a time-limited waveform by a sequence of values, \mathbb{Z} \to \mathbb{C}. Comparing (21) with (1), we see that U(n/T) = T u_n.
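A small numerical sketch of Proposition 1 (the smooth test pulse, interval, and truncation level are illustrative choices): compute the coefficients (21) by a Riemann sum and re-synthesize the waveform with a truncated version of (20).

```python
import numpy as np

# Fourier series of a waveform time-limited to [0, T]: compute u_n via (21)
# numerically, then re-synthesize u(t) via (20) with the series truncated.
T = 2.0
dt = 1e-4
t = np.arange(0, T, dt)
u = t * (T - t)                        # a smooth pulse on [0, T], zero at both ends

N = 50                                 # keep only |n| <= N terms
ns = np.arange(-N, N + 1)
# (21): u_n = (1/T) * integral of u(t) exp(-j 2 pi n t / T) dt
coeffs = np.array([np.sum(u * np.exp(-2j * np.pi * m * t / T)) * dt / T for m in ns])

# (20): u(t) = sum of u_n exp(j 2 pi n t / T); imaginary part is ~0 for real u
u_hat = np.real(sum(c * np.exp(2j * np.pi * m * t / T) for c, m in zip(coeffs, ns)))
print(np.max(np.abs(u - u_hat)))       # small truncation error remains
```

For this pulse, u_0 should come out to T^2/6, which is a handy sanity check of the (14)/(15)-style kind described earlier.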
By time-frequency symmetry, we have equally a Fourier series expansion of a band-limited Fourier transform, as follows:
Proposition 2 (Frequency-domain Fourier series expansion) If U(f) is the Fourier transform of a waveform that is band-limited to -W \le f \le W, then U(f) may be written as

U(f) = \sum_{n=-\infty}^{\infty} u_n e^{-j2\pi nfT}, \qquad -W \le f \le W,        (22)

where T = 1/(2W), for some set of Fourier series coefficients u_n.


Now, taking the inverse Fourier transform of both sides in (22) and using (19), we have

u(t) = \sum_{n=-\infty}^{\infty} \frac{u_n}{T} \, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right).        (23)

By evaluating both sides at t = nT, we see that u(nT) = u_n/T. This completes the proof of the Sampling Theorem, and shows that the Sampling Theorem is in fact just a form of the Fourier series applied in the frequency domain.

Sampling a waveform

We have seen that if a waveform is bandlimited to W, then its samples at intervals T = 1/(2W) specify the function. We will shortly also show that

\int |u(t)|^2 \, dt = \sum_{n=-\infty}^{\infty} T |u(nT)|^2.        (24)

In other words, the energy in u(t) is equal to the weighted sum of the squares of the samples.
Suppose we then quantize these samples to values v(nT), and suppose these quantized samples are encoded and then decoded at the receiver. Suppose the samples v(nT) are then used to approximate the waveform u(t) by

v(t) = \sum_{n=-\infty}^{\infty} v(nT) \, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right).        (25)

It then follows that

\int |v(t) - u(t)|^2 \, dt = \sum_{n=-\infty}^{\infty} T |v(nT) - u(nT)|^2.        (26)

In other words, the mean square error in the quantization directly gives the mean square error in the waveform representation. This is one of the major reasons why mean square error is such a convenient measure of distortion: it carries over directly from samples to waveforms.
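The identity (26) can be checked numerically. The sketch below is illustrative (the band-limited test waveform, quantizer step, and truncation window are arbitrary choices, and truncating the sum and integral leaves a small residual mismatch):

```python
import numpy as np

# Sketch of (26): quantize the samples of u(t) = sinc^2(t) (band-limited to
# W = 1, so T = 1/2), reconstruct v(t) via (25), and compare the waveform
# squared error with T times the sample squared error.
W, T = 1.0, 0.5
n = np.arange(-100, 101)
u_samp = np.sinc(n * T)**2
step = 0.05                                  # illustrative quantizer step size
v_samp = step * np.round(u_samp / step)      # uniform scalar quantization

dt = 0.01
t = np.arange(-60, 60, dt)
basis = np.sinc((t[:, None] - (n * T)[None, :]) / T)   # sinc((t - nT)/T)
err_wave = basis @ (v_samp - u_samp)         # v(t) - u(t) on the dense grid

lhs = np.sum(err_wave**2) * dt               # integral of |v(t) - u(t)|^2 dt
rhs = T * np.sum((v_samp - u_samp)**2)       # T * sum of |v(nT) - u(nT)|^2
print(lhs, rhs)                              # the two should agree closely
```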
The Sampling Theorem is very convenient for sampling band-limited waveforms. However,
most source waveforms are not quite band-limited. This then leads to first filtering the
source waveform to achieve band-limiting and then sampling. This adds another source
of mean squared error, since the part of the waveform that is out of band gets represented
by 0.
There is another rather peculiar mathematical issue with the Sampling Theorem. The
sinc function is non-zero over all non-integer times. Thus recreating the waveform at the
receiver from a set of samples requires infinite delay. Practically, of course, these sinc
functions can be truncated, but then the resulting waveform is no longer band-limited.
There is a similar problem at the source. A truly band-limited waveform cannot be selected in real time; it must be created at time -\infty. Again, this is not a practical problem, but it says that band-limited functions are only approximations, so that the clean result of the Sampling Theorem is not quite as nice as it seems at first.
Another approach to a more general form of sampling is to use the time-domain Fourier
series over intervals of some arbitrary duration T . Then an arbitrary waveform can be
represented by the coefficients of its Fourier series, recalculated for each interval T . The
Fourier series within each interval must be truncated in order to represent each interval by
a finite number of samples, but this is often preferable to the truncation inherent to using
the Sampling Theorem because of the nature of the source. For example, the structure of
voice is such that Fourier coefficients, repeated each time interval, often provide a more
convenient discrete time set of samples than sampling at a much higher rate.

Orthogonal waveforms and orthogonal expansions

Recall that two functions v(t) and w(t) are orthogonal if \int v(t) w^*(t) \, dt = 0. In this section, we interpret the Fourier series of a time-limited function u(t) as a linear combination of orthogonal functions. We then illustrate some of the consequences of this orthogonal expansion. We next show that the Sampling Theorem is also an expansion in terms of orthogonal functions.

4.1 The Fourier series as an orthogonal expansion

Assume that u(t) is a finite-energy complex-valued function, non-zero only for \tau \le t \le \tau + T. For -\infty < n < \infty, let

\theta_n(t) = \begin{cases} e^{j2\pi nt/T} & \text{for } \tau \le t \le \tau + T \\ 0 & \text{otherwise} \end{cases}

The Fourier series expansion of u(t) is

u(t) = \sum_{n=-\infty}^{\infty} u_n \theta_n(t),        (27)

where

u_n = \frac{1}{T} \int u(t) \theta_n^*(t) \, dt.        (28)

We now show that the functions {\theta_n(t)} are orthogonal:

\int \theta_n(t) \theta_j^*(t) \, dt = \int_{\tau}^{\tau+T} e^{j2\pi(n-j)t/T} \, dt = \begin{cases} 0 & \text{for } n \ne j \\ T & \text{for } n = j \end{cases}        (29)

where we have used the fact that the integral is over an integer number of cycles. We next show that (28) is a consequence of (29). In particular, assume that (27) is valid for some set of coefficients {u_n}. Then the integral \int u(t) \theta_j^*(t) \, dt can be calculated as follows:

\int u(t) \theta_j^*(t) \, dt = \sum_{n=-\infty}^{\infty} u_n \int \theta_n(t) \theta_j^*(t) \, dt = u_j T,        (30)

where we have used (29). This is equivalent to (28).


Note that the argument in (30) does not use the particular form of the orthogonal functions in the Fourier series; it only uses the orthogonality (plus the fact that each orthogonal function has energy T). We will use this argument many times for many different expansions in terms of orthogonal functions. On the other hand, the argument in (30) does not help to justify that any time-limited waveform can be expanded as a linear combination of the sinusoids in (27). We come back to this issue later.
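The orthogonality relation (29) is straightforward to verify numerically; the interval endpoints and the index pairs below are illustrative choices:

```python
import numpy as np

# Numerical check of (29): the truncated exponentials theta_n(t) are
# orthogonal on [tau, tau+T], with inner product T for n = j and 0 otherwise.
tau, T = 1.0, 2.0
N = 200_000
dt = T / N
t = tau + np.arange(N) * dt
theta = lambda n: np.exp(2j * np.pi * n * t / T)

def inner(n, j):
    # Riemann-sum approximation of the integral of theta_n theta_j^* over one period
    return np.sum(theta(n) * np.conj(theta(j))) * dt

print(inner(3, 3))    # ~ T = 2
print(inner(3, 5))    # ~ 0
print(inner(-2, 4))   # ~ 0
```

Because the sum runs over exactly one period, the discrete inner product matches (29) essentially to rounding error, which is the discrete analogue of "an integer number of cycles."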
We can now use the orthogonality again to find the energy in the time-limited waveform u(t) in terms of the Fourier coefficients {u_n}. Using first (27) and then (30),

\int |u(t)|^2 \, dt = \int u(t) u^*(t) \, dt = \int u(t) \sum_{n=-\infty}^{\infty} u_n^* \theta_n^*(t) \, dt
= \sum_{n=-\infty}^{\infty} u_n^* \int u(t) \theta_n^*(t) \, dt = \sum_{n=-\infty}^{\infty} u_n^* T u_n = T \sum_{n=-\infty}^{\infty} |u_n|^2.        (31)

Assume that the samples {u_n} are quantized into points {v_n}, which are then encoded, transmitted, decoded, and converted into the waveform v(t) = \sum_n v_n \theta_n(t). Then, from (31), the squared error is

\int |u(t) - v(t)|^2 \, dt = T \sum_{n=-\infty}^{\infty} |u_n - v_n|^2.        (32)

In summary, we have shown that the Fourier series is an orthogonal expansion. If u(t) = \sum_n u_n \theta_n(t), where {\theta_n(t)} are orthogonal functions each with energy T, then

- u_n = (1/T) \int u(t) \theta_n^*(t) \, dt;
- u(t) has energy equal to T \sum_n |u_n|^2;
- if v(t) = \sum_n v_n \theta_n(t), then the energy in u(t) - v(t) is T \sum_n |u_n - v_n|^2.

4.2 The Sampling Theorem as an orthogonal expansion

Assume that u(t) is a finite-energy function band-limited to |f| \le W Hertz. Let \theta_n(t) = \mathrm{sinc}((t - nT)/T), where T = 1/(2W). The Sampling Theorem then says that

u(t) = \sum_{n=-\infty}^{\infty} u_n \theta_n(t),        (33)

where u_n = u(nT). We now show that {\theta_n(t)} is an orthogonal set of functions. Recall that two functions are orthogonal if and only if their Fourier transforms are orthogonal. We have the Fourier transform pair

\theta_n(t) = \mathrm{sinc}\!\left(\frac{t - nT}{T}\right) \;\leftrightarrow\; \Theta_n(f) = \begin{cases} T e^{-j2\pi nfT} & \text{for } |f| \le W \\ 0 & \text{for } |f| > W \end{cases}
To show that the Fourier transforms are orthogonal, write

\int \Theta_n(f) \Theta_m^*(f) \, df = \int_{-W}^{W} T^2 e^{j2\pi(m-n)fT} \, df.        (34)

For m \ne n, this is the integral of a sinusoid over an integer number of cycles, which is therefore 0. Thus {\Theta_n(f)} is a set of orthogonal functions. For m = n, the integral in (34) is 2W T^2 = T. It follows that the functions {\theta_n(t)} are also orthogonal and, from Parseval, each has energy T, i.e.,

\int \theta_n(t) \theta_m^*(t) \, dt = \begin{cases} 0 & \text{for } m \ne n \\ T & \text{for } m = n \end{cases}        (35)
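The sinc orthogonality (35) can also be checked numerically. This sketch truncates the infinite integral to a large window, so a small truncation error remains (the window, grid, and index pairs are illustrative choices):

```python
import numpy as np

# Numerical check of (35): the shifted sinc pulses theta_n(t) = sinc((t - nT)/T)
# are orthogonal, each with energy T = 1/(2W). Here T = 0.5 (W = 1).
T = 0.5
dt = 0.01
t = np.arange(-200, 200, dt)
theta = lambda n: np.sinc((t - n * T) / T)

def inner(n, m):
    # truncated Riemann-sum approximation of the integral of theta_n theta_m^*
    return np.sum(theta(n) * theta(m)) * dt

print(inner(0, 0))   # ~ T = 0.5
print(inner(0, 1))   # ~ 0
print(inner(2, 5))   # ~ 0
```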

As with the Fourier series, we next use (35) to evaluate the coefficients u_n in (33):

\int u(t) \theta_m^*(t) \, dt = \sum_{n=-\infty}^{\infty} u_n \int \theta_n(t) \theta_m^*(t) \, dt = u_m T.        (36)

This gives us an alternative expression for the samples: u_n = u(nT) = (1/T) \int u(t) \theta_n^*(t) \, dt. We can also use (36) to derive (24). As with the Fourier coefficients, we use (33) and then (36):

\int |u(t)|^2 \, dt = \int u(t) u^*(t) \, dt = \int u(t) \sum_{n=-\infty}^{\infty} u_n^* \theta_n^*(t) \, dt
= \sum_{n=-\infty}^{\infty} u_n^* \int u(t) \theta_n^*(t) \, dt = \sum_{n=-\infty}^{\infty} u_n^* T u_n = T \sum_{n=-\infty}^{\infty} |u_n|^2.        (37)

In summary, we have shown that the Sampling Theorem is an orthogonal expansion. If u(t) = \sum_n u_n \theta_n(t), where {\theta_n(t)} are orthogonal functions each with energy T, then

- u_n = (1/T) \int u(t) \theta_n^*(t) \, dt;
- u(t) has energy equal to T \sum_n |u_n|^2.
