Solutions To Selected Problems and Exercises: Problem 1.5.1
Solutions To Selected Problems and Exercises: Problem 1.5.1
Solutions To Selected Problems and Exercises: Problem 1.5.1
Chapter 1
p 1.5.1. Find the real and imaginary parts of (a) .j C 3/=.j 3/ and (b)
Problem
.1 C j 2/3 .
Solution. (a)
j C3 .j C 3/.j 3/ 1 3j 3j 9 4 3
D D D j:
j 3 .j 3/.j 3/ 1 C3
2 2 5 5
(b)
p p p 2 p p
.1 C j 2/3 D 13 C 3 12 .j 2/ C 3 1.j 2/ C .j 2/3 D 5 C 2j:
Problem 1.5.2. Find the moduli jzj and arguments of complex numbers (a) z D
2j ; (b) z D 3 C 4j .
p
Solution. (a) j z jD .2/2 C 0 D 2, tan D 1 ) D =2. (You have to be
carefulpwith the coordinate angle; here cos D 0, sin < 0.)
(b) j z jD 9 C 16 D 5, tan D 4=3 ) D arctan 4=3.
Problem 1.5.3. Find the real and imaginary components of complex numbers (a)
z D 5 e j=4 ; (b) z D 2 e j.8C1:27/ .
p p
5 2
Solution. (a) z D 5e j=4 D 5 cos.=4/ C j sin.=4/ D 2 C j 5 2 2 ) Re z D
p p
5 2
2
;
Im z D 522:
(b) z D 2e D 2 cos.1:27/ 2j sin.1:27/ ) Re z D 2 cos.1:27/,
j.8C1:27/
Im z D 2 sin.1:27/:
Solution.
5 5 5 j
D D D :
.1 j /.2 j /.3 j / .1 3j /.3 j / 10j 2
Problem 1.5.5. Sketch sets of points in the complex plane .x; y/; z D x Cjy; such
that (a) jz 1 C j j D 1; (b) z2 C .z /2 D 2:
Problem 1.5.6. Using de Moivre’s formula, find .2j /1=2 . Is this complex number
uniquely defined?
Solution.
p j. 3 C2k/ 1=2 p j. 3 Ck/
.2j /1=2 D 2 e 2 D 2e 4 ; k D 0; 1; 2; : : :
(p 3
2e j. 4 / ; for k D 0; 2; 4; : : : I
D p
j. 3 C/
2e 4 ; for k D 1; 3; 5; : : : I
(p
2 cos. 3 / C j sin. 3 / ; for k D 0; 2; 4; : : : I
D p
4 4
2 cos. 7 4
/ C j sin. 7
4
/ ; for k D 1; 3; 5; : : : :
Problem 1.5.10. Using de Moivre’s formula, derive the complex exponential rep-
resentation (1.4.5) of the signal x.t/ given by the cosine series representation
P
x.t/ D M mD1 cm cos.2 mf0 t C m /.
Solution.
X
M
x.t/ D c0 C cm cos.2 mf0 t C m /
mD1
M
X
.1/m 1 .1/m
xM .t/ D C cos mt sin mt ;
4 mD1 m2 m
for M D 0; 1; 2; 3; : : : ; 9 and 2 < t < 2. Then produce their plots in the
frequency-domain representation. Calculate their power (again, using Mathematica,
Maple, or Matlab, if you wish). Produce plots showing how power is distributed
over different frequencies for each of them. Write down your observations. What is
likely to happen with the plots of these signals as we take more and more terms of the
above series, that is, as M ! 1? Is there a limit signal x1 .t/ D limM !1 xM .t/?
What could it be?
Partial solution. Sample Mathematica code for the plot:
M = 9;
Plot[
Sum[
(((-1)ˆm - 1)/(Pi*mˆ2))*Cos[m*t] - (((-1)ˆm)/m)*Sin[m*t],
{m, M}],
{t, -2*Pi, 2*Pi}]
N[Integrate[(1/(2*Pi))*
Abs[Pi/4 +
Sum[(((-1)ˆm - 1)/(Pi*mˆ2))*Cos[m*u] - (((-1)ˆm)/m)*
Sin[m*u], {m, M2}]]ˆ2, {u, 0, 2*Pi}], 5]
1.4445
xDigital=Table[R*Floor[x[m T]/R],{m,1,50}];
ListPlot[xDigital]
226 Solutions to Selected Problems
Use the “random numbers” string as additive noise to produce random versions
of the digitized signals from Problem 1.5.12. Follow the example described in
Fig. 1.1.3. Experiment with different string lengths and various noise amplitudes.
Then center the noise around zero and repeat your experiments.
Chapter 2
forms an orthogonal system. Is the system normalized? Is the system complete? Use
the above information to derive formulas for coefficients in the Fourier expansions
in terms of sines and cosines. Model this derivation on calculations in Sect. 2.1.
Similarly,
Z P
1 1
2; m D nI
sin .2mt=P / sin .2nt=P / dt D
P 0 0; m ¤ n:
Therefore, we conclude that the given system is orthogonal p but not normalized. It
can be normalized by multiplying each sine and cosine by 1= 2. It is not complete,
but it becomes complete, if we add the function identically equal to 1 to it; it is
obviously orthogonal to all the sines and cosines.
Using the orthogonality property of the above real trigonometric system, we ar-
rive at the following Fourier expansion for a periodic signal x.t/:
1
X
x.t/ D a0 C Œam cos.2mf0 t/ C bm sin.2mf0 t/;
mD1
with coefficients
Z
1 P
a0 D x.t/ dt;
P 0
Z
2 P
am D x.t/ cos .2mt=P / dt;
P 0
Z
2 P
bm D x.t/ sin .2mt=P / dt;
P 0
for m D 1; 2; : : : .
228 Solutions to Selected Problems
Problem 2.7.2. Using the results from Problem 2.7.1, find formulas for amplitudes
cm and phases
P m in the expansion of a periodic signal x.t/ in terms of only cosines,
x.t/ D 1 mD0 cm cos.2 mf0 t C m /:
This gives us q
bm
m D arctan ; cm D 2 C b2 :
am m
am
Problem 2.7.9. Find the complex and real Fourier series for the periodic signal
x.t/ D j sin tj. Produce graphs comparing the signal x.t/ and its finite Fourier
sums of order 1, 3, and 6.
Solution. The first observation is that x.t/ has period . So
Z Z
1 1
zm D j sin tje 2jmt dt D sin t e 2jmt dt
0 0
Z Z
1 e jt e jt 2jmt 1
D e dt D .e jt .12m/ e jt .1C2m/ / dt
0 2j 2j 0
!
1 e j.12m/ 1 e j.1C2m/ 2
D dt D ;
2j j.1 2m/ j.1 C 2m/ .1 4m2 /
1
!
2 X cos.2mt/
x.t/ D 1C2 :
mD1
1 4m2
x[t_] := Abs[Sin[t]]
pl = Plot[x[t], {t, -2 * Pi, 2 * Pi}, Frame -> True,
GridLines -> Automatic, PlotStyle -> {Thickness[0.01]}]
1.0
0.8
0.6
0.4
0.2
0.0
-6 -4 -2 0 2 4 6
sum[t_, M_] := (2/Pi) * (1 + Sum[(2 / (1 - 4 * m^2)) * Cos[2 * m * t], {m, 1, M}])
1.0
0.8
0.6
0.4
0.2
-6 -4 -2 0 2 4 6
s3 = Plot[sum[t, 6],{t, -2 * Pi, 2 * Pi}, Frame -> True,
GridLines -> Automatic, PlotStyle -> {Thickness[0.01]}]
1.0
0.8
0.6
0.4
0.2
0.0
-6 -4 -2 0 2 4 6
Mathematica code and the output showing x.t/; s1 .t/, and s6 .t/ are shown on the
next page.
Problem 2.7.13. (a) The nonperiodic signal x.t/ is defined as equal to 1/2 on the
interval Œ1; C1 and 0 elsewhere. Plot it and calculate its Fourier transform
X.f /. Plot the latter.
230 Solutions to Selected Problems
(b) The nonperiodic signal y.t/ is defined as equal to .t C 2/=4 on the interval
Œ2; 0, .t C 2/=4 on the interval Œ0; 2, and 0 elsewhere. Plot it and calculate
its Fourier transform Y .f /. Plot the latter.
(c) Compare the Fourier transforms X.f / and Y .f /. What conclusion do you draw
about the relationship of the original signals x.t/ and y.t/?
Solution. (a) The Fourier transform of x.t/ is
Z C1
1 j 2f t e j 2f e j 2f sin 2f
X.f / D e dt D D :
1 2 4jf 2f
(c) So we have that Y .f / D X 2 .f /. This means that the signal y.t/ is the convo-
lution of the signal x.t/ with itself: y.t/ D .x x/.t/.
Problem 2.7.18. Utilize the Fourier transform (in the space variable z) to find a
solution of the diffusion (heat) partial differential equation
@u @2 u
D 2;
@t @z
for a function u.t; z/ satisfying the initial condition u.0; z/ D ı.z/. The solution of
the above equation is often used to describe the temporal evolution of the density of
a diffusing substance.
Solution. Let us denote the Fourier transform (in z) of u.t; z/ by
Z 1
U.t; f / D u.t; z/e j 2f z d z:
1
@2 u.t; z/
7! .j 2f /2 U.t; f / D 4 2 f 2 U.t; f /:
@z2
Chapter 3 231
So taking the Fourier transform of both sides of the diffusion equation gives the
equation
@
U.t; f / D 4 2 f 2 U.t; f /;
@t
which is now just an ordinary differential linear equation in the variable t, which
has the obvious exponential (in t) solution
2f 2 t
U.t; f / D C e 4 ;
1 z2
u.t; z/ D p e 4 t :
4 t
Chapter 3
Problem 3.7.2. Calculate the probability that a random quantity uniformly dis-
tributed over the interval Œ0; 3 takes values between 1 and 3. Do the same calculation
for the exponentially distributed random quantity with parameter D 1:5, and the
Gaussian random quantity with parameters D 1:5; 2 D 1.
Solution. (a) Since X has a uniform distribution on the interval [0, 3], then the
value of the p.d.f. is 1/3 between 0 and 3 and 0 elsewhere.
(b)
Z 3 Z 3
1 x= 2
e dx D e 2x=3 dx D 1.e 23=3 e 2=3 / D 0:378:
1 3 1
232 Solutions to Selected Problems
(c) We can solve this problem using the table for the c.d.f. of the standard normal
random quantity:
Solution. (a) We know that for the p.d.f. of any random quantity, we have
Z 1
fX .x/ dx D 1:
1
So Z 1
a
1D ax.1 x/ dx D :
0 6
Thus, the constant a D 6.
(b) To find the c.d.f., we will use the definition
Z x
FX .x/ D fX .y/ dy:
1
Finally, 8
ˆ
< 0; for x < 0I
FX .x/ D x .3 2x/; for 0 x < 1I
2
:̂
1; for x 1:
(c)
Z 1
1
EX D 6x 2 .1 x/ dx D ;
0 2
Z 1
1 3 1
Var.X / D E.X 2 / .EX /2 D 6x 3 .1 x/ dx D D 0:05;
0 4 10 4
p p
Std.X / D Var.X / D 0:05 0:224:
Chapter 3 233
(e) Z 0:9
P.0:4 < X < 0:9/ D 6x.1 x/ dx D 0:62:
0:4
Problem 3.7.6. Find the c.d.f and p.d.f. of the random quantity Y D tan X ,
where X is uniformly distributed over the interval .=2; =2/. Find a physical
(geometric) interpretation of this result.
Solution. The p.d.f. fX .x/ is equal to 1= for x 2 .=2; =2/ and 0 elsewhere.
So the c.d.f. is
8
ˆ
ˆ0; for x =2I
<
FX .x/ D .1=/.x C =2/; for x 2 .=2; =2/I
ˆ
:̂1; for x =2:
Hence,
d d 1 1
fY .y/ D FY .y/ D .arctan.y/ C =2/ D :
dy dy .1 C y 2 /
Problem 3.7.13. A random quantity X has an even p.d.f. fX .x/ of the triangular
shape shown in Sect. 3.7.
234 Solutions to Selected Problems
(a) How many parameters do you need to describe this p.d.f.? Find an explicit ana-
lytic formula for the p.d.f. fX .x/ and the c.d.f. FX .x/. Graph both.
(b) Find the expectation and variance of X .
(c) Let Y D X 3 . Find the p.d.f. fY .y/ and graph it.
Solution. (a) Notice that the triangle is symmetric about the line x D 0. Let us
assume that the vertices of the triangle have the following coordinates: A.a; 0/,
B.a; 0/, C.0; c/. Then the p.d.f is represented by the equations y D ac x C c
in the interval Œ0; a and y D ac x C c in the interval Œa; 0. So we need at most
two parameters.
Next, the normalization condition says that area under the p.d.f is 1. So nec-
essarily, ac D 1 ) c D 1=a. Therefore, actually, one parameter suffices and
our one-parameter family of p.d.f.s has the following analytic description:
8
ˆ
ˆ 0; for x < aI
ˆ
ˆ
< x C 1; for a x < 0I
fX .x/ D a2 x a 1
ˆ
ˆ a2 C a ;
ˆ for 0 x < aI
:̂
0; for x a:
Of course, the above result can be obtained without any integration by observing
that the p.d.f. is an even function, symmetric about the origin.
Z 1 Z 0 Z a
x 1 x 1
VarX D x 2 fX .x/ dx D x2 2
C dx C x2 2 C dx
1 a a a 0 a a
2
a
D :
6
is nonnegative for any x. Thus, the quadratic equation p.x/ D 0 has at most one
solution (root). Therefore, the discriminant of this equation must be nonpositive,
that is,
.2E.ZW //2 4EW 2 EZ 2 0;
which gives the basic form of the Cauchy–Schwarz inequality,
p p
jE.ZW /j EW 2 EZ 2 :
Finally, substitute for Z and W as indicated in the above hint to obtain the desired
result.
Problem 3.7.24. Complete the following sketch of the proof of the central limit the-
orem from Sect. 3.5. Start with a simplifying observation (based on Problem 3.7.23)
that it is sufficient to consider random quantities Xn ; n D 1; 2; : : : ; with expecta-
tions equal to 0, and variances 1.
(a) Define FX .u/ as the inverse Fourier transform of the distribution of X :
Z 1
FX .u/ D Ee juX D e jux dFX .x/:
1
236 Solutions to Selected Problems
Find FX0 .0/ and FX00 .0/. In the statistical literature FX .u/ is called the charac-
teristic function of the random quantity X . Essentially, it completely determines
the probability distribution of X via the Fourier transform (inverse of the inverse
Fourier transform).
(b) Calculate FX .u/ for the Gaussian N.0; 1/ random quantity. Note the fact that its
functional shape is the same as that of the N.0; 1/ p.d.f. This fact is the crucial
reason for the validity of the CLT.
(c) Prove that, for independent random quantities X and Y ,
FXCY .u/ D Ee j u.XCY / D E.e juX e juY / D Ee juX Ee juY D FX .u/ FY .u/
where
X1 X Xn X
Y1 D ; : : : ; Yn D ;
Std.X/ Std.X/
so that, in particular, Y1 ; : : : ; Yn are independent, identically distributed with
EY1 D 0 and EY12 D 1. Hence, using (a)–(c),
Fpn.X
N X /=Std.X/
.u/ D F.Y1 =pnCCYn =pn/ .u/
D F.Y1 =pn/ .u/ F.Yn =pn/ .u/
p
D ŒFY1 .u= n/n :
Now, for each fixed but arbitrary u, instead of calculating the limit n ! 1 of
the above characteristic functions, it will be easier to calculate the limit of their
logarithm. Indeed, in view of de l’HOopital’s rule applied twice (differentiating
with respect to n; explain why this is okay),
because FY0 1 .0/ D 0 and FY001 .0/ D 1; see part (a). So for the characteristic
functions themselves,
2 =2
lim Fpn.X
N X /=Std.X/
.u/ D e u ;
n!1
and we recognize the above limit as the characteristic function of the N.0; 1/
random quantity; see part (b).
The above proof glosses over the issue of whether indeed the convergence of
characteristic functions implies the convergence of c.d.f.s of the corresponding
random quantities. The relevant continuity theorem can be found in any of the
mathematical probability theory textbooks listed in the Bibliographical Com-
ments at the end of this volume.
238 Solutions to Selected Problems
Chapter 4
X
n
X.t/ D Ak cos 2kfk .t C ‚k / ;
kD0
Solution. The mean value of the signal (we use the independence conditions) is
EX.t/ D E A1 cos 2f0 .t C ‚1 / C C E An cos 2nf0 .t C ‚n /
Z P
d 1
D EA1 cos 2f0 .t C 1 / C C EAn
0 P
Z P
d n
cos 2nf0 .t C n / D 0:
0 P
The mean value doesn’t depend on time t; thus, the first requirement of stationarity
is satisfied.
The autocorrelation function is
because all the cross-terms are zero. The autocorrelation function is thus depending
only on (and not on t), so that the second condition of stationarity is also satisfied.
Z P =3
d
EX.t/ D E A cos 2f0 .t C ‚/ D EA cos.2f0 .t C //
0 P =3
ˇP =3 3EA
3EA ˇ
D sin.2f0 .tC // ˇ D sin.2f0 .t C P =3// sin.2f0 t/ :
2 D0 2
Since
pCq pq
sin p sin q D 2 cos sin ;
2 2
we finally get p
3 3
EX.t/ D EA cos 2f0 t C ;
2 3
which clearly depends on t in an essential way. Thus, the signal is not stationary.
The signal Y .t/ D X.t/ EX.t/ obviously has mean zero. Its autocovariance
function is
Now, Y .t; s/ can be easily calculated. Simplify the expression (and plot the
ACF) before you decide the stationarity issue for Y .t/.
Problem 4.3.8. Show that if X1 ; X2 ; : : : ; Xn are independent, exponentially dis-
tributed random quantities with identical p.d.f.s e x ; x 0, then their sum Yn D
X1 C X2 C C Xn has the p.d.f. e y y n1 =.n 1/Š; y 0. Use the technique
of characteristic functions (Fourier transforms) from Chap. 3. The random quantity
Yn is said to have the gamma probability distribution with parameter n. Thus, the
gamma distribution with parameter 1 is just the standard exponential distribution;
see Example 4.1.4. Produce plots of gamma p.d.f.s with parameters n D 2; 5; 20;
and 50. Comment on what you observe as n increases.
Solution. The characteristic function (see Chap. 3) for each of the Xi s is
Z 1
1
FX .u/ D Ee juX D e jux e x dx D :
0 1 ju
1
FYn .u/ D Ee j u.X1 CCXn / D Ee j uX1 Ee j uXn D :
.1 j u/n
The first term on the right side is zero, so that we get the recursive formula
1
Ffn .u/ D Ff .u/;
1 j u n1
which gives the desired result since Ff1 .u/ D FX .u/ D .1 j u/1 :
Chapter 5
Solution. (a)
2 D X .0/ D 16 C 8 D 24:
(b) Let us denote the operation of Fourier transform by F . Then writing perhaps a
little informally, we have
Z 1
SX .f / D X ./e j 2f d D .F X /.f /
1
D F 16e 5j j cos .20/ C 8 cos .10/ .f /
D 16 F .e 5j j / F .cos.20// .f / C 8 F .cos .10//.f /:
Chapter 5 241
But
25 10
F .e 5j j /.f / D D
52 C .2f / 2 25 C .2f /2
and
ı.f C 10/ C ı.f 10/
F .cos 20/.f / D ;
2
so that
F .e 5j j / F .cos .20// .f /
Z 1
10 ı.f s C 10/ C ı.f s 10/
D ds
1 25 C .2f /
2 2
Z 1 Z 1
ı.s .f C 10// ı.s .f 10//
D5 ds C ds
1 25 C .2f / 1 25 C .2f /
2 2
1 1
D5 C ;
25 C 4 2 .f C 10/2 25 C 4 2 .f 10/2
R
because we know that ı.f f0 /X.f / df D X.f0 /. Since F .cos 10/.f /
D ı.f C 5/=2 C ı.f 5/=2,
80 80
SX .f / D C C4ı.f C5/C4ı.f 5/:
25 C 4 2 .f C 10/2 25 C 4 2 .f 10/2
Another way to proceed would be to write e 5j j cos .20/ as e 5 .e j.20 /
e j.20 / /=2, for > 0 (similarly for negative s), and do the integration
directly in terms of just exponential functions (but it was more fun to do convo-
lutions with the Diracdelta impulses, wasn’t it?).
(c)
80 80 160
SX .0/ D C C 4ı.5/ C 4ı.5/ D :
25 C 4 2 100 25 C 4 2 100 25 C 400 2
X
N X
N X
N X
N
.tn tk /zn zk D EŒX .t/X.t C .tn tk //zn zk
nD1 kD1 nD1 kD1
X
N X
N
D EŒX .t C tk /X.t C tn /zn zk
nD1 kD1
242 Solutions to Selected Problems
X
N X
N
DE .zk X.t C tk // .zn X.t C tn //
nD1 kD1
ˇN ˇ2
ˇX ˇ
ˇ ˇ
D Eˇ zn X.t C tn /ˇ 0:
ˇ ˇ
nD1
Chapter 6
1.0
0.8
0.6
0.4
0.2
0.0
-1.0 -0.5 0.0 0.5 1.0 1.5 2.0
Solution. (a)
(b) With X ./ D ı./, the autocovariance function of the output is
Z 1Z 1
Y ./ D X . u C s/h.s/h.u/ ds d u
0 0
Z 1Z 1
D ı.s .u //.1 s/.1 u/ ds d u:
0 0
Chapter 6 243
As long as 0 < u < 1, which implies that 1 < < 1, the inner integral is
Z 1
ı.s .u //.1 s/ ds D 1 .u /;
0
0.30
0.25
0.20
0.15
0.10
0.05
0.00
-1.5 -1.0 -0.5 0.0 0.5 1.0 1.5
1
Y ./ D .jj 1/2 .jj C 2/ for 1 < < 1;
6
and it is zero outside the interval Œ1; 1; see the preceding figure.
(c) The transfer function of the system is
Z 1
sin2 .f / 2f sin.2f /
H.f / D .1 t/e 2jft dt D j :
0 2 2 f 2 4 2 f 2
0.25
0.20
0.15
0.10
0.05
0.00
−1.5 −1.0 −0.5 0.0 0.5 1.0 1.5
To find the value of the power transfer function at f D 0, one can apply
l’HOopital’s rule, differentiating the numerator and denominator of jH.f /j2 three
times, which yields jH.0/j2 D 1=4. Thus, the equivalent-noise bandwidth is
Z 1
1
BWn D .1 t/2 dt D 2=3:
2jH.0/j2 0
Checking the above plot of the power transfer function, one finds that the half-
power bandwidth is approximately BW1=2 D 0:553:
(d) The power spectrum of the output signal is given by
SY .f / D SX .f /jH.f /j2 ;
Therefore,
and
Problem 6.4.5. Consider the circuit shown in Fig. 6.4.2. Assume that the input,
X.t/, is the standard white noise.
(a) Find the power spectra SY .f / and SZ .f / of the outputs Y .t/ and Z.t/.
(b) Find the cross-covariance,
Y Z ./ D E Z.t/Y .t C / ;
Solution. (a) Note that X.t/ D Y .t/ C Z.t/. The impulse response function for the
“Z” circuit is
1 t =RC
hZ .t/ D e ;
RC
and Z 1
Y .t/ D X.t/ hZ .s/X.t s/ ds:
0
So the impulse response function for the “Y” circuit is
Z 1
1 s=RC
hY .t/ D ı.t/ e ı.t s/ ds
0 RC
1 t =RC
D ı.t/
e ; t 0:
RC
The Fourier transform of hY .t/ will give us the transfer function
Z 1
1 t =RC 2jft 2jRCf
HY .f / D ı.t/ e e dt D :
0 RC 1 C 2jRCf
For the standard white noise input X.t/, the power spectrum of the output is
equal to the power transfer function of the system. Indeed,
4 2 R2 C 2 f 2
SY .f / D 1 jHY .f /j2 D :
1 C 4 2 R2 C 2 f 2
The calculation of SX .f / has been done before, as the “Z” circuit represents
the standard RC-filter.
246 Solutions to Selected Problems
(b)
Chapter 7
Problem 7.4.2. A signal of the form x.t/ D 5e .t C2/ u.t/ is to be detected in the
presence of white noise with a flat power spectrum of 0:25 V2 =Hz using a matched
filter.
(a) For t0 D 2, find the value of the impulse response of the matched filter at
t D 0; 2; 4:
(b) Find the maximum output signal-to-noise ratio that can be achieved if t0 D 1:
(c) Find the detection time t0 that should be used to achieve an output signal-to-
noise ratio that is equal to 95% of the maximum signal-to-noise ratio discovered
in part (b).
(d) The signal x.t/ D 5e .t C2/ u.t/ is combined with white noise having a power
spectrum of 2 V2 =Hz. Find the value of RC such that the signal-to-noise ratio at
the output of the RC filter is maximal at t D 0:01 s.
Solution. (a) The impulse response function for the matched filter is of the form
where t0 is the detection time and u.t/ is the usual unit step function. Therefore,
So
S
.t0 D 0/ D 50e 4 :
N max
(c) The sought detection time t0 can thus be found by numerically solving the equa-
tion
50e 4 .1 e 2t0 / D 0:95 50e 4 ;
which yields, approximately, t0 D log 0:05=2 1:5.
Chapter 8
Problem 8.5.1. A zero-mean Gaussian random signal has the autocovariance func-
tion of the form
X ./ D e 0:1j j cos 2 :
Plot it. Find the power spectrum SX .f /: Write the covariance matrix for the signal
sampled at four time instants separated by 0.5 s. Find its inverse (numerically; use
any of the familiar computing platforms, such as Mathematica, Matlab, etc.).
1.0
0.5
0.0
−0.5
−1.0
−4 −2 0 2 4
1 1
Note that the Fourier transform in Mathematica is defined as a function of the angu-
lar velocity variable ! D 2f ; hence the above substitution. The plot of the power
spectrum is next.
0.5
0.4
0.3
0.2
0.1
0.0
-3 -2 -1 0 1 2 3
Problem 8.5.3. Find the joint p.d.f. of the signal from Problem 8.5.1 at t1 D 1; t2 D
1:5; t3 D 2, and t4 D 2:5. Write the integral formula for
Solution. Again, we use Mathematica to carry out all the numerical calculations.
First, we calculate the relevant covariance matrix.
In[3]:= CovGX = N[{{GX[0], GX[0.5], GX[1], GX[1.5]},
{GX[0.5], GX[0], GX[0.5], GX[1]},
{GX[1], GX[0.5], GX[0], GX[0.5]},
{GX[1.5], GX[1], GX[0.5], GX[0]}}] // MatrixForm
Out[3]=
Chapter 8 249
Note the quadratic form in four variables, x1, x2, x3, x4, in the exponent.
The calculation of the sought probability requires evaluation of the 4D integral,
P 2 X.1/ 2; 1 X.1:5/ 4; 1 X.2/ 1; 0 X.2:5/ 3
Z 2Z 4Z 1Z 3
D f .x1 ; x2 ; x3 ; x4 / dx1 dx2 dx3 dx4 ;
2 1 1 0
Problem 8.5.4. Show that if a 2D Gaussian random vector YE D .Y1 ; Y2 / has un-
correlated components Y1 ; Y2 , then those components are statistically independent
random quantities.
250 Solutions to Selected Problems
If the two components are uncorrelated, then D 0, and the formula takes the
following simplified shape:
1 1 y12 y22
fYE .y1 ; y2 / D exp C I
21 2 2 12 22
it factors into the product of the marginal densities of the two components of the
random vector YE :
1 1 y12 1 1 y22
fYE .y1 ; y2 / D p exp p exp ;
21 2 12 22 2 22
D fY1 .y1 / fY2 .y2 /;
Chapter 9
Problem 9.7.8. Verify that the additivity property (9.3.7) of any continuous func-
tion forces its linear form (9.3.8).
Solution. Our assumption is that a function C.v/ satisfies the functional equation
for any real numbers v; w. We will also assume that is it continuous although the
proof is also possible (but harder) under a weaker assumption of measurability. Tak-
ing v D 0; w D 0 gives
Now, iterating (S.9.1) n times, we get that for any real number v,
C.nv/ D n C.v/I
choosing v D 1=n, we see that C.1/ D nC.1=n/ for any positive integer n. Replac-
ing n by m in the last equality and combining it with the preceding equality with
v D 1=m, we get that for any positive integers n; m,
n n
C D C.1/:
m m
Finally, since any real number can be approximated by the rational numbers of the
form n=m, and since C was assumed to be continuous, we get that for any real
number,
C.v/ D v C.1/I
that is, C.v/ is necessarily a linear function.
Bibliographical Comments
The classic modern treatise on the theory of Fourier series and integrals which influenced much of
the harmonic analysis research in the second half of the twentieth century is
[1] A. Zygmund, Trigonometric Series, Cambridge University Press, Cambridge, UK, 1959.
More modest in scope, but perhaps also more usable for the intended reader of this text, are
[2] H. Dym and H. McKean, Fourier Series and Integrals, Academic Press, New York, 1972,
[3] T. W. Körner, Fourier Analysis, Cambridge University Press, Cambridge, UK, 1988,
[4] E. M. Stein and R. Shakarchi, Fourier Analysis: An Introduction, Princeton University Press,
Princeton, NJ, 2003,
[5] P. P. G. Dyke, An Introduction to Laplace Transforms and Fourier Series, Springer-Verlag,
New York, 1991.
[6] F. Constantinescu, Distributions and Their Applications in Physics, Pergamon Press, Oxford,
UK, 1980,
[7] T. Schucker, Distributions, Fourier Transforms and Some of Their Applications to Physics,
World Scientific, Singapore, 1991,
[8] A. I. Saichev and W. A. Woyczyński, Distributions in the Physical and Engineering Sciences,
Vol. 1: Distributional and Fractal Calculus, Integral Transforms and Wavelets, Birkhäuser
Boston, Cambridge, MA, 1997,
[9] A. I. Saichev and W. A. Woyczyński, Distributions in the Physical and Engineering Sciences,
Vol. 2: Linear, Nonlinear, Fractal and Random Dynamics in Continuous Media, Birkhäuser
Boston, Cambridge, MA, 2005.
Good elementary introductions to probability theory, and accessible reads for the engineering
and physical sciences audience, are
[12] M. Denker and W. A. Woyczyński, Introductory Statistics and Random Phenomena: Uncer-
tainty, Complexity, and Chaotic Behavior in Engineering and Science, Birkhäuser Boston,
Cambridge, MA, 1998,
253
254 Bibliographical Comments
deals with a broader issue of how randomness appears in diverse models of natural phenomena and
with the fundamental question of the meaning of randomness itself.
More ambitious, mathematically rigorous treatments of probability theory, based on measure
theory, can be found in
All three also contain a substantial account of the theory of stochastic processes.
Readers more interested in the general issues of statistical inference and, in particular, paramet-
ric estimation, should consult
[16] G. Casella and R. L. Berger, Statistical Inference, Duxbury, Pacific Grove, CA, 2002,
or
[17] D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for Engineers, Wiley,
New York, 1994.
The classic texts on the general theory of stationary processes (signals) are
[18] H. Cramer and M. R. Leadbetter, Stationary and Related Stochastic Processes: Sample Func-
tion Properties and Their Applications, Dover Books, New York, 2004,
[19] A. M. Yaglom, Correlation Thoery of Stationary and Related Random Functions, Vols. I and
II, Springer-Verlag, New York, 1987.
[20] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, MIT Press
and Wiley, New York, 1950.
[21] P. Bloomfield, Fourier Analysis of Time Series: An Introduction, Wiley, New York, 1976,
[22] P. J. Brockwell and R. A. Davis, Time Series: Theory and Methods, Springer-Verlag, New
York, 1991.
and difficult issues in the analysis of nonlinear and nonstationary random signals are tackled in
[23] M. B. Priestley, Non-linear and Non-stationary Time Series Analysis, Academic Press, Lon-
don, 1988,
[24] W. J. Fitzgerald, R. L. Smith, A. T. Walden, and P. C. Young, eds., Nonlinear and Nonsta-
tionary Signal Processing, Cambridge University Press, Cambridge, UK, 2000.
The latter is a collection of articles, by different authors, on the current research issues in the area.
A more engineering approach to random signal analysis can be found in a large number of
sources, including
[27] M. J. Roberts, Signals and Systems: Analysis of Signals Through Linear Systems, McGraw-
Hill, New York, 2003,
[28] B. D. O. Anderson, and J. B. Moore, Optimal Filtering, Dover Books, New York, 2005.
[29] I. A. Ibragimov and Y. A. Rozanov, Gaussian Random Processes, Springer -Verlag, New
York, 1978,
[30] M. A. Lifshits, Gaussian Random Functions, Kluwer Academic Publishers, Dordrecht,
the Netherlands, 1995.
and for a review of the modern mathematical theory of not necessarily second-order and not nec-
essarily Gaussian stochastic integrals, we refer to
[31] S. Kwapien and W. A. Woyczyński, Random Series and Stochastic Integrals: Single and
Multiple, Birkhäuser Boston, Cambridge, MA, 1992.
Index
A Burgers’
additive noise, 3 equation, 8
additivity property of probabilities, 53 turbulence, 6
adhesion model, 8
analog-to-digital conversion, 2
Anderson, B. D. O., 255 C
angular velocity, 21 Casella, G., 254
approximation of periodic signals, 34ff. Cauchy criterion of convergence, 185
at each time separately, 31 Cauchy–Schwartz inequality, 82, 102
at jump points, 32 causal system, 145
by Césaro averages, 32 central limit theorem, 60, 91, 175
Gibbs phenomenon, 33 error of approximation in —, 92
in power, 31 sketch of proof of —, 103
mean-square error, 31 Césaro average, 32
uniform, 31 chaotic behavior, 2
ARMA system, 157 Chebyshev’s inequality, 188, 191
Arzelà–Ascoli theorem, 198 circuit
autocorrelation function (ACF), 108ff. integrating —, 145
as a positive-definite function, 195 RC —, 149
autocovariance function, 106 complex
normalized, 107 exponentials, 14, 18
orthogonality of —, 18
numbers, 15
computational complexity, 2
B of fast Fourier transform, 45
band-limited noise, 133 computer algorithms, 211ff.
bandwidth conditional probability, 79
equivalent noise —, 142, 152 reverse —, 80
half-power —, 142, 152 confidence intervals, 96ff., 122
of finite-time integrating circuit, 154 for means, 96ff.
Bayes’ formula, 80 for variance, 98
Berger, R. L., 254 Constantinescu, F., 253
Berry–Eseen theorem, 95 control function
Billingsley, P., 199, 254 cumulative —, 201
binomial formula, 54 convergence in mean-square, 184
Bloomfield, P., 254 Cauchy criterion for —, 185
Branicky, Mike, x Cooley, J. W., 45
Brockwell, P. J., 121, 254 correlation
Brown, R. G., 254 coefficient, 82
Brownian motion, 4 covariance matrix, 182
257
258 Index
I
F Ibragimov, I. A., 255
fast Fourier transform (FFT), 44ff. impulse response function, 144
computational complexity of —, 45 causal, 144
filter realizable, 144
causal —, 172 integrating circuit, 145
matched —, 167ff. inverse Fourier transform, 36
Index 259
K P
Kallenberg, O., 114, 254 parameter estimation, 96ff.
kinetic energy, 65 Papoulis, A., 172, 254
Kolmogorov’s theorem Parseval’s formula, 24, 25, 49
on infinite sequences of r.q.s., 199 extended, 24, 25, 49
on sample path continuity, 170 passive tracer, 6
Körner, T. W., 31, 253 period of the signal, 1
Kronecker delta, 22 infinite —, 36
Kwapień, S., 255 periodogram, 131
Petrov, V. V., 95
L Piryatinska, A., x, 108, 122, 132
Landau’s asymptotic notation, 120 Pitman, J., 253
Laplace transform, 172 Poisson distribution, 55
law of large numbers (LLN), 89 polarization identity, 49
Leadbetter, M. R., 254 power
least-squares fit, 89 ff. spectral density, 135, 208
Lèvy process, 5 spectrum, 134ff.
Lifshits, M.A., 255 cumulative, 196
Loève, M., 187, 254 of interpolated digital signal, 137
Loparo, Ken, x transfer function, 152
Priestley, M.B., 254
probability
M
density function (p.d.f.), 57ff.
marginal probability distribution, 78
joint — of random vector, 75
matching filter, 168
normalization condition for —, 59
McKean, H., 253
mean power, 127 distribution, 58ff.
moments of r.q.s., 71 absolutely continuous, 56
Montgomery, D. C., 254 Bernoulli, 54
Moore, J. B., 255 binomial, 54
moving average conditional, 75
autoregressive (ARMA) —, 157 continuous, 56
general, 124 chi-square, 66, 98, 100
interpolated —, 139 table of —, 102
of white noise, 111 cumulative, 52
to filter noise, 117 exponential, 58
Gaussian (normal), 59, 91
calculations with —, 60
N mean and variance of —, 74
nabla operator, 5 table of —, 100
noise
joint —, 75
additive, 3
marginal, 78
white, 110, 134
mixed, 61
normal equations, 87
normalization condition, 55 n-point, 175
of function of r.q., 63
of kinetic energy, 65
O of square of Gaussian r.q., 66
optimal filter, 169ff. Poisson, 55
orthonormal basis, 22 quantiles of —, 96
in 3D space, 24 singular, 61
of complex exponentials, 22 Student’s-t , 95, 100
orthonormality, 18 table of —, 102
of complex exponentials, 18 uniform, 57
260 Index
S
Q Saichev, A. I., 43, 253
quantiles of probability distribution, 96 sample paths, 5
table of chi-square —, 100 continuity with probability 1 of —, 187ff.
differentiability of —, 184ff.
mean-square continuity of —, 184ff.
R mean-square differentiability of —, 184ff.
random sampling period, 2
errors, 80 scalar product, 22
harmonic oscillations, 109, 133 scatterplot, 86
superposition of —, 109, 133 Schucker, T., 253
random interval, 94 Schwartz distributions, 43–44, 253
numbers, 19
Shakarchi, R., 253
phase, 108
signals, 1
quantities (r.q.), 54ff.
analog, 1
absolute moments of —, 69
aperiodic, 1, 18, 35ff.
continuous —, 62ff.
characteristics of, 175
correlation of —, 81
delta-correlated, 135
discrete —, 61ff.
description of —, 1ff.
expectation of —, 71ff.
deterministic, 2
function of —, 64ff.
spectral representation of —, 21ff.
linear transformation of Gaussian —,
64 digital, 1, 140
moments of —, 71 discrete sampling of —, 157
singular —, 63 interpolated, 137ff.
standard deviation of —, 73 Diracdelta impulse, 41ff.
standardized —, 74 energy of —, 8
statistical independence of —, 73ff. filtering of —, 171
variance of —, 64 Gaussian, 175ff.
switching signal, 100 stationary, 171
variable, 48 jointly stationary —, 161
vectors, 67ff. nonintegrable, 36
covariance matrix of —, 164 periodic, 8, 23
Gaussian —, 68, 162ff. power of —, 31
2-D —, 68, 164 random, 4, 105ff., 200ff.
joint probability distribution of —, 67 switching, 113, 135
linear transformation of —, 160 types of —, 1ff.
moments of —, 73 mean power of —, 127ff.
walk, 6 stationary, 106, 196ff.
randomness, 1 discrete —, 196ff.
of signals, 4 Gaussian —, 181ff.
RC filter, 149ff., 162 power spectra of —, 127ff.
rectangular waveform, 26ff. strictly, 105
regression line, 89ff. second-order, weakly, 106
REM sleep, 108 simulation of —, 124, 210ff.
resolution, 2 spectral representation of —, 204ff.
reverse conditional probability, 80 stochastic, 2, 199
Roberts, M. I., 255 transmission of binary —, 80
Ross, S. M., 253 time average of —, 19
Index 261