EE603 Class Notes John Stensby
\int_0^T \vert X(t) \vert^2 \, dt < \infty .   (13-1)
Let φ_k(t), k ≥ 1, be a complete orthonormal basis for the vector space of complex-valued, square-integrable functions on [0, T]; that is,

\int_0^T \phi_k(t)\, \phi_j^*(t)\, dt = \begin{cases} 1, & k = j \\ 0, & k \ne j \end{cases} .   (13-2)
Any such function can be expanded as

X(t) = \sum_{m=1}^{\infty} x_m \phi_m(t), \qquad x_m = \int_0^T X(t)\, \phi_m^*(t)\, dt ,   (13-3)

for t in the interval [0, T]. In (13-3), convergence is not pointwise. Instead, Equation (13-3) converges in the mean-square sense

\lim_{N \to \infty} \int_0^T \Bigl\vert X(t) - \sum_{k=1}^{N} x_k \phi_k(t) \Bigr\vert^2 dt = 0 .   (13-4)
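As a quick numerical illustration (a sketch, not part of the original notes; the signal f(t) = t(T − t) and the grid are our own choices), the integrated squared error in (13-4) can be watched shrinking as terms are added, here using the orthonormal set φ_k(t) = √(2/T) sin((k − ½)πt/T):

```python
import numpy as np

T = 1.0
n = 2000
t = np.linspace(0.0, T, n + 1)
w = np.full(n + 1, T / n)
w[0] *= 0.5
w[-1] *= 0.5                                   # trapezoid quadrature weights
f = t * (T - t)                                # signal to expand (arbitrary choice)

def phi(k):
    # an orthonormal basis on [0, T]; these functions reappear later in the
    # chapter as the Wiener-process eigenfunctions (13-59)
    return np.sqrt(2.0 / T) * np.sin((k - 0.5) * np.pi * t / T)

def ms_error(N):
    # integrated squared error of the N-term partial sum, as in (13-4)
    approx = np.zeros_like(t)
    for k in range(1, N + 1):
        xk = np.sum(w * f * phi(k))            # x_k = integral of f(t) phi_k(t)
        approx += xk * phi(k)
    return np.sum(w * (f - approx) ** 2)

errors = [ms_error(N) for N in (1, 2, 4, 8, 16)]
print(errors)
```

The printed errors decrease monotonically toward zero, which is exactly the mode of convergence asserted by (13-4).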
It is natural to ask if similar results can be obtained for finite power, m.s. Riemann
integrable random processes. The answer is yes. Obviously, for random process X(t), the
expansion coefficients xk will be random variables. In general, the coefficients xk will be pair-wise
603CH13.DOC 13-1
correlated. However, by selecting the basis functions as the eigenfunctions of a certain integral
operator, it is possible to insure that the coefficients are pair-wise uncorrelated, a highly desirable
condition that simplifies many applications. When the basis functions are chosen to make the coefficients uncorrelated, the result is known as the Karhunen-Loève (K-L) expansion. These types of expansions have many applications in the areas of communication and control.
Assume that X(t) is zero mean with autocorrelation function R(t₁,t₂) = E[X(t₁)X*(t₂)] that is continuous on [0, T]×[0, T]. Note that R is Hermitian; that is, the function satisfies R(t₁,t₂) = R*(t₂,t₁). Also, it is nonnegative definite, a result that is shown easily. Let f(t) be any function defined on the interval [0, T]. Then, we can define the random variable
x_f = \int_0^T X(t)\, f^*(t)\, dt .   (13-5)
The mean of x_f is

E[x_f] = \int_0^T m(t)\, f^*(t)\, dt ,

a result that is zero under the working assumption that m(t) = E[X(t)] = 0. The variance of x_f is

E\bigl[\, \vert x_f \vert^2 \,\bigr] = \int_0^T \!\! \int_0^T f^*(t)\, R(t,\tau)\, f(\tau)\, dt\, d\tau .   (13-6)
Since variance is nonnegative, we have

\int_0^T \!\! \int_0^T f^*(t)\, R(t,\tau)\, f(\tau)\, dt\, d\tau \ge 0   (13-7)

for arbitrary function f(t), 0 ≤ t ≤ T. Condition (13-7) implies that autocorrelation function R(t,τ) is nonnegative definite. In many applications, R(t,τ) is positive definite in that

\int_0^T \!\! \int_0^T f^*(t)\, R(t,\tau)\, f(\tau)\, dt\, d\tau > 0   (13-8)

for every f that is not identically zero.
On L²[0,T], define the linear operator

A\, x(t) \equiv \int_0^T R(t,\tau)\, x(\tau)\, d\tau   (13-9)

(recall that L²[0,T] is the vector space of square integrable functions on [0,T]). Continuous, Hermitian, nonnegative definite autocorrelation function R(t,τ) forms the kernel of linear operator A. Operator A is an example of a compact, self-adjoint linear operator (for definitions of these terms see the appendix). Of particular interest are the eigenfunctions φ_k(t) and eigenvalues λ_k of A; they satisfy

\lambda_k\, \phi_k(t) = \int_0^T R(t,\tau)\, \phi_k(\tau)\, d\tau .   (13-10)
Much is known about the eigenfunctions and eigenvalues of linear operator A. We state a number of properties of the eigenvectors/eigenvalues. Proofs that are not given here can be found in the literature.
1. For a Hermitian, nonnegative definite, continuous kernel R(t,τ), there exists at least one square-integrable eigenfunction with a nonzero eigenvalue.
3. If φ₁(t) and φ₂(t) are eigenfunctions corresponding to the same eigenvalue λ, then α₁φ₁(t) + α₂φ₂(t) is also an eigenfunction for any constants α₁ and α₂ that are not both zero.
5. The eigenvalues are countable (i.e., a 1-1 correspondence can be established between the eigenvalues and the integers). Furthermore, the eigenvalues are bounded. In fact, each eigenvalue λ_k satisfies

\inf_{\Vert f \Vert = 1} \int_0^T \!\! \int_0^T f^*(t)\, R(t,\tau)\, f(\tau)\, dt\, d\tau \ \le\ \lambda_k \ \le\ \sup_{\Vert f \Vert = 1} \int_0^T \!\! \int_0^T f^*(t)\, R(t,\tau)\, f(\tau)\, dt\, d\tau < \infty .   (13-11)
6. Every nonzero eigenvalue has a finite-dimensional eigenspace. That is, there are a finite number of linearly independent eigenfunctions associated with each nonzero eigenvalue.
7. If R(t,τ) is positive definite, the orthonormalized eigenfunctions form a complete orthonormal basis for the vector space of all square integrable functions on [0, T]. If R is not positive definite, there is a zero eigenvalue,
and you must include its orthonormalized eigenfunction(s) to get a complete orthonormal basis of L²[0,T].
8. The eigenvalues are nonnegative. For a positive definite kernel (t,), the eigenvalues are
positive. To establish this claim, use (13-8) and (13-2) and write
\lambda_i = \int_0^T \phi_i^*(t)\, [\, \lambda_i\, \phi_i(t) \,]\, dt = \int_0^T \phi_i^*(t) \Bigl[ \int_0^T R(t,\tau)\, \phi_i(\tau)\, d\tau \Bigr] dt = \int_0^T \!\! \int_0^T \phi_i^*(t)\, R(t,\tau)\, \phi_i(\tau)\, d\tau\, dt \ge 0 .   (13-12)
9. The sum of the eigenvalues is the expected value of the process energy in the interval [0, T].
That is,

E\Bigl[ \int_0^T \vert X(t) \vert^2\, dt \Bigr] = \int_0^T R(t,t)\, dt = \sum_{k=1}^{\infty} \lambda_k .   (13-13)
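Property 9 can be spot-checked numerically. Discretizing the kernel on an n-point grid turns the integral operator into the matrix R·Δt, whose eigenvalues approximate the λ_k; in this sketch (the kernel R(t,τ) = min{t,τ} and the grid are our own choices), ∫R(t,t)dt = T²/2:

```python
import numpy as np

T, n = 1.0, 400
dt = T / n
t = (np.arange(n) + 0.5) * dt                  # midpoint grid on [0, T]
R = np.minimum.outer(t, t)                     # kernel R(t,tau) = min{t, tau}

lam = np.linalg.eigvalsh(R * dt)               # eigenvalues of the discretized operator
trace_kernel = np.sum(np.diag(R)) * dt         # integral of R(t,t), equals T^2/2 here

print(lam.sum(), trace_kernel, max(lam))
```

The eigenvalue sum reproduces the kernel trace, in agreement with (13-13); the largest eigenvalue also agrees with the closed-form value 4T²/π² obtained for this kernel later in the chapter.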
With items 10 through 15, we want to establish Mercer's theorem. This theorem states that you can represent the autocorrelation function R(t,τ) by the expansion

R(t,\tau) = \sum_{k=1}^{\infty} \lambda_k\, \phi_k(t)\, \phi_k^*(\tau) .   (13-14)

We will not give a rigorous proof of this result, but we will come close.
10. Let φ₁(t) and λ₁ be an eigenfunction and eigenvalue pair for kernel R(t,τ), the nonnegative definite autocorrelation function. Define the reduced kernel

R_1(t,\tau) \equiv R(t,\tau) - \lambda_1\, \phi_1(t)\, \phi_1^*(\tau)   (13-15)

and the reduced process

X_1(t) \equiv X(t) - \phi_1(t) \int_0^T X(\tau)\, \phi_1^*(\tau)\, d\tau .   (13-16)
The autocorrelation of X₁(t) is

E[X_1(t)\, X_1^*(\tau)] = E\Bigl[ \Bigl( X(t) - \phi_1(t)\!\int_0^T\! X(s_1)\, \phi_1^*(s_1)\, ds_1 \Bigr) \Bigl( X(\tau) - \phi_1(\tau)\!\int_0^T\! X(s_2)\, \phi_1^*(s_2)\, ds_2 \Bigr)^{\!*} \Bigr]

= E[X(t) X^*(\tau)] - \phi_1^*(\tau) \int_0^T E[X^*(s_2) X(t)]\, \phi_1(s_2)\, ds_2 - \phi_1(t) \int_0^T E[X(s_1) X^*(\tau)]\, \phi_1^*(s_1)\, ds_1
\ \ + \phi_1(t)\, \phi_1^*(\tau) \int_0^T \!\! \int_0^T E[X(s_1) X^*(s_2)]\, \phi_1^*(s_1)\, \phi_1(s_2)\, ds_1\, ds_2   (13-17)

= R(t,\tau) - \phi_1^*(\tau) \int_0^T R(t,s_2)\, \phi_1(s_2)\, ds_2 - \phi_1(t) \int_0^T R(s_1,\tau)\, \phi_1^*(s_1)\, ds_1 + \phi_1(t)\, \phi_1^*(\tau) \int_0^T \!\! \int_0^T R(s_1,s_2)\, \phi_1^*(s_1)\, \phi_1(s_2)\, ds_1\, ds_2 .
Use R(t,τ) = R*(τ,t), and take the complex conjugate of the eigenfunction relationship, to obtain

\lambda_1\, \phi_1^*(\tau) = \int_0^T R(s,\tau)\, \phi_1^*(s)\, ds .   (13-18)
With (13-18), the two cross terms on the right-hand-side of (13-17) become

\phi_1^*(\tau) \int_0^T R(t,s)\, \phi_1(s)\, ds = \phi_1(t) \int_0^T R(s,\tau)\, \phi_1^*(s)\, ds = \lambda_1\, \phi_1(t)\, \phi_1^*(\tau) .   (13-19)
Likewise, the double-integral term on the right-hand-side of (13-17) is

\int_0^T \!\! \int_0^T R(s_1,s_2)\, \phi_1^*(s_1)\, \phi_1(s_2)\, ds_1\, ds_2 = \int_0^T \phi_1^*(s_1) \Bigl[ \int_0^T R(s_1,s_2)\, \phi_1(s_2)\, ds_2 \Bigr] ds_1

= \int_0^T \phi_1^*(s_1)\, \lambda_1\, \phi_1(s_1)\, ds_1   (13-20)

= \lambda_1 .

Combining (13-17), (13-19) and (13-20) shows that

E[X_1(t)\, X_1^*(\tau)] = R(t,\tau) - \lambda_1\, \phi_1(t)\, \phi_1^*(\tau) = R_1(t,\tau) ;   (13-21)

that is, R₁(t,τ) is the autocorrelation function of the reduced process X₁(t).
11. As defined by (13-15), R₁(t,τ) may be zero for all t, τ. If not, R₁(t,τ) can be used as the kernel of integral equation (13-10). This reformulated operator equation has a new eigenfunction φ₂(t) and new nonzero eigenvalue λ₂ (this follows from Property #1 above). They can be used to define the further-reduced kernel

R_2(t,\tau) \equiv R_1(t,\tau) - \lambda_2\, \phi_2(t)\, \phi_2^*(\tau) .   (13-22)

Furthermore, the new eigenfunction φ₂(t) is orthogonal to the old eigenfunction φ₁(t). To see this, start with the defining relationship

\lambda_2\, \phi_2(t) = \int_0^T R_1(t,\tau)\, \phi_2(\tau)\, d\tau .   (13-23)
Substitute (13-15) into (13-23) to obtain

\lambda_2\, \phi_2(t) = \int_0^T R(t,\tau)\, \phi_2(\tau)\, d\tau - \lambda_1\, \phi_1(t) \int_0^T \phi_1^*(\tau)\, \phi_2(\tau)\, d\tau .   (13-24)
Multiply both sides of (13-24) by φ₁*(t), and integrate over [0, T], to obtain

\lambda_2 \int_0^T \phi_1^*(t)\, \phi_2(t)\, dt = \int_0^T \!\! \int_0^T R(t,\tau)\, \phi_1^*(t)\, \phi_2(\tau)\, d\tau\, dt - \lambda_1 \int_0^T \vert \phi_1(t) \vert^2\, dt \int_0^T \phi_1^*(\tau)\, \phi_2(\tau)\, d\tau   (13-25)

= \int_0^T \phi_2(\tau) \Bigl[ \int_0^T R(t,\tau)\, \phi_1^*(t)\, dt \Bigr] d\tau - \lambda_1 \int_0^T \phi_1^*(\tau)\, \phi_2(\tau)\, d\tau .
Use (13-18) (which results from the Hermitian symmetry of R(t,τ)) to evaluate the term in the bracket
and write

\lambda_2 \int_0^T \phi_1^*(t)\, \phi_2(t)\, dt = \int_0^T \phi_2(\tau) \Bigl[ \int_0^T R(t,\tau)\, \phi_1^*(t)\, dt \Bigr] d\tau - \lambda_1 \int_0^T \phi_1^*(\tau)\, \phi_2(\tau)\, d\tau

= \int_0^T \phi_2(\tau)\, \lambda_1\, \phi_1^*(\tau)\, d\tau - \lambda_1 \int_0^T \phi_1^*(\tau)\, \phi_2(\tau)\, d\tau   (13-26)

= (\lambda_1 - \lambda_1) \int_0^T \phi_1^*(\tau)\, \phi_2(\tau)\, d\tau = 0 ,

so that

\lambda_2 \int_0^T \phi_1^*(t)\, \phi_2(t)\, dt = 0 .   (13-27)

Since λ₂ ≠ 0, eigenfunctions φ₁ and φ₂ are orthogonal.
Note that, while φ₂(t) and λ₂ were obtained as an eigenfunction and eigenvalue pair for kernel R₁, φ₂(t) and λ₂ are an eigenfunction and eigenvalue, respectively, for kernel R (as can be seen from (13-24) and the fact that φ₁ and φ₂ are orthogonal).
12. Clearly, as long as the resulting autocorrelation function is nonzero, the process outlined in items 10 and 11 can be repeated. After N steps, the procedure yields the reduced kernel

R_N(t,\tau) \equiv R(t,\tau) - \sum_{k=1}^{N} \lambda_k\, \phi_k(t)\, \phi_k^*(\tau) .   (13-28)
13. R_N(t,τ) may vanish, and the algorithm for computing eigenvalues may terminate, for some finite N. In this case, there exists a finite number of nonzero eigenvalues, and the autocorrelation function can be written as

R(t,\tau) = \sum_{k=1}^{N} \lambda_k\, \phi_k(t)\, \phi_k^*(\tau)   (13-29)

for some N. In this case, the kernel R(t,τ) is said to be degenerate; also, it is easy to show that the K-L expansion of X(t) contains only the corresponding N terms.
14. If the case outlined by 13) does not hold, there exists a countably infinite number of nonzero eigenvalues. However, R_N(t,τ) converges as N → ∞. First, we show convergence for the special case t = τ; next, we use this special case to establish convergence for the general case, t ≠ τ. To this end, define the partial sum

S_{n,m}(t,\tau) \equiv \sum_{k=n}^{m} \lambda_k\, \phi_k(t)\, \phi_k^*(\tau) = \Bigl[ \sqrt{\lambda_n}\, \phi_n(t) \ \cdots \ \sqrt{\lambda_m}\, \phi_m(t) \Bigr] \begin{bmatrix} \sqrt{\lambda_n}\, \phi_n^*(\tau) \\ \vdots \\ \sqrt{\lambda_m}\, \phi_m^*(\tau) \end{bmatrix} .   (13-30)
Each reduced kernel R_N is itself a nonnegative definite autocorrelation function (see items 10 through 12), so R_N(t,t) ≥ 0 and

S_{1,N}(t,t) = \sum_{k=1}^{N} \lambda_k\, \vert \phi_k(t) \vert^2 \le R(t,t) .   (13-31)

As a function of index N, the sequence S_{1,N}(t,t) is increasing but always bounded above by R(t,t), as shown by (13-31). Hence, as N → ∞, both S_{1,N}(t,t) and R_N(t,t) must converge to some limit.
For the general case t ≠ τ, convergence of S_{1,N}(t,τ) can be shown by establishing the fact that partial sum S_{n,m}(t,τ) → 0 as n, m → ∞ (in any order). To establish this fact, consider partial sum S_{n,m}(t,τ) to be the inner product of two vectors as shown by (13-30); one vector contains the terms √λ_k φ_k(t), and the other contains the terms √λ_k φ_k*(τ), n ≤ k ≤ m. Now, apply the Cauchy-Schwarz inequality (see Theorem 11-4) to inner product S_{n,m}(t,τ) and obtain

\bigl\vert S_{n,m}(t,\tau) \bigr\vert = \Bigl\vert \sum_{k=n}^{m} \lambda_k\, \phi_k(t)\, \phi_k^*(\tau) \Bigr\vert \le \sqrt{ S_{n,m}(t,t)\; S_{n,m}(\tau,\tau) } .   (13-32)
As N → ∞, the convergence of S_{1,N}(t,t) implies that partial sum S_{n,m}(t,t) → 0 as n, m → ∞ (in any order). Hence, the right-hand-side of (13-32) approaches zero as n, m → ∞ (in any order), and this establishes the convergence of S_{1,N}(t,τ) and (13-28) for the general case t ≠ τ.
15. As it turns out, R_N(t,τ) converges to zero as N → ∞, a claim that is supported by the following argument. For each m ≤ N and fixed t, multiply R_N(t,τ) by φ_m(τ) and integrate to obtain

\int_0^T R_N(t,\tau)\, \phi_m(\tau)\, d\tau = \int_0^T R(t,\tau)\, \phi_m(\tau)\, d\tau - \int_0^T \Bigl[ \sum_{k=1}^{N} \lambda_k\, \phi_k(t)\, \phi_k^*(\tau) \Bigr] \phi_m(\tau)\, d\tau

= \lambda_m\, \phi_m(t) - \lambda_m\, \phi_m(t) = 0 .   (13-33)
For each fixed t and all m ≤ N, R_N(t,τ) has zero component in the φ_m(τ) direction. Equation (13-33) implies that

\lim_{N \to \infty} \int_0^T R_N(t,\tau)\, \phi_m(\tau)\, d\tau = 0   (13-34)
for each m ≥ 1. By the continuity of the inner product, we can interchange limit and integration in (13-34) to see that the limit of R_N(t,τ) has no component in the φ_m(τ) direction, m ≥ 1. Since the eigenfunctions φ_m(τ) span the vector space L²[0,T] of square-integrable functions, we see that R_N(t,τ) → 0 as N → ∞; that is,

R(t,\tau) = \sum_{k=1}^{\infty} \lambda_k\, \phi_k(t)\, \phi_k^*(\tau) ,   (13-35)

a result known as Mercer's theorem. In fact, the sum in (13-35) can be shown to converge absolutely and uniformly (see Riesz and Sz.-Nagy, 1965).
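Mercer's theorem can be illustrated with the same kernel-discretization idea used in this chapter (a sketch, not from the notes; the exponential kernel and the grid size are our own choices): the scaled eigenvectors of the matrix R·Δt stand in for the φ_k, and the truncated sum Σλ_kφ_k(t)φ_k*(τ) approaches R(t,τ):

```python
import numpy as np

T, n = 1.0, 300
dt = T / n
t = (np.arange(n) + 0.5) * dt
R = np.exp(-np.abs(t[:, None] - t[None, :]))   # kernel e^{-|t-tau|}

lam, V = np.linalg.eigh(R * dt)                # eigenpairs, ascending order
phi = V / np.sqrt(dt)                          # columns approximate orthonormal phi_k

def mercer_error(N):
    # sup-norm error of the N-term Mercer sum, largest eigenvalues first
    idx = np.argsort(lam)[::-1][:N]
    approx = (phi[:, idx] * lam[idx]) @ phi[:, idx].T
    return float(np.max(np.abs(R - approx)))

errs = [mercer_error(N) for N in (1, 5, 20, 100)]
print(errs)
```

The error shrinks as terms are added, consistent with the uniform convergence claimed for (13-35).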
Karhunen-Loève Expansion
In an expansion of the form (13-3), we show that the coefficients x_k will be pair-wise uncorrelated if, and only if, the basis functions φ_k are eigenfunctions of (13-10). Then, we show that the resulting expansion converges to X(t) in the mean-square sense.
Theorem 13-1: Suppose that finite-power random process X(t) has an expansion of the form

X(t) = \sum_{m=1}^{\infty} x_m\, \phi_m(t), \qquad x_m = \int_0^T X(\tau)\, \phi_m^*(\tau)\, d\tau ,   (13-36)

for some complete orthonormal set φ_k(t), k ≥ 1, of basis functions. If the coefficients x_n satisfy
E[x_n\, x_m^*] = \begin{cases} \lambda_n, & n = m \\ 0, & n \ne m \end{cases}   (13-37)

(i.e., the coefficients are pair-wise uncorrelated, and x_n has a variance equal to eigenvalue λ_n), then the basis functions φ_n(t) must be eigenfunctions of (13-9); that is, they must satisfy

\int_0^T R(t,\tau)\, \phi_n(\tau)\, d\tau = \lambda_n\, \phi_n(t), \qquad 0 \le t \le T .   (13-38)
Proof: Multiply the expansion in (13-36) (the first equation in (13-36)) by x_n*, take the expectation, and use (13-37) to obtain

E[X(t)\, x_n^*] = \sum_{m=1}^{\infty} E[x_m\, x_n^*]\, \phi_m(t) = E[\vert x_n \vert^2]\, \phi_n(t) = \lambda_n\, \phi_n(t) .   (13-39)
Now, multiply the complex conjugate of the second equation in (13-36) by X(t), and take the
expectation, to obtain
E[X(t)\, x_n^*] = \int_0^T E[X(t)\, X^*(\tau)]\, \phi_n(\tau)\, d\tau = \int_0^T R(t,\tau)\, \phi_n(\tau)\, d\tau .   (13-40)
Comparison of (13-39) and (13-40) yields

\int_0^T R(t,\tau)\, \phi_n(\tau)\, d\tau = \lambda_n\, \phi_n(t), \qquad 0 \le t \le T,

where λ_n is given by (13-37). In addition to this result, the K-L coefficients will be orthogonal whenever the basis functions are eigenfunctions, as the next theorem shows.
Theorem 13-2: If the orthonormal basis functions φ_n(t) are eigenfunctions of (13-38), then the expansion coefficients x_n are pair-wise uncorrelated with variance E[|x_n|²] = λ_n.
Proof: Suppose the orthonormal basis functions φ_n(t) satisfy integral equation (13-38). Compute

E[x_n\, x_m^*] = E\Bigl[ \Bigl\{ \int_0^T X(t)\, \phi_n^*(t)\, dt \Bigr\}\, x_m^* \Bigr] = \int_0^T E[X(t)\, x_m^*]\, \phi_n^*(t)\, dt .   (13-41)

Now, use (13-39) to write

E[x_n\, x_m^*] = \int_0^T \lambda_m\, \phi_m(t)\, \phi_n^*(t)\, dt = \lambda_m\, \delta_{mn} ,
which shows that the coefficients are pair-wise uncorrelated. Theorems 13-1 and 13-2 establish
the claim that the x_k will be uncorrelated if, and only if, the basis functions satisfy integral equation (13-38).
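This if-and-only-if claim can be checked by simulation (a sketch, not from the notes; the kernel min{t,τ}, the grid, and the path count are our own choices): sample paths of a zero-mean Gaussian process are projected onto the discretized eigenfunctions, and the sample covariance of the coefficients comes out approximately diagonal with the eigenvalues on the diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, paths = 1.0, 200, 20000
dt = T / n
t = (np.arange(n) + 0.5) * dt
R = np.minimum.outer(t, t)                     # Wiener-type kernel min{t, tau}

lam, V = np.linalg.eigh(R * dt)
order = np.argsort(lam)[::-1][:4]              # keep the four largest eigenvalues
lam, phi = lam[order], V[:, order] / np.sqrt(dt)

# draw zero-mean Gaussian sample paths with covariance R (tiny jitter for Cholesky)
L = np.linalg.cholesky(R + 1e-10 * np.eye(n))
X = rng.standard_normal((paths, n)) @ L.T

coef = (X @ phi) * dt                          # x_k = integral of X(t) phi_k(t)
C = coef.T @ coef / paths                      # sample covariance of the coefficients
print(np.round(C, 4))
print(lam)
```

The diagonal of C tracks the λ_k, and the off-diagonal entries hover near zero, as Theorem 13-2 predicts.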
Theorem 13-3: Let X(t) be a finite-power random process on [0, T]. The Karhunen-Loève expansion

X(t) = \sum_{m=1}^{\infty} x_m\, \phi_m(t), \qquad x_m = \int_0^T X(\tau)\, \phi_m^*(\tau)\, d\tau ,   (13-42)

where the coefficients are pair-wise uncorrelated and the basis functions satisfy the integral equation (13-38), converges to X(t) in the mean-square sense.
Proof: Evaluate the mean-square error between the series and the process to obtain

E\Bigl[ \Bigl\vert X(t) - \sum_{m=1}^{\infty} x_m \phi_m(t) \Bigr\vert^2 \Bigr] = E\Bigl[ X(t) \Bigl( X(t) - \sum_{m=1}^{\infty} x_m \phi_m(t) \Bigr)^{\!*} \Bigr] - E\Bigl[ \sum_{n=1}^{\infty} x_n \phi_n(t) \Bigl( X(t) - \sum_{m=1}^{\infty} x_m \phi_m(t) \Bigr)^{\!*} \Bigr] .   (13-43)

The first term on the right-hand-side of (13-43) is

E\Bigl[ X(t) \Bigl( X(t) - \sum_{m=1}^{\infty} x_m \phi_m(t) \Bigr)^{\!*} \Bigr] = R(t,t) - \sum_{m=1}^{\infty} \lambda_m\, \phi_m(t)\, \phi_m^*(t) = 0   (13-44)

(E[X(t) x_m*] = λ_m φ_m(t), first established by (13-39), was used here). The fact that the right hand side of (13-44) is zero follows from Mercer's Theorem (discussed in Property 15 above). On the other hand, the second term on the right-hand-side of (13-43) is

E\Bigl[ \sum_{n=1}^{\infty} x_n \phi_n(t) \Bigl( X(t) - \sum_{m=1}^{\infty} x_m \phi_m(t) \Bigr)^{\!*} \Bigr] = \sum_{n=1}^{\infty} E[x_n X^*(t)]\, \phi_n(t) - \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} E[x_n x_m^*]\, \phi_n(t)\, \phi_m^*(t)

= \sum_{n=1}^{\infty} \lambda_n\, \phi_n^*(t)\, \phi_n(t) - \sum_{n=1}^{\infty} \lambda_n\, \phi_n(t)\, \phi_n^*(t) = 0 .   (13-45)

On the right-hand-side of (13-45), E[x_n X*(t)] was evaluated with the aid of (13-39); also, the fact that the coefficients are uncorrelated was used in (13-45). Equations (13-43) through (13-45) imply

E\Bigl[ \Bigl\vert X(t) - \sum_{m=1}^{\infty} x_m \phi_m(t) \Bigr\vert^2 \Bigr] = 0 ,   (13-46)

so the expansion converges to X(t) in mean square for each t in [0, T].
As it turns out, the K-L expansion need contain only eigenfunctions that correspond to nonzero eigenvalues. Suppose φ(t) is an eigenfunction with eigenvalue zero; the variance of its coefficient x is

E[\vert x \vert^2] = \int_0^T \!\! \int_0^T R(t,\tau)\, \phi^*(t)\, \phi(\tau)\, dt\, d\tau = \int_0^T \phi^*(t) \Bigl[ \int_0^T R(t,\tau)\, \phi(\tau)\, d\tau \Bigr] dt   (13-47)

= 0.

That is, in the K-L expansion, the coefficient x of φ(t) has zero variance, and it need not be included in the expansion.
Example 13-1 (K-L Expansion of the Wiener Process): From Chapter 6, recall that the Wiener process has autocorrelation function

R(t_1, t_2) = 2D \min\{ t_1, t_2 \} ,   (13-48)
where D is the diffusion constant. Substitute (13-48) into (13-38) and obtain
2D \int_0^T \min\{t, \tau\}\, \phi_n(\tau)\, d\tau = \lambda_n\, \phi_n(t) ,   (13-49)

which can be written as

2D \int_0^t \tau\, \phi_n(\tau)\, d\tau + 2Dt \int_t^T \phi_n(\tau)\, d\tau = \lambda_n\, \phi_n(t) ,   (13-50)
for 0 t T. With respect to t, we must differentiate (13-50) twice; the first derivative produces
2D \int_t^T \phi_n(\tau)\, d\tau = \lambda_n\, \phi_n'(t) ,   (13-51)

and the second derivative yields

\phi_n''(t) + \frac{2D}{\lambda_n}\, \phi_n(t) = 0 ,   (13-52)
a harmonic-oscillator equation with general solution

\phi_n(t) = \alpha_n \sin(\beta_n t) + \gamma_n \cos(\beta_n t), \qquad \beta_n \equiv \sqrt{2D/\lambda_n} ,   (13-53)

where α_n, β_n and γ_n are constants that must be chosen so that φ_n satisfies appropriate boundary conditions. Setting t = 0 in (13-50) produces the boundary condition

\phi_n(0) = 0   (13-54)

for all n. Because of (13-54), Equation (13-53) implies that all γ_n = 0. In a similar manner, Equation (13-51) implies that φ_n'(T) = 0, a result that leads to the conclusion

\beta_n = \sqrt{\frac{2D}{\lambda_n}} = \frac{(2n-1)\pi}{2T} = \frac{(n - \tfrac{1}{2})\pi}{T}, \qquad n = 1, 2, 3, \ldots   (13-55)

so that

\lambda_n = \frac{2D\, T^2}{[\,(n - \tfrac{1}{2})\pi\,]^2}, \qquad n = 1, 2, 3, \ldots   (13-56)
The eigenfunctions must have unit energy:

\int_0^T ( \alpha_n \sin \beta_n t )^2\, dt = \frac{\alpha_n^2\, T}{2} = 1 ,   (13-57)

so that

\alpha_n = \sqrt{\frac{2}{T}} .   (13-58)

After using γ_n = 0, (13-58) and (13-55) in Equation (13-53), the eigenfunctions can be expressed as

\phi_n(t) = \sqrt{\frac{2}{T}}\, \sin\Bigl( \frac{(n - \frac{1}{2})\pi}{T}\, t \Bigr), \qquad 0 \le t \le T .   (13-59)
The K-L expansion of the Wiener process is therefore

X(t) = \sqrt{\frac{2}{T}} \sum_{n=1}^{\infty} x_n \sin\Bigl( \frac{(n - \frac{1}{2})\pi}{T}\, t \Bigr), \qquad 0 \le t \le T ,   (13-60)

where

x_n = \sqrt{\frac{2}{T}} \int_0^T X(t)\, \sin\Bigl( \frac{(n - \frac{1}{2})\pi}{T}\, t \Bigr) dt .   (13-61)
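The closed-form results (13-56) can be compared against a brute-force eigendecomposition of the discretized kernel 2D·min{t,τ} (a sketch, not from the notes; the values of D, T, and the grid size are arbitrary):

```python
import numpy as np

D, T, n = 0.5, 2.0, 500
dt = T / n
t = (np.arange(n) + 0.5) * dt
R = 2.0 * D * np.minimum.outer(t, t)           # Wiener-process kernel (13-48)

lam = np.sort(np.linalg.eigvalsh(R * dt))[::-1]  # numerical eigenvalues, descending
k = np.arange(1, 6)
lam_exact = 2.0 * D * T**2 / ((k - 0.5) * np.pi) ** 2   # Eq. (13-56)
print(lam[:5])
print(lam_exact)
```

The five largest numerical eigenvalues agree with (13-56) to discretization accuracy.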
Next, consider the random-phase sinusoid on the interval [0, T₀],

X(t) = A \cos( \omega_0 t + \theta ) ,   (13-62)

where A and ω₀ are constants, and θ is a random variable that is uniformly distributed on (−π, π]. This process is zero mean and wide-sense stationary with autocorrelation function

R(\tau) = \frac{A^2}{2} \cos \omega_0 \tau ,   (13-63)

so the integral equation (13-38) takes the form

\int_0^{T_0} \frac{A^2}{2} \cos \omega_0 (t - \tau)\, \phi_n(\tau)\, d\tau = \lambda_n\, \phi_n(t), \qquad 0 \le t \le T_0   (13-64)

(we take ω₀T₀ to be an integer multiple of 2π, so that sinusoids at frequency ω₀ and its harmonics are orthogonal on [0, T₀]).
The eigenvalues and eigenfunctions are found easily. First, use Mercer's theorem (13-35) as a guide, and write the kernel as a sum of products of functions of t and of τ:

R(t - \tau) = \frac{A^2}{2} \cos \omega_0 (t - \tau) = \frac{A^2}{2} \cos \omega_0 t\, \cos \omega_0 \tau + \frac{A^2}{2} \sin \omega_0 t\, \sin \omega_0 \tau .   (13-65)
Note that this kernel is degenerate. After normalization, the eigenfunctions that correspond to nonzero eigenvalues are

\phi_1(t) = \sqrt{2/T_0}\, \cos \omega_0 t

\phi_2(t) = \sqrt{2/T_0}\, \sin \omega_0 t .   (13-66)

Both of these eigenfunctions correspond to the eigenvalue λ = T₀A²/4; note that λ = T₀A²/4 has an eigenspace of dimension two. Also, note that there are a countably infinite number of eigenfunctions in the null space of the operator. That is, for k > 1, the eigenfunctions
\phi_{1k}(t) = \sqrt{2/T_0}\, \cos k\omega_0 t

\phi_{2k}(t) = \sqrt{2/T_0}\, \sin k\omega_0 t   (13-67)

all have eigenvalue zero, and they contribute nothing to the K-L expansion. The expansion reduces to the two-term sum

X(t) = x_1\, \phi_1(t) + x_2\, \phi_2(t) ,   (13-68)

where

x_1 = +A \sqrt{T_0/2}\, \cos \theta

x_2 = -A \sqrt{T_0/2}\, \sin \theta .   (13-69)
As expected, we have

E[x_1^2] = E[x_2^2] = T_0 A^2 / 4 = \lambda, \qquad E[x_1\, x_2] = 0 .   (13-70)
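These degenerate-kernel claims are easy to confirm numerically (a sketch, not from the notes; A, ω₀, and T₀ are arbitrary choices with ω₀T₀ a multiple of 2π): the discretized kernel has exactly two significant eigenvalues, both equal to T₀A²/4:

```python
import numpy as np

A, T0 = 2.0, 1.0
w0 = 2.0 * np.pi * 3 / T0                      # omega_0 * T0 = 6*pi
n = 600
dt = T0 / n
t = (np.arange(n) + 0.5) * dt

R = (A**2 / 2.0) * np.cos(w0 * (t[:, None] - t[None, :]))   # kernel (13-63)
lam = np.sort(np.linalg.eigvalsh(R * dt))[::-1]

print(lam[:3], T0 * A**2 / 4.0)
```

The two nonzero eigenvalues match λ = T₀A²/4 and the remaining eigenvalues are numerically zero, confirming the rank-two (degenerate) structure of (13-65).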
Suppose X(t) is a wide sense stationary process with a rational power spectrum. That is,
S(\omega) = \frac{ N(\omega^2) }{ D(\omega^2) } ,   (13-71)

where N and D are polynomials. Such a process occurs if white noise is passed through a linear,
time-invariant filter. Hence, many applications are served well by modeling their processes as
As it turns out, a process with a rational power spectrum can be expanded in a K-L
expansion where the eigenfunctions are non-harmonically related sine and cosine functions. For
such a case, the eigenvalues and eigenfunctions can be found. The example that follows illustrates the technique. Suppose that X(t) has power spectrum

S(\omega) = \frac{2\alpha P}{\omega^2 + \alpha^2}, \qquad -\infty < \omega < \infty, \quad P > 0, \ \alpha > 0 .   (13-72)

The corresponding autocorrelation function is

R(\tau) = \mathcal{F}^{-1}[\, S(\omega)\, ] = P \exp( -\alpha \vert \tau \vert ) ,   (13-73)

and the eigenfunction integral equation becomes

\int_{-T}^{T} P\, e^{-\alpha \vert t - u \vert}\, \phi(u)\, du = \lambda\, \phi(t), \qquad -T \le t \le T .   (13-74)
An analysis leading to the eigenvalues and eigenfunctions is less complicated if a symmetric interval [−T, T] is used (of course, our expansion will be valid on [0, T]). We can write (13-74) as

\lambda\, \phi(t) = \int_{-T}^{t} P\, e^{-\alpha (t-u)}\, \phi(u)\, du + \int_t^T P\, e^{-\alpha (u-t)}\, \phi(u)\, du, \qquad -T \le t \le T .   (13-75)
Differentiate (13-75) with respect to t to obtain

\lambda\, \frac{d\phi}{dt} = -\alpha P\, e^{-\alpha t} \int_{-T}^{t} e^{\alpha u}\, \phi(u)\, du + \alpha P\, e^{\alpha t} \int_t^T e^{-\alpha u}\, \phi(u)\, du .   (13-76)
A second differentiation produces

\lambda\, \frac{d^2\phi}{dt^2} = \alpha^2 P\, e^{-\alpha t} \int_{-T}^{t} e^{\alpha u}\, \phi(u)\, du - \alpha P\, e^{-\alpha t} e^{\alpha t}\, \phi(t) + \alpha^2 P\, e^{\alpha t} \int_t^T e^{-\alpha u}\, \phi(u)\, du - \alpha P\, e^{\alpha t} e^{-\alpha t}\, \phi(t) ,   (13-77)

which can be combined into

\lambda\, \frac{d^2\phi}{dt^2} = \alpha^2 \int_{-T}^{T} P\, e^{-\alpha \vert t - u \vert}\, \phi(u)\, du - 2\alpha P\, \phi(t) .   (13-78)
Now, multiply (13-74) by α² and use the product to eliminate the integral in (13-78); this procedure results in

\frac{d^2\phi}{dt^2} = \Bigl( \alpha^2 - \frac{2\alpha P}{\lambda} \Bigr)\, \phi(t) .   (13-79)
There are no zero eigenvalues since R is positive definite. Inspection of (13-79) reveals that the three cases
i) λ < 2P/α,
ii) λ = 2P/α,
iii) λ > 2P/α,
must be considered.
We start with case i) and define

b^2 \equiv \frac{2\alpha P}{\lambda} - \alpha^2, \qquad 0 < b^2 < \infty ,   (13-80)

so that

\lambda = \frac{2\alpha P}{(\alpha - jb)(\alpha + jb)} = \frac{2\alpha P}{\alpha^2 + b^2} .   (13-81)
For this case, the general solution of (13-79) is

\phi(t) = c_1\, e^{jbt} + c_2\, e^{-jbt} ,   (13-82)

where c₁ and c₂ are complex constants. Plug (13-82) into integral equation (13-75) to obtain
\lambda\, \phi(t) = P e^{-\alpha t} \int_{-T}^{t} e^{\alpha u} \bigl( c_1 e^{jbu} + c_2 e^{-jbu} \bigr)\, du + P e^{\alpha t} \int_t^T e^{-\alpha u} \bigl( c_1 e^{jbu} + c_2 e^{-jbu} \bigr)\, du

= P e^{-\alpha t} \Bigl[ c_1 \frac{e^{(\alpha + jb)u}}{\alpha + jb} + c_2 \frac{e^{(\alpha - jb)u}}{\alpha - jb} \Bigr]_{u=-T}^{u=t} - P e^{\alpha t} \Bigl[ c_1 \frac{e^{-(\alpha - jb)u}}{\alpha - jb} + c_2 \frac{e^{-(\alpha + jb)u}}{\alpha + jb} \Bigr]_{u=t}^{u=T}   (13-83)

= \frac{2\alpha P}{\alpha^2 + b^2} \bigl( c_1 e^{jbt} + c_2 e^{-jbt} \bigr) - P e^{-\alpha t} e^{-\alpha T} \Bigl[ \frac{c_1 e^{-jbT}}{\alpha + jb} + \frac{c_2 e^{jbT}}{\alpha - jb} \Bigr] - P e^{\alpha t} e^{-\alpha T} \Bigl[ \frac{c_1 e^{jbT}}{\alpha - jb} + \frac{c_2 e^{-jbT}}{\alpha + jb} \Bigr] .
Now, substitute (13-81) for λ on the left-hand-side of (13-83); then, cancel out like terms to obtain

e^{-\alpha t} \Bigl[ \frac{c_1\, e^{-jbT}}{\alpha + jb} + \frac{c_2\, e^{jbT}}{\alpha - jb} \Bigr] + e^{\alpha t} \Bigl[ \frac{c_1\, e^{jbT}}{\alpha - jb} + \frac{c_2\, e^{-jbT}}{\alpha + jb} \Bigr] = 0 .   (13-84)
We must find the values of b (i.e., the frequencies of the eigenfunctions) for which equality is achieved in (13-84). Note that both bracket terms must vanish identically to achieve equality for all time t. However, for c₁ ≠ ±c₂, neither bracket will vanish for any real b. Hence, we require c₁ = ±c₂ in order to obtain equality in (13-84). First, consider c₁ = −c₂; to zero the first bracket term we must have
\frac{e^{-jbT}}{\alpha + jb} - \frac{e^{jbT}}{\alpha - jb} = \frac{ e^{-jbT}(\alpha - jb) - e^{jbT}(\alpha + jb) }{ (\alpha + jb)(\alpha - jb) } = \frac{ -2j\, [\, \alpha \sin bT + b \cos bT \,] }{ \alpha^2 + b^2 } = 0 ,

which holds if and only if

\alpha \sin bT + b \cos bT = 0 \quad \Longleftrightarrow \quad \tan bT = -\, b / \alpha .   (13-87)

With c₁ = −c₂, the second bracket in (13-84) is zero if (13-87) holds. Hence, the values of b that solve (13-87) are roots of (13-84), and they are frequencies of the eigenfunctions.
Next, we must analyze the case c₁ = c₂ (which is similar to the case c₁ = −c₂ just finished); it leads to the requirement

\alpha \cos bT - b \sin bT = 0 \quad \Longleftrightarrow \quad \tan bT = \alpha / b .   (13-88)

Hence, the permissible frequencies of the eigenfunctions are given by the union of the solution sets of

\tan bT = \alpha / b \quad \text{and} \quad \tan bT = -\, b / \alpha .   (13-89)
These frequencies can be found numerically. Figure 13-1 depicts graphical solutions of (13-89) for the first nine frequencies. A value of αT = 2 was used to construct the figure. Note
[Figure 13-1: Graphical solution of (13-89) for αT = 2. The curve Y = αT/(bT) intersects Y = tan(bT) at the odd-indexed frequencies b₁, b₃, b₅, b₇, b₉, and the line Y = −bT/(αT) intersects Y = tan(bT) at the even-indexed frequencies b₂, b₄, b₆, b₈.]
that the ordinates of the odd-indexed intersections form a decreasing sequence of positive numbers, while those of the even-indexed intersections form a decreasing sequence of negative numbers.
Once the frequencies b_k are found, they can be used to determine the eigenvalues
\lambda_k = \frac{2\alpha P}{\alpha^2 + b_k^2}, \qquad k = 1, 2, 3, \ldots   (13-90)
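The frequency conditions and the eigenvalue formula (13-90) can be verified numerically (a sketch, not from the notes; the values P = α = T = 1 are our own choices): bisection on tan bT = α/b and tan bT = −b/α produces the b_k, and the resulting λ_k match a direct eigendecomposition of the kernel P e^{−α|t−τ|} on [−T, T]:

```python
import numpy as np

P, alpha, T = 1.0, 1.0, 1.0

def bisect(f, lo, hi, iters=200):
    # simple bisection; f(lo) and f(hi) must have opposite signs
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def f_odd(b):   # alpha*cos(bT) - b*sin(bT) = 0, i.e. tan bT = alpha/b  (k odd)
    return alpha * np.cos(b * T) - b * np.sin(b * T)

def f_even(b):  # alpha*sin(bT) + b*cos(bT) = 0, i.e. tan bT = -b/alpha (k even)
    return alpha * np.sin(b * T) + b * np.cos(b * T)

b = sorted([bisect(f_odd, 1e-6, np.pi / 2 - 1e-6),
            bisect(f_even, np.pi / 2 + 1e-6, np.pi - 1e-6),
            bisect(f_odd, np.pi + 1e-6, 3 * np.pi / 2 - 1e-6),
            bisect(f_even, 3 * np.pi / 2 + 1e-6, 2 * np.pi - 1e-6)])
lam_formula = 2 * P * alpha / (alpha**2 + np.array(b) ** 2)   # Eq. (13-90)

# direct numerical eigenvalues of the kernel on [-T, T]
n = 800
dt = 2 * T / n
t = -T + (np.arange(n) + 0.5) * dt
R = P * np.exp(-alpha * np.abs(t[:, None] - t[None, :]))
lam_num = np.sort(np.linalg.eigvalsh(R * dt))[::-1][:4]
print(lam_formula)
print(lam_num)
```

The four largest numerical eigenvalues agree with (13-90) evaluated at the bisection roots; the first root is the classic solution of tan b = 1/b, b₁ ≈ 0.8603.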
The frequencies b_k, k odd, were obtained by setting c₁ = c₂. For this case, (13-82) yields

\phi_k(t) = l_k \cos b_k t, \qquad k \ \text{odd} ,   (13-91)

where constant l_k is chosen to normalize the eigenfunction. That is, l_k must satisfy

\int_{-T}^{T} l_k^2 \cos^2 ( b_k t )\, dt = 1 ,   (13-92)

which leads to

l_k = \frac{1}{\sqrt{ T [\, 1 + \mathrm{Sa}(2 b_k T) \,] }}, \qquad k \ \text{odd}   (13-93)

(Sa(x) ≡ sin(x)/x denotes the sampling function), so that

\phi_k(t) = \frac{ \cos b_k t }{ \sqrt{ T [\, 1 + \mathrm{Sa}(2 b_k T) \,] } }, \qquad -T \le t \le T, \ k \ \text{odd} .   (13-94)
The frequencies b_k, k even, were obtained by setting c₁ = −c₂. An analysis similar to the one just given yields

\phi_k(t) = \frac{ \sin b_k t }{ \sqrt{ T [\, 1 - \mathrm{Sa}(2 b_k T) \,] } }, \qquad -T \le t \le T, \ k \ \text{even} .   (13-95)
Observations:
1. Eigenfunctions are cosines and sines at frequencies that are not harmonically related.
2. For each n, the value of b_nT depends on the interval length only through the product αT. Hence, as T increases (with α fixed), the value of b_n decreases.
3. As bT increases, the upper intersections (k odd) occur at approximately (k−1)π/2, and the lower intersections occur at approximately (k−1)π/2, k even. Hence, the higher-index eigenfunctions are approximately a set of harmonically related sines and cosines. For large k we have

\phi_k(t) \approx \frac{1}{ \sqrt{ T [\, 1 + \mathrm{Sa}(2 b_k T) \,] } } \cos\Bigl( \frac{(k-1)\pi}{2T}\, t \Bigr), \qquad -T \le t \le T, \ k \ \text{odd}
   (13-96)
\phi_k(t) \approx \frac{1}{ \sqrt{ T [\, 1 - \mathrm{Sa}(2 b_k T) \,] } } \sin\Bigl( \frac{(k-1)\pi}{2T}\, t \Bigr), \qquad -T \le t \le T, \ k \ \text{even} .
Next, consider case ii), λ = 2P/α. For this case, (13-79) reduces to

\frac{d^2\phi}{dt^2} = 0 .   (13-97)

Two independent solutions to this equation are φ(t) = t and φ(t) = 1. By direct substitution, it is seen that neither of these satisfies integral equation (13-74). Hence, this case yields no eigenfunctions.
Finally, consider case iii), λ > 2P/α. For this case, (13-79) becomes

\frac{d^2\phi}{dt^2} = \kappa^2\, \phi(t)   (13-98)

with independent solutions

\phi_1(t) = e^{\kappa t}, \qquad \phi_2(t) = e^{-\kappa t} ,   (13-99)

where

\kappa^2 \equiv \alpha^2 - \frac{2\alpha P}{\lambda} > 0 .   (13-100)

By direct substitution, it is seen that neither of these satisfies integral equation (13-74). Hence, this case also yields no eigenfunctions.
Example 13-5: In radar detection theory, we must detect the presence of a signal given a T-
second record of receiver output data. There are two possibilities (termed hypotheses). First, the
record may consist only of receiver noise; no target is present for this case. The second possibility
is that the data record contains a target reflection embedded in the receiver noise; in this case, a
target is present. You must filter the record of data and make a decision regarding the
presence/absence of a target.
Let η(t), 0 ≤ t ≤ T, denote the record of receiver output data. After receiving the data, we must choose between the two hypotheses

H_0: \ \eta(t) = \nu(t), \qquad H_1: \ \eta(t) = s(t) + \nu(t), \qquad 0 \le t \le T .   (13-101)
Here, ν(t) is zero-mean Gaussian noise that is described by positive definite correlation function R(t,τ). Note that we allow non-white and non-stationary noise in this example. s(t) is the reflected signal, which we assume to be known (usually, s(t) is a scaled and time-shifted version of the transmitted waveform). Expand the data record as
\eta(t) = \sum_{k=1}^{\infty} \gamma_k\, \phi_k(t), \qquad \gamma_k \equiv \int_0^T \eta(t)\, \phi_k(t)\, dt ,   (13-102)

where φ_k(t) are the eigenfunctions of (13-10), an integral equation that utilizes kernel R(t,τ) describing the receiver noise. The γ_k are uncorrelated Gaussian random variables with variance equal to the positive eigenvalues of the integral equation; that is, VAR[γ_k] = λ_k.
The received signal η(t) may be only noise, or it may be signal + noise. Hence, the conditional mean of γ_k is

E[\gamma_k \,\vert\, H_0] = E\Bigl[ \int_0^T \nu(t)\, \phi_k(t)\, dt \Bigr] = 0

E[\gamma_k \,\vert\, H_1] = E\Bigl[ \int_0^T \bigl( s(t) + \nu(t) \bigr)\, \phi_k(t)\, dt \Bigr] = s_k ,   (13-103)

where

s_k \equiv \int_0^T s(t)\, \phi_k(t)\, dt
so that

s(t) = \sum_{k=1}^{\infty} s_k\, \phi_k(t) .   (13-104)

Under either hypothesis, the coefficients have the same variance

\mathrm{VAR}[\gamma_k \,\vert\, H_0] = \mathrm{VAR}[\gamma_k \,\vert\, H_1] = \lambda_k .   (13-105)
To start with, our statistical test will use only the first n K-L coefficients γ_k, 1 ≤ k ≤ n. Collect these coefficients in the vector

\vec{V} = [\, \gamma_1 \ \gamma_2 \ \cdots \ \gamma_n \,]^{\mathrm{T}} .   (13-106)

Conditioned on each hypothesis, the coefficients have the joint densities

P_0(\vec{V}) \equiv P(\vec{V} \,\vert\, H_0) = \Bigl[ \prod_{k=1}^{n} (2\pi \lambda_k)^{-1/2} \Bigr] \exp\Bigl( -\sum_{k=1}^{n} \gamma_k^2 / 2\lambda_k \Bigr)

P_1(\vec{V}) \equiv P(\vec{V} \,\vert\, H_1) = \Bigl[ \prod_{k=1}^{n} (2\pi \lambda_k)^{-1/2} \Bigr] \exp\Bigl( -\sum_{k=1}^{n} (\gamma_k - s_k)^2 / 2\lambda_k \Bigr) .   (13-107)
P0 (alternatively, P1) is the density for the n coefficients when H0 (alternatively, H1) is true.
We will use a classical likelihood ratio test (see C.W. Helstrom, Statistical Theory of Signal Detection, 2nd edition) to make a decision between H₀ and H₁. First, given V, we compute the likelihood ratio
\Lambda(\vec{V}) \equiv \frac{ P_1(\vec{V}) }{ P_0(\vec{V}) } = \exp\Bigl[ \sum_{k=1}^{n} ( 2 s_k \gamma_k - s_k^2 ) / 2\lambda_k \Bigr]   (13-108)
in terms of the known s_k and λ_k. Then, we compare the computed Λ to a user-defined threshold Λ₀ to make our decision (there are several well-known methods for setting the threshold Λ₀). We decide hypothesis H₁ if Λ exceeds the threshold, and H₀ if Λ is less than the threshold. Stated symbolically, the test is
\Lambda(\vec{V}) \ \mathop{\gtrless}_{H_0}^{H_1} \ \Lambda_0 .   (13-109)
The inequality (13-109) will be unchanged, and the decision process will not be affected, if we take any monotone function of both sides of (13-109). Due to the exponential function in (13-108), we use the natural logarithm; the test becomes

G_n \equiv \sum_{k=1}^{n} \frac{ s_k \gamma_k }{ \lambda_k } \ \mathop{\gtrless}_{H_0}^{H_1} \ \ln \Lambda_0 + \frac{1}{2} \sum_{k=1}^{n} \frac{ s_k^2 }{ \lambda_k } \equiv G_0^n .   (13-110)
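The equivalence of the density-ratio test (13-108)/(13-109) and the log-domain test (13-110) can be spot-checked with arbitrary made-up values of λ_k, s_k, and γ_k (a sketch, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
lam = rng.uniform(0.5, 2.0, n)       # eigenvalues (coefficient variances)
s = rng.normal(0.0, 1.0, n)          # signal coefficients s_k (made up)
g = rng.normal(0.0, 1.0, n)          # observed coefficients gamma_k (made up)

norm = np.prod(1.0 / np.sqrt(2 * np.pi * lam))
P0 = norm * np.exp(-np.sum(g**2 / (2 * lam)))            # (13-107), H0
P1 = norm * np.exp(-np.sum((g - s) ** 2 / (2 * lam)))    # (13-107), H1

Lam = P1 / P0                                            # likelihood ratio (13-108)
Lam_closed = np.exp(np.sum((2 * s * g - s**2) / (2 * lam)))

L0 = 1.7                              # arbitrary threshold Lambda_0
G = np.sum(s * g / lam)               # statistic of (13-110)
G0 = np.log(L0) + 0.5 * np.sum(s**2 / lam)
print(Lam, Lam_closed, G > G0, Lam > L0)
```

The density ratio matches the closed form of (13-108), and the two threshold comparisons always agree, which is the monotone-function argument in concrete form.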
The ratios q_k ≡ s_k/λ_k that appear in G_n can be interpreted as coefficients in a Fourier series expansion of a function q(t); that is, the coefficients q_k determine the function

q(t) \equiv \sum_{k=1}^{\infty} q_k\, \phi_k(t) .   (13-111)

As will be discussed shortly, function q(t) is the solution of an integral equation based on kernel R(t,τ).
In terms of the q_k, test (13-110) takes the form

G_n \equiv \sum_{k=1}^{n} q_k\, \gamma_k \ \mathop{\gtrless}_{H_0}^{H_1} \ \ln \Lambda_0 + \frac{1}{2} \sum_{k=1}^{n} q_k\, s_k \equiv G_0^n .   (13-112)
In the limit as n → ∞, we have

\lim_{n \to \infty} \sum_{k=1}^{n} q_k\, \gamma_k = \int_0^T q(t)\, \eta(t)\, dt

\lim_{n \to \infty} \sum_{k=1}^{n} q_k\, s_k = \int_0^T q(t)\, s(t)\, dt .   (13-113)
Use (13-113), and take the limit of (13-112), to obtain the decision criterion

G \equiv \int_0^T q(t)\, \eta(t)\, dt \ \mathop{\gtrless}_{H_0}^{H_1} \ \ln \Lambda_0 + \frac{1}{2} \int_0^T q(t)\, s(t)\, dt .   (13-114)

As shown on the left-hand-side of Equation (13-114), statistic G can be computed once data record η(t), 0 ≤ t ≤ T, is known. Then, to make a decision between hypotheses H₀ and H₁, G is compared to the threshold on the right-hand-side of (13-114). Statistic G can be generated with a linear filter: define the impulse response

h(t) \equiv q(T - t), \qquad 0 \le t \le T ,   (13-115)

apply the data record η(t) to a filter with this impulse response,
[Figure: a) The statistic G is generated by applying η(t) to a filter with impulse response h(t) = q(T − t), 0 ≤ t ≤ T, and sampling the output at t = T. b) The threshold test of (13-114).]
and sample the filter output at t = T (the end of the integration period) to obtain the statistic G.
This is the well-known matched filter for signal s(t) embedded in Gaussian, nonstationary, non-white noise.
As described above, function q(t) has expansion (13-111) with coefficients q_k ≡ s_k/λ_k. However, we show that q(t) is the solution of a well-known integral equation. First, write (13-111) with τ as the time variable. Then, multiply the result by R(t,τ), and integrate from τ = 0 to τ = T to obtain

\int_0^T R(t,\tau)\, q(\tau)\, d\tau = \sum_{k=1}^{\infty} q_k \int_0^T R(t,\tau)\, \phi_k(\tau)\, d\tau = \sum_{k=1}^{\infty} \Bigl[ \frac{ s_k }{ \lambda_k } \Bigr] \lambda_k\, \phi_k(t) ,   (13-116)

where s_k/λ_k has been substituted for q_k. On the right-hand-side, cancel out the eigenvalue λ_k, and use (13-104) to obtain

\int_0^T R(t,\tau)\, q(\tau)\, d\tau = s(t), \qquad 0 \le t \le T ,   (13-117)

for the matched filter weighting function q(t). Equation (13-117) is the well-known Fredholm integral equation of the first kind.
We consider the special case where the noise is white with correlation

R(\tau) = \sigma^2\, \delta(\tau) .   (13-118)

The Fredholm integral equation is solved easily for this case; simply substitute (13-118) into (13-117) and obtain

q(t) = s(t) / \sigma^2, \qquad 0 \le t \le T .   (13-119)
So, according to (13-115), the matched filter for the white Gaussian noise case is
Figure 13-4: a) Signal s(t), b) matched filter impulse response h(t) and c) filter output for the case σ² = 1. Note that the filter output is sampled at t = T to produce the decision statistic G.
603CH13.DOC 13-33
EE603 Class Notes Version 1 John Stensby
h(t) = s(T - t) / \sigma^2 ,   (13-120)

a folded, shifted and scaled version of the original signal. Figure 13-4 illustrates a) signal s(t), b) matched filter h(t) and c) filter output, including the sample point t = T, for the case σ² = 1.
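For the white-noise case, the whole receiver can be sketched in a few lines (a sketch, not from the notes; the discrete grid, the sine-shaped signal, and the noise level are our own choices): correlating the record with q(t) = s(t)/σ² separates the means of G under the two hypotheses by the scaled signal energy (1/σ²)∫s²dt:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, trials = 1.0, 200, 10000
dt = T / n
t = (np.arange(n) + 0.5) * dt
s = np.sin(2.0 * np.pi * t / T)        # known signal (our arbitrary choice)
sigma2 = 0.5
q = s / sigma2                         # (13-119): q(t) = s(t)/sigma^2

def noise():
    # discrete white noise: per-sample variance sigma2/dt approximates
    # the correlation sigma^2 * delta(tau)
    return rng.standard_normal((trials, n)) * np.sqrt(sigma2 / dt)

G0 = (noise() @ q) * dt                # statistic G under H0 (noise only)
G1 = ((s + noise()) @ q) * dt          # statistic G under H1 (signal + noise)

E = np.sum(s**2) * dt                  # signal energy, here T/2
print(G0.mean(), G1.mean(), E / sigma2)
```

The sample mean of G is near zero under H₀ and near E/σ² under H₁, which is the separation a threshold between the two exploits.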