Random processes
Mean function
The mean function $m_X(t)$ of a random process $\{X(t)\}$ is defined by
$$m_X(t) = E\{X(t)\} = \int_{-\infty}^{\infty} x f_{X(t)}(x)\,dx.$$
Autocorrelation function
The autocorrelation function $R_X(t_1, t_2)$ of a random process $\{X(t)\}$ is defined by
$$R_X(t_1, t_2) = E\{X(t_1)X(t_2)\}.$$
Known signal
An extreme example of a random process is a known or deterministic signal. When
X(t) = s(t) is a known signal, we have
$$m(t) = E\{s(t)\} = s(t), \qquad R(t_1, t_2) = E\{s(t_1)s(t_2)\} = s(t_1)s(t_2).$$
Consider the random process $\{X(t)\}$ with mean $E\{X(t)\} = 3$ and autocorrelation function $R(t_1, t_2) = 9 + 4\exp(-0.2|t_1 - t_2|)$. If $Z = X(5)$ and $W = X(8)$, we can easily obtain $E\{Z\} = E\{X(5)\} = 3$, $E\{W\} = 3$, $E\{Z^2\} = R(5, 5) = 13$, $E\{W^2\} = R(8, 8) = 13$, $\mathrm{Var}\{Z\} = 13 - 3^2 = 4$, $\mathrm{Var}\{W\} = 13 - 3^2 = 4$, and $E\{ZW\} = R(5, 8) = 9 + 4e^{-0.6} \approx 11.195$. In other words, the random variables $Z$ and $W$ have variance $\sigma^2 = 4$ and covariance $\mathrm{Cov}(Z, W) = 4e^{-0.6} \approx 2.195$.
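A quick numerical check of the moments in this example (plain Python; nothing beyond the stated formulas is assumed):

```python
import math

def R(t1, t2):
    # Autocorrelation from the example: R(t1, t2) = 9 + 4*exp(-0.2*|t1 - t2|)
    return 9 + 4 * math.exp(-0.2 * abs(t1 - t2))

m = 3                      # mean of the process
EZ2 = R(5, 5)              # second moment of Z = X(5) -> 13
EZW = R(5, 8)              # correlation of Z and W   -> ~11.195
var_Z = EZ2 - m**2         # variance -> 4
cov_ZW = EZW - m * m       # covariance -> 4*exp(-0.6) ~ 2.195
print(EZ2, EZW, var_Z, cov_ZW)
```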
Autocovariance function
The autocovariance function $K_X(t_1, t_2)$ of a random process $\{X(t)\}$ is defined by
$$K_X(t_1, t_2) = E\{[X(t_1) - m_X(t_1)][X(t_2) - m_X(t_2)]\}.$$
Crosscorrelation function
The crosscorrelation function $R_{XY}(t_1, t_2)$ of random processes $\{X(t)\}$ and $\{Y(t)\}$ is defined by
$$R_{XY}(t_1, t_2) = E\{X(t_1)Y(t_2)\}.$$
Crosscovariance function
The crosscovariance function $K_{XY}(t_1, t_2)$ of random processes $\{X(t)\}$ and $\{Y(t)\}$ is defined by
$$K_{XY}(t_1, t_2) = E\{[X(t_1) - m_X(t_1)][Y(t_2) - m_Y(t_2)]\} = R_{XY}(t_1, t_2) - m_X(t_1)m_Y(t_2).$$
Orthogonality
The two random processes $\{X(t)\}$ and $\{Y(t)\}$ are said to be orthogonal if $R_{XY}(t_1, t_2) = 0$ for all $t_1$ and $t_2$.
Stationary process
A random process $\{X(t)\}$ is stationary, strict-sense stationary (s.s.s.), or strongly stationary if the joint cdf of $\{X(t_1), X(t_2), \ldots, X(t_n)\}$ is the same as the joint cdf of $\{X(t_1 + \tau), X(t_2 + \tau), \ldots, X(t_n + \tau)\}$ for all $n$, $\tau$, $t_1, t_2, \ldots, t_n$.
Bernoulli process
An i.i.d. random process with two possible values is called a Bernoulli process. For
example, consider the random process {Xn} defined by
(
1, if the nth result is head,
Xn =
0, if the nth result is tail,
when we toss a coin infinitely. The random process {Xn} is a discrete-time discrete-
amplitude random process. The success (head) and failure (tail) probabilities are
P {Xn = 1} = p
and
P {Xn = 0} = 1 p,
respectively.
The mean function $m_X(n)$ of a Bernoulli process is not a function of time but a constant. The autocovariance $K_X(n_1, n_2)$ depends not on $n_1$ and $n_2$ individually, but only on the difference $n_1 - n_2$. Clearly, the Bernoulli process is w.s.s.
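A short simulation sketch of these facts (NumPy; the parameter value, lag, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.3, 200_000
x = rng.binomial(1, p, size=n)      # one long sample path of the Bernoulli process

print(x.mean())                     # sample mean ~ p, a constant in time
print(x.var())                      # lag-0 autocovariance ~ p*(1-p) = 0.21
print(np.cov(x[:-5], x[5:])[0, 1])  # lag-5 autocovariance ~ 0 (independence)
```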
A stationary process is always w.s.s., but the converse is not always true. On the other hand, a w.s.s. normal process is s.s.s. This result can be obtained from the pdf
$$f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}|\mathbf{K}_X|^{1/2}}\exp\left\{-\frac{1}{2}(\mathbf{x} - \mathbf{m}_X)^T\mathbf{K}_X^{-1}(\mathbf{x} - \mathbf{m}_X)\right\}$$
of a jointly normal random vector: the pdf depends only on the mean vector and covariance matrix, and these are invariant to time shifts for a w.s.s. process.
If two random processes $\{X(t)\}$ and $\{Y(t)\}$ are jointly w.s.s., then $\{X(t)\}$ and $\{Y(t)\}$ are both w.s.s. The crosscorrelation function of $\{X(t)\}$ and $\{Y(t)\}$ is
$$R_{XY}(t + \tau, t) = R_{XY}(\tau) = E\{X(t + \tau)Y(t)\} = E\{Y(t)X(t + \tau)\} = R_{YX}(-\tau).$$
The crosscorrelation function RXY of two jointly w.s.s. random processes has the
following properties:
1. $R_{YX}(\tau) = R_{XY}(-\tau)$.
2. $|R_{XY}(\tau)| \le \sqrt{R_{XX}(0)R_{YY}(0)} \le \frac{1}{2}\{R_{XX}(0) + R_{YY}(0)\}$.
3. $R_{XY}(\tau)$ is not always maximum at the origin.
Consider the moving average $X_n = \sum_{i=1}^{l} a_i W_{n-i+1}$ of an i.i.d. process $\{W_n\}$ with mean $m$ and variance $\sigma^2$. Since $E\{\tilde{X}_n^2\} = \sigma^2$ when $\tilde{X}_n = W_n - m$, the process $\{X_n\}$ is w.s.s. from
$$\begin{aligned}
\mathrm{Cov}(X_n, X_{n+k}) &= E\left\{\left(X_n - m\sum_{i=1}^{l} a_i\right)\left(X_{n+k} - m\sum_{i=1}^{l} a_i\right)\right\} \\
&= E\left\{\sum_{i=1}^{l} a_i\tilde{X}_{n-i+1}\sum_{j=1}^{l} a_j\tilde{X}_{n+k-j+1}\right\} \\
&= \begin{cases}(a_l a_{l-k} + \cdots + a_{k+1}a_1)\sigma^2, & 0 \le k \le l - 1, \\ 0, & k \ge l,\end{cases}
\end{aligned}$$
for $k \ge 0$: only the cross terms with matching time indices, that is, $j = i + k$, have nonzero expectation, so the covariance depends only on the lag $k$.
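A simulation sketch of this covariance formula (NumPy; the taps $a_i$, the mean $m$, and the variance $\sigma^2$ below are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(11)
m, sigma, l = 1.0, 2.0, 3
a = np.array([0.5, -0.3, 0.2])            # taps a_1, ..., a_l (hypothetical)
w = rng.normal(m, sigma, size=500_000)    # i.i.d. driving process {W_n}
x = np.convolve(w, a, mode="valid")       # X_n = sum_i a_i W_{n-i+1}

for k in range(l + 1):
    emp = x.var() if k == 0 else np.cov(x[:-k], x[k:])[0, 1]
    # theory: sigma^2 (a_l a_{l-k} + ... + a_{k+1} a_1) for k <= l-1, else 0
    th = sigma**2 * np.dot(a[: l - k], a[k:]) if k < l else 0.0
    print(k, emp, th)
```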
Square-law detector
Let $Y(t) = X^2(t)$, where $\{X(t)\}$ is a Gaussian random process with mean $0$ and autocorrelation $R_X(\tau)$. Then the expectation of $Y(t)$ is $E\{Y(t)\} = E\{X^2(t)\} = R_X(0)$. Since $X(t + \tau)$ and $X(t)$ are jointly Gaussian with mean $0$, the autocorrelation of $Y(t)$ can be found as
$$R_Y(t, t + \tau) = E\{X^2(t)X^2(t + \tau)\} = E\{X^2(t + \tau)\}E\{X^2(t)\} + 2E^2\{X(t + \tau)X(t)\} = R_X^2(0) + 2R_X^2(\tau).$$
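The Gaussian moment factoring step above can be checked by Monte Carlo; here $r$ plays the role of $R_X(\tau)$ with $R_X(0) = 1$ (a hypothetical value):

```python
import numpy as np

rng = np.random.default_rng(1)
r = 0.7                                   # stands in for R_X(tau); R_X(0) = 1
n = 1_000_000
a = rng.standard_normal(n)
b = r * a + np.sqrt(1 - r**2) * rng.standard_normal(n)  # corr(a, b) = r

# Moment factoring: E{a^2 b^2} = R_X(0)^2 + 2 R_X(tau)^2 = 1 + 2 r^2
print(np.mean(a**2 * b**2), 1 + 2 * r**2)  # both ~1.98
```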
Limiter
Let {Y (t)} = {g(X(t))} be a random process which is defined by a random process
{X(t)} and a limiter
$$g(x) = \begin{cases} 1, & x > 0, \\ -1, & x < 0. \end{cases}$$
Then we can easily obtain $P\{Y(t) = 1\} = P\{X(t) > 0\} = 1 - F_X(0)$ and $P\{Y(t) = -1\} = P\{X(t) < 0\} = F_X(0)$. Thus the mean and autocorrelation of $\{Y(t)\}$ are
$$E\{Y(t)\} = 1 \cdot P\{Y(t) = 1\} + (-1) \cdot P\{Y(t) = -1\} = 1 - 2F_X(0),$$
$$R_Y(\tau) = E\{Y(t)Y(t + \tau)\} = P\{Y(t)Y(t + \tau) = 1\} - P\{Y(t)Y(t + \tau) = -1\} = P\{X(t)X(t + \tau) > 0\} - P\{X(t)X(t + \tau) < 0\}.$$
Now, if $\{X(t)\}$ is a stationary Gaussian random process with mean $0$, then $X(t + \tau)$ and $X(t)$ are jointly Gaussian with mean $0$, variance $R_X(0)$, and correlation coefficient $R_X(\tau)/R_X(0)$. Clearly, $F_X(0) = 1/2$.
We have (refer to (3.100), p. 156, Random Processes, Park, Song, Nam, 2004)
$$P\{X(t)X(t + \tau) < 0\} = P\left\{\frac{X(t)}{X(t + \tau)} < 0\right\} = F_Z(0) = \frac{1}{2} + \frac{1}{\pi}\tan^{-1}\frac{-r}{\sqrt{1 - r^2}} = \frac{1}{2} - \frac{1}{\pi}\sin^{-1} r = \frac{1}{2} - \frac{1}{\pi}\sin^{-1}\frac{R_X(\tau)}{R_X(0)},$$
$$P\{X(t)X(t + \tau) > 0\} = 1 - P\{X(t)X(t + \tau) < 0\} = \frac{1}{2} + \frac{1}{\pi}\sin^{-1}\frac{R_X(\tau)}{R_X(0)}.$$
Thus the autocorrelation of the limiter output is
$$R_Y(\tau) = \frac{2}{\pi}\sin^{-1}\frac{R_X(\tau)}{R_X(0)},$$
from which we have $E\{Y^2(t)\} = R_Y(0) = 1$ and $\sigma_Y^2 = 1 - \{1 - 2F_X(0)\}^2 = 1$.
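A Monte Carlo sketch of this arcsine law for the limiter output (NumPy; $r$ is a hypothetical value of $R_X(\tau)/R_X(0)$):

```python
import numpy as np

rng = np.random.default_rng(2)
r = 0.5                                   # R_X(tau) / R_X(0), hypothetical
n = 1_000_000
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)

# Hard limiter: take the sign of each jointly Gaussian sample pair
emp = np.mean(np.sign(x) * np.sign(y))    # empirical R_Y(tau)
print(emp, 2 / np.pi * np.arcsin(r))      # both ~0.333
```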
Power spectrum
The Fourier transform of the autocorrelation function of a w.s.s. random process is
called the power spectrum, power spectral density, or spectral density. It is usually
assumed that the mean is zero.
If the mean is not zero, the power spectral density is defined as the Fourier transform of the autocovariance instead of the autocorrelation.
Telegraph signal
Consider Poisson points with parameter $\lambda$, and let $N(t)$ be the number of points in the interval $(0, t]$. Consider the continuous-time random process $X(t) = (-1)^{N(t)}$ with $X(0) = 1$; here, $W_i$ is the time between adjacent Poisson points. Assuming $\tau > 0$, the autocorrelation of $X(t)$ is
$$\begin{aligned}
R_X(\tau) &= E\{X(t + \tau)X(t)\} \\
&= 1 \cdot P\{\text{the number of points in } (t, t + \tau] \text{ is even}\} + (-1) \cdot P\{\text{the number of points in } (t, t + \tau] \text{ is odd}\} \\
&= e^{-\lambda\tau}\left\{1 + \frac{(\lambda\tau)^2}{2!} + \cdots\right\} - e^{-\lambda\tau}\left\{\lambda\tau + \frac{(\lambda\tau)^3}{3!} + \cdots\right\} \\
&= e^{-\lambda\tau}\cosh\lambda\tau - e^{-\lambda\tau}\sinh\lambda\tau = e^{-2\lambda\tau}.
\end{aligned}$$
We can obtain a similar result when $\tau < 0$. Combining the two results, we have $R_X(\tau) = e^{-2\lambda|\tau|}$. Consider a random variable $A$ which is independent of $X(t)$ and takes on $1$ or $-1$ with equal probability, and let $Y(t) = AX(t)$. We then have
$$E\{Y(t)\} = E\{A\}E\{X(t)\} = 0,$$
$$E\{Y(t_1)Y(t_2)\} = E\{A^2\}E\{X(t_1)X(t_2)\} = E\{A^2\}R_X(t_1 - t_2) = e^{-2\lambda|t_1 - t_2|}$$
since $E\{A\} = 0$ and $E\{A^2\} = 1$. Thus $\{Y(t)\}$ is w.s.s., and the power spectral density of $\{Y(t)\}$ is
$$S_Y(\omega) = \mathcal{F}\{e^{-2\lambda|\tau|}\} = \frac{4\lambda}{\omega^2 + 4\lambda^2}.$$
A power spectral density is always nonnegative: $S_X(\omega) \ge 0$.

Let $Y(t) = X(t - d)$ be a delayed version of a w.s.s. process $\{X(t)\}$. The process $\{Y(t)\}$ is then w.s.s. and the power spectral density $S_Y(\omega)$ equals $S_X(\omega)$. In addition, the crosscorrelation and cross power spectral density of $\{X(t)\}$ and $\{Y(t)\}$ are
$$R_{XY}(t, s) = E\{X(t)Y(s)\} = E\{X(t)X(s - d)\} = R_X(t - s + d) = R_X(\tau + d)$$
with $\tau = t - s$, and
$$S_{XY}(\omega) = \mathcal{F}\{R_X(\tau + d)\} = \int_{-\infty}^{\infty} R_X(\tau + d)e^{-j\omega\tau}\,d\tau = \int_{-\infty}^{\infty} R_X(u)e^{-j\omega u}e^{j\omega d}\,du = S_X(\omega)e^{j\omega d}.$$
If the input random process is one-sided and w.s.s., however, the output
of an LTI filter is not w.s.s. in general.
For a w.s.s. input $\{X(t)\}$ to an LTI filter with impulse response $h(t)$, the output is $Y(t) = \int X(t - \alpha)h(\alpha)\,d\alpha$. The input-output crosscorrelation and the output autocorrelation are
$$R_{XY}(t_1, t_2) = E\{X(t_1)Y(t_2)\} = \int R_X(t_1, t_2 - \beta)h(\beta)\,d\beta$$
and
$$R_Y(t_1, t_2) = E\{Y(t_1)Y(t_2)\} = E\left\{\int X(t_1 - \alpha)h(\alpha)\,d\alpha\int X(t_2 - \beta)h(\beta)\,d\beta\right\} = \int\!\!\int R_X(t_1 - \alpha, t_2 - \beta)h(\alpha)h(\beta)\,d\alpha\,d\beta = \int R_{XY}(t_1 - \alpha, t_2)h(\alpha)\,d\alpha,$$
respectively.
We can express the autocorrelation and power spectral density of the output in terms of those of the input. Specifically, we have
$$R_Y(\tau) = R_X(\tau) * \tilde{h}(\tau)$$
and
$$S_Y(\omega) = S_X(\omega)|H(\omega)|^2,$$
where $\tilde{h}(t)$ is called the deterministic autocorrelation of $h(t)$ and is defined by
$$\tilde{h}(t) = \mathcal{F}^{-1}(|H(\omega)|^2) = h(t) * h^*(-t) = \int_{-\infty}^{\infty} h(t + \beta)h^*(\beta)\,d\beta.$$
Let $S_Y(\omega)$ be the power spectral density of the output process $\{Y_t\}$. Then we can obtain
$$R_Y(\tau) = \mathcal{F}^{-1}\{S_Y(\omega)\} = \mathcal{F}^{-1}\{|H(\omega)|^2 S_X(\omega)\}.$$
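A numerical sketch of $S_Y(\omega) = S_X(\omega)|H(\omega)|^2$ using white input noise and a hypothetical FIR impulse response; the averaged periodogram matches $|H(\omega)|^2$ up to estimation error of a few percent:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1 << 18
x = rng.standard_normal(n)              # white input: S_X(omega) ~ 1
h = np.array([1.0, 0.5, 0.25])          # hypothetical FIR impulse response
y = np.convolve(x, h, mode="same")      # LTI filter output

seg = 1024
Y = np.fft.rfft(y[: n // seg * seg].reshape(-1, seg), axis=1)
Sy_hat = (np.abs(Y) ** 2).mean(axis=0) / seg   # averaged periodogram of y
H = np.fft.rfft(h, seg)                        # frequency response, same grid
print(np.mean(np.abs(Sy_hat / np.abs(H) ** 2 - 1)))  # ~0.05: estimation noise
```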
Coherence function
A measure of the degree to which two w.s.s. processes are related by an LTI transformation is the coherence function $\gamma_{XY}(\omega)$ defined by
$$\gamma_{XY}(\omega) = \frac{S_{XY}(\omega)}{[S_X(\omega)S_Y(\omega)]^{1/2}}.$$
Here, $|\gamma_{XY}(\omega)| = 1$ if and only if $\{X(t)\}$ and $\{Y(t)\}$ are linearly related, that is, $Y(t) = X(t) * h(t)$. Note the similarity between the coherence function $\gamma_{XY}(\omega)$ and the correlation coefficient $\rho$, a measure of the degree to which two random variables are linearly related. The coherence function exhibits the property
$$|\gamma_{XY}(\omega)| \le 1,$$
similar to the correlation coefficient between two random variables.
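SciPy's signal.coherence estimates the magnitude-squared coherence $|\gamma_{XY}(\omega)|^2$; a sketch contrasting a purely LTI-related pair with a noisy one (the filter taps below are hypothetical):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(5)
n = 1 << 16
x = rng.standard_normal(n)
h = np.array([1.0, -0.6, 0.3])
y_lin = np.convolve(x, h, mode="same")          # purely LTI-related to x
y_noisy = y_lin + rng.standard_normal(n)        # plus independent noise

f, c_lin = coherence(x, y_lin, nperseg=1024)    # magnitude-squared coherence
f, c_noisy = coherence(x, y_noisy, nperseg=1024)
print(c_lin.mean(), c_noisy.mean())             # ~1 versus clearly below 1
```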
Ergodic theorem*
An ergodic theorem states conditions under which there exists a random variable $\bar{X}$ such that
$$\frac{1}{n}\sum_{i=0}^{n-1} X_i \xrightarrow{n\to\infty} \bar{X} \quad \text{for a discrete time random process},$$
$$\frac{1}{T}\int_0^T X(t)\,dt \xrightarrow{T\to\infty} \bar{X} \quad \text{for a continuous time random process}.$$
Suppose that nature at the beginning of time randomly selects one of two coins with equal probability, one having bias $p$ and the other having bias $q$. After the coin is selected, it is flipped once per second forever. The output random process is a one-zero sequence determined by the faces shown. Clearly, the time average will converge: it will converge to $p$ if the first coin was selected and to $q$ if the second coin was selected. That is, the time average will converge to a random variable. In particular, it will not converge to the expected value $(p + q)/2$.
Let $\{X_n\}$ be a discrete time random process with mean $E\{X_n\}$ and autocovariance function $K_X(i, j)$. The process need not be stationary in any sense. A necessary and sufficient condition for
$$\operatorname*{l.i.m.}_{n \to \infty}\frac{1}{n}\sum_{i=0}^{n-1} X_i = m$$
is
$$\lim_{n \to \infty}\frac{1}{n}\sum_{i=0}^{n-1} E\{X_i\} = m$$
and
$$\lim_{n \to \infty}\frac{1}{n^2}\sum_{i=0}^{n-1}\sum_{k=0}^{n-1} K_X(i, k) = 0.$$
Under analogous conditions, the time-average estimate $\hat{K}_T(i)$ of the autocovariance converges to $K(i)$ in mean square:
$$\lim_{T \to \infty} E\{|\hat{K}_T(i) - K(i)|^2\} = 0.$$
Ergodicity*
Shift operator
An operator $T$ for which $T\omega = \{x_{t+1}, t \in \mathcal{I}\}$, where $\omega = \{x_t, t \in \mathcal{I}\}$ is an infinite sequence in the probability space $(\mathbb{R}^{\mathcal{I}}, \mathcal{B}(\mathbb{R})^{\mathcal{I}}, P)$, is called a shift operator.
Stationary process
A discrete time random process with process distribution $P$ is stationary if $P(T^{-1}F) = P(F)$ for any element $F$ in $\mathcal{B}(\mathbb{R})^{\mathcal{I}}$.
Invariant event
An event $F$ is said to be invariant with respect to the shift operator $T$ if and only if $T^{-1}F = F$.
A counting process
Since the marginal pmf is binomial, the process $\{Y_n\}$ is called a binomial counting process.
The process $\{Y_n\}$ is not stationary since the marginal pmf depends on the time index $n$.
Consider the modified Bernoulli process for which the event failure is represented by $-1$ instead of $0$:
$$Z_n = \begin{cases} +1, & \text{for success in the } n\text{th trial}, \\ -1, & \text{for failure in the } n\text{th trial}. \end{cases}$$
Let $W_n = \sum_{i=1}^{n} Z_i$ denote the sum of the variables $Z_n$. The process $\{W_n\}$ is referred to as the one dimensional random walk, or simply the random walk.
Since
$$Z_i = 2X_i - 1,$$
it follows that the random walk process $\{W_n\}$ is related to the binomial counting process $\{Y_n\}$ by
$$W_n = 2Y_n - n.$$
Using the results on the mean and autocorrelation functions of the binomial counting process and the linearity of expectation, we have
$$m_W(n) = (2p - 1)n, \qquad K_W(n_1, n_2) = 4p(1 - p)\min(n_1, n_2), \qquad \sigma_{W_n}^2 = 4p(1 - p)n.$$
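A simulation sketch of these random walk moments (NumPy; the value of $p$, the horizon, and the probed times are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
p, n, paths = 0.6, 100, 50_000
x = rng.binomial(1, p, size=(paths, n))
w = np.cumsum(2 * x - 1, axis=1)           # random walk W_n = sum of Z_i

print(w[:, -1].mean(), (2 * p - 1) * n)    # ~ (2p-1)n = 20
print(w[:, -1].var(), 4 * p * (1 - p) * n) # ~ 4p(1-p)n = 96
# Autocovariance K_W(n1, n2) -> 4p(1-p) * min(n1, n2)
n1, n2 = 30, 70
print(np.cov(w[:, n1 - 1], w[:, n2 - 1])[0, 1], 4 * p * (1 - p) * min(n1, n2))
```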
Assuming $y_0 = 0$,
$$\begin{aligned}
p_{Y_n|Y^{n-1}}(y_n|y^{n-1}) &= \Pr(Y_n = y_n | X_1 = y_1, X_i = y_i - y_{i-1}, i = 2, 3, \ldots, n - 1) \\
&= \Pr(X_n = y_n - y_{n-1} | X_i = y_i - y_{i-1}, i = 1, 2, \ldots, n - 1) \\
&= p_{X_n|X^{n-1}}(y_n - y_{n-1} | y_{n-1} - y_{n-2}, \ldots, y_2 - y_1, y_1),
\end{aligned}$$
where $X^{n-1} = (X_{n-1}, X_{n-2}, \ldots, X_1)$.
If the $\{X_n\}$ are i.i.d.,
$$p_{Y_n|Y^{n-1}}(y_n|y^{n-1}) = p_X(y_n - y_{n-1})$$
since $X_n$ is independent of $X_k$ for $k < n$. Thus the joint pmf is
$$p_{Y^n}(y^n) = p_{Y_n|Y^{n-1}}(y_n|y^{n-1})\,p_{Y^{n-1}}(y^{n-1}) = \cdots = p_{Y_1}(y_1)\prod_{i=2}^{n} p_{Y_i|Y_{i-1},\ldots,Y_1}(y_i|y_{i-1},\ldots,y_1) = \prod_{i=1}^{n} p_X(y_i - y_{i-1}).$$
Discrete time i.s.i. processes (such as the binomial counting process and the discrete random walk) have the following property:
$$p_{Y_n|Y^{n-1}}(y_n|y^{n-1}) = p_{Y_n|Y_{n-1}}(y_n|y_{n-1}),$$
$$\Pr\{Y_n = y_n | Y^{n-1} = y^{n-1}\} = \Pr\{Y_n = y_n | Y_{n-1} = y_{n-1}\}.$$
Roughly speaking, given the most recent past sample (or the current sample), the other past samples do not affect the probability of what happens next.
A discrete time discrete alphabet random process with this property is called a Markov process. Thus all i.s.i. processes are Markov processes.
Let $A_k$ be the event that the man loses all the money when he has started with $k$ won, and let $B$ be the event that the man wins a game. Then,
$$P(A_k) = P(A_k|B)P(B) + P(A_k|B^c)P(B^c).$$
Since the game will start again with $k + 1$ won if a toss of a coin results in a head and $k - 1$ won if a toss of a coin results in a tail, it is easy to see that
$$P(A_k|B) = P(A_{k+1}), \qquad P(A_k|B^c) = P(A_{k-1}).$$
If $p = 0.5$, we have $p_k = A_1 + A_2 k$ for constants $A_1, A_2$ since $q/p = 1$. Thus, using the boundary conditions $p_0 = 1$ and $p_N = 0$, we can find
$$p_k = 1 - \frac{k}{N}.$$
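A simulation sketch of the ruin probability $p_k = 1 - k/N$ for the fair case $p = 0.5$ (the values of $N$ and $k$ are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
N, k, trials = 10, 3, 20_000
ruined = 0
for _ in range(trials):
    w = k
    while 0 < w < N:                     # play until ruin (0) or the goal (N)
        w += 1 if rng.random() < 0.5 else -1
    ruined += (w == 0)
print(ruined / trials, 1 - k / N)        # both ~0.7
```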
The discrete time Wiener process is also called the discrete time diffusion process or discrete time Brownian motion.
Since the discrete time Wiener process is formed as sums of an i.i.d. process, it has i.s.i. Thus we have $E\{Y_n\} = 0$ and $K_Y(k, j) = \sigma^2\min(k, j)$.
The Wiener process is a first-order autoregressive process.
The discrete time Wiener process is a Gaussian process with mean function $m(t) = 0$ and autocovariance function $K_X(t, s) = \sigma^2\min(t, s)$.
Since the discrete time Wiener process is an i.s.i. process, we have
$$f_{Y_n|Y_{n-1}}(y_n|y_{n-1}) = f_X(y_n - y_{n-1}),$$
$$f_{Y_n|Y^{n-1}}(y_n|y^{n-1}) = f_{Y_n|Y_{n-1}}(y_n|y_{n-1}) = f_X(y_n - y_{n-1}).$$
As in the discrete alphabet case, a process with this property is called a Markov process.
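A simulation sketch of the discrete time Wiener process autocovariance $K_Y(k, j) = \sigma^2\min(k, j)$ (NumPy; $\sigma$ and the probed indices are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(8)
sigma, n, paths = 2.0, 100, 50_000
steps = rng.normal(0, sigma, size=(paths, n))
y = np.cumsum(steps, axis=1)             # discrete time Wiener process paths

k, j = 20, 60
print(np.mean(y[:, k - 1] * y[:, j - 1]), sigma**2 * min(k, j))  # both ~80
```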
Markov process
A discrete time random process $\{Y_n\}$ is called a first order Markov process if it satisfies
$$\Pr\{Y_n \le y_n | y_{n-1}, y_{n-2}, \ldots\} = \Pr\{Y_n \le y_n | y_{n-1}\}$$
for all $n, y_n, y_{n-1}, y_{n-2}, \ldots$.
As in the case of discrete time processes, these can be used to find the joint pmf or pdf as
$$p_{Y_{t_1},\ldots,Y_{t_n}}(y_1, \ldots, y_n) = \prod_{i=1}^{n} p_{Y_{t_i} - Y_{t_{i-1}}}(y_i - y_{i-1}),$$
$$f_{Y_{t_1},\ldots,Y_{t_n}}(y_1, \ldots, y_n) = \prod_{i=1}^{n} f_{Y_{t_i} - Y_{t_{i-1}}}(y_i - y_{i-1}).$$
If a process $\{Y_t\}$ has i.s.i., and the cdf, pdf, or pmf of $\Delta Y_t = Y_t - Y_0$ is given, the process can be completely described as shown above.
Wiener process
A process is called a Wiener process if it satisfies the following:
The initial position is zero. That is, $W(0) = 0$.
The mean is zero. That is, $E\{W(t)\} = 0$, $t \ge 0$.
The increments of $W(t)$ are independent, stationary, and Gaussian.
The distribution of $W(t_2) - W(t_1)$, $t_2 > t_1$, depends only on $t_2 - t_1$, not on $t_1$ and $t_2$ individually.
From the i.s.i. property, the samples $W(t_1), W(t_2), \ldots, W(t_k)$ can be written in terms of independent increments as
$$\begin{aligned}
W(t_1) &= W(t_1),\\
W(t_2) &= W(t_1) + \{W(t_2) - W(t_1)\},\\
W(t_3) &= W(t_1) + \{W(t_2) - W(t_1)\} + \{W(t_3) - W(t_2)\},\\
&\;\;\vdots\\
W(t_k) &= W(t_1) + \{W(t_2) - W(t_1)\} + \cdots + \{W(t_k) - W(t_{k-1})\}.
\end{aligned}$$
Poisson counting process
$N_0 = 0$.
The process has i.s.i. Hence, the increments over non-overlapping time intervals are independent random variables.
In a very small time interval, the probability of an increment of 1 is proportional to the length of the interval, and the probability of an increment larger than 1 is 0. Thus, $\Pr\{N_{t+\Delta t} - N_t = 1\} = \lambda\Delta t + o(\Delta t)$, $\Pr\{N_{t+\Delta t} - N_t \ge 2\} = o(\Delta t)$, and $\Pr\{N_{t+\Delta t} - N_t = 0\} = 1 - \lambda\Delta t + o(\Delta t)$, where $\lambda$ is a proportionality constant.
The Poisson counting process is a continuous time discrete alphabet i.s.i. process.
We have obtained the Wiener process as the limit of a discrete time discrete amplitude random-walk process. Similarly, the Poisson counting process can be derived as the limit of a binomial counting process using the Poisson approximation.
The probability mass function $p_{N_t - N_0}(k) = p_{N_t}(k)$ of the increment $N_t - N_0$ between the starting time $0$ and $t > 0$ can be obtained as follows. Let us use the notation
$$p(k, t) = p_{N_t - N_0}(k), \quad t > 0.$$
Using the independence of increments and the third property of the Poisson counting process, we have
$$p(k, t + \Delta t) = \sum_{n=0}^{k}\Pr\{N_t = n\}\Pr\{N_{t+\Delta t} - N_t = k - n | N_t = n\} = \sum_{n=0}^{k}\Pr(N_t = n)\Pr(N_{t+\Delta t} - N_t = k - n),$$
from which
$$\frac{p(k, t + \Delta t) - p(k, t)}{\Delta t} = \lambda p(k - 1, t) - \lambda p(k, t) + \frac{o(\Delta t)}{\Delta t};$$
letting $\Delta t \to 0$ gives $\frac{\partial p(k, t)}{\partial t} = \lambda p(k - 1, t) - \lambda p(k, t)$, whose solution is the Poisson pmf $p(k, t) = \frac{(\lambda t)^k e^{-\lambda t}}{k!}$.
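A simulation sketch of the Poisson counting process built from exponential interarrival times, compared with the pmf $p(k, t) = (\lambda t)^k e^{-\lambda t}/k!$ (the rate and horizon are hypothetical):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(9)
lam, t, trials = 2.0, 3.0, 100_000

# Build N_t from exponential interarrival times W_i with rate lambda
w = rng.exponential(1 / lam, size=(trials, 40))   # 40 >> lam*t points suffice
arrival = np.cumsum(w, axis=1)
counts = (arrival <= t).sum(axis=1)               # N_t for each trial

# Compare with the Poisson pmf p(k, t) = (lam*t)^k e^{-lam*t} / k!
for k in range(4):
    print(k, np.mean(counts == k), (lam * t) ** k * exp(-lam * t) / factorial(k))
```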
Martingale
Martingale property
An independent increment process $\{X_t\}$ with zero mean satisfies
$$E\{X(t_n) - X(t_{n-1}) | X(t_1), X(t_2), \ldots, X(t_{n-1})\} = 0$$
for all $t_1 < t_2 < \cdots < t_n$ and integer $n \ge 2$. This property can be rewritten as
$$E\{X(t_n) | X(t_1), X(t_2), \ldots, X(t_{n-1})\} = X(t_{n-1}),$$
which is called the martingale property.
Martingale
A process $\{X_n, n \ge 0\}$ with the following properties is called a martingale process:
$$E\{|X_n|\} < \infty, \qquad E\{X_{n+1} | X_0, \ldots, X_n\} = X_n.$$
Consider the random sum $Y_k = \sum_{i=1}^{N_k} X_i$ of i.i.d. random variables $X_i$, where $N_k$ is independent of the $X_i$. Its mean is
$$E\{Y_k\} = E\left\{\sum_{i=1}^{N_k} X_i\right\} = E\{E\{Y_k|N_k\}\} = \sum_n p_{N_k}(n)E\{Y_k|N_k = n\} = \sum_n p_{N_k}(n)\,nE\{X\} = E\{N_k\}E\{X\}.$$
In continuous time we likewise have
$$E\{Y_t\} = E\{N_t\}E\{X\}, \qquad M_{Y_t}(u) = E\{M_X^{N_t}(u)\}.$$
When $\{N_t\}$ is a Poisson counting process with rate $\lambda$,
$$E\{Y_t\} = \lambda t E\{X\},$$
$$M_{Y_t}(u) = \sum_{k=0}^{\infty} M_X^k(u)\frac{(\lambda t)^k e^{-\lambda t}}{k!} = e^{-\lambda t}\sum_{k=0}^{\infty}\frac{(\lambda t M_X(u))^k}{k!} = e^{-\lambda t(1 - M_X(u))}.$$
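A simulation sketch of the compound Poisson mean and moment generating function (NumPy; exponential $X_i$ is a hypothetical choice, for which $M_X(u) = 1/(1 - u\,E\{X\})$):

```python
import numpy as np

rng = np.random.default_rng(10)
lam, t, trials = 1.5, 2.0, 50_000
mean_x = 3.0                                   # E{X}; X_i ~ exponential (hypothetical)

n = rng.poisson(lam * t, size=trials)          # N_t for each trial
y = np.array([rng.exponential(mean_x, size=k).sum() for k in n])

print(y.mean(), lam * t * mean_x)              # both ~9

u = -0.1                                       # MGF check at a single point
mx = 1 / (1 - u * mean_x)                      # M_X(u) for exponential X
print(np.mean(np.exp(u * y)), np.exp(-lam * t * (1 - mx)))
```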