Random Process LN
White Noise
The term white noise was originally an engineering term, and there are subtle but important differences in the way it is defined in various econometric texts. Here we define white noise as a series of uncorrelated random variables with zero mean and uniform variance (σ² > 0). If it is necessary to make the stronger assumptions of independence or normality, this will be made clear from the context, and we will refer to independent white noise or normal (Gaussian) white noise. Be careful of the various definitions and of terms like weak, strong and strict white noise. The argument above for second-order stationarity of normal white noise carries over to white noise in general; white noise need not be strictly stationary.
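As a quick illustration, here is a minimal sketch in Python/NumPy (the seed, variance and sample size are arbitrary choices for the demo): the sample autocorrelation of white noise should be close to σ² at lag 0 and near zero at all other lags.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 2.0                                  # demo variance, arbitrary choice
n = 10_000
v = np.sqrt(sigma2) * rng.standard_normal(n)  # zero-mean Gaussian white noise

# Biased sample autocorrelation r(k) = (1/n) * sum_m v[m] v[m+k]
r = np.correlate(v, v, mode="full")[n - 1:] / n
print(r[0])    # close to sigma2 at lag 0
print(r[1:4])  # close to 0 at nonzero lags: samples are uncorrelated
```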
where, in the case of a complex random variable, y* denotes the complex conjugate of Y, so the correlation of two random variables X and Y is r_xy = E{XY*}. Another important parameter related to correlation is the covariance, which is defined as

c_xy = E{(X − m_x)(Y − m_y)*},

where m_x and m_y are the ensemble averages (means) of the two random variables X and Y. If X and Y have zero mean, it can be seen that the covariance is equal to the correlation. To make the covariance invariant to scaling of the data, it is frequently normalized as below:

ρ_xy = c_xy / (σ_x σ_y).
Such a normalized covariance is known as the correlation coefficient (sometimes also written as s_xy).
This coefficient has great relevance in data forecasting, signal processing, and signal extraction in a noisy environment. For zero-mean random variables it is bounded by 1 in magnitude, i.e. |ρ_xy| ≤ 1.
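A small numerical sketch (Python/NumPy; the linear relation between x and y is made up for the demo) of computing the correlation coefficient directly from its definition:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5_000)
y = 0.8 * x + 0.6 * rng.standard_normal(5_000)  # correlated with x by construction

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
rho = cov_xy / (x.std() * y.std())
print(rho)                       # always in [-1, 1]
print(np.corrcoef(x, y)[0, 1])   # NumPy's built-in estimate agrees
```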
Important Deductions
Important Property
If X and Y are statistically independent, their joint density is separable: f_XY(x, y) = f_X(x) f_Y(y).

This implies that the covariance is zero; hence two independent random variables are uncorrelated. One important consequence is that the variance of their sum is the sum of the two variances, i.e. Var(X + Y) = Var(X) + Var(Y) (a quick numerical check follows below).

Another related concept is orthogonality: two random variables are said to be orthogonal if their correlation is zero, i.e. E{XY*} = 0.
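A quick numerical check (Python/NumPy; the two independent variables are made up for the demo) that independence gives zero correlation here, and that the variance of the sum is the sum of the variances:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(200_000)      # zero-mean normal
y = rng.uniform(-1.0, 1.0, 200_000)   # independent of x, hence uncorrelated

print(np.mean(x * y))                 # ~ 0: zero correlation
print(np.var(x + y))                  # ~ var(x) + var(y)
print(np.var(x) + np.var(y))
```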
Linear Mean Square Estimators
Consider the problem of estimating a random variable Y in terms of another random variable X. This situation arises when Y is not directly measurable, so we measure another quantity having some linear relation with Y and use it to estimate Y. In the estimation process we define a cost criterion that must be minimized to build confidence in our estimate of Y. For linear estimators we generally minimize the mean square error (MSE), where Ŷ denotes the estimate of Y:

MSE = E{(Y − Ŷ)²}.
{Note: later, in the discussion of optimum filters, we will find that the optimum estimate is the conditional mean, Ŷ = E(Y | X).}
Basic Concept: Linear Estimators
Let us assume that our estimator is of the form Ŷ = aX + b (a linear relation). We have to find the a and b that minimize the mean square error

ξ = E{(Y − aX − b)²}.
The minimum can be found by differentiating the above expression with respect to a and b and setting the derivatives to zero.
Setting the first derivative to zero gives E{(Y − Ŷ)X} = 0, which implies that the estimation error is orthogonal to the data X used to form the estimate. This is one of the fundamental properties of linear estimators (the orthogonality principle). By solving the above equations we get the values of a and b as

a = c_xy / σ_x² = ρ_xy σ_y / σ_x,   b = m_y − a m_x.
And also the resulting minimum mean square error is ξ_min = σ_y² (1 − ρ_xy²).
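A minimal sketch (Python/NumPy; the true relation y = 2x + 1 plus noise is made up for the demo) that computes a and b from the closed-form expressions above and verifies that the error is orthogonal to the data:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(50_000)
y = 2.0 * x + 1.0 + 0.5 * rng.standard_normal(50_000)  # noisy linear relation

# Closed-form linear MMSE coefficients: a = Cov(X, Y)/Var(X), b = m_y - a m_x
a = np.mean((x - x.mean()) * (y - y.mean())) / np.var(x)
b = y.mean() - a * x.mean()

err = y - (a * x + b)
print(a, b)              # close to 2.0 and 1.0
print(np.mean(err * x))  # ~ 0: the error is orthogonal to the data X
print(np.mean(err**2))   # ~ sigma_y^2 (1 - rho^2), here 0.25
```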
Important Deductions
If the bias is zero then the expected value of the estimate is equal to the true value, and the estimate is said to be unbiased. For consistency the estimate must converge to the true value in the mean square sense, i.e.

lim_{n→∞} E{(θ̂_n − θ)²} = 0.
Example
Here the mean is constant and the autocorrelation r(k, l) depends only on the difference of the time indices k and l. Hence we see that a harmonic process is a wide sense stationary process (a simulation sketch follows below).
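A minimal simulation sketch (Python/NumPy; the amplitude, frequency and time indices are arbitrary choices) showing that the ensemble autocorrelation of a random-phase sinusoid depends only on the lag k − l:

```python
import numpy as np

rng = np.random.default_rng(6)
A, w0 = 1.0, 0.4                 # fixed amplitude and frequency
trials = 20_000                  # number of independent realizations

phi = rng.uniform(-np.pi, np.pi, trials)   # random phase, uniform on [-pi, pi)

def x(n):
    return A * np.sin(n * w0 + phi)        # sample at time n across realizations

# Ensemble autocorrelation r(k, l) = E{x(k) x(l)} for two pairs with k - l = 3
print(np.mean(x(5) * x(2)), np.mean(x(10) * x(7)))  # both ~ (A^2/2) cos(3 w0)
print(0.5 * A**2 * np.cos(3 * w0))
```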
Special Notes (Matrix Formulation of the Autocorrelation)
If x = [x(0), x(1), …, x(p)]^T is a vector of p + 1 values of the process x[n], then the outer product x x^H is the (p+1) × (p+1) matrix whose (k, l) entry is x(k) x*(l), and the autocorrelation matrix is R_x = E{x x^H}.
The autocorrelation matrix R_x of a WSS process x[n] is a Hermitian Toeplitz matrix. It is positive semidefinite, i.e. R_x ≥ 0, and therefore the eigenvalues of the autocorrelation matrix of a WSS process are always real valued and nonnegative. A quick numerical check is sketched below.
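A small sketch (Python with NumPy/SciPy; the autocorrelation values r(0)…r(3) are made up, chosen as a valid geometric sequence) that builds the Toeplitz autocorrelation matrix and confirms its eigenvalues are real and nonnegative:

```python
import numpy as np
from scipy.linalg import toeplitz

# Illustrative autocorrelation values r(0)..r(3), made up for this demo
r = np.array([2.0, 1.0, 0.5, 0.25])

Rx = toeplitz(r)                  # real WSS case: symmetric Toeplitz matrix
eig = np.linalg.eigvalsh(Rx)      # eigvalsh: eigenvalues of a Hermitian matrix
print(eig)                        # all real and >= 0 (positive semidefinite)
```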
Example 2: Referring to Example 1, we reuse the autocorrelation found there.

Power Spectrum
The power spectrum of a WSS sequence is given by the Fourier transform of its autocorrelation sequence, i.e.

P_x(e^{jω}) = Σ_{k=−∞}^{∞} r_x(k) e^{−jωk}.
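As a sketch (Python/NumPy; the autocorrelation r_x(k) = α^|k| with α = 0.5 is an assumed example, not one from the text), we can evaluate the sum numerically and compare it with the known closed form (1 − α²)/(1 − 2α cos ω + α²):

```python
import numpy as np

alpha = 0.5
k = np.arange(-50, 51)
r = alpha ** np.abs(k)          # r_x(k) = alpha^|k|, a valid WSS autocorrelation

w = np.linspace(-np.pi, np.pi, 512)
# DTFT of the autocorrelation sequence: Px(e^{jw}) = sum_k r(k) e^{-jwk}
Px = np.real(np.exp(-1j * np.outer(w, k)) @ r)

closed_form = (1 - alpha**2) / (1 - 2 * alpha * np.cos(w) + alpha**2)
print(np.max(np.abs(Px - closed_form)))   # tiny: truncation error only
```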
Example 4
Determine the autocorrelation function of the process
Setting n − l = m,
If we define r_h(k) = h(k) * h*(−k) = Σ_n h(n + k) h*(n) as the deterministic autocorrelation function of the unit sample response h[n], then we have

r_y(k) = r_x(k) * r_h(k)

(a small numerical sketch of r_h is given below).
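A minimal sketch (Python/NumPy; the FIR response h is made up for the demo) computing the deterministic autocorrelation r_h(k) as the convolution of h with its time-reversed version:

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])   # made-up FIR unit sample response

# Deterministic autocorrelation r_h(k) = sum_n h(n) h(n+k) = h(k) * h(-k)
rh = np.convolve(h, h[::-1])
print(rh)   # symmetric about lag 0; rh[len(h)-1] = r_h(0) = sum h(n)^2 = 1.3125
```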
Note
[Block diagram: white noise v[n] → Q[z] → X[n]]

The inverse of Q[z] is known as the whitening filter. Q[z] (or H[z]) can be written as
Assuming a[0] = b[0] = 1, the above equation can be expanded as a ratio of polynomials in z, i.e.

H(z) = (1 + b(1)z^-1 + b(2)z^-2 + … + b(q)z^-q) / (1 + a(1)z^-1 + … + a(p)z^-p).

This formulation is a representation of an ARMA(p, q) process.

When q = 0, the process is generated by filtering white noise with an all-pole filter

H(z) = 1 / (1 + Σ_{k=1}^{p} a(k)z^-k),

and it is then known as an autoregressive (AR) process of order p.
hence we get the AR(p) difference equation

x(n) = −Σ_{k=1}^{p} a(k) x(n − k) + v(n),

where v(n) is the white noise input.
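A minimal sketch (Python with SciPy; the AR(2) coefficients are made up, chosen so the poles are stable) of generating an AR process by all-pole filtering of white noise:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
v = rng.standard_normal(50_000)   # unit-variance white noise input

# AR(2) process: all-pole filter H(z) = 1 / (1 + a1 z^-1 + a2 z^-2)
a = [1.0, -0.5, 0.25]             # example coefficients; poles at radius 0.5
x = lfilter([1.0], a, v)

# Sample autocorrelation at the first few lags
n = len(x)
r = np.correlate(x, x, mode="full")[n - 1:n + 3] / n
print(r)
```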
Example 6
1. To find a second-order all-pole model (p = 2, q = 0), the equations to be solved are
hence a(1) and a(2) are −1.50 and +1.50. Since a(0) and b(0) are unity, we have
Case 2
Case 3
hence
Having determined a(1),
A moving average process of order k is a process X_t that may be described by the equation

X_t = Σ_{j=0}^{k} b_j ε_{t−j},

where ε_t is white noise with variance σ².
Taking the expectation of both sides and using the fact that ε_t is white noise, we find that

E{X_t} = 0 and Var(X_t) = σ² Σ_{j=0}^{k} b_j².
It is inferred from the above that the variance Var(X_t) does not depend on t. By taking the expectation of both sides of the product X_t X_s we get

R(t, s) = E{X_t X_s} = σ² Σ_j b_j b_{j+|t−s|},

where the sum runs over those j for which both indices lie in 0, …, k (so R(t, s) = 0 when |t − s| > k).
The above equation shows that R(t, s) depends only on the difference t − s of its arguments. The process X_t is hence wide sense stationary with covariance function

r(h) = σ² Σ_{j=0}^{k−|h|} b_j b_{j+|h|} for |h| ≤ k, and r(h) = 0 otherwise

(a simulation check follows below).
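A quick simulation sketch (Python/NumPy; the MA(2) weights are made up for the demo) verifying that the sample variance matches σ² Σ b_j² and that the lag-1 covariance matches σ² (b₀b₁ + b₁b₂):

```python
import numpy as np

rng = np.random.default_rng(5)
b = np.array([1.0, 0.6, 0.3])          # MA(2) weights, made up for the demo
eps = rng.standard_normal(200_000)     # unit-variance white noise

x = np.convolve(eps, b, mode="valid")  # x_t = sum_j b_j eps_{t-j}

print(np.var(x))                 # ~ sum(b_j^2) = 1.45, independent of t
print(np.mean(x[:-1] * x[1:]))   # lag-1 covariance ~ b0*b1 + b1*b2 = 0.78
```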
Example 7
where L denotes the lag operator, L x_t = x_{t−1}.
Example (Multistep Prediction): In multistep prediction, x(n + δ) is predicted in terms of a linear combination of the p values x(n), x(n − 1), …, x(n − p + 1); a sketch of solving the resulting normal equations is given after this example.
By substituting k = 0, 1, 2, 3, 4, 5, 6, 7.
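A minimal sketch (Python with NumPy/SciPy; the autocorrelation sequence r(k) = 0.8^k, the order p = 3 and the lead δ = 2 are assumed values for the demo) of solving the normal equations R w = r_δ for the multistep predictor coefficients:

```python
import numpy as np
from scipy.linalg import toeplitz, solve

r = 0.8 ** np.arange(8)    # assumed autocorrelation values r(0)..r(7)
p = 3                      # predictor order
delta = 2                  # prediction lead: estimate x(n + delta)

# Normal equations: sum_k w_k r(l - k) = r(delta + l) for l = 0..p-1
R = toeplitz(r[:p])        # p x p autocorrelation matrix of the data vector
rhs = r[delta:delta + p]   # correlations between x(n + delta) and the data
w = solve(R, rhs)
print(w)                   # for r(k) = 0.8^k only w[0] = 0.8^delta is nonzero
```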