Recursive Least-Squares Adaptive Filters: Dr. Yogananda Isukapalli
Introduction
• The linear least-squares problem was probably first developed and solved by Gauss (1795) in his work on mechanics
• L-S solutions have attractive properties:
– they can be evaluated explicitly in closed form
– they can be updated recursively as more input data becomes available
– they are maximum-likelihood estimators in the presence of Gaussian measurement noise
• Over the last several years, a wide variety of adaptive algorithms based on the least-squares criterion have been derived
– RLS (Recursive Least Squares) algorithms and their corresponding fast versions
• FTF (Fast Transversal Filter)
• FAEST (Fast A Posteriori Error Sequential Technique)
– QR and inverse-QR algorithms
– LSL (Least-Squares Lattice) and QR-decomposition-based LSL
Least-Squares Problem
Consider a standard observation model in additive noise.
$$d(i) = \mathbf{U}_i^{H}\,\mathbf{W} + n(i)$$
The least-squares estimate of the tap-weight vector minimizes
$$\min_{\hat{\mathbf{W}}}\;\big\|\mathbf{d} - \mathbf{U}\hat{\mathbf{W}}\big\|^{2}$$
and is obtained from the normal equations:
$$\boldsymbol{\Phi} = \mathbf{U}^{H}\mathbf{U}, \qquad \mathbf{Z} = \mathbf{U}^{H}\mathbf{d}, \qquad \hat{\mathbf{W}} = \boldsymbol{\Phi}^{-1}\mathbf{Z}$$
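As a concrete illustration of the closed-form solution above, here is a minimal NumPy sketch (with synthetic data; the names W_true, Phi, Z are illustrative) that builds the normal equations and solves them directly:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 200                     # filter length, number of observations

W_true = rng.standard_normal(M)   # "unknown" weight vector to be estimated
U = rng.standard_normal((N, M))   # row i is the input vector U_i^H
d = U @ W_true + 0.01 * rng.standard_normal(N)   # observations in additive noise

Phi = U.conj().T @ U              # Phi = U^H U
Z = U.conj().T @ d                # Z   = U^H d
W_hat = np.linalg.solve(Phi, Z)   # W_hat = Phi^{-1} Z

print(np.allclose(W_hat, W_true, atol=0.05))     # close to the true weights
```

The same estimate is what the RLS algorithm of the following sections computes recursively, one sample at a time.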
RLS algorithm
– The RLS algorithm solves the least-squares problem recursively
– At each iteration, when a new data sample becomes available, the filter tap weights are updated
– This leads to savings in computation
– More rapid convergence is also achieved
The derivation of the RLS algorithm
The aim is to find a recursive solution to the following minimization problem:
$$\mathcal{E}(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,\big|e(i)\big|^{2}$$
$$e(i) = d(i) - y(i) = d(i) - \mathbf{W}^{H}(n)\,\mathbf{U}(i)$$
$$\mathbf{U}(i) = \big[u(i),\, u(i-1),\, \ldots,\, u(i-M+1)\big]^{T}$$
The exponentially weighted (and regularized) correlation matrix of the input then satisfies a rank-one recursion:
$$\boldsymbol{\Phi}(n) = \lambda\left[\sum_{i=1}^{n-1}\lambda^{\,n-1-i}\,\mathbf{U}(i)\mathbf{U}^{H}(i) + \delta\lambda^{\,n-1}\mathbf{I}\right] + \mathbf{U}(n)\mathbf{U}^{H}(n) = \lambda\,\boldsymbol{\Phi}(n-1) + \mathbf{U}(n)\mathbf{U}^{H}(n)$$
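A quick numerical check (synthetic data, purely illustrative) that the rank-one recursion above reproduces the directly evaluated exponentially weighted correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, lam, delta = 3, 50, 0.98, 1e-2

U = rng.standard_normal((N + 1, M))          # U[i] plays the role of U(i), i = 1..N
Phi = delta * np.eye(M)                      # Phi(0) = delta * I

for n in range(1, N + 1):
    Phi = lam * Phi + np.outer(U[n], U[n])   # Phi(n) = lam*Phi(n-1) + U(n)U(n)^H

# direct evaluation of the same exponentially weighted, regularized sum
Phi_direct = delta * lam**N * np.eye(M)
for i in range(1, N + 1):
    Phi_direct += lam**(N - i) * np.outer(U[i], U[i])

print(np.allclose(Phi, Phi_direct))          # True
```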
Application of the matrix inversion lemma to the present problem is based on the following definitions:
$$\mathbf{A} = \boldsymbol{\Phi}(n), \qquad \mathbf{B}^{-1} = \lambda\,\boldsymbol{\Phi}(n-1), \qquad \mathbf{C} = \mathbf{U}(n), \qquad \mathbf{D} = 1.$$
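For reference, the matrix inversion lemma in the form used here states that if $\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\mathbf{D}^{-1}\mathbf{C}^{H}$, then
$$\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\mathbf{C}\left(\mathbf{D} + \mathbf{C}^{H}\mathbf{B}\mathbf{C}\right)^{-1}\mathbf{C}^{H}\mathbf{B}.$$
With the identifications above, $\mathbf{A} = \lambda\boldsymbol{\Phi}(n-1) + \mathbf{U}(n)\mathbf{U}^{H}(n) = \boldsymbol{\Phi}(n)$, and applying the lemma yields the recursion for $\mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n)$ given next.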
Defining $\mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n)$, the lemma gives
$$\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{K}(n)\,\mathbf{U}^{H}(n)\,\mathbf{P}(n-1) \qquad \text{(Riccati equation)}$$
where the gain vector is
$$\mathbf{K}(n) = \frac{\lambda^{-1}\,\mathbf{P}(n-1)\,\mathbf{U}(n)}{1 + \lambda^{-1}\,\mathbf{U}^{H}(n)\,\mathbf{P}(n-1)\,\mathbf{U}(n)} = \mathbf{P}(n)\,\mathbf{U}(n).$$
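The second equality for the gain, $\mathbf{K}(n) = \mathbf{P}(n)\mathbf{U}(n)$, follows by cross-multiplying the denominator and then substituting the Riccati equation:
$$\mathbf{K}(n)\left[1 + \lambda^{-1}\mathbf{U}^{H}(n)\mathbf{P}(n-1)\mathbf{U}(n)\right] = \lambda^{-1}\mathbf{P}(n-1)\mathbf{U}(n)$$
$$\Rightarrow\quad \mathbf{K}(n) = \left[\lambda^{-1}\mathbf{P}(n-1) - \lambda^{-1}\mathbf{K}(n)\mathbf{U}^{H}(n)\mathbf{P}(n-1)\right]\mathbf{U}(n) = \mathbf{P}(n)\,\mathbf{U}(n).$$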
The tap-weight estimate can be written in terms of $\mathbf{P}(n)$ and the cross-correlation vector $\mathbf{Z}(n) = \lambda\,\mathbf{Z}(n-1) + \mathbf{U}(n)\,d^{*}(n)$:
$$\hat{\mathbf{W}}(n) = \boldsymbol{\Phi}^{-1}(n)\,\mathbf{Z}(n) = \mathbf{P}(n)\,\mathbf{Z}(n) = \lambda\,\mathbf{P}(n)\,\mathbf{Z}(n-1) + \mathbf{P}(n)\,\mathbf{U}(n)\,d^{*}(n)$$
This leads to the desired recursion equation for updating the tap-weight vector:
$$\hat{\mathbf{W}}(n) = \hat{\mathbf{W}}(n-1) + \mathbf{K}(n)\left[d^{*}(n) - \mathbf{U}^{H}(n)\,\hat{\mathbf{W}}(n-1)\right] = \hat{\mathbf{W}}(n-1) + \mathbf{K}(n)\,\xi^{*}(n),$$
where the a priori estimation error is
$$\xi(n) = d(n) - \mathbf{U}^{T}(n)\,\hat{\mathbf{W}}^{*}(n-1) = d(n) - \hat{\mathbf{W}}^{H}(n-1)\,\mathbf{U}(n).$$
The block diagram shown on the next page illustrates the use of the a priori estimation error in the RLS algorithm. The a priori estimation error is in general different from the a posteriori estimation error
$$e(n) = d(n) - \hat{\mathbf{W}}^{H}(n)\,\mathbf{U}(n).$$
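Substituting the weight update into the definition of $e(n)$ makes the relation between the two errors explicit:
$$e(n) = d(n) - \left[\hat{\mathbf{W}}(n-1) + \mathbf{K}(n)\,\xi^{*}(n)\right]^{H}\mathbf{U}(n) = \xi(n)\left[1 - \mathbf{K}^{H}(n)\,\mathbf{U}(n)\right],$$
so the a posteriori error is a scaled version (by a factor between 0 and 1) of the a priori error.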
• Block diagram of RLS algorithm
Summary of RLS algorithm
Initialize the algorithm by setting
$$\hat{\mathbf{W}}(0) = \mathbf{0}, \qquad \mathbf{P}(0) = \delta^{-1}\mathbf{I},$$
$$\delta = \begin{cases}\text{small positive constant} & \text{for high SNR}\\ \text{large positive constant} & \text{for low SNR}\end{cases}$$
For each time instant $n = 1, 2, \ldots$ compute
$$\boldsymbol{\pi}(n) = \mathbf{P}(n-1)\,\mathbf{U}(n), \qquad \mathbf{K}(n) = \frac{\boldsymbol{\pi}(n)}{\lambda + \mathbf{U}^{H}(n)\,\boldsymbol{\pi}(n)},$$
$$\xi(n) = d(n) - \hat{\mathbf{W}}^{H}(n-1)\,\mathbf{U}(n),$$
$$\hat{\mathbf{W}}(n) = \hat{\mathbf{W}}(n-1) + \mathbf{K}(n)\,\xi^{*}(n), \quad \text{and}$$
$$\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{K}(n)\,\mathbf{U}^{H}(n)\,\mathbf{P}(n-1).$$
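A minimal NumPy sketch of this recursion, following the summary above; the function name rls and the variable names are illustrative, not from the original slides:

```python
import numpy as np

def rls(u, d, M, lam=0.99, delta=1e-2):
    """Exponentially weighted RLS for a transversal filter with M taps."""
    w = np.zeros(M, dtype=complex)           # W_hat(0) = 0
    P = np.eye(M, dtype=complex) / delta     # P(0) = delta^{-1} I
    xi = np.zeros(len(d), dtype=complex)     # a priori errors

    for n in range(M - 1, len(d)):
        U = u[n::-1][:M]                     # tap-input vector [u(n), ..., u(n-M+1)]^T
        pi = P @ U                           # pi(n) = P(n-1) U(n)
        k = pi / (lam + np.conj(U) @ pi)     # gain vector K(n)
        xi[n] = d[n] - np.conj(w) @ U        # a priori error xi(n)
        w = w + k * np.conj(xi[n])           # W_hat(n) = W_hat(n-1) + K(n) xi*(n)
        P = (P - np.outer(k, np.conj(U) @ P)) / lam   # Riccati update for P(n)
    return w, xi
```

With stationary data the weight vector settles within roughly 2M iterations, which is consistent with the convergence discussion that follows.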
Convergence Analysis of the RLS algorithm
The desired response and the tap-input vector are assumed to be related by the multiple linear regression model.
• Block diagram of linear regression model
Assumption I:
$$d(n) = \mathbf{w}_0^{H}\,\mathbf{u}(n) + e_0(n),$$
where
– $\mathbf{w}_0$ is the regression parameter vector,
– $e_0(n)$ is the measurement noise,
– $e_0(n)$ is white with zero mean and variance $\sigma_0^{2}$,
– $e_0(n)$ is independent of the regressor $\mathbf{u}(n)$, and
– the exponential weighting factor $\lambda$ is unity.
Assumption II:
The input signal vector $\mathbf{u}(n)$ is drawn from a stochastic process which is ergodic in the autocorrelation function.
Assumption III:
The fluctuations in the weight-error vector are slow compared with those of the input signal vector $\mathbf{u}(n)$.
Convergence in the Mean Value
With the help of the above assumptions it can be shown that the RLS algorithm is convergent in the mean sense for $n \ge M$, where $M$ is the number of taps in the adaptive transversal filter:
$$\mathbb{E}\big[\hat{\mathbf{w}}(n)\big] = \mathbf{w}_0 - \frac{\delta}{n}\,\mathbf{p}.$$
• Unlike the LMS algorithm, the RLS algorithm does not have to wait for $n$ to be infinitely large for convergence.
• The mean-squared error in the weight vector
The weight-error vector is defined as
$$\boldsymbol{\varepsilon}(n) = \hat{\mathbf{w}}(n) - \mathbf{w}_0,$$
and its mean-square deviation is
$$\mathbb{E}\big[\boldsymbol{\varepsilon}^{H}(n)\,\boldsymbol{\varepsilon}(n)\big] = \frac{\sigma_0^{2}}{n}\sum_{i=1}^{M}\frac{1}{\lambda_i},$$
where $\lambda_i$ are the eigenvalues of the correlation matrix of the input vector $\mathbf{u}(n)$. The deviation thus decays as $1/n$ and is amplified by small eigenvalues (ill-conditioned input data).
Learning curve of the RLS algorithm
Considerations on convergence:
• The RLS algorithm converges in about 2M iterations, where M is the length of the transversal filter.
• RLS typically converges an order of magnitude faster than the LMS algorithm.
• RLS produces zero misadjustment when operating in a stationary environment (as n goes to infinity, only the measurement error affects the precision).
• Convergence is independent of the eigenvalue spread.
Example of RLS algorithm: Adaptive equalization I
• Block diagram of adaptive equalizer
• Impulse response of the channel (figure)
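The channel response and the simulation results appear as figures in the original slides. The sketch below reconstructs the flavor of the experiment with an assumed three-tap raised-cosine channel, an 11-tap equalizer, and an illustrative noise level; these parameter values are assumptions, not read from the slides. It reuses the rls routine sketched earlier.

```python
import numpy as np

# hypothetical raised-cosine channel (assumed values, not from the slides)
W_ch = 2.9
h = np.array([0.5 * (1 + np.cos(2 * np.pi * (k - 2) / W_ch)) for k in (1, 2, 3)])

rng = np.random.default_rng(2)
N, M, delay = 500, 11, 7                        # samples, equalizer taps, decision delay

a = rng.choice([-1.0, 1.0], size=N)             # random +/-1 training symbols
x = np.convolve(a, h)[:N]                       # channel output
u = x + np.sqrt(1e-3) * rng.standard_normal(N)  # received signal with additive noise
d = np.zeros(N)
d[delay:] = a[:N - delay]                       # delayed symbols as desired response

w_eq, xi = rls(u, d, M, lam=1.0, delta=0.004)   # train the RLS equalizer
print(np.mean(np.abs(xi[-100:]) ** 2))          # steady-state averaged squared error
```

Plotting |xi(n)|^2 averaged over independent runs produces learning curves of the kind summarized below.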
Results of Part 1
Results of Part 2
– Part 1 summary
• Convergence of the RLS algorithm is attained in about 20 iterations (twice the number of taps).
• The rate of convergence of the RLS algorithm is relatively insensitive to variations in the eigenvalue spread.
• The steady-state value of the averaged squared error produced by the RLS algorithm is small, confirming that the RLS algorithm produces zero misadjustment.
– Part 2 summary
• The rate of convergence is nearly the same for the LMS and RLS algorithms in a noisy environment.