
CDS 270-2: Lecture 4-2

Moving Horizon Estimation


Henrik Sandberg
19 April 2006

Goals:
• To learn how extended Kalman filters (EKF) and moving horizon
estimation (MHE) work.
• To learn sufficient conditions for stability of MHE.
Reading:
• E.L. Haseltine and J.B. Rawlings: Critical evaluation of extended
Kalman filtering and moving horizon estimation, Ind. Eng. Chem. Res.,
vol. 44, no. 8, 2005.
• C.V. Rao, J.B. Rawlings, and D.Q. Mayne: Constrained State Estimation
for Nonlinear Discrete-Time Systems: Stability and Moving Horizon
Approximations, IEEE TAC, vol. 48, no. 2, 2003.
Networked Control Systems

Today: The state server with extended Kalman filter and moving horizon estimation.
Problem Formulation
The model:

$$x_{k+1} = f_k(x_k, w_k)$$
$$y_k = h_k(x_k) + v_k$$

(include possible inputs $u_k$ in $f_k$) with constraints

$$x_k \in X_k, \quad w_k \in W_k, \quad v_k \in V_k.$$

Possibly nonlinear $f_k$ and $h_k$!

The problem: Given the data

$$Y_k = \{\, y_i : 0 \le i \le k \,\},$$

find the "best" (to be defined) estimate $\hat{x}_{k+m}$ of $x_{k+m}$.

($m = 0$: filtering, $m > 0$: prediction, and $m < 0$: smoothing.)
Constraints
• The noise $w_k$ and $v_k$ can typically have truncated Gaussian distributions.
• Be careful with bounds on measurement noise, due to the possibility of outliers.
• Constraints on the state $x_k$ are more problematic to motivate and may
lead to acausality. A detailed enough model $f_k$ should not need them.
Nevertheless, state constraints are useful together with simplified models.
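As an illustration (not from the lecture), a minimal Python sketch of a truncated Gaussian, here with a nonnegativity bound such as $w_k \ge 0$; the use of scipy.stats.truncnorm is an assumption of convenience:

```python
import numpy as np
from scipy.stats import norm, truncnorm

# A truncated Gaussian is a normal density restricted to [a, b] and
# renormalized. The lower bound a = 0 models a nonnegativity constraint.
a, b = 0.0, np.inf                    # truncation bounds, in standard deviations
w = np.linspace(-2.0, 4.0, 300)
p_full = norm.pdf(w)                  # ordinary Gaussian density
p_trunc = truncnorm.pdf(w, a, b)      # zero below a, rescaled to integrate to 1

samples = truncnorm.rvs(a, b, size=1000)
assert samples.min() >= 0.0           # all draws satisfy the bound
```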
First Approach: The Extended Kalman Filter
Forget about constraints and linearize along the estimated trajectory:

$$f_k(x_k, w_k) \approx f_k(\hat{x}_{k|k}, 0) + A_k (x_k - \hat{x}_{k|k}) + N_k w_k$$
$$h_k(x_k) \approx h_k(\hat{x}_{k|k-1}) + C_k (x_k - \hat{x}_{k|k-1})$$

where

$$A_k = \left. \frac{\partial f_k(x, 0)}{\partial x} \right|_{x = \hat{x}_{k|k}}, \quad
N_k = \left. \frac{\partial f_k(\hat{x}_{k|k}, w)}{\partial w} \right|_{w = 0}, \quad
C_k = \left. \frac{\partial h_k(x)}{\partial x} \right|_{x = \hat{x}_{k|k-1}}.$$

Assume

$$E \begin{bmatrix} w_k \\ v_k \end{bmatrix} \begin{bmatrix} w_l^T & v_l^T \end{bmatrix}
= \begin{bmatrix} Q_k & 0 \\ 0 & R_k \end{bmatrix} \delta_{kl} = \Sigma_k \delta_{kl}$$

and

$$E\{x_0\} = \bar{x}_0 \quad \text{and} \quad E\{(x_0 - \bar{x}_0)(x_0 - \bar{x}_0)^T\} = P_0.$$
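In practice the Jacobians $A_k$, $N_k$, $C_k$ are formed analytically or numerically. A minimal forward-difference sketch in Python; the helper name and step size are illustrative assumptions:

```python
import numpy as np

def jacobian(g, x, eps=1e-6):
    """Forward-difference Jacobian of g at x. Differentiating f_k or h_k
    in the appropriate argument yields A_k, N_k, or C_k."""
    g0 = np.atleast_1d(g(x))
    J = np.zeros((g0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(g(x + dx)) - g0) / eps
    return J

# Hypothetical usage: A_k = jacobian(lambda x: f_k(x, np.zeros(nw)), x_filt)
```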


Extended Kalman Filter (EKF)
0. Initialization:

$$\hat{x}_{0|-1} = \bar{x}_0, \quad P_{0|-1} = P_0$$

1. Corrector:

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (y_k - h_k(\hat{x}_{k|k-1}))$$
$$K_k = P_{k|k-1} C_k^T (C_k P_{k|k-1} C_k^T + R_k)^{-1}$$
$$P_{k|k} = P_{k|k-1} - K_k C_k P_{k|k-1}$$

2. One-step predictor:

$$\hat{x}_{k+1|k} = f_k(\hat{x}_{k|k}, 0)$$
$$P_{k+1|k} = A_k P_{k|k} A_k^T + N_k Q_k N_k^T$$
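A minimal NumPy sketch of one corrector/predictor iteration, following the formulas above; the function signature and argument names are illustrative assumptions:

```python
import numpy as np

def ekf_step(f, h, jac_f_x, jac_f_w, jac_h_x, x_pred, P_pred, y, Q, R):
    """One EKF iteration. x_pred, P_pred are x_hat_{k|k-1} and P_{k|k-1};
    jac_* return the Jacobians A_k, N_k, C_k at the current estimates."""
    # Corrector: linearize the output map around x_hat_{k|k-1}
    C = jac_h_x(x_pred)
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)  # Kalman gain
    x_filt = x_pred + K @ (y - h(x_pred))                   # x_hat_{k|k}
    P_filt = P_pred - K @ C @ P_pred                        # P_{k|k}

    # One-step predictor: linearize the dynamics around x_hat_{k|k}
    A = jac_f_x(x_filt)
    N = jac_f_w(x_filt)
    x_next = f(x_filt, np.zeros(Q.shape[0]))                # x_hat_{k+1|k}
    P_next = A @ P_filt @ A.T + N @ Q @ N.T                 # P_{k+1|k}
    return x_filt, P_filt, x_next, P_next
```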
Comments on EKF

• EKF is simple and often works well (if the linear approximation is valid,
the noise is "small", and constraints are not important).
• Hard to prove anything.
• $P_{k|k}$ and $P_{k|k-1}$ are no longer exact estimation error covariances.
They give a rough estimate of the uncertainty, though.
Problems with Expectations and non-Gaussian Distributions

The expectation ("mean") may be a very unlikely process state!
Another Approach: MAP Estimates
• Idea: "Compute the most likely values of the states $x_0, \dots, x_T$,
given measurements $y_0, \dots, y_{T-1}$."
• Use the Bayesian maximum a posteriori (MAP) criterion:

$$\{\hat{x}_0, \dots, \hat{x}_T\} = \arg\max_{\{x_0, \dots, x_T\}} p(x_0, \dots, x_T \mid Y_{T-1})$$

• Assuming Gaussian white noise and $x_{k+1} = f_k(x_k) + w_k$:

$$\arg\max_{\{x_0, \dots, x_T\}} p(x_0, \dots, x_T \mid y_0, \dots, y_{T-1})$$
$$= \arg\max_{\{x_0, \dots, x_T\}} p_{x_0}(x_0) \prod_{k=0}^{T-1} p_{v_k}(y_k - h_k(x_k)) \, p(x_{k+1} \mid x_k)$$
$$= \arg\max_{\{x_0, \dots, x_T\}} \sum_{k=0}^{T-1} \left[ \log p_{v_k}(y_k - h_k(x_k)) + \log p(x_{k+1} \mid x_k) \right] + \log p_{x_0}(x_0)$$
$$= \arg\min_{\{x_0, \dots, x_T\}} \sum_{k=0}^{T-1} \left[ \| y_k - h_k(x_k) \|^2_{R_k^{-1}} + \| x_{k+1} - f_k(x_k) \|^2_{Q_k^{-1}} \right] + \| x_0 - \bar{x}_0 \|^2_{P_0^{-1}}$$

• Often the variables $x_0$ and $w_0, \dots, w_{T-1}$ are used as decision variables instead.
Interpretation

$$\min_{x_0, \{w_0, \dots, w_{T-1}\}} \sum_{k=0}^{T-1} \left[ \| y_k - h_k(x_k) \|^2_{R_k^{-1}} + \| w_k \|^2_{Q_k^{-1}} \right] + \| x_0 - \bar{x}_0 \|^2_{P_0^{-1}}$$

Find an initial state and process noise sequence such that
1. measurement data is matched (first term small);
2. process noise is not larger than expected (second term small);
3. the initial state is not too far away from the initial guess (third term small).

More generally, we will consider the problem

$$\min_{x_0, \{w_k\}_{k=0}^{T-1}} \sum_{k=0}^{T-1} L_k(w_k, v_k) + \Gamma(x_0),$$

subject to dynamics and constraints, for positive-definite functions $L_k$ and $\Gamma$.
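A minimal Python sketch of this full-information problem, for the unconstrained quadratic case derived above; it optimizes over $x_0$ and $w_0, \dots, w_{T-1}$ and assumes, for simplicity, that $w_k$ has the same dimension as the state:

```python
import numpy as np
from scipy.optimize import minimize

def full_information_estimate(f, h, y, x0_bar, x0_guess, P0inv, Qinv, Rinv, n):
    """Unconstrained full-information (MAP) estimate over x_0 and w_0..w_{T-1}."""
    T = len(y)

    def cost(theta):
        x = theta[:n]
        w = theta[n:].reshape(T, n)
        J = (x - x0_bar) @ P0inv @ (x - x0_bar)     # initial-state penalty
        for k in range(T):
            v = y[k] - h(x)                          # measurement residual v_k
            J += v @ Rinv @ v + w[k] @ Qinv @ w[k]   # stage cost L_k(w_k, v_k)
            x = f(x, w[k])                           # propagate the dynamics
        return J

    theta0 = np.concatenate([x0_guess, np.zeros(T * n)])
    res = minimize(cost, theta0, method="BFGS")
    x0_opt, w_opt = res.x[:n], res.x[n:].reshape(T, n)
    traj = [x0_opt]                                  # rebuild state trajectory
    for k in range(T):
        traj.append(f(traj[-1], w_opt[k]))
    return np.array(traj)
```

Constraints $x_k \in X_k$, $w_k \in W_k$ would require a constrained solver (e.g. method="SLSQP") instead of BFGS.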
Possibilities and Problems
[Figure: estimated output $\hat{y}_k$ vs. measured $y_k$, and the estimated noise sequences $v_k$ and $w_k$, plotted over the horizon $0 \le k \le T$.]

We note that
• we can try to enforce the constraints $x_k \in X_k$, $w_k \in W_k$,
$v_k \in V_k$ in the optimization;
• the optimization problem may have multiple local minima;
• the problem complexity grows at least linearly with the horizon $T$;
• for linear systems without constraints and with quadratic cost, the
problem can be solved recursively and leads to the Kalman filter.
Moving Horizon Estimation (MHE)

To reduce the computation cost with increasing $T$, consider a moving horizon backwards in time of length $N$ (compare with RHC):

$$\Phi_T^* = \min_{x_0, \{w_k\}_{k=0}^{T-1}} \left( \sum_{k=T-N}^{T-1} L_k(w_k, v_k) + \sum_{k=0}^{T-N-1} L_k(w_k, v_k) + \Gamma(x_0) \right)$$
$$= \min_{z, \{w_k\}_{k=T-N}^{T-1}} \left( \sum_{k=T-N}^{T-1} L_k(w_k, v_k) + Z_{T-N}(z) \right).$$
The Arrival Cost
The arrival cost is defined as

$$Z_{T-N}(z) = \min_{x_0, \{w_k\}_{k=0}^{T-N-1}} \sum_{k=0}^{T-N-1} L_k(w_k, v_k) + \Gamma(x_0)$$

subject to constraints, dynamics, and $x_{T-N} = z$.

• Compare the arrival cost to the terminal cost in RHC.
• The arrival cost is hard to compute. Approximations $\hat{Z}_{T-N}$ are
often necessary. Examples of this later.
• For linear systems without constraints and quadratic cost, the exact
expression is

$$Z_{T-N}(z) = (z - \hat{x}_{T-N})^T P_{T-N}^{-1} (z - \hat{x}_{T-N}) + \Phi_{T-N}^*.$$

Interpretation? Just minimizing the arrival cost should give back
$\hat{x}_{T-N|T-N-1}$ (our previous best estimate)!


Minimize the Total Cost Over the Horizon $N$

[Figure: stage costs $L_k$ (through $L_{T-1}$) and the arrival cost $Z_{T-N}(z)$; measurements $y_k, \dots, y_{T-1}$ vs. estimates $\hat{y}_k, \dots, \hat{y}_{T-1}$; the states $z$ and $\hat{x}_{T-N}$, plotted from time $T-N$ to $T-1$.]

The state $z$ is the optimal updated estimate of the state at time $T-N$, after taking $y_{T-N}, \dots, y_{T-1}$ into account.
“Implementation”
Sketch of algorithm
0. Fix horizon $N$, and find $L_k$ and $Z_k$ (hard!).
1. Solve

$$\{z^*, \{\hat{w}_k\}_{k=T-N}^{T-1}\} = \arg\min_{z, \{w_k\}} \left( \sum_{k=T-N}^{T-1} L_k(w_k, v_k) + Z_{T-N}(z) \right)$$

subject to dynamics and $x_k \in X_k$, $w_k \in W_k$, $v_k \in V_k$.
(Use (S)QP, NPSOL, NTG, ...)
2. Compute estimates $\hat{x}_{k|T-1}$, $k \in [T-N, T]$:

$$\hat{x}_{T-N|T-1} := z^*, \quad \dots, \quad \hat{x}_{T|T-1} := f_{T-1}(\hat{x}_{T-1|T-1}, \hat{w}_{T-1})$$

3. Increase $T$, update $Z_{T-N}$, and go to 1.
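A minimal Python sketch of steps 1-2, using a quadratic arrival-cost term as in the linear case above and an unconstrained solver; the names and the solver choice are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def mhe_step(f, h, y_window, x_prior, Pinv, Qinv, Rinv, z_guess, n):
    """One MHE update over the window y_{T-N}, ..., y_{T-1}. The arrival
    cost is approximated by (z - x_prior)^T Pinv (z - x_prior); the
    constant term is dropped since it does not affect the minimizer."""
    N = len(y_window)

    def cost(theta):
        z = theta[:n]                                 # state at time T - N
        w = theta[n:].reshape(N, n)                   # process noise sequence
        J = (z - x_prior) @ Pinv @ (z - x_prior)      # approximate arrival cost
        x = z
        for k in range(N):
            v = y_window[k] - h(x)                    # measurement residual
            J += v @ Rinv @ v + w[k] @ Qinv @ w[k]    # stage cost
            x = f(x, w[k])
        return J

    theta0 = np.concatenate([z_guess, np.zeros(N * n)])
    res = minimize(cost, theta0, method="BFGS")
    z_opt, w_opt = res.x[:n], res.x[n:].reshape(N, n)
    x = z_opt                                         # roll forward to x_hat_{T|T-1}
    for k in range(N):
        x = f(x, w_opt[k])
    return z_opt, x
```

Constraints on $x_k$, $w_k$, $v_k$ would be enforced by switching to a constrained solver such as SLSQP with bounds and constraint functions.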
Example 1: EKF vs. MHE (Rao et al. 2003)

$$x_{1,k+1} = 0.99\, x_{1,k} + 0.2\, x_{2,k}$$
$$x_{2,k+1} = -0.1\, x_{1,k} + \frac{0.5\, x_{2,k}}{1 + x_{2,k}^2} + w_k$$
$$y_k = x_{1,k} - 3\, x_{2,k} + v_k$$

$v_k$ Gaussian, $w_k$ truncated Gaussian such that $w_k \ge 0$.

Compare EKF and MHE ($N = 10$, with/without constraints).
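The example system is straightforward to simulate; a sketch with assumed noise levels and initial state (neither is specified on the slide):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
T = 100
sigma_w, sigma_v = 0.1, 0.1          # assumed noise standard deviations

x = np.zeros((T + 1, 2))
y = np.zeros(T)
x[0] = [0.5, 0.5]                    # assumed initial state
for k in range(T):
    w = truncnorm.rvs(0, np.inf, scale=sigma_w, random_state=rng)  # w_k >= 0
    v = rng.normal(0.0, sigma_v)
    x1, x2 = x[k]
    x[k + 1, 0] = 0.99 * x1 + 0.2 * x2
    x[k + 1, 1] = -0.1 * x1 + 0.5 * x2 / (1 + x2**2) + w
    y[k] = x1 - 3 * x2 + v
```

The bound $w_k \ge 0$ is exactly the kind of physical knowledge that the constrained MHE on the next slide can exploit.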
Example 1: EKF vs. MHE (Rao et al. 2003)
• Constraints are important! They make all the difference in this example.
• Computation time (interpreted code, 500 MHz processor):
  – MHE: ∼3 s per time step
  – EKF: negligible
• The GNU Octave toolbox NMPC is available at
  http://jbrwww.che.wisc.edu/home/tenny/nmpc/
  (Use the CVS version!)
Approximate MHE

$$\hat{\Phi}_T = \min_{z, \{w_k\}_{k=T-N}^{T-1}} \left( \sum_{k=T-N}^{T-1} L_k(w_k, v_k) + \hat{Z}_{T-N}(z) \right)$$

Two common approximate arrival costs $\hat{Z}_{T-N}$:

(A) Forget the initial guess (independent of $z$):

$$\hat{Z}_{T-N}(z) = \hat{\Phi}_{T-N}.$$

(B) Use an EKF around the estimated trajectory $\{\hat{x}_k\}_0^{T-N}$ beyond the moving horizon:

$$\hat{Z}_{T-N}(z) = (z - \hat{x}_{T-N})^T P_{T-N}^{-1} (z - \hat{x}_{T-N}) + \hat{\Phi}_{T-N},$$

where $P_{T-N}$ is computed with a second-order Taylor expansion of $L_k$ and $\Gamma$, and the one-step predictor formulas.
Example 2: Approximation Problems (Rao 2000)
[Figure: estimation error plotted over time.]

A bad approximation $\hat{Z}_{T-N}$ may result in unstable estimates!

Next: We give some sufficient conditions for asymptotic stability of (approximate) MHE.

Asymptotic Stability of Estimation
• Asymptotic stability means that when there is no noise in the system,
i.e., $w_k = v_k = 0$, then

$$\| x_k - \hat{x}_k \| \to 0, \quad k \to \infty,$$

if the initial estimate is good enough. That is, the estimate converges
to the true state.
• When bounded noise is present, we require that $\| x_k - \hat{x}_k \|$
is uniformly bounded for all $k$.
Three Conditions for Asymptotic Stability
C1. Uniform observability: With $y_j^1$ and $y_j^2$ denoting the outputs
resulting from initial conditions $x_1$ and $x_2$,

$$0 \le \varphi(\| x_1 - x_2 \|) \le \sum_{j=k}^{k+N-1} \| y_j^1 - y_j^2 \|, \quad \text{for all } k.$$

C2. A cost to change the initial estimate:

$$0 \le \hat{Z}_{T-N}(z) - \hat{\Phi}_{T-N} \le \gamma(\| z - \hat{x}_{T-N} \|).$$

C3. Add no information: Some forgetting of information outside the moving horizon,

$$\hat{Z}_T(p) \le \min_{z, \{w_k\}_{k=T-N}^{T-1}} \left( \sum_{k=T-N}^{T-1} L_k(w_k, v_k) + \hat{Z}_{T-N}(z) \right)$$

subject to $x_T = p$.
Discussion
• C1 is system dependent. A long horizon $N$ is good.
• Condition C3 is often hard to prove. If it is violated, an estimate $p$
is considered worse than there is reason for.
• Approximation (A) satisfies C2-C3. It may have bad performance, though,
since it has complete forgetting.
• Approximation (B) (using the EKF) does not necessarily satisfy C3.
Exception: linear dynamics, quadratic cost, and convex constraints.

If C1-C3 and some technical assumptions on $f_k$, $h_k$, $L_k$, $\Gamma$
hold (solutions exist, etc.), then MHE is asymptotically stable.

Proof. Lyapunov-type argument. See Rao et al. 2003. Compare with RHC.
Summary
• The extended Kalman filter (EKF) is a simple and often good estimator
for nonlinear systems. No constraints are taken into account. Not many proofs.
• Moving horizon estimation (MHE) is more powerful (and more complicated):
nonlinear models, constraints, and asymmetric noise distributions. Dual to RHC.
• Constraints come from physical insight ("positive flows") or approximations.
• The arrival cost $Z_T$ often needs to be approximated.
(Arrival cost ∼ terminal cost in RHC.)
• MHE performance relies on good optimization performance and good approximations.
• Asymptotic stability follows from some natural conditions: observability,
a cost to update, some forgetting, ...
