Nonlinear Control - An Overview: Fernando Lobo Pereira, [email protected]
Introduction
Problem description - system components, objectives and control issues.
Example 1: control of telescope mirrors
Example 2: Pendulum
Modeling
Control Design
Stationary positions: (0, 0) stable equilibrium, (π, 0) unstable equilibrium. (1)
Step 2 - Control Synthesis
Take ϕ > 0 and pick u(t) = −αϕ(t) with α > 0. Then
ϕ̈(t) + (α − 1)ϕ(t) = 0.
α > 1: oscillatory behavior;
α = 1: only stable point: ϕ̇(0) = 0; (2)
α < 1: stable points: ϕ̇(0) = −ϕ(0)√(1 − α).
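To see these regimes numerically, here is a minimal simulation sketch (assuming Python with numpy/scipy is available; the setup is illustrative, not part of the original slides), integrating ϕ̈(t) + (α − 1)ϕ(t) = 0 for a few values of α from the initial condition ϕ(0) = 0.1, ϕ̇(0) = 0.

import numpy as np
from scipy.integrate import solve_ivp

def closed_loop(alpha):
    # state s = (phi, phi_dot); closed-loop dynamics phi_ddot = -(alpha - 1) phi
    return lambda t, s: [s[1], -(alpha - 1.0) * s[0]]

for alpha in (2.0, 1.0, 0.5):
    sol = solve_ivp(closed_loop(alpha), (0.0, 10.0), [0.1, 0.0], max_step=0.01)
    print(f"alpha = {alpha}: |phi(10)| = {abs(sol.y[0, -1]):.3f}")
# alpha > 1: bounded oscillation; alpha = 1: phi stays at 0.1 (since phi_dot(0) = 0);
# alpha < 1: phi grows, since phi_dot(0) != -phi(0) * sqrt(1 - alpha).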
Linearization Principle
Issues in control system design:
• Easy introduction to the main concepts of nonlinear control, making as much use as
possible of linear systems theory and as little mathematics as possible.
Linear time-invariant system (state-space form): ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t), where
• x ∈ ℝ^n, y ∈ ℝ^q, and u ∈ ℝ^m
• A ∈ ℝ^{n×n}, B ∈ ℝ^{n×m}, C ∈ ℝ^{q×n}, and D ∈ ℝ^{q×m}
φ(t, t0) = exp(∫_{t0}^{t} A(s)ds) = Σ_{i=0}^{N} αi(t, t0) [∫_{t0}^{t} A(s)ds]^i, (3)
for some N ≤ n − 1 (the exponential formula is valid when A(t) commutes with ∫_{t0}^{t} A(s)ds, e.g., when A is constant).
The last equality holds due to the Cayley-Hamilton Theorem: p(A) = 0, where p is the characteristic polynomial of A.
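A quick numerical sanity check of the Cayley-Hamilton theorem (a hedged sketch assuming numpy; the 2×2 matrix is an arbitrary example, not from the slides):

import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
p = np.poly(A)   # characteristic polynomial coefficients, highest power first
pA = sum(c * np.linalg.matrix_power(A, len(p) - 1 - i) for i, c in enumerate(p))
print(np.allclose(pA, 0))   # True: A annihilates its own characteristic polynomial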
Algorithm
• Compute eigenvalues and eigenvectors of ∫_{t0}^{t} A(s)ds
• Compute the coefficients αi(t, t0) by solving a system of linear functional equations
obtained by noting that A and its eigenvalues satisfy (3)
• Plug the αi(t, t0)’s into (3) in order to obtain φ(t, t0)
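For a constant A with distinct eigenvalues, the functional equations reduce to a small Vandermonde system for the αi(t). A sketch of this special case (numpy/scipy assumed; the matrix is illustrative), reconstructing φ(t, 0) = e^{At} and comparing it with expm:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # eigenvalues -1 and -2 (distinct)
t = 0.7
lam = np.linalg.eigvals(A)
V = np.vander(lam, increasing=True)          # rows [1, lambda_k, ..., lambda_k^(n-1)]
alpha = np.linalg.solve(V, np.exp(lam * t))  # e^{lambda_k t} = sum_i alpha_i lambda_k^i
Phi = sum(a * np.linalg.matrix_power(A, i) for i, a in enumerate(alpha))
print(np.allclose(Phi, expm(A * t)))         # True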
Function of a Matrix
f(A) = Σ_{k=1}^{σ} Σ_{l=0}^{mk−1} f^(l)(λk) pkl(A), where
• pkl(λ) = [(λ − λk)^l / l!] φk(λ), l = 0, ..., mk − 1
• φk(λ) = nk(λ) ψ(λ) / (λ − λk)^{mk}, and
• nk(λ) are the coefficients of a partial fraction expansion of 1/ψ(λ), i.e.,
1/ψ(λ) = Σ_{k=1}^{σ} nk(λ)/(λ − λk)^{mk}
f) Center: Re(λ1) = 0.
Minimal Realization
Definition - A representation is a minimal representation if it is completely observable
and completely controllable
Fact - Minimal representation ⇐⇒ rank(RQ) = n
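The rank test can be checked numerically; the sketch below (numpy assumed; the triple (A, B, C) is illustrative, and R, Q are taken here as the controllability and observability matrices) performs the equivalent check rank(R) = rank(Q) = n.

import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

R = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])      # [B AB ...]
Q = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])      # [C; CA; ...]
print(np.linalg.matrix_rank(R) == n and np.linalg.matrix_rank(Q) == n)   # True: minimal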
Definitions:
Autonomy - No explicit dependence on t, not even through u.
Equilibrium Point - f (t, x0) = 0, ∀t > t0
Relevance: x(t) = x0 ∀t > t0
Isolated Equilibrium point - There is a neighborhood of x0 where no additional
equilibrium points can be found.
Exercise - List the Pendulum equilibrium points
Periodic Solutions
Poincaré-Bendixson Theorem (existence of periodic trajectories).
Let L be the set of limit points of a trajectory S, and suppose L is contained in a closed,
bounded region M which contains no equilibrium points.
Then either L or S is a periodic trajectory.
By limit set is meant the set of all points in the state space that are visited
infinitely often as time goes to ∞.
Exercise 1: Take M := {(x, y) : 1/2 ≤ x² + y² ≤ 3/2} and the nonlinear system in the
previous slide.
Exercise 2: Analyze the system f (x) := col(−x1 + x2, −x1 − x2) in the closed unit disk.
Suggestion: consider a change to polar coordinates.
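As a hint for Exercise 2, the following symbolic sketch (assuming sympy is available) performs the suggested change to polar coordinates: it yields ṙ = −r and θ̇ = −1, so trajectories spiral into the origin and no periodic orbit can exist in the closed unit disk (note also that the disk contains the equilibrium at the origin, so the theorem above does not apply to it).

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x1, x2 = r * sp.cos(th), r * sp.sin(th)
f1, f2 = -x1 + x2, -x1 - x2

r_dot = sp.simplify((x1 * f1 + x2 * f2) / r)        # = d/dt of sqrt(x1^2 + x2^2)
th_dot = sp.simplify((x1 * f2 - x2 * f1) / r**2)    # = d/dt of atan2(x2, x1)
print(r_dot, th_dot)                                # -r, -1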
Lyapunov Stability
Concept of Stability
It concerns whether a dynamic system drives its state back to a given equilibrium
point after being displaced from it by some perturbation.
This concept is independent of the control activity.
An important application is to support the design of a feedback control so that the closed
loop system has the desired stability properties at the equilibrium points of interest.
Definitions of Stability
Assume, with no loss of generality, that x0 = 0 is an equilibrium of ẋ(t) = f(t, x(t)) at t = t0.
Types of stability:
x0 is Stable at t0 iff ∀ε ∃δ(t0, ε) such that ‖x(t0)‖ ≤ δ(t0, ε) implies ‖x(t)‖ ≤ ε, ∀t ≥ t0.
x0 is Uniformly Stable over [t0, ∞) iff ∀ε ∃δ(ε) such that ‖x(t1)‖ ≤ δ(ε) and t1 ≥ t0 imply ‖x(t)‖ ≤ ε, ∀t ≥ t1.
x0 is Asymptotically Stable at t0 iff it is stable at t0 and ∃γ(t0) > 0 such that ‖x(t0)‖ ≤ γ(t0) implies ‖x(t)‖ → 0 as t → ∞.
x0 is Uniformly Asymptotically Stable over [t0, ∞) iff it is uniformly stable over [t0, ∞) and ∃γ > 0 such that ‖x(t1)‖ ≤ γ and t1 ≥ t0 imply ‖x(t)‖ → 0 as t → ∞.
Definitions of Stability (cont.)
Exercise 1 - Check if (0, 0) is a stable equilibrium point of
ẋ1(t) = x2(t)
ẋ2(t) = −x1(t) + (1 − x1²(t))x2(t)
Exercise 2 - Stability and Uniform Stability
a) Check that the solution to ẋ(t) = (6t sin(t) − 2t)x(t) is given by
ln(x(t)/x(t0)) = t0² − t² + 6t0 cos(t0) − 6t cos(t) − 6 sin(t0) + 6 sin(t).
b) Apply the definition of stability, i.e., show that you can pick δ(ε, t0) = ε/c(t0) for a
suitable c(t0). How would you choose c(t0)? (get a formula!)
c) Can such a constant be chosen independently of t0? Give a counterexample.
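The closed-form expression in a) can be cross-checked numerically; a small sketch (scipy assumed; the values of t0, x0 and the final time are arbitrary):

import numpy as np
from scipy.integrate import solve_ivp

t0, x0, t1 = 1.0, 0.5, 4.0
sol = solve_ivp(lambda t, x: (6*t*np.sin(t) - 2*t) * x, (t0, t1), [x0],
                rtol=1e-10, atol=1e-12)
lhs = np.log(sol.y[0, -1] / x0)
rhs = t0**2 - t1**2 + 6*t0*np.cos(t0) - 6*t1*np.cos(t1) - 6*np.sin(t0) + 6*np.sin(t1)
print(np.isclose(lhs, rhs, atol=1e-6))   # True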
Observation: Stability and uniform stability coincide for time-invariant or periodic
systems. Why?
Exercise 3 - Check that (0, 0) is an asymptotically stable equilibrium point for the system:
ẋ1(t) = x1(t)(x1²(t) + x2²(t) − 1) − x2(t)
ẋ2(t) = x1(t) + x2(t)(x1²(t) + x2²(t) − 1)
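A numerical hint for Exercise 3 (scipy assumed; the initial conditions are illustrative): in polar coordinates the system gives ṙ = r(r² − 1), so trajectories starting inside the unit circle are attracted to the origin.

import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    s = x[0]**2 + x[1]**2 - 1.0
    return [x[0] * s - x[1], x[0] + x[1] * s]

for r0 in (0.3, 0.9):
    sol = solve_ivp(f, (0.0, 8.0), [r0, 0.0], max_step=0.01)
    print(r0, "->", np.hypot(sol.y[0, -1], sol.y[1, -1]))   # both shrink towards 0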
Auxiliary Definitions
α is a Function of class K if α(·) is nondecreasing, α(0) = 0, and α(p) > 0, ∀p > 0.
V : ℝ^n × ℝ_+ → ℝ is a Decrescent Function if ∃β(·) of class K s.t. V(t, x) ≤ β(‖x‖),
∀t ≥ 0, ∀x s.t. ‖x‖ ≤ r for some r.
V : ℝ^n × ℝ_+ → ℝ is a Locally Positive Definite Function if V(t, 0) = 0, ∀t ≥ 0, and
V(t, x) ≥ α(‖x‖), ∀t ≥ 0, ∀x s.t. ‖x‖ ≤ r, for some r and some α of class K.
V is a Positive Definite Function if it is a Locally Positive Definite Function with “r = ∞”
and α(r) ↑ ∞ as r ↑ ∞.
Observation: The time-independent versions of the above definitions can be expressed
without the need for class K functions, i.e.,
V̄(0) = 0, V̄(x) > 0, ∀x s.t. ‖x‖ ≤ r, ...
Fact: A time-dependent function is an l.p.d.f. (p.d.f.) iff it “dominates” a time-independent
l.p.d.f. (p.d.f.).
Lyapunov’s Direct Method
Main Results. The equilibrium point 0 at time t0 is (uniformly) stable (over [t0, ∞)) if ∃ a
C1 (decrescent) l.p.d.f. V s.t.
V̇(t, x) ≤ 0, ∀t ≥ 0, ∀x s.t. ‖x‖ ≤ r for some r.
Given ε, let ε̄ := min{ε, r, s}, where s is such that V(t, x) ≥ α(‖x‖), ∀t > 0, ∀‖x‖ ≤ s.
To check that any δ > 0 s.t. β(t0, δ) := sup{V(t0, x) : ‖x‖ ≤ δ} < α(ε̄) is as required in the
definition of stability, note that, since V̇(t, x) ≤ 0, whenever ‖x(t0)‖ ≤ δ we have
α(‖x(t)‖) ≤ V(t, x(t)) ≤ V(t0, x(t0)) ≤ β(t0, δ) < α(ε̄), and thus ‖x(t)‖ ≤ ε̄ ≤ ε.
Example: Take V(x1, x2) = (1/2)x2² + ∫_0^{x1} g(s)ds and apply the Lyapunov theorem to the system:
ẋ1 = x2
ẋ2 = −f(x2) − g(x1)
where f (friction) and g (spring restoring force) are continuous and, ∀s ∈ [−s0, s0],
satisfy sf(s) ≥ 0 and sg(s) > 0 for s ≠ 0.
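A symbolic sketch of this example (sympy assumed; f(s) = s³ and g(s) = sin s are illustrative choices satisfying the sign conditions near 0): it gives V̇ = −x2 f(x2) ≤ 0, so the result above yields stability of the origin.

import sympy as sp

x1, x2, s = sp.symbols('x1 x2 s')
f = lambda v: v**3          # friction: s*f(s) >= 0
g = lambda v: sp.sin(v)     # restoring force: s*g(s) > 0 for 0 < |s| < pi

V = sp.Rational(1, 2) * x2**2 + sp.integrate(g(s), (s, 0, x1))
Vdot = sp.diff(V, x1) * x2 + sp.diff(V, x2) * (-f(x2) - g(x1))
print(sp.simplify(Vdot))    # -x2**4, i.e. -x2*f(x2) <= 0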
M ⊂ ℝ^n is an invariant set for ẋ = f(t, x) if x(t0) ∈ M for some t0 > 0 implies
x(t) ∈ M, ∀t ≥ t0.
A set S ⊂ ℝ^n is the positive limit set of a trajectory x(·) if ∀x ∈ S, x = lim_{n→∞} x(tn) for
some sequence {tn} s.t. tn → ∞.
Facts
a) For periodic or autonomous systems, the positive limit set of any trajectory is an invariant
set.
b) The positive limit set of a bounded trajectory is closed and bounded.
c) Let x(·) be bounded and S be its positive limit set. Then
lim_{t↑∞} inf_{y∈S} ‖x(t) − y‖ = 0, i.e., the trajectory approaches S.
Another Fact
Consider ẋ = f(x) and V : ℝ^n → ℝ to be C1 s.t.: Sc := {x ∈ ℝ^n : V(x) ≤ c} is
bounded; V is bounded from below on Sc; and V̇(x) ≤ 0 on Sc.
Then, ∀x0 ∈ Sc, lim_{t↑∞} x(t; x0, 0) ∈ M, where M is the largest invariant subset of
{x ∈ Sc : V̇(x) = 0}.
LaSalle’s local Theorem
Let ẋ = f(x) and V : ℝ^n → ℝ be a C1 l.p.d.f. s.t. V̇(x) ≤ 0, ∀‖x‖ ≤ r. Assume that
S := {x ∈ ℝ^n : V(x) ≤ m, V̇(x) = 0}, with m := sup_{‖x‖≤r} V(x), contains no trajectory
other than x ≡ 0. Then, 0 is asymptotically stable.
LaSalle’s global Theorem
Let ẋ = f(x) be autonomous or periodic with period T, and let V be C1, p.d., and s.t.
V̇(x) ≤ 0, ∀x ∈ ℝ^n. Assume also that S := {x ∈ ℝ^n : V̇(t, x) = 0, ∀t ≥ 0} contains no
trajectory other than the trivial one. Then, 0 is globally asymptotically stable.
Observation:
The advantage of LaSalle’s theorems is that asymptotic stability is concluded by requiring
only V̇(x) ≤ 0, and not −V̇(x) ≥ α(‖x‖).
The price to pay? The system has to be autonomous or periodic in time.
Exercise - Apply LaSalle’s theorem to ÿ + f(ẏ) + g(y) = 0 with V(y, ẏ) = (1/2)ẏ² + ∫_0^y g(s)ds,
where f, g are continuous, f(0) = g(0) = 0, and sf(s) > 0, sg(s) > 0, ∀s ≠ 0.
Conditions for equilibria instability
“0 is unstable at t0” if ∃ a C1 V : ℝ × ℝ^n → ℝ,
(i) s.t.: V is decrescent, V̇ is l.p.d., V(t, 0) = 0, and ∀ε > 0, ∃x ∈ εB, V(t0, x) > 0; or
(ii) s.t.: V is decrescent, V(t, 0) = 0, ∀ε > 0, ∃x ∈ εB, V(t0, x) > 0, and
V̇(t, x) = λV(t, x) + W(t, x) with λ > 0 and W(t, x) ≥ 0, ∀t ≥ t0, ∀‖x‖ ≤ r; or
(iii) ∃ closed Ω and open Ω̄ ⊂ Ω s.t. 0 ∈ int Ω, 0 ∈ ∂Ω̄, and,
∀t ≥ t0, V is bounded above on Ω, uniformly in t, V(t, x) = 0 on ∂Ω̄,
V(t, x) > 0, ∀x ∈ Ω̄, and V̇(t, x) ≥ γ(‖x‖) for some γ of class K.
Linear Systems
For ẋ(t) = A(t)x(t), t > 0, 0 is an equilibrium (isolated whenever A(t)x = 0, ∀t, only for x = 0).
Let {λi : i = 1, ..., n} be the eigenvalues of A and Φ denote the State Transition Matrix.
Consider the autonomous system ẋ(t) = Ax(t).
Classical result
a) Global asymptotic stability iff Re{λi} < 0, ∀i; and
b) Stability iff Re{λi} ≤ 0, ∀i, and every λi with Re{λi} = 0 is a simple zero of the
minimal polynomial of A.
Given A, consider the Lyapunov equation (LE): A'P + PA = −Q. There are two approaches:
either pick P and study the resulting Q,
or pick Q and study the resulting P.
While the first requires an a priori guess on the stability (of 0), the second is a more
straightforward test.
However, there is a problem of nonuniqueness. Hence:
Theorem A
∀Q, ∃1 P sol. to (LE) iff λi + λj* ≠ 0, ∀i, j.
Theorem B
Re(λi) < 0 ∀i iff ∃Q > 0 s.t. ∃1 P sol. to (LE), P > 0 iff ∀Q > 0, ∃1 P sol. to
(LE), P > 0.
Theorem C
Suppose λi + λj* ≠ 0 and let (LE) be s.t. ∃1 solution P for each Q. If Q > 0, then P has as
many negative eigenvalues as A has eigenvalues with positive real part.
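Theorem B suggests a straightforward numerical test; a sketch (scipy assumed; A is an illustrative Hurwitz matrix): pick Q = I, solve (LE) for P and check P > 0.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)     # solves A'P + PA = -Q
print(np.linalg.eigvalsh(P))               # all positive, consistent with Re(lambda_i) < 0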
Indirect Method
Key idea: derive (local) stability conclusions for nonlinear systems from results for linear
systems.
Take ẋ(t) = f(t, x(t)) where f is C1 in x, f(t, 0) = 0, ∀t ≥ 0, and lim_{‖x‖→0} ‖g(t, x)‖/‖x‖ = 0,
where g(t, x) = f(t, x) − A(t)x with A(t) = ∂f/∂x(t, x)|_{x=0}.
Theorem: Assume A(·) is bounded and lim_{‖x‖→0} sup_{t≥0} ‖g(t, x)‖/‖x‖ = 0. Then, if 0 is uniformly
asymptotically stable (UAS) over [0, ∞) for the linearized system (LS), ż(t) = A(t)z(t),
it is also UAS for the nonlinear system ẋ(t) = f(t, x(t)).
Exercises (take the assumptions and definitions of the above theorem).
1 - Let P(t) := ∫_t^∞ Φ'(s, t)Φ(s, t)ds. Show that:
1.1 ∀t ≥ 0, P(t) > 0 and ∃b > a > 0 s.t. a x'x ≤ x'P(t)x ≤ b x'x.
1.2 Ṗ(t) + A'(t)P(t) + P(t)A(t) + I = 0.
2 - Let V(t, x) := x'P(t)x. Show that V(t, x) is a decrescent p.d.f. with
V̇(t, x) = −x'x + 2x'P(t)g(t, x).
3 - Take r > 0 s.t. ‖x‖ ≤ r ⇒ ‖g(t, x)‖ ≤ ‖x‖/(3b), ∀t ≥ 0. Show that
V̇(t, x) ≤ −x'x/3.
Theorem’
Take the data of the previous theorem and assume that A(t) = Ā, ∀t ≥ 0. Then, if Ā has
at least one eigenvalue with positive real part, 0 is an unstable equilibrium point for the
nonlinear system.
Exercises Determine the stability of the origin for the following systems (including the
domain of attraction).
a) ÿ = µ(1 − y²)ẏ − y with µ > 0
b) ẋ1 = x1 + x2 + x1x2
   ẋ2 = −x1 + x2²
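As a hint for b), the indirect method only requires the eigenvalues of the Jacobian at the origin; a minimal check (numpy assumed):

import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 0.0]])   # Jacobian of b) at (0, 0)
print(np.linalg.eigvals(A))               # 0.5 +/- 0.866j: positive real parts, so unstable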
The Feedback Stabilization Problem
Take the control system ẋ(t) = f (x(t), u(t)) and specify a feedback control law
u(t) = g(x(t)) so that the reference equilibrium point of the closed loop system
ẋ(t) = f (x(t), g(x(t))) is asymptotically stable.
Assumptions
a) f is C1 in ℝ^n × ℝ^m and f(0, 0) = 0.
b) rank[B | AB | ... | A^{n−1}B] = n, with A = ∂f/∂x(x, u)|_{x=0,u=0}, B = ∂f/∂u(x, u)|_{x=0,u=0}.
Observation The assumptions imply that the linearized system around (0, 0),
ż(t) = Az(t) + Bv(t), is controllable.
Fact There is a matrix K s.t. all the eigenvalues of A − BK have negative real parts
and, thus 0 is G.A.S. for the closed loop system ż(t) = (A − BK)z(t).
Another Fact (how to compute such a K? Use LQ results): Given (A, B) as above,
K = Q⁻¹B'M, where M is the solution to the Riccati equation
−P − A'M − MA + MBQ⁻¹B'M = 0, for given P > 0 and Q > 0.
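A hedged numerical sketch of this LQ recipe (scipy assumed; the (A, B) pair, a pendulum linearized at the upright position, is illustrative). In scipy's solve_continuous_are(A, B, q, r), the arguments q and r play the roles of P and Q above.

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
P, Q = np.eye(2), np.eye(1)                # state and control weights

M = solve_continuous_are(A, B, P, Q)       # A'M + MA - M B Q^{-1} B' M + P = 0
K = np.linalg.solve(Q, B.T @ M)            # K = Q^{-1} B' M
print(np.linalg.eigvals(A - B @ K))        # all real parts negative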
Theorem (Nonlinear stabilization)
Take ẋ(t) = f(x(t), u(t)) where f satisfies assumptions a) and b) above, and let A and B be defined as above.
Take K ∈ ℝ^{m×n} s.t. all the eigenvalues of A − BK have negative real parts. Then,
u(t) = −Kx(t) =⇒ 0 is an asymptotically stable equilibrium point of
ẋ(t) = f(x(t), −Kx(t)).
Approach
a) Linearize the nonlinear system
b) Compute K stabilizing the linear system
c) Feed the nonlinear system input with −Kx.
Exercise
Find a feedback stabilizing control for the system
ẋ1 = 3x1 + x2² + g(x2, u)
ẋ2 = sin(x1) − x2 + u
where g(a, b) = 2a + b if |2a + b| ≤ 1, g(a, b) = 1 if 2a + b > 1, and g(a, b) = −1 if 2a + b < −1.
Input/Output Stability
The (dynamical) system regarded as an input-output transformation
Formalization requires:
Definition 1: Lp[0, ∞) (Lpe[0, ∞)), p = 1, ..., ∞, is the set of all measurable
f(·) (fT(·)) : [0, ∞) → ℝ s.t. ∫_0^∞ |f(t)|^p dt < ∞ (∫_0^∞ |fT(t)|^p dt < ∞, ∀T > 0),
where fT denotes the truncation of f to [0, T].
Note: f might be vector valued.
Definition 2: A : L^n_pe → L^m_pe is Lp-stable if
a) f ∈ L^n_p implies that Af ∈ L^m_p; and
b) ∃k, c s.t. ‖Af‖_p ≤ k‖f‖_p + c, ∀f ∈ L^n_p.
Note: Bounded input/Bounded output Stability - p = ∞
Example 1: (Af)(t) = ∫_0^t e^{−α(t−τ)} f(τ)dτ.
Find k and c.
Example 2: (Af)(t) = f²(t).
Is it an Lp map? Can you find k and c fulfilling the definition of stability?
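For Example 1 with p = ∞, the convolution bound ∫_0^t e^{−α(t−τ)}dτ ≤ 1/α suggests the candidate answer k = 1/α and c = 0; the crude discretization below (numpy assumed; parameters are illustrative) supports this numerically.

import numpy as np

alpha, dt, T = 2.0, 1e-3, 10.0
t = np.arange(0.0, T, dt)
f = np.ones_like(t)                     # bounded input with ||f||_inf = 1
Af = np.zeros_like(t)
for i in range(1, len(t)):              # (Af)' = -alpha*Af + f, Af(0) = 0 (forward Euler)
    Af[i] = Af[i-1] + dt * (-alpha * Af[i-1] + f[i-1])
print(Af.max(), "<=", 1.0 / alpha)      # stays below 1/alpha = 0.5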
The feedback interconnection of G1 and G2 can be written as
(1): e1 = u1 − y2, e2 = u2 + y1, y1 = G1e1, y2 = G2e2,
or, equivalently, as
(2): y1 = G1(u1 − y2), y2 = G2(u2 + y1), i.e., e1 = u1 − G2e2, e2 = u2 + G1e1.
Theorem
Take (1) with (G1x)(t) := ∫_0^t G(t, τ)n1(τ, x(τ))dτ and (G2x)(t) := n2(t, x(t)).
Here, G(·, ·) and ni : ℝ_+ × ℝ^n → ℝ^n, i = 1, 2, are continuous, with ni(·, 0) = 0 and
ni(t, ·) Ki-Lipschitz continuous.
Then, for i = 1, 2, Gi : L^n_pe → L^n_pe and,
∀u1, u2 ∈ L^n_pe, ∃1 (e1, e2, y1, y2) ∈ L^n_pe s.t. (1) holds.
Definition 3 - (2) is Lp-stable if,
∀u1, u2 ∈ L^n_p, the y1, y2 s.t. (2) holds are in L^n_p, and
∃k, b s.t., for i = 1, 2, ‖yi‖_p ≤ k(‖u1‖_p + ‖u2‖_p) + b, whenever u1, u2, y1, y2 are s.t. (2)
holds.
(2): e(t) = u(t) − ∫_0^t e^{A(t−τ)}y(τ)dτ, y(t) = f(t, e(t)).
Then, if (2) is L2-stable, the equilibrium 0 of (1) is GAS (Globally Asymptotically
Stable).
Observation: Input/Output techniques yield either GAS or nothing!
It is difficult to estimate regions of attraction.
Ā = TAT⁻¹, B̄ = TB, and C̄ = CT⁻¹,
where the matrix T represents a linear change of phase coordinates.
A map z = Φ(x) is a nonlinear change of phase coordinates if it is a (global or local)
diffeomorphism, i.e., invertible with both Φ and Φ⁻¹ smooth.
Exercise:
a) Show that det(∂Φ/∂x) ≠ 0 at x0 implies that Φ is a local diffeomorphism.
b) Check that f̄ = ((∂Φ/∂x) f) ◦ Φ⁻¹(z), ḡ = ((∂Φ/∂x) g) ◦ Φ⁻¹(z), and h̄ = h ◦ Φ⁻¹(z).
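The formulas in b) can be verified symbolically on a toy example; in the sketch below (sympy assumed; the vector field f and the map Φ are illustrative choices) the claimed expression for f̄ is checked against differentiating z = Φ(x) along the flow.

import sympy as sp

x1, x2, z1, z2 = sp.symbols('x1 x2 z1 z2')
x = sp.Matrix([x1, x2])

f = sp.Matrix([x2, -sp.sin(x1)])          # example vector field
Phi = sp.Matrix([x1, x2 + x1**2])         # a global diffeomorphism of R^2
Phi_inv = sp.Matrix([z1, z2 - z1**2])     # its inverse, computed by hand

J = Phi.jacobian(x)                                        # dPhi/dx
f_bar = (J * f).subs({x1: Phi_inv[0], x2: Phi_inv[1]})     # ((dPhi/dx) f) o Phi^{-1}

lhs = J * f                                    # z_dot expressed in the x coordinates
rhs = f_bar.subs({z1: Phi[0], z2: Phi[1]})     # f_bar evaluated at z = Phi(x)
print(sp.simplify(lhs - rhs))                  # zero vector: the two expressions agree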
Distributions
Application of the Frobenius theorem: an easy way to solve the p.d.e. (∂λi/∂x) F(x) = 0,
i = 1, . . . , n − d.
Procedure:
(1) Complete ∆ with additional n − d independent vector fields.
(2) Solve the o.d.e.s ẋ = fi(x) with x(0) = x0, yielding xi(t) = Φ^{fi}_t(x0), t ∈ [0, zi], in
U_ε = {z ∈ ℝ^n : zi < ε}.
(3) Take Ψ : U_ε → ℝ^n, Ψ(z) := Φ^{f1}_{z1} ◦ · · · ◦ Φ^{fn}_{zn}(x0).