Compendium Solutions NLcontrol
Created: 1998
Latest update: December 8, 2020
Introduction
The exercises are divided into problem areas that roughly match the lecture
schedule. Exercises marked “PhD” are harder than the rest. Some exercises
require a computer with software such as Matlab and Simulink.
Many people have contributed to the material in this compendium. Apart
from the authors, exercises have been suggested by Lennart Andersson, An-
ders Robertsson and Magnus Gäfvert. Exercises have also shamelessly been
borrowed (=stolen) from other sources, mainly from Karl Johan Åström’s
compendium in Nonlinear Control.
Exercises marked with (H) have hints available, listed in the end of each
chapter.
1. Nonlinear Models and
Simulation
Exercise 1.1 [Khalil, 1996]
The nonlinear dynamic equation for a pendulum is given by
ml θ̈ = −mg sin θ − kl θ̇,
where l > 0 is the length of the pendulum, m > 0 is the mass, k > 0 is a
friction parameter and θ is the angle subtended by the rod and the vertical
axis through the pivot point, see Figure 1.1.
(a) Choose appropriate state variables and write down the state equations.
(b) Find all equilibria of the system.
(c) Linearize the system around the equilibrium points, and determine if
the system equilibria are locally asymptotically stable.
Exercise 1.2
The equations for a single link manipulator with a flexible joint are given by
I q̈1 + MgL sin q1 + k(q1 − q2) = 0
J q̈2 − k(q1 − q2) = u.
Choose state variables and write down the state equations.

Exercise 1.3
A synchronous generator can be modeled by the equations
M δ̈ = P − D δ̇ − η1 Eq sin δ
τ Ėq = −η2 Eq + η3 cos δ + EFD.
Choose state variables and write down the state equations.
Exercise 1.4
Figure 1.3 The feedback connection in Exercise 1.4: a linear system C(sI − A)−1 B with reference r, input u and output y, and nonlinear feedback ψ(t, ·).
(a) Write the closed loop system on state-space form.
(b) Rewrite the pendulum model from Exercise 1.1 into the feedback connection form above.
Exercise 1.5
Figure 1.4 A phase-locked loop: phase detector sin(·), filter G(s) and integrator 1/s, with input phase θi and output phase θ0.
With (A, B, C) a state-space realization of G(s) and e = θi − θ0, the loop in Figure 1.4 can be described by
ż = Az + B sin e
ė = −C z

Exercise 1.6
Figure 1.5 Position control of a mass 1/(ms) under PID control GPID, with friction force F(v) acting on the velocity v; the integrator 1/s gives the position x, and xr is the position reference.
Figure 1.5 shows a block diagram of a mechanical system with friction under
PID control. The friction block is given by
F (v) = F0 sign(v)
Let xr = 0 and rewrite the system equations into feedback connection form
(i.e. a linear system in feedback with a nonlinear system).
Exercise 1.7
Figure 1.6 Anti-windup scheme with feedforward filter Gff, process Gp, feedback filter Gfb and anti-windup compensation Gaw; signals: reference r, controller output u, saturated control v and process output y.
Figure 1.6 illustrates one approach to avoid integrator windup. Rewrite the
system into feedback connection form.
Exercise 1.8
Consider the model of a motor with a nonlinear valve in Figure 1.7. Assume
that the valve characteristic is given by f(x) = x².
(a) Choose state variables and write down the state equations.
(b) For a constant input r > 0, determine the equilibrium points and investigate their local stability.
Exercise 1.9
Is the following system (a controlled nonlinear spring) nonlinear locally con-
trollable around x = ẋ = u = 0?
ẍ = − k 1 x − k 2 x3 + u.
Exercise 1.10 PhD
The equations for the unicycle in Figure 1.8 are given by
Figure 1.8 The “unicycle” used in Exercise 1.10.
ẋ = u1 cos θ
ẏ = u1 sin θ
θ˙ = u2 ,
where ( x, y) is the position and θ the angle of the wheel. Is the system nonlin-
ear locally controllable at (0, 0, 0)? (Hint: Linearization gives no information;
use the definition directly).
Exercise 1.11 PhD
The system in Figure 1.9 is known as the “rolling penny”. The equations are
Figure 1.9 The “rolling penny” used in Exercise 1.11.
given by
ẋ = u1 cos θ
ẏ = u1 sin θ
θ˙ = u2
Ψ̇ = u1 .
Exercise 1.12
Determine if the following system is nonlinear locally controllable at ( x0 , u0 ) =
(0, 0)
Exercise 1.13
Simulate the system G(s) = 1/(s + 1) with a sinusoidal input u = sin ω t.
Find the amplitude of the stationary output for ω = 0.5, 1, 2. Compare with
the theoretical value |G(iω)| = 1/√(1 + ω²).
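A minimal Matlab sketch of this experiment (assuming the Control System Toolbox function lsim; the time range, and the choice to measure the amplitude only over the last part of the simulation, are arbitrary):

% Simulate G(s) = 1/(s+1) driven by u = sin(w t) and read off the
% stationary amplitude after the transient has died out.
G = tf(1, [1 1]);
t = 0:0.01:50;
for w = [0.5 1 2]
    y = lsim(G, sin(w*t), t);
    A = max(abs(y(t > 40)));          % ignore the initial transient
    fprintf('w = %4.1f  amplitude = %5.3f  1/sqrt(1+w^2) = %5.3f\n', ...
        w, A, 1/sqrt(1 + w^2));
end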
Exercise 1.14
Consider the pendulum model given in Exercise 1.1.
(a) Make a simulation model of the system in Simulink, using for instance
m = 1, g = 10, l = 1, k = 0.1. Simulate the system from various initial
states. Is the system stable? Is the equilibrium point unique? Explain
the physical intuition behind your findings.
(b) Use the function linmod in Matlab to find the linearized models for the
equilibrium points. Compare with the linearizations that you derived
in Exercise 1.1.
(c) Use a phase plane tool (such as pplane or pptool, links at the course
homepage) to construct the phase plane of the system. Compare with
the results from (a).
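If Simulink is not at hand, the same model can be simulated directly with ode45 (a sketch; the parameter values are those suggested in (a), with g = 10):

% Pendulum from Exercise 1.1 with x1 = theta, x2 = thetadot.
m = 1; g = 10; l = 1; k = 0.1;
f = @(t, x) [x(2); -g/l*sin(x(1)) - k/m*x(2)];
[t, x] = ode45(f, [0 20], [2; 0]);    % start at 2 rad, at rest
plot(t, x(:,1)), xlabel('t'), ylabel('theta')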
Exercise 1.15
Simulate the example from the lecture with two tanks, using the models
ḣ = (u − q)/ A
q = a√(2gh),
where h is the liquid level, u is the inflow to the tank, q the outflow, A the
cross section area of the tank, a the area of the outflow and g the acceleration
due to gravity, see Figure 1.10. Use a step input flow. Make a step change
in u from u = 0 to u = c, where c is chosen in order to give a stationary
value of the heights, h 1 = h 2 = 0.1. Make a step change from u = c to
u = 0. Is the process linear? Linearize the system around h 1 = h 2 = 0.1.
Use A1 = A2 = 3 · 10−3, a1 = a2 = 7 · 10−6.
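A simulation sketch of the double tank, assuming the two tanks are identical and connected in series (the outflow of the upper tank feeds the lower one) and g = 9.81:

% Double tank: h(1) upper level, h(2) lower level.
A = 3e-3; a = 7e-6; g = 9.81;
q = @(h) a*sqrt(2*g*max(h, 0));               % outflow, guarded against h < 0
f = @(h, u) [(u - q(h(1)))/A; (q(h(1)) - q(h(2)))/A];
c = q(0.1);                                   % stationary inflow giving h1 = h2 = 0.1
[t1, h1] = ode45(@(t,h) f(h, c), [0 2000], [0; 0]);       % step 0 -> c
[t2, h2] = ode45(@(t,h) f(h, 0), [0 2000], h1(end,:)');   % step c -> 0
plot([t1; t1(end)+t2], [h1; h2]), xlabel('t'), ylabel('h')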
Figure 1.10 Simulink model of the tank system: Sum, Gain (1/A), Integrator and Fcn blocks computing h and q in each tank subsystem.
Exercise 1.16
Simulate the system with the the oscillating pivot point (the “electric hand-
saw”), see Figure 1.11. Use the equation
θ̈(t) = (1/l)(g + aω² sin ωt) sin θ(t).
Assume a = 0.02m and ω = 2π · 50 for a hand-saw. Use simulation to find for
what length l the system is locally stable around θ = θ˙ = 0 (Note: asymptotic
stability is not required).
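The search over l can be done numerically (a sketch; g = 9.81, and “max |θ| stays small” is used as a crude stability indicator):

% Pendulum with oscillating pivot ("electric handsaw").
a = 0.02; w = 2*pi*50; g = 9.81;
opts = odeset('RelTol', 1e-6);
for l = [0.02 0.05 0.5 1 1.9 2.5]
    f = @(t, x) [x(2); (g + a*w^2*sin(w*t))/l*sin(x(1))];
    [t, x] = ode45(f, [0 2], [0.1; 0], opts);   % small initial angle
    fprintf('l = %4.2f   max |theta| = %7.3f\n', l, max(abs(x(:,1))));
end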
Exercise 1.17
The Lorenz equations
d/dt x1 = σ(x2 − x1)
d/dt x2 = r x1 − x2 − x1 x3
d/dt x3 = x1 x2 − b x3,      σ, r, b > 0,
where σ, r and b are constants, are often used as an example of chaotic motion.
(a) Determine all equilibrium points.
(b) Linearize the equations around x = 0 and determine for what σ , r, b
this equilibrium is locally asymptotically stable.
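Part (b) can also be checked numerically for particular parameter values (a sketch; σ = 10 and b = 8/3 are just example values):

% Linearization of the Lorenz equations around x = 0.
sigma = 10; b = 8/3;
for r = [0.5 1.5]
    A = [-sigma sigma 0; r -1 0; 0 0 -b];
    fprintf('r = %3.1f   max Re(eig(A)) = %6.3f\n', r, max(real(eig(A))));
end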
Hints
Exercise 1.6
The nonlinear system in feedback with the friction block takes − F as input
and produces V . To find the linear system, treat − F as input and V as
output.
2. Linearization and
Phase-Plane Analysis
Exercise 2.1 [Khalil, 1996] (H)
For each of the following systems, find and classify all equilibrium points.
(a) ẋ1 = x2
ẋ2 = − x1 + x31 /6 − x2
(b) ẋ1 = − x1 + x2
ẋ2 = 0.1x1 − 2x2 − x21 − 0.1x31
(d) ẋ1 = x2
ẋ2 = − x1 + x2 (1 − 3x21 − 2x22 )
(e) ẋ1 = − x1 + x2 (1 + x1 )
ẋ2 = − x1 (1 + x1 )
Exercise 2.2
Find and classify all equilibrium points of the system
ẋ1 = ax1 − x1 x2
ẋ2 = bx1² − cx2,
where a, b and c are positive constants.
Exercise 2.3
(a) ẋ1 = x2
ẋ2 = x1 − 2 tan−1 ( x1 + x2 )
(b) ẋ1 = x2
ẋ2 = − x1 + x2 (1 − 3x21 − 2x22 )
Exercise 2.4
Saturations constitute a severe restriction for stabilization of system. Fig-
ure 2.1 shows three phase portraits, each corresponding to one of the follow-
ing linear systems under saturated feedback control.
(a) ẋ1 = x2
ẋ2 = x1 + x2 − sat(2x1 + 2x2 )
(b) ẋ1 = x2
ẋ2 = − x1 + 2x2 − sat(3x2 )
(c) ẋ1 = x2
ẋ2 = −2x1 − 2x2 − sat(− x1 − x2 )
Figure 2.1 Phase portraits for saturated linear systems in Exercise 2.4
Exercise 2.5
(a) ẋ1 = −x2
ẋ2 = x1 − x2 (1 − x21 + 0.1x41 )
(b) ẋ1 = x2
ẋ2 = x1 + x2 − 3 arctan( x1 + x2 )
Figure 2.2 Phase portraits for Exercise 2.5(a) to the left, and Exercise 2.5(b) to the
right.
Exercise 2.6
The following system
u = − Ky
(a) For all values of the gain K , determine the equilibrium points of the
closed loop system.
(b) Determine the equilibrium character of the origin for all values of the
parameter K . Determine in particular for what values the closed loop
system is (locally) asymptotically stable.
Exercise 2.7
Consider the reduced model of the synchronous generator from Exercise 1.3,
ẋ1 = x2
ẋ2 = P/M − (D/M) x2 − (η1/M) Eq sin x1.
Find and classify all equilibrium points.
Exercise 2.9
Linearize the ball-on-beam equation
(7/5) ẍ − x φ̇² = g sin φ + (2r/5) φ̈,
around the trajectory
(φ(t), x(t)) = (φ0, (5g/14) sin(φ0) · t²).
Exercise 2.10
Use a simple trigonometry identity to help find a nominal solution corre-
sponding to u( t) = sin (3t), y(0) = 0, ẏ(0) = 1 for the equation
ÿ + (4/3) y³(t) = −(1/3) u(t).
Exercise 2.11
The equations for motion of a child on a swing are given by
d/dt (ml² φ̇) + mgl sin φ = 0
Here φ ( t) is the angle of the swing, m the mass, and l( t) the distance of the
child to the pivot of the swing. The child can excite the swing by changing
l( t) by moving its center of mass.
(a) Draw phase diagrams for two different constant lenghts l 1 and l 2 .
(b) Assume that it is possible to quickly change between the lenghts l 1 and
l 2 . Show how to jump between the two different systems to increase the
amplitude of the swing.
Hint: During constant l the energy in the system is constant. When l(t) changes
quickly, φ will be continuous but φ̇(t) will change in such a way that the
angular momentum ml² φ̇ is continuous.
Hints
Exercise 2.1 Set ẋ1 = ẋ2 = 0 and find necessary conditions on the stationary
points by considering the simplest equation. Use this in the other equation.
Exercise 2.5 Note that the sign of x2 determines the sign of ẋ1 .
x1 = r cos(θ )
x2 = r sin(θ )
with r ≥ 0.
3. Lyapunov Stability
Exercise 3.1
Consider the scalar system
ẋ = ax³.
(a) Show that Lyapunov’s linearization method fails to determine stability of the origin.
(b) Use the Lyapunov function
V(x) = x⁴
to show that the system is globally asymptotically stable for a < 0.
(c) What can you say about the system for a = 0?
Exercise 3.2
Consider the pendulum equation with mass m and length l.
ẋ1 = x2
ẋ2 = −(g/l) sin x1 − (k/m) x2.
(a) Assume zero friction, (i.e. let k = 0), and that the mass of the pendu-
lum is concentrated at the tip. Show that the origin is stable by
showing that the energy of the pendulum is constant along all system
trajectories.
(b) Show that the pendulum energy alone cannot be used to show asymp-
totic stability of the origin for the pendulum with non-zero friction,
k > 0. Then use LaSalle’s invariance principle to prove that the origin
is asymptotically stable.
Exercise 3.3
Consider the system
ẍ + d ẋ3 + kx = 0,
where d > 0 and k > 0. Show that
V(x, ẋ) = (kx² + ẋ²)/2
is a Lyapunov function. Is the system locally stable, locally asymptotically
stable, and globally asymptotically stable?
Exercise 3.4
Consider the linear system
ẋ = Ax = [0  −1; 1  −1] x
(a) Compute the eigenvalues of A and verify that the system is asymptoti-
cally stable
(b) From the lectures, we know that an equivalent characterization of
stability can be obtained by considering the Lyapunov equation
AT P + P A = − Q
(ii) Solve the Lyapunov equation with Q as the identity matrix. Is the
solution P a positive definite matrix?
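A quick numerical check of (ii), using the system matrix given above (a sketch; lyap from the Control System Toolbox solves M X + X Mᵀ = −Q, so A is transposed below):

% Solve A'P + PA = -I and check that P is positive definite.
A = [0 -1; 1 -1];
P = lyap(A', eye(2))
eig(A)      % eigenvalues of the system matrix
eig(P)      % both positive  =>  P > 0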
ẋ = [−1  e^(2t); 0  −1] x,    t ≥ 0.
Exercise 3.6
A student is confronted with the nonlinear differential equation
ẍ + 2x/(1 + x²)² = 0
and is asked to determine whether or not the equation is stable. The student
thinks “this is an undamped mass-spring system – the spring is nonlinear with
a spring constant of 2/(1 + x²)²”. The student rewrites the system as
ẋ1 = x2
ẋ2 = −2x1/(1 + x1²)²
Exercise 3.7 (H)
Consider the system
ẋ1 = 4x1² x2 − f1(x1)(x1² + 2x2² − 4)
ẋ2 = −2x1³ − f2(x2)(x1² + 2x2² − 4),
where the continuous functions f1 and f2 have the same sign as their argu-
ments, i.e. xi fi(xi) > 0 if xi ≠ 0, and fi(0) = 0.
(a) Find all equilibrium points of the system. Hint: after putting the time
derivatives to zero, form a linear combination of the two equations to
conclude that either x21 + 2x22 − 4 = 0, or x1 = x2 = 0.
(b) Show that the set E = {x : x1² + 2x2² = 4} is an invariant set.
(c) Show that almost all trajectories of the system tend towards the invari-
ant set E.
(d) Is E a limit cycle?
Extra: Simulate the system.
(Remark. Compare with Example 3.13 in the book by Slotine and Li.)
Exercise 3.8
Consider the system
ẋ1 = x2
ẋ2 = −2x1 − 2x2 − 4x31 .
Use the function
V ( x) = 4x21 + 2x22 + 4x41
to show that
(a) the system is globally stable around the origin.
(b) the origin is globally asymptotically stable.
Exercise 3.9
Consider the system
ÿ = sat(−3 ẏ − 2y).
(a) Show that y(t) → 0 as t → ∞.
(b) For PhD students. Is it possible to prove global asymptotic stability
using a Lyapunov function V ( x) that satisfies
α ‖x‖²₂ ≤ V(x) ≤ β ‖x‖²₂,    V̇(x) ≤ −γ ‖x‖²₂
for some positive scalars α and β ?
(c) For PhD students. Consider the system
ẍ = u
and show that all feedback laws u = k 1 x + k 2 ẋ that give an asymptoti-
cally stable system, also give an asymptotically stable system when the
actuator saturates, i.e., when
ẍ = sat(u).
(d) For PhD students. Do the results in (c) hold for the triple integrator
d3 x
= sat(u)? (3.2)
dt3
ẋ1 = x2
ẋ2 = − x1 − max(0, x1 ) · max(0, x2 )
Exercise 3.11
Consider the nonlinear system
ẋ1 = − x1 + x2
ẋ2 = −x1 − x2 + g(x)
(a) Show that V(x) = 0.5 xT x is a Lyapunov function for the system when
g(x) = 0.
(b) Use this Lyapunov function to show that the system is globally asymp-
totically stable for all g(x) that satisfy
g(x) = g(x2)
sign(g(x2)) = −sign(x2)
(c) Let g(x) = x2³. This term does not satisfy the conditions in (b). However,
we can apply Lyapunov’s linearization method to show that the origin
is still locally asymptotically stable.
For large initial values, on the other hand, simulations reveal that the
system is unstable. It would therefore be interesting to find the set of
“safe” initial values, such that all trajectories that start in this set tend
to the origin. This set is called the region of attraction of the origin. We
will now illustrate how quadratic Lyapunov functions can be used to
estimate the region of attraction.
(i) Show that V̇(x) < 0 for |x2| < 1. This means that V(x) decreases
for all solutions that are confined in the strip |x2(t)| < 1 for all t.
(ii) Recall that level sets for the Lyapunov function are invariant.
Thus, solutions that start inside a proper level set remain there
for all future times. Conclude that the region of attraction can be
estimated as the largest level set
Ω = { x : V ( x) ≤ γ }
Exercise 3.12
ẋ1 = −x2
ẋ2 = x1 + (x1² − 1) x2.
Exercise 3.13
ẋ1 = x2
ẋ2 = x1 − sat(2x1 + x2).
x1 x2 = c
Exercise 3.14
Consider the system
ẋ = f(x, u),   x ∈ IRn,   u ∈ IR
ẋ = f ( x, u) = φ ( x) + ψ ( x)u,
A P + P AT − bbT < 0.
(Hint. Some LQR theory may come in handy when proving necessity. In
particular, if the system is stabilizable, what can you say about the
feedback law u = −kx that you obtain from the LQR cost ∫₀^∞ (xT x + uT u) dt?)
Exercise 3.15
It can sometimes be convenient to re-write nonlinearities in a way that is
more easy to manipulate. Consider the single input, open loop stable, linear
system under saturated feedback
ẋ = Ax + Bsat(u)
u = − K x.
ẋ = Ax + µ( x) B K x,
where 0 < µ( x) ≤ 1.
(b) Assume P > 0 is such that
xT ( AT P + P A) x ≤ 0, ∀ x
xT (( A − B K )T P + P ( A − B K )) x ≤ 0, ∀ x
guarantees the closed loop system in (a) to be stable. (The nice thing
about this formulation is that it is possible to construct efficient nu-
merical methods for simultaneously finding both feedback gains K and
Lyapunov matrix P).
ẋ = Ax + f ( x) + Bsat(u)
u = − K x.
kf < λmin(Q) / (2 λmax(P)),
Exercise 3.16
In general, it is non-trivial to find a Lyapunov function for a given nonlinear
system. Several different methods have been derived for specific classes of
systems. In this exercise, we will investigate the following method, known as
Krasovskii’s method.
Consider systems on the form
ẋ = f(x),   f(0) = 0,
and suppose there exists a matrix P = PT > 0 such that
P (∂f/∂x)(x) + (∂f/∂x)T(x) P ≤ −I
for all x ∈ IRn. Then, the origin is globally
asymptotically stable with V ( x) = f T ( x) P f ( x) as Lyapunov function.
Prove the validity of the method in the following steps.
(a) Verify that f ( x) can be written as
f(x) = ∫₀¹ (∂f/∂x)(σx) · x dσ.
xT P f ( x) + f T ( x) P x ≤ − xT x, ∀ x ∈ IRn
Exercise 3.17
Use Krasovskii’s method to justify Lyapunov’s linearization method.
Exercise 3.18
Consider the feedback system in the figure, consisting of a gain K, a nonlinearity g(e) and the linear blocks 1/(s + 1) and 1/s, with states x1 and
x2 as indicated in the figure. Assume that the reference value is zero. The
system equations can then be written as
ẋ1 = x2
ẋ2 = −x2 + K g(e) = −x2 + K g(−x1).
Hints
Exercise 3.7
b) Show that if x(T ) ∈ E then x21 ( t) + 2x22 ( t) = 4 for all t ≥ T.
c) Define a function V ( x) such that V = 0 on E and V ( x) > 0 if x ∈ / E, and
start by showing that V̇ ≤ 0.
4. Input-Output Stability
Exercise 4.1
The norms used in the definitions of stability need not be the usual Euclidian
norm. If the state-space is of finite dimension n (i.e., the state vector has n
components), stability and its type are independent of the choice of norm
(all norms are “equivalent”), although a particular choice of norm may make
analysis easier. For n = 2, draw the unit balls corresponding to the following
norms.
(a) ‖x‖² = x1² + x2² (Euclidean norm)
(b) ‖x‖² = x1² + 5x2²
(c) ‖x‖ = |x1| + |x2|
(d) ‖x‖ = sup(|x1|, |x2|)
Recall that a “ball” B(x0, R), of center x0 and radius R, is the set of x such
that ‖x − x0‖ ≤ R, and that the unit ball is B(0, 1).
Exercise 4.2
Figure 4.1 Feedback connection of a linear system G and a static nonlinearity ψ(·), with external inputs r1, r2 and outputs y1, y2.
Exercise 4.3
Consider the static nonlinearities shown in Figure 4.2. For each nonlinearity,
(a) determine the minimal sector [α, β ],
(b) determine the gain of the nonlinearity,
(c) determine if the nonlinearity is passive.
Exercise 4.4
The Nyquist curve of the system
G(s) = 4/((s + 1)(s/2 + 1)(s/3 + 1))
is shown in Figure 4.3 together with a circle with center in 1.5 and with
radius 2.85.
(a) Determine the maximal stability sector of the form (−α, α).
(b) Use the circle in the figure to determine another stability sector.
(c) What is the maximal stability sector of the form (0, β )?
Exercise 4.5
The Nyquist curve of the system
G(s) = 4/((s − 1)(s/3 + 1)(s/5 + 1))
is shown in Figure 4.4.
Exercise 4.6
(a)
G(s) = 1/((s + 1)(s + 2))
(b)
G(s) = s/(s² − s + 1)
Hint for (b): Here, G(s) is not stable. See lecture slides about the circle
criterion for an unstable system G(s).
Figure 4.5 Nyquist curves for the system in Exercise 4.6a (above) and Bode and
Nyquist curve for the system in Exercise 4.6b (below)
Exercise 4.7 (H)
Consider the system
ẋ(t) = (A + Bδ(t)C) x,
where
G(s) = C (s I − A)−1 B
Figure 4.6 Nyquist curves for transfer function G(s) in Exercise 4.7.
(d) For PhD students. Let G(s) be a transfer function matrix with m in-
puts and n outputs. Show that if A is Hurwitz, ‖Δ(t)‖ ≤ 1 ∀t, and
sup_ω σmax[C(jωI − A)−1 B] < 1, then the system is BIBO stable.
Exercise 4.8
The singular values of a matrix A are denoted σi ( A).
(a) Use Matlab to compute σ ( A) for
A = [1  10; 0  1].
σ1(A) = sup_x ‖Ax‖ / ‖x‖.
Exercise 4.9
In the previous chapter, we have seen how we can use Lyapunov functions to
prove stability of systems. In this exercise, we shall see how another type of
auxiliary functions, called storage functions, can be used to assess passivity
of a system.
Consider the nonlinear system
ẋ = f ( x, u)
y = ( x, u) (4.1)
with zero initial conditions, x(0) = 0. Show that if we can find a storage
function V ( x, u) with the following properties
• V ( x, u) is continuously differentiable.
• V(0) = 0 and V(x, u) ≥ 0 for x ≠ 0.
• uT y ≥ V̇ ( x, u).
then, the system (4.1) is passive.
Exercise 4.10
Let P be the solution to
AT P + P A = − I,
where A is an asymptotically stable matrix. Show that G(s) = BT P (s I −
A)−1 B is passive. (Hint. Use the function V ( x) = xT P x.)
Exercise 4.11
A DC motor can be described by
θ̇ = ω
ω̇ = −ω + η,
where θ is the shaft angle and η is the input voltage. The dynamic
controller
ż = 2(θ − z) − sat(θ − z)
η = z − 2θ
is used to control the shaft position. Use any method you like to prove
that θ ( t) and ω ( t) converge to zero as t → ∞.
Exercise 4.12
(a) Let uc ( t) be an arbitrary function of time and let H (·) be a passive
system. Show that the system
y(t) = uc(t) · H(uc(t) u(t))
is passive from u to y.
(b) Show that the following adaptive system is stable
e(t) = G(s){(θ(t) − θ0) uc(t)}
θ̇(t) = −γ uc(t) e(t),
Exercise 4.13 PhD
Let f be a static nonlinearity in the sector (0, ∞).
Figure 4.7 The loop transformation used in the Popov criterion, with the blocks 1/(1 + γs), f(·) and (1 + γs)G(s).
(c) How does the Popov criterion change if f is in the sector (α, β ) instead?
(d) Figure 4.8 shows the Nyquist curve and the Popov curve (Re G(iω ), ω Im G(iω ))
for the system
G(s) = (s + 1)/(s(s + 0.1)(s² + 0.5s + 9)).
Determine a stability sector (0, β ) using the line in the figure.
Figure 4.8 Nyquist (dash-dot) and Popov curve (solid) for the system in Exer-
cise 4.13d. The Popov curve is to the right of the dashed line for all ω .
Hints
Exercise 4.7
b) Use the definition of L 2 -norm in the lecture slides to show that γ (ψ ) ≤ 1
by showing
‖ψ(y)‖₂ ≤ ‖δ‖∞ ‖y‖₂ ≤ ‖y‖₂
and then apply the appropriate theorem.
5. Describing Function
Analysis, Limit Cycles
Exercise 5.1 (H)
Match each of the odd, static nonlinearities in Figure 5.1 with one of the
describing functions in Figure 5.2.
Figure 5.1 The odd, static nonlinearities a, b, c and d used in Exercise 5.1.
Figure 5.2 Describing functions N ( A) as a function of A for the odd, static nonlin-
earities in Exercise 5.1.
Exercise 5.2
Compute the describing functions for
(a) the saturation,
(b) the deadzone, and
(c) the piece-wise linear function
in Figure 5.3. (Hint: Use (a) in (b) and (c).)
Figure 5.3 The saturation, the deadzone and the piece-wise linear nonlinearity in Exercise 5.2 (parameters D, H, α and β).
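A computed describing function can always be sanity-checked numerically from its definition as the first Fourier coefficient of f(A sin φ). A sketch for the saturation, with D = H = 1:

% Numerical describing function of a unit saturation.
D = 1; H = 1; f = @(x) max(min(x, H), -H);
A = linspace(0.1, 10, 200);  N = zeros(size(A));
phi = linspace(0, 2*pi, 2000);
for k = 1:length(A)
    y = f(A(k)*sin(phi));
    N(k) = trapz(phi, y.*sin(phi))/(pi*A(k));   % b1/A; a1 = 0 for an odd f
end
plot(A, N), xlabel('A'), ylabel('N(A)')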
Exercise 5.3
Show that the describing function for a relay with hysteresis in Figure 5.4
satisfies
−1/N(A) = −(πA/(4H)) ( (1 − (D/A)²)^(1/2) + i D/A ).
Figure 5.4 The relay with hysteresis (width 2D, amplitude H) and the corresponding curve −1/N(A).
Exercise 5.4
If the describing function for the static nonlinearity f ( x) is YN (C ), then
show that the describing function for D f ( x/ D) equals YN (C / D), where D is
a constant.
Exercise 5.5
Show that all odd, static nonlinearities f such that
df(x)/dx > 0,     d²f(x)/dx² > 0
for x > 0, have a real describing function Ψ(·) that satisfies the inequalities
Ψ(a) < f(a)/a,    a > 0.
Exercise 5.6
Compute the describing function for the nonlinearity
f(x) = k1 x + k2 x² + k3 x³.
Exercise 5.7 (H)
Figure 5.5 shows a saturation in feedback with a linear system G(s).
(a) Assess intuitively the possibility of a limit cycle, by assuming that the
system is started at some small initial state, and notice that the system
can neither stay small (because of instability) nor at saturation values
(by applying the final value theorem of linear control).
(b) Use the describing function method to predict whether the system
exhibits a limit cycle. In such cases, determine the frequency and am-
plitude of the limit cycle. The describing function of a saturation is
plotted in Figure 5.6.
(c) Use the extended Nyquist criterion to assess whether the limit cycle is
stable or unstable.
Figure 5.6 The describing function N(A) of the saturation in Exercise 5.7.
Exercise 5.8
Consider a servo motor with transfer function
G0(s) = 4/(s(s + 1)(s + 2)),
controlled by the relay with dead-zone a shown in Figure 5.7.
(a) Show that the describing function for the relay with dead-zone a is
given by
N(A) = 0,                         A < a
N(A) = (4/(πA)) √(1 − a²/A²),     A ≥ a
Hint: cos(arcsin(x)) = √(1 − x²) for x ∈ [−1, 1].
(b) How should the parameter a be chosen so that the describing function
method predicts that sustained oscillations are avoided in the closed
loop system?
Exercise 5.9
The Ziegler-Nichols frequency response method suggest PID parameters
based on a system’s ultimate gain K u and ultimate period Tu according
to the following table. The method provides a convenient method for tuning
PID controllers, since K u and Tu can be estimated through simple experi-
ments. Once K u and Tu have been determined, the controller parameters are
directly given by the formulas above.
Parameter Value
K 0.6 K u
Ti 0.5Tu
Td 0.125Tu
Figure 5.8 Relay feedback of the process G(s), with reference r, error e, relay output u and process output y.
(a) Show that the parameters K u and Tu can be determined from the sus-
tained oscillations that may occur in the process under relay feedback.
Use the describing function method to give a formula for computing K u
and Tu based on oscillation data. (amplitude A and angular frequency
ω of the oscillation). Let the relay amplitude be D.
Recall that the ultimate gain and ultimate period are defined in the
following way. Let G(s) be the system’s transfer function, and ωu be the
frequency where the transfer function has a phase lag of −180
degrees. Then we have
Tu = 2π/ωu
Ku = 1/|G(iωu)|
(b) What parameters would the relay method give for the process
50
G(s) =
s(s + 1)(s + 10)
which is simulated in Figure 5.9 with D = 1? Compare what you obtain
from analytical computations (Ku = 2.20, Tu = 1.99).
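The analytical values of Ku and Tu can be reproduced with a few lines of Matlab (a sketch using margin from the Control System Toolbox):

% Ultimate gain and period of G(s) = 50/(s(s+1)(s+10)).
G = tf(50, conv([1 0], conv([1 1], [1 10])));
[Gm, ~, wcg, ~] = margin(G);   % wcg: frequency where the phase is -180 deg
Ku = Gm                        % approximately 2.20
Tu = 2*pi/wcg                  % approximately 1.99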
r k y
τ1 s+1 τ2 s+1
τ2 s+1 τ1 s+1
α
τ1 > τ2
Exercise 5.12 PhD
Show that the system
G(s) = 1/(s(s + 1)²)
Exercise 5.13 PhD
Consider a linear system with relay feedback:
ẋ = Ax + Bu,
y = C x,
u = −sgn y,
C e Ah
δh = δ z.
C ( Az + B)
g(z + δz) = −z + (I − (Az + B)C/(C(Az + B))) e^(Ah) δz.
We have now shown that the Jacobian of the Poincaré map for a linear
system with relay feedback is equal to the matrix
(I − (Az + B)C/(C(Az + B))) e^(Ah).
The limit cycle is locally stable if and only if this matrix has all eigen-
values in the unit disc.
C(sI − A)−1 B = 1/(s + 1)³.
(I − (Az + B)C/(C(Az + B))) e^(Ah).
Hints
Exercise 5.1
Use the interpretation of the describing function N ( A) as “equivalent gain”
for sinusoidal inputs with amplitude A.
Exercise 5.7
b and c) Given G(s) = Q(s)/P(s), you can split the frequency response into a real
part and an imaginary part as:
6. Anti-windup, Friction,
Backlash, Quantization
Exercise 6.1
Figure 6.1 (a) shows a controller in polynomial form, R(s)u = T (s)uc −
S (s) y, where u is the control signal, y the measurement variable, and uc
the reference signal. Figure (b) shows an antiwindup scheme for the same
controller. Assume that the anti-windup controller is controlling a process
given by the transfer function A(s) y = B(s)u. Also, put uc = 0.
Figure 6.1 The controller in polynomial form (a) and the anti-windup scheme (b), built from the blocks T, −S, 1/R, 1/Aaw and Aaw − R, with signals uc, u, v and y.
Exercise 6.2
The following model for friction is described in a PhD thesis by Henrik Olsson:
dz/dt = v − (|v|/g(v)) z
F = σ0 z + σ1(v) dz/dt + Fv v,
where σ0, Fv are positive constants and g(v) and σ1(v) are positive functions
of velocity.
(a) For non-zero constant velocity, determine the stationary value of z and
its stability.
(b) What friction force does the model give in stationarity for non-zero
constant velocity?
(c) Prove that if 0 < g(v) ≤ a and |z(0)| ≤ a then
|z(t)| ≤ a,    t ≥ 0
Exercise 6.3
Derive the describing function (v input, F output) for
(a) Coulomb friction, F = F0 sign (v)
(b) Coulomb + linear viscous friction F = F0 sign (v) + Fv v
(c) as in b) but with stiction for v = 0.
Exercise 6.4
In Lecture 7 we have studied an adaptive friction compensation scheme for
the process (assuming m = 1)
ẋ = v
v̇ = − F + u
F = sign(v).
The friction compensator is given by
F̂ = (zF + KF |v̂|) sign(v̂)
żF = −KF (u − F̂) sign(v̂)
v̂ = zv + Kv x
żv = −F̂ + u − Kv v̂.
The estimation errors are
ev = v − v̂
eF = F − F̂.
For t such that v̂(t) ≠ 0, show that the state equations for the estimation
errors are given by
[ ėv(t) ; ėF(t) ] = [ −Kv   −1 ; −Kv KF   0 ] [ ev(t) ; eF(t) ]
Exercise 6.5
(a) What conclusion does describing function analysis give for the system
in Figure 6.2
(b) Show that the describing function for quantization is given by
N(A) = 0,    A < D/2
N(A) = (4D/(πA)) Σ_{i=1}^{n} √(1 − ((2i − 1)D/(2A))²),    (2n − 1)D/2 < A < (2n + 1)D/2
Exercise 6.6
Show that a saturation is a passive element.
Exercise 6.7
Consider the mass-spring system with dry friction
ÿ + c ẏ + ky + η( y, ẏ) = 0
where η is defined as
Construct the phase portrait and discuss its qualitative behavior. (Hint: Start
by sketching the behavior for ẏ > 0 and ẏ < 0. Then discuss what happens
at ẏ = 0)
Exercise 6.8
Exercise 6.9
For PhD students. Show that the antiwindup scheme in observer form is
equivalent to the antiwindup scheme in polynomial form with A e equal to
the observer polynomial (see CCS for definitions).
Exercise 6.10
For PhD students. Show that the equilibrium point of an unstable linear
system preceded with a saturation can not be made globally asymptotically
stable with any control law.
7. Nonlinear Controller
Design
Exercise 7.1
In some cases, the main nonlinearity of a system can be isolated to a static
nonlinearity on the input. This is, for example, the case when a linear process
is controlled using an actuator with a nonlinear characteristic. A simple design
methodology is then to design a controller C (s) for the linear process and
cancel the effect of the actuator nonlinearity by feeding the computed control
through the inverse of the actuator nonlinearity, see Figure 7.1. Compute the
inverse of the following actuator nonlinearities:
(a) f(v) = v²,   v ≥ 0
(b) A piecewise linear function
with k 1 , k 2 ≥ 0.
Use your result to derive the inverse of the important special case of a
dead zone.
(c) A backlash nonlinearity.
Exercise 7.2
An important class of nonlinear systems can be written on the form
ẋ1 = x2
ẋ2 = x3
..
.
ẋn = f ( x) + ( x)u
u = h( x, v)
that renders the closed loop system from the new input v to the state
linear. What conditions do you have to impose on f ( x) and ( x) in order
to make the procedure well posed?
(b) Apply this procedure to design a feedback for the inverted pendulum
ẋ1 = x2
ẋ2 = a sin( x1 ) + b cos( x2 )u
that makes the closed loop system behave as a linear system with a
double pole in s = −1. Is the control well defined for all x? Can you
explain this intuitively?
(c) One drawback with the above procedure is that it is very sensitive to
modelling errors. Show that this is the case by designing a linearizing
feedback for the system
ẋ = x2 + u
that makes the closed loop system linear with a pole in −1. Apply the
suggested control to the system
ẋ = (1 + ε ) x2 + u
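The sensitivity in (c) is easy to see in simulation (a sketch: the nominal linearizing feedback u = −x² − x, which gives ẋ = −x when ε = 0, is applied to the perturbed system):

% True closed loop: xdot = (1+eps)x^2 - x^2 - x = eps*x^2 - x.
% Trajectories starting beyond x = 1/eps escape to infinity in finite time.
e = 0.1;
f = @(t, x) (1 + e)*x.^2 - x.^2 - x;
[t1, x1] = ode45(f, [0 10], 5);      % 5  < 1/e = 10 : converges to 0
[t2, x2] = ode45(f, [0 2.2], 11);    % 11 > 1/e      : escapes (near t = 2.4)
plot(t1, x1, t2, x2), xlabel('t'), ylabel('x')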
Exercise 7.3
Consider a linear system
ẋ1 = ax2 + bu
ẋ2 = x1
(a) One of the design parameters in the design of a sliding mode controller
is the choice of sliding set. Which of the following sliding sets will result
in a stable sliding mode for the above system?
(i) σ ( x) = 2x1 − x2
(ii) σ ( x) = x1 + 2x2
(iii) σ ( x) = x1
(b) Let the sliding mode be σ ( x) = x1 + x2 . Construct a sliding mode
controller for the system.
(c) How large variations in the parameters a and b can the controller
designed in (b) tolerate in order to still guarantee a stable closed loop
system?
Exercise 7.4
Consider concentration control for a fluid that flows through a pipe, with no
mixing, and through a tank, with perfect mixing. A schematic diagram of
the process is shown in Figure 7.2 (left). The concentration at the inlet of
the pipe is c in ( t). Let the pipe volume be Vd and let the tank volume be Vm .
Furthermore, let the flow be q and let the concentration in the tank at the
outlet be c( t). A mass balance gives
dc( t)
Vm = q(cin ( t − L) − c( t))
dt
where L = Vd /q.
Vd k
0.63k
c in
Time
Vm a
c
L T
(a) Show that for fixed q, the system from input c in to output c can be
reprented by a linear transfer function
K
G(s) = e−sL
sT + 1
Step response
0.5
Amplitude
0
−0.5
−1
0 2 4 6 8 10
Time [s]
Controller Kp Ti Td
P 1/ a
PI 0.9/ a 3L
PID 1.2/ a 2L L/2
Exercise 7.5
Consider the pendulum
ẋ1 = x2
ẋ2 = −(mgl/Jp) sin(x1) − (ml/Jp) cos(x1) u
A general hint for this exercise: Maple and Matlab Symbolic toolbox are
handy when dealing with long equations!
(a) Denote the total energy of the pendulum by E and determine the value
E 0 corresponding to the pendulum standing in the upright position.
(b) Investigate whether the control strategy
u = k( E ( x) − E 0 )sign( x2 cos( x1 ))
Exercise 7.6
Consider the system
ẋ1 = x1 + u
ẋ2 = x1
y = x2
u = −2x1 − sign( x1 + x2 )
Exercise 7.7
Minimize ∫₀¹ (x²(t) + u²(t)) dt when
ẋ( t) = u( t)
x(0) = 1
x(1) = 0
Exercise 7.8
Neglecting air resistance and the curvature of the earth the launching of a
satellite is described with the following equations
ẋ1 = x3
ẋ2 = x4
ẋ3 = (F/m) cos u
ẋ4 = (F/m) sin u − g
Here x1 is the horizontal and x2 the vertical coordinate and x3 and x4 are the
corresponding velocities. The signal u is the controlled angle. The criterion
is to maximize 0.1x1 + x2 + 5x3 + 3x4 at the end point. Show that the optimal
control signal has the form
tan u = (At + B)/(Ct + D)
and determine A, B, C, D.
Exercise 7.9
Suppose more realistically that m and F vary. Let F = u2 ( t) be a control
signal with limitations
0 ≤ u2 ( t) ≤ umax
and let the mass m = x5 ( t) vary as
ẋ5 = −γ u2
Show that
tan u1 = λ4/λ3   and   u2 = umax for σ < 0,   u2 = 0 for σ > 0,   u2 = ⋆ for σ = 0,
where ⋆ means that the solution is unknown. Determine equations for λ and
σ . (You do not have to solve these equations).
Exercise 7.10
Consider the system
ẋ1 = x2
ẋ2 = −x1 − x2³ + (1 + x1) u
ẋ1 = f1 ( x1 , x2 , λ1 , λ2 )
ẋ2 = f2 ( x1 , x2 , λ1 , λ2 )
λ̇1 = f3 ( x1 , x2 , λ1 , λ2 )
λ̇2 = f4 ( x1 , x2 , λ1 , λ2 )
Exercise 7.11
Consider the double integrator
ẋ1 = x2
ẋ2 = u, pup ≤ 1
u = umax,    σ(x) > 0
u = −umax,   σ(x) < 0
Draw a phase portrait of the closed loop system under the optimal control.
Exercise 7.12
Consider the problem of controlling the double integrator
I
ẋ1 = x2
ẋ2 = u, pup ≤ 1
from an arbitrary intitial condition x(0) to the origin so that the criterion
Z tf
(1 + pup) dt
0
is minimized (t f is the first time so that x( t f ) = 0). Show that all extremals
are of the form
−1 0 ≤ t ≤ t 1
u( t) = 0 t1 ≤ t ≤ t2
1 t2 ≤ t ≤ t f
or
1
0 ≤ t ≤ t1
u( t) = 0 t1 ≤ t ≤ t2
−1 t 2 ≤ t ≤ t f
for some t 1 , t 2 with 0 ≤ t 1 ≤ t 2 ≤ t f . Some time interval can have the length
0. Assume that the problem is normal.
Exercise 7.13
Consider the system
ẋ = [−5  2; −6  2] x + [0; 1] u
from x0 = 0 to x(tf) = [1; 1] in minimum time with |u(t)| ≤ 3. Show that
the optimal controller is either
I
−3 0 ≤ t ≤ t 1
u( t) =
+3 t 1 ≤ t ≤ t f
or I
+3 0 ≤ t ≤ t 1
u( t) =
−3 t 1 ≤ t ≤ t f
for some t 1 .
Exercise 7.14
ẋ = Ax + Bu,   |u| ≤ 1,   x(tf) = 0,
u(t) = −sign(CT e^(−At) B)
for some vector C. What does this say about the optimal input when A =
B = 1?
Exercise 7.15
What is the conclusion from the maximum principle for the problem
min ∫₀¹ u dt,
ẋ1 = u
x1 (0) = 0
x1 (1) = 1
Explain.
Exercise 7.16
Consider the control system
ẍ − 2( ẋ)2 + x = u − 1 (7.1)
Exercise 7.17
This problem will use the Lyapunov method for design of a control signal
which will stabilize a system. Consider the system
ẋ1 = − x1 + x2 + x3 · tan( x1 )
ẋ2 = − x32 − x1 (7.2)
ẋ3 = x22 +u
V(x) = (1/2) x1² + (1/2) x2²
Exercise 7.19
In this problem we are going to examine how to stabilize a system using a
bounded control signal u = sat5 (v), i.e.,
u(v) = 5 for v ≥ 5,   u(v) = v for −5 ≤ v ≤ 5,   u(v) = −5 for v ≤ −5.
Your task is to choose the control signal v = v( x1 , x2 ), such that the sys-
tem (7.3)
ẋ1 = x1 x2
ẋ2 = u (7.3)
u = sat5 (v)
Va = x21 + x22
Exercise 7.20
Consider the system
ẋ1 = x2 − x1
ẋ2 = kx21 − x2 + u (7.4)
Exercise 7.21
Consider the system
ẋ1 = x21 + x2
ẋ2 = u
Exercise 7.22
Consider the system
Exercise 7.23
Consider the following nonlinear system:
ẋ1 = x1 + x2
ẋ2 = sin( x1 − x2 ) + u
Exercise 7.24
Consider the following nonlinear system:
Exercise 7.25
Consider the following nonlinear system:
ẋ1 = x1 + x2
ẋ2 = sin( x1 − x2 ) + x3
ẋ3 = u
Design a controller based on back-stepping for the system. You do not need
to substitute back to x1 , x2 , x3 in the computed controller.
Exercise 7.26
Consider the discrete time system
x k+1 = f k ( x k , u k ), k = 0, 1, . . . , N − 1
where
4 ( x) = x2
3 ( x, u) = x2 + u2
2 ( x, u) = x2 + 3u2
1 ( x, u) = x2 + 7u2
0 ( x, u) = x2 + 15u2
Exercise 7.27
Consider the nonlinear optimal control problem
minimize ∫₀¹ (x(t)u(t))² dt + x(1)²,
subject to ẋ( t) = x( t)u( t), x(0) = 1.
Hints
Exercise 7.5
Use a Lyapunov function argument with V ( x) = ( E ( x) − E 0 )2 .
Exercise 7.6
Use V ( x) = σ 2 ( x)/2.
Exercise 7.14
You will most likely need the following relations. If
y = e^A x  ⇔  x = e^(−A) y
and
(e^A)^T = e^(A^T)
Exercise 7.17
Use V(x1, x2, x3) = (1/2)(x1² + x2² + x3²).
Exercise 7.18
Use the Lyapunov function candidate from (b).
Solutions to Chapter 1
Solution 1.1
(a) Choose the angular position and velocity as state variables, i.e., let
x1 = θ
x2 = θ˙
We obtain
ẋ1 = x2
ẋ2 = −(g/l) sin(x1) − (k/m) x2
(b) The equilibrium points are given by
0 = x2
0 = −(g/l) sin(x1) − (k/m) x2,
i.e., x2 = 0 and x1 = nπ for any integer n.
(c) The linearization around the equilibrium (nπ, 0) is
d/dt Δx = [ 0   1 ; −(g/l)(−1)ⁿ   −k/m ] Δx    (7.5)
The linearized system is stable for even n, and unstable for odd n. We
can use Lyapunov’s linearization method to conclude that the pendulum
is LAS around the lower equilibrium point, and unstable around the
upper equilibrium point.
Solution 1.2
We choose angular positions and velocities as state variables. Letting x1 = q 1 ,
x2 = q̇ 1 , x3 = q 2 , x4 = q̇ 2 , we obtain
ẋ1 = x2
ẋ2 = −(MgL/I) sin x1 − (k/I)(x1 − x3)
ẋ3 = x4
ẋ4 = (k/J)(x1 − x3) + (1/J) u
Solution 1.3
(a) Let x1 = δ , x2 = δ˙, x3 = E q and u = E F D . We obtain
ẋ1 = x2
ẋ2 = P/M − (D/M) x2 − (η1/M) x3 sin x1
ẋ3 = −(η2/τ) x3 + (η3/τ) cos x1 + (1/τ) u
(b) With Eq treated as a constant, the model reduces to
ẋ1 = x2
ẋ2 = P/M − (D/M) x2 − (η1/M) Eq sin x1
Solution 1.4
(a) Let
ẋ = Ax + Bu, y = Cx
u = r − ψ ( t, y) = r − ψ ( t, C x)
and hence
ẋ = Ax − Bψ ( t, C x) + Br, y = Cx
(b) To separate the linear dynamics from the nonlinearities, write the
pendulum state equations as
ẋ1 = x2
ẋ2 = −(k/m) x2 − (g/l) sin(x1),
which can be written as
ẋ = [0  1; 0  −k/m] x + [0; g/l] u    (7.6)
y = [1 0] x    (7.7)
u = −sin(y),    (7.8)
Solution 1.5
(a) Hint: ė = − y = −C z.
(b) Assume ż = ė = 0, i.e.
0 = Az + B sin( e)
0 = −C z
z = − A−1 B sin( e)
But since G(0) was assumed to be non-zero, this means that sin( e) = 0
and hence e = nπ for any integer n. With this choice of e, the upper
equation reduces to 0 = Az and hence z = 0, since A was invertible.
What we have shown is that if there exist equilibrium points, they have
to be on the form ( z, e) = (0, nπ ). Simply verifying that ż = ė = 0
for these points shows that the equilibrium points are indeed given by
( z, e) = (0, nπ ) for any integer n!
Alternative solution: The equilibrium points are given by ż = 0 and
ė = 0. In steady-state (at an equilibrium point) the amplification of the
transfer function is G(0). Denote the steady state error eo = θ i − θ 0 . If
this should be constant it means that θ 0 is constant (see block diagram
of Fig.1.4) and thus θ˙0 = 0, which is the same signal as the output
y =[
0 = G(0) sin eo
eo = ±nπ, n = 0, 1, 2, . . .
(c) For G(s) = 1/(τ s + 1), we take A = −1/τ , B = 1/τ and C = 1. Then
ż = −(1/τ) z + (1/τ) sin e
ė = −z
With x1 = e and x2 = −z this can be written as
ẋ1 = x2
ẋ2 = −(1/τ) x2 − (1/τ) sin x1,
Solution 1.6
Let G P I D (s) be the transfer function for the PID controller. In order to find
the feedback interconnection form, the first step is to define the input and
output of the non-linearity. In this case F is the output and v is the input.
This implies, that the required linear system Gl has as its input F and as
its output v. By the help of the block-diagram one finds
s
Gl (s) =
ms2 + G P I D (s)
Hence, the whole system is a feedback interconnection of the linear system
Gl (s) and the non-linearity F (v). Observe, the feedback interconnection form
is usually defined such that, the linear part receives a negative feedback from
the non-linearity.
Solution 1.7
The requested form
is obtained with
Gl(s) = (Gfb Gp − Gaw)/(1 + Gaw).
Solution 1.8
ẋ1 = x2
ẋ2 = −2x2 − x1 + f ( x3 )
ẋ3 = r − x1
(b) For a constant input r the equilibrium point is given by x = (r, 0, ±√r).
The linearization for x = (r, 0, √r) has
A = [0  1  0; −1  −2  2√r; −1  0  0].
The characteristic equation is given by
λ²(λ + 2) + 2√r + λ = 0.
Solution 1.9
The linearization is given by
ẍ = − k 1 x + u,
Solution 1.10
The linearized system is not controllable. The system is however nonlinear
locally controllable. This can be seen directly from the definition as follows:
We must show that we can drive the system from (0, 0, 0) to a near by
state ( xT , yT , θ T ) using small control signals u1 and u2 . By the sequence
u = (u1 , u2 ) = (0, ε1 ),u = (ε1 , 0),u = (0, −ε1 ), u = (−ε2 , 0) (or in words: "turn
left, forward, turn right, backwards") one can move to the state (0, yT , 0).
Then apply (ε3 , 0) and then (0, ε4 ) to end up in ( xT , yT , θ T ). For any time
T > 0 this movement can be done with small εi if xT , yT and θ T are small.
Solution 1.11
Same solution as in 1.10, except that you have to find a movement afterwards
that changes Ψ without changing the other states. This can be done by
the sequence: L-F-R-B-R-F-L-B where F=forward, B=backwards, L=turn left,
R=turn right.
Solution 1.12
The linearized system at ( x0 , u0 ) is
ẋ1 = u
ẋ2 = x1
Wc = [1  0; 0  1]
has full rank. Since the linearized system is controllable the nonlinear system
is also locally controllable at ( x0 , u0 ).
Solution 1.13
See lecture slides. Why does max(abs(y(:))) not give the correct stationary
output amplitude?.
Solution 1.14
(a) See Fig 7.4 for an example of an implementation in matlab. Note that
the outputs have been names x1 and x2 . This can make the linearization
in the next task be easier to interpret
(b) linmod(’pendulum’,[0,0]), if the file is named pendulum.slx. The lin-
earization should be the same as in 1.1. However x1 could either be θ
or θ˙ and vice versa for x2 .
Solution 1.15
See lecture slides.
Solution 1.16
With a = 0.02 and w = 100π we get local stability for l ∈ [0.044, 1.9].
Solution 1.17
(a) x = 0, and if r > 1 also x = (√(b(r − 1)), √(b(r − 1)), r − 1) and
x = (−√(b(r − 1)), −√(b(r − 1)), r − 1).
(b) The linearization around x = 0 is
ẋ = [−σ  σ  0; r  −1  0; 0  0  −b] x
Solutions to Chapter 2
Solution 2.1
(a) The equilibrium points are
√ √
( x1 , x2 ) = (0, 0), ( 6, 0), (− 6, 0),
which are stable node, saddle point, and stable focus, respectively.
(c) The equilibrium points are
( x1 , x2 ) = (0, 0),
( x1 , x2 ) = (0, 0),
x21 + x22 = 1
( x1 , x2 ) = (0, 0),
Solution 2.2
The three equilibrium points are
q q
( x1 , x2 ) = (0, 0), ( ( ac/b), a), (− ( ac/b), a).
The first equilibrium point is a saddle. The other equilibria are stable nodes
if 8a ≤ c and stable focuses if 8a > c.
Solution 2.3
(a) The system has three equilibrium points
a
a − tan( ) = 0
2
Solution 2.4
Close to the origin, the saturation element opens in the linear region, and
all system are assigned the same closed loop dynamics. Far away from the
origin, the influence of the saturated control can be neglected, and the open
loop dynamics governs the behaviour.
(a) System (a) has one stable and one unstable eigenvalue. For initial val-
ues close to the stable eigenvector, the state will move towards the
origin. For initial values close to the unstable eigenvector, the system
diverges towards infinity. This corresponds to the rightmost phase por-
trait.
(b) All eigenvalues of system (b) are unstable. Thus, for initial values
sufficiently far from the origin, the system state will diverge. This
corresponds to the leftmost phase portrait. Note how the region of
attraction (the set of initial states, for which the state converges to the
origin) is severely limited.
(c) System (c) is stable also in open loop. This corresponds to the phase
portrait in the middle.
Solution 2.5
(a) From the state equations we see that the system has the origin as
a unique equilibrium point. To determine the direction of the arrow
heads we note that if x2 > 0 then ẋ1 < 0, and if x2 < 0 then ẋ1 > 0.
Hence, x1 moves to the left in the upper half plane, and to the right in
the lower half plane. After marking the arrow heads in the plots we see
that the origin is a stable focus. This can be determined by inspection
of the vector fields. We also see that the system has two limit cycles.
The inner one is unstable and the outer one is stable.
Solution 2.6
(a) The equilibrium points are obtained by setting ẋ = 0. For K ,= −2, the
origin is the unique equilibrium point. When K = −2, the line x1 = 2x2
is an equilibrium set.
(b) The Jacobian is given by
−1 − K
5 6
f
(0) =
x 1 −2
with eigenvalues
r
3 1
λ=− ± − K.
2 4
Thus, the closed loop system is asymptotically stable about the origin
for K > −2. Depending on the value of K , we can origin has the
following character
1
<K stable focus
4
1
−2 < K < stable node
4
K < −2 saddle.
Solution 2.7
The equilibria are given by sin x01 = P
η Eq , x02 = 0. The characteristic equation
for the linearization becomes
λ2 + αλ + β = 0,
Solution 2.8
(a) Just plug into the system dynamics.
(b) To determine stability of the limit cycle, we introduce polar coordinates.
With r ≥ 0:
x1 = r cos(θ )
x2 = r sin(θ )
cos(θ ) −r sin(θ )
3 4 3 43 4
ẋ1 ṙ
=
ẋ2 sin(θ ) r cos(θ ) θ˙
1 r cos(θ ) r sin(θ )
3 4 3 43 4
ṙ ẋ1
=
θ˙ r − sin(θ ) cos(θ ) ẋ2
ṙ = r(1 − r2 ) (7.9)
θ˙ = −1 (7.10)
We see that the the only equilibrium points to (7.9) are 0 and 1 (since
r ≥ 0). Linearizing around r = 1 (i.e. the limit cycle) gives:
r̃˙ = −2r̃
Then
Therefore
1
Ω= x : ≤ pp xpp2 ≤ 2
) *
2
is a compact invariant set. Let
E = x ∈ Ω : V̇ ( x) = 0
) *
E = x : pp xpp2 = 1
) *
ẋ( t) = f ( x( t))
f ( x0 ( t))
f ( x( t)) ( f ( x0 ( t)) + ( x( t) − x0 ( t))
x | {z }
x̃( t)
f ( x0 ( t))
ẋ( t) ( f ( x0 ( t)) + x̃( t)
x
So
f ( x0 ( t))
ẋ0 ( t) + x̃˙ ( t) = f ( x0 ( t)) + x̃( t)
x
In subproblem a) we showed that x0 ( t) is a solution to the system, i.e.
ẋ0 ( t) = f ( x0 ( t)) and thus
f ( x0 ( t))
x̃˙( t) = x̃ = A( t) x̃( t)
x
where
A f1( x0 ( t)) f1 ( x0 ( t))
−2 sin2 ( t)
B
1 − sin(2t)
3 4
x1 x2
A( t) = = .
f2 ( x0 ( t)) f2 ( x0 ( t)) −1 − sin(2t) −2 cos2 ( t)
x1 x2
Solution 2.9
7 2r
x̃¨ = cos(φ 0 )φ˜ + φ¨˜
5 5
Solution 2.10
Using the identity
3 1
(sin t)3 = sin t − sin 3t
4 4
we see that u0 ( t) = sin (3t), y0 ( t) = sin t is a nominal solution. The
linearization is given by
1
ỹ¨ + 4 sin2 t · ỹ = − ũ.
3
Solution 2.11
No solution yet.
Solutions to Chapter 3
Solution 3.1
(a) Linearization about the system around the origin yields
f
A= = 3ax2
x
Thus, at the origin we have A = 0. Since the linearization has one
eigenvalue on the imaginary axis, linearization fails to determine sta-
bility of the origin.
(b) V (0) = 0, V ( x) ,= 0 for x ,= 0, and V ( x) → ∞ as x → ∞. Thus, V ( x)
satisfies the conditions for being a Lyapunov function candidate. Its
time derivative is
V
V̇ ( x) = f ( x) = 4ax6 (7.12)
x
which is negative definite for a < 0. The desired result now follows
from Lyapunov’s global asymptotic stability theorem.
(c) For a = 0, the system is linear and given by
ẋ = 0
The system has solutions x( t) = x0 for all t. Thus, the system is stable.
A similar conclusion can be drawn from the Lyapunov function used in
(b).
Solution 3.2
(a) Since x2 is angular velocity, the speed of the pendulum tip is given by
lx2 . Since we assume that all mass is concentrated at the tip the kinetic
energy of the pendulum is
ml2 x22
.
2
The potential energy is given by m h, where h is the vertical position of
the pendulum relative to some reference level. We choose this reference
level by letting h = 0 when x1 = 0 (i.e pendulum in downward position).
h can expressed as
h = l(1 − cos( x1 ))
ml2 2
V ( x) = m l(1 − cos( x1 )) + x
2 2
We use V as a candidate Lyapunov function. We see that V is positive
for x ,= 0 and V (0) = 0, and compute the time derivative
dV ( x) X V
= ẋi = m l sin( x1 ) x2 + x2 (−m l sin( x1 )) = 0
dt xi
i
dV ( x)
= − kl2 x22
dt
E = ( x1 , x2 ) p V̇ ( x) = 0 = {( x1 , x2 ) p x2 = 0}
) *
ẋ2 = − sin x1 ,= 0, p x1 p ≤ 0.9π
l
Thus the largest invariant set in E is {0}. (Note that since we are
considering local asymptotic stability, it is sufficient to consider p x1 p ≤
π − ε for any sufficiently small positive ε.) By LaSalle’s invariance
principle we conclude that x → 0.
Solution 3.3
With V = kx2 /2 + ẋ2 /2 we get V̇ = − d ẋ4 ≤ 0. Since V̇ = 0 only when
ẋ = 0 and the system equation then gives ẍ = − kx ,= 0 unless also x = 0,
we conclude that x = ẋ = 0 is the only invariant set. The origin is globally
asymptotically stable since the Lyapunov function is radially unbounded.
Solution 3.4
√
(a) The eigenvalues of A are λ = −1/2 ± i 3/2.
(b) (i) We have
0 1 p 11 p 12 p 11 p 12 0 −1 −1 0
5 65 6 5 65 6 5 6
+ =
−1 −1 p 12 p 22 p 12 p 22 1 −1 0 −1
Solution 3.6
(a) The mistake is that V is not radially unbounded. The student has
forgotten to check that lim x→∞ V ( x) = ∞. In fact,
V(x1, x2) = x1²/(1 + x1²) + (1/2) x2²
1 2 x21
V ( x) = x2 + = V ( x0 ) = c
2 1 + x21
x21
x22 = c − ≥ c−1
1 + x21
√
In this case, we have p x2 p ≥ c − 1. Since ẋ1 = x2 , it follows that
p x1 p → ∞ as t → ∞. Roughly speaking, if the system starts with more
initial stored energy than can possibly be stored as potential energy in
the spring, the trajectories will diverge.
Solution 3.7
Find the equilibrium points for the system.
(a) Adding equation (1) times x1 to equation (2) times 2x2 gives
ẋ1 = 4x21 x2
ẋ2 = −2x31 ,
d 2
( x + 2x22 − 4) = 2x1 ẋ1 + 4x2 ẋ2 = 2x31 4x2 + 4x2 (−2x31 ) = 0.
dt 1
Solution 3.8
Verify that V (0) = 0, V ( x) > 0 for x ,= 0 and V ( x) → ∞ for pp xpp → ∞. Now,
(a) We have
d
V ( x1 , x2 ) = 8x1 ẋ1 + 4x2 ẋ2 + 16x31 ẋ1 =
dt
= 8x1 x2 + 4x2 (−2x1 − 2x2 − 4x31 ) + 16x31 x2 =
= −8x22
which implies that if x2 should remain zero, then x1 has also to be zero.
The invariance theorem from the lectures can now be used to conclude
global asymptotic stability of the origin.
Solution 3.9
(a) Introduce the state vector x = ( x1 , x2 )T = ( y, ẏ)T . The system dynamics
can now be written as
ẋ1 = x2
ẋ2 = −sat(2x1 + 3x2 )
d 1
V ( x1 , x2 ) = x2 ẋ2 + sat(2x1 + 3x2 )(2x1 + 3x2 )
dt 2
3
= − (sat(2x1 + 3x2 ))2
2
≤0
ÿ = ±1.
Solution 3.10
ẋ1 = x2
ẋ2 = − x1
x1 ( t) = A cos( t) + B sin( t)
x2 ( t) = B cos( t) − A sin( t)
Solution 3.11
(a)
1 0
5 6
P = 0.5
0 1
solves the Lyapunov equation with Q as the identity matrix.
Alternative:
(b) We have
0
5 6
T T T
V̇ ( x) = x ( A P + P A) x + 2x P =
( x2 )
= − x21 − x22 + x2 ( x2 ) < 0
which is negative for x22 < 1. One might be tempted to consider the
whole strip
E = x : p x2 p < 1
) *
Figure 7.5 Trajectories of the nonlinear system and level surfaces of V ( x) = 0.5xT x.
The region of attraction is the unit circle.
not happen. Since the level sets 0.5( x21 + x22 ) = γ are circles, we conclude
that the largest level set is
1
Ω = { x : V ( x) < }.
2
Solution 3.12
(a) The origin is locally asymptotically stable, since the linearization
0 −1
5 6
d
x̃ = x̃
dt 1 −1
AT P + P A = − I,
1.5 −0.5
5 6
P= (7.13)
−0.5 1
x1 = r cos θ
x2 = r sin θ
We get
which is negative for r2 < 1/0.861. Using this, together with λmin ( P ) ≥
0.69, we choose
0.69
c = 0.8 < = 0.801
0.861
Solution 3.13
(a) For p2x1 + x2 p ≤ 1, we have
0 1
5 6
ẋ = x. (7.14)
−1 −1
The system matrix is ass. stable. Hence, the origin is locally asymptot-
ically stable.
(b) We have V ( x) > 0 in the first and third quadrant.
c2
V̇ ( x) = x21 − x1 + .
x21
Solution 3.14
(a) Use V as a Lyapunov function candidate and let u be generated by the
nonlinear state feedback
V
u = −( ψ ( x))
x
Solution 3.15
Use convexity wrt K .
Solution 3.16
f
(a) Integration of the equality d
dσ f (σ x) = x (σ x) · x gives the equation
Z1
f
f ( x) = (σ x) · x dσ .
0 x
We get
Z1 Z1 5 6T
T T f T T f
x P f ( x) + f ( x) P x = x P (σ x) xdσ + x (σ x) dσ P x
0 x 0 x
Z1 I 5 6T J
T f f
=x P (σ x) + (σ x) P dσ x ≤ − xT x
0 x x
(c) Suppose that f is bounded, i.e. that q f ( x)q ≤ c for all x. Then
q xT P f + f T P xq ≤ 2cq P qq xq.
Solution 3.17
Assume the linearization A = xf of f is asymptotically stable. Then the
equation
P A + AT P = − I,
R∞ T
has a solution P > 0. (To prove that P = 0
e A s e As ds > 0 is such a solution
integrate both sides of
d AT s As T T
e e = AT e A s e As + e A s e As A
ds
from 0 to ∞.) All conditions of Krasovskii’s method are then satisfied and we
conclude that the nonlinear system is asymptotically stable. The instability
result is harder.
Solution 3.18
The system is given by
ẋ1 = x2 =: f1
ẋ2 = − x2 + K ( e) = − x2 + K (− x1 ) =: f2 .
With
1 1
P=
1 2
we get V̇ ≤ 0 if 3 K x21 < 1. Hence the system is locally stable. Actually one
gets V̇ < 0 if 3 K x21 < 1 unless x1 = 0. The invariant set is x1 = x2 = 0. From
LaSalle’s theorem the origin is hence also locally asymptotically stable.
Solutions to Chapter 4
Solution 4.1
See the Figure 7.6.
Solution 4.2
(a) What are the restrictions that we must impose on the nonlinearities so
that we can apply the various stability theorems?
The Nyquist Criterion ψ ( y) must be a linear function of y, i.e., ψ ( y) =
k 1 y for some constant k 1 .
The Circle Criterion ψ ( y) must be contained in some sector [ k 1 , k 2 ].
Small Gain Theorem ψ ( y) should be contained in a symmetric sector
[− k 2 , k 2 ]. The gain of the nonlinearity is then less or equal to k 2 .
The Passivity Theorem states that one of the systems must be strictly
passive and the other one passive. Here we consider the case where
ψ is strictly passive. Let y = ψ (u). According to the definition in
the lecture notes a system is strictly passive if
for all u and T > 0 and some ε > 0. This requires ψ (0) = 0, and
since ψ is static the following must hold for any input u:
u 1 1
( − )2 ≤ 2 − 1
y 2ε 4ε
We first note that the inequality can only hold of ε ≤ 1/2, and
in the following we assume that ε ≤ 1/2. Let x = u/ y. Then the
inequality reads
1 1 − 4ε 2
( x − )2 ≤
2ε 4ε 2
And we note that it holds for
1
x= [ y = 2εu
2ε
u
x = 2ε [ y =
2ε
(4ε 2 − 1)2 ≤ 1 − 4ε 2
1
2ε ≤ x ≤
2ε
1 1
x− ≤0 ∀x ≤ .
2ε 2ε
1
2εu ≤ y(u) ≤ u
2ε
which gives that φ must belong to the sector [ε, ε1 ] for some small
ε>0
These conditions are illustrated in Figure 7.7.
(b) If the above restrictions hold, we get the following conditions on the
Nyquist curve
The Nyquist Criterion The Nyquist curve should not encircle the
point −1/ k 1 .
The Circle Criterion If 0 ≤ k 1 ≤ k 2 , the Nyquist curve should
neither encircle nor intersect the disc defined by −1/ k 2 , −1/ k 1 . If
k 1 < 0 < k 2 G should stay inside the disc.
Small Gain Theorem The Nyquist curve has to be contained in a disc
centered at the origin, with radius 1/ k 2 .
Solution 4.3
(a) The systems belong to the sectors [0, 1], [0, ∞] and [−1, ∞] respectively.
(b) Only the saturation nonlinearity (the leftmost nonlinearity) has finite
gain, which is equal to one. The other two nonlinearities have infinite
gain.
(c) The nonlinearity is passive if uy ≥ 0. That is if and only if the curve is
contained in the first and third quadrants. The saturation and the sign
nonlinearity are passive. The rightmost nonlinearity is not passive.
Solution 4.4
Since the linear part of the system is Hurwitz, we are free to use all versions
of the circle criterion.
(a) In order to guarantee stability of a nonlinearity belonging to a sym-
metric sector [−α, α], the Nyquist curve has to stay strictly inside a
disk centered at the origin with radius 1/α. We may, for instance, take
α = 0.25 − ε for some small ε > 0.
(b) The Nyquist curve lies inside the disk D(−1.35, 4.35). Thus, stabil-
ity can be guaranteed for all nonlinearities in the sector −0.23, 0.74.
(NOTE: The disk D( x1 , x2 ) is defined as the disk with diameter p x1 − x2 p
which crosses the real axis in x1 and x2 .)
(c) We must find β such that the Nyquist plot lies outside of a half-plane
Re(G(iω )) < −1/β . A rough estimate from the plot is β = 1.1.
Solution 4.5
The open loop system has one unstable pole, and we are restricted to apply
the first or fourth version of the circle criterion. In this example, we can place
a disk with center in −3 and with radius 0.75, and apply the first version of
the Nyquist criterion to conclude stability for all nonlinearities in the sector
[0.27, 0.44].
Solution 4.6
(a) The circle with k 1 = −2, k 2 = 7 does not intersect the Nyquist curve
(see Figure 7.9). Hence the sector (−2, 7) suffices. As always there are
many other circles that can be used (The lower limit can be traded
against the upper limit).
(b) The Nyquist diagram is a circle with midpoint at −0.5 and radius 0.5, see Figure 4.5. Since the open-loop system has two unstable poles, the Nyquist curve should encircle the disc twice. Choosing the circle that passes through −1/k1 = −1 + ε and −1/k2 = −ε, we conclude from the Bode diagram that the loop is stable for the sector [1/(1 − ε), 1/ε].
(One might think that the Bode diagram only indicates one encirclement. However, the Bode diagram is only for positive frequencies, and will be mirrored for negative frequencies, yielding two encirclements.)
Figure 7.9 The Nyquist curves for the system in Exercise 4.6a, and the circle
corresponding to k1 = −2, k2 = 7.
Solution 4.7
(a) Introduce y = C x and u = δ y = ψ ( y), then
ẋ = Ax + Bu
y = Cx
(b) ψ satisfies:
‖ψ(y)‖₂² = ∫₀^∞ |δ(t) y(t)|² dt = ∫₀^∞ |δ(t)|² |y(t)|² dt
≤ sup_t |δ(t)|² ∫₀^∞ |y(t)|² dt = sup_t |δ(t)|² ‖y‖₂² ≤ ‖y‖₂²
Solution 4.8
(a) >> A=[1 10; 0 1];svd(A)
ans =
10.0990
0.0990
(b)
σ1(AB) = sup_x ‖ABx‖/‖x‖ = sup_x (‖ABx‖/‖Bx‖ · ‖Bx‖/‖x‖)
       ≤ sup_y ‖Ay‖/‖y‖ · sup_x ‖Bx‖/‖x‖ = σ1(A) σ1(B)
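A quick numerical illustration of this submultiplicativity (not part of the original solution); B below is an arbitrary test matrix.

A = [1 10; 0 1];                 % matrix from part (a)
B = randn(2);                    % arbitrary test matrix (assumption)
sigma_AB = max(svd(A*B));
bound    = max(svd(A))*max(svd(B));
disp([sigma_AB bound])           % the first value never exceeds the second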
Solution 4.9
The proof follows directly from the definition of passivity, since, according to
the definition of a storage function
⟨u, y⟩_T = ∫₀^T uᵀy dt ≥ ∫₀^T V̇(x) dt = V(x(T)) − V(x(0)) = V(x(T))
Solution 4.10
The linear system G(s) corresponds to
ẋ = Ax + Bu, y = BT P x, x(0) = 0.
Let V = xT P x. Then
V̇ = ẋT P x + xT P ẋ
= xT ( AT P + P A) x + 2xT P Bu = − xT x + 2yT u ≤ 2yT u
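As a sketch with an assumed example system (not part of the original solution), the construction can be checked numerically: solve AᵀP + PA = −I with lyap and verify that G(iω) = BᵀP(iωI − A)⁻¹B has nonnegative real part.

A = [-1 1; 0 -2];  B = [0; 1];        % assumed stable A and input matrix B
P = lyap(A', eye(2));                 % solves A'*P + P*A + I = 0
G = ss(A, B, B'*P, 0);                % y = B'*P*x
w = logspace(-2, 2, 400);
re = real(squeeze(freqresp(G, w)));
min(re)                               % nonnegative (up to rounding errors)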
Solution 4.11
Write the system in state-space form:
dV/dt = x(−2x + sat(x) + u)
      = yu − 2x² + x sat(x) ≤ xu − x²
as
x² ≥ x sat(x) ≥ 0.
(a)
dV/dt ≤ yu − x² ≤ yu
This gives the block scheme in Figure 7.10. We see that the strictly passive system Σc with input ω and output x2 will be feedback connected to another subsystem, which consists of the DC motor with a local feedback with −θ (coming from one term of η). The transfer function of this subsystem will be
(1/(s + 1)) / (1 + 1/(s(s + 1))) = s/(s² + s + 1)
[Figure 7.10: block diagram of the feedback connection: Σc (input ω, output x2) in feedback with the DC-motor subsystem (blocks 1/(s + 1) and 1/s, local feedback gain −1, input η1).]
ẋ = Ax − Bψ(y) = [0 1 0; −2 −1 1; 2 0 −2] x − [0; 0; 1] sat([1 0 −1] x)
[Figure: Nyquist diagram, imaginary axis versus real axis.]
Solution 4.12
(a) We have
⟨y, u⟩ = ∫₀^T y(t)u(t) dt = ∫₀^T {u(t)uc(t)}{H(u(t)uc(t))} dt = ∫₀^T w(t)H(w(t)) dt = ⟨w, H(w)⟩
e(t) = G(p)θ uc(t)
θ̇(t) = −γ uc(t) e(t)
Solution 4.13
No solution yet.
Solutions to Chapter 5
Solution 5.1
Use the interpretation of the describing function as an "equivalent gain" and analyse the gain of each nonlinearity sectionally. We have 1-b, 2-c, 3-a, 4-d.
Solution 5.2
Denote the nonlinearity by f. For memoryless, static nonlinearities, the describing function does not depend on ω, and the describing function reduces to
N(A) = (b1(A) + i a1(A))/A
(a) First, we notice that the saturation is an odd function, which implies
that a 1 = 0. In order to simplify the computations of b1 , we set H = 1
and note that the saturation can be described as
f(A sin(φ)) = (A/D) sin(φ),  0 ≤ φ ≤ φl,
f(A sin(φ)) = 1,  φl < φ < π/2,
where φl = arcsin(D/A). This gives
N(A) = (2/(Dπ)) (φl + (D/A) cos(φl))
(b) Similar calculations give
N(A) = (H/D) [1 − (2/π)(φl + (D/A) cos(φl))]
(c) Noting that this nonlinearity can be written as the sum of the two
nonlinearities in (a) and (b), we arrive at the describing function
N(A) = (2(α − β)/π) (φl + (D/A) cos(φl)) + β.
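The formula in (a) can be checked numerically by computing the first Fourier coefficient of the saturation output; the sketch below uses assumed values D = 1, H = 1 and A = 2.

D = 1;  H = 1;  A = 2;                      % assumed values, A > D
phi = linspace(0, 2*pi, 20000);
y   = (H/D)*max(min(A*sin(phi), D), -D);    % saturation with slope H/D
b1  = trapz(phi, y.*sin(phi))/pi;           % first Fourier sine coefficient
phil = asin(D/A);
disp([b1/A, (2*H/(D*pi))*(phil + (D/A)*cos(phil))])   % the two values agree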
Solution 5.3
Let the input to the relay be
u( t) = A sin(ω t) = A sin(φ )
We obtain
N(A) = (4H/(πA)) (cos(φ0) − i sin(φ0)).
The identity cos(z) = √(1 − sin²(z)) gives the desired result.
Solution 5.4
Follows from the integration rule
∫ f(ax) dx = (1/a) F(ax),
where F(x) = ∫ f(x) dx.
Solution 5.5
We have
φ(x)/x < φ(a)/a,   x < a,
and thus
Φ(a) = (2/(aπ)) ∫₀^π φ(a sin(θ)) sin(θ) dθ
     < (2/(aπ)) ∫₀^π (φ(a)/a) a sin(θ) sin(θ) dθ
     = (2/(aπ)) φ(a) ∫₀^π sin²(θ) dθ = φ(a)/a
Solution 5.6
The describing function is
N(A) = k1 + 3A²k3/4
Note, however, that the output y(t) of the nonlinearity for the input e(t) = A sin(φ) is
y(t) = A²k2/2 + (k1A + 3A³k3/4) sin(φ) − (A²k2/2) cos(2φ) − (A³k3/4) sin(3φ)
We conclude that the term k2x² does not influence N(A). Still, we can not just apply the describing function method, since there is a bias term. If the linear system has integral action, the presence of a constant offset on the input will have a very big influence after some time.
Solution 5.7
(a) When the saturation works in the linear range, we have the closed loop
dynamics
G(s) = −5s/(s² + (1 − 5)s + 25)
which is unstable. Thus, the state cannot remain small. In saturation, on the other hand, the nonlinearity generates a constant (“step”) input to the system. The final value theorem then gives
lim_{t→∞} y(t) = lim_{s→0} −5s/(s² + s + 25) = 0
The describing function of the saturation is N(A) = (2/π)(φl + cos(φl)/A) with φl = arcsin(1/A). We know that limit cycles can only occur at saturation, so we only consider A ≥ 1. Then N(A) ∈ (0, 1] and −1/N(A) lies in the interval (−∞, −1].
The frequency response of the system is G(iω) = −5iω/(25 − ω² + iω), which intersects the negative real axis for ω′ = 5 rad/s. The value is G(iω′) = −5, so there will be an intersection. The frequency of the oscillation is estimated to 5 rad/s, and the amplitude is given by
−1/N(A) = G(iω′) = −5  ⟹  N(A) = 0.2
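The amplitude can also be obtained numerically from N(A) = 0.2, using the describing function of the unit saturation quoted above; this is only a sketch of the calculation.

N  = @(A) (2/pi)*(asin(1./A) + cos(asin(1./A))./A);   % unit saturation DF
A0 = fzero(@(A) N(A) - 0.2, [2 20])                   % amplitude estimate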
[Figure: Nyquist diagram of G(iω), imaginary axis versus real axis.]
Solution 5.8
(a) Introduce θ 0 = arcsin( a/ A) and proceed similarly to the saturation
nonlinearity.
(b) The describing function has maximum for
A* = √2 · a
which gives
N(A*) = 2/(πa)
The Nyquist curve crosses the negative real axis for ω = √2, for which the gain is G(i√2) = −2/3. Thus, we should expect no oscillations if
a > 4/(3π).
Solution 5.9
(a) The describing function for a relay with amplitude D is given by
N(A) = 4D/(πA)
−1/N(A) lies on the negative real axis. If the Nyquist curve intersects the negative real axis, the describing function method will predict a sustained oscillation, with amplitude A given by
−(4D/(πA)) |G(iωu)| = −1.
Thus, given the amplitude A of the oscillation, we estimate the ultimate gain as
Ku = 1/|G(iωu)| = 4D/(πA).
The ultimate period is the period time of the oscillations,
Tu = 2π/ωu.
(b) From the simulation, we estimate the amplitude A = 0.6, which gives Ku ≈ 2.12. The ultimate period can be estimated directly from the plot to be Tu ≈ 2. Note that the estimates correspond well with the analytical results (which require a full process model).
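For reference, a minimal computation of the estimates (assuming a relay amplitude D = 1, which is consistent with Ku ≈ 2.12 for A = 0.6):

D = 1;  A = 0.6;
Ku = 4*D/(pi*A)     % approximately 2.12
Tu = 2;             % read directly from the simulation plot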
Solution 5.10
No solution yet.
Solution 5.11
No solution yet.
Solution 5.12
No solution yet.
Solution 5.13
No solution yet.
Solutions to Chapter 6
Solution 6.1
We would like to write the system equations as
v = G(s)(−u)
u = φ (v)
Since the saturation element belongs to the sector [0, 1], we invoke the circle
criterion and conclude stability if the Nyquist curve of G(iω ) does not enter
the half plane Re(G(iω )) < −1. This gives the desired condition.
Solution 6.2
The model is given by
dz/dt = v − (|v|/g(v)) z      (7.19)
F = σ0 z + σ1 dz/dt + Fv v      (7.20)
(a) For constant velocity v ≠ 0, the stationary point z∗ is
z∗ = (g(v)/|v|) v = g(v) sign(v).
The error z − z∗ then obeys
d(z − z∗)/dt = −(|v|/g(v)) (z − z∗),
where g(v) ≥ 0.
(b) For any constant velocity v ≠ 0, (7.19) converges to z = g(v) sign(v), and F therefore converges to
F = σ0 g(v) sign(v) + Fv v.
With V = z² we get
V̇ = 2z(v − (|v|/g(v)) z)
  ≤ 2|z||v|(1 − |z|/g(v))
|z(t)| ≥ a
Ω = { z : zᵀz < a² }
zv = z dz/dt + (|v|/g(v)) z² ≥ z dz/dt = V̇(t)
F v = Fv v² + (σ1 ż + σ0 z)(ż + (|v|/g(v)) z)      (7.21)
    ≥ σ1 ż² + σ0 (|v|/g(v)) z² + (σ1 (|v|/g(v)) + σ0) z ż      (7.22)
F v ≥ σ0 z ż + σ1 ż² + σ0 (|v|/g(v)) z² + σ1 (|v|/g(v)) z ż      (7.23)
    = V̇ + σ1 (ż + (|v|/(2g(v))) z)² + (σ0 (|v|/g(v)) − σ1 (|v|/(2g(v)))²) z²      (7.24)
Hence F v ≥ V̇ if
σ0 − σ1 |v|/(4g(v)) > 0.
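A small simulation sketch of (7.19) for a constant velocity illustrates the convergence of z to g(v)sign(v). The function g(v) and all parameter values below are assumptions made only for the illustration.

sigma0 = 1e3;  sigma1 = 10;  Fv = 0.4;          % assumed parameters
g  = @(v) 0.1 + 0.05*exp(-(v/0.01).^2);         % assumed Stribeck-type g(v)
v  = 0.02;                                      % constant velocity
[t, z] = ode45(@(t, z) v - abs(v)/g(v)*z, [0 40], 0);
F = sigma0*z + sigma1*(v - abs(v)/g(v)*z) + Fv*v;   % friction force (7.20)
plot(t, z); hold on
plot(t, g(v)*sign(v)*ones(size(t)), '--')       % stationary value z* = g(v)sign(v)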
Solution 6.3
(a) The describing function for a relay has been derived in Lecture 6 to be
N(A) = 4F0/(πA)
(b) Using the superposition property of describing functions for static nonlinearities, Nf+g = Nf + Ng, and the fact that for a scalar gain y = ku the describing function is N(A) = k, we obtain
N(A) = Fv + 4F0/(πA)
Solution 6.4
The process is given by
ẋ = v
v̇ = − F + u
v̂ = zv + Kv x
żv = −F̂ + u − Kv v̂
F̂ = (zF + KF |v̂|) sign(v̂)
żF = −KF (u − F̂) sign(v̂)
ev = v − v̂
eF = F − F̂
ėv = v̇ − dv̂/dt = v̇ − żv − Kv ẋ = −F + u − (−F̂ + u − Kv v̂) − Kv v
   = −F + F̂ − Kv (v − v̂) = −eF − Kv ev
ėF = Ḟ − dF̂/dt = Ḟ − żF sign(v̂) − KF dv̂/dt
   = Ḟ − (−KF (u − F̂)) − KF (−F̂ + u − Kv v̂ + Kv v)
   = Ḟ − KF Kv (v − v̂) = Ḟ − KF Kv ev
The term Ḟ is zero (except at zero velocity where it is not well defined).
Putting Ḟ = 0, we obtain
[ėv; ėF] = [−Kv  −1; −Kv KF  0] [ev; eF]      (7.25)
The characteristic polynomial is
λ(s) = s² + Kv s − Kv KF
The error dynamics are thus stable if
Kv > 0 and −Kv KF > 0.
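A quick numerical check of the error dynamics (7.25), with assumed gains satisfying the two conditions:

Kv = 2;  KF = -1;                  % assumed gains with Kv > 0 and -Kv*KF > 0
A  = [-Kv -1; -Kv*KF 0];
eig(A)                             % both eigenvalues in the left half plane
roots([1 Kv -Kv*KF])               % same roots from s^2 + Kv*s - Kv*KF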
[Figure 7.13: nonlinearity with output levels ±D and thresholds ±D/2.]
Solution 6.5
(a) The gain margin for the system is 1.33 > 1.27, thus there should
be no limit cycle since the gain margin exceeds that required for the
worst-case scenario with quantization.
(b) We have already (lecture) seen that the describing function for the
function in Figure 7.13 is given by
ND(A) = 0   for A < D/2
ND(A) = (4D/(πA)) √(1 − (D/(2A))²)   for A > D/2
Superposition (Nf1+f2 = Nf1 + Nf2) gives
NQ = ND + N3D + N5D + · · · + N(2i+1)D
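A sketch of how the sum can be evaluated numerically (D and A below are assumed values; each term is interpreted as the relay-with-dead-zone describing function above, with output step D and threshold (2i+1)D/2):

D = 1;  A = 2.3;                                       % assumed values
NkD = @(A, k) (A > k*D/2).*(4*D./(pi*A)).*sqrt(max(1 - (k*D./(2*A)).^2, 0));
NQ = 0;
for k = 1:2:11            % odd multiples; only thresholds k*D/2 < A contribute
    NQ = NQ + NkD(A, k);
end
NQ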
Solution 6.6
We have
⟨u, y⟩_T = ∫₀^T uy dt = ∫₀^T u sat(u) dt ≥ 0
Solution 6.7
No solution yet.
Solution 6.8
Assume without loss of generality that 0 < u0 < D/2. The input to the
quantizer is u0 + d ( t) where d ( t) is the dither signal. The output y from the
quantizer is
y(t) = Q(u0 + d(t)) = 0   if u0 + d(t) < D/2
y(t) = Q(u0 + d(t)) = D   if u0 + d(t) > D/2
The average over one dither period T is then
y0 = (1/T) · (u0/D) · T · D = u0.
T D
Hence the dither signal gives increased accuracy, at least if the signal y can
be treated as constant compared to the frequency of the dither signal. The
method does not work for high-frequency signals y.
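A minimal simulation sketch of the averaging argument, with an assumed sawtooth dither of amplitude D/2:

D  = 1;  u0 = 0.3;                   % constant input, 0 < u0 < D/2
T  = 1e-3;                           % dither period
t  = linspace(0, T, 10000);
d  = D*(t/T - 0.5);                  % sawtooth dither in [-D/2, D/2)
y  = D*(u0 + d > D/2);               % quantizer output, 0 or D
y0 = mean(y)                         % approximately u0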
Solution 6.9
No solution yet.
Solution 6.10
No solution yet.
Solutions to Chapter 7
Solution 7.1
Let the output of the nonlinearity be u, so that u = f (v).
(a) We have
u = v²,  v ≥ 0
[Plots of u = f(v) and of the inverse v = f⁻¹(u).]
(c) We need xin to jump ±2 D when xout changes sign. xout will change sign
if u goes from increasing to decreasing. Thus the following inverse will
work:
xin(t) = u + D̂   if u(t) > u(t−)
xin(t) = u − D̂   if u(t) < u(t−)
xin(t) = xin(t−)   otherwise
See Figure 7.15 for an illustration.
[Figure 7.15: xin as a function of u, with offsets ±D̂.]
Solution 7.2
(a) We notice that all state equations but the last one are linear. The last state equation reads
ẋn = f(x) + g(x)u.
Choosing the control
u = h(x, v) = (1/g(x)) (−f(x) + Lx + v)
gives
ẋn = Lx + v.
(You may recognize this as the controller form from the basic control course.) For the control to be well defined, we must require that g(x) ≠ 0 for all x.
(b) The above procedure suggests the control
u = (1/(b cos(x1))) (−a sin(x1) + l1 x1 + l2 x2 + v),
which results in the linear closed-loop system
ẋ = [0 1; l1 l2] x + [0; 1] v
Choosing, for instance, l1 = −1 and l2 = −2 places both closed-loop poles in −1. The control law is well defined for x1 ≠ π/2. This corresponds to the pendulum being horizontal. For x1 = π/2, u has no influence on the system. Notice how the control “blows up” near this singularity. Extra: you may want to verify by simulation the behaviour of the modified control
u = sat(h(x, v))
u = −x² − x + v
ẋ = (1 + ε)x² − x² − x = εx² − x
and note that for x > 1/ε, we have ẋ > 0, which implies that the trajectories tend to infinity. Thus, global cancellation is non-robust in the sense that it may require a very precise mathematical model.
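A simulation sketch of the controller in (b) applied to the pendulum model ẋ1 = x2, ẋ2 = a sin(x1) + b cos(x1)u, including the saturated modification mentioned under “Extra”. The parameter values, the saturation level and the initial condition are assumptions.

a = 1;  b = 1;  l1 = -1;  l2 = -2;  v = 0;
h = @(x) (1/(b*cos(x(1))))*(-a*sin(x(1)) + l1*x(1) + l2*x(2) + v);
u = @(x) max(min(h(x), 10), -10);            % u = sat(h(x,v)), assumed limit 10
f = @(t, x) [x(2); a*sin(x(1)) + b*cos(x(1))*u(x)];
[t, x] = ode45(f, [0 10], [1; 0]);
plot(t, x(:, 1))                             % x1 converges to 0 away from pi/2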
Solution 7.3
(a) The sliding set in a sliding mode design is invariant, i.e., if x( t s ) belongs
to the sliding surface σ ( x) = 0, at time t s , then it belongs to the set
σ ( x) = 0 for all future times t ≥ t s . Thus, it must hold that
σ ( x) = σ̇ ( x) = 0
σ̇ ( x) = ẋ1 = 0
The dynamics on this sliding set would thus be stable, but not
asymptotically stable.
(b) According to the lecture slides, the sliding mode control law is
u = −(pᵀAx)/(pᵀB) − (μ/(pᵀB)) sign(σ(x)),
where the sliding surface is
σ(x) = pᵀx = 0.
In this case this gives
u = −(x1 + x2) − μ sign(x1 + x2)
(c) According to the robustness result for the sliding mode controller presented in the lecture, the above controller will force the system toward
the sliding mode if µ is chosen large enough, and if sign(pT B̂) =
sign(pT B), which implies sign(b̂) = sign(b). Since the nominal design
has b̂ = 1, we must have
b>0 (7.26)
Solution 7.4
(a) Straightforward manipulations give
G(s) = K/(sT + 1) · e^(−sL) = 1/(sVm/q + 1) · e^(−sVd/q)
(b) The step response gives the parameters a = 0.9, L = 1. Using the results from (a) and a = KL/T we obtain
a = Vd/Vm
L = Vd/q
Kp = 0.9/a = 1
Ti = 3L = 3/q
Here we see that Kp remains constant whereas Ti changes with the flow q.
Solution 7.5
(a) The pendulum energy is given by
E(x) = mgl(1 − cos(x1)) + (Jp/2) x2²
V̇(x) = 2(E(x) − E0) dE(x)/dt
     = 2(E(x) − E0)(mgl sin(x1) ẋ1 + Jp x2 ẋ2)
     = 2(E(x) − E0)(−ml x2 cos(x1) u)
[Phase plane plot: x2 versus x1.]
Solution 7.6
Define V = ½ σ(x)². We then get
Solution 7.7
Hamiltonian.
The general form of the Hamiltonian according to Glad/Ljung (18.34) is
H = n0 (x² + u²) + λu
Adjoint equation.
This gives c1 = −e^(−2)/(1 − e^(−2)), c2 = 1/(1 − e^(−2)) and the control signal is
u = ẋ = c1 e^t − c2 e^(−t)
What about the case n0 = 0? Then λ is constant and λ(1) = μ ≠ 0. Hence H = λu has no minimum in u, so this case gives no solution candidates.
Solution 7.8
Hamiltonian.
φ(x(tf)) = −0.1x1(tf) − x2(tf) − 5x3(tf) − 3x4(tf) is the criterion to be minimized. Note that L = 0. Setting α = F/m, we have
H = λ1 x3 + λ2 x4 + λ3 α cos u + λ4 (α sin u − g)
Adjoint equation.
λ̇1 = 0
λ̇2 = 0
λ̇3 = −λ1
λ̇4 = −λ2
λ1 ( t f ) = −0.1
λ2 ( t f ) = −1
λ3 ( t f ) = −5
λ4 ( t f ) = −3
λ1 ( t) = −0.1
λ2 ( t) = −1
λ3 ( t) = −5 + 0.1( t − t f )
λ4 ( t) = −3 + t − t f
Optimality conditions.
Minimizing H with respect to u gives
∂H/∂u = −λ3 (F/m) sin u + λ4 (F/m) cos u = 0
⟹ λ3 (F/m) sin(u) = λ4 (F/m) cos(u) ⟹ tan(u) = λ4/λ3 = (−3 + t − tf)/(−5 + 0.1t − 0.1tf)
This gives A = 1, B = −3 − tf, C = 0.1, D = −5 − 0.1tf (writing tan(u) = (At + B)/(Ct + D)).
Solution 7.9
We get
H = λ1 x3 + λ2 x4 + λ3 (u2/x5) cos u1 + λ4 ((u2/x5) sin u1 − g) − λ5 γ u2
  = σ(t, u1) u2 + terms independent of u
where σ(t, u1) = (λ3/x5) cos u1 + (λ4/x5) sin u1 − λ5 γ. Since we want to minimize H with respect to u:
u2 = umax   if σ < 0
u2 = ⋆      if σ = 0
u2 = 0      if σ > 0
and
tan u1 = λ4/λ3   if u2 > 0
tan u1 = ⋆       if u2 = 0
Solution 7.10
The problem is normal, so we can use n0 = 1. We have
λ̇1 = −Hx1 = −2x1 e^(x1²) − λ2 (−1 + u)
λ̇2 = −Hx2 = −2x2 − λ1 + 3x2² λ2
λ(1) = 0
∂H/∂u = 0 ⟹ 2u + λ2 (1 + x1) = 0 ⟹ u = −(λ2/2)(1 + x1)
(∂²H/∂u² = 2 > 0, hence a minimum). This gives
ẋ1 = f1 = x2
ẋ2 = f2 = −x1 − x2³ − (λ2/2)(1 + x1)²
λ̇1 = f3 = −2x1 e^(x1²) − λ2 (−1 + u)
λ̇2 = f4 = −2x2 − λ1 + 3x2² λ2
λ1(1) = λ2(1) = 0
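The resulting two-point boundary value problem can be solved numerically, for instance with Matlab's bvp4c as sketched below. The initial state x(0) is not restated above, so the values x1(0) = 1, x2(0) = 0 are assumptions made only to illustrate the procedure.

x10 = 1;  x20 = 0;                        % assumed initial state
% state vector y = [x1; x2; lambda1; lambda2]
odefun = @(t, y) [y(2);
                  -y(1)-y(2)^3-y(4)/2*(1+y(1))^2;
                  -2*y(1)*exp(y(1)^2)-y(4)*(-1-y(4)/2*(1+y(1)));
                  -2*y(2)-y(3)+3*y(2)^2*y(4)];
bcfun = @(ya, yb) [ya(1)-x10; ya(2)-x20; yb(3); yb(4)];
sol = bvp4c(odefun, bcfun, bvpinit(linspace(0, 1, 20), [x10; x20; 0; 0]));
u = -sol.y(4,:)/2.*(1+sol.y(1,:));        % u = -(lambda2/2)(1 + x1)
plot(sol.x, u)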
Solution 7.11
Hamiltonian. We can minimize the total time by setting L = 1. No terminal
cost gives φ = 0. The constraint at the final time gives
Ψ(x) = [x1; x2].
H = n 0 + λ1 x2 + λ2 u
λ̇1 = 0
λ̇2 = −λ1
with boundary condition
λ(tf) = [1 0; 0 1]ᵀ [μ1; μ2],
which gives
λ1(t) = μ1
λ2(t) = −μ1 t + B
Optimality conditions.
u should be the minimizer for H. Normally we look at Hu , however, this will
not be very useful now as H is linear in u. Instead we note that
u(t) = arg min_{u(t)∈[−1,1]} H(x, u, λ, η) = arg min_{u(t)∈[−1,1]} λ2(t) u(t) =
   1    if λ2(t) < 0
   ?    if λ2(t) = 0
  −1    if λ2(t) > 0
dx1/dx2 = x2/u ⟹ x1 + C1 = x2²/(2u)
For u(t) = 1 we get
x1 + C1 = x2²/2.
This gives the phase plane in Figure 7.17. For u = −1 we get
x1 + C2 = −x2²/2.
This gives the phase plane in Figure 7.18. Consider especially the two
curves for u = ±1 that pass through the origin (C 1 = C 2 = 0). We see that
switching has to occur when a curve intersects another curve going to the origin, i.e. when x1 = −½ sign{x2} x2². To reach the switching curve we need u(t) = −1 above the switch curve and u(t) = 1 below. We therefore see that the control law is given by
u(t) = −sign{ x1(t) + ½ sign{x2(t)} x2²(t) }.
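A simulation sketch of this switching law on the double integrator ẋ1 = x2, ẋ2 = u (the initial condition is an assumption; the simulation is stopped shortly before the origin to avoid chattering in the sign function):

u = @(x) -sign(x(1) + 0.5*sign(x(2))*x(2)^2);
f = @(t, x) [x(2); u(x)];
[t, x] = ode45(f, [0 4.5], [3; 1]);
plot(x(:, 1), x(:, 2))            % u = -1 until the switch curve, then u = +1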
Figure 7.18 Phase plane for u = −1. The solution is traveling downwards.
[Phase plane plot: x2 versus x1.]
Solution 7.12
Since we assume the problem is normal (t f is free so this is not obvious) we
have
H = 1 + |u| + λ1 x2 + λ2 u.
Minimization with respect to |u| ≤ 1 gives
λ2 > 1  ⟹  u = −1
|λ2| < 1  ⟹  u = 0
λ2 < −1  ⟹  u = 1
We also have
λ̇1 = −Hx1 = 0  ⟹  λ1 = B
λ̇2 = −Hx2 = −λ1  ⟹  λ2 = A − Bt
for some constants A, B. If B < 0 we see that λ2 increases (linearly) and hence u(t) passes through the sequence 1 → 0 → −1, or a subsequence of this. If B > 0 the (sub-)sequence is passed in the other direction, −1 → 0 → 1. If B = 0 then u is constant: either u = −1, u = 0 or u = 1. The cases λ2 ≡ 1 and λ2 ≡ −1 are then impossible since the condition H ≡ 0 (since tf is free) then cannot be satisfied.
Solution 7.13
Alternative 1: Use the Bang-bang theorem (p. 472). Note that (A, B) is controllable and Ψx = [1 0; 0 1] has full rank, hence u(t) is bang-bang. From Theorem 18.6 we know that there are at most n − 1 = 1 switches in u (the eigenvalues of A are −1, −2 and are hence real).
Alternative 2: Direct calculation. Minimization with respect to u shows that |u| = 3, where the sign is given by the sign of σ(t). From λ̇ = −Aᵀλ and λ(tf) = Ψxᵀ μ = μ we get
σ(t) = λᵀB = μᵀ e^(−A(t−tf)) B = c1 e^(−t) + c2 e^(−2t)
Solution 7.14
Hamiltonian.
The objective is to minimize tf = ∫₀^tf 1 dt, so L = 1 and the Hamiltonian is
H = n0 + λᵀ(Ax + Bu) = λᵀBu + λᵀAx + n0
Adjoint equation.
λ̇ = −Hx = −Aᵀλ  ⟹  λ(t) = e^(−Aᵀt) λ(0)
Optimality conditions.
The optimal control signal must minimize H, so
Solution 7.15
Minimization of
H = (1 + λ1 )u
gives
1 + λ1 ≠ 0 : no minimum in u
1 + λ1 = 0 : all u give minima
This does not prove that all u in fact give minima. It only says that all u( t)
are so far possible minima and we need more information.
But in fact since
∫₀^1 u dt = ∫₀^1 ẋ1 dt = x(1) − x(0) = 1
Solution 7.16
(a) Introduce x1 = x, x2 = ẋ
ẋ1 = x2
ẋ2 = −x1 + 2x2² + u − 1      (7.27)
As the resulting system is linear and time invariant with poles in the
left half plane for all a > 0 it is GAS.
Solution 7.17
As the hint suggests, the Lyapunov function V(x1, x2, x3) = ½(x1² + x2² + x3²) is used:
V(0, 0, 0) = 0, V(x1, x2, x3) > 0 for ‖x‖ ≠ 0 and V → +∞ as ‖x‖ → +∞.
dV/dt = ẋ1 x1 + ẋ2 x2 + ẋ3 x3
      = −x1² + x1 x2 + x1 x3 tan(x1) − x2⁴ − x1 x2 + x3 x2² + u x3      (7.28)
Solution 7.18
(a) All singular points are given by {ẋ1 = 0, ẋ2 = 0}, which gives x1 = 0, ±2 and x2 = x1.
By writing the system with u(t) ≡ 0 and a = 1 as ẋ = f(x), we get the linearizations at the equilibria as
ẋ ≈ (∂f/∂x)|_(x=xeq) (x − xeq)
A(x1, x2) = ∂f/∂x = [−3 + 3x1²  −1; 1  −1]
A(2, 2) = [9  −1; 1  −1]
eig(A(2, 2)) = 4 ± √24 ≈ {8.9, −0.9}   (saddle point)
A(−2, −2) gives the same eigenvalues.
A(0, 0) = [−3  −1; 1  −1]
eig(A(0, 0)) = {−2, −2}   (stable node)
The origin is only locally asymptotically stable, since there is more than
one equilibrium point. Moreover, solutions starting in the two unstable
equilibrium points will not converge to the origin.
(b)
V̇ = x1 ẋ1 + x2 ẋ2 = −3x1² + x1⁴ − x1 x2 + x1 x2 < 0
as long as |x1| < √3 and x1 ≠ 0. However, we see that we can never prove global stability using this Lyapunov function candidate, since V̇ > 0 if |x1| > √3.
Define
Ω = { (x1, x2) : V(x) = ½(x1² + x2²) ≤ 1 }.
Then Ω is an invariant set for the dynamics, as (the boundary of) Ω is a level set for V and it holds that V̇ ≤ 0 on Ω. As V̇ = 0 for x1 = 0 we must apply LaSalle's theorem. The set E where V̇ = 0 is given by
E = { (x1, x2) = (0, t), −√2 ≤ t ≤ √2 }.
(c) If u(x) = −x1³ then all nonlinearities are canceled and the system is purely linear. The eigenvalues are −2, −2 and thus the origin is GAS. This can be shown by using the Lyapunov function as well.
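For a visual check, the vector field consistent with the Jacobian above (f1 = −3x1 + x1³ − x2, f2 = x1 − x2, an inference from the linearization and the equilibria) can be plotted in Matlab:

[x1, x2] = meshgrid(-3:0.25:3, -3:0.25:3);
quiver(x1, x2, -3*x1 + x1.^3 - x2, x1 - x2)   % stable node at the origin,
axis tight                                    % saddle points at (2,2), (-2,-2)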
Solution 7.19
Both Va and Vb are positive definite with respect to ( x1 , x2 ) and radially
unbounded.
(a)
dVa/dt = 2(x1 ẋ1 + x2 ẋ2) = 2(x1² + u) x2
u would need to be u = −x1² − f(x2), where f(x2) is some function that satisfies
f(x2) > 0 for x2 > 0 and f(x2) < 0 for x2 < 0.
v = −x1²/(1 + x1²) − 4 sat(x2)  ⟹  u = −x1²/(1 + x1²) − 4 sat(x2)
Solution 7.20
Consider the system
ẋ1 = x2 − x1
ẋ2 = k x1² − x2 + u      (7.29)
We use the Lyapunov function candidate V(x1, x2, k̂) = ½(x1² + x2² + (k − k̂)²) and investigate the time derivative
d/dt V(x1, x2, k̂) = x1 ẋ1 + x2 ẋ2 − (k − k̂) dk̂/dt
since k̇ ≈ 0 because k changes very slowly. Inserting the system equations and some simplifications gives
d/dt V(x1, x2, k̂) = x1 x2 − x1² + k x1² x2 − x2² + u x2 − (k − k̂) dk̂/dt
= −x1² − x2² + x1 x2 + u x2 + k (x1² x2 − dk̂/dt) + k̂ dk̂/dt
Choosing the update law dk̂/dt = x1² x2, we obtain
d/dt V(x1, x2, k̂) = −x1² − x2² + x1 x2 + u x2 + k̂ x1² x2
= −x1² − x2² + x2 (u + x1 + k̂ x1²)
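A simulation sketch of the resulting adaptive controller. The update law dk̂/dt = x1²x2 is taken from the calculation above, while the particular choice u = −x1 − k̂x1² (which makes V̇ = −x1² − x2²) and the value of the true parameter k are assumptions made for the illustration.

k = 2;                              % assumed "true" parameter
% state vector s = [x1; x2; khat]
f = @(t, s) [s(2)-s(1);
             k*s(1)^2-s(2)-s(1)-s(3)*s(1)^2;
             s(1)^2*s(2)];
[t, s] = ode45(f, [0 20], [1; 0; 0]);
plot(t, s)                          % x1 and x2 converge to zero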
Solution 7.21
Start with the system ẋ1 = x1² + φ(x1), which can be stabilized using φ(x1) = −x1² − x1. Notice that φ(0) = 0. Take V1(x1) = x1²/2. To backstep, define
z2 = x2 − φ(x1) = x2 + x1² + x1,
which gives
ẋ1 = −x1 + z2
ż2 = u + (1 + 2x1)(−x1 + z2)
Solution 7.22
(a) Start with the system ẋ1 = x1² − x1³ + φ(x1), which can be stabilized using φ(x1) = −x1² − x1. Notice that φ(0) = 0. Take V1(x1) = x1²/2. To backstep, define
ζ2 = x2 − φ(x1) = x2 + x1² + x1,
which gives
ẋ1 = −x1 − x1³ + ζ2
ζ̇2 = u + (1 + 2x1)(−x1 − x1³ + ζ2)
Solution 7.23
(a) Defining
f1(x1) = x1
g1(x1) = 1
f2(x1, x2) = sin(x1 − x2)
g2(x1, x2) = 1
the system can be written on the strict feedback form
ẋ1 = f1(x1) + g1(x1) x2
ẋ2 = f2(x1, x2) + g2(x1, x2) u
(b) Start with the system ẋ1 = x1 + φ(x1), which can be stabilized using φ(x1) = −2x1. Notice that φ(0) = 0. Take V1(x1) = x1²/2. To backstep, define
ζ2 = x2 − φ(x1) = x2 + 2x1,
which gives
ẋ1 = −x1 + ζ2
ζ̇2 = −2x1 + 2ζ2 + sin(3x1 − ζ2) + u
Taking V = V1(x1) + ζ2²/2 as a Lyapunov function gives
Solution 7.24
(a) Defining
f1(x1) = −sat(x1)
g1(x1) = x1²
f2(x1, x2) = x1²
g2(x1, x2) = 1
the system can be written on the strict feedback form
ẋ1 = f1(x1) + g1(x1) x2
ẋ2 = f2(x1, x2) + g2(x1, x2) u
To backstep, define
ζ2 = x2 − φ(x1) = x2 + x1,
Solution 7.25
Start with the system ẋ1 = x1 + φ1(x1), which can be stabilized using φ1(x1) = −2x1. Notice that φ1(0) = 0. Take V1(x1) = x1²/2. To backstep, define
ζ2 = x2 − φ1(x1) = x2 + 2x1,
which gives
ẋ1 = −x1 + ζ2
ζ̇2 = −2x1 + 2ζ2 + sin(3x1 − ζ2) + x3
Treating x3 as an input, the system
ẋ1 = −x1 + ζ2
ζ̇2 = −2x1 + 2ζ2 + sin(3x1 − ζ2) + φ2
is stabilized by
φ2 = −sin(3x1 − ζ2) + x1 − 3ζ2,
which gives
V̇2 = −x1² − ζ2².
To backstep a second time, define
ζ3 = x3 − φ2 = x3 + sin(3x1 − ζ2) − x1 + 3ζ2
⟹
ζ̇3 = ẋ3 + cos(3x1 − ζ2)(3ẋ1 − ζ̇2) − ẋ1 + 3ζ̇2
   = u + cos(3x1 − ζ2)(−2x1 + 4ζ2 − ζ3) − 2x1 − 4ζ2 + 3ζ3
In the new coordinates the system reads
ẋ1 = −x1 + ζ2
ζ̇2 = −x1 − ζ2 + ζ3
ζ̇3 = u + cos(3x1 − ζ2)(−2x1 + 4ζ2 − ζ3) − 2x1 − 4ζ2 + 3ζ3 = u + β(x, z)
Now the control signal appears in the equation, and we can design a control law. Consider the Lyapunov function candidate V = V2 + ζ3²/2:
Choosing
u = −β(x, z) − ζ2 − ζ3
gives
V̇ = −x1² − ζ2² − ζ3².
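A simulation sketch of the complete design. The original system is assumed to be ẋ1 = x1 + x2, ẋ2 = sin(x1 − x2) + x3, ẋ3 = u (which is consistent with the coordinates used above), and u = −β(x, z) − ζ2 − ζ3 is the choice made above.

zeta2 = @(x) x(2) + 2*x(1);
zeta3 = @(x) x(3) + sin(3*x(1) - zeta2(x)) - x(1) + 3*zeta2(x);
beta  = @(x) cos(3*x(1) - zeta2(x))*(-2*x(1) + 4*zeta2(x) - zeta3(x)) ...
             - 2*x(1) - 4*zeta2(x) + 3*zeta3(x);
u     = @(x) -beta(x) - zeta2(x) - zeta3(x);
f     = @(t, x) [x(1) + x(2); sin(x(1) - x(2)) + x(3); u(x)];
[t, x] = ode45(f, [0 10], [1; 0; 0]);
plot(t, x)                        % all three states converge to zero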
Solution 7.26
(a) Define
Vk(xk) = gN(xN) + Σ_{j=k}^{N−1} gj(xj, μj*(xj))
V4(x) = g4(x) = x²
V3(x) = min_u [ g3(x, u) + V4(2x + u) ]
u3 = u2 = u1 = u0 = 2.
= 31x²   if |x| ≤ 1
= 31x² + 30(x − sgn(x))²   otherwise
Solution 7.27
The Hamilton-Jacobi-Bellman equation for this problem is
Vt(t, x) = −min_u [L + ∇xV f]  ⟹  q̇x² = −min_u [x²u² + 2qx²u] = −min_u [u² + 2qu] x²
Minimization yields u = −q. Insertion gives
q̇x² = −(q² − 2q²) x² = q² x²  ⟹  q̇ = q²
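A one-line symbolic check (a sketch; requires the Symbolic Math Toolbox) that functions of the form q(t) = 1/(C − t) indeed satisfy q̇ = q²:

syms t C
q = 1/(C - t);
simplify(diff(q, t) - q^2)        % returns 0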
8. Bibliography
Åström, K. J. (1968): Reglerteknik – Olinjära System. TLTH/VBV.
Boyd, S. P. (1997): “Homework assignments in ee375 – advanced analysis of
feedback.” Available from http://www-leland.stanford.edu/class/ee375/.
Khalil, H. K. (1996): Nonlinear Systems, 2nd edition. Prentice Hall, Upper
Saddle River, N.J.
Slotine, J.-J. E. and W. Li (1991): Applied Nonlinear Control. Prentice Hall,
Englewood Cliffs, N.J.