
Optimization-Based Control

Richard M. Murray
Control and Dynamical Systems
California Institute of Technology

DRAFT v2.1a, January 4, 2010


© California Institute of Technology

All rights reserved.

This manuscript is for review purposes only and may not be reproduced, in whole or in
part, without written consent from the author.
Chapter 2
Optimal Control

This set of notes expands on Chapter 6 of Feedback Systems by Åström and Murray
(ÅM08), which introduces the concepts of reachability and state feedback. We also
expand on topics in Section 7.5 of ÅM08 in the area of feedforward compensation.
Beginning with a review of optimization, we introduce the notion of Lagrange mul-
tipliers and provide a summary of Pontryagin's maximum principle. Using these
tools we derive the linear quadratic regulator for linear systems and describe its
usage.
Prerequisites. Readers should be familiar with modeling of input/output control
systems using differential equations, linearization of a system around an equilib-
rium point and state space control of linear systems, including reachability and
eigenvalue assignment. Some familiarity with optimization of nonlinear functions is
also assumed.

2.1 Review: Optimization


Optimization refers to the problem of choosing a set of parameters that maximize
or minimize a given function. In control systems, we are often faced with having to
choose a set of parameters for a control law so that some performance condition
is satisfied. In this chapter we will seek to optimize a given specification, choosing
the parameters that maximize the performance (or minimize the cost). In this
section we review the conditions for optimization of a static function, and then
extend this to optimization of trajectories and control laws in the remainder of
the chapter. More information on basic techniques in optimization can be found
in [Lue97] or the introductory chapter of [LS95].
Consider first the problem of finding the minimum of a smooth function F :
Rn → R. That is, we wish to find a point x∗ ∈ Rn such that F (x∗ ) ≤ F (x) for all
x ∈ Rn . A necessary condition for x∗ to be a minimum is that the gradient of the
function be zero at x∗ :
$$\frac{\partial F}{\partial x}(x^*) = 0.$$
The function F (x) is often called a cost function and x∗ is the optimal value for x.
Figure 2.1 gives a graphical interpretation of the necessary condition for a minimum.
Note that these are not sufficient conditions; the points x1 and x2 and x∗ in the
figure all satisfy the necessary condition but only one is the (global) minimum.
The situation is more complicated if constraints are present. Let Gi : Rn →
R, i = 1, . . . , k be a set of smooth functions with Gi (x) = 0 representing the
constraints. Suppose that we wish to find x∗ ∈ Rn such that Gi (x∗ ) = 0 and
F (x∗ ) ≤ F (x) for all x ∈ {x ∈ Rn : Gi (x) = 0, i = 1, . . . , k}. This situation can be

Figure 2.1: Optimization of functions. The minimum of a function occurs at a point where the gradient is zero.

Figure 2.2: Optimization with constraints. (a) We seek a point x∗ that minimizes F(x) while lying on the surface G(x) = 0 (a line in the x1 x2 plane). (b) We can parameterize the constrained directions by computing the gradient of the constraint G. Note that x ∈ R² in (a), with the third dimension showing F(x), while x ∈ R³ in (b).

visualized as constraining the point to a surface (defined by the constraints) and


searching for the minimum of the cost function along this surface, as illustrated in
Figure 2.2a.
A necessary condition for being at a minimum is that there are no directions
tangent to the constraints that also decrease the cost. Given a constraint function
G(x) = (G1 (x), . . . , Gk (x)), x ∈ Rn we can represent the constraint as a n − k
dimensional surface in Rn , as shown in Figure 2.2b. The tangent directions to the
surface can be computed by considering small perturbations of the constraint that
remain on the surface:
$$G_i(x + \delta x) \approx G_i(x) + \frac{\partial G_i}{\partial x}(x)\,\delta x = 0 \quad\Longrightarrow\quad \frac{\partial G_i}{\partial x}(x)\,\delta x = 0,$$
where δx ∈ Rn is a vanishingly small perturbation. It follows that the normal
directions to the surface are spanned by ∂Gi /∂x, since these are precisely the vectors
that annihilate an admissible tangent vector δx.
Using this characterization of the tangent and normal vectors to the constraint, a
necessary condition for optimization is that the gradient of F is spanned by vectors

that are normal to the constraints, so that the only directions that increase the
cost violate the constraints. We thus require that there exist scalars λi , i = 1, . . . , k
such that
$$\frac{\partial F}{\partial x}(x^*) + \sum_{i=1}^{k} \lambda_i\,\frac{\partial G_i}{\partial x}(x^*) = 0.$$
If we let $G = \begin{pmatrix} G_1 & G_2 & \dots & G_k \end{pmatrix}^T$, then we can write this condition as
$$\frac{\partial F}{\partial x} + \lambda^T \frac{\partial G}{\partial x} = 0. \qquad (2.1)$$
The term ∂F/∂x is the usual (gradient) optimality condition while the term ∂G/∂x
is used to “cancel” the gradient in the directions normal to the constraint.
An alternative condition can be derived by modifying the cost function to incorporate the constraints. Defining $\tilde F = F + \sum_i \lambda_i G_i$, the necessary condition becomes
$$\frac{\partial \tilde F}{\partial x}(x^*) = 0.$$

The scalars λi are called Lagrange multipliers. Minimizing $\tilde F$ is equivalent to the optimization given by
$$\min_x \big( F(x) + \lambda^T G(x) \big). \qquad (2.2)$$

The variables λ can be regarded as free variables, which implies that we need to
choose x such that G(x) = 0 in order to ensure the cost is minimized. Otherwise,
we could choose λ to generate a large cost.

Example 2.1 Two free variables with a constraint


Consider the cost function given by

F (x) = F0 + (x1 − a)2 + (x2 − b)2 ,

which has an unconstrained minimum at x = (a, b). Suppose that we add a con-
straint G(x) = 0 given by
G(x) = x1 − x2 .
With this constraint, we seek to optimize F subject to x1 = x2 . Although in this
case we could do this by simple substitution, we instead carry out the more general
procedure using Lagrange multipliers.
The augmented cost function is given by

F̃ (x) = F0 + (x1 − a)2 + (x2 − b)2 + λ(x1 − x2 ),

where λ is the Lagrange multiplier for the constraint. Taking the derivative of F̃ ,
we have
$$\frac{\partial \tilde F}{\partial x} = \begin{pmatrix} 2x_1 - 2a + \lambda & 2x_2 - 2b - \lambda \end{pmatrix}.$$
Setting each of these equations equal to zero, we have that at the minimum

x∗1 = a − λ/2, x∗2 = b + λ/2.



The remaining equation that we need is the constraint, which requires that x∗1 = x∗2 .
Using these three equations, we see that λ∗ = a − b and we have
$$x_1^* = \frac{a+b}{2}, \qquad x_2^* = \frac{a+b}{2}.$$
To verify the geometric view described above, note that the gradients of F and
G are given by
$$\frac{\partial F}{\partial x} = \begin{pmatrix} 2x_1 - 2a & 2x_2 - 2b \end{pmatrix}, \qquad \frac{\partial G}{\partial x} = \begin{pmatrix} 1 & -1 \end{pmatrix}.$$
At the optimal value of the (constrained) optimization, we have
$$\frac{\partial F}{\partial x} = \begin{pmatrix} b - a & a - b \end{pmatrix}, \qquad \frac{\partial G}{\partial x} = \begin{pmatrix} 1 & -1 \end{pmatrix}.$$
Although the derivative of F is not zero, it is pointed in a direction that is normal
to the constraint, and hence we cannot decrease the cost while staying on the
constraint surface. ∇
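The stationarity conditions in this example are easy to check symbolically. The following sketch (not part of the original text) uses Python with sympy to form the augmented cost from Example 2.1 and solve the resulting equations; the symbols a and b are the cost parameters from the example.

```python
import sympy as sp

x1, x2, lam, a, b, F0 = sp.symbols('x1 x2 lam a b F0', real=True)

# Augmented cost F_tilde = F + lambda * G from Example 2.1
F = F0 + (x1 - a)**2 + (x2 - b)**2
G = x1 - x2
F_tilde = F + lam * G

# Stationarity with respect to x1 and x2, plus the constraint G = 0
eqs = [sp.diff(F_tilde, x1), sp.diff(F_tilde, x2), G]
sol = sp.solve(eqs, [x1, x2, lam], dict=True)[0]

print(sol)  # expect x1 = x2 = (a + b)/2 and lam = a - b
```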
We have focused on finding the minimum of a function. We can switch back and
forth between maximum and minimum by simply negating the cost function:

$$\max_x F(x) = -\min_x \big(-F(x)\big).$$

We see that the conditions that we have derived are independent of the sign of F since they only depend on the gradient being zero in appropriate directions. Thus
finding x∗ that satisfies the conditions corresponds to finding an extremum for the
function.
Very good software is available for solving such optimization problems numerically. The NPSOL and SNOPT libraries are available in FORTRAN (and C). In MATLAB, the fmincon function can be used to solve a constrained optimization problem.
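For readers working in Python rather than MATLAB, a rough equivalent (an illustrative sketch, not from the original text) uses scipy.optimize.minimize with an equality constraint. The problem from Example 2.1, with the arbitrary values a = 1 and b = 3, serves as a test case.

```python
import numpy as np
from scipy.optimize import minimize

a, b = 1.0, 3.0  # cost parameters from Example 2.1 (values chosen for illustration)

def F(x):
    # cost F(x) = (x1 - a)^2 + (x2 - b)^2 (the constant F0 does not affect the minimizer)
    return (x[0] - a)**2 + (x[1] - b)**2

constraints = [{'type': 'eq', 'fun': lambda x: x[0] - x[1]}]  # G(x) = x1 - x2 = 0

result = minimize(F, x0=np.zeros(2), constraints=constraints)
print(result.x)  # expect approximately [(a+b)/2, (a+b)/2] = [2, 2]
```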

2.2 Optimal Control of Systems


Consider now the optimal control problem:
$$\min_{u(\cdot)} \int_0^T L(x, u)\,dt + V\big(x(T)\big)$$

subject to the constraint

ẋ = f (x, u), x ∈ Rn , u ∈ Rm .

Abstractly, this is a constrained optimization problem where we seek a feasible


trajectory (x(t), u(t)) that minimizes the cost function
$$J(x, u) = \int_0^T L(x, u)\,dt + V\big(x(T)\big).$$

More formally, this problem is equivalent to the “standard” problem of minimizing a


cost function J(x, u) where (x, u) ∈ L2 [0, T ] (the set of square integrable functions)
and h(z) = ẋ(t) − f (x(t), u(t)) = 0 models the dynamics. The term L(x, u) is
referred to as the integral cost and V (x(T )) is the final (or terminal) cost.
There are many variations and special cases of the optimal control problem. We
mention a few here:
Infinite horizon optimal control. If we let T = ∞ and set V = 0, then we seek to
optimize a cost function over all time. This is called the infinite horizon optimal
control problem, versus the finite horizon problem with T < ∞. Note that if an
infinite horizon problem has a solution with finite cost, then the integral cost term
L(x, u) must approach zero as t → ∞.
Linear quadratic (LQ) optimal control. If the dynamical system is linear and the
cost function is quadratic, we obtain the linear quadratic optimal control problem:
$$\dot x = Ax + Bu, \qquad J = \int_0^T \big( x^T Q x + u^T R u \big)\,dt + x^T(T) P_1 x(T).$$

In this formulation, Q ≥ 0 penalizes state error, R > 0 penalizes the input and
P1 > 0 penalizes terminal state. This problem can be modified to track a desired
trajectory (xd , ud ) by rewriting the cost function in terms of (x − xd ) and (u − ud ).
Terminal constraints. It is often convenient to ask that the final value of the tra-
jectory, denoted xf , be specified. We can do this by requiring that x(T ) = xf or by
using a more general form of constraint:
ψi (x(T )) = 0, i = 1, . . . , q.
The fully constrained case is obtained by setting q = n and defining ψi (x(T )) =
xi (T ) − xi,f . For a control problem with a full set of terminal constraints, V (x(T ))
can be omitted (since its value is fixed).
Time optimal control. If we constrain the terminal condition to x(T ) = xf , let the
terminal time T be free (so that we can optimize over it) and choose L(x, u) = 1,
we can find the time-optimal trajectory between an initial and final condition. This
problem is usually only well-posed if we additionally constrain the inputs u to be
bounded.
A very general set of conditions are available for the optimal control problem that
captures most of these special cases in a unifying framework. Consider a nonlinear
system
ẋ = f (x, u), x = Rn ,
x(0) given, u ∈ Ω ⊂ Rm ,
where f (x, u) = (f1 (x, u), . . . fn (x, u)) : Rn × Rm → Rn . We wish to minimize a
cost function J with terminal constraints:
$$J = \int_0^T L(x, u)\,dt + V(x(T)), \qquad \psi(x(T)) = 0.$$

The function ψ : Rn → Rq gives a set of q terminal constraints. Analogous to the


case of optimizing a function subject to constraints, we construct the Hamiltonian:
$$H = L + \lambda^T f = L + \sum_i \lambda_i f_i.$$

The variables λ are functions of time and are often referred to as the costate vari-
ables. A set of necessary conditions for a solution to be optimal was derived by
Pontryagin [PBGM62].
Theorem 2.1 (Maximum Principle). If (x∗ , u∗ ) is optimal, then there exists λ∗ (t) ∈
Rn and ν ∗ ∈ Rq such that
$$\dot x_i = \frac{\partial H}{\partial \lambda_i}, \qquad -\dot\lambda_i = \frac{\partial H}{\partial x_i}, \qquad x(0) \text{ given}, \quad \psi(x(T)) = 0,$$
$$\lambda(T) = \frac{\partial V}{\partial x}\big(x(T)\big) + \nu^T \frac{\partial \psi}{\partial x},$$
and
$$H\big(x^*(t), u^*(t), \lambda^*(t)\big) \le H\big(x^*(t), u, \lambda^*(t)\big) \quad \text{for all } u \in \Omega.$$
The form of the optimal solution is given by the solution of a differential equation
with boundary conditions. If u = arg min H(x, u, λ) exists, we can use this to choose
the control law u and solve for the resulting feasible trajectory that minimizes the
cost. The boundary conditions are given by the n initial states x(0), the q terminal
constraints on the state ψ(x(T )) = 0 and the n − q final values for the Lagrange
multipliers
$$\lambda(T) = \frac{\partial V}{\partial x}\big(x(T)\big) + \nu^T \frac{\partial \psi}{\partial x}.$$
In this last equation, ν is a free variable and so there are n equations in n + q free
variables, leaving n − q constraints on λ(T ). In total, we thus have 2n boundary
values.
The maximum principle is a very general (and elegant) theorem. It allows the
dynamics to be nonlinear and the input to be constrained to lie in a set Ω, allowing
the possibility of bounded inputs. If Ω = Rm (unconstrained input) and H is
differentiable, then a necessary condition for the optimal input is
$$\frac{\partial H}{\partial u} = 0.$$
We note that even though we are minimizing the cost, this is still usually called the
maximum principle (an artifact of history).
Sketch of proof. We follow the proof given by Lewis and Syrmos [LS95], omitting
some of the details required for a fully rigorous proof. We use the method of La-
grange multipliers, augmenting our cost function by the dynamical constraints and
the terminal constraints:
$$\begin{aligned}
\tilde J(x(\cdot), u(\cdot), \lambda(\cdot), \nu) &= J(x, u) + \int_0^T \Big( -\lambda^T(t)\big(\dot x(t) - f(x, u)\big) \Big)\,dt + \nu^T \psi(x(T)) \\
&= \int_0^T \Big( L(x, u) - \lambda^T(t)\big(\dot x(t) - f(x, u)\big) \Big)\,dt + V(x(T)) + \nu^T \psi(x(T)).
\end{aligned}$$

Note that λ is a function of time, with each λ(t) corresponding to the instantaneous
constraint imposed by the dynamics. The integral over the interval [0, T ] plays the
role of the sum of the finite constraints in the regular optimization.

Making use of the definition of the Hamiltonian, the augmented cost becomes
$$\tilde J(x(\cdot), u(\cdot), \lambda(\cdot), \nu) = \int_0^T \Big( H(x, u) - \lambda^T(t)\dot x \Big)\,dt + V(x(T)) + \nu^T \psi(x(T)).$$

We can now “linearize” the cost function around the optimal solution x(t) = x∗ (t)+
δx(t), u(t) = u∗ (t) + δu(t), λ(t) = λ∗ (t) + δλ(t) and ν = ν ∗ + δν. Taking T as fixed
for simplicity (see [LS95] for the more general case), the incremental cost can be
written as
$$\begin{aligned}
\delta\tilde J &= \tilde J(x^* + \delta x, u^* + \delta u, \lambda^* + \delta\lambda, \nu^* + \delta\nu) - \tilde J(x^*, u^*, \lambda^*, \nu^*) \\
&\approx \int_0^T \Big( \frac{\partial H}{\partial x}\delta x + \frac{\partial H}{\partial u}\delta u - \lambda^T \delta\dot x + \Big(\frac{\partial H}{\partial \lambda} - \dot x^T\Big)\delta\lambda \Big)\,dt \\
&\qquad + \frac{\partial V}{\partial x}\delta x(T) + \nu^T \frac{\partial \psi}{\partial x}\delta x(T) + \delta\nu^T \psi\big(x(T)\big),
\end{aligned}$$
where we have omitted the time argument inside the integral and all derivatives
are evaluated along the optimal solution.
We can eliminate the dependence on δ ẋ using integration by parts:
$$-\int_0^T \lambda^T \delta\dot x\,dt = -\lambda^T(T)\delta x(T) + \lambda^T(0)\delta x(0) + \int_0^T \dot\lambda^T \delta x\,dt.$$

Since we are requiring x(0) = x0 , the first term vanishes and substituting this into
δ J˜ yields
$$\begin{aligned}
\delta\tilde J &\approx \int_0^T \Big( \Big(\frac{\partial H}{\partial x} + \dot\lambda^T\Big)\delta x + \frac{\partial H}{\partial u}\delta u + \Big(\frac{\partial H}{\partial \lambda} - \dot x^T\Big)\delta\lambda \Big)\,dt \\
&\qquad + \Big( \frac{\partial V}{\partial x} + \nu^T \frac{\partial \psi}{\partial x} - \lambda^T(T) \Big)\delta x(T) + \delta\nu^T \psi\big(x(T)\big).
\end{aligned}$$
To be optimal, we require δ J˜ = 0 for all δx, δu, δλ and δν, and we obtain the
(local) conditions in the theorem.

2.3 Examples
To illustrate the use of the maximum principle, we consider a number of analytical
examples. Additional examples are given in the exercises.
Example 2.2 Scalar linear system
Consider the optimal control problem for the system

ẋ = ax + bu, (2.3)

where x ∈ R is a scalar state, u ∈ R is the input, the initial state x(t0) is given,
and a, b ∈ R are positive constants. We wish to find a trajectory (x(t), u(t)) that
minimizes the cost function
$$J = \tfrac{1}{2}\int_{t_0}^{t_f} u^2(t)\,dt + \tfrac{1}{2} c\, x^2(t_f),$$

where the terminal time tf is given and c > 0 is a constant. This cost function
balances the final value of the state with the input required to get to that state.
To solve the problem, we define the various elements used in the maximum
principle. Our integral and terminal costs are given by

L = 21 u2 (t) V = 12 cx2 (tf ).

We write the Hamiltonian of this system and derive the following expressions for
the costate λ:
$$H = L + \lambda f = \tfrac{1}{2}u^2 + \lambda(ax + bu),$$
$$\dot\lambda = -\frac{\partial H}{\partial x} = -a\lambda, \qquad \lambda(t_f) = \frac{\partial V}{\partial x} = c\, x(t_f).$$
This is a final value problem for a linear differential equation in λ and the solution
can be shown to be
$$\lambda(t) = c\, x(t_f)\, e^{a(t_f - t)}.$$
The optimal control is given by
$$\frac{\partial H}{\partial u} = u + b\lambda = 0 \quad\Rightarrow\quad u^*(t) = -b\lambda(t) = -b c\, x(t_f)\, e^{a(t_f - t)}.$$
Substituting this control into the dynamics given by equation (2.3) yields a first-
order ODE in x:
$$\dot x = a x - b^2 c\, x(t_f)\, e^{a(t_f - t)}.$$
This can be solved explicitly as

$$x^*(t) = x(t_0)\, e^{a(t - t_0)} + \frac{b^2 c}{2a}\, x^*(t_f)\Big[ e^{a(t_f - t)} - e^{a(t + t_f - 2t_0)} \Big].$$
Setting t = tf and solving for x(tf ) gives

$$x^*(t_f) = \frac{2a\, e^{a(t_f - t_0)}\, x(t_0)}{2a - b^2 c\big(1 - e^{2a(t_f - t_0)}\big)}$$

and finally we can write

$$u^*(t) = -\frac{2abc\, e^{a(2t_f - t_0 - t)}\, x(t_0)}{2a - b^2 c\big(1 - e^{2a(t_f - t_0)}\big)}, \qquad (2.4)$$
$$x^*(t) = x(t_0)\, e^{a(t - t_0)} + \frac{b^2 c\, e^{a(t_f - t_0)}\, x(t_0)}{2a - b^2 c\big(1 - e^{2a(t_f - t_0)}\big)} \Big[ e^{a(t_f - t)} - e^{a(t + t_f - 2t_0)} \Big]. \qquad (2.5)$$

We can use the form of this expression to explore how our cost function affects
the optimal trajectory. For example, we can ask what happens to the terminal state
x∗(tf) as c → ∞. Setting t = tf in equation (2.5) and taking the limit we find that
$$\lim_{c \to \infty} x^*(t_f) = 0.$$
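A quick numerical check of these formulas (an illustrative sketch, not part of the original text; the parameter values are arbitrary) is to integrate the dynamics (2.3) under the optimal input (2.4) and confirm that the result matches the closed-form state trajectory (2.5).

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 2.0, 0.5, 1.0        # system and cost parameters (chosen for illustration)
t0, tf, x0 = 0.0, 1.0, 4.0     # time horizon and initial state

D = 2*a - b**2 * c * (1 - np.exp(2*a*(tf - t0)))   # common denominator in (2.4) and (2.5)

def u_opt(t):
    # optimal input from equation (2.4)
    return -2*a*b*c * np.exp(a*(2*tf - t0 - t)) * x0 / D

def x_opt(t):
    # optimal state from equation (2.5)
    return (x0*np.exp(a*(t - t0))
            + b**2*c*np.exp(a*(tf - t0))*x0/D
              * (np.exp(a*(tf - t)) - np.exp(a*(t + tf - 2*t0))))

# integrate xdot = a x + b u*(t) and compare with the closed-form expression
sol = solve_ivp(lambda t, x: a*x + b*u_opt(t), (t0, tf), [x0],
                dense_output=True, rtol=1e-8, atol=1e-10)
ts = np.linspace(t0, tf, 50)
print(np.max(np.abs(sol.sol(ts)[0] - x_opt(ts))))   # should be close to zero
```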



Example 2.3 Bang-bang control


The time-optimal control problem for a linear system has a particularly simple solution. Consider a linear system with bounded input
ẋ = Ax + Bu, |u| ≤ 1,
and suppose we wish to minimize the time required to move from an initial state
x0 to a final state xf . Without loss of generality we can take xf = 0. We choose
the cost functions and terminal constraints to satisfy
$$J = \int_0^T 1\,dt, \qquad \psi(x(T)) = x(T).$$

To find the optimal control, we form the Hamiltonian

H = 1 + λT (Ax + Bu) = 1 + (λT A)x + (λT B)u.


Now apply the conditions in the maximum principle:
$$\dot x = \frac{\partial H}{\partial \lambda} = Ax + Bu, \qquad -\dot\lambda = \frac{\partial H}{\partial x} = A^T \lambda, \qquad u = \arg\min_u H = -\mathrm{sgn}(\lambda^T B).$$
The optimal solution always satisfies this equation (since the maximum principle
gives a necessary condition) with x(0) = x0 and x(T ) = 0. It follows that the input
is always either +1 or −1, depending on λT B. This type of control is called “bang-
bang” control since the input is always on one of its limits. If λT (t)B = 0 then the
control is not well defined, but if this is only true for a specific time instant (e.g.,
λT (t)B crosses zero) then the analysis still holds. ∇
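For a concrete instance (not worked out in the text and included here only as an illustrative sketch), consider a double integrator q̈ = u with |u| ≤ 1 driven from rest at position d > 0 to rest at the origin. Since λ̇ = −Aᵀλ makes λᵀB affine in time for this system, the input can switch at most once; the sketch below assumes the structure u = −1 followed by u = +1 and searches for the single switching time.

```python
import numpy as np

d = 1.0   # initial position (illustrative); the system starts and ends at rest

def final_position(ts):
    # closed-form double-integrator trajectory: u = -1 for t < ts, then u = +1
    # until the velocity returns to zero (which takes another ts seconds)
    x1_s, x2_s = d - ts**2 / 2, -ts        # state at the switching time
    return x1_s + x2_s * ts + ts**2 / 2    # position when the velocity reaches zero again

# brute-force search for the switching time that lands exactly on the origin
candidates = np.linspace(0.0, 3.0, 30001)
errors = np.abs([final_position(t) for t in candidates])
ts = candidates[np.argmin(errors)]
print(ts, 2 * ts)   # expect ts = sqrt(d) and total transfer time T = 2*sqrt(d)
```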

2.4 Linear Quadratic Regulators


In addition to its use for computing optimal, feasible trajectories for a system, we
can also use optimal control theory to design a feedback law u = α(x) that stabilizes
a given equilibrium point. Roughly speaking, we do this by continuously re-solving
the optimal control problem from our current state x(t) and applying the resulting
input u(t). Of course, this approach is impractical unless we can solve explicitly for
the optimal control and somehow rewrite the optimal control as a function of the
current state in a simple way. In this section we explore exactly this approach for
the linear quadratic optimal control problem.
We begin with the finite horizon, linear quadratic regulator (LQR) problem, given by
$$\dot x = Ax + Bu, \quad x \in \mathbb{R}^n,\ u \in \mathbb{R}^m, \quad x_0 \text{ given},$$
$$\tilde J = \frac{1}{2}\int_0^T \big( x^T Q_x x + u^T Q_u u \big)\,dt + \frac{1}{2}\, x^T(T) P_1 x(T),$$
where Qx ≥ 0, Qu > 0, P1 ≥ 0 are symmetric, positive (semi-) definite matrices. Note that the factor of 1/2 is usually left out, but we included it here to simplify the

derivation. (The optimal control will be unchanged if we multiply the entire cost
function by 2.)
To find the optimal control, we apply the maximum principle. We begin by computing the Hamiltonian H:
$$H = \frac{1}{2} x^T Q_x x + \frac{1}{2} u^T Q_u u + \lambda^T(Ax + Bu).$$
Applying the results of Theorem 2.1, we obtain the necessary conditions
$$\begin{aligned}
\dot x &= \Big(\frac{\partial H}{\partial \lambda}\Big)^T = Ax + Bu, & x(0) &= x_0, \\
-\dot\lambda &= \Big(\frac{\partial H}{\partial x}\Big)^T = Q_x x + A^T \lambda, & \lambda(T) &= P_1 x(T), \\
0 &= \frac{\partial H}{\partial u} = Q_u u + B^T \lambda. & & \qquad (2.6)
\end{aligned}$$
The last condition can be solved to obtain the optimal controller
$$u = -Q_u^{-1} B^T \lambda,$$
which can be substituted into the dynamic equation (2.6). To solve for the optimal control we must solve a two point boundary value problem using the initial condition x(0) and the final condition λ(T). Unfortunately, it is very hard to solve such problems in general.
Given the linear nature of the dynamics, we attempt to find a solution by setting
λ(t) = P (t)x(t) where P (t) ∈ Rn×n . Substituting this into the necessary condition,
we obtain
$$\dot\lambda = \dot P x + P\dot x = \dot P x + P\big(Ax - BQ_u^{-1}B^T P x\big),$$
$$\Longrightarrow\quad -\dot P x - PAx + PBQ_u^{-1}B^T P x = Q_x x + A^T P x.$$

This equation is satisfied if we can find P (t) such that


$$-\dot P = PA + A^T P - PBQ_u^{-1}B^T P + Q_x, \qquad P(T) = P_1. \qquad (2.7)$$
This is a matrix differential equation that defines the elements of P (t) from a final
value P (T ). Solving it is conceptually no different than solving the initial value
problem for vector-valued ordinary differential equations, except that we must solve
for the individual elements of the matrix P (t) backwards in time. Equation (2.7) is
called the Riccati ODE.
An important property of the solution to the optimal control problem when
written in this form is that P (t) can be solved without knowing either x(t) or u(t).
This allows the two point boundary value problem to be separated into first solving
a final-value problem and then solving a time-varying initial value problem. More
specifically, given P(t) satisfying equation (2.7), we can apply the optimal input
$$u(t) = -Q_u^{-1} B^T P(t)\, x$$
and then solve the original dynamics of the system forward in time from the initial condition x(0) = x0. Note that this is a (time-varying) feedback control that describes how to move from any state to the origin.
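The two-step procedure just described can be sketched directly in code. The example below is illustrative only (the double-integrator matrices, weights, horizon, and initial condition are arbitrary choices, not taken from the text): it integrates the Riccati ODE (2.7) backward from P(T) = P1 and then simulates the closed-loop system forward under u = −Qu⁻¹BᵀP(t)x.

```python
import numpy as np
from scipy.integrate import solve_ivp

# example data (chosen for illustration): a double integrator with simple weights
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Qx = np.diag([1.0, 0.0])
Qu = np.array([[1.0]])
P1 = np.eye(2)
T, x0 = 5.0, np.array([1.0, 0.0])

def riccati_rhs(t, p):
    # -Pdot = P A + A^T P - P B Qu^{-1} B^T P + Qx, rearranged for Pdot
    P = p.reshape(2, 2)
    Pdot = -(P @ A + A.T @ P - P @ B @ np.linalg.solve(Qu, B.T) @ P + Qx)
    return Pdot.ravel()

# integrate backward from t = T to t = 0, then wrap the result as a function of time
riccati = solve_ivp(riccati_rhs, (T, 0.0), P1.ravel(), dense_output=True)
P_of_t = lambda t: riccati.sol(t).reshape(2, 2)

def closed_loop(t, x):
    u = -np.linalg.solve(Qu, B.T @ P_of_t(t) @ x)   # u = -Qu^{-1} B^T P(t) x
    return A @ x + B @ u

sol = solve_ivp(closed_loop, (0.0, T), x0, dense_output=True)
print(sol.y[:, -1])   # state at the final time, driven toward the origin
```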

An important variation of this problem is the case when we choose T = ∞ and


eliminate the terminal cost (set P1 = 0). This gives us the cost function
$$J = \int_0^\infty \big( x^T Q_x x + u^T Q_u u \big)\,dt. \qquad (2.8)$$

Since we do not have a terminal cost, there is no constraint on the final value of λ or,
equivalently, P (t). We can thus seek to find a constant P satisfying equation (2.7).
In other words, we seek to find P such that

$$PA + A^T P - PBQ_u^{-1}B^T P + Q_x = 0. \qquad (2.9)$$

This equation is called the algebraic Riccati equation. Given a solution, we can
choose our input as
$$u = -Q_u^{-1} B^T P x.$$
This represents a constant gain $K = Q_u^{-1} B^T P$ where P is the solution of the
algebraic Riccati equation.
The implications of this result are interesting and important. First, we notice
that if Qx > 0 and the control law corresponds to a finite minimum of the cost,
then we must have that limt→∞ x(t) = 0, otherwise the cost will be unbounded.
This means that the optimal control for moving from any state x to the origin
can be achieved by applying a feedback u = −Kx for K chosen as described as
above and letting the system evolve in closed loop. More amazingly, the gain matrix
K can be written in terms of the solution to a (matrix) quadratic equation (2.9).
This quadratic equation can be solved numerically: in MATLAB the command K
= lqr(A, B, Qx, Qu) provides the optimal feedback compensator.
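For readers not using MATLAB, the same computation can be sketched in Python with scipy (an illustrative analogue of the lqr command above; the system matrices are arbitrary placeholders): solve the algebraic Riccati equation (2.9) for P and form the gain K = Qu⁻¹BᵀP.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# placeholder system and weights for illustration
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Qx = np.eye(2)
Qu = np.array([[1.0]])

# solve A^T P + P A - P B Qu^{-1} B^T P + Qx = 0 for the stabilizing P
P = solve_continuous_are(A, B, Qx, Qu)
K = np.linalg.solve(Qu, B.T @ P)          # constant gain K = Qu^{-1} B^T P
print(K)
print(np.linalg.eigvals(A - B @ K))       # closed-loop eigenvalues lie in the left half plane
```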
In deriving the optimal quadratic regulator, we have glossed over a number of
important details. It is clear from the form of the solution that we must have Qu > 0
since its inverse appears in the solution. We would typically also have Qx > 0 so
that the integral cost is only zero when x = 0, but in some instances we might only care about certain states, which would imply that Qx ≥ 0. For this case, if we let
Qx = H T H (always possible), our cost function becomes
$$J = \int_0^\infty \big( x^T H^T H x + u^T Q_u u \big)\,dt = \int_0^\infty \big( \|Hx\|^2 + u^T Q_u u \big)\,dt.$$

A technical condition for the optimal solution to exist is that the pair (A, H) be
detectable (implied by observability). This makes sense intuitively by considering
y = Hx as an output. If y is not observable then there may be non-zero initial
conditions that produce no output and so the cost would be zero. This would lead
to an ill-conditioned problem and hence we will require that Qx ≥ 0 satisfy an
appropriate observability condition.
We summarize the main results as a theorem.

Theorem 2.2. Consider a linear control system with quadratic cost:


$$\dot x = Ax + Bu, \qquad J = \int_0^\infty \big( x^T Q_x x + u^T Q_u u \big)\,dt.$$

Assume that the system defined by (A, B) is reachable, Qx = Qxᵀ ≥ 0 and Qu = Quᵀ > 0. Further assume that Qx = HᵀH and that the linear system with dynamics matrix A and output matrix H is observable. Then the optimal controller satisfies
$$u = -Q_u^{-1} B^T P x, \qquad PA + A^T P - PBQ_u^{-1}B^T P = -Q_x,$$

and the minimum cost from initial condition x(0) is given by J ∗ = xT (0)P x(0).
The basic form of the solution follows from the necessary conditions, with the
theorem asserting that a constant solution exists for T = ∞ when the additional
conditions are satisfied. The full proof can be found in standard texts on optimal
control, such as Lewis and Syrmos [LS95] or Athans and Falb [AF06]. A simplified
version, in which we first assume the optimal control is linear, is left as an exercise.
Example 2.4 Optimal control of a double integrator
Consider a double integrator system
$$\frac{dx}{dt} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} x + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u$$
with quadratic cost given by
$$Q_x = \begin{pmatrix} q^2 & 0 \\ 0 & 0 \end{pmatrix}, \qquad Q_u = 1.$$
The optimal control is given by the solution of the matrix Riccati equation (2.9). Let P be a symmetric positive definite matrix of the form
$$P = \begin{pmatrix} a & b \\ b & c \end{pmatrix}.$$
Then the Riccati equation becomes
$$\begin{pmatrix} -b^2 + q^2 & a - bc \\ a - bc & 2b - c^2 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},$$
which has solution
$$P = \begin{pmatrix} \sqrt{2q^3} & q \\ q & \sqrt{2q} \end{pmatrix}.$$
The controller is given by
$$K = Q_u^{-1} B^T P = \begin{pmatrix} q & \sqrt{2q} \end{pmatrix}.$$
The feedback law minimizing the given cost function is then u = −Kx.
To better understand the structure of the optimal solution, we examine the
eigenstructure of the closed loop system. The closed-loop dynamics matrix is given by
$$A_{cl} = A - BK = \begin{pmatrix} 0 & 1 \\ -q & -\sqrt{2q} \end{pmatrix}.$$
The characteristic polynomial of this matrix is
$$\lambda^2 + \sqrt{2q}\,\lambda + q.$$

Comparing this to $\lambda^2 + 2\zeta\omega_0\lambda + \omega_0^2$, we see that
$$\omega_0 = \sqrt{q}, \qquad \zeta = \frac{1}{\sqrt{2}}.$$
Thus the optimal controller gives a closed loop system with damping ratio ζ = 0.707,
giving a good tradeoff between rise time and overshoot (see ÅM08). ∇
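The closed-form solution in this example is easy to spot-check numerically. The sketch below (not part of the text; the value q = 4 is an arbitrary choice) solves the algebraic Riccati equation for the double integrator and compares the result with the analytic P, the gain K = (q, √(2q)), and the damping ratio ζ = 1/√2.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

q = 4.0                                    # arbitrary positive weight for the check
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Qx = np.diag([q**2, 0.0])
Qu = np.array([[1.0]])

P = solve_continuous_are(A, B, Qx, Qu)
P_analytic = np.array([[np.sqrt(2*q**3), q], [q, np.sqrt(2*q)]])
print(np.allclose(P, P_analytic))          # True: matches the analytic Riccati solution

K = np.linalg.solve(Qu, B.T @ P)           # expect [q, sqrt(2 q)]
eigs = np.linalg.eigvals(A - B @ K)
omega0 = np.sqrt(np.prod(np.abs(eigs)))    # natural frequency, expect sqrt(q)
zeta = -np.sum(eigs).real / (2 * omega0)   # damping ratio, expect 1/sqrt(2) = 0.707
print(K, omega0, zeta)
```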

2.5 Choosing LQR weights


One of the key questions in LQR design is how to choose the weights Qx and Qu .
To choose specific values for the cost function weights Qx and Qu , we must use our
knowledge of the system we are trying to control. A particularly simple choice is to
use diagonal weights
$$Q_x = \begin{pmatrix} q_1 & & \\ & \ddots & \\ & & q_n \end{pmatrix}, \qquad Q_u = \begin{pmatrix} \rho_1 & & \\ & \ddots & \\ & & \rho_m \end{pmatrix}.$$

For this choice of Qx and Qu , the individual diagonal elements describe how much
each state and input (squared) should contribute to the overall cost. Hence, we
can take states that should remain small and attach higher weight values to them.
Similarly, we can penalize an input versus the states and other inputs through
choice of the corresponding input weight ρj .
Choosing the individual weights for the (diagonal) elements of the Qx and Qu
matrix can be done by deciding on a weighting of the errors from the individual
terms. Bryson and Ho [BH75] have suggested the following method for choosing
the matrices Qx and Qu in equation (2.8): (1) choose qi and ρj as the inverse of
the square of the maximum value for the corresponding xi or uj ; (2) modify the
elements to obtain a compromise among response time, damping and control effort.
This second step can be performed by trial and error.
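As a concrete illustration of step (1) of the Bryson and Ho procedure, the sketch below builds diagonal weights from assumed maximum acceptable values of each state and input; the numbers are placeholders chosen for illustration, not values from the text.

```python
import numpy as np

# assumed maximum acceptable excursions (placeholder values)
x_max = np.array([0.01, 0.10, np.deg2rad(5.0)])   # e.g. two positions [m] and an angle [rad]
u_max = np.array([10.0, 10.0])                     # e.g. two force inputs [N]

# Bryson's rule: weight each squared term by the inverse square of its maximum value
Qx = np.diag(1.0 / x_max**2)
Qu = np.diag(1.0 / u_max**2)
print(np.diag(Qx), np.diag(Qu))
```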
It is also possible to choose the weights such that only a given subset of variables is considered in the cost function. Let z = Hx be the output we want to keep small and verify that (A, H) is observable. Then we can use a cost function of the form
$$Q_x = H^T H, \qquad Q_u = \rho I.$$
The constant ρ allows us to trade off ‖z‖² versus ρ‖u‖².
We illustrate the various choices through an example application.

Example 2.5 Thrust vectored aircraft


Consider the thrust vectored aircraft example introduced in ÅM08, Example 2.9.
The system is shown in Figure 2.3, reproduced from ÅM08. The linear quadratic
regulator problem was illustrated in Example 6.8, where the weights were chosen
as Qx = I and Qu = ρI. Figure 2.4 reproduces the step response for this case.
A more physically motivated weighting can be computed by specifying the comparable errors in each of the states and adjusting the weights accordingly. Suppose, for example, that we consider a 1 cm error in x, a 10 cm error in y and a 5° error in θ

Figure 2.3: Vectored thrust aircraft. The Harrier AV-8B military aircraft (a) redirects its engine thrust downward so that it can “hover” above the ground. Some air from the engine is diverted to the wing tips to be used for maneuvering. As shown in (b), the net thrust on the aircraft can be decomposed into a horizontal force F1 and a vertical force F2 acting at a distance r from the center of mass.

to be equivalently bad. In addition, we wish to penalize the forces in the sideways direction since these result in a loss of efficiency. This can be accounted for in the LQR weights by choosing
$$Q_x = \begin{pmatrix} 100 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2\pi/9 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \qquad Q_u = 0.1 \times \begin{pmatrix} 1 & 0 \\ 0 & 10 \end{pmatrix}.$$

The results of this choice of weights are shown in Figure 2.5. ∇

2.6 Advanced Topics


In this section we briefly touch on some related topics in optimal control, with
reference to more detailed treatments where appropriate.
Singular extremals. The necessary conditions in the maximum principle enforce the constraints through the use of the Lagrange multipliers λ(t). In some instances, we can
get an extremal curve that has one or more of the λ’s identically equal to zero. This
corresponds to a situation in which the constraint is satisfied strictly through the
minimization of the cost function and does not need to be explicitly enforced. We
illustrate this case through an example.

Example 2.6 Nonholonomic integrator


Consider the minimum time optimization problem for the nonholonomic integrator
introduced in Example 1.2 with input constraints |ui | ≤ 1. The Hamiltonian for the

Figure 2.4: Step response for a vectored thrust aircraft. The plot in (a) shows the x and y positions of the aircraft when it is commanded to move 1 m in each direction. In (b) the x motion is shown for control weights ρ = 1, 10², 10⁴. A higher weight of the input term in the cost function causes a more sluggish response.

Figure 2.5: Step response for a vectored thrust aircraft using physically motivated LQR weights (a). The rise time for x is much faster than in Figure 2.4a, but there is a small oscillation and the inputs required are quite large (b).

system is given by
H = 1 + λ1 u1 + λ2 u2 + λ3 x2 u1
and the resulting equations for the Lagrange multipliers are

$$\dot\lambda_1 = 0, \qquad \dot\lambda_2 = -\lambda_3 u_1, \qquad \dot\lambda_3 = 0. \qquad (2.10)$$

It follows from these equations that λ1 and λ3 are constant. To find the input u
corresponding to the extremal curves, we see from the Hamiltonian that

$$u_1 = -\mathrm{sgn}(\lambda_1 + \lambda_3 x_2), \qquad u_2 = -\mathrm{sgn}(\lambda_2).$$

These equations are well-defined as long as the arguments of sgn(·) are non-zero
and we get switching of the inputs when the arguments pass through 0.
An example of an abnormal extremal is the optimal trajectory from x0 = (0, 0, 0) to xf = (ρ, 0, 0) where ρ > 0. The minimum time trajectory is clearly given

by moving on a straight line with u1 = 1 and u2 = 0. This extremal satisfies the


necessary conditions but with λ2 ≡ 0, so that the “constraint” that ẋ2 = u2 is not
strictly enforced through the Lagrange multipliers. ∇

2.7 Further Reading


There are a number of excellent books on optimal control. One of the first (and
best) is the book by Pontryagin et al. [PBGM62]. During the 1960s and 1970s a
number of additional books were written that provided many examples and served
as standard textbooks in optimal control classes. Athans and Falb [AF06] and
Bryson and Ho [BH75] are two such texts. A very elegant treatment of optimal
control from the point of view of optimization over general linear spaces is given by
Luenberger [Lue97]. Finally, a modern engineering textbook that contains a very
compact and concise derivation of the key results in optimal control is the book by
Lewis and Syrmos [LS95].

Exercises
2.1 (a) Let G1, G2, . . . , Gk be a set of row vectors on Rn. Let F be another row
vector on Rn such that for every x ∈ Rn satisfying Gi x = 0, i = 1, . . . , k, we have
F x = 0. Show that there are constants λ1, λ2, . . . , λk such that
$$F = \sum_{i=1}^{k} \lambda_i G_i.$$

(b) Let x∗ ∈ Rn be an extremal point (maximum or minimum) of a function f subject to the constraints gi(x) = 0, i = 1, . . . , k. Assuming that the gradients ∂gi(x∗)/∂x are linearly independent, show that there are k scalars λi, i = 1, . . . , k such that the function
$$\tilde f(x) = f(x) + \sum_{i=1}^{k} \lambda_i g_i(x)$$

has an extremal point at x∗.
2.2 Consider the following control system

q̇ = u
Ẏ = quT − uq T

where u ∈ Rm and Y ∈ Rm×m is a skew-symmetric matrix, Y T = −Y .

(a) For the fixed end point problem, derive the form of the optimal controller
minimizing the following integral
$$\frac{1}{2}\int_0^1 u^T u\,dt.$$

(b) For the boundary conditions q(0) = q(1) = 0, Y (0) = 0 and
$$Y(1) = \begin{pmatrix} 0 & -y_3 & y_2 \\ y_3 & 0 & -y_1 \\ -y_2 & y_1 & 0 \end{pmatrix}$$

for some y ∈ R3 , give an explicit formula for the optimal inputs u.


(c) (Optional) Find the input u to steer the system from (0, 0) to (0, Ỹ ) ∈ Rm ×
Rm×m where Ỹ T = −Ỹ .
(Hint: if you get stuck, there is a paper by Brockett on this problem.)
2.3 In this problem, you will use the maximum principle to show that the shortest
path between two points is a straight line. We model the problem by constructing
a control system
ẋ = u,
where x ∈ R2 is the position in the plane and u ∈ R2 is the velocity vector along
the curve. Suppose we wish to find a curve of minimal length connecting x(0) = x0
and x(1) = xf . To minimize the length, we minimize the integral of the velocity
along the curve,
$$J = \int_0^1 \|\dot x\|\,dt = \int_0^1 \sqrt{\dot x^T \dot x}\,dt,$$
subject to the initial and final state constraints. Use the maximum principle to
show that the minimal length path is indeed a straight line at maximum velocity.
(Hint: try minimizing using the integral cost ẋT ẋ first and then show this also
optimizes the optimal control problem with integral cost kẋk.)
2.4 Consider the optimal control problem for the system

ẋ = −ax + bu,

where x ∈ R is a scalar state, u ∈ R is the input, the initial state x(t0) is given,
and a, b ∈ R are positive constants. (Note that this system is not quite the same as
the one in Example 2.2.) The cost function is given by
$$J = \tfrac{1}{2}\int_{t_0}^{t_f} u^2(t)\,dt + \tfrac{1}{2} c\, x^2(t_f),$$

where the terminal time tf is given and c is a constant.

(a) Solve explicitly for the optimal control u∗ (t) and the corresponding state x∗ (t)
in terms of t0 , tf , x(t0 ) and t and describe what happens to the terminal state
x∗ (tf ) as c → ∞.
(b) Show that the system is differentially flat with appropriate choice of output(s)
and compute the state and input as a function of the flat output(s).
(c) Using the polynomial basis {tk , k = 0, . . . , M − 1} with an appropriate choice
of M , solve for the (non-optimal) trajectory between x(t0 ) and x(tf ). Your answer
should specify the explicit input ud (t) and state xd (t) in terms of t0 , tf , x(t0 ), x(tf )
and t.

(d) Let a = 1 and c = 1. Use your solution to the optimal control problem and
the flatness-based trajectory generation to find a trajectory between x(0) = 0 and
x(1) = 1. Plot the state and input trajectories for each solution and compare the
costs of the two approaches.
(e) (Optional) Suppose that we choose more than the minimal number of basis
functions for the differentially flat output. Show how to use the additional degrees
of freedom to minimize the cost of the flat trajectory and demonstrate that you can
obtain a cost that is closer to the optimal.
2.5 Repeat Exercise 2.4 using the system
$$\dot x = -a x^3 + b u.$$
For part (a) you need only write the conditions for the optimal cost.
2.6 Consider the problem of moving a two-wheeled mobile robot (e.g., a Segway)
from one position and orientation to another. The dynamics for the system is given
by the nonlinear differential equation
ẋ = cos θ v, ẏ = sin θ v, θ̇ = ω,
where (x, y) is the position of the rear wheels, θ is the angle of the robot with
respect to the x axis, v is the forward velocity of the robot and ω is the spinning rate.
We wish to choose an input (v, ω) that minimizes the time that it takes to move
between two configurations (x0 , y0 , θ0 ) and (xf , yf , θf ), subject to input constraints
|v| ≤ L and |ω| ≤ M .
Use the maximum principle to show that any optimal trajectory consists of
segments in which the robot is traveling at maximum velocity in either the forward
or reverse direction, and going either straight, hard left (ω = −M ) or hard right
(ω = +M ).
Note: one of the cases is a bit tricky and cannot be completely proven with the
tools we have learned so far. However, you should be able to show the other cases
and verify that the tricky case is possible.
2.7 Consider a linear system with input u and output y and suppose we wish to
minimize the quadratic cost function
$$J = \int_0^\infty \big( y^T y + \rho\, u^T u \big)\,dt.$$

Show that if the corresponding linear system is observable, then the closed loop
system obtained by using the optimal feedback u = −Kx is guaranteed to be
stable.
2.8 Consider the system transfer function
$$H(s) = \frac{s + b}{s(s + a)}, \qquad a, b > 0,$$
with state space representation
$$\dot x = \begin{pmatrix} 0 & 1 \\ 0 & -a \end{pmatrix} x + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u, \qquad y = \begin{pmatrix} b & 1 \end{pmatrix} x$$

and performance criterion
$$V = \int_0^\infty \big( x_1^2 + u^2 \big)\,dt.$$

(a) Let
$$P = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix},$$
with p12 = p21 and P > 0 (positive definite). Write the steady state Riccati equation
as a system of four explicit equations in terms of the elements of P and the constants
a and b.
(b) Find the gains for the optimal controller assuming the full state is available for
feedback.
(c) Find the closed loop natural frequency and damping ratio.
2.9 Consider the optimal control problem for the system
$$\dot x = ax + bu, \qquad J = \tfrac{1}{2}\int_{t_0}^{t_f} u^2(t)\,dt + \tfrac{1}{2} c\, x^2(t_f),$$

where x ∈ R is a scalar state, u ∈ R is the input, the initial state x(t0 ) is given, and
a, b ∈ R are positive constants. We take the terminal time tf as given and let c > 0
be a constant that balances the final value of the state with the input required to
get to that position. The optimal trajectory is derived in Example 2.2.
Now consider the infinite horizon cost
$$J = \tfrac{1}{2}\int_{t_0}^{\infty} u^2(t)\,dt$$

with x(t) at t = ∞ constrained to be zero.


(a) Solve for u∗ (t) = −bP x∗ (t) where P is the positive solution corresponding
to the algebraic Riccati equation. Note that this gives an explicit feedback law
(u = −bP x).
(b) Plot the state solution of the finite time optimal controller for the following
parameter values
a = 2, b = 0.5, x(t0 ) = 4,
c = 0.1, 10, tf = 0.5, 1, 10.
(This should give you a total of 6 curves.) Compare these to the infinite time optimal
control solution. Which finite time solution is closest to the infinite time solution?
Why?
2.10 Consider the lateral control problem for an autonomous ground vehicle from
Example 1.1. We assume that we are given a reference trajectory r = (xd , yd )
corresponding to the desired trajectory of the vehicle. For simplicity, we will assume
that we wish to follow a straight line in the x direction at a constant velocity vd > 0
and hence we focus on the y and θ dynamics:
$$\dot y = \sin\theta\, v_d, \qquad \dot\theta = \frac{1}{l}\tan\phi\, v_d.$$
We let vd = 10 m/s and l = 2 m.

(a) Design an LQR controller that stabilizes the position y to yd = 0. Plot the
step and frequency response for your controller and determine the overshoot, rise
time, bandwidth and phase margin for your design. (Hint: for the frequency domain
specifications, break the loop just before the process dynamics and use the resulting
SISO loop transfer function.)
(b) Suppose now that yd (t) is not identically zero, but is instead given by yd (t) =
r(t). Modify your control law so that you track r(t) and demonstrate the perfor-
mance of your controller on a “slalom course” given by a sinusoidal trajectory with
magnitude 1 meter and frequency 1 Hz.
