Optimization-Based Control
Richard M. Murray
Control and Dynamical Systems
California Institute of Technology
Chapter 2
Optimal Control
This set of notes expands on Chapter 6 of Feedback Systems by Åström and Murray
(ÅM08), which introduces the concepts of reachability and state feedback. We also
expand on topics in Section 7.5 of ÅM08 in the area of feedforward compensation.
Beginning with a review of optimization, we introduce the notion of Lagrange multipliers and provide a summary of Pontryagin's maximum principle. Using these tools we derive the linear quadratic regulator for linear systems and describe its use.
Prerequisites. Readers should be familiar with modeling of input/output control
systems using differential equations, linearization of a system around an equilib-
rium point and state space control of linear systems, including reachability and
eigenvalue assignment. Some familiarity with optimization of nonlinear functions is
also assumed.
Figure 2.2: Optimization with constraints. (a) We seek a point x∗ that minimizes F(x) while lying on the surface G(x) = 0 (a line in the x1 x2 plane). (b) We can parameterize the constrained directions by computing the gradient of the constraint G. Note that x ∈ R2 in (a), with the third dimension showing F(x), while x ∈ R3 in (b).
that are normal to the constraints, so that the only directions that increase the
cost violate the constraints. We thus require that there exist scalars λi , i = 1, . . . , k
such that
$$\frac{\partial F}{\partial x}(x^*) + \sum_{i=1}^{k} \lambda_i \frac{\partial G_i}{\partial x}(x^*) = 0.$$
If we let G = (G1, G2, . . . , Gk)ᵀ, then we can write this condition as
$$\frac{\partial F}{\partial x} + \lambda^T \frac{\partial G}{\partial x} = 0, \qquad (2.1)$$
where the term ∂F/∂x is the usual (gradient) optimality condition and the term ∂G/∂x is used to "cancel" the gradient in the directions normal to the constraint.
An alternative condition can be derived by modifying the cost function to incorporate the constraints. Defining F̃ = F + Σ λi Gi, the necessary condition becomes
$$\frac{\partial \tilde{F}}{\partial x}(x^*) = 0.$$
The variables λ can be regarded as free variables, which implies that we need to choose x such that G(x) = 0 in order to ensure the cost is minimized. Otherwise, we could choose λ to generate a large cost.
which has an unconstrained minimum at x = (a, b). Suppose that we add a con-
straint G(x) = 0 given by
G(x) = x1 − x2 .
With this constraint, we seek to optimize F subject to x1 = x2 . Although in this
case we could do this by simple substitution, we instead carry out the more general
procedure using Lagrange multipliers.
The augmented cost function is given by
$$\tilde{F}(x) = (x_1 - a)^2 + (x_2 - b)^2 + \lambda (x_1 - x_2),$$
where λ is the Lagrange multiplier for the constraint. Taking the derivative of F̃, we have
$$\frac{\partial \tilde{F}}{\partial x} = \begin{pmatrix} 2x_1 - 2a + \lambda & 2x_2 - 2b - \lambda \end{pmatrix}.$$
Setting each of these equations equal to zero, we have that at the minimum
$$x_1^* = a - \lambda/2, \qquad x_2^* = b + \lambda/2.$$
The remaining equation that we need is the constraint, which requires that x∗1 = x∗2. Using these three equations, we see that λ∗ = a − b and we have
$$x_1^* = \frac{a+b}{2}, \qquad x_2^* = \frac{a+b}{2}.$$
To verify the geometric view described above, note that the gradients of F and G are given by
$$\frac{\partial F}{\partial x} = \begin{pmatrix} 2x_1 - 2a & 2x_2 - 2b \end{pmatrix}, \qquad \frac{\partial G}{\partial x} = \begin{pmatrix} 1 & -1 \end{pmatrix}.$$
At the optimal value of the (constrained) optimization, we have
$$\frac{\partial F}{\partial x} = \begin{pmatrix} b - a & a - b \end{pmatrix}, \qquad \frac{\partial G}{\partial x} = \begin{pmatrix} 1 & -1 \end{pmatrix}.$$
Although the derivative of F is not zero, it is pointed in a direction that is normal
to the constraint, and hence we cannot decrease the cost while staying on the
constraint surface. ∇
We have focused on finding the minimum of a function. We can switch back and forth between maximum and minimum by simply negating the cost function:
$$\max_x F(x) = -\min_x \bigl(-F(x)\bigr).$$
We see that the conditions that we have derived are independent of the sign of F since they only depend on the gradient being zero in appropriate directions. Thus finding x∗ that satisfies the conditions corresponds to finding an extremum for the function.
Very good software is available for numerically solving optimization problems of this sort. The NPSOL and SNOPT libraries are available in FORTRAN (and C). In MATLAB, the fmincon function can be used to solve a constrained optimization problem.
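As a rough numerical counterpart (not part of the original text), the constrained example above can be checked with SciPy's general purpose solver. The sketch below assumes the quadratic cost F(x) = (x1 − a)² + (x2 − b)² implied by the gradients computed earlier, with placeholder values for a and b:

```python
# Sketch: minimize F(x) = (x1 - a)^2 + (x2 - b)^2 subject to G(x) = x1 - x2 = 0.
# SciPy selects an SLSQP solver automatically when equality constraints are given.
import numpy as np
from scipy.optimize import minimize

a, b = 2.0, 0.0   # placeholder values; any choice of a, b works the same way

F = lambda x: (x[0] - a)**2 + (x[1] - b)**2
G = {"type": "eq", "fun": lambda x: x[0] - x[1]}   # the constraint G(x) = 0

res = minimize(F, x0=np.zeros(2), constraints=[G])
print(res.x)      # expect approximately [(a+b)/2, (a+b)/2] = [1, 1]
```

To solver tolerance, the reported minimizer should agree with the answer x∗ = ((a + b)/2, (a + b)/2) obtained above with the Lagrange multiplier.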
ẋ = f (x, u), x ∈ Rn , u ∈ Rm .
In this formulation, Q ≥ 0 penalizes state error, R > 0 penalizes the input and
P1 > 0 penalizes terminal state. This problem can be modified to track a desired
trajectory (xd , ud ) by rewriting the cost function in terms of (x − xd ) and (u − ud ).
Terminal constraints. It is often convenient to ask that the final value of the tra-
jectory, denoted xf , be specified. We can do this by requiring that x(T ) = xf or by
using a more general form of constraint:
ψi (x(T )) = 0, i = 1, . . . , q.
The fully constrained case is obtained by setting q = n and defining ψi (x(T )) =
xi (T ) − xi,f . For a control problem with a full set of terminal constraints, V (x(T ))
can be omitted (since its value is fixed).
Time optimal control. If we constrain the terminal condition to x(T ) = xf , let the
terminal time T be free (so that we can optimize over it) and choose L(x, u) = 1,
we can find the time-optimal trajectory between an initial and final condition. This
problem is usually only well-posed if we additionally constrain the inputs u to be
bounded.
A very general set of conditions is available for the optimal control problem that captures most of these special cases in a unifying framework. Consider a nonlinear system
$$\dot{x} = f(x, u), \qquad x \in \mathbb{R}^n,\quad x(0) \text{ given}, \quad u \in \Omega \subset \mathbb{R}^m,$$
where f(x, u) = (f1(x, u), . . . , fn(x, u)) : Rn × Rm → Rn. We wish to minimize a cost function J with terminal constraints:
$$J = \int_0^T L(x, u)\, dt + V(x(T)), \qquad \psi(x(T)) = 0.$$
To state the conditions, we define the Hamiltonian H(x, u, λ) = L(x, u) + λᵀf(x, u). The variables λ are functions of time and are often referred to as the costate variables. A set of necessary conditions for a solution to be optimal was derived by Pontryagin [PBGM62].
Theorem 2.1 (Maximum Principle). If (x∗, u∗) is optimal, then there exist λ∗(t) ∈ Rn and ν∗ ∈ Rq such that
$$\dot{x}_i = \frac{\partial H}{\partial \lambda_i}, \qquad -\dot{\lambda}_i = \frac{\partial H}{\partial x_i},$$
$$x(0) \ \text{given}, \qquad \psi(x(T)) = 0, \qquad \lambda(T) = \frac{\partial V}{\partial x}(x(T)) + \nu^T \frac{\partial \psi}{\partial x},$$
and
$$H(x^*(t), u^*(t), \lambda^*(t)) \le H(x^*(t), u, \lambda^*(t)) \quad \text{for all}\ u \in \Omega.$$
The form of the optimal solution is given by the solution of a differential equation
with boundary conditions. If u = arg min H(x, u, λ) exists, we can use this to choose
the control law u and solve for the resulting feasible trajectory that minimizes the
cost. The boundary conditions are given by the n initial states x(0), the q terminal
constraints on the state ψ(x(T )) = 0 and the n − q final values for the Lagrange
multipliers
$$\lambda(T) = \frac{\partial V}{\partial x}(x(T)) + \nu^T \frac{\partial \psi}{\partial x}.$$
In this last equation, ν is a free variable and so there are n equations in n + q free
variables, leaving n − q constraints on λ(T ). In total, we thus have 2n boundary
values.
The maximum principle is a very general (and elegant) theorem. It allows the
dynamics to be nonlinear and the input to be constrained to lie in a set Ω, allowing
the possibility of bounded inputs. If Ω = Rm (unconstrained input) and H is
differentiable, then a necessary condition for the optimal input is
$$\frac{\partial H}{\partial u} = 0.$$
We note that even though we are minimizing the cost, this is still usually called the
maximum principle (an artifact of history).
Sketch of proof. We follow the proof given by Lewis and Syrmos [LS95], omitting
some of the details required for a fully rigorous proof. We use the method of La-
grange multipliers, augmenting our cost function by the dynamical constraints and
the terminal constraints:
$$\tilde{J}(x(\cdot), u(\cdot), \lambda(\cdot), \nu) = J(x, u) + \int_0^T \Bigl(-\lambda^T(t)\bigl(\dot{x}(t) - f(x, u)\bigr)\Bigr)\, dt + \nu^T \psi(x(T))$$
$$= \int_0^T \Bigl(L(x, u) - \lambda^T(t)\bigl(\dot{x}(t) - f(x, u)\bigr)\Bigr)\, dt + V(x(T)) + \nu^T \psi(x(T)).$$
Note that λ is a function of time, with each λ(t) corresponding to the instantaneous
constraint imposed by the dynamics. The integral over the interval [0, T ] plays the
role of the sum of the finite constraints in the regular optimization.
Making use of the definition of the Hamiltonian, the augmented cost becomes
$$\tilde{J}(x(\cdot), u(\cdot), \lambda(\cdot), \nu) = \int_0^T \bigl(H(x, u) - \lambda^T(t)\,\dot{x}\bigr)\, dt + V(x(T)) + \nu^T \psi(x(T)).$$
We can now “linearize” the cost function around the optimal solution x(t) = x∗ (t)+
δx(t), u(t) = u∗ (t) + δu(t), λ(t) = λ∗ (t) + δλ(t) and ν = ν ∗ + δν. Taking T as fixed
for simplicity (see [LS95] for the more general case), the incremental cost can be
written as
$$\delta\tilde{J} = \tilde{J}(x^*+\delta x,\, u^*+\delta u,\, \lambda^*+\delta\lambda,\, \nu^*+\delta\nu) - \tilde{J}(x^*, u^*, \lambda^*, \nu^*)$$
$$\approx \int_0^T \Bigl(\frac{\partial H}{\partial x}\,\delta x + \frac{\partial H}{\partial u}\,\delta u - \lambda^T \delta\dot{x} + \Bigl(\frac{\partial H}{\partial \lambda} - \dot{x}^T\Bigr)\delta\lambda\Bigr)\, dt$$
$$\quad + \frac{\partial V}{\partial x}\,\delta x(T) + \nu^T \frac{\partial \psi}{\partial x}\,\delta x(T) + \delta\nu^T \psi\bigl(x(T)\bigr),$$
where we have omitted the time argument inside the integral and all derivatives
are evaluated along the optimal solution.
We can eliminate the dependence on δ ẋ using integration by parts:
$$-\int_0^T \lambda^T \delta\dot{x}\, dt = -\lambda^T(T)\,\delta x(T) + \lambda^T(0)\,\delta x(0) + \int_0^T \dot{\lambda}^T \delta x\, dt.$$
Since we are requiring x(0) = x0 , the first term vanishes and substituting this into
δ J˜ yields
$$\delta\tilde{J} \approx \int_0^T \Bigl(\Bigl(\frac{\partial H}{\partial x} + \dot{\lambda}^T\Bigr)\delta x + \frac{\partial H}{\partial u}\,\delta u + \Bigl(\frac{\partial H}{\partial \lambda} - \dot{x}^T\Bigr)\delta\lambda\Bigr)\, dt$$
$$\quad + \Bigl(\frac{\partial V}{\partial x} + \nu^T \frac{\partial \psi}{\partial x} - \lambda^T(T)\Bigr)\delta x(T) + \delta\nu^T \psi\bigl(x(T)\bigr).$$
To be optimal, we require δ J˜ = 0 for all δx, δu, δλ and δν, and we obtain the
(local) conditions in the theorem.
2.3 Examples
To illustrate the use of the maximum principle, we consider a number of analytical
examples. Additional examples are given in the exercises.
Example 2.2 Scalar linear system
Consider the optimal control problem for the system
ẋ = ax + bu, (2.3)
where x ∈ R is a scalar state, u ∈ R is the input, the initial state x(t0 ) is given,
and a, b ∈ R are positive constants. We wish to find a trajectory (x(t), u(t)) that
minimizes the cost function
$$J = \frac{1}{2}\int_{t_0}^{t_f} u^2(t)\, dt + \frac{1}{2}\, c\, x^2(t_f),$$
where the terminal time tf is given and c > 0 is a constant. This cost function
balances the final value of the state with the input required to get to that state.
To solve the problem, we define the various elements used in the maximum principle. Our integral and terminal costs are given by
$$L = \frac{1}{2}\, u^2(t), \qquad V = \frac{1}{2}\, c\, x^2(t_f).$$
We write the Hamiltonian of this system and derive the following expressions for the costate λ:
$$H = L + \lambda f = \frac{1}{2} u^2 + \lambda(ax + bu),$$
$$\dot{\lambda} = -\frac{\partial H}{\partial x} = -a\lambda, \qquad \lambda(t_f) = \frac{\partial V}{\partial x} = c\, x(t_f).$$
This is a final value problem for a linear differential equation in λ and the solution
can be shown to be
λ(t) = cx(tf )ea(tf −t) .
The optimal control is given by
$$\frac{\partial H}{\partial u} = u + b\lambda = 0 \quad\Longrightarrow\quad u^*(t) = -b\lambda(t) = -b\, c\, x(t_f)\, e^{a(t_f - t)}.$$
Substituting this control into the dynamics given by equation (2.3) yields a first-
order ODE in x:
ẋ = ax − b2 cx(tf )ea(tf −t) .
This can be solved explicitly as
$$x^*(t) = x(t_0)\, e^{a(t - t_0)} + \frac{b^2 c}{2a}\, x^*(t_f)\Bigl[e^{a(t_f - t)} - e^{a(t + t_f - 2t_0)}\Bigr]. \qquad (2.5)$$
Setting t = tf and solving for x(tf) gives
$$x^*(t_f) = \frac{x(t_0)\, e^{a(t_f - t_0)}}{1 + \dfrac{b^2 c}{2a}\bigl(e^{2a(t_f - t_0)} - 1\bigr)}.$$
We can use the form of this expression to explore how our cost function affects the optimal trajectory. For example, we can ask what happens to the terminal state x∗(tf) as c → ∞. Setting t = tf in equation (2.5) and taking the limit we find that
$$\lim_{c \to \infty} x^*(t_f) = 0.$$
∇
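The same example can also be treated numerically as the two point boundary value problem suggested by the maximum principle. The sketch below is not part of the original text; the constants a, b, c and the horizon are assumed values. It solves the coupled state/costate equations with SciPy and compares the computed costate with the closed form λ(t) = c x(tf) e^{a(tf − t)}:

```python
# Numerical check of Example 2.2 via the state/costate boundary value problem.
import numpy as np
from scipy.integrate import solve_bvp

a, b, c = 1.0, 1.0, 10.0      # assumed constants (not from the text)
t0, tf, x0 = 0.0, 1.0, 1.0    # assumed horizon and initial condition

def dynamics(t, y):
    x, lam = y
    u = -b * lam               # stationarity: dH/du = u + b*lambda = 0
    return np.vstack([a * x + b * u, -a * lam])

def boundary(ya, yb):
    # boundary conditions: x(t0) = x0 and lambda(tf) = c x(tf)
    return np.array([ya[0] - x0, yb[1] - c * yb[0]])

t = np.linspace(t0, tf, 50)
sol = solve_bvp(dynamics, boundary, t, np.zeros((2, t.size)))

x, lam = sol.sol(t)
lam_analytic = c * x[-1] * np.exp(a * (tf - t))
print("max costate error:", np.max(np.abs(lam - lam_analytic)))
```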
2.4 Linear Quadratic Regulators
An important special case of the optimal control problem is the case of linear dynamics and quadratic cost. Consider a linear system
$$\dot{x} = Ax + Bu, \qquad x \in \mathbb{R}^n,\ u \in \mathbb{R}^m,$$
with the finite horizon cost function
$$J = \frac{1}{2}\int_0^T \bigl(x^T Q_x x + u^T Q_u u\bigr)\, dt + \frac{1}{2}\, x^T(T)\, P_1\, x(T),$$
where Qx ≥ 0, Qu > 0 and P1 ≥ 0 are symmetric weighting matrices. The factor of 1/2 is included to simplify the derivation. (The optimal control will be unchanged if we multiply the entire cost function by 2.)
To find the optimal control, we apply the maximum principle. We begin by computing the Hamiltonian H:
$$H = \frac{1}{2} x^T Q_x x + \frac{1}{2} u^T Q_u u + \lambda^T (Ax + Bu).$$
Applying the results of Theorem 2.1, we obtain the necessary conditions
$$\dot{x} = \Bigl(\frac{\partial H}{\partial \lambda}\Bigr)^{T} = Ax + Bu, \qquad x(0) = x_0,$$
$$-\dot{\lambda} = \Bigl(\frac{\partial H}{\partial x}\Bigr)^{T} = Q_x x + A^T \lambda, \qquad \lambda(T) = P_1 x(T), \qquad (2.6)$$
$$0 = \Bigl(\frac{\partial H}{\partial u}\Bigr)^{T} = Q_u u + B^T \lambda.$$
The last condition can be solved to obtain the optimal controller
$$u = -Q_u^{-1} B^T \lambda,$$
which can be substituted into the dynamic equations (2.6). To solve for the optimal control we must solve a two point boundary value problem using the initial condition x(0) and the final condition λ(T). Unfortunately, it is very hard to solve such problems in general.
Given the linear nature of the dynamics, we attempt to find a solution by setting λ(t) = P(t)x(t) where P(t) ∈ Rn×n. Substituting this into the necessary condition, we obtain
$$\dot{\lambda} = \dot{P}x + P\dot{x} = \dot{P}x + P\bigl(Ax - BQ_u^{-1}B^T P\bigr)x,$$
$$\Longrightarrow\qquad -\dot{P}x - PAx + PBQ_u^{-1}B^T P x = Q_x x + A^T P x.$$
This equation is satisfied if we can find P(t) such that
$$-\dot{P} = PA + A^T P - PBQ_u^{-1}B^T P + Q_x, \qquad P(T) = P_1. \qquad (2.7)$$
This is a matrix differential equation (a Riccati differential equation) whose solution is determined by its value at the final time. To solve for the optimal control we integrate equation (2.7) backward in time from the terminal condition P(T) = P1
and then solve the original dynamics of the system forward in time from the ini-
tial condition x(0) = x0 . Note that this is a (time-varying) feedback control that
describes how to move from any state to the origin.
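A minimal sketch of this two step procedure, using assumed double integrator data (none of the numbers below come from the text), integrates the Riccati differential equation (2.7) backward in time and then simulates the closed loop forward:

```python
# Backward Riccati sweep followed by forward simulation of x under
# the time-varying feedback u = -Qu^{-1} B^T P(t) x (illustrative data only).
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed double-integrator data
B = np.array([[0.0], [1.0]])
Qx, Qu, P1 = np.diag([1.0, 0.0]), np.array([[1.0]]), np.eye(2)
T, x0 = 5.0, np.array([1.0, 0.0])

def riccati(t, p):
    P = p.reshape(2, 2)
    dP = -(P @ A + A.T @ P - P @ B @ np.linalg.solve(Qu, B.T @ P) + Qx)
    return dP.ravel()

# Backward pass: integrate from t = T down to t = 0 and keep the dense output
# so that P(t) can be evaluated inside the forward simulation.
back = solve_ivp(riccati, [T, 0.0], P1.ravel(), dense_output=True)

def closed_loop(t, x):
    P = back.sol(t).reshape(2, 2)
    u = -np.linalg.solve(Qu, B.T @ P @ x)
    return A @ x + B @ u

fwd = solve_ivp(closed_loop, [0.0, T], x0, dense_output=True)
print("x(T) =", fwd.y[:, -1])
```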
An important special case is the infinite horizon problem, in which T = ∞ and there is no terminal cost, so that the cost function becomes
$$J = \int_0^\infty \bigl(x^T Q_x x + u^T Q_u u\bigr)\, dt. \qquad (2.8)$$
Since we do not have a terminal cost, there is no constraint on the final value of λ or,
equivalently, P (t). We can thus seek to find a constant P satisfying equation (2.7).
In other words, we seek to find P such that
$$PA + A^T P - PBQ_u^{-1}B^T P + Q_x = 0. \qquad (2.9)$$
This equation is called the algebraic Riccati equation. Given a solution, we can
choose our input as
$$u = -Q_u^{-1} B^T P x.$$
This represents a constant gain K = Qu⁻¹BᵀP where P is the solution of the algebraic Riccati equation.
The implications of this result are interesting and important. First, we notice
that if Qx > 0 and the control law corresponds to a finite minimum of the cost,
then we must have that limt→∞ x(t) = 0, otherwise the cost will be unbounded.
This means that the optimal control for moving from any state x to the origin
can be achieved by applying a feedback u = −Kx for K chosen as described
above and letting the system evolve in closed loop. More amazingly, the gain matrix
K can be written in terms of the solution to a (matrix) quadratic equation (2.9).
This quadratic equation can be solved numerically: in MATLAB the command K
= lqr(A, B, Qx, Qu) provides the optimal feedback compensator.
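For readers not using MATLAB, a rough Python/SciPy sketch of the same computation is shown below; the matrices are placeholders (a double integrator) rather than anything taken from the text:

```python
# Solve the algebraic Riccati equation (2.9) and form the gain K = Qu^{-1} B^T P.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed example system
B = np.array([[0.0], [1.0]])
Qx = np.diag([1.0, 0.0])
Qu = np.array([[1.0]])

P = solve_continuous_are(A, B, Qx, Qu)
K = np.linalg.solve(Qu, B.T @ P)
print("LQR gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```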
In deriving the optimal quadratic regulator, we have glossed over a number of
important details. It is clear from the form of the solution that we must have Qu > 0
since its inverse appears in the solution. We would typically also have Qx > 0 so
that the integral cost is only zero when x = 0, but in some instances we might only care about certain states, which would imply that Qx ≥ 0. For this case, if we let Qx = HᵀH (always possible), our cost function becomes
$$J = \int_0^\infty \bigl(x^T H^T H x + u^T Q_u u\bigr)\, dt = \int_0^\infty \bigl(\|Hx\|^2 + u^T Q_u u\bigr)\, dt.$$
A technical condition for the optimal solution to exist is that the pair (A, H) be
detectable (implied by observability). This makes sense intuitively by considering
y = Hx as an output. If the system is not observable from y, then there may be non-zero initial conditions that produce no output and so the cost would be zero. This would lead
to an ill-conditioned problem and hence we will require that Qx ≥ 0 satisfy an
appropriate observability condition.
We summarize the main results as a theorem.
Theorem 2.2. Consider a linear system ẋ = Ax + Bu with cost function
$$J = \int_0^\infty \bigl(x^T Q_x x + u^T Q_u u\bigr)\, dt,$$
where Qx = HᵀH ≥ 0, Qu > 0, (A, B) is reachable and (A, H) is observable. Then the algebraic Riccati equation (2.9) has a unique positive definite solution P, the optimal control is given by the state feedback
$$u = -Q_u^{-1} B^T P x,$$
the closed loop system is asymptotically stable, and the minimum cost from initial condition x(0) is given by J∗ = xᵀ(0)Px(0).
The basic form of the solution follows from the necessary conditions, with the
theorem asserting that a constant solution exists for T = ∞ when the additional
conditions are satisfied. The full proof can be found in standard texts on optimal
control, such as Lewis and Syrmos [LS95] or Athans and Falb [AF06]. A simplified
version, in which we first assume the optimal control is linear, is left as an exercise.
Example 2.4 Optimal control of a double integrator
Consider a double integrator system
$$\frac{dx}{dt} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} x + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u$$
with quadratic cost given by
$$Q_x = \begin{pmatrix} 1/q^2 & 0 \\ 0 & 0 \end{pmatrix}, \qquad Q_u = 1.$$
The optimal control is given by the solution of the matrix Riccati equation (2.9). Let P be a symmetric positive definite matrix of the form
$$P = \begin{pmatrix} a & b \\ b & c \end{pmatrix}.$$
Then the Riccati equation becomes
$$\begin{pmatrix} -b^2 + 1/q^2 & a - bc \\ a - bc & 2b - c^2 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},$$
which has solution
$$P = \begin{pmatrix} \sqrt{2/q^3} & 1/q \\ 1/q & \sqrt{2/q} \end{pmatrix}.$$
The controller is given by
$$K = Q_u^{-1} B^T P = \begin{pmatrix} 1/q & \sqrt{2/q} \end{pmatrix}.$$
The feedback law minimizing the given cost function is then u = −Kx.
To better understand the structure of the optimal solution, we examine the
eigenstructure of the closed loop system. The closed-loop dynamics matrix is given
by
$$A_{cl} = A - BK = \begin{pmatrix} 0 & 1 \\ -1/q & -\sqrt{2/q} \end{pmatrix}.$$
The characteristic polynomial of this matrix is
$$\lambda^2 + \sqrt{\frac{2}{q}}\,\lambda + \frac{1}{q}.$$
Comparing this with the standard second order form λ² + 2ζω₀λ + ω₀², we see that the closed loop poles have natural frequency ω₀ = 1/√q and damping ratio ζ = 1/√2.
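As a quick sanity check (not part of the original text), the closed form expressions above can be compared with a numerical Riccati solution for a particular value of q:

```python
# Compare the closed-form P of Example 2.4 with scipy's ARE solver (q = 2.0 assumed).
import numpy as np
from scipy.linalg import solve_continuous_are

q = 2.0
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Qx = np.diag([1.0 / q**2, 0.0])
Qu = np.array([[1.0]])

P_num = solve_continuous_are(A, B, Qx, Qu)
P_analytic = np.array([[np.sqrt(2.0 / q**3), 1.0 / q],
                       [1.0 / q, np.sqrt(2.0 / q)]])
print(np.allclose(P_num, P_analytic))   # expect True
K = B.T @ P_num                          # since Qu = 1, K = B^T P
```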
2.5 Choosing LQR Weights
A simple choice for the weighting matrices is to take them to be diagonal,
$$Q_x = \begin{pmatrix} q_1 & & 0 \\ & \ddots & \\ 0 & & q_n \end{pmatrix}, \qquad Q_u = \begin{pmatrix} \rho_1 & & 0 \\ & \ddots & \\ 0 & & \rho_m \end{pmatrix}.$$
For this choice of Qx and Qu , the individual diagonal elements describe how much
each state and input (squared) should contribute to the overall cost. Hence, we
can take states that should remain small and attach higher weight values to them.
Similarly, we can penalize an input versus the states and other inputs through
choice of the corresponding input weight ρj .
Choosing the individual weights for the (diagonal) elements of the Qx and Qu
matrix can be done by deciding on a weighting of the errors from the individual
terms. Bryson and Ho [BH75] have suggested the following method for choosing
the matrices Qx and Qu in equation (2.8): (1) choose qi and ρj as the inverse of
the square of the maximum value for the corresponding xi or uj ; (2) modify the
elements to obtain a compromise among response time, damping and control effort.
This second step can be performed by trial and error.
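A small sketch of step (1) of this procedure, with assumed maximum state and input values, is shown below; step (2) would then adjust these weights by hand:

```python
# Bryson's rule: weights are the inverse squares of the maximum acceptable values.
import numpy as np

x_max = np.array([0.1, 1.0, 0.05])   # assumed maximum acceptable state values
u_max = np.array([10.0])             # assumed maximum acceptable input value

Qx = np.diag(1.0 / x_max**2)         # q_i = 1 / x_{i,max}^2
Qu = np.diag(1.0 / u_max**2)         # rho_j = 1 / u_{j,max}^2
```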
It is also possible to choose the weights such that only a given subset of variables is considered in the cost function. Let z = Hx be the output we want to keep small and verify that (A, H) is observable. Then we can use a cost function of the form
$$Q_x = H^T H, \qquad Q_u = \rho I.$$
The constant ρ allows us to trade off ‖z‖² versus ρ‖u‖².
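The following sketch (with an assumed double integrator and output z = x1) shows how the closed loop poles move as ρ is varied, which is one way to explore this trade-off:

```python
# Output weighting Qx = H^T H, Qu = rho*I, swept over several values of rho.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])              # z = H x is the variable to keep small

for rho in (0.1, 1.0, 10.0):
    Qx, Qu = H.T @ H, rho * np.eye(1)
    P = solve_continuous_are(A, B, Qx, Qu)
    K = np.linalg.solve(Qu, B.T @ P)
    print(rho, np.linalg.eigvals(A - B @ K))
```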
We illustrate the various choices through an example application.
Figure 2.3: Vectored thrust aircraft. The Harrier AV-8B military aircraft (a) redirects its engine thrust downward so that it can “hover” above the ground. Some air from the engine is diverted to the wing tips to be used for maneuvering. As shown in the simplified model (b), the net thrust on the aircraft can be decomposed into a horizontal force F1 and a vertical force F2 acting at a distance r from the center of mass.
Figure 2.4: Step response for a vectored thrust aircraft. The plot in (a) shows the x and y positions of the aircraft when it is commanded to move 1 m in each direction. In (b) the x motion is shown for control weights ρ = 1, 10², 10⁴. A higher weight of the input term in the cost function causes a more sluggish response.
Figure 2.5: Step response for a vectored thrust aircraft using physically motivated LQR weights (a). The rise time for x is much faster than in Figure 2.4a, but there is a small oscillation and the inputs required are quite large (b).
system is given by
$$H = 1 + \lambda_1 u_1 + \lambda_2 u_2 + \lambda_3 x_2 u_1$$
and the resulting equations for the Lagrange multipliers are
$$\dot{\lambda}_1 = 0, \qquad \dot{\lambda}_2 = -\lambda_3 u_1, \qquad \dot{\lambda}_3 = 0.$$
It follows from these equations that λ1 and λ3 are constant. To find the input u corresponding to the extremal curves, we see from the Hamiltonian that
$$u_1 = -\mathrm{sgn}(\lambda_1 + \lambda_3 x_2), \qquad u_2 = -\mathrm{sgn}\,\lambda_2.$$
These equations are well-defined as long as the arguments of sgn(·) are non-zero
and we get switching of the inputs when the arguments pass through 0.
An example of an abnormal extremal is the optimal trajectory between x0 = (0, 0, 0) and xf = (ρ, 0, 0) where ρ > 0. The minimum time trajectory is clearly given
Exercises
2.1 (a) Let G1, G2, . . . , Gk be a set of row vectors on Rn. Let F be another row vector on Rn such that for every x ∈ Rn satisfying Gi x = 0, i = 1, . . . , k, we have F x = 0. Show that there are constants λ1, λ2, . . . , λk such that
$$F = \sum_{i=1}^{k} \lambda_i G_i.$$
$$\dot{q} = u, \qquad \dot{Y} = q u^T - u q^T$$
(a) For the fixed end point problem, derive the form of the optimal controller
minimizing the following integral
$$\frac{1}{2}\int_0^1 u^T u\, dt.$$
2.4 Consider the optimal control problem for the system
$$\dot{x} = -ax + bu,$$
where x ∈ R is a scalar state, u ∈ R is the input, the initial state x(t0 ) is given,
and a, b ∈ R are positive constants. (Note that this system is not quite the same as
the one in Example 2.2.) The cost function is given by
$$J = \frac{1}{2}\int_{t_0}^{t_f} u^2(t)\, dt + \frac{1}{2}\, c\, x^2(t_f),$$
(a) Solve explicitly for the optimal control u∗ (t) and the corresponding state x∗ (t)
in terms of t0 , tf , x(t0 ) and t and describe what happens to the terminal state
x∗ (tf ) as c → ∞.
(b) Show that the system is differentially flat with appropriate choice of output(s)
and compute the state and input as a function of the flat output(s).
(c) Using the polynomial basis {tk , k = 0, . . . , M − 1} with an appropriate choice
of M , solve for the (non-optimal) trajectory between x(t0 ) and x(tf ). Your answer
should specify the explicit input ud (t) and state xd (t) in terms of t0 , tf , x(t0 ), x(tf )
and t.
(d) Let a = 1 and c = 1. Use your solution to the optimal control problem and
the flatness-based trajectory generation to find a trajectory between x(0) = 0 and
x(1) = 1. Plot the state and input trajectories for each solution and compare the
costs of the two approaches.
(e) (Optional) Suppose that we choose more than the minimal number of basis
functions for the differentially flat output. Show how to use the additional degrees
of freedom to minimize the cost of the flat trajectory and demonstrate that you can
obtain a cost that is closer to the optimal.
2.5 Repeat Exercise 2.4 using the system
ẋ = −ax3 + bu.
For part (a) you need only write the conditions for the optimal cost.
2.6 Consider the problem of moving a two-wheeled mobile robot (e.g., a Segway)
from one position and orientation to another. The dynamics for the system is given
by the nonlinear differential equation
ẋ = cos θ v, ẏ = sin θ v, θ̇ = ω,
where (x, y) is the position of the rear wheels, θ is the angle of the robot with
respect to the x axis, v is the forward velocity of the robot and ω is the spinning rate.
We wish to choose an input (v, ω) that minimizes the time that it takes to move
between two configurations (x0 , y0 , θ0 ) and (xf , yf , θf ), subject to input constraints
|v| ≤ L and |ω| ≤ M .
Use the maximum principle to show that any optimal trajectory consists of
segments in which the robot is traveling at maximum velocity in either the forward
or reverse direction, and going either straight, hard left (ω = −M ) or hard right
(ω = +M ).
Note: one of the cases is a bit tricky and cannot be completely proven with the
tools we have learned so far. However, you should be able to show the other cases
and verify that the tricky case is possible.
2.7 Consider a linear system with input u and output y and suppose we wish to
minimize the quadratic cost function
$$J = \int_0^\infty \bigl(y^T y + \rho\, u^T u\bigr)\, dt.$$
Show that if the corresponding linear system is observable, then the closed loop
system obtained by using the optimal feedback u = −Kx is guaranteed to be
stable.
2.8 Consider the system transfer function
$$H(s) = \frac{s + b}{s(s + a)}, \qquad a, b > 0,$$
with state space representation
$$\dot{x} = \begin{pmatrix} 0 & 1 \\ 0 & -a \end{pmatrix} x + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u, \qquad y = \begin{pmatrix} b & 1 \end{pmatrix} x.$$
(a) Let
$$P = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix},$$
with p12 = p21 and P > 0 (positive definite). Write the steady state Riccati equation
as a system of four explicit equations in terms of the elements of P and the constants
a and b.
(b) Find the gains for the optimal controller assuming the full state is available for
feedback.
(c) Find the closed loop natural frequency and damping ratio.
2.9 Consider the optimal control problem for the system
$$\dot{x} = ax + bu, \qquad J = \frac{1}{2}\int_{t_0}^{t_f} u^2(t)\, dt + \frac{1}{2}\, c\, x^2(t_f),$$
where x ∈ R is a scalar state, u ∈ R is the input, the initial state x(t0 ) is given, and
a, b ∈ R are positive constants. We take the terminal time tf as given and let c > 0
be a constant that balances the final value of the state with the input required to
get to that position. The optimal trajectory is derived in Example 2.2.
Now consider the infinite horizon cost
$$J = \frac{1}{2}\int_{t_0}^{\infty} u^2(t)\, dt.$$
(a) Design an LQR controller that stabilizes the position y to yd = 0. Plot the
step and frequency response for your controller and determine the overshoot, rise
time, bandwidth and phase margin for your design. (Hint: for the frequency domain
specifications, break the loop just before the process dynamics and use the resulting
SISO loop transfer function.)
(b) Suppose now that yd (t) is not identically zero, but is instead given by yd (t) =
r(t). Modify your control law so that you track r(t) and demonstrate the perfor-
mance of your controller on a “slalom course” given by a sinusoidal trajectory with
magnitude 1 meter and frequency 1 Hz.