
Computing, Information Systems, Development Informatics & Allied Research Journal

Vol. 5 No. 4. December 2014 – www.cisdijournal.net

THE USE OF MATLAB IN THE SOLUTION OF LINEAR QUADRATIC REGULATOR (LQR) PROBLEMS
Ajasa, Abiodun Afis¹
¹Department of Electronic and Computer Engineering, Faculty of Engineering,
Lagos State University, Epe, Lagos, Nigeria.
[email protected]

Sebiotimo, Abdul-Azeez Arisekola²
²Department of Radio Network Planning and Optimization, Huawei Technologies Co. Nig. Ltd.,
2940, Aguiyi Ironsi Street, Maitama, Abuja.
[email protected]

ABSTRACT

This paper presents a theoretical and computational analysis of linear quadratic regulator (LQR) problems and their solution through MATLAB simulation and implementation. The linear quadratic regulator is a special type of optimal control problem that deals with a linear system and the minimization of a cost (objective) function that is quadratic in the state and in the control, subject to some constraints, together with determination of the performance index. The work involves solving the algebraic Riccati equation (ARE) associated with the control system and obtaining the optimal control gain. It is mainly applicable in satellite and spacecraft launch technology, where the solution of the ARE yields the parameters that minimize the time for a spacecraft to reach its target, the fuel consumed in operating a rocket or spacecraft, and the cost of designing the spacecraft. MATLAB is also used to solve the LQR problems directly.

The structures of the Q and R matrices are needed in determining the optimal control gain of a system, since they govern the minimization of the quadratic performance index; these structures are explicated extensively in this work. Finally, the solution of linear quadratic regulator problems is achieved with MATLAB, which returns the positive definite matrix P, the poles of the control system (showing that the system is stable) and the optimal control gain (showing that the cost or objective function is minimized).

Keywords: linear quadratic regulator (LQR), algebraic Riccati equation (ARE), controllability, observability, quadratic optimal control, root locus, Hermitian function, quadratic function, Sylvester's criterion

1. INTRODUCTION

Classical (conventional) control theory is concerned with single-input-single-output (SISO) systems and is mainly based on Laplace transform theory; it represents systems in block diagram form. Modern control theory contrasts with conventional control theory in that it is applicable to multiple-input-multiple-output (MIMO) systems, which may be linear or nonlinear, time-invariant or time-varying, while classical control applies only to linear time-invariant systems. Also, modern control theory is essentially a time-domain approach, whereas conventional control theory is a complex frequency-domain approach [1].

Dynamic systems are systems whose variables change with time [2]. In other words, they are systems represented by differential equations that define control systems in state space. Hence, the state space representation of a system is a set of first-order differential equations which completely define the system.

The state space representation is also known as the state model of the system. It consists of the system equation, with 'A' as the System (or Coupling) Matrix and 'B' as the Input Matrix, and the output equation, with 'C' as the Output Matrix and 'D' as the Direct Transmission (or Feed-forward) Matrix [3], as demonstrated in eq. (1):

ẋ(t) = Ax(t) + Bu(t)        (1)

and

y(t) = Cx(t) + Du(t)

To obtain a controller that will minimize LQR cost function, the algebraic Riccati equation (ARE) must be solved.


2. LITERATURE REVIEW

The linear quadratic regulator (LQR) problem is a special type of optimal control problem that deals with linear systems (in state and in control) and the minimization of an objective or cost function that is quadratic, i.e. the quadratic performance index [4].

Singh and Gupta considered LQR control in 2011 for a scalar system in which the sensor, controller and actuator each have their own clocks that may drift apart from one another. They considered both an affine and a quadratic clock model, and analyzed, for a quadratic cost function, the loss of performance incurred as a function of how asynchronous the clocks are. In the end, they obtained guidelines on how often to utilize the communication resources to synchronize the clocks [5].

Review of past work showed that Alberto B. et al. presented a technique to compute the explicit state-feedback solution to both
the finite and infinite horizon linear quadratic optimal control problem subject to state and input constraints. They succeeded in
showing that the control law is piece-wise linear and continuous for both the finite horizon problem (model predictive control)
and the usual infinite time measure (constrained linear quadratic regulation). As a practical consequence of the result, constrained
linear quadratic regulation becomes attractive also for systems with high sampling rates, as on-line quadratic programming
solvers are no longer required for the implementation [6].

Similarly, linear quadratic regulators (LQR) were applied by Polyak and Tempo in 2001 to study robust design of uncertain systems in a probabilistic setting. Systems affected by random bounded nonlinear uncertainty were considered, so that classical optimization methods based on linear matrix inequalities cannot be used without conservatism. The approach used was a blend of randomization techniques for the uncertainty with convex optimization for the controller parameters [7].

In Costa's work, he suggested that a linear feedback law can be used to improve the dynamic response of the converter in small-signal models of quasi-resonant (QR) converters with current-mode control, obtained by state-space averaging. He added that this can be done by seeking a gain vector that places the closed-loop poles at the desired locations. Instead, he used optimal control techniques, together with those small-signal models, to design a robust voltage controller for QR converters that overcomes some uncertainties that may corrupt the results obtained by classical design methods [8].

In Rantzer and Johansson’s work titled: “Piecewise Linear Quadratic Optimal Control”, the use of piecewise quadratic cost
functions was extended from stability analysis of piecewise linear systems to performance analysis and optimal control. Lower
bounds on the optimal control cost were obtained by semi-definite programming based on the Bellman inequality. This also gave
an approximation to the optimal control law. A compact matrix notation was also introduced to support the calculations and it
was proven that the framework of piecewise linear systems could be used to analyze smooth nonlinear dynamics with arbitrary
accuracy [9].

In 2010, Swigart and Lall developed controller synthesis algorithms for decentralized control problems, where individual
subsystems were connected over a network. Their work was focused on the simplest information structure, consisting of two
interconnected linear systems, and they constructed the optimal controller subject to a decentralization constraint via a spectral
factorization approach. They provided explicit state-space formulae for the optimal controller, characterized its order, and showed that its states are those of a particular optimal estimator [10].

By applying the overbounding method of Petersen and Hollot in 1994, Douglas and Athans derived a linear quadratic regulator
that is robust to real parametric uncertainty. The resulting controller was determined from the solution of a single modified
Riccati equation. This controller has the same guaranteed robustness properties as standard linear quadratic designs for known
systems. It was proven that, when applied to a structural system, the controller achieves its robustness by minimizing the potential energy of the uncertain stiffness elements and minimizing the rate of dissipation of the uncertain damping elements [11].

Previously, Safonov and Athans had explained in their work in 1976 that multi-loop linear-quadratic state-feedback (LQSF)
regulators demonstrated robustness against a variety of large dynamical, time-varying, and nonlinear variations in open-loop
dynamics. The results were interpreted in terms of the classical concepts of gain and phase margin, thus strengthening the link
between classical and modern feedback theory [12].

More recently (2013), Lamperski and Cowan, in their paper titled "Time-changed Linear Quadratic Regulators", gave a solution to the linear quadratic regulator problem in which the state is perfectly known, but the controller's measure of time is a stochastic process derived from a strictly increasing Lévy process. They concluded that the optimal controller is linear and can be computed from a generalization of the classical Riccati differential equation [13].


3. METHODOLOGY

This section describes the computational tool, concepts and methods used in providing solutions to the linear quadratic regulator
(LQR) problems and the associated algebraic Riccati equation (ARE) of the control systems.

3.1 MATLAB
MATLAB (an abbreviation of MATrix LABoratory) is matrix-based software for writing programs that solve mathematical, scientific and engineering problems [1]. MATLAB is used extensively in the analysis and design of control systems, for example in generating the transfer function and the state space representation (state model) of a control system, determining the stability, controllability and observability of a system, obtaining the frequency response of a system to different input signals, and solving the algebraic Riccati equation.

3.2 Concepts Of Controllability, Observability And Quadratic Optimal Control

Any control system design based on LQR must satisfy the following conditions for minimization of the performance index through finding the positive-definite solution P of the algebraic Riccati equation given below:

A'P + PA − PBR⁻¹B'P + Q = 0

(a) The system must be completely state controllable
(b) R must be positive definite
(c) The system must be observable
(d) Q must be positive semi-definite

3.2.1 Controllability

A model or system is said to be completely state controllable, or simply controllable, if and only if there exists a control input u(t) that will drive any initial state x(to) at initial time to to any desired final state x(tf) over a finite time interval tf − to ≥ 0. Otherwise, the model is not completely state controllable [3]. There are two methods used in determining whether a system is completely controllable or not, namely:
(a) Kalman's method
(b) Gilbert's method

In this work, only Kalman’s method will be explained because MATLAB implementation of controllability is based on this
method.

3.2.1.1 Kalman’s Method for Controllability

This method involves the use of the Controllability Matrix, Øc, given as:

Øc = [B  AB  A²B  …  Aⁿ⁻¹B]

where B is the input matrix of the state space representation of the control system, A is the system matrix and n is the order of the system. A system is controllable if and only if:

rank(Øc) = n

This implies that Øc is of full rank.
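Kalman's rank test is straightforward to reproduce outside MATLAB. The following is a minimal sketch in Python with NumPy (an illustration of the method, not the paper's own MATLAB implementation), building Øc column-block by column-block and checking its rank; the sample matrices are those used in Example 1.0 below.

```python
import numpy as np

def ctrb(A, B):
    """Build the controllability matrix [B, AB, ..., A^(n-1) B]."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    cols = [B]
    for _ in range(A.shape[0] - 1):
        cols.append(A @ cols[-1])          # next block: A times previous block
    return np.hstack(cols)

def is_controllable(A, B):
    """Kalman's test: the system is controllable iff rank(Oc) = n."""
    A = np.atleast_2d(A)
    return np.linalg.matrix_rank(ctrb(A, B)) == A.shape[0]

# Sample second-order system: Oc = [B AB] = [[1, 2], [0, 1]], full rank
A = np.array([[2.0, 0.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
print(is_controllable(A, B))  # True
```

The same function works for any state dimension, since the loop always stacks n blocks.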

Example 1.0
Consider the system defined by the following state equations:

ẋ = [2 0; 1 0]x + [1; 0]u
y = [1 −1]x

(a) Is the system completely state controllable?
(b) Use the MATLAB approach to determine if the system is controllable.

17
Computing, Information Systems, Development Informatics & Allied Research Journal
Vol. 5 No. 4. December 2014 – www.cisdijournal.net

Solution
(a) Comparing the given state equations with the general state model gives

A = [2 0; 1 0] and B = [1; 0]

so that

AB = [2; 1]

Now using the Controllability Matrix:

Øc = [B  AB] = [1 2; 0 1]

Therefore,

|Øc| = 1(1) − 2(0) = 1 − 0 = 1 ≠ 0

Since the determinant of the Controllability Matrix Øc is NOT zero, the rank is 2, i.e. it is of full rank:

rank(Øc) = 2 = n = full rank

Then, the system is said to be completely state controllable.

(b) The MATLAB solution to the problem is shown in MATLAB Program 1.0 below.

MATLAB Program 1.0: Solution to Example 1.0

>> % Enter the matrices A, B, C and D of
>> % the state model;
>> A=[2 0;1 0];
>> B=[1;0];
>> C=[1 -1];
>> D=0;
>> % The matrices above are now represented
>> % in state model form using the 'ss' command;
>> % the state space representation is denoted by 'SM';
>> SM=ss(A,B,C,D);
>> % The controllability matrix 'Oc' is
>> % obtained using the 'ctrb' command as shown below;
>> Oc=ctrb(SM)
Oc =
     1     2
     0     1
>> % The rank of Oc is obtained using the
>> % rank command as seen below;
>> rank(Oc)
ans =
     2
>> % Since rank(Oc) = 2 = n = full rank,
>> % the system is CONTROLLABLE


Example 2.0
The following model has been proposed to describe the motion of a constant-velocity guided missile:

ẋ = [0 1 0 0 0; −0.1 −0.5 0 0 0; 0.5 0 0 0 0; 0 0 10 0 0; 0.5 1 0 0 0]x + [0; 1; 0; 0; 0]u
y = [0 0 0 1 0]x

Verify that the system is not controllable by analyzing the Controllability Matrix using the ctrb function in MATLAB.
Solution
MATLAB Program 2.0: Solution to Example 2.0

>> % Enter the matrices A, B, C and D;
>> A=[0 1 0 0 0;-0.1 -0.5 0 0 0;0.5 0 0 0 0;
0 0 10 0 0;0.5 1 0 0 0];
>> B=[0;1;0;0;0];
>> C=[0 0 0 1 0];
>> D=0;
>> SM=ss(A,B,C,D);
>> % The controllability matrix Oc is
>> % obtained below;
>> Oc=ctrb(SM)
Oc =
         0    1.0000   -0.5000    0.1500   -0.0250
    1.0000   -0.5000    0.1500   -0.0250   -0.0025
         0         0    0.5000   -0.2500    0.0750
         0         0         0    5.0000   -2.5000
         0    1.0000         0   -0.1000    0.0500
>> % The rank of the controllability matrix
>> % is found below;
>> rank(Oc)
ans =
     4
>> % Since rank(Oc) = 4, which is less than n = 5,
>> % Oc is not of full rank and the system
>> % is NOT CONTROLLABLE.
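The rank deficiency reported above can be cross-checked independently of MATLAB. The sketch below (Python with NumPy, used here purely for illustration) rebuilds the same controllability matrix for the missile model and confirms that its rank is 4, one short of the state dimension n = 5.

```python
import numpy as np

# Constant-velocity guided missile model from Example 2.0
A = np.array([[0.0,  1.0,  0.0, 0.0, 0.0],
              [-0.1, -0.5, 0.0, 0.0, 0.0],
              [0.5,  0.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 10.0, 0.0, 0.0],
              [0.5,  1.0,  0.0, 0.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [0.0], [0.0]])

# Controllability matrix [B, AB, A^2 B, A^3 B, A^4 B]
n = A.shape[0]
Oc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

rank = np.linalg.matrix_rank(Oc)
print(rank)  # 4, less than n = 5, so the system is NOT controllable
```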

3.2.2 Observability

A model is said to be completely observable, or simply observable, if and only if all the states x(t) of the model can be reconstructed from full knowledge of the control input u(t) and the output y(t) over a finite time interval tf − to ≥ 0. Otherwise, the model is not completely observable [3].


There are also two methods used in determining whether a system is completely observable or not, namely;
(a) Kalman’s method
(b) Gilbert’s method

As earlier established, only Kalman’s method will be explained because MATLAB implementation of observability is based on
this method.

Kalman's Method for Observability

This method involves the use of the Observability Matrix, Øo, defined as:

Øo = [C; CA; CA²; …; CAⁿ⁻¹]

or, equivalently, its transpose [C'  A'C'  …  (A')ⁿ⁻¹C'],

where A is the system matrix, C is the output matrix and n is the order of the system. A system is observable if and only if:

rank(Øo) = n

i.e. it is of full rank.
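As with controllability, the observability rank test can be sketched in a few lines of Python with NumPy (an illustration only; the paper's own tool is MATLAB). The demo system is the one from Example 1.0, for which Øo = [C; CA] = [1 −1; 1 0].

```python
import numpy as np

def obsv(A, C):
    """Build the observability matrix [C; CA; ...; CA^(n-1)]."""
    A, C = np.atleast_2d(A), np.atleast_2d(C)
    rows = [C]
    for _ in range(A.shape[0] - 1):
        rows.append(rows[-1] @ A)          # next block: previous block times A
    return np.vstack(rows)

def is_observable(A, C):
    """Kalman's test: the system is observable iff rank(Oo) = n."""
    A = np.atleast_2d(A)
    return np.linalg.matrix_rank(obsv(A, C)) == A.shape[0]

# Sample system from Example 1.0: Oo = [[1, -1], [1, 0]], full rank
A = np.array([[2.0, 0.0], [1.0, 0.0]])
C = np.array([[1.0, -1.0]])
print(is_observable(A, C))  # True
```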

Example 3.0
Consider the system in Example 1.0. Is the system completely state observable? Use the MATLAB approach to determine if the system is observable.

Solution
(a) As before:

A = [2 0; 1 0] and C = [1 −1]

The observability matrix is

Øo = [C; CA] = [1 −1; 1 0]

Similarly, the determinant of the Observability Matrix is:

|Øo| = 1(0) − (−1)(1) = 1 ≠ 0

Since the determinant exists, i.e. it is not equal to zero, then:

rank(Øo) = 2 = n = full rank

Therefore, the system is said to be completely state observable.

(b) MATLAB Program 3.0 below verifies that the system is observable using the 'obsv' command.


MATLAB Program 3.0: Solution to Example 3.0

>> A=[2 0;1 0];
>> B=[1;0];
>> C=[1 -1];
>> D=0;
>> SM=ss(A,B,C,D);
>> Oo=obsv(SM)
Oo =
     1    -1
     1     0
>> rank(Oo)
ans =
     2
>> % Since rank(Oo) = 2 = n = full rank,
>> % the system is COMPLETELY STATE
>> % OBSERVABLE

3.2.3 Optimal Control

Optimal control is a technique that optimizes the performance of a control system by minimizing or maximizing an objective. For instance, if it is desired to obtain the best possible performance of a heater while minimizing, say, energy or time or some other desirable parameter, the problem can be viewed as one of optimal control. Another typical optimal control problem is the launching of a rocket into space. To do this, first and foremost, the system must be completely controllable; then the energy required, the time of lift-off or the fuel consumption rate may be minimized so as to correspondingly optimize other features, such as the total cost of the rocket's journey or the objectives the rocket is to attain while in space [14].

Optimization refers to the science of maximizing or minimizing objectives. Optimization requires a measure of performance.
When mathematically formulated, this measure of performance is called the objective (or cost function) [4]. Optimization of
control systems is called optimal control. Examples of optimal control problems are:
(a) Minimum-energy problem
(b) Minimum-fuel problem
(c) Minimum-time problem

A typical optimal control problem formulation is:

minimize  J = ∫ L(x, u, t) dt

subject to

ẋ = f(x, u, t)        (7)

In design, the objective is to find a control function u that will minimize the cost function J.

3.2.4 Quadratic Optimal Control

In designing control systems, we are often interested in choosing the control vector u(t) such that a given performance index is minimized. It can be proved that a quadratic performance index with limits of integration 0 and ∞, such as

J = ∫₀^∞ L(x, u) dt        (8)

where L(x, u) is a quadratic or Hermitian function of x and u, yields a linear control law defined by:

u(t) = −Kx(t)

where K is an r × n matrix.


Therefore, the design of optimal control systems and optimal regulator systems based on such quadratic performance indices boils down to the determination of the elements of the matrix K. An advantage of the quadratic optimal control scheme is that the designed system is grounded in the concept of controllability. In designing control systems based on the minimization of quadratic performance indices, the Riccati equation must be solved.

In the following, the problem of determining the optimal control vector u(t) is considered for the system described by the general state space representation and the performance index given by:

J = ∫₀^∞ (x'Qx + u'Ru) dt

where Q is a positive-definite (or positive semi-definite) Hermitian or real symmetric matrix, and u is unconstrained.

3.2.5 Concepts Of Definiteness Of Scalar Functions

The scalar functions arising in optimal control can be in either quadratic or Hermitian form.

(a) Quadratic Form

A class of scalar functions that plays an important role in stability analysis based on the second method of Liapunov is the quadratic form, which is of the form [1]:

V(x) = x'Px

Note that x is a real vector and P is a real symmetric matrix.

(b) Hermitian Form

If x is a complex n-vector and P is a Hermitian matrix, then the complex quadratic form is called the Hermitian form. An example is:

V(x) = x*Px

where x* denotes the conjugate transpose of x.

(c) Sylvester's Criterion

The definiteness of a quadratic function can be determined by Sylvester's criterion, which states that the necessary and sufficient condition for the quadratic form V(x) to be positive definite is that all the successive leading principal minors of P be positive, i.e.

p₁₁ > 0,  |p₁₁ p₁₂; p₂₁ p₂₂| > 0,  …,  |P| > 0

An optimal quadratic function can be:

(A) Positive definite


(B) Positive semi-definite
(C) Negative definite
(D) Negative semi-definite
(E) Indefinite

(A) Conditions for a scalar function to be POSITIVE DEFINITE
a. P must be non-singular
b. All successive principal minors of P must be positive (greater than zero)

(B) Conditions for a scalar function to be POSITIVE SEMI-DEFINITE
a. P must be singular
b. All principal minors must be non-negative

(C) Condition for a scalar function to be NEGATIVE DEFINITE
a. A scalar function V(x) is negative definite if −V(x) is positive definite

(D) Condition for a scalar function to be NEGATIVE SEMI-DEFINITE
a. A scalar function V(x) is negative semi-definite if −V(x) is positive semi-definite

(E) Condition for a scalar function to be INDEFINITE
a. A scalar function is indefinite when it cannot be classified as any of the other four types of definiteness above.
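The classification rules above are mechanical enough to automate. The sketch below (Python with NumPy, an illustration rather than part of the paper's MATLAB workflow) applies Sylvester's criterion through the leading principal minors for the definite cases, and falls back on eigenvalues for the semi-definite cases, which is a common practical shortcut.

```python
import numpy as np

def leading_minors(P):
    """Determinants of the successive leading principal minors of P."""
    P = np.atleast_2d(P)
    return [np.linalg.det(P[:k, :k]) for k in range(1, P.shape[0] + 1)]

def classify(P, tol=1e-12):
    """Classify a symmetric matrix via Sylvester's criterion
    (eigenvalues handle the semi-definite cases)."""
    P = np.atleast_2d(P)
    if all(m > tol for m in leading_minors(P)):
        return "positive definite"
    if all(m > tol for m in leading_minors(-P)):   # test -P for negative definiteness
        return "negative definite"
    eig = np.linalg.eigvalsh(P)
    if all(e >= -tol for e in eig):
        return "positive semi-definite"
    if all(e <= tol for e in eig):
        return "negative semi-definite"
    return "indefinite"

print(classify(np.array([[1.0, 0.0], [0.0, 2.0]])))     # positive definite
print(classify(np.array([[-10.0, -6.0], [-6.0, -4.0]])))  # negative definite
```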

22
Computing, Information Systems, Development Informatics & Allied Research Journal
Vol. 5 No. 4. December 2014 – www.cisdijournal.net

Example 4.0
Determine the definiteness of the following scalar functions:
(i) V(x) = x₁² + 2x₂²
(ii) V(x) = −10x₁² − 12x₁x₂ − 4x₂²

Solution
(i) Representing the function in the quadratic form V(x) = x'Px, we obtain

P = [1 0; 0 2]

It is noted that p₁₂ = p₂₁, since P is symmetric, and each equals half of the coefficient of the cross term x₁x₂ in the function.

Applying Sylvester's criterion, we have:

the principal minor p₁₁ = 1 > 0, and

the determinant of P, |P| = 1(2) − 0(0) = 2 > 0

∴ P is non-singular and the scalar function V(x) is positive definite.

(ii) Representing the function in quadratic form results in

P = [−10 −6; −6 −4]

Applying Sylvester's criterion:

the principal minor p₁₁ = −10 < 0

The determinant of P, |P| = −10(−4) − (−6)(−6) = 4 > 0. Clearly, the result shows that the function V(x) is not positive definite since p₁₁ < 0, but P is non-singular. As such, negative definiteness must be tested for.

Representing −V(x) in quadratic form, we have

−P = [10 6; 6 4]

The principal minor is now 10 > 0.

Similarly, |−P| = 10(4) − 6(6) = 40 − 36 = 4 > 0


Since −V(x) is positive definite, the scalar function V(x) is negative definite.

3.2.6 Linear Quadratic Regulator Problems

If attention is restricted to linear systems (in state space form) and only simple quadratic performance indices are used, then the designed systems will be linear quadratic regulators. This is a special optimal control problem. Regulator behaviour is important for many types of control systems, such as the attitude control of satellites or spacecraft. The quadratic performance index (PI) is expressed as:

J = ∫₀^∞ (x'Qx + u'Ru) dt

The choice of this simple quadratic PI means that the index used in practice is rarely an exact representation of what one would ideally like to minimize; in practical application, the focus is on obtaining the required closed-loop performance by adjusting Q and R until minimizing the resulting PI yields a suitable result on the plant. The advantage of using the LQR is that the designed system will be stable, except when the system is not controllable [4]. In designing control systems based on the minimization of simple quadratic performance indices, the Riccati equation needs to be solved so as to obtain a suitable controller that minimizes the LQR cost function, and this leads to the formulation of the LQR.

The Riccati matrix equation, or algebraic Riccati equation (ARE), is given as:

A'P + PA − PBR⁻¹B'P + Q = 0

and the optimal control gain is

K = R⁻¹B'P

which confirms the minimization of J with respect to K.

(a) Structure of Q and R

The choice of elements in Q and R requires certain mathematical properties so that the performance index (PI) will have a guaranteed and well-defined minimum and the Riccati equation will have a solution. These properties are:
1. Both Q and R must be symmetric matrices
2. Q must be positive semi-definite, and Q can be factored as Q = Cq'Cq, where Cq is any matrix such that (Cq, A) is observable
3. R must be positive definite

These conditions are necessary and sufficient for the existence and uniqueness of the optimal controller that will asymptotically stabilize the system [15].

Again, Q and R are usually chosen to be purely diagonal matrices, for two reasons. First, it becomes easy to ensure the correct 'definiteness' properties (the principal minors of Q must be non-negative and those of R positive). Second, each diagonal element then penalizes an individual state or input. However, it should be noted that the choice of Q is not unique: several different Q matrices will often yield the same controller, and there is an equivalent diagonal version of Q in every case, so nothing is sacrificed in general by making Q diagonal. The relative weightings chosen for Q and R determine the relative emphasis placed on reducing the state error versus saving control energy. If it were important to minimize control effort at all costs, for example, the numbers in R would be made much greater than those in Q; this would usually give a slow response. An interactive design process, such as the use of MATLAB, can be used to investigate the range of values of Q and R that gives the best response for the design problem.
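The trade-off described above is easy to see numerically. The sketch below (Python with SciPy, used here for illustration; the sample scalar system takes the values A = −2, B = 4, Q = 4 from Example 5.0 later in this paper) sweeps R and shows that a heavier control penalty shrinks the optimal gain and slows the closed-loop pole toward the open-loop value.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Scalar sample system: larger R penalizes control effort more heavily,
# so the optimal gain K shrinks and the closed-loop pole A - BK slows down.
A, B, Q = np.array([[-2.0]]), np.array([[4.0]]), np.array([[4.0]])

for r in (0.1, 1.0, 10.0):
    R = np.array([[r]])
    P = solve_continuous_are(A, B, Q, R)        # positive-definite ARE solution
    K = np.linalg.solve(R, B.T @ P)[0, 0]       # K = R^-1 B' P
    pole = (A - B * K)[0, 0]                    # scalar closed-loop pole
    print(f"R={r:5.1f}  K={K:7.4f}  pole={pole:8.4f}")
```

As R grows from 0.1 to 10, the printed gain decreases and the closed-loop pole moves closer to the open-loop pole at −2, i.e. the response becomes slower.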

(b) Test for the Validity of Q

Q being positive semi-definite is not enough to guarantee that Q is valid. Q is valid if, in addition, the positive semi-definite matrix satisfies the following rank condition:

rank(ØQ) = n = full rank        (16)

where ØQ is the observability-type matrix formed from the factor Cq of Q = Cq'Cq and the system matrix A, i.e. ØQ = [Cq; CqA; …; CqAⁿ⁻¹], and n is the order of the matrix Q.
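The validity test amounts to checking observability of (Cq, A). A minimal sketch in Python with NumPy is given below (an illustration under the assumption that Cq is obtained from an eigendecomposition of the positive semi-definite Q; any factor with Q = Cq'Cq works equally well). The demo matrices are samples of a valid and an invalid choice of Q for the same A.

```python
import numpy as np

def q_is_valid(A, Q, tol=1e-9):
    """Check that Q is positive semi-definite and that (Cq, A) is
    observable, where Q = Cq' Cq (Cq built from an eigendecomposition)."""
    A, Q = np.atleast_2d(A), np.atleast_2d(Q)
    w, V = np.linalg.eigh(Q)
    if np.any(w < -tol):
        return False                                # Q not positive semi-definite
    Cq = np.sqrt(np.clip(w, 0, None))[:, None] * V.T  # rows scale eigvecs: Q = Cq'Cq
    n = A.shape[0]
    Oq = np.vstack([Cq @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(Oq, tol=tol) == n

# Sample system and two candidate weights (illustrative values)
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, -2.0, -3.0]])
Q_valid = np.diag([1.0, 0.0, 0.0])      # weights x1: (Cq, A) is observable
Q_invalid = np.diag([0.0, 0.0, 1.0])    # x1 never enters the weighted output
print(q_is_valid(A, Q_valid), q_is_valid(A, Q_invalid))  # True False
```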

Example 5.0

For the following system described by A and B matrices and LQR performance criteria measured by Q and R, solve the
associated algebraic Riccati equation and find the optimal control gain.

A = -2, B = 4, Q = 4, R = 1

Solution

Given that A = −2, B = 4, Q = 4, R = 1, the algebraic Riccati equation (ARE)

A'P + PA − PBR⁻¹B'P + Q = 0

becomes, on substituting A, B, Q and R:

[−2][P] + [P][−2] − [P][4][1][4][P] + 4 = 0

−2P − 2P + 4 − 16P² = 0
16P² + 4P − 4 = 0

P = (−4 ± √(16 + 256)) / 32 = (−1 ± √17) / 8

P = 0.3904 or −0.6404

Since P must be greater than zero (positive definite),

P = 0.3904

The optimal control gain is:

K = R⁻¹B'P

so that K = [1][4][0.3904]

K = 1.5616
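The hand calculation above can be reproduced with a numerical ARE solver. The sketch below uses Python with SciPy (an illustration; MATLAB's care command, discussed later in this paper, plays the same role) and recovers the same P and K for the scalar data of Example 5.0.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Example 5.0: scalar system A = -2, B = 4, Q = 4, R = 1
A = np.array([[-2.0]])
B = np.array([[4.0]])
Q = np.array([[4.0]])
R = np.array([[1.0]])

# Positive-definite solution of A'P + PA - P B R^-1 B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # optimal gain K = R^-1 B' P

print(P)  # approx 0.3904, matching (-1 + sqrt(17)) / 8
print(K)  # approx 1.5616
```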

Example 6.0

For the system described by the A and B matrices below and LQR performance criteria measured by Q and R:

(a) Test whether Q and R are valid or not.
(b) If Q is valid, solve the associated algebraic Riccati equation.
(c) Find the optimal control gain.

Solution
(a) R is positive definite, as it is non-singular and its principal minor is greater than zero. Hence, R is valid for this system.

To test Q:
The principal element of Q is Q₁₁ = 1 > 0
and the determinant of Q is |Q| = 0 (singular).
Hence, Q is positive semi-definite.
Recall the rank condition (16):


The rank condition (16) is satisfied, so Q is valid for this optimal control system.

(b) Solving the associated ARE, with P symmetric, gives the positive-definite matrix P.

(c) The optimal control gain is then calculated from K = R⁻¹B'P, which gives the optimal feedback gain.

(c) Optimal Root Locus

It has been seen that a special choice of Q and R allows us to investigate the effect of the weights on the locations of the closed-loop poles. The optimal closed-loop poles can be obtained from the root locus of Gq(s)Gq(−s). Such root loci are generally called a symmetric root locus or root-square locus (RSL).

The optimal characteristic equation is given by

1 + (1/ρ) Gq(s)Gq(−s) = 0

The RSL spans both the left-half plane (LHP) and the right-half plane (RHP); each half-plane has branches along which the optimal closed-loop poles lie, and the RSL is symmetric with respect to both the imaginary and the real axes. There are two sets of rules used in plotting the RSL, namely:
(1) Negative-gain sketching rules
(2) Positive-gain sketching rules

Taking the number of poles of the optimal system to be n and the number of zeros to be m, let r = n − m. In general, if r (the number of poles minus the number of zeros of the plant transfer function) is odd, we have to use the negative-gain sketching rules; otherwise, we use the positive-gain sketching rules for plotting the root locus. These rules will be explained through the solved examples.

To find the optimal control gain K using the RSL, we first determine the optimal pole locations from the RSL, form the corresponding characteristic polynomial, set it equal to the characteristic polynomial of (A − BK), i.e. |sI − (A − BK)| = 0 (the optimal characteristic equation), and solve for K.

Example 7.0

Consider the system presented in Example 5.0, with Q factored as Q = Cq'Cq, and let R = ρ. Find Gq(s)Gq(−s) and plot the appropriate root-square locus.

Solution
A = −2, B = 4, Q = 4, R = 1
Q = 4 = Cq'Cq = 2(2)
Cq = 2
Gq(s) is given as:
Gq(s) = CqΦ(s)B = Cq(s − A)⁻¹B = 8/(s + 2)

To plot the appropriate root-square locus, consider the transfer function Gq(s), where the number of poles is n = 1 and the number of zeros is m = 0, so r = n − m = 1, which is odd. Hence, the negative-gain sketching rules are used, as shown in Figure 1.0 below.

The branches of the RSL start at the poles of Gq(s)Gq(−s), s = −2 and s = 2.


Figure 1.0: Root-Square Locus using Negative-Gain


The MATLAB program below is used to obtain the root-square locus of the system in Example 7.0.

MATLAB Program 4.0: To plot the RSL of the system described in Example 7.0

>> num=[64];
>> den=[-1 0 4];
>> TF=tf(num,den)
Transfer function:
   -64
  -------
  s^2 - 4

>> rlocus(TF)
4.0 RESULTS AND DISCUSSION

The MATLAB approach to the solution of linear quadratic regulator problems, and the steps to take in using MATLAB to solve them, are explained in this section.

4.1 MATLAB SOLUTION TO LQR PROBLEMS

Control systems, especially spacecraft, are designed herein using quadratic performance indices, both analytically and computationally with MATLAB. In the design, which is based on the minimization of quadratic performance indices, Riccati equations are solved; their solutions are used in the design of the control systems to ensure minimum time of flight for the spacecraft, minimum fuel consumption in launching the system, and minimum cost in designing the system.

MATLAB has a command called “care” that gives the solution to the associated algebraic Riccati equation (ARE) and determines
the optimal control gain matrix.

(a) "care" Command for Solving the ARE

In MATLAB, the command care(A, B, Q, R), or care(A, B, C'*C, R), solves the continuous-time linear quadratic regulator problem and the associated Riccati equation. The command calculates the optimal feedback gain matrix K such that the feedback control law

u = −Kx

minimizes the performance index

J = ∫₀^∞ (x'Qx + u'Ru) dt

subject to the constraint equation

ẋ = Ax + Bu

The fuller form of the command,

[P, S, K] = care(A, B, Q, R)

also returns the matrix P, the unique positive-definite solution of the associated matrix Riccati equation

A'P + PA − PBR⁻¹B'P + Q = 0

and gives the poles S of the closed-loop system, from which its stability can be determined.


If the matrix (A − BK) is stable, such a positive-definite solution P always exists. The closed-loop poles, i.e. the eigenvalues of A − BK, are also returned by this command, as stated earlier. It is important to note that for certain systems the matrix A − BK cannot be made stable, whatever K is chosen.

(b) How to Use MATLAB to Solve LQR Problems

In using MATLAB to solve the linear quadratic regulator problems by finding a positive definite solution of the associated ARE,
the following steps are to be taken:

Step 1: Launch MATLAB


Step 2: Wait until the command window and command prompt appear. The command prompt is indicated by “>>”.
Step 3: Enter the matrices A, B, C and R, for example as follows:
>> A = [a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33] ;
>> B = [b11; b21; b31] ;
>> C = [c11 c12 c13] ;
>> R = [r] ;
where Q = C'*C
Step 4: Solve the associated ARE by using the "care" command as shown below.
>> [P, s, K] = care(A, B, C'*C, R)
where P = positive-definite symmetric solution matrix
s = poles of the closed-loop system
K = optimal control gain

Step 5: Press Enter after Step 4 and the solution to the LQR problem is displayed, giving
matrix P, the poles of the system (to determine stability) and the optimal control gain.

Example 8.0

Consider the system described by

ẋ = Ax + Bu, where A = [0 1; 0 -1] and B = [0; 1]

The performance index J is given by

J = ∫₀^∞ (x'Qx + u'Ru) dt

where Q = [1 0; 0 1] and R = [1]

Assume that the following control u is used:

u = -Kx

Determine the optimal feedback gain matrix K.

Solution

The solution to the problem is shown below in the MATLAB program 5.0


MATLAB Program 5.0: Solution to Example 8.0

>> A= [0 1;0 -1];


>> B= [0;1];
>> Q= [1 0;0 1];
>> R= [1];
>> [P,s,K]=care(A,B,Q,R)
P =
    2.0000    1.0000
    1.0000    1.0000
s =
   -1.0000
   -1.0000
K =
    1.0000    1.0000
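As a cross-check of Program 5.0, the same system can be fed to SciPy's `solve_continuous_are` (an illustrative aid assumed here, not MATLAB's `care` itself); it should reproduce the matrix P, the gain K and the double pole at −1 shown above.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                           # Q = [1 0; 0 1]
R = np.array([[1.0]])                   # R = [1]

P = solve_continuous_are(A, B, Q, R)    # expect [[2, 1], [1, 1]]
K = np.linalg.solve(R, B.T @ P)         # expect [[1, 1]]
poles = np.linalg.eigvals(A - B @ K)    # expect a double pole at -1

print(np.round(P, 4))
print(np.round(K, 4))
```

The closed-loop matrix A − BK = [0 1; −1 −2] has characteristic polynomial s² + 2s + 1, confirming the repeated pole at s = −1.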

Example 9.0

Solve the associated algebraic Riccati equation and find the optimal control gain of a system described by the state-space
equations:

ẋ = Ax + Bu and y = Cx

where A = [0 1 0; 0 0 1; 0 -2 -3], B = [0; 0; 1], C = [1 0 0], Q = C'*C and R = [1]
Solution


The MATLAB Program below shows the solution to the problem

MATLAB Program 6.0: Solution to Example 9.0

>> A=[0 1 0;0 0 1;0 -2 -3];


>> B=[0;0;1];
>> C=[1 0 0];
>> R=[1];
>> [P,s,K]=care(A,B,C'*C,R)
P =
    3.2870    3.4020    1.0000
    3.4020    4.1824    1.2870
    1.0000    1.2870    0.4020
s =
   -2.0198
   -0.6911 + 0.1321i
   -0.6911 - 0.1321i
K =
    1.0000    1.2870    0.4020

The matrix P is the positive-definite solution of the ARE, and the optimal control gain is K = [1.0000 1.2870 0.4020].

The poles of the system, s, all lie in the left half-plane; therefore, the system is stable.
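Program 6.0 can likewise be verified outside MATLAB. A short Python check using SciPy (again an assumed illustrative substitute, not part of the original paper) reproduces P and K and confirms that every closed-loop pole has a negative real part.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -2.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
Q = C.T @ C                             # Q = C'*C
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)         # expect [1.0000 1.2870 0.4020]
poles = np.linalg.eigvals(A - B @ K)

# Stability: every closed-loop pole must lie in the left half-plane
assert np.all(poles.real < 0)
print(np.round(K, 4))
```

Note that because B = [0; 0; 1] and R = 1, the gain K is simply the last row of P, which matches the output of Program 6.0.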

5.0 CONCLUSION

This paper focused on the essentials and applications of optimal control theory formulations. It gave theoretical, analytical
and computational solutions to linear quadratic regulator (LQR) problems with MATLAB simulations. It also gave an insight
into how mathematicians and engineers can improve satellite and spacecraft control technology through minimization of fuel
consumption, time and energy. There are still fertile areas yet to be explored in optimal control research, which continue to
attract the interest of modern mathematicians and researchers in related pure sciences.

5.1 Recommendation

The concept of linear quadratic regulators, or quadratic optimal control generally, should be revisited from a multidisciplinary
point of view to give a wider frame of reference and knowledge base, and to enable richer and more accurate policy
formulation. The results obtained can be improved through the use of modern simulation software. This will not only enhance
visualization of the solutions but also allow for deliberate error introduction for extreme-situation analysis and testing.

