Unesco - Eolss Sample Chapters: System Characteristics: Stability, Controllability, Observability
Jerzy Klamka
Institute of Automatic Control, Technical University, Gliwice, Poland
Contents
1. Introduction
2. Mathematical Model
3. Stability
4. Controllability
4.1. Fundamental Results
4.2. Stabilizability
4.3. Output Controllability
4.4. Controllability with Constrained Controls
4.5. Controllability After the Introduction of Sampling
4.6. Perturbations of Controllable Dynamical Systems
4.7. Minimum Energy Control
5. Observability
6. Conclusions
Glossary
Bibliography
Biographical Sketch
Summary
The first part of this article presents fundamental definitions and eigenvalue criteria for
stability. In the second part controllability of dynamical control systems is defined and,
using the controllability matrix, necessary and sufficient conditions for controllability
are presented. Additionally, the important case of controllability with constrained
controls is also discussed. The third part is devoted to a study of observability. In this
part necessary and sufficient observability conditions are formulated using the
observability matrix. In conclusion, several remarks concerning special cases of stability,
controllability, and observability of linear control systems are given. It should be noted
that all the results are given without proofs but with suitable literature references.
1. Introduction
The systematic study of controllability and observability was started at the beginning of the 1960s, when the theory of
controllability and observability based on a description in the form of state space for
both time-invariant and time-varying linear control systems was worked out. The
concept of stability is extremely important, because almost every workable control
system is designed to be stable. If a control system is not stable, it is usually of no use in
practice. Many dynamical systems are such that the control does not affect the complete
state of the dynamical system but only a part of it. On the other hand, in real industrial
processes it is very often possible to observe only a certain part of the complete state of
the dynamical system. Therefore, it is very important to determine whether or not
control and observation of the complete state of the dynamical system are possible.
Roughly speaking, controllability generally means that it is possible to steer a
dynamical system from an arbitrary initial state to an arbitrary final state using the set of
admissible controls. On the other hand, observability means that it is possible to recover
the initial state of the dynamical system from knowledge of the input and output.
Stability, controllability, and observability play an essential role in the development of
modern mathematical control theory. There are important relationships between stability,
controllability, and observability of linear control systems. Controllability and
observability are also strongly connected with the theory of minimal realization of linear
time-invariant control systems. It should be pointed out that a formal duality exists
between the concepts of controllability and observability.
In the literature there are many different definitions of stability, controllability, and
observability, which depend on the type of dynamical control system. The main purpose
of this article is to present a compact review of the existing stability, controllability, and
observability results for linear time-invariant control systems. It should be noted that for
linear control systems, stability, controllability, and
observability conditions have pure algebraic forms and are fairly easily computable.
2. Mathematical Model
In the theory of linear time-invariant dynamical control systems the most popular and
the most frequently used mathematical model is given by the following differential state
equation and algebraic output equation:

x'(t) = Ax(t) + Bu(t)   (1)

y(t) = Cx(t)   (2)
where x(t)∈Rn is a state vector, u(t)∈Rm is an input vector, y(t)∈Rp is an output vector,
and A, B, and C are real matrices of appropriate dimensions.
It is well known that for a given initial state x(0)∈Rn and control u(t)∈Rm, t≥0, there
exists a unique solution x(t; x(0), u)∈Rn of the state equation (1) of the following form:

x(t; x(0), u) = exp(At) x(0) + ∫₀ᵗ exp(A(t − s)) B u(s) ds
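The variation-of-constants formula above can be checked numerically. The sketch below uses an illustrative pair of matrices A and B (an assumption, not taken from the text) and scipy's matrix exponential, approximating the integral term with the trapezoidal rule:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential exp(At)

# Illustrative system (an assumption, not from the text): x'(t) = Ax(t) + Bu(t)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])

def solution(t, x0, u, steps=2000):
    """x(t; x(0), u) = exp(At) x(0) + integral_0^t exp(A(t-s)) B u(s) ds,
    with the integral approximated by the trapezoidal rule."""
    s = np.linspace(0.0, t, steps)
    f = np.stack([expm(A * (t - si)) @ (B @ u(si)) for si in s])
    ds = s[1] - s[0]
    integral = (f.sum(axis=0) - 0.5 * (f[0] + f[-1])) * ds  # trapezoid rule
    return expm(A * t) @ x0 + integral

# Constant control u = 1: since A is stable, the state approaches -A^{-1}B = [0.5, 0]^T
x5 = solution(5.0, x0, lambda s: np.array([1.0]))
```

With u = 0 the formula reduces to the free response exp(At) x(0), which gives a convenient sanity check for the quadrature.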
Let P be an n × n constant nonsingular matrix and consider the equivalence
transformation z(t) = Px(t). Then the state equation (1) and output equation (2) become

z'(t) = PAP⁻¹ z(t) + PB u(t)   (3)

y(t) = CP⁻¹ z(t)   (4)
Dynamical systems (1), (2) and (3), (4) are said to be equivalent, and many of their
properties are invariant under the equivalence transformation. In particular, the matrix P
may be chosen so that J = PAP⁻¹ is in the Jordan canonical form, which leads to the
so-called Jordan canonical form of the dynamical system. If the matrix J is in the Jordan
canonical form, then Eqs. (3) and (4) are said to be in a Jordan canonical form. It should
be stressed that every dynamical system (1), (2) has an equivalent Jordan canonical form.
3. Stability
In order to introduce the stability definitions we need the concept of an equilibrium state.

Definition 1: A state xe∈Rn is said to be an equilibrium state of the dynamical system
(1) if x(t; xe, 0) = xe for all t≥0.

We see from this definition that if a trajectory reaches an equilibrium state and if no
input is applied, the trajectory will stay at the equilibrium state forever. Clearly, for
linear dynamical systems the zero state is always an equilibrium state.
Definition 2: An equilibrium state xe is said to be stable if and only if for any positive ε
there exists a positive number δ(ε) such that ||x(0) − xe|| ≤ δ implies that
||x(t; x(0), 0) − xe|| ≤ ε for all t≥0.
Roughly speaking, an equilibrium state xe is stable if the response due to any initial state
that is sufficiently near to xe will not move far away from xe. If, in addition, the
response goes back to xe, then xe is said to be asymptotically stable.
Let si = Re(si) + jIm(si), i=1,2,3,...,r, r≤n denote the distinct eigenvalues of the matrix A
and let “Re” and “Im” stand for the real part and the imaginary part of the eigenvalue si,
respectively.
Theorem 1: Every equilibrium state of the dynamical system (1) is stable if and only if
all the eigenvalues of A have nonpositive (negative or zero) real parts, i.e., Re(si)≤0 for
i=1,2,3,...,r and those with zero real parts are simple zeros of the minimal polynomial of
A.
Theorem 2: The zero state of the dynamical system (1) is asymptotically stable if and
only if all the eigenvalues of A have negative real parts, i.e., Re(si)<0 for i=1,2,3,...,r.
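Theorems 1 and 2 reduce stability analysis to an eigenvalue computation. A minimal sketch, with illustrative matrices that are assumptions rather than part of the text:

```python
import numpy as np

def is_asymptotically_stable(A, tol=1e-9):
    """Theorem 2: all eigenvalues of A must have negative real parts."""
    return bool(np.all(np.linalg.eigvals(A).real < -tol))

# Illustrative matrices (assumptions, not from the text):
A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2: asymptotically stable
A_marginal = np.array([[0.0, 1.0], [-1.0, 0.0]])  # eigenvalues +j, -j: stable (Theorem 1) but
                                                  # not asymptotically stable (Theorem 2)
```

For the marginal case, Theorem 1 additionally requires the purely imaginary eigenvalues to be simple zeros of the minimal polynomial, which holds here.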
From the above theorems it directly follows that the stability and asymptotic stability of
a dynamical system depend only on the matrix A and are independent of the matrices B
and C. Suppose that the dynamical system (1) is stable or asymptotically stable; then the
dynamical system remains stable or asymptotically stable after an arbitrary equivalence
transformation. This is natural and intuitively clear because an equivalence
transformation changes only the basis of the state space. Therefore, we have the
following corollary.
Corollary 1: Stability and asymptotic stability are both invariant under any equivalence
transformation.
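Corollary 1 can be illustrated numerically: a similarity transformation PAP⁻¹ leaves the spectrum, and hence stability, unchanged. A sketch with an assumed example matrix and a random nonsingular P:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative matrix, eigenvalues -1, -2

rng = np.random.default_rng(0)
P = rng.standard_normal((2, 2))
while abs(np.linalg.det(P)) < 1e-3:        # make sure P is nonsingular
    P = rng.standard_normal((2, 2))

A_equiv = P @ A @ np.linalg.inv(P)         # state matrix after z(t) = Px(t)
eig_A = np.sort(np.linalg.eigvals(A).real)
eig_equiv = np.sort(np.linalg.eigvals(A_equiv).real)
```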
4. Controllability
4.1. Fundamental Results

Let us recall the most popular and frequently used fundamental definition of
controllability for linear control systems with constant coefficients: dynamical system (1)
is said to be controllable if for any initial state x(0)∈Rn and any final state x1∈Rn there
exist a finite time t1<∞ and a control u(t)∈Rm, t∈[0,t1], such that x(t1; x(0), u) = x1.

This definition requires only that any initial state x(0) can be steered to any final state x1.
The trajectory of the system is not specified. Furthermore, there are no constraints
imposed on the control. In order to formulate easily computable algebraic controllability
criteria let us introduce the so-called controllability matrix W defined as follows:
W = [B,AB,A2B,...,An-1B].
Theorem 3: Dynamical system (1) is controllable if and only if

rank W = n.
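The rank test above is straightforward to implement. A minimal sketch; the example pair (A, B), a chain of two integrators, is an illustrative assumption:

```python
import numpy as np

def controllability_matrix(A, B):
    """W = [B, AB, A^2 B, ..., A^{n-1} B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """Controllability criterion: rank W = n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Illustrative pair (not from the text): a chain of two integrators
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
```

Replacing B with [1, 0]ᵀ makes the second state unreachable and the rank test fails, which matches the intuition that the input must excite every mode.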
Corollary 2: Dynamical system (1) is controllable if and only if the n × n-dimensional
symmetric matrix WWT is nonsingular.
Since the controllability matrix W does not depend on the time t1, it follows directly
from Theorem 3 and Corollary 2 that the controllability of a dynamical system does
not depend on the length of control interval. Let us observe that in many cases, in order
to check controllability it is not necessary to calculate the controllability matrix W but
only a matrix with a smaller number of columns. It depends on the rank of the matrix B
and the degree of the minimal polynomial of the matrix A, where the minimal
polynomial is the polynomial of the lowest degree that annihilates matrix A. This is
based on the following corollary.
Corollary 3: Let rank B = r and let q be the degree of the minimal polynomial of the
matrix A. Then dynamical system (1) is controllable if and only if
rank[B,AB,A2B,...,Ak-1B] = n, where k = min(n − r + 1, q).
In a case where the eigenvalues of the matrix A, si, i=1,2,3,...,n are known, we can
check controllability using the following corollary.

Corollary 4: Dynamical system (1) is controllable if and only if rank[siI − A, B] = n for
every eigenvalue si, i=1,2,3,...,n, of the matrix A.
Suppose that the dynamical system (1) is controllable; then the dynamical system
remains controllable after the equivalence transformation. This is natural and intuitively
clear because an equivalence transformation changes only the basis of the state space.
Therefore, we have the following corollary.

Corollary 5: Controllability is invariant under any equivalence transformation.
4.2. Stabilizability
It is well known that the controllability concept for dynamical system (1) is strongly
related to its stabilizability by the linear static state feedback of the following form:

u(t) = Kx(t) + v(t)   (5)

where K is an m × n constant feedback matrix and v(t)∈Rm is a new control input.
Introducing the linear static state feedback given by equality (5) we directly obtain the
linear differential state equation for the feedback linear dynamical system of the
following form
x'(t) = (A + BK)x(t) + Bv(t) (6)
Dynamical system (1) is said to be stabilizable if there exists a constant state feedback
matrix K such that the dynamical system (6) is asymptotically stable.

Corollary 6: Dynamical system (1) is controllable if and only if for an arbitrary matrix
K the dynamical system (6) is controllable.
From Corollary 6 it follows that under the controllability assumption we can arbitrarily
form the spectrum of the dynamical system (1) by the introduction of a suitably defined
linear static state feedback (5). Hence, we have the following result.
Theorem 4: The pair of matrices (A,B) represents the controllable dynamical system (1)
if and only if for each set Λ consisting of n complex numbers, symmetric with respect to
the real axis, there exists a constant state feedback matrix K such that the spectrum of
the matrix (A+BK) is equal to the set Λ.
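Theorem 4 is constructive in the single-input case: Ackermann's formula produces the required gain. The sketch below uses an illustrative double-integrator pair (an assumption, not from the text) and follows the A + BK sign convention of Eq. (6); note that many references and libraries use A − BK instead, which flips the sign of K:

```python
import numpy as np

def ackermann_gain(A, b, poles):
    """Single-input pole placement sketch (Ackermann's formula):
    return K such that the spectrum of A + bK equals the given poles."""
    n = A.shape[0]
    # Controllability matrix W = [b, Ab, ..., A^{n-1} b]
    W = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    # Desired characteristic polynomial coefficients (monic), via convolution
    coeffs = np.array([1.0])
    for p in poles:
        coeffs = np.convolve(coeffs, [1.0, -p])
    # Evaluate p(A) with Horner's scheme
    pA = np.zeros_like(A)
    for c in coeffs:
        pA = pA @ A + c * np.eye(n)
    e_n = np.zeros((1, n)); e_n[0, -1] = 1.0
    return -(e_n @ np.linalg.inv(W) @ pA)   # minus sign: A + bK convention of Eq. (6)

A = np.array([[0.0, 1.0], [0.0, 0.0]])      # illustrative double integrator
b = np.array([[0.0], [1.0]])
K = ackermann_gain(A, b, [-1.0, -2.0])      # place the closed-loop spectrum at {-1, -2}
```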
Let Re(sj)≥0 for j=1,2,3,...,q≤n; in other words, sj are the unstable eigenvalues of the
dynamical system (1). The following theorem gives an immediate relation between
controllability and stabilizability of the dynamical system (1).
Theorem 5: The dynamical system (1) is stabilizable if and only if all its unstable
modes are controllable; that is, rank[sjI − A, B] = n for all j=1,2,3,...,q.
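The condition of Theorem 5 can be tested with a rank check of [sI − A, B] restricted to the unstable eigenvalues. A sketch with assumed example matrices, where the unstable mode is reachable through one input matrix but not the other:

```python
import numpy as np

def is_stabilizable(A, B, tol=1e-9):
    """Theorem 5: every eigenvalue s with Re(s) >= 0 must satisfy
    rank [sI - A, B] = n."""
    n = A.shape[0]
    for s in np.linalg.eigvals(A):
        if s.real >= -tol:
            M = np.hstack([s * np.eye(n) - A, B.astype(complex)])
            if np.linalg.matrix_rank(M) < n:
                return False
    return True

# Illustrative (not from the text): unstable mode +1 and stable mode -1.
A = np.array([[1.0, 0.0], [0.0, -1.0]])
B_good = np.array([[1.0], [0.0]])   # input acts on the unstable state
B_bad = np.array([[0.0], [1.0]])    # input acts only on the stable state
```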
4.3. Output Controllability
Similar to the state controllability of a dynamical control system, it is possible to define
the so-called output controllability for the output vector y(t) of a dynamical system.
Although these two concepts are quite similar, it should be mentioned that the state
controllability is a property of the differential state equation (1), whereas the output
controllability is a property both of the state equation (1) and algebraic output equation
(2).
Definition 6: Dynamical system (1), (2) is said to be output controllable if for every y(0)
and every vector y1∈Rp, there exist a finite time t1 and a control u(t)∈Rm, t∈[0,t1], that
transfers the output from y(0) to y(t1) = y1.

Theorem 6: Dynamical system (1), (2) is output controllable if and only if

rank[CB,CAB,CA2B,...,CAn-1B] = p
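The output controllability rank condition is as easy to check as the state one. In the illustrative example below (assumed matrices, not from the text), the pair (A, B) is not state controllable, yet the single output y = x1 is output controllable:

```python
import numpy as np

def is_output_controllable(A, B, C):
    """rank [CB, CAB, ..., CA^{n-1}B] = p (number of outputs)."""
    n = A.shape[0]
    blocks, AB = [], B
    for _ in range(n):
        blocks.append(C @ AB)
        AB = A @ AB
    return np.linalg.matrix_rank(np.hstack(blocks)) == C.shape[0]

# Illustrative: two decoupled stable modes, input enters only the first,
# and the output reads only the first state.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
```

This illustrates the remark above: output controllability is a property of both Eqs. (1) and (2), and neither notion implies the other.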
It should be pointed out that the state controllability is defined only for the linear
differential state equation (1), whereas the output controllability is defined for the input-
output description, that is, it depends also on the linear algebraic output equation (2).
If the control system is output controllable, its output can be transferred to any desired
vector at a certain instant of time. A related problem is whether it is possible to steer the
output following a preassigned curve over any interval of time. A control system whose
output can be steered along the arbitrary given curve over any interval of time is said to
be output function controllable or functional reproducible.
4.4. Controllability with Constrained Controls

In practice admissible controls are required to satisfy additional constraints. Let U⊂Rm
be an arbitrary set and let the symbol M(U) denote the set of admissible controls, i.e.,
the set of controls u(t)∈U for t∈[0,∞).
Definition 7: The dynamical system (1) is said to be U-controllable to zero if for any
initial state x(0)∈Rn, there exist a finite time t1<∞ and an admissible control u(t)∈M(U),
t∈[0,t1], such that x(t1; x(0), u) = 0.
Definition 8: The dynamical system (1) is said to be U-controllable from zero if for any
final state x1∈Rn, there exist a finite time t1<∞ and an admissible control u(t)∈M(U),
t∈[0,t1], such that x(t1;0,u) = x1.
Definition 9: The dynamical system (1) is said to be U-controllable if for any initial
state x(0)∈Rn, and any final state x1∈Rn, there exist a finite time t1<∞ and an admissible
control u(t)∈M(U), t∈[0,t1], such that x(t1; x(0),u) = x1.
Generally, for an arbitrary set U it is rather difficult to give easily computable criteria for
constrained controllability. However, for certain special cases of the set U it is possible
to formulate and prove algebraic constrained controllability conditions.
Theorem 7: The dynamical system (1) is U-controllable to zero if and only if all the
following conditions are satisfied simultaneously:
1. There exists w∈U such that Bw=0.
2. The convex hull CH(U) of the set U has nonempty interior in the space Rm.
3. Rank[B,AB,A2B,...,An-1B] = n.
4. There is no real eigenvector v∈Rn of the matrix Atr satisfying vtrBw≤0 for all
w∈U.
For the single input system, that is, m=1, Theorem 7 reduces to the following corollary:
Corollary 7: Suppose that m=1 and U=[0,1]. Then the dynamical system (1) is U-
controllable to zero if and only if it is controllable without any constraints; that is,
rank[B,AB,A2B,...,An-1B] = n, and matrix A has only complex eigenvalues.
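Corollary 7 combines the unconstrained rank test with an eigenvalue condition. A minimal sketch, with the harmonic oscillator and the double integrator as illustrative single-input examples (assumptions, not from the text):

```python
import numpy as np

def u_controllable_to_zero_single_input(A, b, tol=1e-9):
    """Corollary 7 sketch (m = 1, U = [0, 1]): unconstrained controllability
    plus the requirement that A has no real eigenvalues."""
    n = A.shape[0]
    W = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    unconstrained = np.linalg.matrix_rank(W) == n
    no_real_eig = bool(np.all(np.abs(np.linalg.eigvals(A).imag) > tol))
    return unconstrained and no_real_eig

# Illustrative: a harmonic oscillator (eigenvalues +j, -j) passes the test,
# while a double integrator (real eigenvalues) does not.
A_osc = np.array([[0.0, 1.0], [-1.0, 0.0]])
A_int = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
```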
Theorem 8: Suppose the set U is a cone with vertex at zero and a nonempty interior in
the space Rm. Then the dynamical system (1) is U-controllable from zero if and only if
1. Rank[B,AB,A2B,...,An-1B] = n.
2. There is no real eigenvector v∈Rn of the matrix Atr satisfying vtrBw≤0 for all
w∈U.
For the single input system, that is, m=1, Theorem 8 reduces to the following corollary.
Corollary 8: Suppose that m=1 and U=[0,1]. Then the dynamical system (1) is U-
controllable from zero if and only if it is controllable without any constraints; in other
words, rank[B,AB,A2B,...,An-1B] = n, and matrix A has only complex eigenvalues.
Bibliography
Chen C.T. (1970). Introduction to Linear System Theory. New York: Holt, Rinehart and Winston. [This
monograph presents controllability, observability and duality results for linear finite dimensional
dynamical systems.]
Kaczorek T. (1993). Linear Control Systems. New York: Research Studies Press and John Wiley. [This
monograph presents controllability, observability and duality results for continuous and discrete linear
dynamical systems.]
Kaczorek T. (2002). Positive 1D and 2D Systems. London: Springer-Verlag. [This monograph presents
controllability, observability and duality results for continuous and discrete positive linear dynamical
systems.]
Klamka J. (1991). Controllability of Dynamical Systems. Dordrecht: Kluwer Academic. [This monograph
contains many controllability results for different types of continuous and discrete dynamical systems.]
Klamka J. (1993). Controllability of dynamical systems-a survey. Archives of Control Sciences 2, 281–
307. [This work presents a comprehensive discussion on controllability problems for different types of
dynamical systems.]
Biographical Sketch
Jerzy Klamka was born in Poland in 1944. He received M.Sc. and Ph.D. degrees in control engineering
from the Silesian Technical University in Gliwice, Poland, in 1968 and 1974, respectively. He also
received M.Sc. and Ph.D. degrees in mathematics from the Silesian University in Katowice, Poland, in
1971 and 1978, respectively. In 1981 he received his habilitation in control engineering, and
in 1990 he received the title of professor in control engineering from the Silesian Technical
University in Gliwice.
Since 1968 he has been working for the Institute of Control Engineering of the Silesian Technical
University in Gliwice, where he is now a full professor. In 1973 and 1980 he taught semester courses in
mathematical control theory at the Stefan Banach International Mathematical Center in Warsaw.
He has been a member of the American Mathematical Society (AMS) since 1976, and of the
Polish Mathematical Society (PTM) since 1982. He is also a permanent reviewer for Mathematical Reviews
(from 1976) and for Zentralblatt für Mathematik (from 1982). In 1981 and 1991 he was awarded the
Polish Academy of Sciences awards. In 1978, 1982, and 1990 he received the awards of the Ministry of
Education, and in 1994 he was awarded the Polish Mathematical Society award.
In 1991 he published the monograph Controllability of Dynamical Systems, (Kluwer Academic
Publishers, Dordrecht, the Netherlands). In the last 30 years he has published more than 100 papers in
international journals, for example, in: IEEE Transactions on Automatic Control, Automatica,
International Journal of Control, Journal of Mathematical Analysis and Applications, Systems and
Control Letters, Foundations of Control Engineering, Systems Science, Kybernetika, IMA Journal on
Mathematical Control and Information, Nonlinear Analysis, Theory, Methods and Applications, Systems
Analysis, Modeling, Simulation, Archives of Control Science, Applied Mathematics and Computer
Science, Advances in Systems Science and Applications, Bulletin of the Polish Academy of Sciences,
Mathematical Population Dynamics, Lecture Notes in Control and Information Sciences, Analele
Universitati din Timisoara, and Acta Mathematicae Silesiane. He has taken part in many international
Congresses, Conferences, and Symposiums.
His major current interest is controllability theory for linear and nonlinear dynamical systems, and in
particular controllability of distributed parameter systems, dynamical systems with delays, and
multidimensional discrete systems.