Appendices PDF

Download as pdf or txt
Download as pdf or txt
You are on page 1of 57

Appendix A

NOTATION, SYMBOLS, AND


ACRONYMS

Notation Meaning
t Continuous-time variable
f (t) Continuous-time signal
k Discrete-time variable
{f [k]} Discrete-time sequence
Sampling period
f (k) Sampled version of f (t)
Delta operator
q forward shift operator
K (k) Kronecker delta
(t) Dirac delta
E{...} Expected value of ...
c Controllability matrix in state space description
o Observability matrix in state space description
{...} Set of eigenvalues of matrix ...
(t to ) unit step (continuous time) at time t = to
[k ko ] unit step (discrete time) at time k = ko
f s (t) Dirac impulse-sampled version of f (t)
F [...] Fourier transform of ...
L [...] Laplace transform of ...
D [...] Delta-transform of ...
Z [...] Z-transform of ...
F 1 [...] inverse Fourier transform of ...
L1 [...] inverse Laplace transform of ...
D1 [...] inverse Delta-transform of ...
continued on next page

888
889

continued from previous page


Notation Meaning
Z 1 [...] inverse Z-transform of ...
s Laplace-transform complex variable
angular frequency
Delta-transform complex variable
z Z-transform complex variable
F (j) Fourier transform of f (t)
F (s) Laplace transform of f (t)
F () Delta-transform of {f [k]}
Fq (z) Z-transform of {f [k]}
f1 (t) f2 (t) Time convolution of f1 (t) and f2 (t)
F1 (s) F2 (s) Complex convolution of F1 (s) and F2 (s)
<{...} real part of ...
={...} imaginary part of ...
Cmn set of all m n matrices with complex entries
H2 Hilbert space of those functions square-integrable along the
imaginary axis and analytic in the right-half plane
L1 Hilbert space of those functions absolutely integrable along
the imaginary axis
L2 Hilbert space of those functions square-integrable along the
imaginary axis.
H Hilbert space of those functions bounded along the imagi-
nary axis and analytic in the right-half plane
RH Hilbert space of those rational functions bounded along
the imaginary axis and analytic in the right-half plane
L Hilbert space of those functions bounded along the imagi-
nary axis.
N set of all natural numbers
R+ set of real numbers larger than zero
R set of real numbers smaller than zero
Rmn set of all m n matrices with real entries
S set of all real rational functions with (finite) poles strictly
inside the LHP
Z set of all integer numbers
[ik ] Matrix where the element in the ith row and k th column
is denoted by ik
[A]ik element in the ith row and k th of matrix A
[A]i ith row of matrix A
[A]k k th columm of matrix A
continued on next page
890 Notation, Symbols, and Acronyms Appendix A

continued from previous page


Notation Meaning
(...) complex conjugate of ...
Gh0 (s) transfer function of a zero-order hold
Gh1 (s) transfer function of a first-order hold
Hh...i operator notation, i.e. H operates on ...
H1 H2 h...i composite operators, i.e., H1 hH2 h...ii
Ik identity matrix in Rkk
d.c. direct current, i.e., zero-frequency signal
d.o.f. degrees of freedom
CTARE Continuous-Time Algebraic Riccati Equation
DTARE Discrete-Time Algebraic Riccati Equation
CTDRE Continuous-Time Dynamic Riccati Equation
DTDRE Discrete-Time Dynamic Riccati Equation
IMC Internal Model Control
IMP Internal Model Principle
LHP left half-plane
OLHP open left half-plane
RHP right half- plane
ORHP open right-half plane
NMP nonminimum phase
MFD Matrix fraction description
LMFD Left matrix fraction description
RMFD Right matrix fraction description
LTI Linear time invariant
LQR Linear quadratic regulator
w.r.t with respect to ...

Table A.1. Notation, symbols and acronyms


Appendix B

SMITHMCMILLAN FORMS

B.1 Introduction
SmithMcMillan forms correspond to the underlying structures of natural MIMO
transfer-function matrices. The key ideas are summarized below.

B.2 Polynomial Matrices


Multivariable transfer functions depend on polynomial matrices. There are a num-
ber of related terms that are used. Some of these are introduced here:
Definition B.1. A matrix (s) = [pik (s)] Rn1 n2 is a polynomial matrix if
pik (s) is a polynomial in s, for i = 1, 2, . . . , n1 and k = 1, 2, . . . , n2 .
Definition B.2. A polynomial matrix (s) is said to be a unimodular matrix
if its determinant is a constant. Clearly, the inverse of a unimodular matrix is also
a unimodular matrix.
Definition B.3. An elementary operation on a polynomial matrix is one of the
following three operations:
(eo1) interchange of two rows or two columns;
(eo2) multiplication of one row or one column by a constant;
(eo3) addition of one row (column) to another row (column) times a polynomial.
Definition B.4. A left (right) elementary matrix is a matrix such that, when
it multiplies from the left (right) a polynomial matrix, then it performs a row (col-
umn) elementary operation on the polynomial matrix. All elementary matrices are
unimodular.
Definition B.5. Two polynomial matrices 1 (s) and 2 (s) are equivalent ma-
trices, if there exist sets of left and right elementary matrices, {L 1 (s), L2 (s), . . . , Lk1 }
and {R1 (s), R2 (s), . . . , Rk2 }, respectively, such that
1 (s) = Lk1 (s) L2 (s)L1 2 (s)R1 (s)R2 (s) Rk2 (B.2.1)

891
892 SmithMcMillan Forms Appendix B

Definition B.6. The rank of a polynomial matrix is the rank of the matrix
almost everywhere in s. The definition implies that the rank of a polynomial matrix
is independent of the argument.
Definition B.7. Two polynomial matrices V(s) and W(s) having the same number
of columns (rows) are right (left) coprime if all common right (left) factors are
unimodular matrices.
Definition B.8. The degree ck (rk ) of the k th column (row) [V(s)]k ( [V(s)]k )
of a polynomial matrix V(s) is the degree of highest power of s in that column (row).
Definition B.9. A polynomial matrix V(s) Cmm is column proper if

lim det(V(s) diag sc1 , sc2 , . . . , scm )



(B.2.2)
s

has a finite, nonzero value.


Definition B.10. A polynomial matrix V(s) Cmm is row proper if
lim det(diag sr1 , sr2 , . . . , srm V(s))

(B.2.3)
s

has a finite, nonzero value.

B.3 Smith Form for Polynomial Matrices


Using the above notation, we can manipulate polynomial matrices in ways that
mirror the ways we manipulate matrices of reals. For example, the following result
describes a diagonal form for polynomial matrices.
Theorem B.1 (Smith form). Let (s) be a m1 m2 polynomial matrix of rank
r; then (s) is equivalent to either a matrix f (s) (for m1 < m2 ) or to a matrix
c (s) (for m2 < m1 ), with

 
  E(s)
f (s) = E(s) f ; c (s) = (B.3.1)
c
E(s) = diag(1 (s), . . . , r (s), 0, . . . , 0) (B.3.2)
where f and c are matrices with all their elements equal to zero.
Furthermore i (s) are monic polynomials for i = 1, 2, . . . , r, such that i (s) is a
factor in i+1 (s), i.e. i (s) divides i+1 (s).
If m1 = m2 , then (s) is equivalent to the square matrix E(s).

Proof (by construction)


(i) By performing row and column interchange operations on (s), bring to posi-
tion (1,1) the least degree polynomial entry in (s). Say this minimum degree
is 1
Section B.3. Smith Form for Polynomial Matrices 893

(ii) Using elementary operation (e03) (see definition B.3), reduce the term in the
position (2,1) to degree 2 < 1 . If the term in position (2,1) becomes zero,
then go to the next step, otherwise, interchange rows 1 and 2 and repeat the
procedure until the term in position (2,1) becomes zero.
(iii) Repeat step (ii) with the other elements in the first column.
(iv) Apply the same procedure to all the elements but the first one in the first row.
(v) Go back to step (ii) if nonzero entries due to step (iv) appear in the first
column. Notice that the degree of the entry (1,1) will fall in each cycle, until
we finally end up with a matrix which can be partitioned as

(j)
11 (s) 0 0 ... 0 0
0

0
(s) = . (B.3.3)

..

j (s)

0
0

(j)
where 11 (s) is a monic polynomial.
(j)
(vi) If there is an element of j (s) which is of lesser degree than 11 (s), then
add the column where this element is to the first column and repeat steps (ii)
(j)
to (v). Do this until the form (B.3.3) is achieved with 11 (s) of less or, at
most, equal degree to that of every element in j (s). This will yield further
reduction in the degree of the entry in position (1,1).
(j)
(vii) Make 1 (s) = 11 (s).
(viii) Repeat the procedure from steps (i) through (viii) to matrix j (s).

Actually the polynomials i (s) in the above result can be obtained in a direct
fashion, as follows:

(i) Compute all minor determinants of (s).


(ii) Define i (s) as the (monic) greatest common divisor (g.c.d.) of all i i minor
determinants of (s). Make 0 (s) = 1.
(iii) Compute the polynomials i (s) as

i (s)
i (s) = (B.3.4)
i1 (s)
894 SmithMcMillan Forms Appendix B

B.4 SmithMcMillan Form for Rational Matrices


A straightforward application of Theorem B.1 leads to the following result, which
gives a diagonal form for a rational transfer-function matrix:
Theorem B.2 (SmithMcMillan form). Let G(s) = [Gik (s)] be an m m ma-
trix transfer function, where Gik (s) are rational scalar transfer functions:

(s)
G(s) = (B.4.1)
DG (s)
where (s) is an mm polynomial matrix of rank r and DG (s) is the least common
multiple of the denominators of all elements Gik (s).
Then, G(s) is equivalent to a matrix M(s), with

 
1 (s) r (s)
M(s) = diag ,... , , 0, . . . , 0 (B.4.2)
1 (s) r (s)
where {i (s), i (s)} is a pair of monic and coprime polynomials for i = 1, 2, . . . , r.
Furthermore, i (s) is a factor of i+1 (s) and i (s) is a factor of i1 (s).

Proof
We write the transfer-function matrix as in (B.4.1). We then perform the algorithm
outlined in Theorem B.1 to convert (s) to Smith normal form. Finally, canceling
terms for the denominator DG (s) leads to the form given in (B.4.2).
222
We use the symbol GSM (s) to denote M(s), which is the SmithMcMillan form
of the transfer-function matrix G(s) .
We illustrate the formula of the SmithMcMillan form by a simple example.
Example B.1. Consider the following transfer-function matrix


4 1
(s + 1)(s + 2) s+1

G(s) =


(B.4.3)
2 1
s+1 2(s + 1)(s + 2)
We can then express G(s) in the form (B.4.1):

4 (s + 2)
" #
(s)
G(s) = ; (s) = 1 ; DG (s) = (s + 1)(s + 2)
DG (s) 2(s + 2)
2
(B.4.4)
Section B.5. Poles and Zeros 895

The polynomial matrix (s) can be reduced to the Smith form defined in Theorem
B.1. To do that, we first compute its greatest common divisors:

0 = 1 (B.4.5)
 
1
1 = gcd 4; (s + 2); 2(s + 2); =1 (B.4.6)
2
2 = gcd{2s2 + 8s + 6} = s2 + 4s + 3 = (s + 1)(s + 3) (B.4.7)

This leads to
1 2
1 = = 1; 2 = = (s + 1)(s + 3) (B.4.8)
0 1

From here, the SmithMcMillan form can be computed to yield

1

0
SM (s + 1)(s + 2)
G (s) = (B.4.9)

s + 3
0
s+2

B.5 Poles and Zeros


The SmithMcMillan form can be utilized to give an unequivocal definition of poles
and zeros in the multivariable case. In particular, we have:

Definition B.11. Consider a transfer-function matrix , G(s).

(i) pz (s) and pp (s) are said to be the zero polynomial and the pole polynomial
of G(s), respectively, where

4 4
pz (s) = 1 (s)2 (s) r (s); pp (s) = 1 (s)2 (s) r (s) (B.5.1)

and where 1 (s), 2 (s), . . . , r (s) and 1 (s), 2 (s), . . . , r (s) are the polyno-
mials in the SmithMcMillan form, GSM (s) of G(s).
Note that pz (s) and pp (s) are monic polynomials.

(ii) The zeros of the matrix G(s) are defined to be the roots of p z (s), and the poles
of G(s) are defined to be the roots of pp (s).

(iii) The McMillan degree of G(s) is defined as the degree of pp (s).

In the case of square plants (same number of inputs as outputs), it follows that
det[G(s)] is a simple function of pz (s) and pp (s). Specifically, we have
896 SmithMcMillan Forms Appendix B

pz (s)
det[G(s)] = K (B.5.2)
pp (s)

Note, however, that pz (s) and pp (s) are not necessarily coprime. Hence, the
scalar rational function det[G(s)] is not sufficient to determine all zeros and poles
of G(s). However, the relative degree of det[G(s)] is equal to the difference between
the number of poles and the number of zeros of the MIMO transfer-function matrix.

B.6 Matrix Fraction Descriptions (MFD)


A model structure that is related to the SmithMcMillan form is that of a matrix
fraction description (MFD). There are two types, namely a right matrix fraction
description (RMFD) and a left matrix fraction description (LMFD).
We recall that a matrix G(s) and its Smith-McMillan form GSM (s) are equiv-
alent matrices. Thus, there exist two unimodular matrices, L(s) and R(s), such
that

GSM (s) = L(s)G(s)R(s) (B.6.1)

This implies that if G(s) is an m m proper transfer-function matrix, then



there exist a m m matrix L(s)
and an m m matrix R(s), such as


G(s) = L(s)G SM
(s)R(s) (B.6.2)


where L(s)
and R(s) are, for example, given by


L(s) = [L(s)]1 ;
R(s) = [R(s)]1 (B.6.3)

We next define the following two matrices:

4
N(s) = diag(1 (s), . . . , r (s), 0, . . . , 0) (B.6.4)
4
D(s) = diag(1 (s), . . . , r (s), 1, . . . , 1) (B.6.5)

where N(s) and D(s) are m m matrices. Hence, GSM (s) can be written as

GSM (s) = N(s)[D(s)]1 (B.6.6)

Combining (B.6.2) and (B.6.6), we can write


Section B.6. Matrix Fraction Descriptions (MFD) 897


G(s) = L(s)N(s)[D(s)] 1
R(s) = [L(s)N(s)][[
R(s)] 1
D(s)]1 = GN (s)[GD (s)]1
(B.6.7)

where

4 4

GN (s) = L(s)N(s);
GD (s) = [R(s)] 1
D(s) (B.6.8)

Equations (B.6.7) and (B.6.8) define what is known as a right matrix fraction
description (RMFD).
It can be shown that GD (s) is always column-equivalent to a column proper ma-
trix P(s). (See definition B.9.) This implies that the degree of the pole polynomial
pp (s) is equal to the sum of the degrees of the columns of P(s).
We also observe that the RMFD is not unique, because, for any nonsingular
m m matrix (s), we can write G(s) as

G(s) = GN (s)(s)[GD (s)(s)]1 (B.6.9)

where (s) is said to be a right common factor. When the only right common
factors of GN (s) and GD (s) are unimodular matrices, then, from definition B.7,
we have that GN (s) and GD (s) are right coprime. In this case, we say that the
RMFD (GN (s), GD (s)) is irreducible.
It is easy to see that when a RMFD is irreducible, then

s = z is a zero of G(s) if and only if GN (s) loses rank at s = z; and

s = p is a pole of G(s) if and only if GD (s) is singular at s = p. This means


that the pole polynomial of G(s) is pp (s) = det(GD (s)).

Remark B.1. A left matrix fraction description (LMFD) can be built similarly,
with a different grouping of the matrices in (B.6.7). Namely,


G(s) = L(s)[D(s)] 1
N(s)R(s)
= [D(s)[L(s)] 1 1
] [N(s)R(s)] = [GD (s)]1 GN (s)
(B.6.10)

where

4 4

GN (s) = N(s)R(s);
GD (s) = D(s)[L(s)] 1
(B.6.11)

222
898 SmithMcMillan Forms Appendix B

The left and right matrix descriptions have been initially derived starting from
the SmithMcMillan form. Hence, the factors are polynomial matrices. However,
it is immediate to see that they provide a more general description. In particular,
GN (s), GD (s), GN (s) and GN (s) are generally matrices with rational entries. One
possible way to obtain this type of representation is to divide the two polynomial
matrices forming the original MFD by the same (stable) polynomial.
An example summarizing the above concepts is considered next.
Example B.2. Consider a 2 2 MIMO system having the transfer function


4 0.5
(s + 1)(s + 2) s+1

G(s) =


(B.6.12)
1 2
s+2 (s + 1)(s + 2)
B.2.1 Find the SmithMcMillan form by performing elementary row and column
operations.
B.2.2 Find the poles and zeros.
B.2.3 Build a RMFD for the model.

Solution
B.2.1 We first compute its SmithMcMillan form by performing elementary row
and column operations. Referring to equation (B.6.1), we have that

1

0
(s + 1)(s + 2)
GSM (s) = L(s)G(s)R(s) =

2
s + 3s + 18
(B.6.13)
0
(s + 1)(s + 2)

with

1 s+2

0 1
L(s) = 4 ; R(s) = 8 (B.6.14)
2(s + 1) 8 0 1

B.2.2 We see that the observable and controllable part of the system has zero and
pole polynomials given by

pz (s) = s2 + 3s + 18; pp (s) = (s + 1)2 (s + 2)2 (B.6.15)


Section B.6. Matrix Fraction Descriptions (MFD) 899

which, in turn, implies that there are two transmission zeros, located at 1.5
j3.97, and four poles, located at 1, 1, 2 and 2.
B.2.3 We can now build a RMFD by using (B.6.2). We first notice that

s+2

4 0

L(s) = [L(s)]1 = 1 ;

R(s) = [R(s)]1 = 1 8
s+1 0 0
8
(B.6.16)

Then, using (B.6.6), with

" # " #
1 0 (s + 1)(s + 2) 0
N(s) = ; D(s) =
0 s2 + 3s + 18 0 (s + 1)(s + 2)
(B.6.17)

the RMFD is obtained from (B.6.7), (B.6.16), and (B.6.17), leading to

4 0

4 0  
1 0
GN (s) = 1 0 s2 + 3s + 18 =

s2 + 3s + 18

s+1 s+1
8 8
(B.6.18)

and

s + 2 "(s + 1)(s + 2)
#
0
GD (s) = 1 8 (B.6.19)
0 1 0 (s + 1)(s + 2)

(s + 1)(s + 2)2

(s + 1)(s + 2) 8
= (B.6.20)


0 (s + 1)(s + 2)

These can then be turned into proper transfer-function matrices by introducing


common stable denominators.
222
Appendix C

RESULTS FROM ANALYTIC


FUNCTION THEORY

C.1 Introduction
This appendix summarizes key results from analytic function theory leading to the
Cauchy Integral formula and its consequence, the PoissonJensen formula.

C.2 Independence of Path


Consider functions of two independent variables, x and y. (The reader can think of
x as the real axis and y as the imaginary axis.)
Let P (x, y) and Q(x, y) be two functions of x and y, continuous in some domain
D. Say we have a curve C in D, described by the parametric equations

x = f1 (t), y = f2 (t) (C.2.1)

We can then define the following line integrals along the path C from point A
to point B inside D.

B t2
df1 (t)
Z Z
P (x, y)dx = P (f1 (t), f2 (t)) dt (C.2.2)
A t1 dt
Z B Z t2
df2 (t)
Q(x, y)dy = Q(f1 (t), f2 (t)) dt (C.2.3)
A t1 dt
R
Definition C.1. The line integral P dx + Qdy is said to be independent of the
path in D if, for every pair of points A and B in D, the value of the integral is
independent of the path followed from A to B.

222
We then have the following result.

901
902 Results From Analytic Function Theory Appendix C

R
Theorem C.1. If P dx + Qdy is independent of the path in D, then there exists
a function F (x, y) in D such that
F F
= P (x, y); = Q(x, y) (C.2.4)
x y
hold throughout
R D. Conversely, if a function F (x, y) can be found such that (C.2.4)
hold, then P dx + Qdy is independent of the path.

Proof
Suppose that the integral is independent of the path in D. Then, choose a point
(x0 , y0 ) in D and let F (x, y) be defined as follows

Z x,y
F (x, y) = P dx + Qdy (C.2.5)
x0 ,y0

where the integral is taken on an arbitrary path in D joining (x0 , y0 ) and (x, y).
Because the integral is independent of the path, the integral does indeed depend
only on (x, y) and defines the function F (x, y). It remains to establish (C.2.4).

(x1 , y) (x, y)

(x0 , y0 )

Figure C.1. Integration path

For a particular (x, y) in D, choose (x1 , y) so that x1 6= x and so that the


line segment from (x1 , y) to (x, y) in D is as shown in Figure C.1. Because of
independence of the path,

Z x1 ,y Z x,y
F (x, y) = (P dx + Qdy) + (P dx + Qdy) (C.2.6)
x0 ,y0 x1 ,y

We think of x1 and y as being fixed while (x, y) may vary along the horizontal
line segment. Thus F (x, y) is being considered as function of x. The first integral
on the right-hand side of (C.2.6) is then independent of x.
Hence, for fixed y, we can write
Section C.2. Independence of Path 903

Z x
F (x, y) = constant + P (x, y)dx (C.2.7)
x1

The fundamental theorem of Calculus now gives

F
= P (x, y) (C.2.8)
x
A similar argument shows that

F
= Q(x, y) (C.2.9)
y
Conversely, let (C.2.4) hold for some F . Then, with t as a parameter,

Z x2 ,y2 Z t2  
F dx F dy
F (x, y) = P dx + Qdy = + dt (C.2.10)
x1 ,y1 t1 x dt y dt
Z t2
F
= dt (C.2.11)
t1 t
= F (x2 , y2 ) F (x1 , y1 ) (C.2.12)
222
R
Theorem C.2. If the integral P dx + Qdy is independent of the path in D, then

I
P dx + Qdy = 0 (C.2.13)

on every closed Rpath in D. Conversely if (C.2.13 ) holds for every simple closed
path in D, then P dx + Qdy is independent of the path in D.

Proof
Suppose that the integral is independent of the path. Let C be a simple closed path
in D, and divide C into arcs AB~ and BA~ as in Figure C.2.

I Z Z
(P dx + Qdy) = P dx + Qdy + P dx + Qdy (C.2.14)
C
ZAB ZBA
= P dx + Qdy P dx + Qdy (C.2.15)
AB AB

The converse result is established by reversing the above argument.


222
904 Results From Analytic Function Theory Appendix C

~
AB B

C ~
A BA

Figure C.2. Integration path

Theorem
R C.3. If P (x, y) and Q(x, y) have continuous partial derivatives in D and
P dx + Qdy is independent of the path in D, then

P Q
= in D (C.2.16)
y x

Proof
By Theorem C.1, there exists a function F such that (C.2.4) holds. Equation
(C.2.16) follows by partial differentiation.
222
Actually, we will be particularly interested in the converse to Theorem C.3.
However, this holds under slightly more restrictive assumptions, namely a simply
connected domain.

C.3 Simply Connected Domains


Roughly speaking, a domain D is simply connected if it has no holes. More precisely,
D is simply connected if, for every simple closed curve C in D, the region R enclosed
by C lies wholly in D. For simply connected domains we have the following:

Theorem C.4 (Greens theorem). Let D be a simply connected domain, and


let C be a piecewise-smooth simple closed curve in D. Let P (x, y) and Q(x, y) be
functions that are continuous and that have continuous first partial derivatives in
D. Then

I Z Z  
Q P
(P dx + Qdy) = dxdy (C.3.1)
R x y

where R is the region bounded by C.


Section C.3. Simply Connected Domains 905

Proof
We first consider a simple case in which R is representable in both of the forms:

f1 (x) f2 (x) for a x b (C.3.2)


g1 (y) g2 (y) for c y d (C.3.3)

Then

Z Z Z b Z f2 (x)
P P
dxdy = dxdy (C.3.4)
R y a f1 (x) y

One can now integrate to achieve

Z Z Z b
P
dxdy = [P (x, f2 (x)) P (x, f1 (x))]dx (C.3.5)
R y a
Z b Z b
= P (x, f2 (x))dx P (x, f1 (x))dx (C.3.6)
a a
I
= P (x, y)dx (C.3.7)
C

By a similar argument,

Z Z I
Q
dxdy = Q(x, y)dy (C.3.8)
R x C

For more complex regions, we decompose into simple regions as above. The
result then follows.
222
We then have the following converse to Theorem C.3.
Theorem C.5. Let P (x, y) and Q(x, y) have
H continuous derivatives in D and let
Q
D be simply connected. If P
y = x , then P dx + Qdy is independent of path in
D.

Proof
Suppose that

P Q
= in D (C.3.9)
y x
Then, by Greens Theorem (Theorem C.4),
906 Results From Analytic Function Theory Appendix C

I Z Z  
Q P
P dx + Qdy = dxdy = 0 (C.3.10)
c R x y
222

C.4 Functions of a Complex Variable


In the sequel, we will let z = x + jy denote a complex variable. Note that z is
not the argument in the Z-transform, as used at other points in the book. Also,
y). This will
a function f (z) of a complex variable is equivalent to a function f(x,
have real and imaginary parts u(x, y) and v(x, y) respectively.
We can thus write

f (z) = u(x, y) + jv(x, y) (C.4.1)

Note that we also have

Z Z
f (z)dz = (u(x, y) + jv(x, y))(dx + jdy)
C C
Z Z Z Z 
= u(x, y)dx v(x, y)dy + j u(x, y)dy + v(x, y)dx
C C C C

We then see that the previous results are immediately applicable to the real and
imaginary parts of integrals of this type.

C.5 Derivatives and Differentials


Let w = f (z) be a given complex function of the complex variable z. Then w is
said to have a derivative at z0 if

f (z0 + z) f (z0 )
lim (C.5.1)
z0 z
exists and is independent of the direction of z. We denote this limit, when it
exists, by f 0 (z0 ).

C.6 Analytic Functions


Definition C.2. A function f (z) is said to be analytic in a domain D if f has a
continuous derivative in D.
222
Section C.6. Analytic Functions 907

Theorem C.6. If w = f (z) = u+jv is analytic in D, then u and v have continuous


partial derivatives satisfying the Cauchy-Riemman conditions.

u v u v
= ; = (C.6.1)
x y y x
Furthermore

w u v v v u u v u
= +j = +j = j = j (C.6.2)
z x x y x x y y y

Proof
Let z0 be a fixed point in D and let = f (z0 +z)f (z0 ). Because f is analytic,
we have

4
= z + z; = f 0 (z0 ) (C.6.3)

where = a + jb and  goes to zero as |z0 | goes to zero. Then

u + jv = (a + jb)(x + jy) + (1 + j2 )(x + jy) (C.6.4)

So

u = ax by + 1 x 2 y (C.6.5)
v = bx + ay + 2 x + 1 y (C.6.6)

Thus, in the limit, we can write

du = adx bdy; dv = bdx ady (C.6.7)

or

u v u v
=a= ; = b = (C.6.8)
x y y x
222
Actually, most functions that we will encounter will be analytic, provided the
derivative exists. We illustrate this with some examples.
Example C.1. Consider the function f (z) = z 2 . Then

f (z) = (x + jy)2 = x2 y 2 + j(2xy) = u + jv (C.6.9)


908 Results From Analytic Function Theory Appendix C

The partial derivatives are

u v u v
= 2x; = 2y; = 2y; = 2x (C.6.10)
x x y y
Hence, the function is clearly analytic.
Example C.2. Consider f (z) = |z| .
d|z|
This function is not analytic, because d|z| is a real quantity and, hence, dz will
depend on the direction of z.
Example C.3. Consider a rational function of the form:

(z 1 )(z 2 ) (z m ) N (z)
W (z) = K = (C.6.11)
(z 1 )(z 2 ) (z n ) D(z)

 
W 1 N (z) D(z)
= 2 D(z) N (z) (C.6.12)
z D (z) z z

These derivatives clearly exist, save when D = 0, that is at the poles of W (z).
Example C.4. Consider the same function W (z) defined in (C.6.11). Then

 
ln(W ) 1 N (z) D(z) 1 N (z) 1 D(z)
= D(z) N (z) =
z N (z)D(z) z z N (z) z D(z) z
(C.6.13)

Hence, ln(W (z)) is analytic, save at the poles and zeros of W (z).

C.7 Integrals Revisited


Theorem C.7 (Cauchy Integral
R Theorem). If f (z) is analytic in some simply
connected domain D, then f (z)dz is independent of path in D and

I
f (z)dz = 0 (C.7.1)
C

where C is a simple closed path in D.

Proof
This follows from the CauchyRiemann conditions together with Theorem C.2.
222
Section C.7. Integrals Revisited 909

We are also interested in the value of integrals in various limiting situations.


The following examples cover relevant cases.
We note that if LC is the length of a simple curve C, then

Z

f (z)dz max(|f (z)|)LC (C.7.2)
zC
C

Example C.5. Assume that C is a semicircle centered at the origin and having
radius R. The path length is then LC = R. Hence,
if f (z) varies as z 2 , then |f (z)| on C must vary as R2 hence, the integral
on C vanishes for R .
if f (z) varies as z 1 , then |f (z)| on C must vary as R1 then, the integral
on C becomes a constant as R .
Example C.6. Consider the function f (z) = ln(z) and an arc of a circle, C,
described by z = ej for [1 , 1 ]. Then

Z
4
I = lim f (z)dz = 0 (C.7.3)
0 C

This is proven as follows. On C, we have that f (z) = ln(). Then

I = lim [(2 1 ) ln()] (C.7.4)


0

We then use the fact that lim|x|0 (x ln x) = 0, and the result follows.
Example C.7. Consider the function

 a
f (z) = ln 1 + n n1 (C.7.5)
z
and a semicircle, C, defined by z = Rej for 2 , 2 . Then, if C is followed
 

clockwise,

(
0 for n > 1
Z
4
IR = lim f (z)dz = (C.7.6)
R C ja for n = 1

This is proven as follows.


On C, we have that z = Rej ; then

Z
2  a 
IR = lim j ln 1 + n ejn Rej d (C.7.7)
R
2
R
910 Results From Analytic Function Theory Appendix C

We also know that

lim ln(1 + x) = x (C.7.8)


|x|0

Then

Z
a 2
IR = lim j ej(n1) d (C.7.9)
R Rn1
2

From this, by evaluation for n = 1 and for n > 1, the result follows.

222

Example C.8. Consider the function

 a
f (z) = ln 1 + ez n n 1; >0 (C.7.10)
z
and a semicircle, C, defined by z = Rej for 2 , 2 . Then, for clockwise C,
 

Z
4
IR = lim f (z)dz = 0 (C.7.11)
R C

This is proven as follows.


On C, we have that z = Rej ; then

Z    
2 a z
IR = lim j ln 1 + (
z d (C.7.12)
R
2
z n + 1) ez z=Rej

We recall that, if is a positive real number and <{z} > 0, then

z
lim =0 (C.7.13)
|z| ez

Moreover, for very large R, we have that


 a z  1 z
ln 1 + z (C.7.14)
z n+1 ez z n ez z=Rej

z=Rej

Thus, in the limit, this quantity goes to zero for all positive n. The result then
follows.
Section C.7. Integrals Revisited 911

222
Example C.9. Consider the function

 
za
f (z) = ln (C.7.15)
z+a

and a semicircle, C, defined by z = Rej for 2 , 2 . Then, for clockwise C,


 

Z
4
IR = lim f (z)dz = j2a (C.7.16)
R C

This result is obtained by noting that

a
1
   
za z
 a  a
ln = ln a = ln 1 ln 1 + (C.7.17)
z+a 1+ z z z
and then applying the result in example C.7.
222
Example C.10. Consider a function of the form

a1 a2
f (z) = + 2 +... (C.7.18)
z z
and C, an arc of circle z = Rej for [1 , 2 ]. Thus, dz = jzd, and

Z Z 2
dz
= jd = j(2 1 ) (C.7.19)
C z 1

Thus, as R , we have that

Z
f (z)dz = ja1 (2 1 ) (C.7.20)
C
222
Example C.11. Consider, now, f (z) = z n . If the path C is a full circle, centered
at the origin and of radius R, then
I Z
z n dz = Rn ejn jRej d

(C.7.21)
C
(
0 for n 6= 1
= (C.7.22)
2j for n = 1 (integration clockwise)
912 Results From Analytic Function Theory Appendix C

222
We can now develop Cauchys Integral Formula.
Say that f (z) can be expanded as

a1
f (z) = + a0 + a1 (z z0 ) + a2 (z z0 )2 + . . . (C.7.23)
z z0
the a1 is called the residue of f (z) at z0 .

C
C1
C2 
z0

Figure C.3. Path for integration of a function having a singularity

Consider the path shown in Figure C.3. Because f (z) is analytic in a region
containing C, we have that the integral around the complete path shown in Figure
C.3 is zero. The integrals along C1 and C2 cancel. The anticlockwise circular
integral around z0 can be computed by following example C.11 to yield 2ja1 .
Hence, the integral around the outer curve C is minus the integral around the circle
of radius . Thus,

I
f (z)dz = 2ja1 (C.7.24)
C

This leads to the following result.


Theorem C.8 (Cauchys Integral Formula). Let g(z) be analytic in a region.
Let q be a point inside the region. Then g(z)
zq has residue g(q) at z = q, and the
integral around any closed contour C enclosing q in a clockwise direction is given
by

I
g(z)
dz = 2jg(q) (C.7.25)
C zq
Section C.8. Poisson and Jensen Integral Formulas 913

222
We note that the residue of g(z) at an interior point, z = q, of a region D can
be obtained by integrating g(z)
zq on the boundary of D. Hence, we can determine
the value of an analytic function inside a region by its behaviour on the boundary.

C.8 Poisson and Jensen Integral Formulas


We will next apply the Cauchy Integral formula to develop two related results.
The first result deals with functions that are analytic in the right-half plane
(RHP). This is relevant to sensitivity functions in continuous-time systems, where
Laplace transforms are used.
The second result deals with functions that are analytic outside the unit disk.
This will be a preliminary step to analyzing sensitivity functions in discrete time,
on the basis of Z-transforms.

C.8.1 Poissons Integral for the Half-Plane


Theorem C.9. Consider a contour C bounding a region D. C is a clockwise con-
tour composed by the imaginary axis and a semicircle to the right, centered at the
origin and having radius R . This contour is shown in Figure C.4. Consider
some z0 = x0 + jy0 with x0 > 0.
Let f (z) be a real function of z, analytic inside D and of at least the order of
z 1 ; f (z) satisfies

lim |z||f (z)| = 0< zD (C.8.1)


|z|

then


1 f (j)
Z
f (z0 ) = d (C.8.2)
2 j z0

Moreover, if (C.8.1) is replaced by the weaker condition

|f (z)|
lim =0 zD (C.8.3)
|z| |z|

then


1
Z
x0
f (z0 ) = f (j) d (C.8.4)
x20 + (y0 )2
914 Results From Analytic Function Theory Appendix C

Ci
R

C = C i C

Figure C.4. RHP encircling contour

Proof
Applying Theorem C.8, we have

1 f (z) 1 f (z) 1 f (z)


I Z Z
f (z0 ) = dz = dz dz (C.8.5)
2j C z z0 2j Ci z z0 2j C z z0
f (z)
Now, if f (z) satisfies (C.8.1), it behaves like z 1 for large |z|, i.e., zz 0
is like
2
z . The integral along C then vanishes and the result (C.8.2) follows.
To prove (C.8.4) when f (z) satisfies (C.8.3), we first consider z1 , the image of
f (z)
z0 through the imaginary axis, i.e., z1 = x0 + jy0 . Then zz 1
is analytic inside
D, and, on applying Theorem C.7, we have that

1 f (z)
I
0= dz (C.8.6)
2j C z z1

By combining equations (C.8.5) and (C.8.6), we obtain

I  
1 f (z) f (z) 1 z0 z 1
I
f (z0 ) = dz = f (z) dz
2j C z z0 z z1 2j C (z z0 )(z z1 )
(C.8.7)
Section C.8. Poisson and Jensen Integral Formulas 915

Because C = Ci C , the integral over C can be decomposed into the integral


along the imaginary axis , Ci , and the integral along the semicircle of infinite radius,
C . Because f (z) satisfies (C.8.3), this second integral vanishes, because the factor
z0 z1 2
(zz0 )(zz1 ) is of order z at .
Then


1 z0 z 1
Z
f (z0 ) = f (j) d (C.8.8)
2 (j z0 )(j z1 )

The result follows upon replacing z0 and z1 by their real; and imaginary-part
decompositions.
222

Remark C.1. One of the functions that satisfies (C.8.3) but does not satisfy (C.8.1)
is f (z) = ln g(z), where g(z) is a rational function of relative degree n r 6= 0. We
notice that, in this case,

 
| ln g(z)| |K||nr ln R + jnr |
lim = lim =0 (C.8.9)
|z| |z| R R

where K is a finite constant and is an angle in [ 2 , 2 ].

Remark C.2. Equation (C.8.4) equates two complex quantities. Thus, it also ap-
plies independently to their real and imaginary parts. In particular,


1
Z
x0
<{f (z0 )} = <{f (j)} d (C.8.10)
x20 + (y0 )2

This observation is relevant to many interesting cases. For instance, when f (z)
is as in remark C.1,

<{f (z)} = ln |g(z)| (C.8.11)

For this particular case, and assuming that g(z) is a real function of z, and that
y0 = 0, we have that (C.8.10) becomes


1 2x0
Z
ln |g(z0 )| = ln |g(j)| d (C.8.12)
0 x20 + (y0 )2

where we have used the conjugate symmetry of g(z).


916 Results From Analytic Function Theory Appendix C

C.8.2 PoissonJensen Formula for the Half-Plane


Lemma C.1. Consider a function g(z) having the following properties

(i) g(z) is analytic on the closed RHP;

(ii) g(z) does not vanish on the imaginary axis;

(iii) g(z) has zeros in the open RHP, located at a1 , a2 , . . . , an ;


| ln g(z)|
(iv) g(z) satisfies lim|z| |z| = 0.

Consider also a point z0 = x0 + jy0 such that x0 > 0; then

n
z0 a i 1
Z
X x0
ln |g(z0 )| = ln

+

2 + ( y )2 ln |g(j)|)d (C.8.13)
i=1
z 0 + a i x
0 0

Proof
Let

n
4 Y z + a i
g(z) = g(z) (C.8.14)
i=1
z ai

Then, ln g(z) is analytic within the closed unit disk. If we now apply Theorem
C.9 to ln g(z), we obtain

n
z0 + ai
 
1
Z
X x0
ln g(z0 ) = ln g(z0 ) + ln = ln g(j)d
i=1
z0 a i x20 + ( y0 )2
(C.8.15)

We also recall that, if x is any complex number, then <{ln x} = <{ln |x|+jx} =
ln |x|. Thus, the result follows upon equating real parts in the equation above and
noting that

ln |
g (j)| = ln |g(j)| (C.8.16)

222

C.8.3 Poissons Integral for the Unit Disk


Theorem C.10. Let f (z) be analytic inside the unit disk. Then, if z0 = rej , with
0 r < 1,
Section C.8. Poisson and Jensen Integral Formulas 917

2
1
Z
f (z0 ) = P1,r ( )f (ej )d (C.8.17)
2 0

where P1,r (x) is the Poisson kernel defined by

4 2 r 2
P,r (x) = 0 r < , x< (C.8.18)
2 2r cos(x) + r 2

Proof
Consider the unit circle C. Then, using Theorem C.8, we have that

1 f (z)
I
f (z0 ) = dz (C.8.19)
2j C z z0

Define

4 1 j
z1 = e (C.8.20)
r
Because z1 is outside the region encircled by C, the application of Theorem C.8
yields

1 f (z)
I
0= dz (C.8.21)
2j C z z1

Subtracting (C.8.21 ) from (C.8.19 ) and changing the variable of integration,


we obtain

2  
1 1
Z
r
f (z0 ) = f (ej )ej d (C.8.22)
2 0 ej rej rej ej

from which the result follows.


222
Consider now a function g(z) which is analytic outside the unit disk. We can
then define a function f (z) such that

 
4 1
f (z) = g (C.8.23)
z
918 Results From Analytic Function Theory Appendix C

Assume that one is interested in obtaining an expression for g(


0 ), where 0 =
1
rej , r > 1. The problem is then to obtain an expression for f 0 . Thus, if we
4 1
define z0 = 0 = r1 ej , we have, on applying Theorem C.10, that

2
1
Z
g(0 ) = P1, 1r ( )g(ej )d (C.8.24)
2 0

where

r2 1
P1, 1r ( ) = (C.8.25)
r2 2rcos( + ) + 1
If, finally, we make the change in the integration variable = , the following
result is obtained.

2
1 r2 1
Z
g(rej ) = g(ej )d (C.8.26)
2 0 r2 2rcos( ) + 1
Thus, Poissons integral for the unit disk can also be applied to functions of a
complex variable which are analytic outside the unit circle.

C.8.4 PoissonJensen Formula for the Unit Disk


Lemma C.2. Consider a function g(z) having the following properties:

(i) g(z) is analytic on the closed unit disk;


(ii) g(z) does not vanish on the unit circle;
(iii) g(z) has zeros in the open unit disk, located at
1,
2 , . . . ,
n .

Consider also a point z0 = rej such that r < 1; then


n Z 2
X z0 i 1
ln |g(z0 )| = ln z + P1,r ( ) ln |g(ej )|d (C.8.27)
i=1
1
i 0 2 0

Proof
Let

n
4 Y i z
1
g(z) = g(z) (C.8.28)
i=1
zi
Section C.8. Poisson and Jensen Integral Formulas 919

Then ln g(z) is analytic on the closed unit disk. If we now apply Theorem C.10
to ln g(z), we obtain

n 2
i z0
 
1 1
X Z
ln g(z0 ) = ln g(z0 ) + ln = P1,r ( ) ln g(ej )d
i=1
z0 i 2 0
(C.8.29)

We also recall that, if x is any complex number, then ln x = ln|x| + jx. Thus
the result follows upon equating real parts in the equation above and noting that

ln g(ej ) = ln g(ej )

(C.8.30)

222
Theorem C.11 (Jensens formula for the unit disk). Let f (z) and g(z) be an-
alytic functions on the unit disk. Assume that the zeros of f (z) and g(z) on the unit
disk are 1, n and 1 , 2 , . . . , m
2 , . . . , respectively, where none of these zeros
lie on the unit circle.
If
4 f (z)
h(z) = z < (C.8.31)
g(z)
then
2 |

1 f (0)
+ ln |1 2 . . . m
Z
j
ln |h(e )|d = ln
(C.8.32)
2 0 g(0) | 2 . . .
1 n |

Proof
We first note that ln |h(z)| = ln |z| + ln |f (z)| ln |g(z)|. We then apply the
PoissonJensen formula to f (z) and g(z) at z0 = 0 to obtain

z0 i

z0
i
P1,r (x) = P1,0 (x) = 1; ln
= ln |
i |; ln
= ln |i |
1 z0
i 1 i z0
(C.8.33)

We thus have that

n 2
1
X Z
ln |f (0)| = ln |
i | ln |f (ej )|d (C.8.34)
i=1
2 0
n 2
1
X Z
ln |g(0)| = ln |
i | ln |g(ej )|d (C.8.35)
i=1
2 0
920 Results From Analytic Function Theory Appendix C

The result follows upon subtracting equation (C.8.35) from (C.8.34), and noting
that

Z 2

ln ej d = 0

(C.8.36)
2 0

222
Remark C.3. Further insights can be obtained from equation (C.8.32) if we as-
sume that, in (C.8.31), f (z) and g(z) are polynomials;
n
Y
f (z) = Kf (z i ) (C.8.37)
i=1
n
Y
g(z) = (z i ) (C.8.38)
i=1

then
Qn
f (0) i
= |Kf | Qi=1 (C.8.39)
m i

g(0)
i=1

Thus, 1 , 2 , . . . n and 1 , 2 , . . . m are all the zeros and all the poles of h(z),
respectively, that have nonzero magnitude.
This allows equation (C.8.32) to be rewritten as

2
1 |01 02 . . . 0nu |
Z
ln |h(ej )|d = ln |Kf | + ln (C.8.40)
2 0 |10 20 . . . mu
0 |

where 01 , 02 , . . . 0nu and 10 , 20 , . . . mu


0
are the zeros and the poles of h(z), respec-
tively, that lie outside the unit circle .
222
Section C.9. Application of the PoissonJensen Formula to Certain Rational Functions 921

C.9 Application of the PoissonJensen Formula to Certain Ratio-


nal Functions

Consider the biproper rational function h(z) given by

f(z)
h(z) = z (C.9.1)
g(z)
is a integer number, and f(z) and g(z) are polynomials of degrees mf and mg ,


respectively. Then, due to the biproperness of h(z), + mf = mg .
we have that
Further assume that
(i) g(z) has no zeros outside the open unit disk,

(ii) f(z) does not vanish on the unit circle, and

(iii) f(z) vanishes outside the unit disk at 1 , 2 , . . . , m .
Define

 
f (z) 4 1
h(z) = =h (C.9.2)
g(z) z
where f (z) and g(z) are polynomials.
Then it follows that
(i) g(z) has no zeros in the closed unit disk;
(ii) f (z) does not vanish on the unit circle;
(iii) f (z) vanishes in the open unit disk at 1 , 2 , . . . , m , where i = i1 for
i = 1, 2, . . . , m ;
(iv) h(z) is analytic in the closed unit disk;
(v) h(z) does not vanish on the unit circle;
(vi) h(z) has zeros in the open unit disk, located at 1 , 2 , . . . , m .
We then have the following result
Lemma C.3. Consider the function h(z) defined in (C.9.2) and a point z0 = rej
such that r < 1; then


m
z0 i
Z 2
+ 1
X
j
ln |h(z0 )| = ln z0 2 0 P1,r ( ) ln |h(e )|d (C.9.3)
i=1
1 i

where P1,r is the Poisson kernel defined in (C.8.18).


922 Results From Analytic Function Theory Appendix C

Proof
This follows from a straightforward application of Lemma C.2.
222
Section C.10. Bodes Theorems 923

C.10 Bodes Theorems


We will next review some fundamental results due to Bode.

Theorem C.12 (Bode integral in the half plane). Let l(z) be a proper real,
rational function of relative degree nr . Define

4
g(z) = (1 + l(z))1 (C.10.1)

and assume that g(z) has neither poles nor zeros in the closed RHP. Then

(
Z 0 for nr > 1
ln |g(j)|d = 4 (C.10.2)
0 2 for nr = 1 where = limz zl(z)

Proof
Because ln g(z) is analytic in the closed RHP,

I
ln g(z)dz = 0 (C.10.3)
C

where C = Ci C is the contour defined in Figure C.4.


Then

I Z Z
ln g(z)dz = j ln g(j)d ln(1 + l(z))dz (C.10.4)
C C

For the first integral on the right-hand side of equation (C.10.4), we use the
conjugate symmetry of g(z) to obtain

Z Z
ln g(j)d = 2 ln |g(j)|d (C.10.5)
0

For the second integral, we notice that, on C , l(z) can be approximated by

a
(C.10.6)
z nr
The result follows upon using example C.7 and noticing that a = for nr = 1.
222
1
Remark C.4. If g(z) = (1 + ez l(z)) for > 0, then result (C.10.9) becomes
924 Results From Analytic Function Theory Appendix C

Z
ln |g(j)|d = 0 nr > 0 (C.10.7)
0

The proof of (C.10.7) follows along the same lines as those of Theorem C.12
and by using the result in example C.8.
Theorem C.13 (Modified Bode integral). Let l(z) be a proper real, rational
function of relative degree nr . Define

4
g(z) = (1 + l(z))1 (C.10.8)
Assume that g(z) is analytic in the closed RHP and that it has q zeros in the open
RHP, located at 1 , 2 , . . . , q with <(i ) > 0. Then

Pq
Z i=1 i for nr > 1
ln |g(j)|d =
+ Pq 4
0
2 i=1 i for nr = 1 where = limz zl(z)
(C.10.9)

Proof
We first notice that ln g(z) is no longer analytic on the RHP. We then define

q
4 Y z + i
g(z) = g(z) (C.10.10)
i=1
z i

Thus, ln g(z) is analytic in the closed RHP. We can then apply Cauchys integral
in the contour C described in Figure C.4 to obtain

q I
z + i
I I X
ln g(z)dz = 0 = ln g(z)dz + ln dz (C.10.11)
C C i=1 C z i

The first integral on the right-hand side can be expressed as

I Z Z
ln g(z)dz = 2j ln |g(j)|d + ln g(z)dz (C.10.12)
C 0 C

where, by using example C.7.

(
0 for nr > 1
Z
ln g(z)dz = 4 (C.10.13)
C j for nr = 1 where = limz zl(z)
Section C.10. Bodes Theorems 925

The second integral on the right-hand side of equation (C.10.11) can be com-
puted as follows:


z + i j + i z + i
I Z Z
ln dz = j ln d + ln dz (C.10.14)
C z i j i C z i

We note that the first integral on the right-hand side is zero, and by using
example C.9, the second integral is equal to 2ji . Thus, the result follows.
222
Remark C.5. Note that g(z) is a real function of z, so

q
X q
X
i = <{i } (C.10.15)
i=1 i=1

222
1
Remark C.6. If g(z) = (1 + ez l(z)) for > 0, then the result (C.10.9) be-
comes

Z q
X
ln |g(j)|d = <{i } nr > 0 (C.10.16)
0 i=1

The proof of (C.10.16) follows along the same lines as those of Theorem C.13
and by using the result in example C.8.
Remark C.7. The Poisson, Jensen, and Bode formulae assume that a key function
is analytic, not only inside a domain D, but also on its border C. Sometimes, there
may exist singularities on C. These can be dealt with by using an infinitesimal
circular indentation in C, constructed so as to leave the singularity outside D. For
the functions of interest to us, the integral along the indentation vanishes. This is
illustrated in example C.6 for a logarithmic function, when D is the right-half plane
and there is a singularity at the origin.
222
Appendix D

PROPERTIES OF
CONTINUOUS-TIME
RICCATI EQUATIONS

This appendix summarizes key properties of the Continuous-Time Differential Ric-


cati Equation (CTDRE);
dP
= AT P(t) P(t)A + P(t)B1 BT P(t) (D.0.1)
dt
P(tf ) = f (D.0.2)

and the Continuous-Time Algebraic Riccati Equation (CTARE)

0 = AT P PA + PB1 BT P (D.0.3)

D.1 Solutions of the CTDRE


The following lemma gives a useful alternative expression for P(t).
Lemma D.1. The solution, P(t), to the CTDRE (D.0.1), can be expressed as

P(t) = N(t)[M(t)]1 (D.1.1)

where M(t) Rnn and N(t) Rnn satisfy the following equation:

A B1 BT M(t)
    
d M(t)
= (D.1.2)
dt N(t) AT N(t)

subject to

N(tf )[M(tf )]1 = f (D.1.3)

927
928 Properties of Continuous-Time Riccati Equations Appendix D

Proof
We show that P(t), as defined above, satisfies the CTDRE. We first have that

dP(t) dN(t) d[M(t)]1


= [M(t)]1 + N(t) (D.1.4)
dt dt dt
The derivative of [M(t)]1 can be computed by noting that M(t)[M(t)]1 = I;
then

dI dM(t) d[M(t)]1
=0= [M(t)]1 + M(t) (D.1.5)
dt dt dt
from which we obtain

d[M(t)]1 dM(t)
= [M(t)]1 [M(t)]1 (D.1.6)
dt dt
Thus, equation (D.1.4) can be used with (D.1.2) to yield

dP(t)
= AT N(t)[M(t)]1 + N(t)[M(t)]1 A +
dt (D.1.7)
N(t)[M(t)]1 B[]1 BT N(t)[M(t)]1

which shows that P(t) also satisfies (D.0.1), upon using (D.1.1).
The matrix on the right-hand side of (D.1.2), namely,

B1 BT
 
A
H= H R2n2n (D.1.8)
AT

is called the Hamiltonian matrix associated with this problem.


Next, note that (D.0.1) can be expressed in compact form as

 
  I dP(t)
P(t) I H = (D.1.9)
P(t) dt
Then, not surprisingly, solutions to the CTDRE, (D.0.1), are intimately con-
nected to the properties of the Hamiltonian matrix.
We first note that H has the following reflexive property:

 
0 In
H = THT T1 with T= (D.1.10)
In 0
Section D.1. Solutions of the CTDRE 929

where In is the identity matrix in Rnn .


Recall that a similarity transformation preserves the eigenvalues; thus, the eigen-
values of H are the same as those of HT . On the other hand, the eigenvalues of H
and HT must be the same. Hence, the spectral set of H is the union of two sets, s
and u , such that, if s , then u . We assume that H does not contain
any eigenvalue on the imaginary axis (note that it suffices, for this to occur, that
1
(A, B) be stabilizable and that the pair (A, 2 ) have no undetectable poles on the
stability boundary). In this case, s can be so formed that it contains only the
eigenvalues of H that lie in the open LHP. Then, there always exists a nonsingular
transformation V R2n2n such that

 
1 Hs 0
[V] HV = (D.1.11)
0 Hu

where Hs and Hu are diagonal matrices with eigenvalue sets s and u , respectively.
We can use V to transform the matrices M(t) and N(t), to obtain


   
M(t) 1 M(t)
= [V] (D.1.12)
N(t) N(t)

Thus, (D.1.2) can be expressed in the equivalent form:



    
d M(t) Hs 0 M(t)
= (D.1.13)
dt N(t) 0 Hu N(t)

If we partition V in a form consistent with the matrix equation (D.1.13), we


have that

 
V11 V12
V= (D.1.14)
V21 V22

The solution to the CTDRE is then given by the following lemma.


Lemma D.2. A solution for equation (D.0.1) is given by

P(t) = P1 (t)[P2 (t)]1 (D.1.15)

where

P1 (t) = V21 + V22 eHu (tf t) Va eHs (tf t) (D.1.16)


1
P2 (t) = V11 + V12 eHu (tf t) Va eHs (tf t)

(D.1.17)
4
h i1
Va = [V22 f V12 ]1 [V21 f V11 ] = N(t f)
f ) M(t (D.1.18)
930 Properties of Continuous-Time Riccati Equations Appendix D

Proof
From (D.1.12), we have

f ) + V12 N(t
M(tf ) = V11 M(t f) (D.1.19)
f ) + V22 N(t
N(tf ) = V21 M(t f)

Hence, from (D.1.3),

h ih i1
f ) + V22 N(t
V21 M(t f) f ) + V12 N(t
V11 M(t f) = f (D.1.20)

or

h ih i1
f )[M(t
V21 + V22 N(t f )]1 f )[M(t
V11 + V12 N(t f )]1 = f (D.1.21)

or

N(t f )]1 = [V22 f V12 ]1 [V21 f V11 ] = Va


f )[M(t (D.1.22)

Now, from (D.1.10),

P(t) = N(t)[M(t)]1 (D.1.23)


h ih i1

= V21 M(t)
+ V22 N(t)
V11 M(t)
+ V12 N(t)
h ih i1

= V21 + V22 N(t)[
M(t)] 1
V11 + V12 N(t)[
M(t)] 1

and the solution to (D.1.13) is

f ) = eHs (tf t) M(t)


M(t (D.1.24)
f ) = e u f N(t)
N(t H (t t)

Hence,


N(t)[
M(t)] 1 f )[M(t
= eHu (tf t) N(t f )]1 eHs (tf t) (D.1.25)

Substituting (D.1.25) into (D.1.23) gives the result.


222
Section D.2. Solutions of the CTARE 931

D.2 Solutions of the CTARE


The Continuous Time Algebraic Riccati Equation (CTARE) has many solutions,
because it is a matrix quadratic equation. The solutions can be characterized as
follows.
Lemma D.3. Consider the following CTARE:

0 = PB1 BT P + PA + AT P (D.2.1)

(i) The CTARE can be expressed as

 
  I
P I H =0 (D.2.2)
P

where H is defined in (D.1.8).


(ii) Let V be defined so that

 
1 a 0
V HV = (D.2.3)
0 b

where a , b are any partitioning of the (generalized) eigenvalues of H such


that, if is equal to (a )i for same i, then = (b )j for some j.
Let

 
V11 V12
V= (D.2.4)
V21 V22

1
Then P = V21 V 11 is a solution of the CTARE.

Proof
(i) This follows direct substitution.
(ii) The form of P ensures that

   
P I V = 0 (D.2.5)
   
1 P
V = (D.2.6)
I 0
932 Properties of Continuous-Time Riccati Equations Appendix D

where * denotes a possible nonzero component.


Hence,
   
  1 P
 
P I VV = 0 (D.2.7)
I 0
= 0 (D.2.8)

222

D.3 The stabilizing solution of the CTARE


We see from Section D.2 that we have as many solutions to the CTARE as there are
ways of partitioning the eigevalues of H into the groups a and b . Provided that
1
(A, B) is stabilizable and that ( 2 , A) has no unobservable modes in the imaginary
axis, then H has no eigenvalues in the imaginary axis. In this case, there exists
a unique way of partitioning the eigenvalues so that a contains only the stable
eigenvalues of H. We call the corresponding (unique) solution of the CTARE the
stabilizing solution and denote it by Ps .
Properties of the stabilizing solution are given in the following.

Lemma D.4. (a) The stabilizing solution has the property that the closed loop A
matrix,

Acl = A BKs (D.3.1)

where
s
K = 1 B T Ps (D.3.2)

has eigenvalues in the open left-half plane.


1
(b) If ( 2 , A) is detectable, then the stabilizing solution is the only nonnegative
solution of the CTARE.
1
(c) If ( 2 , A) has no unobservable modes inside the stability boundary, then the
stabilizing solution is positive definite, and conversely.
1
(d) If ( 2 , A) has an unobservable mode outside the stabilizing region, then in
addition to the stabilizing solution, there exists at least one other nonnegative
solution of the CTARE. However, the stabilizing solution, Ps has the property
that
0
Ps P 0 (D.3.3)
0
where P is any other solution of the CTARE.
Section D.4. Convergence of Solutions of the CTARE to the Stabilizing Solution of the CTARE933

Proof
For part (a), we argue as follows:
Consider ( D.1.11) and (D.1.14). Then
   
V11 V11
H = Hs (D.3.4)
V21 V21

which implies that

V11 Hs V11 1
     
I I
H = H = (D.3.5)
V21 V11 1 P V21 Hs V11 1

If we consider only the first row in (D.3.5), then, using (D.1.8), we have

V11 Hs V11 1 = A B1 BT P = A BK (D.3.6)

Hence, the closed-loop poles are the eigenvalues of Hs and, by construction, these
are stable.
We leave the reader to pursue parts (b), (c), and (d) by studying the references
given at the end of Chapter 24.
222

D.4 Convergence of Solutions of the CTARE to the Stabilizing


Solution of the CTARE
Finally, we show that, under reasonable conditions, the solution of the CTDRE will
converge to the unique stabilizing solution of the CTARE. In the sequel, we will be
particularly interested in the stabilizing solution to the CTARE.
1
Lemma D.5. Provided that (A, B) is stabilizable and that ( 2 , A) has no unob-
servable poles on the imaginary axis and that f > Ps , then

lim P(t) = Ps (D.4.1)


tf

Proof
We observe that the eigenvalues of H can be grouped so that s contains only
eigenvalues that lie in the left-half plane. We then have that

lim eHs (tf t) = 0 and lim eHu (tf t) = 0 (D.4.2)


tf tf

given that Hs and Hu are matrices with eigenvalues strictly inside the LHP.
The result then follows from (D.1.16) to (D.1.17).
934 Properties of Continuous-Time Riccati Equations Appendix D

1
Remark D.1. Actually, provided that ( 2 , A) is detectable, then it suffices to have
f 0 in Lemma D.5
222

D.5 Duality between Linear Quadratic Regulator and Optimal


Linear Filter
The close connections between the optimal filter and the LQR problem can be
expressed directly as follows: We consider the problem of estimating a particular
linear combination of the states, namely,

z(t) = f T x(t) (D.5.1)


(The final solution will turn out to be independent of f , and thus will hold for
the complete state vector.)
Now we will estimate z(t) by using a linear filter of the following form:

Z t
z(t) = h(t )T y 0 ( )d + g T x
o (D.5.2)
0

where h(t) is the impulse response of the filter and where xo is a given estimate
of the initial state. Indeed, we will assume that (22.10.17) holds, that is, that the
initial state x(0) satisfies

E(x(0) x o )T = P o
o )(x(0) x (D.5.3)
We will be interested in designing the filter impulse response, h( ), so that z(t)
is close to z(t) in some sense. (Indeed, the precise sense we will use is a quadratic
form.) From (D.5.1) and (D.5.2), we see that

z(t) = z(t) z(t)


Z t
T
= f x(t) h(t )T y 0 ( )d g T x
o
0 (D.5.4)
Z t
= f T x(t) h(t )T Cx( ) + v(t) d g T xo


0

Equation (D.5.4) is somewhat difficult to deal with, because of the cross-product


between h(t ) and x(t) in the integral. Hence, we introduce another variable, ,
by using the following equation
d( )
= AT ( ) CT u( ) (D.5.5)
d
(t) = f (D.5.6)
Section D.5. Duality between Linear Quadratic Regulator and Optimal Linear Filter 935

where u( ) is the reverse time form of h:

u( ) = h(t ) (D.5.7)

Substituting (D.5.5) into (D.5.4) gives


Z t T
d( )
z(t) =f T x(t) + + AT ( ) x( )d
0 d
Z t (D.5.8)
)d g T x
u( )v( o
0

Using integration by parts, we then obtain

t
z(t) =f T x(t) + T x( ) 0 g T xo

Z t
dx( ) dv( )
 (D.5.9)
+ ( )T + ( )T Ax( ) u( )T d
0 d d
Finally, using (22.10.5) and (D.5.6), we obtain

Z t 
dw( ) dv( )
z(t) = (0)T (x(0) xo ) + ( )T u( )T d
0 d d (D.5.10)
T
((0) + g) x
o

Hence, squaring and taking mathematical expectations, we obtain (upon using


(D.5.3), (22.10.3), and (22.10.4) ) the following:
Z t
z (t)2 } = (0)T Po (0) + ( )T Q( ) + u( )T Ru( ) d

E{
0 (D.5.11)
T 2
+ k ((0) + g) xo k

The last term in (D.5.11) is zero if g = (0). Thus, we see that the design of
the optimal linear filter can be achieved by minimizing

Z t
J = (0)T Po (0) + ( )T Q( ) + u( )T Ru( ) d

(D.5.12)
0

where ( ) satisfies the reverse-time equations (D.5.5) and (D.5.6).


We recognize the set of equations formed by (D.5.5), (D.5.6), and (D.5.12) as
a standard linear regulator problem, provided that the connections shown in
Table D.1 are made.
Finally, by using the (dual) optimal control results presented earlier, we see that
the optimal filter is given by
936 Properties of Continuous-Time Riccati Equations Appendix D

Regulator Filter Regulator Filter

t tf 0
A AT Q
T
B C R
x f Po

Table D.1. Duality in quadratic regulators and filters

Z t
zo ( ) = uo ( )T y 0 ( )d + g T x
o (D.5.13)
o

where

uo ( ) = Kf ( )( ) (D.5.14)
1
Kf ( ) = R C( ) (D.5.15)

and ( ) satisfies the dual form of (D.0.1), (22.4.18):

d(t)
= Q (t)CT R1 C(t) + (t)AT + A(t) (D.5.16)
dt

(0) = Po (D.5.17)

Substituting (D.5.14) into (D.5.5), (D.5.6) we see that

d( )
= AT ( ) + CT Kf ( )( ) (D.5.18)
d
(t) = f (D.5.19)
uo ( ) = Kf ( )( ) (D.5.20)
g = (0) (D.5.21)

We see that uo ( ) is the output of a linear homogeneous equation. Let = (t ),


and define () as the state transition matrix from  = 0 for the time-varying
system having A matrix equal to A Kf (t )T C . Then

Section D.5. Duality between Linear Quadratic Regulator and Optimal Linear Filter 937

( ) = (t )T f (D.5.22)
T
(0) = (t) f
u0 ( ) = Kf ( )(t )T f

Hence, the optimal filter satisfies

Zt
T
z(t) = g x
o + uo y 0 ( )d (D.5.23)
0
Zt
T
= (0) xo + f T (t )Kf T ( )y 0 ( )d
0
Zt

= f T (t)
xo + (t )Kf T ( )y 0 ( )d
0
= fT x
(t)

where

Zt
x(t) = (t)
xo + (t )Kf T ( )y 0 ( )d (D.5.24)
0

We then observe that (D.5.24) is actually the solution of the following state
space (optimal filter).

x(t) 
d 
= A Kf T (t)C x(t) + Kf T (t)y 0 (t) (D.5.25)
dt
x(0) = x
o (D.5.26)
T
z(t) = f x(t) (D.5.27)

We see that the final solution depends on f only through (D.5.27). Thus, as
predicted, (D.5.25), (D.5.26) can be used to generate an optimal estimate of any
linear combination of states.
Of course, the optimal filter (D.5.25) is identical to that given in (22.10.23)
All of the properties of the optimal filter follow by analogy from the (dual)
optimal linear regulator. In particular, we observe that (D.5.16) and (D.5.17) are a
CTDRE and its boundary condition, respectively. The only difference is that, in the
optimal-filter case, this equation has to be solved forward in time. Also, (D.5.16)
has an associated CTARE, given by
938 Properties of Continuous-Time Riccati Equations Appendix D

Q CT R1 C + AT + A = 0 (D.5.28)

Thus, the existence, uniqueness, and properties of stabilizing solutions for (D.5.16)
and (D.5.28) satisfy the same conditions as the corresponding Riccati equations for
the optimal regulator.
Appendix E

MATLAB SUPPORT

The accompanying disc contains a set of MATLAB-SIMULINK files. These files


provide support for many problems posed in this book, and, at the same time,
facilitate the study and application of selected topics.

939
940 MATLAB support Appendix E

File name Chapter Brief description


amenl.mdl Chap. 19 SIMULINK schematic to evaluate the perfor-
mance of a linear design on a particular nonlinear
plant.
apinv.mdl Chap. 2 SIMULINK schematic to evaluate approximate in-
verses for a nonlinear plant.
awu.mat Chap. 26 MATLAB data file: it contains the data required
to use the SIMULINK schematic in file mmawu.mdl.
This file must be loaded before running the simulation.
awup.m Chap. 11 MATLAB program to decompose a biproper
controller into a form suitable to implement an
anti-windup strategy; requires the function
p_elcero.m.
c2del.m Chap. 3 MATLAB function to transform a transfer function
for a continuous-time system with zero-order hold
into a discrete-time transfer function in delta form.
cint.mdl Chap. 22 SIMULINK schematic to evaluate the perfor-
mance of a MIMO control loop in which the con-
troller is based on state estimate feedback.
css.m Chap. 7 MATLAB function to compute a one-d.o.f. con-
troller for an nth -order SISO, strictly proper plant
(continuous or discrete) described in state space
form. The user must supply the desired observer
poles and the desired control poles. This program
requires the function p_elcero.m.
data_newss.m Chap. 11 MATLAB program to generate the data required
for newss.mdl; this program requires lambor.m.
dcc4.mdl Chap. 10 SIMULINK schematic to evaluate the perfor-
mance of a cascade architecture in the control of a
plant with time delay and generalised disturbance.
dcpa.mdl Chap. 13 SIMULINK schematic to evaluate the perfor-
mance of the digital control for a linear,
continuous-time plant.
dead1.mdl Chap. 19 SIMULINK schematic to study a compensation
strategy for deadzones.
del2z.m Chap. 13 MATLAB function to transform a discrete-time
transfer function in delta form to its Z-transform
equivalent.

dff3.mdl Chap. 10 SIMULINK schematic to evaluate the perfor-
mance of disturbance feedforward in the control
of a plant with time delay and generalised distur-
bance.
distff.mdl Chap. 10 SIMULINK schematic to compare a one d.o.f. con-
trol against a two-d.o.f. control in the control of a
plant with time delay.
distffun.mdl Chap. 10 SIMULINK schematic to evaluate the performance
of disturbance feedforward in the control of an
unstable plant subject to a generalised disturbance.
lambor.m Chap. 11 MATLAB program to synthesise an observer; this
routine can be easily modified to deal with
different plants.
lcodi.mdl Chap. 13 SIMULINK schematic to compare discrete-time
and continuous-time PID controllers for the con-
trol of an unstable plant.
linnl.mat Chap. 19 MATLAB data file, with the linear design data
used in a solved problem.
mimo1.mdl Chap. 21 SIMULINK schematic with a motivating example
for the control of MIMO systems.
mimo2.mdl Chap. 22 SIMULINK schematic to simulate a MIMO design
based on an observer plus state estimate feedback.
mimo2.mat Chap. 22 MATLAB data file for mimo2.mdl.
mimo3.mdl Chap. 25 SIMULINK schematic for the triangular control of
a MIMO stable and nonminimum phase plant, by
using an IMC architecture.
mimo4.mdl Chap. 26 SIMULINK schematic for the decoupled control of
a MIMO stable and minimum-phase plant, using an
IMC architecture.
minv.m Chap. 25 MATLAB function to obtain the inverse (in state
space form) of a biproper MIMO system in state
space form.
mmawe.mdl Chap. 26 SIMULINK schematic for the (dynamically decoupled)
control of a MIMO system with input saturation;
an anti-windup mechanism is used, and directionality
is (partially) recovered by scaling the control error.

mmawu.mdl Chap. 26 SIMULINK schematic for the (dynamically decoupled)
control of a MIMO system with input saturation;
an anti-windup mechanism is used, and directionality
is (partially) recovered by scaling the controller output.
newss.mdl Chap. 11 SIMULINK schematic to study a (weighted)
switching strategy to deal with state-saturation
constraints.
nmpq.mdl Chap. 15 SIMULINK schematic to evaluate disturbance
compensation and robustness in the IMC control
of an NMP plant.
oph2.m Chap. 16 MATLAB function to perform H2 minimization to
solve the model-matching problem.
p_elcero.m Chap. 7 MATLAB function to eliminate leading zeros in a
polynomial.
paq.m Chap. 7 MATLAB function to solve the pole assignment
equation. The problem can be set either for
Laplace transfer functions or by using the Delta
transform. This program requires the function
p_elcero.m.
phloop.mdl Chap. 19 SIMULINK schematic to evaluate the IMC control
of a pH neutralisation plant by using approximate
nonlinear inversion.
phloop.mat Chap. 19 MATLAB data file associated with phloop.mdl.
piawup.mdl Chap. 11 SIMULINK schematic to evaluate an anti-windup
strategy in linear controllers, by freezing the inte-
gral action when its output saturates.
pid1.mdl Chap. 6 SIMULINK schematic to analyze the performance
of a PID control that uses empirical tuning meth-
ods.
pidemp.mdl Chap. 6 SIMULINK schematic to use the Ziegler-Nichols
tuning method based on closed-loop oscillation.
The plant is linear, but of high order, with input
saturation and noisy measurements.
pmimo3.m Chap. 25 MATLAB program to compute the Q controller
for a solved problem.
qaff1.mdl Chap. 15 SIMULINK schematic to analyze the loop
performance of an IMC control loop of an NMP plant.

qaff22.mdl Chap. 15 SIMULINK schematic to analyze the loop perfor-
mance of the Smith controller in Q form.
qawup.mdl Chap. 11 SIMULINK schematic to implement an anti-windup
mechanism in the IMC architecture; the
decomposition of Q(s) was done by using the
MATLAB function awup.m.
sat_uns.mdl Chap. 15 SIMULINK schematic to study saturation in
unstable plants with disturbances of variable
duration.
slew1.mdl Chap. 11 SIMULINK schematic to evaluate the perfor-
mance of a PI controller with anti-windup mecha-
nism to control a plant with slew-rate limitation.
smax.m Chap. 9 MATLAB function to compute a lower bound for
the peak of the nominal sensitivity S_o; the plant
model has a number of unstable poles, and the
effect of one particular zero in the open RHP is
examined.
softloop1.mdl Chap. 19 SIMULINK schematic to compare the perfor-
mances of linear and nonlinear controllers for a
particular nonlinear plant.
softpl1.mdl Chap. 19 SIMULINK schematic of a nonlinear plant.
sugdd.mat Chap. 24 MATLAB data file: it contains the controller
required to do dynamically decoupled control of
the sugar mill.
sugmill.mdl Chap. 24 SIMULINK schematic for the multivariable con-
trol of a sugar mill station.
sugpid.mdl Chap. 24 SIMULINK schematic for the PID control of a
sugar mill station; the design for the multivariable
plant is based on a SISO approach.
sugtr.mat Chap. 24 MATLAB data file: it contains the controller
required to do triangularly decoupled control of the
sugar mill.
tank1.mdl Chap. 2 SIMULINK schematic to illustrate the idea of in-
version of a nonlinear plant.
tmax.m Chap. 9 MATLAB function to compute a lower bound for
the peak of the nominal complementary sensitivity
To . The plant model has a number of NMP zeros,
and the effect of one particular pole in the open
RHP is examined.

z2del.m Chap. 13 MATLAB routine to transform a discrete-time
transfer function in Z-transform form to its Delta-
transform equivalent.

Table E.1. Description of MATLAB support files
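
As a flavour of the kind of conversion performed by files such as c2del.m and z2del.m
listed above, the following sketch (our own illustration, not the book's implementation,
with the hypothetical function name z2delta_sketch) maps a discrete-time transfer function
from shift (z) form to delta form via the substitution z = 1 + Delta*gamma, where Delta is
the sampling period.

function [numd, dend] = z2delta_sketch(numz, denz, Delta)
% Illustrative sketch only (not the book's z2del.m): rewrite a discrete-time
% transfer function H_q(z) = numz(z)/denz(z) in delta form by substituting
% z = 1 + Delta*gamma and collecting the coefficients in gamma.
numd = subst(numz, Delta);
dend = subst(denz, Delta);
end

function q = subst(p, Delta)
% Coefficients, in gamma, of the polynomial p evaluated at z = 1 + Delta*gamma
% (Horner's scheme carried out with polynomial arithmetic).
q = p(1);
for k = 2:numel(p)
    q = conv(q, [Delta 1]);     % multiply the running result by (Delta*gamma + 1)
    q(end) = q(end) + p(k);     % add the next coefficient of p
end
end

For example, [numd, dend] = z2delta_sketch([1 -0.5], [1 -0.8], 0.1) returns dend = [0.1 0.2],
so the pole at z = 0.8 maps to gamma = (0.8 - 1)/0.1 = -2, consistent with the delta-operator
relation gamma = (z - 1)/Delta.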
