Notation        Meaning

t               continuous-time variable
f(t)            continuous-time signal
k               discrete-time variable
{f[k]}          discrete-time sequence
Δ               sampling period
f(kΔ)           sampled version of f(t)
δ               delta operator
q               forward shift operator
δK[k]           Kronecker delta
δ(t)            Dirac delta
E{...}          expected value of ...
Γc              controllability matrix in state space description
Γo              observability matrix in state space description
λ{...}          set of eigenvalues of matrix ...
μ(t − to)       unit step (continuous time) at time t = to
μ[k − ko]       unit step (discrete time) at time k = ko
f^s(t)          Dirac impulse-sampled version of f(t)
F[...]          Fourier transform of ...
L[...]          Laplace transform of ...
D[...]          Delta-transform of ...
Z[...]          Z-transform of ...
F⁻¹[...]        inverse Fourier transform of ...
L⁻¹[...]        inverse Laplace transform of ...
D⁻¹[...]        inverse Delta-transform of ...
Appendix B

SMITH-McMILLAN FORMS

B.1 Introduction

Smith-McMillan forms describe the underlying structure of rational MIMO transfer-function matrices. The key ideas are summarized below.
Definition B.6. The rank of a polynomial matrix is the rank of the matrix
almost everywhere in s. The definition implies that the rank of a polynomial matrix
is independent of the argument.
Definition B.7. Two polynomial matrices V(s) and W(s) having the same number
of columns (rows) are right (left) coprime if all common right (left) factors are
unimodular matrices.
Definition B.8. The column degree νck (row degree νrk) of the kth column [V(s)]·k (kth row [V(s)]k·) of a polynomial matrix V(s) is the degree of the highest power of s in that column (row).
Definition B.9. A polynomial matrix V(s) ∈ C^{m×m}[s] is column proper if the matrix formed from the coefficients of the highest power of s in each column is nonsingular.

Theorem B.1 (Smith form). An m1 × m2 polynomial matrix Π(s) of rank r is equivalent to one of the matrices

    Πf(s) = [ E(s)   0f ];    Πc(s) = [ E(s)
                                        0c   ]    (B.3.1)

where

    E(s) = diag(ε1(s), . . . , εr(s), 0, . . . , 0)    (B.3.2)

and 0f and 0c are matrices with all their elements equal to zero.

Furthermore, the εi(s) are monic polynomials for i = 1, 2, . . . , r, such that εi(s) is a factor of εi+1(s), i.e., εi(s) divides εi+1(s).

If m1 = m2, then Π(s) is equivalent to the square matrix E(s).
(ii) Using elementary operation (eo3) (see Definition B.3), reduce the term in position (2,1) to a degree lower than that of the term in position (1,1). If the term in position (2,1) becomes zero, then go to the next step; otherwise, interchange rows 1 and 2 and repeat the procedure until the term in position (2,1) becomes zero.
(iii) Repeat step (ii) with the other elements in the first column.
(iv) Apply the same procedure to all the elements but the first one in the first row.
(v) Go back to step (ii) if nonzero entries due to step (iv) appear in the first column. Notice that the degree of the entry (1,1) will fall in each cycle, until we finally end up with a matrix which can be partitioned as

    Π(s) = [ ε11^(j)(s)   0  · · ·  0
             0
             ⋮             Πj(s)
             0                       ]    (B.3.3)

where ε11^(j)(s) is a monic polynomial.
(vi) If there is an element of Πj(s) which is of lesser degree than ε11^(j)(s), then add the column containing this element to the first column and repeat steps (ii) to (v). Do this until the form (B.3.3) is achieved with ε11^(j)(s) of degree less than, or at most equal to, that of every element in Πj(s). This will yield further reduction in the degree of the entry in position (1,1).

(vii) Set ε1(s) = ε11^(j)(s).

(viii) Repeat the procedure from steps (i) through (vii) on the matrix Πj(s).
Actually, the polynomials εi(s) in the above result can be obtained in a direct fashion, as follows:

    εi(s) = Di(s)/Di−1(s)    (B.3.4)

where Di(s) is the (monic) greatest common divisor of all i × i minors of the matrix, with D0(s) = 1.
Theorem B.2 (Smith-McMillan form). Let G(s) be an m × m matrix of rational functions, written as

    G(s) = Π(s)/DG(s)    (B.4.1)

where Π(s) is an m × m polynomial matrix of rank r and DG(s) is the least common multiple of the denominators of all elements Gik(s).

Then, G(s) is equivalent to a matrix M(s), with

    M(s) = diag( ε1(s)/δ1(s), . . . , εr(s)/δr(s), 0, . . . , 0 )    (B.4.2)

where {εi(s), δi(s)} is a pair of monic and coprime polynomials for i = 1, 2, . . . , r.

Furthermore, εi(s) is a factor of εi+1(s) and δi(s) is a factor of δi−1(s).
Proof
We write the transfer-function matrix as in (B.4.1). We then apply the algorithm outlined in Theorem B.1 to convert Π(s) to Smith normal form. Finally, canceling common factors with the denominator DG(s) leads to the form given in (B.4.2).
∎

We use the symbol G^SM(s) to denote M(s), the Smith-McMillan form of the transfer-function matrix G(s).
We illustrate the construction of the Smith-McMillan form with a simple example.

Example B.1. Consider the following transfer-function matrix:

    G(s) = [ 4/((s+1)(s+2))    1/(s+1)
             2/(s+1)           1/(2(s+1)(s+2)) ]    (B.4.3)

We can then express G(s) in the form (B.4.1):

    G(s) = Π(s)/DG(s);    Π(s) = [ 4         s+2
                                   2(s+2)    1/2 ];    DG(s) = (s+1)(s+2)    (B.4.4)
The polynomial matrix Π(s) can be reduced to the Smith form defined in Theorem B.1. To do that, we first compute its determinantal divisors:

    D0(s) = 1    (B.4.5)

    D1(s) = gcd{ 4, (s+2), 2(s+2), 1/2 } = 1    (B.4.6)

    D2(s) = gcd{ 2s² + 8s + 6 } = s² + 4s + 3 = (s+1)(s+3)    (B.4.7)

This leads to

    ε1(s) = D1(s)/D0(s) = 1;    ε2(s) = D2(s)/D1(s) = (s+1)(s+3)    (B.4.8)

Hence, dividing by DG(s) and canceling common factors, the Smith-McMillan form is

    G^SM(s) = [ 1/((s+1)(s+2))    0
                0                 (s+3)/(s+2) ]    (B.4.9)
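The determinantal-divisor route (B.3.4) can be checked mechanically for this example. The sketch below is a Python/SymPy illustration; the numerator matrix Π(s) and its entry signs are taken from the reconstruction above (the original typesetting is ambiguous on signs), and gcds are normalized to be monic, since they are only defined up to units:

```python
from functools import reduce
import sympy as sp

s = sp.symbols('s')

# Numerator matrix Pi(s) of Example B.1, with DG(s) = (s+1)(s+2).
# (Entry signs are an assumption: the source's typesetting lost them.)
Pi = sp.Matrix([[4, s + 2],
                [2*(s + 2), sp.Rational(1, 2)]])

def monic(expr):
    """Normalize a polynomial in s to be monic (gcds are defined up to units)."""
    return sp.Poly(expr, s).monic().as_expr()

# Determinantal divisors: Dk = monic gcd of all k x k minors, D0 = 1.
D0 = sp.Integer(1)
D1 = monic(reduce(sp.gcd, list(Pi)))     # gcd of the entries (1x1 minors)
D2 = monic(Pi.det())                     # the only 2x2 minor

# Invariant factors eps_k = Dk / D(k-1), as in (B.3.4).
eps1 = sp.simplify(D1 / D0)
eps2 = sp.factor(D2 / D1)
print(eps1, eps2)   # expect: 1 and (s + 1)*(s + 3)
```

This reproduces (B.4.8) without running the elementary-operation algorithm at all.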
B.5 Poles and Zeros

(i) pz(s) and pp(s) are said to be the zero polynomial and the pole polynomial of G(s), respectively, where

    pz(s) ≜ ε1(s) ε2(s) · · · εr(s);    pp(s) ≜ δ1(s) δ2(s) · · · δr(s)    (B.5.1)

and where ε1(s), ε2(s), . . . , εr(s) and δ1(s), δ2(s), . . . , δr(s) are the polynomials in the Smith-McMillan form, G^SM(s), of G(s).

Note that pz(s) and pp(s) are monic polynomials.

(ii) The zeros of the matrix G(s) are defined to be the roots of pz(s), and the poles of G(s) are defined to be the roots of pp(s).
In the case of square plants (same number of inputs as outputs), it follows that det[G(s)] is a simple function of pz(s) and pp(s). Specifically, we have

    det[G(s)] = K pz(s)/pp(s)    (B.5.2)

for some constant K. Note, however, that pz(s) and pp(s) are not necessarily coprime. Hence, the scalar rational function det[G(s)] is not sufficient to determine all zeros and poles of G(s). However, the relative degree of det[G(s)] is equal to the difference between the number of poles and the number of zeros of the MIMO transfer-function matrix.
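The relation (B.5.2) is easy to confirm symbolically. The following sketch uses the Example B.1 data (entry signs assumed as in the reconstruction above) together with the pole and zero polynomials read off from its Smith-McMillan form:

```python
import sympy as sp

s = sp.symbols('s')

# G(s) of Example B.1 (entry signs are an assumption).
G = sp.Matrix([[4/((s + 1)*(s + 2)), 1/(s + 1)],
               [2/(s + 1), 1/(2*(s + 1)*(s + 2))]])

# From the Smith-McMillan form: pz(s) = s + 3, pp(s) = (s+1)(s+2)^2.
pz = s + 3
pp = (s + 1)*(s + 2)**2

# (B.5.2) predicts det G = K * pz/pp for a constant K.
ratio = sp.cancel(G.det() / (pz / pp))
print(ratio)   # a constant, as predicted
```

Note also that the relative degree of det[G(s)] here is 3 − 0... rather, deg pp − deg pz = 3 − 1 = 2, matching the difference between the pole and zero counts of the cancelled determinant, while G itself has four poles and one finite Smith-McMillan zero: the determinant alone cannot reveal the cancelled factors.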
In particular, the Smith-McMillan form leads to a factorization

    G(s) = L(s) G^SM(s) R(s)    (B.6.2)

where L(s) and R(s) are unimodular polynomial matrices; for example, they are given by

    L(s) = [L̄(s)]⁻¹;    R(s) = [R̄(s)]⁻¹    (B.6.3)

where L̄(s) and R̄(s) are the unimodular matrices taking G(s) to its Smith-McMillan form, G^SM(s) = L̄(s)G(s)R̄(s), as in (B.6.1). Define

    N(s) ≜ diag( ε1(s), . . . , εr(s), 0, . . . , 0 )    (B.6.4)

    D(s) ≜ diag( δ1(s), . . . , δr(s), 1, . . . , 1 )    (B.6.5)

where N(s) and D(s) are m × m matrices. Hence, G^SM(s) can be written as G^SM(s) = N(s)[D(s)]⁻¹, and therefore

    G(s) = L(s)N(s)[D(s)]⁻¹R(s) = [L(s)N(s)] [[R(s)]⁻¹D(s)]⁻¹ = GN(s)[GD(s)]⁻¹    (B.6.7)

where

    GN(s) ≜ L(s)N(s);    GD(s) ≜ [R(s)]⁻¹D(s)    (B.6.8)

Equations (B.6.7) and (B.6.8) define what is known as a right matrix fraction description (RMFD).
It can be shown that GD(s) is always column-equivalent to a column proper matrix P(s) (see Definition B.9). This implies that the degree of the pole polynomial pp(s) is equal to the sum of the degrees of the columns of P(s).
We also observe that the RMFD is not unique, because, for any nonsingular m × m polynomial matrix Ω(s), we can write G(s) as

    G(s) = [GN(s)Ω(s)] [GD(s)Ω(s)]⁻¹    (B.6.9)

where Ω(s) is said to be a right common factor.
factors of GN (s) and GD (s) are unimodular matrices, then, from definition B.7,
we have that GN (s) and GD (s) are right coprime. In this case, we say that the
RMFD (GN (s), GD (s)) is irreducible.
It is easy to see that, when a RMFD is irreducible, a value s = z is a zero of G(s) if and only if GN(s) loses rank at s = z, and a value s = p is a pole of G(s) if and only if GD(s) is singular at s = p.
Remark B.1. A left matrix fraction description (LMFD) can be built similarly, with a different grouping of the matrices in (B.6.7). Namely,

    G(s) = L(s)[D(s)]⁻¹N(s)R(s) = [ D(s)[L(s)]⁻¹ ]⁻¹ [ N(s)R(s) ] = [ḠD(s)]⁻¹ ḠN(s)    (B.6.10)

where

    ḠN(s) ≜ N(s)R(s);    ḠD(s) ≜ D(s)[L(s)]⁻¹    (B.6.11)

∎
The left and right matrix fraction descriptions have been derived here starting from the Smith-McMillan form; hence, the factors are polynomial matrices. However, it is immediate to see that they provide a more general description. In particular, GN(s), GD(s), ḠN(s) and ḠD(s) are generally matrices with rational entries. One possible way to obtain such a representation is to divide the two polynomial matrices forming the original MFD by the same (stable) polynomial.
An example summarizing the above concepts is considered next.
Example B.2. Consider a 2 × 2 MIMO system having the transfer function

    G(s) = [ 4/((s+1)(s+2))    −0.5/(s+1)
             1/(s+2)           2/((s+1)(s+2)) ]    (B.6.12)

B.2.1 Find the Smith-McMillan form by performing elementary row and column operations.

B.2.2 Find the poles and zeros.

B.2.3 Build a RMFD for the model.
Solution
B.2.1 We first compute the Smith-McMillan form by performing elementary row and column operations. Referring to equation (B.6.1), we have that

    G^SM(s) = L̄(s)G(s)R̄(s) = [ 1/((s+1)(s+2))    0
                                0                 (s² + 3s + 18)/((s+1)(s+2)) ]    (B.6.13)

with

    L̄(s) = [ 1/4        0
             −2(s+1)    8 ];    R̄(s) = [ 1    (s+2)/8
                                         0    1       ]    (B.6.14)
B.2.2 We see that the observable and controllable part of the system has zero and pole polynomials given by

    pz(s) = s² + 3s + 18;    pp(s) = (s+1)²(s+2)²    (B.6.15)

which, in turn, implies that there are two transmission zeros, located at −1.5 ± j3.97, and four poles, located at −1, −1, −2 and −2.
B.2.3 We can now build a RMFD by using (B.6.2). We first notice that

    L(s) = [L̄(s)]⁻¹ = [ 4      0
                        s+1    1/8 ];    R(s) = [R̄(s)]⁻¹ = [ 1    −(s+2)/8
                                                             0    1        ]    (B.6.16)

and, from (B.6.4)-(B.6.5),

    N(s) = [ 1    0
             0    s² + 3s + 18 ];    D(s) = [ (s+1)(s+2)    0
                                              0             (s+1)(s+2) ]    (B.6.17)

This leads to

    GN(s) = L(s)N(s) = [ 4      0
                         s+1    (s² + 3s + 18)/8 ]    (B.6.18)

and

    GD(s) = [R(s)]⁻¹D(s) = [ 1    (s+2)/8
                             0    1       ] [ (s+1)(s+2)    0
                                              0             (s+1)(s+2) ]    (B.6.19)

           = [ (s+1)(s+2)    (s+1)(s+2)²/8
               0             (s+1)(s+2)    ]    (B.6.20)
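The factors (B.6.18)-(B.6.20) can be verified directly. The sketch below, with the entry signs of (B.6.12) as reconstructed above, checks that GN(s)[GD(s)]⁻¹ reproduces G(s):

```python
import sympy as sp

s = sp.symbols('s')

# Example B.2 data (entry signs are an assumption; the source lost them).
G = sp.Matrix([[4/((s + 1)*(s + 2)), -sp.Rational(1, 2)/(s + 1)],
               [1/(s + 2), 2/((s + 1)*(s + 2))]])

# Right matrix fraction description from (B.6.18)-(B.6.20).
GN = sp.Matrix([[4, 0],
                [s + 1, (s**2 + 3*s + 18)/8]])
GD = sp.Matrix([[(s + 1)*(s + 2), (s + 1)*(s + 2)**2/8],
                [0, (s + 1)*(s + 2)]])

# Check G(s) = GN(s) * GD(s)^(-1) entrywise.
diff = sp.simplify(GN * GD.inv() - G)
print(diff)   # expect the 2x2 zero matrix
```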
Appendix C

RESULTS FROM ANALYTIC FUNCTION THEORY

C.1 Introduction

This appendix summarizes key results from analytic function theory leading to the Cauchy integral formula and its consequence, the Poisson-Jensen formula.
C.2 Independence of Path

Let D be a domain, and let C be a path in D from a point A to a point B, parametrized as x = f1(t), y = f2(t) for t ∈ [t1, t2]. We can then define the following line integrals along the path C from A to B inside D:

    ∫_A^B P(x, y) dx = ∫_{t1}^{t2} P(f1(t), f2(t)) (df1(t)/dt) dt    (C.2.2)

    ∫_A^B Q(x, y) dy = ∫_{t1}^{t2} Q(f1(t), f2(t)) (df2(t)/dt) dt    (C.2.3)

Definition C.1. The line integral ∫ (P dx + Q dy) is said to be independent of the path in D if, for every pair of points A and B in D, the value of the integral is independent of the path followed from A to B.

We then have the following result.
Theorem C.1. If ∫ (P dx + Q dy) is independent of the path in D, then there exists a function F(x, y) in D such that

    ∂F/∂x = P(x, y);    ∂F/∂y = Q(x, y)    (C.2.4)

hold throughout D. Conversely, if a function F(x, y) can be found such that (C.2.4) hold, then ∫ (P dx + Q dy) is independent of the path.
Proof
Suppose that the integral is independent of the path in D. Then, choose a point (x0, y0) in D and let F(x, y) be defined as follows:

    F(x, y) = ∫_{(x0,y0)}^{(x,y)} (P dx + Q dy)    (C.2.5)

where the integral is taken on an arbitrary path in D joining (x0, y0) and (x, y). Because the integral is independent of the path, the integral does indeed depend only on (x, y) and defines the function F(x, y). It remains to establish (C.2.4).
(Figure C.1: a path from (x0, y0) to the point (x, y), approaching it horizontally through (x1, y).)
    F(x, y) = ∫_{(x0,y0)}^{(x1,y)} (P dx + Q dy) + ∫_{(x1,y)}^{(x,y)} (P dx + Q dy)    (C.2.6)
We think of x1 and y as being fixed, while (x, y) may vary along the horizontal line segment. Thus, F(x, y) is being considered as a function of x. The first integral on the right-hand side of (C.2.6) is then independent of x. Hence, for fixed y, we can write
    F(x, y) = constant + ∫_{x1}^{x} P(x, y) dx    (C.2.7)

so that

    ∂F/∂x = P(x, y)    (C.2.8)
A similar argument shows that

    ∂F/∂y = Q(x, y)    (C.2.9)
Conversely, let (C.2.4) hold for some F. Then, with t as a parameter along the path,

    ∫_{(x1,y1)}^{(x2,y2)} (P dx + Q dy) = ∫_{t1}^{t2} [ (∂F/∂x)(dx/dt) + (∂F/∂y)(dy/dt) ] dt    (C.2.10)

                                       = ∫_{t1}^{t2} (dF/dt) dt    (C.2.11)

                                       = F(x2, y2) − F(x1, y1)    (C.2.12)

∎
Theorem C.2. If the integral ∫ (P dx + Q dy) is independent of the path in D, then

    ∮ (P dx + Q dy) = 0    (C.2.13)

on every closed path in D. Conversely, if (C.2.13) holds for every simple closed path in D, then ∫ (P dx + Q dy) is independent of the path in D.
Proof
Suppose that the integral is independent of the path. Let C be a simple closed path in D, and divide C into arcs AB and BA as in Figure C.2. Then

    ∮_C (P dx + Q dy) = ∫_{AB} (P dx + Q dy) + ∫_{BA} (P dx + Q dy)    (C.2.14)

                      = ∫_{AB} (P dx + Q dy) − ∫_{AB} (P dx + Q dy) = 0    (C.2.15)

(Figure C.2: a simple closed path C through points A and B, divided into the two arcs AB and BA.)
Theorem C.3. If P(x, y) and Q(x, y) have continuous partial derivatives in D and ∫ (P dx + Q dy) is independent of the path in D, then

    ∂P/∂y = ∂Q/∂x   in D    (C.2.16)

Proof
By Theorem C.1, there exists a function F such that (C.2.4) holds. Equation (C.2.16) follows by partial differentiation.
∎
Actually, we will be particularly interested in the converse to Theorem C.3.
However, this holds under slightly more restrictive assumptions, namely a simply
connected domain.
Theorem C.4 (Green's theorem). Let R be a region in D bounded by a simple closed curve C, and let P(x, y) and Q(x, y) have continuous partial derivatives in D. Then

    ∮_C (P dx + Q dy) = ∬_R ( ∂Q/∂x − ∂P/∂y ) dx dy    (C.3.1)
Proof
We first consider a simple case, in which R is representable in both of the forms

    f1(x) ≤ y ≤ f2(x), a ≤ x ≤ b   and   g1(y) ≤ x ≤ g2(y), c ≤ y ≤ d    (C.3.2)-(C.3.3)

Then

    ∬_R (∂P/∂y) dx dy = ∫_a^b ∫_{f1(x)}^{f2(x)} (∂P/∂y) dy dx    (C.3.4)

                      = ∫_a^b [ P(x, f2(x)) − P(x, f1(x)) ] dx    (C.3.5)

                      = ∫_a^b P(x, f2(x)) dx − ∫_a^b P(x, f1(x)) dx    (C.3.6)

                      = −∮_C P(x, y) dx    (C.3.7)

By a similar argument,

    ∬_R (∂Q/∂x) dx dy = ∮_C Q(x, y) dy    (C.3.8)

For more complex regions, we decompose into simple regions as above. The result then follows.
∎
We then have the following converse to Theorem C.3.

Theorem C.5. Let P(x, y) and Q(x, y) have continuous derivatives in D and let D be simply connected. If ∂P/∂y = ∂Q/∂x, then ∫ (P dx + Q dy) is independent of the path in D.

Proof
Suppose that

    ∂P/∂y = ∂Q/∂x   in D    (C.3.9)

Then, by Green's theorem (Theorem C.4),

    ∮_C (P dx + Q dy) = ∬_R ( ∂Q/∂x − ∂P/∂y ) dx dy = 0    (C.3.10)

∎
For a complex-valued function f(z) = u(x, y) + jv(x, y), with z = x + jy, we have

    ∫_C f(z) dz = ∫_C (u(x, y) + jv(x, y))(dx + j dy)

                = ∫_C u(x, y) dx − ∫_C v(x, y) dy + j [ ∫_C u(x, y) dy + ∫_C v(x, y) dx ]

We then see that the previous results are immediately applicable to the real and imaginary parts of integrals of this type.
A function f(z) is differentiable at z0 if the limit

    lim_{Δz→0} [ f(z0 + Δz) − f(z0) ] / Δz    (C.5.1)

exists and is independent of the direction of Δz. We denote this limit, when it exists, by f′(z0).
If f(z) = u(x, y) + jv(x, y) is analytic, the Cauchy-Riemann conditions hold:

    ∂u/∂x = ∂v/∂y;    ∂u/∂y = −∂v/∂x    (C.6.1)

Furthermore,

    dw/dz = ∂u/∂x + j ∂v/∂x = ∂v/∂y + j ∂v/∂x = ∂v/∂y − j ∂u/∂y = ∂u/∂x − j ∂u/∂y    (C.6.2)
Proof
Let z0 be a fixed point in D and let Δw = f(z0 + Δz) − f(z0). Because f is analytic, we have

    Δw = γΔz + εΔz;    γ ≜ f′(z0)    (C.6.3)

where ε → 0 as Δz → 0. Writing γ = a + jb, ε = ε1 + jε2 and Δz = Δx + jΔy, and taking real and imaginary parts,

    Δu = aΔx − bΔy + ε1Δx − ε2Δy    (C.6.5)

    Δv = bΔx + aΔy + ε2Δx + ε1Δy    (C.6.6)

from which, letting Δx and Δy tend to zero separately,

    ∂u/∂x = a = ∂v/∂y;    ∂u/∂y = −b = −∂v/∂x    (C.6.8)

∎
Actually, most functions that we will encounter will be analytic, provided the
derivative exists. We illustrate this with some examples.
Example C.1. Consider the function f(z) = z², so that u = x² − y² and v = 2xy. Then

    ∂u/∂x = 2x;    ∂v/∂x = 2y;    ∂u/∂y = −2y;    ∂v/∂y = 2x    (C.6.10)

Hence, the function is clearly analytic.
Example C.2. Consider f(z) = |z|. This function is not analytic, because d|z| is a real quantity and, hence, d|z|/dz will depend on the direction of Δz.
Example C.3. Consider a rational function of the form

    W(z) = K (z − β1)(z − β2) · · · (z − βm) / [ (z − α1)(z − α2) · · · (z − αn) ] = N(z)/D(z)    (C.6.11)

Then

    ∂W/∂z = (1/D²(z)) [ D(z) ∂N(z)/∂z − N(z) ∂D(z)/∂z ]    (C.6.12)

These derivatives clearly exist, save when D(z) = 0, that is, at the poles of W(z).
Example C.4. Consider the same function W(z) defined in (C.6.11). Then

    ∂ ln(W)/∂z = (1/(N(z)D(z))) [ D(z) ∂N(z)/∂z − N(z) ∂D(z)/∂z ] = (1/N(z)) ∂N(z)/∂z − (1/D(z)) ∂D(z)/∂z    (C.6.13)

Hence, ln(W(z)) is analytic, save at the poles and zeros of W(z).
If f(z) is analytic on and inside a closed contour C, then

    ∮_C f(z) dz = 0    (C.7.1)

Proof
This follows from the Cauchy-Riemann conditions together with Theorem C.2.
∎
C.7 Integrals Revisited
If LC denotes the length of a path C, then

    | ∫_C f(z) dz | ≤ max_{z∈C} |f(z)| · LC    (C.7.2)
Example C.5. Assume that C is a semicircle centered at the origin and having radius R. The path length is then LC = πR. Hence,

if f(z) varies as z⁻², then |f(z)| on C must vary as R⁻²; hence, the integral on C vanishes for R → ∞;

if f(z) varies as z⁻¹, then |f(z)| on C must vary as R⁻¹; then, the integral on C becomes a constant as R → ∞.
Example C.6. Consider the function f(z) = ln(z) and an arc of a circle, C, described by z = εe^{jθ} for θ ∈ [−θ1, θ1]. Then

    Iε ≜ lim_{ε→0} ∫_C f(z) dz = 0    (C.7.3)

since, on C, the integrand contributes terms of the form ε ln ε. We then use the fact that lim_{|x|→0} (x ln x) = 0, and the result follows.
Example C.7. Consider the function

    f(z) = ln( 1 + a/zⁿ ),    n ≥ 1    (C.7.5)

and a semicircle, C, defined by z = Re^{jθ} for θ ∈ [−π/2, π/2]. Then, if C is followed clockwise,

    IR ≜ lim_{R→∞} ∫_C f(z) dz = { 0       for n > 1
                                 { −jπa    for n = 1    (C.7.6)

To see this, we write

    IR = lim_{R→∞} ∫_{π/2}^{−π/2} ln( 1 + a/(Rⁿ e^{jnθ}) ) jRe^{jθ} dθ    (C.7.7)

For large R, ln(1 + x) ≈ x, so that

    IR = lim_{R→∞} ∫_{π/2}^{−π/2} j (a/R^{n−1}) e^{j(1−n)θ} dθ    (C.7.9)

From this, by evaluation for n = 1 and for n > 1, the result follows.
∎
Example C.8. Consider the function

    f(z) = ln( 1 + (a/zⁿ) e^{−zτ} ),    n ≥ 1;  τ > 0    (C.7.10)

and a semicircle, C, defined by z = Re^{jθ} for θ ∈ [−π/2, π/2]. Then, for clockwise C,

    IR ≜ lim_{R→∞} ∫_C f(z) dz = 0    (C.7.11)

To see this, we write

    IR = lim_{R→∞} ∫_{π/2}^{−π/2} ln( 1 + (a/zⁿ) e^{−zτ} )|_{z=Re^{jθ}} jRe^{jθ} dθ    (C.7.12)

and use the fact that

    lim_{|z|→∞} z/e^{zτ} = 0    (C.7.13)

so that, for large R,

    z ln( 1 + (a/zⁿ) e^{−zτ} ) ≈ (a/z^{n−1}) e^{−zτ} |_{z=Re^{jθ}}    (C.7.14)

Thus, in the limit, this quantity goes to zero for all positive n. The result then follows.
∎
Example C.9. Consider the function

    f(z) = ln( (z − a)/(z + a) )    (C.7.15)

and the clockwise semicircle C of example C.7. Then

    IR ≜ lim_{R→∞} ∫_C f(z) dz = j2πa    (C.7.16)

This follows from writing

    ln( (z − a)/(z + a) ) = ln( (1 − a/z)/(1 + a/z) ) = ln(1 − a/z) − ln(1 + a/z)    (C.7.17)

and then applying the result in example C.7.
∎
Example C.10. Consider a function of the form

    f(z) = a1/z + a2/z² + · · ·    (C.7.18)

and C, an arc of the circle z = Re^{jθ} for θ ∈ [θ1, θ2]. Thus, dz = jz dθ, and

    ∫_C dz/z = ∫_{θ1}^{θ2} j dθ = j(θ2 − θ1)    (C.7.19)

so that, as R → ∞,

    ∫_C f(z) dz → ja1(θ2 − θ1)    (C.7.20)

∎
Example C.11. Consider, now, f(z) = zⁿ. If the path C is a full circle, centered at the origin and of radius R, followed counterclockwise, then

    ∮_C zⁿ dz = ∫_0^{2π} Rⁿ e^{jnθ} jRe^{jθ} dθ    (C.7.21)

              = { 0      for n ≠ −1
                { 2πj    for n = −1    (C.7.22)

∎
We can now develop Cauchy's integral formula. Say that f(z) can be expanded as

    f(z) = a₋₁/(z − z0) + a0 + a1(z − z0) + a2(z − z0)² + · · ·    (C.7.23)

Then a₋₁ is called the residue of f(z) at z0.
(Figure C.3: a closed contour C with a small circle of radius ε around z0, connected to C by the segments C1 and C2.)
Consider the path shown in Figure C.3. Because f(z) is analytic in a region containing C, the integral around the complete path shown in Figure C.3 is zero. The integrals along C1 and C2 cancel. The anticlockwise circular integral around z0 can be computed by following example C.11 to yield 2πj a₋₁. Hence, the integral around the outer curve C is minus the integral around the circle of radius ε. Thus,

    ∮_C f(z) dz = 2πj a₋₁    (C.7.24)
In particular, if g(z) is analytic on and inside C and q is a point inside C, then applying (C.7.24) to g(z)/(z − q), whose residue at q is g(q), gives

    ∮_C [ g(z)/(z − q) ] dz = 2πj g(q)    (C.7.25)
∎

C.8 Poisson and Jensen Integral Formulas
We note that the residue of g(z)/(z − q) at an interior point z = q of a region D is g(q), and it can be obtained by integrating g(z)/(z − q) on the boundary of D. Hence, we can determine the value of an analytic function inside a region from its behaviour on the boundary.
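The residue results (C.7.24)-(C.7.25) can be confirmed numerically. In the sketch below, the function g and the contour radius are arbitrary illustrative choices (any g analytic inside the circle would do); the contour integral is approximated by a uniform Riemann sum, which is spectrally accurate for smooth periodic integrands:

```python
import numpy as np

# Check (C.7.25): (1/(2*pi*j)) * integral of g(z)/(z - q) dz = g(q),
# for g analytic inside C. Here g(z) = (z**2 + 1)/(z + 3) (analytic for |z| < 3),
# C the counterclockwise circle of radius 2, and q = 0.5 + 0.25j inside C.
g = lambda z: (z**2 + 1)/(z + 3)
q = 0.5 + 0.25j

m = 4000
theta = np.linspace(0.0, 2*np.pi, m, endpoint=False)
z = 2.0*np.exp(1j*theta)     # points on the contour
dz = 1j*z                    # dz/dtheta on the circle

val = np.sum(g(z)/(z - q)*dz)*(2*np.pi/m)/(2j*np.pi)
print(abs(val - g(q)))       # expect a value near 0
```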
Theorem C.8. Let f(z) be analytic inside and on a simple closed contour C, and let z0 be a point inside C. Then

    f(z0) = (1/2πj) ∮_C f(z)/(z − z0) dz    (C.8.1)

If, in addition, C is the boundary of the right-half plane (Figure C.4) and f(z) → 0 as |z| → ∞ in the closed RHP, this becomes

    f(z0) = −(1/2π) ∫_{−∞}^{∞} f(jω)/(jω − z0) dω    (C.8.2)

Theorem C.9. Let f(z) be analytic in the closed RHP and let z0 = x0 + jy0, with x0 > 0. If f(z) satisfies the weaker condition

    lim_{|z|→∞} |f(z)|/|z| = 0,    z ∈ D    (C.8.3)

where D is the closed RHP, then

    f(z0) = (1/π) ∫_{−∞}^{∞} f(jω) x0/(x0² + (ω − y0)²) dω    (C.8.4)
(Figure C.4: the contour C enclosing the RHP, composed of the imaginary axis Ci and a semicircle C∞ of radius R → ∞: C = Ci ∪ C∞.)
Proof
Let z1 = −x0 + jy0 be the mirror image of z0 through the imaginary axis; z1 lies outside the RHP. Applying Theorem C.8, we have

    0 = (1/2πj) ∮_C f(z)/(z − z1) dz    (C.8.6)

and hence

    f(z0) = (1/2πj) ∮_C [ f(z)/(z − z0) − f(z)/(z − z1) ] dz = (1/2πj) ∮_C f(z) (z0 − z1)/((z − z0)(z − z1)) dz    (C.8.7)

By (C.8.3), the contribution of the semicircle C∞ vanishes as its radius tends to infinity, leaving

    f(z0) = −(1/2π) ∫_{−∞}^{∞} f(jω) (z0 − z1)/((jω − z0)(jω − z1)) dω    (C.8.8)

The result follows upon replacing z0 and z1 by their real- and imaginary-part decompositions, since z0 − z1 = 2x0 and (jω − z0)(jω − z1) = −(x0² + (ω − y0)²).
∎
Remark C.1. One of the functions that satisfies (C.8.3) but does not satisfy (C.8.1) is f(z) = ln g(z), where g(z) is a rational function of relative degree nr ≠ 0. We notice that, in this case,

    lim_{|z|→∞} |ln g(z)|/|z| = lim_{R→∞} |K| |nr ln R + j nr θ| / R = 0    (C.8.9)
Remark C.2. Equation (C.8.4) equates two complex quantities. Thus, it also applies independently to their real and imaginary parts. In particular,

    Re{f(z0)} = (1/π) ∫_{−∞}^{∞} Re{f(jω)} x0/(x0² + (ω − y0)²) dω    (C.8.10)

This observation is relevant to many interesting cases. For instance, when f(z) is as in Remark C.1, Re{ln g(z)} = ln |g(z)|.

For this particular case, and assuming that g(z) is a real function of z and that y0 = 0, we have that (C.8.10) becomes

    ln |g(z0)| = (1/π) ∫_0^{∞} ( 2x0/(x0² + ω²) ) ln |g(jω)| dω    (C.8.12)
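Equation (C.8.12) can be checked numerically for a concrete g. The sketch below uses g(s) = (s+3)/(s+2), an arbitrary choice that is analytic and nonzero in the closed RHP and satisfies the decay condition, and compares the Poisson integral at z0 = x0 = 2 with ln|g(2)| = ln(5/4):

```python
import numpy as np
from scipy.integrate import quad

# Check (C.8.12) for g(s) = (s+3)/(s+2) at the real point z0 = x0 = 2.
x0 = 2.0

def ln_abs_g(w):
    # ln|g(jw)| = 0.5*ln((w^2 + 9)/(w^2 + 4))
    return 0.5*np.log((w**2 + 9.0)/(w**2 + 4.0))

integral, _ = quad(lambda w: (2*x0/(x0**2 + w**2))*ln_abs_g(w)/np.pi,
                   0.0, np.inf)
print(integral, np.log(5.0/4.0))   # both sides of (C.8.12)
```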
More generally, if g(z) has zeros a1, . . . , an in the open RHP (and no poles there), then

    ln |g(z0)| = Σ_{i=1}^{n} ln | (z0 − ai)/(z0 + ai) | + (1/π) ∫_{−∞}^{∞} ( x0/(x0² + (ω − y0)²) ) ln |g(jω)| dω    (C.8.13)
Proof
Let

    g̃(z) ≜ g(z) ∏_{i=1}^{n} (z + ai)/(z − ai)    (C.8.14)

Then ln g̃(z) is analytic within the closed RHP. If we now apply Theorem C.9 to ln g̃(z), we obtain

    ln g̃(z0) = ln g(z0) + Σ_{i=1}^{n} ln [ (z0 + ai)/(z0 − ai) ] = (1/π) ∫_{−∞}^{∞} ( x0/(x0² + (ω − y0)²) ) ln g̃(jω) dω    (C.8.15)

We also recall that, if x is any complex number, then Re{ln x} = Re{ln |x| + j∠x} = ln |x|. Thus, the result follows upon equating real parts in the equation above and noting that

    ln |g̃(jω)| = ln |g(jω)|    (C.8.16)

∎
Theorem C.10 (Poisson's integral for the unit disk). Let f(z) be analytic on the closed unit disk and let z0 = re^{jθ} with r < 1. Then

    f(z0) = (1/2π) ∫_0^{2π} P_{1,r}(θ − ω) f(e^{jω}) dω    (C.8.17)

where the Poisson kernel is defined by

    P_{α,r}(x) ≜ (α² − r²)/(α² − 2αr cos(x) + r²),    0 ≤ r < α    (C.8.18)

Proof
Consider the unit circle C. Then, using Theorem C.8, we have that

    f(z0) = (1/2πj) ∮_C f(z)/(z − z0) dz    (C.8.19)

Define

    z1 ≜ (1/r) e^{jθ}    (C.8.20)

Because z1 is outside the region encircled by C, the application of Theorem C.8 yields

    0 = (1/2πj) ∮_C f(z)/(z − z1) dz    (C.8.21)

Subtracting (C.8.21) from (C.8.19) and writing z = e^{jω} on C, we obtain

    f(z0) = (1/2π) ∫_0^{2π} [ e^{jω}/(e^{jω} − re^{jθ}) − re^{jω}/(re^{jω} − e^{jθ}) ] f(e^{jω}) dω    (C.8.22)

and a direct calculation shows that the bracketed factor equals P_{1,r}(θ − ω), which proves (C.8.17).
∎
Consider now a function g(z) which is analytic outside the unit circle, and define

    f(z) ≜ g(1/z)    (C.8.23)

Then f(z) is analytic on the closed unit disk. Applying Theorem C.10 to f at the point (1/r)e^{jθ}, with r > 1, gives

    g(re^{−jθ}) = (1/2π) ∫_0^{2π} P_{1,1/r}(θ − ω) g(e^{−jω}) dω    (C.8.24)

where

    P_{1,1/r}(θ − ω) = (r² − 1)/(r² − 2r cos(θ − ω) + 1)    (C.8.25)

If, finally, we make the change of integration variable ζ = −ω (and replace θ by −θ), the following result is obtained, valid for r > 1:

    g(re^{jθ}) = (1/2π) ∫_0^{2π} [ (r² − 1)/(r² − 2r cos(θ − ζ) + 1) ] g(e^{jζ}) dζ    (C.8.26)

Thus, Poisson's integral for the unit disk can also be applied to functions of a complex variable which are analytic outside the unit circle.
Lemma C.2 (Poisson-Jensen formula for the unit disk). Let g(z) be analytic on the closed unit disk, with zeros α1, . . . , αn in the open unit disk and no zeros on the unit circle. Then, for z0 = re^{jθ} with r < 1 and g(z0) ≠ 0,

    ln |g(z0)| = Σ_{i=1}^{n} ln | (z0 − αi)/(1 − ᾱi z0) | + (1/2π) ∫_0^{2π} P_{1,r}(θ − ω) ln |g(e^{jω})| dω    (C.8.27)

Proof
Let

    g̃(z) ≜ g(z) ∏_{i=1}^{n} (1 − ᾱi z)/(z − αi)    (C.8.28)

Then ln g̃(z) is analytic on the closed unit disk. If we now apply Theorem C.10 to ln g̃(z), we obtain

    ln g̃(z0) = ln g(z0) + Σ_{i=1}^{n} ln [ (1 − ᾱi z0)/(z0 − αi) ] = (1/2π) ∫_0^{2π} P_{1,r}(θ − ω) ln g̃(e^{jω}) dω    (C.8.29)

We also recall that, if x is any complex number, then ln x = ln |x| + j∠x. Thus, the result follows upon equating real parts in the equation above and noting that, on the unit circle,

    ln |g̃(e^{jω})| = ln |g(e^{jω})|    (C.8.30)

∎
Theorem C.11 (Jensen's formula for the unit disk). Let f(z) and g(z) be analytic functions on the unit disk. Assume that the zeros of f(z) and g(z) on the unit disk are α1, α2, . . . , αn and β1, β2, . . . , βm, respectively, where none of these zeros lie on the unit circle.

If

    h(z) ≜ z^λ f(z)/g(z),    |λ| < ∞    (C.8.31)

then

    (1/2π) ∫_0^{2π} ln |h(e^{jω})| dω = ln | f(0)/g(0) | + ln | (β1 β2 · · · βm)/(α1 α2 · · · αn) |    (C.8.32)

Proof
We first note that ln |h(z)| = λ ln |z| + ln |f(z)| − ln |g(z)|. We then apply the Poisson-Jensen formula (Lemma C.2) to f(z) and g(z) at z0 = 0, using

    P_{1,0}(x) = 1;    ln | (z0 − αi)/(1 − ᾱi z0) | at z0 = 0 is ln |αi|;    ln | (z0 − βi)/(1 − β̄i z0) | at z0 = 0 is ln |βi|    (C.8.33)

to obtain

    ln |f(0)| = Σ_{i=1}^{n} ln |αi| + (1/2π) ∫_0^{2π} ln |f(e^{jω})| dω    (C.8.34)

    ln |g(0)| = Σ_{i=1}^{m} ln |βi| + (1/2π) ∫_0^{2π} ln |g(e^{jω})| dω    (C.8.35)

The result follows upon subtracting equation (C.8.35) from (C.8.34), and noting that

    (λ/2π) ∫_0^{2π} ln | e^{jω} | dω = 0    (C.8.36)

∎
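Jensen's formula is easy to test numerically. The sketch below uses hypothetical polynomials f and g with all zeros strictly inside the unit circle and λ = 0; in that case the right-hand side of (C.8.32) collapses to ln|Kf| (here ln 3), because each |f(0)|-type term cancels against the corresponding product of zero magnitudes:

```python
import numpy as np
from scipy.integrate import quad

# h(z) = f(z)/g(z), f(z) = 3*(z - 0.5)*(z + 0.25), g(z) = z - 0.4.
alphas = [0.5, -0.25]    # zeros of f (inside the unit circle)
betas = [0.4]            # zeros of g (inside the unit circle)
f = lambda z: 3.0*(z - 0.5)*(z + 0.25)
g = lambda z: z - 0.4

# Left-hand side of (C.8.32): average of ln|h| over the unit circle.
lhs, _ = quad(lambda t: np.log(abs(f(np.exp(1j*t))/g(np.exp(1j*t)))),
              0.0, 2*np.pi)
lhs /= 2*np.pi

# Right-hand side of (C.8.32).
rhs = np.log(abs(f(0)/g(0))) + np.log(
    np.prod(np.abs(betas))/np.prod(np.abs(alphas)))
print(lhs, rhs)   # both approximately ln(3)
```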
Remark C.3. Further insights can be obtained from equation (C.8.32) if we assume that, in (C.8.31), f(z) and g(z) are polynomials:

    f(z) = Kf ∏_{i=1}^{n} (z − αi)    (C.8.37)

    g(z) = ∏_{i=1}^{m} (z − βi)    (C.8.38)

Then

    | f(0)/g(0) | = |Kf| ∏_{i=1}^{n} |αi| / ∏_{i=1}^{m} |βi|    (C.8.39)

Thus, α1, α2, . . . , αn and β1, β2, . . . , βm are all the zeros and all the poles of h(z), respectively, that have nonzero magnitude.

This allows equation (C.8.32) to be rewritten as

    (1/2π) ∫_0^{2π} ln |h(e^{jω})| dω = ln |Kf| + ln ( |α°1 α°2 · · · α°nu| / |β°1 β°2 · · · β°mu| )    (C.8.40)

where α°1, . . . , α°nu and β°1, . . . , β°mu denote the zeros and poles of h(z), respectively, that lie outside the unit circle.
Consider now a biproper function of the form

    h̄(z) = z^λ f̄(z)/ḡ(z)    (C.9.1)

where λ is an integer, and f̄(z) and ḡ(z) are polynomials of degrees mf and mg, respectively. Then, due to the biproperness of h̄(z), we have that λ + mf = mg.

Further assume that

(i) ḡ(z) has no zeros outside the open unit disk,
(ii) f̄(z) does not vanish on the unit circle, and
(iii) f̄(z) vanishes outside the unit disk at ζ1, ζ2, . . . , ζm.

Define

    h(z) = f(z)/g(z) ≜ h̄(1/z)    (C.9.2)

where f(z) and g(z) are polynomials.

Then it follows that

(i) g(z) has no zeros in the closed unit disk;
(ii) f(z) does not vanish on the unit circle;
(iii) f(z) vanishes in the open unit disk at α1, . . . , αm, where αi = ζi⁻¹ for i = 1, 2, . . . , m;
(iv) h(z) is analytic in the closed unit disk;
(v) h(z) does not vanish on the unit circle;
(vi) h(z) has zeros in the open unit disk, located at α1, α2, . . . , αm.
We then have the following result.

Lemma C.3. Consider the function h(z) defined in (C.9.2) and a point z0 = re^{jθ} such that r < 1. Then

    ln |h(z0)| = Σ_{i=1}^{m} ln | (z0 − αi)/(1 − ᾱi z0) | + (1/2π) ∫_0^{2π} P_{1,r}(θ − ω) ln |h(e^{jω})| dω    (C.9.3)

Proof
This follows from a straightforward application of Lemma C.2.
∎
C.10 Bode's Theorems

Theorem C.12 (Bode integral in the half-plane). Let l(z) be a proper real, rational function of relative degree nr ≥ 1. Define

    g(z) ≜ (1 + l(z))⁻¹    (C.10.1)

and assume that g(z) has neither poles nor zeros in the closed RHP. Then

    ∫_0^∞ ln |g(jω)| dω = { 0        for nr > 1
                          { −κπ/2    for nr = 1, where κ ≜ lim_{z→∞} z l(z)    (C.10.2)
Proof
Because ln g(z) is analytic in the closed RHP,

    ∮_C ln g(z) dz = 0    (C.10.3)

where C is the contour described in Figure C.4. Splitting C into the imaginary axis and the semicircle C∞,

    ∮_C ln g(z) dz = j ∫_{−∞}^{∞} ln g(jω) dω − ∫_{C∞} ln(1 + l(z)) dz    (C.10.4)

For the first integral on the right-hand side of equation (C.10.4), we use the conjugate symmetry of g(z) to obtain

    ∫_{−∞}^{∞} ln g(jω) dω = 2 ∫_0^∞ ln |g(jω)| dω    (C.10.5)

For the second integral, we observe that, on C∞,

    l(z) ≈ a/z^{nr}    (C.10.6)

for some constant a. The result follows upon using example C.7 and noticing that a = κ for nr = 1.
∎
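Theorem C.12 can be verified numerically for a first-order example. The sketch below uses l(z) = k/(z + a), a hypothetical loop with relative degree one, for which κ = k and g = (1 + l)⁻¹ has neither poles nor zeros in the closed RHP when a > 0 and a + k > 0:

```python
import numpy as np
from scipy.integrate import quad

# l(z) = k/(z + a): nr = 1 and kappa = lim z*l(z) = k.
a, k = 1.0, 2.0

def ln_abs_g(w):
    # g(jw) = (jw + a)/(jw + a + k), so ln|g(jw)| is:
    return 0.5*np.log((w**2 + a**2)/(w**2 + (a + k)**2))

integral, _ = quad(ln_abs_g, 0.0, np.inf)
print(integral, -k*np.pi/2)   # expect both close to -pi
```

The closed-form value follows from the identity ∫_0^∞ ln((ω² + a²)/(ω² + b²)) dω = π(a − b), which gives −kπ/2 here, matching (C.10.2).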
Remark C.4. If g(z) = (1 + e^{−zτ} l(z))⁻¹ for τ > 0, then result (C.10.2) becomes

    ∫_0^∞ ln |g(jω)| dω = 0,    nr > 0    (C.10.7)

The proof of (C.10.7) follows along the same lines as that of Theorem C.12, using the result in example C.8.
Theorem C.13 (Modified Bode integral). Let l(z) be a proper real, rational function of relative degree nr. Define

    g(z) ≜ (1 + l(z))⁻¹    (C.10.8)

Assume that g(z) is analytic in the closed RHP and that it has q zeros in the open RHP, located at γ1, γ2, . . . , γq, with Re(γi) > 0. Then

    ∫_0^∞ ln |g(jω)| dω = { π Σ_{i=1}^{q} γi             for nr > 1
                          { −κπ/2 + π Σ_{i=1}^{q} γi     for nr = 1, where κ ≜ lim_{z→∞} z l(z)    (C.10.9)
Proof
We first notice that ln g(z) is no longer analytic in the closed RHP. We then define

    g̃(z) ≜ g(z) ∏_{i=1}^{q} (z + γi)/(z − γi)    (C.10.10)

Thus, ln g̃(z) is analytic in the closed RHP. We can then apply Cauchy's integral theorem on the contour C described in Figure C.4 to obtain

    ∮_C ln g̃(z) dz = 0 = ∮_C ln g(z) dz + Σ_{i=1}^{q} ∮_C ln [ (z + γi)/(z − γi) ] dz    (C.10.11)

The first integral on the right-hand side of (C.10.11) can be expanded as

    ∮_C ln g(z) dz = 2j ∫_0^∞ ln |g(jω)| dω + ∫_{C∞} ln g(z) dz    (C.10.12)

where

    ∫_{C∞} ln g(z) dz = { 0      for nr > 1
                        { jκπ    for nr = 1, where κ ≜ lim_{z→∞} z l(z)    (C.10.13)

The second integral on the right-hand side of equation (C.10.11) can be computed as follows:

    ∮_C ln [ (z + γi)/(z − γi) ] dz = j ∫_{−∞}^{∞} ln [ (jω + γi)/(jω − γi) ] dω + ∫_{C∞} ln [ (z + γi)/(z − γi) ] dz    (C.10.14)

We note that the first integral on the right-hand side is zero, and, by using example C.9, the second integral is equal to −2jπγi. Thus, the result follows.
∎
Remark C.5. Note that g(z) is a real function of z, so its RHP zeros occur in conjugate pairs. Hence,

    Σ_{i=1}^{q} γi = Σ_{i=1}^{q} Re{γi}    (C.10.15)

∎
Remark C.6. If g(z) = (1 + e^{−zτ} l(z))⁻¹ for τ > 0, then the result (C.10.9) becomes

    ∫_0^∞ ln |g(jω)| dω = π Σ_{i=1}^{q} Re{γi},    nr > 0    (C.10.16)

The proof of (C.10.16) follows along the same lines as that of Theorem C.13, using the result in example C.8.
Remark C.7. The Poisson, Jensen, and Bode formulae assume that a key function
is analytic, not only inside a domain D, but also on its border C. Sometimes, there
may exist singularities on C. These can be dealt with by using an infinitesimal
circular indentation in C, constructed so as to leave the singularity outside D. For
the functions of interest to us, the integral along the indentation vanishes. This is
illustrated in example C.6 for a logarithmic function, when D is the right-half plane
and there is a singularity at the origin.
∎
Appendix D
PROPERTIES OF
CONTINUOUS-TIME
RICCATI EQUATIONS
    0 = Aᵀ P + P A − P B R⁻¹ Bᵀ P + Q    (D.0.3)
Solutions of the CTDRE can be written as P(t) = N(t)[M(t)]⁻¹, where M(t) ∈ R^{n×n} and N(t) ∈ R^{n×n} satisfy the following linear equation:

    d/dt [ M(t) ]  =  [ A     −BR⁻¹Bᵀ ] [ M(t) ]    (D.1.2)
         [ N(t) ]     [ −Q    −Aᵀ     ] [ N(t) ]

subject to boundary conditions consistent with P(tf), for example M(tf) = I and N(tf) = P(tf).
Proof
We show that P(t), as defined above, satisfies the CTDRE. We first have that

    dI/dt = 0 = (dM(t)/dt) [M(t)]⁻¹ + M(t) (d[M(t)]⁻¹/dt)    (D.1.5)

from which we obtain

    d[M(t)]⁻¹/dt = −[M(t)]⁻¹ (dM(t)/dt) [M(t)]⁻¹    (D.1.6)

Thus, differentiating P(t) = N(t)[M(t)]⁻¹ and using (D.1.2), we obtain

    dP(t)/dt = −Q − Aᵀ N(t)[M(t)]⁻¹ − N(t)[M(t)]⁻¹ A + N(t)[M(t)]⁻¹ B R⁻¹ Bᵀ N(t)[M(t)]⁻¹    (D.1.7)

which shows that P(t) also satisfies (D.0.1), upon using (D.1.1).
The matrix on the right-hand side of (D.1.2), namely,

    H ≜ [ A     −BR⁻¹Bᵀ ] ,    H ∈ R^{2n×2n}    (D.1.8)
        [ −Q    −Aᵀ     ]

is called the Hamiltonian matrix, and it satisfies

    [ P(t)   −I ] H [ I    ]  =  −dP(t)/dt    (D.1.9)
                    [ P(t) ]

Then, not surprisingly, solutions to the CTDRE, (D.0.1), are intimately connected to the properties of the Hamiltonian matrix.

We first note that H has the following reflexive property:

    H = −T Hᵀ T⁻¹    with    T = [ 0     In ]    (D.1.10)
                                 [ −In   0  ]
Section D.1. Solutions of the CTDRE
Let V be a transformation that block-diagonalizes H:

    [V]⁻¹ H V = [ Hs    0  ]    (D.1.11)
                [ 0     Hu ]

where Hs and Hu are diagonal matrices with (stable and unstable) eigenvalue sets Λs and Λu, respectively. We can use V to transform the matrices M(t) and N(t), to obtain

    [ M̃(t) ] = [V]⁻¹ [ M(t) ]    (D.1.12)
    [ Ñ(t) ]         [ N(t) ]

and we partition V conformally as

    V = [ V11    V12 ]    (D.1.14)
        [ V21    V22 ]
Proof
From (D.1.12), we have

    M(tf) = V11 M̃(tf) + V12 Ñ(tf);    N(tf) = V21 M̃(tf) + V22 Ñ(tf)    (D.1.19)

so that the boundary condition P(tf) = Pf becomes

    [ V21 M̃(tf) + V22 Ñ(tf) ] [ V11 M̃(tf) + V12 Ñ(tf) ]⁻¹ = Pf    (D.1.20)

or

    [ V21 + V22 Ñ(tf)[M̃(tf)]⁻¹ ] [ V11 + V12 Ñ(tf)[M̃(tf)]⁻¹ ]⁻¹ = Pf    (D.1.21)

Moreover, since M̃(t) and Ñ(t) evolve according to the decoupled equations dM̃/dt = Hs M̃ and dÑ/dt = Hu Ñ, we have

    Ñ(t)[M̃(t)]⁻¹ = e^{−Hu(tf−t)} Ñ(tf)[M̃(tf)]⁻¹ e^{Hs(tf−t)}    (D.1.25)
The CTARE

    0 = Q − P B R⁻¹ Bᵀ P + P A + Aᵀ P    (D.2.1)

can be connected to the Hamiltonian matrix in a similar way: any solution P of (D.2.1) satisfies

    [ P   −I ] H [ I ]  =  0    (D.2.2)
                 [ P ]

Lemma D.3. Let V be a nonsingular matrix such that

    V⁻¹ H V = [ Ha    0  ]    (D.2.3)
              [ 0     Hb ]

for some splitting of the eigenvalues of H into the sets Λa and Λb, and partition V as

    V = [ V11    V12 ]    (D.2.4)
        [ V21    V22 ]

Then P = V21 V11⁻¹ is a solution of the CTARE.

Proof
(i) This follows by direct substitution.

(ii) The form of P ensures that

    [ P   −I ] [ V11 ]  =  0    (D.2.5)
               [ V21 ]

and, equivalently,

    V⁻¹ [ I ]  =  [ V11⁻¹ ]    (D.2.6)
        [ P ]     [ 0     ]

∎
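Lemma D.3 (taking Ha = Hs, the stable eigenvalues) underlies a standard numerical method for the CTARE. The sketch below, for a hypothetical two-state system, builds H as in (D.1.8), forms P = V21 V11⁻¹ from the stable invariant subspace, and compares the result against SciPy's solve_continuous_are, which solves the same equation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical two-state example.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

n = A.shape[0]
S = B @ np.linalg.inv(R) @ B.T
H = np.block([[A, -S], [-Q, -A.T]])        # Hamiltonian, as in (D.1.8)

# Stable invariant subspace: eigenvectors for the n eigenvalues in the open LHP.
eigvals, eigvecs = np.linalg.eig(H)
V = eigvecs[:, eigvals.real < 0]
V11, V21 = V[:n, :], V[n:, :]
P = np.real(V21 @ np.linalg.inv(V11))      # P = V21 V11^{-1}

P_are = solve_continuous_are(A, B, Q, R)
print(np.max(np.abs(P - P_are)))           # expect a value near 0
```

For large or badly conditioned problems, Schur-based methods (as used internally by SciPy) are preferred over a raw eigendecomposition of H, but the construction is the same.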
Lemma D.4. (a) The stabilizing solution P∞ˢ of the CTARE has the property that the closed-loop matrix

    Acl = A − B K∞ˢ    (D.3.1)

has all its eigenvalues in the open LHP, where

    K∞ˢ = R⁻¹ Bᵀ P∞ˢ    (D.3.2)

Proof
For part (a), we argue as follows. Consider (D.1.11) and (D.1.14), with Hs collecting the stable eigenvalues, and let P = V21 V11⁻¹. Then

    H [ V11 ]  =  [ V11 ] Hs    (D.3.4)
      [ V21 ]     [ V21 ]

and hence

    H [ I ]  =  [ V11 Hs V11⁻¹ ]    (D.3.5)
      [ P ]     [ V21 Hs V11⁻¹ ]

If we consider only the first row in (D.3.5), then, using (D.1.8), we have

    A − B R⁻¹ Bᵀ P = V11 Hs V11⁻¹

Hence, the closed-loop poles are the eigenvalues of Hs and, by construction, these are stable.

We leave the reader to pursue parts (b), (c), and (d) by studying the references given at the end of Chapter 24.
∎
Proof
We observe that the eigenvalues of H can be grouped so that Λs contains only eigenvalues that lie in the left-half plane. The result then follows from (D.1.16) to (D.1.17), given that Hs and −Hu are matrices with eigenvalues strictly inside the LHP.
∎

Remark D.1. Actually, provided that (Q^{1/2}, A) is detectable, it suffices to have Pf ≥ 0 in Lemma D.5.
∎
The estimate is generated by a linear filter of the form

    ẑ(t) = ∫_0^t h(t − τ)ᵀ y′(τ) dτ + gᵀ x̂o    (D.5.2)

where h(t) is the impulse response of the filter and x̂o is a given estimate of the initial state. Indeed, we will assume that (22.10.17) holds, that is, that the initial state x(0) satisfies

    E{ (x(0) − x̂o)(x(0) − x̂o)ᵀ } = Po    (D.5.3)

We will be interested in designing the filter impulse response, h(·), so that ẑ(t) is close to z(t) in some sense. (Indeed, the precise sense we will use is a quadratic form.) From (D.5.1) and (D.5.2), it is convenient to set

    u(τ) = h(t − τ)    (D.5.7)
Introduce ζ(τ) as in (D.5.6), i.e., as the solution of dζ(τ)/dτ = −Aᵀζ(τ) + Cᵀu(τ) with ζ(t) = f. Then the estimation error can be written as

    z(t) − ẑ(t) = fᵀx(t) − [ζ(τ)ᵀx(τ)]|_0^t − gᵀx̂o + ∫_0^t [ ζ(τ)ᵀ (dx(τ)/dτ) − ζ(τ)ᵀ A x(τ) − u(τ)ᵀ (dv(τ)/dτ) ] dτ    (D.5.9)

Finally, using (22.10.5) and (D.5.6), we obtain

    z(t) − ẑ(t) = ζ(0)ᵀ(x(0) − x̂o) + ∫_0^t [ ζ(τ)ᵀ (dw(τ)/dτ) − u(τ)ᵀ (dv(τ)/dτ) ] dτ + (ζ(0) − g)ᵀ x̂o    (D.5.10)

The last term in (D.5.10) is zero if g = ζ(0). Thus, we see that the design of the optimal linear filter can be achieved by minimizing

    J = ζ(0)ᵀ Po ζ(0) + ∫_0^t [ ζ(τ)ᵀ Q ζ(τ) + u(τ)ᵀ R u(τ) ] dτ    (D.5.12)
This is a standard linear regulator problem in reverse time, with the correspondences A ↔ Aᵀ and B ↔ Cᵀ, state weighting Q, control weighting R, and the initial-condition penalty Po playing the role of the terminal-state penalty.
Hence, the optimal choice of u(τ) leads to

    ẑo(t) = ∫_0^t uo(τ)ᵀ y′(τ) dτ + gᵀ x̂o    (D.5.13)

where

    uo(τ) = Kf(τ) ζ(τ)    (D.5.14)

    Kf(τ) = R⁻¹ C Σ(τ)    (D.5.15)

    dΣ(t)/dt = Q − Σ(t) Cᵀ R⁻¹ C Σ(t) + Σ(t) Aᵀ + A Σ(t)    (D.5.16)

    Σ(0) = Po    (D.5.17)

    dζ(τ)/dτ = −Aᵀ ζ(τ) + Cᵀ Kf(τ) ζ(τ)    (D.5.18)

    ζ(t) = f    (D.5.19)

    uo(τ) = Kf(τ) ζ(τ)    (D.5.20)

    g = ζ(0)    (D.5.21)

Denoting by Φ̄(·) the transition matrix associated with A − Kf(·)ᵀC, the solution of (D.5.18)-(D.5.19) is

    ζ(τ) = Φ̄(t − τ)ᵀ f    (D.5.22)

so that ζ(0) = Φ̄(t)ᵀ f and uo(τ) = Kf(τ) Φ̄(t − τ)ᵀ f.
With these choices,

    ẑ(t) = gᵀ x̂o + ∫_0^t uo(τ)ᵀ y′(τ) dτ    (D.5.23)

         = ζ(0)ᵀ x̂o + ∫_0^t fᵀ Φ̄(t − τ) Kf(τ)ᵀ y′(τ) dτ

         = fᵀ Φ̄(t) x̂o + ∫_0^t fᵀ Φ̄(t − τ) Kf(τ)ᵀ y′(τ) dτ

         = fᵀ x̂(t)

where

    x̂(t) = Φ̄(t) x̂o + ∫_0^t Φ̄(t − τ) Kf(τ)ᵀ y′(τ) dτ    (D.5.24)

We then observe that (D.5.24) is actually the solution of the following state space model (the optimal filter):

    dx̂(t)/dt = [ A − Kf(t)ᵀ C ] x̂(t) + Kf(t)ᵀ y′(t)    (D.5.25)

    x̂(0) = x̂o    (D.5.26)

    ẑ(t) = fᵀ x̂(t)    (D.5.27)
We see that the final solution depends on f only through (D.5.27). Thus, as predicted, (D.5.25) and (D.5.26) can be used to generate an optimal estimate of any linear combination of the states.

Of course, the optimal filter (D.5.25) is identical to that given in (22.10.23).

All of the properties of the optimal filter follow by analogy from the (dual) optimal linear regulator. In particular, we observe that (D.5.16) and (D.5.17) are a CTDRE and its boundary condition, respectively. The only difference is that, in the optimal-filter case, this equation has to be solved forward in time. Also, (D.5.16) has an associated CTARE, given by

    Q − Σ Cᵀ R⁻¹ C Σ + Σ Aᵀ + A Σ = 0    (D.5.28)

Thus, the existence, uniqueness, and properties of stabilizing solutions for (D.5.16) and (D.5.28) satisfy the same conditions as the corresponding Riccati equations for the optimal regulator.
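The filter CTARE (D.5.28) can be solved numerically through the duality noted above: it is the regulator CTARE for the pair (Aᵀ, Cᵀ). A sketch, for a hypothetical two-state system:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)          # process-noise intensity
R = np.array([[0.5]])  # measurement-noise intensity

# solve_continuous_are(a, b, q, r) solves a'x + xa - xb r^{-1} b'x + q = 0;
# with a = A', b = C' this is exactly (D.5.28) for x = Sigma.
Sigma = solve_continuous_are(A.T, C.T, Q, R)

residual = Q - Sigma @ C.T @ np.linalg.inv(R) @ C @ Sigma \
           + Sigma @ A.T + A @ Sigma
print(np.max(np.abs(residual)))   # expect a value near 0

# The resulting filter matrix A - Kf' C (with Kf = R^{-1} C Sigma) is stable:
Kf = np.linalg.inv(R) @ C @ Sigma
print(np.linalg.eigvals(A - Kf.T @ C).real)
```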
Appendix E
MATLAB SUPPORT