
Robust Design of Linear Control Laws for Constrained Nonlinear Dynamic Systems

Boris Houska, Moritz Diehl

Electrical Engineering Department (OPTEC & ESAT/SCD)
Kasteelpark Arenberg 10, 3001 Leuven/Heverlee, Belgium.
[email protected], [email protected]
Abstract: In this paper we present techniques for solving robust optimal control problems for nonlinear dynamic systems in a conservative approximation. Here, we assume that the nonlinear dynamic system is affected by a time-varying uncertainty whose L-infinity norm is known to be bounded. By employing specialized explicit upper estimates for the nonlinear terms in the dynamics, we propose a strategy to design a linear control law which guarantees that given constraints on the states and controls are robustly satisfied when running the system in closed-loop mode. Finally, the mathematical techniques are illustrated by applying them to a tutorial example.
1. INTRODUCTION
In recent decades, robust optimization problems have received much attention. Especially robust optimization for convex (or concave) problems is a well-established research field for which efficient algorithms exist (cf. Ben-Tal and Nemirovski [1998], El-Ghaoui and Lebret [1997]). Unfortunately, non-convex robust optimization problems are much more difficult to solve. Although there is a mature theory on semi-infinite optimization available (cf. Jongen et al. [1998], Rückmann and Stein [2001]), there are only a few special cases in which algorithms can successfully be applied (cf. Floudas and Stein [2007]).
When robust control problems are regarded, there exists a huge amount of literature on linear system theory (cf. e.g. Zhou et al. [1996] and the references therein). As soon as nonlinear dynamic systems are considered, much fewer approaches exist. Some authors, e.g. Nagy and Braatz [2004, 2007] as well as Diehl et al. [2006], have suggested heuristic techniques for nonlinear robust optimal control. However, in general, these heuristic approaches do not provide a guarantee that a nonlinear system does not violate given hard constraints in worst-case situations.
The contribution of this paper is that we propose a computationally tractable way of solving robust nonlinear optimal control design problems for time-varying uncertainties in a conservative approximation. For this aim, we need to assume that an explicit estimate of the nonlinear terms in the right-hand side function f is given. We demonstrate

Research supported by Research Council KUL: CoE EF/05/006


Optimization in Engineering(OPTEC), OT/03/30, IOF-
SCORES4CHEM, GOA/10/009 (MaNet), GOA/10/11, several
PhD/postdoc and fellow grants; Flemish Government: FWO:
PhD/postdoc grants, projects G.0452.04, G.0499.04, G.0211.05,
G.0226.06, G.0321.06, G.0302.07, G.0320.08, G.0558.08, G.0557.08,
G.0588.09,G.0377.09, research communities (ICCoS, ANMMM,
MLDM); IWT: PhD Grants, Belgian Federal Science Policy Oce:
IUAP P6/04; EU: ERNSI; FP7-HDMPC, FP7-EMBOCON,
Contract Research: AMINAL. Other: Helmholtz-viCERP,
EMBOCON, COMET-ACCM.
for a tutorial problem how such an explicit estimate can be constructed, illustrating that the results in this paper are not only of theoretical nature but can also be applied in practice.
In Section 2 we concentrate on the problem statement, while Section 3 focuses on uncertain linear systems with L-infinity bounded uncertainties. In Section 4 the main result of this paper on uncertain nonlinear constrained systems is proven. Finally, we demonstrate the applicability of the proposed strategies in Section 5 by applying them to a small tutorial example. Section 6 concludes.
Notation: Besides standard mathematical notation, we introduce the set $D^{n}_{++} \subseteq \mathbb{R}^{n \times n}$, which throughout this paper denotes the set of the diagonal and positive definite matrices in $\mathbb{R}^{n \times n}$.
2. ROBUST NONLINEAR OPTIMAL CONTROL
PROBLEMS
In this section we introduce uncertain optimal control problems for dynamic systems of the form
$$\dot{x}(t) = F(x(t), u(t), w(t)), \qquad x(0) = 0,$$
where $x : [0,T] \to \mathbb{R}^{n_x}$ denotes the states, $u : [0,T] \to \mathbb{R}^{n_u}$ the control inputs, and $w : [0,T] \to \mathbb{R}^{n_w}$ an unknown time-varying input which can influence the nonlinear right-hand side function $F : \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \times \mathbb{R}^{n_w} \to \mathbb{R}^{n_x}$. Throughout this paper, we assume that our only knowledge about the uncertainty $w$ is that it is contained in an uncertainty set $\mathbb{W}$, which is defined as
$$\mathbb{W} := \left\{\, w(\cdot) \;\middle|\; \text{for all } \tau \in [0,T] : \|w(\tau)\|_\infty \le 1 \,\right\}.$$
In words, $\mathbb{W}$ contains the uncertainties $w(\cdot)$ whose L-infinity norm is bounded by 1.
In this paper, we are interested in designing a feedback law in order to compensate the uncertainties $w$. Here, we restrict ourselves to the case that the feedback law is linear, i.e. we set $u(t) := K(t)x(t)$ with $K : [0,T] \to \mathbb{R}^{n_u \times n_x}$ denoting the feedback gain. Now, the dynamics of the closed-loop system can be summarized as
Preprints of the 18th IFAC World Congress, Milano (Italy), August 28 - September 2, 2011. Copyright by the International Federation of Automatic Control (IFAC).
$$\dot{x}(t) = f(x(t), K(t), w(t)) := F(x(t), K(t)x(t), w(t)).$$
Moreover, we assume that we have $f(0, K, 0) = 0$ for all $K \in \mathbb{R}^{n_u \times n_x}$, i.e. we assume that $x_{\mathrm{ref}}(t) = 0$ is the steady state which we would like to track. The uncertain optimal gain design problem of our interest can now be stated as
$$\begin{aligned}
\min_{x(\cdot),\,K(\cdot)} \;\; & \Phi[\,K(\cdot)\,] \\
\text{subject to} \;\; & \dot{x}(\tau) = f(x(\tau), K(\tau), w(\tau)) \\
& x(0) = 0 \\
& C_i(K(\tau))\, x(\tau) \le d_i \quad \text{for all } \tau \in \mathbb{T}_i
\end{aligned} \qquad (1)$$
with $i \in \{1, \dots, m\}$. The constraints are assumed to be linear with a given matrix valued function $C : \mathbb{R}^{n_u \times n_x} \to \mathbb{R}^{m \times n_x}$ and a given vector $d \in \mathbb{R}^m$. The sets $\mathbb{T}_i \subseteq [0,T]$ denote the sets of times at which the constraints should be satisfied. Here, we can e.g. use $\mathbb{T}_i = [0,T]$ if we want to formulate a path constraint, or $\mathbb{T}_i = \{T\}$ if we are interested in a terminal constraint. Note that the above formulation includes the possibility of formulating both state and control bounds, as the controls $u(t) = K(t)x(t)$ are linear in $x$.
Our aim is now to solve the above optimal control problem guaranteeing that the constraints are satisfied for all possible uncertainties $w \in \mathbb{W}$. Thus, we are interested in the following robust counterpart problem:
$$\min_{K(\cdot)} \; \Phi[\,K(\cdot)\,] \quad \text{subject to} \quad V_i[\,t, K(\cdot)\,] \le d_i \;\; \text{for all } t \in \mathbb{T}_i.$$
Here, the robust counterpart functional $V$ is defined component-wise by
$$V_i[\,t, K(\cdot)\,] := \max_{x(\cdot),\,w(\cdot)} \; C_i(K(t))\,x(t)
\quad \text{s.t.} \quad
\begin{cases}
\text{for all } \tau \in [0,t]: \\
\dot{x}(\tau) = f(x(\tau), K(\tau), w(\tau)) \\
x(0) = 0 \\
w(\cdot) \in \mathbb{W}.
\end{cases} \qquad (2)$$
Note that the above problem is difficult to solve, as it has a bi-level or min-max structure. For the case that $f$ is linear in $x$ and $w$, the lower-level maximization problem can be regarded as a convex problem, as $\mathbb{W}$ is a convex set. This lower-level convex case has in a similar context been discussed in Houska and Diehl [2009, 2010], where Lyapunov differential equations have been employed in order to reformulate the min-max problem into a standard optimal control problem.
However, for the case that $f$ is nonlinear, the problem is much harder to solve, as local maxima in the lower-level problem cannot be excluded. The aim of this paper is to develop a conservative approximation strategy which overestimates the functions $V_i$, allowing us to solve the robust counterpart problem approximately but with guarantees. For this aim, we will have to go one step back in the next Section 3, where we start with an analysis of linear dynamic systems. Later, in Section 4, we will come back to a discussion of the more difficult nonlinear problem.
3. LINEAR DYNAMIC SYSTEMS WITH TIME
VARYING UNCERTAINTY
In this section, we introduce the basic concept of robust optimization for linear dynamic systems with infinite-dimensional uncertainties. We are interested in a dynamic system of the form
$$\dot{x}(t) = A(t)x(t) + B(t)w(t) \quad \text{with} \quad x(0) = 0. \qquad (3)$$
Here, $x : \mathbb{R} \to \mathbb{R}^{n_x}$ denotes the state, while $w : \mathbb{R} \to \mathbb{R}^{n_w}$ is assumed to be a time-varying uncertainty. Moreover, $A : \mathbb{R} \to \mathbb{R}^{n_x \times n_x}$ and $B : \mathbb{R} \to \mathbb{R}^{n_x \times n_w}$ are assumed to be given (Lebesgue-) integrable functions.
As outlined in the previous section, we are interested in computing the maximum excitation $V(t)$ of the system at a given time $t$ in a given direction $c \in \mathbb{R}^{n_x}$:
$$V(t) := \max_{x(\cdot),\,w(\cdot)} \; c^T x(t)
\quad \text{s.t.} \quad
\begin{cases}
\text{for all } \tau \in [0,t]: \\
\dot{x}(\tau) = A(\tau)x(\tau) + B(\tau)w(\tau) \\
x(0) = 0 \\
w(\cdot) \in \mathbb{W}.
\end{cases} \qquad (4)$$
The above maximization problem can be regarded as an infinite-dimensional linear program, which is convex as the set $\mathbb{W}$ is convex. Following the ideas from Ben-Tal and Nemirovski [1998], we suggest to analyze the dual of the above maximization problem in order to compute $V$ via a minimization problem.
In order to construct the dual problem, we need a time-varying multiplier $\lambda : [0,T] \to \mathbb{R}^{n_w}$ to account for the constraints of the form $w_i(\tau)^2 \le 1$, which have to be satisfied for all times and all indices $i \in \{1, \dots, n_w\}$. Moreover, we express the state function $x$ of the linear dynamic system explicitly as
$$x(t) = \int_0^t H_t(\tau)\, w(\tau)\, \mathrm{d}\tau, \qquad (5)$$
with the impulse response function $H_t(\tau) := G(t,\tau)B(\tau)$. Here, $G : \mathbb{R} \times \mathbb{R} \to \mathbb{R}^{n_x \times n_x}$ denotes the fundamental solution of the linear differential equation (3), which is defined as the solution of the following differential equation:
$$\frac{\partial G(t,\tau)}{\partial t} = A(t)\,G(t,\tau) \quad \text{with} \quad G(\tau,\tau) = \mathbb{1} \qquad (6)$$
for all $t, \tau \in \mathbb{R}$.
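For intuition, the fundamental solution in (6) can be checked numerically in the scalar case, where it reduces to $G(t,\tau) = \exp\left(\int_\tau^t a(s)\,\mathrm{d}s\right)$. The following sketch is our own illustration, with an arbitrarily chosen coefficient $a(t)$ that does not come from the paper; it integrates (6) by forward Euler and compares against the closed form:

```python
import math

# Scalar illustration of (6): for xdot = a(t) x, the fundamental solution
# is G(t, tau) = exp(int_tau^t a(s) ds).  The coefficient a(t) below is
# an arbitrary example, not taken from the paper.
def a(t):
    return -1.0 + 0.5 * math.sin(t)

def G_euler(t, tau, n=20000):
    """Integrate dG/dt = a(t) G, G(tau, tau) = 1 with forward Euler."""
    h = (t - tau) / n
    G, s = 1.0, tau
    for _ in range(n):
        G += h * a(s) * G
        s += h
    return G

def G_exact(t, tau, n=20000):
    """Closed form exp(int_tau^t a(s) ds), integral by the trapezoid rule."""
    h = (t - tau) / n
    integral = sum(0.5 * h * (a(tau + i * h) + a(tau + (i + 1) * h))
                   for i in range(n))
    return math.exp(integral)

err = abs(G_euler(2.0, 0.5) - G_exact(2.0, 0.5))
```

Both routes agree up to the discretization error, which is the property that the impulse response $H_t(\tau) = G(t,\tau)B(\tau)$ in (5) relies on.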
Now, the dual problem for the function $V$ can be written as
$$\begin{aligned}
V(t) \;&=\; \inf_{\lambda(\cdot) > 0} \; \max_{w(\cdot)} \; c^T \int_0^t H_t(\tau)\,w(\tau)\,\mathrm{d}\tau \;-\; \sum_{i=1}^{n_w} \int_0^t \lambda_i(\tau)\left( w_i(\tau)^2 - 1 \right) \mathrm{d}\tau \\
&=\; \inf_{\lambda(\cdot) \ge 0} \; \int_0^t \frac{c^T H_t(\tau)\,\Lambda(\tau)^{-1} H_t(\tau)^T c}{4}\, \mathrm{d}\tau \;+\; \int_0^t \mathrm{Tr}\left[\, \Lambda(\tau) \,\right] \mathrm{d}\tau.
\end{aligned}$$
Here, we use the shorthand $\Lambda(\tau) := \mathrm{diag}(\lambda(\tau)) \in D^{n_w}_{++}$ to denote the diagonal matrix valued function whose entries are the components of the multiplier function $\lambda$.
The following theorem provides a non-relaxed reformulation of the above dual problem such that the associated value function $V$ can be computed more conveniently. The proof of this theorem can be found in the Appendix of this paper:
Theorem 1. The function $V$, which is defined to be the optimal value of the optimization problem (4), can equivalently be expressed as
$$V(t) = \inf_{P(\cdot),\,\kappa(\cdot),\,R(\cdot) \in D^{n_w}_{++}} \; \sqrt{\left(1 - \kappa(t)\right) c^T P(t)\, c}
\quad \text{s.t.} \quad
\begin{cases}
\dot{P}(\tau) = A(\tau)P(\tau) + P(\tau)A(\tau)^T + \mathrm{Tr}\left[\, R(\tau) \,\right] P(\tau) + B(\tau)\,R(\tau)^{-1} B(\tau)^T \\
P(0) = 0 \\
\dot{\kappa}(\tau) = -\mathrm{Tr}\left[\, R(\tau) \,\right] \kappa(\tau) \\
\kappa(0) = 1
\end{cases} \qquad (7)$$
with $P : [0,T] \to \mathbb{R}^{n_x \times n_x}$ and $\kappa : [0,T] \to [0,1]$ being auxiliary states.
The main reason why we are interested in the above theorem is that it allows us to guarantee that the reachable states are contained, independently of the choice of $w$, within an ellipsoidal tube. Let us formulate this result in form of the following corollary:

Corollary 2. Let $R : [0,T] \to D^{n_w}_{++}$ be any given diagonal and positive matrix valued function and $P(t)$ as well as $\kappa(t)$ the associated Lyapunov states defined by (7). If we define the matrix
$$Q(t) := \left(1 - \kappa(t)\right) P(t)$$
as well as the ellipsoidal set
$$\mathcal{E}(Q(t)) := \left\{\, Q(t)^{\frac{1}{2}} v \;\middle|\; v^T v \le 1 \,\right\}, \qquad (8)$$
then we have for all times $t \in [0,T]$ the set inclusion
$$\left\{\, \int_0^t H_t(\tau)\, w(\tau)\, \mathrm{d}\tau \;\middle|\; w(\cdot) \in \mathbb{W} \,\right\} \;\subseteq\; \mathcal{E}(Q(t)).$$

Proof: This corollary is a direct consequence of Theorem 1, as this theorem holds for all directions $c \in \mathbb{R}^{n_x}$ and for all times $t$. $\square$
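To see Theorem 1 and Corollary 2 at work, consider the scalar system $\dot{x} = ax + bw$ with $|w(\tau)| \le 1$, for which the worst-case excitation is known exactly: $V(t) = \int_0^t |b|\,e^{a(t-s)}\,\mathrm{d}s$. The sketch below uses our own numbers and, for simplicity, assumes a constant scalar multiplier $R(\tau) = r$; it integrates the Lyapunov states of (7) with forward Euler and evaluates the bound $\sqrt{(1-\kappa(t))P(t)}$:

```python
import math

# Scalar check of Theorem 1 (our own example): xdot = a x + b w, |w| <= 1.
# For a constant scalar multiplier R = r > 0, (7) reduces to
#   Pdot     = 2 a P + r P + b^2 / r,  P(0) = 0,
#   kappadot = -r kappa,               kappa(0) = 1,
# and the bound is sqrt((1 - kappa(t)) P(t)).
a, b, t_end, n = -1.0, 1.0, 2.0, 20000

def bound(r):
    h = t_end / n
    P, kappa = 0.0, 1.0
    for _ in range(n):
        P, kappa = P + h * ((2 * a + r) * P + b * b / r), kappa - h * r * kappa
    return math.sqrt((1.0 - kappa) * P)

# Exact worst case: V(t) = int_0^t |b| e^{a (t-s)} ds = |b| (1 - e^{a t}) / (-a)
exact = abs(b) * (1.0 - math.exp(a * t_end)) / (-a)
```

Every admissible $r$ gives a valid upper bound; in this particular example the choice $r = 1$ reproduces the exact value up to discretization error, illustrating that the infimum in (7) can be attained and that the ellipsoid of Corollary 2 can touch the reachable set.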
Summarizing the above results, the matrix $Q(t)$ can at each time $t$ be interpreted as the coefficients of an outer ellipsoid $\mathcal{E}(Q(t))$ which contains the set of reachable states at the time $t$, under the assumption that the function $w$ is contained in $\mathbb{W}$. In addition, we know from Theorem 1 that there exists for every direction $c \in \mathbb{R}^{n_x}$ and every time $t \in [0,T]$ a function $R : [0,T] \to \mathrm{cl}\left( D^{n_w}_{++} \right)$ such that the associated outer ellipsoid $\mathcal{E}(Q(t))$ touches the set of reachable states in this given direction $c$ at time $t$.
4. A CONSERVATIVE APPROXIMATION
STRATEGY FOR NONLINEAR ROBUST OPTIMAL
CONTROL PROBLEMS
In this section, we come back to the discussion of robust counterpart problems for nonlinear dynamic systems. Here, we are interested in a conservative approximation strategy. Unfortunately, we have to require suitable assumptions on the function $f$ in order to develop such a strategy. In this paper, we propose to employ the following assumption:
Assumption 3. We assume that the right-hand side function $f$ is differentiable and that there exists for each component $f_i$ of the function $f$ an explicit nonlinearity estimate $l_i : \mathbb{R}^{n_u \times n_x} \times \mathbb{R}^{n_x \times n_x} \to \mathbb{R}_+$ with
$$\left| f_i(x, K, w) - A_i x - B_i w \right| \;\le\; l_i(K, Q) \qquad (9)$$
for all $x \in \mathcal{E}(Q)$ and for all $w$ with $\|w\|_\infty \le 1$, as well as all possible choices of $K$ and $Q \succeq 0$. Here, we have used the shorthands $A_i := \frac{\partial f_i(0,K,0)}{\partial x}$ and $B_i := \frac{\partial f_i(0,K,0)}{\partial w}$.
From a mathematical point of view, the above assumption does not add a major restriction, as we do not even require Lipschitz continuity of the Jacobian of $f$. However, in practice, it might of course be hard to find suitable functions $l_i$ which satisfy the above property. Nevertheless, once we find such an upper estimate, tractable conservative reformulations of the original non-convex min-max optimal control problem can be found. This is the aim of this section. In order to motivate how we can find such functions $l_i$, we consider a simple example:
Example 4. Let the function component $f_i$ be convex quadratic in $x$ but linear in $w$, i.e. we have
$$\left| f_i(x, K, w) - A_i x - B_i w \right| = x^T S_i(K)\, x$$
for some positive semi-definite matrix $S_i(K)$. In this case, we can employ the function
$$l_i(K, Q) := \mathrm{Tr}\left(\, S_i(K)\, Q \,\right)$$
in order to satisfy the above assumption. A less conservative choice would be
$$l_i(K, Q) := \lambda_{\max}\!\left( Q^{\frac{1}{2}}\, S_i(K)\, Q^{\frac{1}{2}} \right),$$
which would involve the computation of a maximum eigenvalue.
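The gap between the two estimates in Example 4 is easy to check numerically. The sketch below uses our own data (a diagonal $Q$, so that $Q^{1/2}$ is trivial, and an arbitrary positive definite $S$) and samples the boundary of $\mathcal{E}(Q)$ to compare $\max_{x \in \mathcal{E}(Q)} x^T S x$ with both $\lambda_{\max}(Q^{1/2} S Q^{1/2})$ and $\mathrm{Tr}(SQ)$:

```python
import math

# Example 4 numerically (our own numbers): for x in E(Q),
#   max x^T S x = lambda_max(Q^{1/2} S Q^{1/2}) <= Tr(S Q).
q1, q2 = 0.5, 0.2                       # Q = diag(q1, q2), assumed example
S = [[2.0, 0.3], [0.3, 1.0]]            # assumed positive definite S_i(K)
s1, s2 = math.sqrt(q1), math.sqrt(q2)   # Q^{1/2} = diag(s1, s2)

# M = Q^{1/2} S Q^{1/2} and its largest eigenvalue (2x2 closed form)
M = [[q1 * S[0][0], s1 * s2 * S[0][1]],
     [s1 * s2 * S[1][0], q2 * S[1][1]]]
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
lam_max = tr / 2.0 + math.sqrt((tr / 2.0) ** 2 - det)

trace_bound = S[0][0] * q1 + S[1][1] * q2   # Tr(S Q) for diagonal Q

# Sample the boundary of E(Q): x = Q^{1/2} (cos th, sin th)
def xSx(th):
    x = (s1 * math.cos(th), s2 * math.sin(th))
    return (S[0][0] * x[0] ** 2 + 2 * S[0][1] * x[0] * x[1]
            + S[1][1] * x[1] ** 2)

worst = max(xSx(2 * math.pi * k / 10000) for k in range(10000))
```

As expected, the sampled maximum matches the eigenvalue estimate, while the trace estimate is valid but more conservative; that conservatism is the price paid for the cheaper expression.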
Now, we define the matrix valued function $\tilde{B} : \mathbb{R}^{n_u \times n_x} \times \mathbb{R}^{n_x \times n_x} \to \mathbb{R}^{n_x \times (n_w + n_x)}$ as
$$\tilde{B}(K, Q) = \left( \frac{\partial f(0,K,0)}{\partial w}, \;\; \mathrm{diag}\left(\, l(K,Q) \,\right) \right). \qquad (10)$$
Theorem 5. For any $\tilde{R} : [0,T] \to D^{(n_w+n_x) \times (n_w+n_x)}_{++}$ and any $K(\cdot)$, regard the solution of the differential equation
$$\begin{aligned}
\dot{P}(\tau) \;&=\; A(K(\tau))P(\tau) + P(\tau)A(K(\tau))^T + \mathrm{Tr}\left[\, \tilde{R}(\tau) \,\right] P(\tau) + \tilde{B}(K(\tau), Q(\tau))\, \tilde{R}(\tau)^{-1}\, \tilde{B}(K(\tau), Q(\tau))^T \\
P(0) \;&=\; 0 \\
\dot{\kappa}(\tau) \;&=\; -\mathrm{Tr}\left[\, \tilde{R}(\tau) \,\right] \kappa(\tau) \\
\kappa(0) \;&=\; 1
\end{aligned}$$
with $Q(\tau) := \left[ 1 - \kappa(\tau) \right] P(\tau)$. Then for all $t \in [0,T]$ we have the conservative upper bound
$$V_i[\,t, K(\cdot)\,] \;\le\; \sqrt{\, C_i(K(t))\, Q(t)\, C_i(K(t))^T \,} \qquad (11)$$
on the worst-case functionals $V_i$ which have been defined in (2). Here, we use the notation $A(K) := \frac{\partial f(0,K,0)}{\partial x}$.
Proof: The above result is a consequence of Theorem 1 from the previous section applied to a system of the form
$$\dot{x}(\tau) = A(K(\tau))\,x(\tau) + \tilde{B}(K(\tau), Q(\tau))\, \tilde{w}(\tau), \qquad x(0) = 0. \qquad (12)$$
Note that the system (12) is equivalent to the original nonlinear system once we define the auxiliary uncertainty $\tilde{w}$ by
$$\tilde{w} := \begin{pmatrix} w \\ D(K,Q)\left( f(x,K,w) - A(K)x - B(K)w \right) \end{pmatrix}$$
with $D(K,Q) := \mathrm{diag}\left(\, l(K,Q) \,\right)^{-1}$. Here, $\tilde{w}$ summarizes both the physical uncertainties $w$ as well as the influence of the nonlinear terms. Note that due to the construction of $\tilde{w}$, we know that $\|\tilde{w}(\tau)\|_\infty \le 1$ for all $\tau \in [0,T]$. Thus, we can transfer the result from Theorem 1 in order to obtain a proof of the inequality (11). $\square$
In the next section we discuss a tutorial example in order to show how the above theorem can be applied in practice.
5. A SMALL TUTORIAL EXAMPLE
Let us demonstrate the applicability of the results in this paper by formulating a control design problem for a nonlinear inverted pendulum. The dynamic model is given by
$$\dot{x} = F(x, u, w) = \begin{pmatrix} x_2 \\ \dfrac{g}{L}\sin(x_1) + \dfrac{u}{L}\cos(x_1) + \dfrac{w}{mL^2} \end{pmatrix}. \qquad (13)$$
Here, $g$ is the gravitational constant while $m$ is the mass, $L$ the length, and $x_1$ the excitation angle of the pendulum. Note that $x_2 = \dot{x}_1$ is denoting the associated angular velocity. Moreover, $u$ is the controllable acceleration of the joint of the pendulum, which can be moved in horizontal direction. For $x = 0$, $u = 0$ and $w = 0$ the pendulum has an unstable steady state. Thus, we will need a feedback control to stabilize the inverted pendulum at this point. Note that there is an uncertain torque $w$ acting at the pendulum.
The right-hand side function $f$ for the closed-loop system takes the form
$$f(x, K, w) = \begin{pmatrix} x_2 \\ \dfrac{g}{L}\sin(x_1) + \dfrac{Kx}{L}\cos(x_1) + \dfrac{w}{mL^2} \end{pmatrix} \qquad (14)$$
where we employ the linear feedback gain $K \in \mathbb{R}^{1 \times 2}$ to be optimized. It is possible to show that the function
$$l(K, Q) = \begin{pmatrix} 0 \\ \dfrac{g}{L}\, r_1(Q) + \dfrac{r_2(Q)}{L} \sqrt{\, K Q K^T \,} \end{pmatrix} \qquad (15)$$
with
$$r_1(Q) := \sqrt{Q_{1,1}} - \sin\!\left( \sqrt{Q_{1,1}} \right) \quad \text{and} \quad r_2(Q) := 1 - \cos\!\left( \sqrt{Q_{1,1}} \right)$$
is an upper bound function satisfying the condition (9) within Assumption 3 for all $K \in \mathbb{R}^{1 \times 2}$ and all $Q \in \mathbb{R}^{2 \times 2}$ with $\sqrt{Q_{1,1}} \le \frac{\pi}{2}$. Note that the above upper estimate $l$ is locally quite tight in the sense that we have at least $l(K, Q) \in O\!\left( \|Q\|^{\frac{3}{2}} \right)$. However, other estimates are also possible.
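The two scalar bounds behind (15) can be verified by direct sampling: for $|x_1| \le \rho := \sqrt{Q_{1,1}} \le \pi/2$ one has $|\sin(x_1) - x_1| \le \rho - \sin(\rho)$ and $|1 - \cos(x_1)| \le 1 - \cos(\rho)$. A minimal sketch (our own check script):

```python
import math

# Sampling check of the scalar estimates behind (15):
#   |sin(th) - th| <= rho - sin(rho)  and  |1 - cos(th)| <= 1 - cos(rho)
# for all |th| <= rho <= pi/2.
def max_slack(rho, n=2000):
    """Largest violation of either bound over a grid on [-rho, rho]."""
    r1 = rho - math.sin(rho)          # r_1(Q) with rho = sqrt(Q_11)
    r2 = 1.0 - math.cos(rho)          # r_2(Q)
    worst = -float("inf")
    for k in range(-n, n + 1):
        th = rho * k / n
        worst = max(worst,
                    abs(math.sin(th) - th) - r1,
                    abs(1.0 - math.cos(th)) - r2)
    return worst
```

Since $\rho - \sin(\rho) \in O(\rho^3)$ and $(1 - \cos(\rho))\sqrt{KQK^T} \in O(\rho^3)$ as well, this also makes the $O(\|Q\|^{3/2})$ tightness remark above plausible.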
In the following, we assume that the uncertain torque satisfies $w \in \mathbb{W}$. We are interested in minimizing the $L_2$ norm of the feedback gain, i.e. $\int_0^T \|K(t)\|_F^2\, \mathrm{d}t$, while guaranteeing that path constraints of the form
$$-d \;\le\; x_1(t) \;\le\; d$$
are satisfied in closed-loop mode for all possible uncertainties $w \in \mathbb{W}$ and for all times $t \in [0,T]$.
Using Theorem 5 we can formulate this gain design problem as
$$\begin{aligned}
\inf_{P(\cdot),\,Q(\cdot),\,\kappa(\cdot),\,K(\cdot),\,\tilde{R}(\cdot) \in D^{3}_{++}} \;\; & \int_0^T \|K(\tau)\|_F^2\, \mathrm{d}\tau \\
\text{s.t.} \quad \text{for all } \tau \in [0,T]: \quad
& \sqrt{Q_{1,1}(\tau)} \;\le\; d \\
& \dot{P}(\tau) = A(K(\tau))P(\tau) + P(\tau)A(K(\tau))^T + \mathrm{Tr}\left[\, \tilde{R}(\tau) \,\right] P(\tau) \\
& \qquad\qquad + \tilde{B}(K(\tau), Q(\tau))\, \tilde{R}(\tau)^{-1}\, \tilde{B}(K(\tau), Q(\tau))^T \\
& Q(\tau) = P(\tau)\left[ 1 - \kappa(\tau) \right] \\
& P(0) = 0 \\
& \dot{\kappa}(\tau) = -\mathrm{Tr}\left[\, \tilde{R}(\tau) \,\right] \kappa(\tau) \\
& \kappa(0) = 1.
\end{aligned}$$
Note that the above optimization problem is a standard optimal control problem which can be solved with existing nonlinear optimal control software. Any feasible solution of this problem yields a feedback gain which guarantees that the path constraints of the form $-d \le x_1(t) \le d$ are robustly satisfied for all possible uncertainties $w \in \mathbb{W}$ when running the nonlinear system in closed-loop mode. Note that control bounds of the form $\underline{v} \le u \le \overline{v}$ could be imposed in an analogous way, as $u = Kx$ is linear in $x$.
In this paper, the software ACADO Toolkit (cf. Houska et al. [2011]) has been employed in order to solve the above optimal control problem with
$$L = 1\,\mathrm{m}, \quad m = 1\,\mathrm{kg}, \quad g = 9.81\,\tfrac{\mathrm{m}}{\mathrm{s}^2}, \quad T = 5\,\mathrm{s}, \quad \text{and} \quad d = \tfrac{\pi}{8}.$$
Figure 1 shows the state $x_1$ in a worst-case simulation of the closed-loop system using the optimized feedback gain $K$. Here, the worst-case uncertainty $w(t) = 1\,\mathrm{Nm}$ has been found by local maximization. It is guaranteed that $x_1$ satisfies the constraints of the form $-d \le x_1(t) \le d$ independently of the choice of $w$, but this theoretical result does not state how conservative the result might be. However, the constant uncertainty $w(t) = 1\,\mathrm{Nm}$ turns out to be a local maximizer of $x_1$ for which
$$\max_{t \in [0,5]} x_1(t) \;\approx\; 0.33 \;\approx\; \frac{1}{1.19} \cdot \frac{\pi}{8}$$
is satisfied. Thus, we can state that in this application the level of conservatism was less than 19 %.

Fig. 1. A closed-loop simulation of the state $x_1$ for the torque $w(t) = 1\,\mathrm{Nm}$. The dotted line at $d = \frac{\pi}{8}$ is a conservative upper bound on the worst-case excitation of $x_1$.
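For readers who want to reproduce the flavor of this experiment, the closed-loop simulation can be sketched in a few lines. Note that the optimized gain is not reported above, so the constant gain $K = (-20, -5)$ below is purely our own illustrative choice of a stabilizing gain; only the model, the torque $w(t) = 1\,\mathrm{Nm}$, and the constraint level $d = \pi/8$ are taken from the text:

```python
import math

# Closed-loop pendulum (14) under the constant worst-case torque w = 1 Nm.
# ASSUMPTION: K = (-20, -5) is our own stabilizing gain, not the paper's
# optimized gain; it merely illustrates the constraint check |x1| <= pi/8.
g, m, L = 9.81, 1.0, 1.0
K = (-20.0, -5.0)
w, T, h = 1.0, 5.0, 1e-4

x1, x2, max_x1 = 0.0, 0.0, 0.0
for _ in range(int(T / h)):
    u = K[0] * x1 + K[1] * x2                       # linear feedback u = K x
    dx1 = x2
    dx2 = (g / L) * math.sin(x1) + (u / L) * math.cos(x1) + w / (m * L ** 2)
    x1, x2 = x1 + h * dx1, x2 + h * dx2             # forward Euler step
    max_x1 = max(max_x1, abs(x1))
```

With this (non-optimized) gain the excitation settles near $1/(|K_1| - g) \approx 0.1\,\mathrm{rad}$, well inside the bound $d = \pi/8 \approx 0.39$; the optimization above instead trades the size of $\|K\|_F$ against the guaranteed margin.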
6. CONCLUSION
In this paper, we have developed a conservative approxi-
mation strategy for robust nonlinear optimal control prob-
lems. Here, the main assumption on the right-hand side
function f was that we can nd an explicit upper bound
expression l which over-estimates the nonlinear terms in
the dierential equation. The approach has been trans-
ferred to control design problems and applied to a tutorial
example explaining how the proposed strategies can be
used in practice.
REFERENCES
A. Ben-Tal and A. Nemirovski. Robust Convex Optimization. Math. Oper. Res., 23:769-805, 1998.
M. Diehl, H.G. Bock, and E. Kostina. An approximation technique for robust nonlinear optimization. Mathematical Programming, 107:213-230, 2006.
L. El-Ghaoui and H. Lebret. Robust Solutions to Least-Squares Problems with Uncertain Data Matrices. SIAM Journal on Matrix Analysis, 18:1035-1064, 1997.
C.A. Floudas and O. Stein. The Adaptive Convexification Algorithm: a Feasible Point Method for Semi-Infinite Programming. SIAM Journal on Optimization, 18(4):1187-1208, 2007.
B. Houska and M. Diehl. Robust nonlinear optimal control of dynamic systems with affine uncertainties. In Proceedings of the 48th Conference on Decision and Control, Shanghai, China, 2009.
B. Houska and M. Diehl. Nonlinear Robust Optimization of Uncertainty Affine Dynamic Systems under the L-infinity Norm. In Proceedings of the IEEE Multi-Conference on Systems and Control, Yokohama, Japan, 2010.
B. Houska, H.J. Ferreau, and M. Diehl. ACADO Toolkit - An Open Source Framework for Automatic Control and Dynamic Optimization. Optimal Control Applications and Methods, (DOI: 10.1002/oca.939), 2011. (in print).
H.T. Jongen, J.J. Rückmann, and O. Stein. Generalized semi-infinite optimization: A first order optimality condition and examples. Mathematical Programming, pages 145-158, 1998.
Z.K. Nagy and R.D. Braatz. Open-loop and closed-loop robust optimal control of batch processes using distributional and worst-case analysis. Journal of Process Control, 14:411-422, 2004.
Z.K. Nagy and R.D. Braatz. Distributional uncertainty analysis using power series and polynomial chaos expansions. Journal of Process Control, 17:229-240, 2007.
J.J. Rückmann and O. Stein. On linear and linearized generalized semi-infinite optimization problems. Ann. Oper. Res., pages 191-208, 2001.
K. Zhou, J.C. Doyle, and K. Glover. Robust and optimal control. Prentice Hall, Englewood Cliffs, NJ, 1996.
APPENDIX
In this appendix we provide a proof of Theorem 1. For this aim, we first consider the following lemma:

Lemma 6. Let $\lambda : [0,t] \to \mathbb{R}^{n_w}_{++}$ be a given positive and (Lebesgue-) integrable function, while the shorthand $\Lambda := \mathrm{diag}(\lambda) \succ 0$ denotes the associated diagonal matrix valued function. If we define the functions $r : [0,t] \to \mathbb{R}_{++}$ and $R : [0,t] \to D^{n_w}_{++}$ by
$$\forall \tau \in [0,t]: \quad r(\tau) := \frac{\mathrm{Tr}\left[\, \Lambda(\tau) \,\right]}{\alpha - \int_\tau^t \mathrm{Tr}\left[\, \Lambda(s) \,\right] \mathrm{d}s} \qquad (16)$$
with $\alpha > \int_0^t \mathrm{Tr}\left[\, \Lambda(s) \,\right] \mathrm{d}s$ being a sufficiently large constant, and
$$\forall \tau \in [0,t]: \quad R(\tau) := \frac{1}{\alpha}\, \Lambda(\tau)\, \exp\!\left( \int_\tau^t r(s)\, \mathrm{d}s \right) \qquad (17)$$
then the following statements are true:
1) The functions $r$ and $R$ are positive and integrable functions.
2) The inverse relation
$$\Lambda(\tau) = \alpha\, R(\tau)\, \exp\!\left( -\int_\tau^t \mathrm{Tr}\left[\, R(s) \,\right] \mathrm{d}s \right) \qquad (18)$$
is satisfied for all $\tau \in [0,t]$.
3) The integral over the trace of $\Lambda$ can equivalently be expressed as
$$\int_0^t \mathrm{Tr}\left[\, \Lambda(\tau) \,\right] \mathrm{d}\tau = \alpha \left( 1 - \exp\!\left( -\int_0^t \mathrm{Tr}\left[\, R(s) \,\right] \mathrm{d}s \right) \right). \qquad (19)$$
Proof: The positivity and integrability of the functions $r$ and $R$ follow immediately from their definitions (16) and (17), together with the assumption $\alpha > \int_0^t \mathrm{Tr}\left[\, \Lambda(s) \,\right] \mathrm{d}s$. Let us compute the integral
$$\int_\tau^t r(\sigma)\, \mathrm{d}\sigma \;\overset{(16)}{=}\; \int_\tau^t \frac{\mathrm{Tr}\left[\, \Lambda(\sigma) \,\right]}{\alpha - \int_\sigma^t \mathrm{Tr}\left[\, \Lambda(s) \,\right] \mathrm{d}s}\, \mathrm{d}\sigma \;=\; \log\!\left( \frac{1}{1 - \frac{1}{\alpha} \int_\tau^t \mathrm{Tr}\left[\, \Lambda(s) \,\right] \mathrm{d}s} \right) \qquad (20)$$
for all $\tau \in [0,t]$. In the next step, we solve (20) with respect to the term $\int_\tau^t \mathrm{Tr}\left[\, \Lambda(s) \,\right] \mathrm{d}s$, finding
$$\int_\tau^t \mathrm{Tr}\left[\, \Lambda(s) \,\right] \mathrm{d}s \;=\; \alpha \left( 1 - \exp\!\left( -\int_\tau^t r(s)\, \mathrm{d}s \right) \right). \qquad (21)$$
It remains to derive from the definition (16) that
$$\forall \tau \in [0,t]: \quad \mathrm{Tr}\left[\, \Lambda(\tau) \,\right] \;\overset{(16)}{=}\; r(\tau) \left( \alpha - \int_\tau^t \mathrm{Tr}\left[\, \Lambda(s) \,\right] \mathrm{d}s \right) \;\overset{(21)}{=}\; \alpha\, r(\tau)\, \exp\!\left( -\int_\tau^t r(s)\, \mathrm{d}s \right).$$
Comparing this relation with the definition (17), we recognize that we must have $r(\tau) = \mathrm{Tr}\left[\, R(\tau) \,\right]$ for all $\tau \in [0,t]$. Thus, the definition (17) implies the relation (18). Finally, we note that the equation (19) follows from (21) for $\tau = 0$, using once more that $r(\tau) = \mathrm{Tr}\left[\, R(\tau) \,\right]$. $\square$
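The identities of Lemma 6 can also be confirmed numerically. The sketch below uses our own data (the scalar case $n_w = 1$ with $\lambda(\tau) = 1 + \tau$ on $[0,1]$ and $\alpha = 3$); it builds $r$ and $R$ from (16) and (17) by trapezoidal quadrature and then checks the inverse relation (18) and the integral identity (19):

```python
import math

# Numeric check of Lemma 6 in the scalar case n_w = 1 (our own data):
# lambda(tau) = 1 + tau on [0, 1], alpha = 3 > int_0^1 lambda = 1.5.
t, N = 1.0, 4000
h = t / N
lam = [1.0 + i * h for i in range(N + 1)]
alpha = 3.0

def suffix_int(f):
    """T[k] ~ int_{k h}^{t} f(s) ds by the trapezoid rule."""
    T = [0.0] * (N + 1)
    for k in range(N - 1, -1, -1):
        T[k] = T[k + 1] + 0.5 * h * (f[k] + f[k + 1])
    return T

lam_tail = suffix_int(lam)
r = [lam[k] / (alpha - lam_tail[k]) for k in range(N + 1)]        # (16)
r_tail = suffix_int(r)
R = [lam[k] / alpha * math.exp(r_tail[k]) for k in range(N + 1)]  # (17)
R_tail = suffix_int(R)

# (18): lambda(tau) = alpha R(tau) exp(-int_tau^t R(s) ds)
err18 = max(abs(lam[k] - alpha * R[k] * math.exp(-R_tail[k]))
            for k in range(N + 1))
# (19): int_0^t lambda = alpha (1 - exp(-int_0^t R(s) ds))
err19 = abs(lam_tail[0] - alpha * (1.0 - math.exp(-R_tail[0])))
```

Both errors vanish at the rate of the quadrature, which is exactly the consistency that the variable substitution in the proof of Theorem 1 exploits.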
The main reason why the above lemma is useful is that it allows us to perform a variable substitution, i.e. we plan to replace the time-varying multiplier $\Lambda(\tau)$ in the dual problem of Section 3 by the new function $R$, employing the definitions (16) and (17).
The proof of Theorem 1
Using the definition (4) of $V(t)$, we know that there exists a sequence of diagonal and positive definite functions $(\Lambda_n(\cdot))_{n \in \mathbb{N}}$ such that
$$V(t) = \lim_{n \to \infty} \int_0^t \frac{c^T H_t(\tau)\, \Lambda_n(\tau)^{-1} H_t(\tau)^T c}{4}\, \mathrm{d}\tau + \int_0^t \mathrm{Tr}\left[\, \Lambda_n(\tau) \,\right] \mathrm{d}\tau.$$
Thus, we can also construct a sequence $(\alpha_n, R_n(\cdot))_{n \in \mathbb{N}}$ with $\alpha_n > \int_0^t \mathrm{Tr}\left[\, \Lambda_n(s) \,\right] \mathrm{d}s$ such that an application of Lemma 6 yields
$$V(t) = \lim_{n \to \infty} \int_0^t \frac{c^T H_t(\tau)\, R_n(\tau)^{-1} H_t(\tau)^T c \;\; e^{\int_\tau^t \mathrm{Tr}\left[ R_n(s) \right] \mathrm{d}s}}{4\, \alpha_n}\, \mathrm{d}\tau + \alpha_n \left( 1 - e^{-\int_0^t \mathrm{Tr}\left[ R_n(s) \right] \mathrm{d}s} \right).$$
Consequently, we must have
$$V(t) = \inf_{\alpha,\, R(\cdot) > 0} \; \frac{c^T P(t)\, c}{4\, \alpha} + \alpha \left( 1 - e^{-\int_0^t \mathrm{Tr}\left[ R(s) \right] \mathrm{d}s} \right) = \inf_{R(\cdot) > 0} \; \sqrt{\left( 1 - \kappa(t) \right) c^T P(t)\, c}. \qquad (22)$$
Here, we have used that the function
$$P(t) := \int_0^t H_t(\tau)\, R(\tau)^{-1} H_t(\tau)^T\, e^{\int_\tau^t \mathrm{Tr}\left[ R(s) \right] \mathrm{d}s}\, \mathrm{d}\tau$$
solves the Lyapunov differential equations in (7) uniquely. Thus, we obtain the statement of the theorem. $\square$