5.1 - Analysis Based On State Space


Diseño Mecatrónico

Analysis Based on State Space


MCTG1013

Marcelo Fajardo-Pruna, PhD
[email protected]
ORCID ID: 0000-0002-5348-4032
SCOPUS ID: 57195539927

Francisco Yumbla, PhD
[email protected]
ORCID ID: 0000-0003-4220-010X
SCOPUS ID: 57201852791
Introduction to State Space Model

• The state-space model is a mathematical model used in control engineering.

• It is a representation of a physical system in terms of a set of inputs, outputs, and state variables related by first-order differential equations.

• The state variables are variables whose values change over time and depend on the values given to the input variables.

• The values of the output variables depend on the values of the state and input variables. Putting a model into state-space representation is the basis for many methods in control analysis and process dynamics.
Continuous Representation of State Space Model

• The continuous-time state-space model of a Linear Time-Invariant (LTI) system can be represented as:

$$\dot{x}(t) = A\,x(t) + B\,u(t)$$
$$y(t) = C\,x(t) + D\,u(t)$$

• The first equation is the state equation and the second is the output equation.
• $x(t)$ is the state vector.
• $\dot{x}(t)$ is the derivative of the state vector.
• $u(t)$ is the input vector.
• $y(t)$ is the output vector.
• $A$ is the system matrix.
• $B$ is the input matrix and $C$ is the output matrix.
• $D$ is known as the feed-forward matrix.
Basic Terms related to State-Space Model

• State: The smallest set of variables that summarizes the history of the given system and, together with the future inputs, determines the future values of the output variables.

• State Space: The set of all possible states of the system. Each unique point in the state space represents one state of the system.

• State Variable: One of the set of system variables whose values describe the whole system at any given time.

• State Vector: The vector whose elements are the state variables.

• Stability: In general, for any state-space model, the stability of the system can be determined from the eigenvalues of the state-space matrix $A$.
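
• As a quick illustration of the last point, here is a minimal Python sketch (assuming NumPy is available; the matrix values are arbitrary) that checks continuous-time stability by testing whether every eigenvalue of A has a negative real part:

```python
import numpy as np

# Hypothetical example system matrix (not from the slides)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigenvalues = np.linalg.eigvals(A)
# Continuous-time stability: all eigenvalues in the open left half-plane
is_stable = np.all(eigenvalues.real < 0)

print("Eigenvalues of A:", eigenvalues)
print("Asymptotically stable:", is_stable)
```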
Example of State-Space Model

• Consider the example shown in the figure (a system with mass $m$, springs $k_1$ and $k_2$, and damper $b$) and derive a state-space model for it. The input is $f_a$ and the output is $y$.

• We can derive free-body equations at the two points $x$ and $y$ of the figure:

$$m\ddot{x} + k_1 x + k_2 x - k_1 y = f_a, \qquad b\dot{y} + k_1 y - k_1 x = 0$$
• In the above figure, we have three energy-storage elements (the spring $k_2$, the spring $k_1$, and the mass $m$), so we obtain three state equations. Our state variables will be $x$, $\dot{x}$, and $y$:

$$q_1 = x, \qquad q_2 = \dot{x}, \qquad q_3 = y$$
• Now, we need to find equations for the derivatives of these state variables. From the free-body equations, with input $u = f_a$:

$$\dot{q}_1 = \dot{x} = q_2$$
$$\dot{q}_2 = \ddot{x} = \frac{1}{m}\left(f_a - k_1 q_1 - k_2 q_1 + k_1 q_3\right)$$
$$\dot{q}_3 = \dot{y} = \frac{k_1}{b}\left(q_1 - q_3\right)$$
$$\dot{q}(t) = A\,q(t) + B\,u(t)$$
$$y(t) = C\,q(t) + D\,u(t)$$

$$A = \begin{bmatrix} 0 & 1 & 0 \\ -\dfrac{k_1 + k_2}{m} & 0 & \dfrac{k_1}{m} \\ \dfrac{k_1}{b} & 0 & -\dfrac{k_1}{b} \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ \dfrac{1}{m} \\ 0 \end{bmatrix}$$

$$C = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}, \qquad D = 0$$
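
• A minimal Python sketch of this example (assuming NumPy/SciPy; the parameter values for m, b, k1 and k2 are illustrative, not from the figure) that builds the matrices above and simulates the response of y to a step in f_a:

```python
import numpy as np
from scipy import signal

# Illustrative parameter values (not from the slides)
m, b, k1, k2 = 1.0, 0.5, 2.0, 1.0

A = np.array([[0.0,            1.0, 0.0],
              [-(k1 + k2) / m, 0.0, k1 / m],
              [k1 / b,         0.0, -k1 / b]])
B = np.array([[0.0], [1.0 / m], [0.0]])
C = np.array([[0.0, 0.0, 1.0]])
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t, y = signal.step(sys)   # response of y(t) to a unit step in f_a
print(y[-1])              # last simulated value of the output y
```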
Advantages of state-space analysis

1. The analysis takes the initial conditions into account.

2. It is more general than transfer-function techniques, which assume zero initial conditions.

3. Analysis of multi-input and multi-output systems is made easy by using the state-space model.

4. It gives information about controllability and observability.

5. State-space analysis applies to a wide class of dynamic systems, including time-varying and nonlinear ones.


Canonical State-Space Realizations

• For any given system, there are essentially an infinite number of possible state-space models that will give identical input/output dynamics. Thus, it is desirable to have certain standardized state-space model structures: these are the so-called canonical forms.
• Consider the system defined by

$$y^{(n)} + a_1 y^{(n-1)} + \cdots + a_{n-1}\dot{y} + a_n y = b_0 u^{(n)} + b_1 u^{(n-1)} + \cdots + b_{n-1}\dot{u} + b_n u$$

• where $u$ is the input, $y$ is the output, and $y^{(n)}$ denotes the $n$-th derivative of $y$ with respect to time. Taking the Laplace transform of both sides (with zero initial conditions) yields the transfer function:

$$\frac{Y(s)}{U(s)} = \frac{b_0 s^n + b_1 s^{n-1} + \cdots + b_{n-1} s + b_n}{s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n}$$

• Given a system with the transfer function defined above, we will define the controllable canonical and observable canonical forms.
Controllable Canonical Form

• The controllable canonical form arranges the coefficients of the transfer function
denominator across one row of the A matrix:

• The controllable canonical form is useful for the pole-placement controller design technique.
• Consider the system given by

$$\frac{Y(s)}{U(s)} = \frac{s + 3}{s^2 + 3s + 2}$$

• Obtain a state-space representation in controllable canonical form.

• By inspection, $n = 2$ (the highest exponent of $s$), therefore $a_1 = 3$, $a_2 = 2$, $b_0 = 0$, $b_1 = 1$ and $b_2 = 3$. Therefore, we can simply write the state-space model as follows:
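
• A short Python sketch of this example, assuming the standard textbook layout of the controllable canonical form (last row of A holding the negated denominator coefficients); converting back to a transfer function with SciPy confirms the realization:

```python
import numpy as np
from scipy import signal

a1, a2 = 3.0, 2.0           # denominator coefficients of (s + 3)/(s^2 + 3s + 2)
b0, b1, b2 = 0.0, 1.0, 3.0  # numerator coefficients

# Controllable canonical form (standard textbook layout, assumed here)
A = np.array([[0.0, 1.0],
              [-a2, -a1]])
B = np.array([[0.0], [1.0]])
C = np.array([[b2 - a2 * b0, b1 - a1 * b0]])   # [3, 1]
D = np.array([[b0]])

num, den = signal.ss2tf(A, B, C, D)
print(num, den)   # expected: numerator ~ [0, 1, 3], denominator ~ [1, 3, 2]
```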
Observable Canonical Form

• The observable canonical form is defined in terms of the transfer-function coefficients as follows:

• Note the relationship between the observable and controllable forms: $A_o = A_c^T$, $B_o = C_c^T$, $C_o = B_c^T$.

• Consider the system given by

$$\frac{Y(s)}{U(s)} = \frac{s + 3}{s^2 + 3s + 2}$$

• Obtain the observable canonical form state-space model.

• By inspection, $n = 2$ (the highest exponent of $s$), therefore $a_1 = 3$, $a_2 = 2$, $b_0 = 0$, $b_1 = 1$ and $b_2 = 3$. Therefore, we can simply write the observable canonical form model as follows:
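
• A short Python sketch of the transpose (duality) relationship stated above, assuming the controllable-form matrices from the previous example; both realizations give the same transfer function:

```python
import numpy as np
from scipy import signal

# Controllable canonical form of (s + 3)/(s^2 + 3s + 2), as in the previous example
Ac = np.array([[0.0, 1.0], [-2.0, -3.0]])
Bc = np.array([[0.0], [1.0]])
Cc = np.array([[3.0, 1.0]])
D  = np.array([[0.0]])

# Observable canonical form via the transpose relationship
Ao, Bo, Co = Ac.T, Cc.T, Bc.T

print(signal.ss2tf(Ao, Bo, Co, D))  # same transfer function: ([0, 1, 3], [1, 3, 2])
```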
Diagonal Canonical Form

• The diagonal canonical form is a state-space model in which the poles of the transfer function are arranged diagonally in the $A$ matrix.

• The transfer function can be rewritten by partial-fraction expansion as follows:

• Then the diagonal canonical form state-space model can be written as follows:
• Consider the system given by

• Find the diagonal canonical form state space model.

• The transfer function of the system can be rewritten with the denominator factored as follows:

• Therefore, $p_1 = 1$ and $p_2 = 2$. Trivially, the partial-fraction expansion gives $c_1 = 2$ and $c_2 = 1$, and the diagonal form model can be written as:
Time Domain Solution of Diagonal Form

• A useful result of the diagonal form model is that the state transition matrix $\Phi(t) = e^{At}$ is easily evaluated without resorting to involved calculations:

• This greatly simplifies the task of computing the analytical solution to the response to
initial conditions.
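
• A quick Python check of this property (illustrative pole values, assuming SciPy): for a diagonal A, the matrix exponential reduces to scalar exponentials on the diagonal:

```python
import numpy as np
from scipy.linalg import expm

poles = np.array([-1.0, -2.0, -5.0])    # illustrative pole locations
A = np.diag(poles)
t = 0.7

Phi_full = expm(A * t)                   # general matrix exponential
Phi_diag = np.diag(np.exp(poles * t))    # element-wise exponentials on the diagonal

print(np.allclose(Phi_full, Phi_diag))   # True
```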
Jordan Form

• The Jordan form is a variant of the diagonal canonical form in which the poles of the transfer function are still arranged along the diagonal of the $A$ matrix, grouped in Jordan blocks. Consider the case in which the denominator polynomial of the transfer function involves multiple repeated roots:

• The transfer function can be rewritten by partial-fraction expansion as follows:


• Then the Jordan canonical form state space model can be written as follows:
Discrete State Space Representation

• The state-space representation is one of the ways to model dynamical systems.

• The behavior of the position of a mechanical system driven by a DC motor can be described by the following state-space representation:

• where $x(t)$ and $u(t)$ are the state vector and the control input.
• The control input $u(t)$ is the voltage that we send to the DC motor.

• The state vector $x(t)$ is composed of:

• the armature current, $i(t)$

• the angular speed of the mechanical part, $\omega(t)$

• the position of the part, $\theta(t)$


State Space Concept

• Most systems that are computer controlled are in general considered to evolve continuously in time.

• Let us consider a system described by the following state space equations:

• where $x(t)$, $u(t)$ and $v(t)$ represent, respectively, the state, the control input and the disturbance of the system.

• These dynamics can be rewritten in the form of the previous slide by redefining the control vector as $\begin{bmatrix} u(t) \\ v(t) \end{bmatrix}$ and the control matrix as $\begin{bmatrix} B & B_1 \end{bmatrix}$.
• The solution of the state-space equation is given by:

$$x(t) = \Phi(t - t_0)\,x(t_0) + \int_{t_0}^{t} \Phi(t - \sigma)\,B\,u(\sigma)\,d\sigma$$

• where $t_0$ is the initial time and $\Phi(t)$ is the transition matrix, $\Phi(t) = \mathcal{L}^{-1}\left\{(sI - A)^{-1}\right\}$.

• Let $t_0 = kT$ and $t = (k+1)T$, where $T$ is the sampling period of the system. With a zero-order hold, the control $u(\sigma)$ is assumed constant and equal to the value taken at instant $kT$, i.e. $u(\sigma) = u(kT)$ for $kT \le \sigma < (k+1)T$. Defining $\Psi(T)$ and $W(kT)$ as:
• we obtain the following state space representation in the discrete-time domain:

• It is customary to use the notation $x(k)$ in place of $x(kT)$. Therefore, $x(k)$ means the vector $x(t)$ at time $t = kT$.

• The state space representation of a linear time invariant discrete system when the
external disturbance is equal to zero for all 𝑘 ≥ 0 is given by:
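
• A minimal Python sketch of this zero-order-hold discretization step (illustrative matrices, assuming SciPy); cont2discrete returns the discrete-time pair referred to here as Φ and Ψ:

```python
import numpy as np
from scipy import signal

# Illustrative continuous-time model (values not from the slides)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
T = 0.1   # sampling period

# Zero-order hold: Phi = e^{AT}, Psi = integral of e^{A*sigma} B over one period
Phi, Psi, Cd, Dd, _ = signal.cont2discrete((A, B, C, D), dt=T, method='zoh')
print(Phi)
print(Psi)
```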
Time Response and Its Computation

• The more general form of the discrete-time state space representation is given by:

• where $G = \Phi$, $H = \Psi$, and $C$ and $D$ are constant matrices with appropriate dimensions.

Block diagram of discrete-time linear system


Example
• Consider a system with the following dynamics for the output $y(t)$:

• $u(t)$: reference input
• $v(t)$: a unit-step disturbance
• The differential equation of this system is:

$$10\,\ddot{y}(t) + \dot{y}(t) = 10\,u(t) + v(t)$$

• By choosing

$$y(t) = x_1(t)$$
$$\dot{y}(t) = \dot{x}_1(t) = x_2(t)$$
$$\ddot{y}(t) = \dot{x}_2(t) = -0.1\,x_2(t) + u(t) + 0.1\,v(t)$$

• the state-space equations are:

$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & -0.1 \end{bmatrix}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t) + \begin{bmatrix} 0 \\ 0.1 \end{bmatrix} v(t)$$

$$y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}$$

• First of all, notice that the dynamics of this system are governed by the pole at $-0.1$, which corresponds to a time constant $\tau = 10\,\mathrm{s}$. Therefore, an appropriate choice of the sampling period is $T = 1\,\mathrm{s}$.

$$\Phi(t) = \mathcal{L}^{-1}\left\{(sI - A)^{-1}\right\} = \mathcal{L}^{-1}\left\{\frac{1}{s(s+0.1)}\begin{bmatrix} s+0.1 & 1 \\ 0 & s \end{bmatrix}\right\} = \mathcal{L}^{-1}\left\{\begin{bmatrix} \dfrac{1}{s} & \dfrac{1}{s(s+0.1)} \\ 0 & \dfrac{1}{s+0.1} \end{bmatrix}\right\} = \begin{bmatrix} 1 & 10\left(1 - e^{-0.1t}\right) \\ 0 & e^{-0.1t} \end{bmatrix}$$
• We also have:

• Therefore:

• Since 𝑣 𝑡 = 1 for 𝑡 ≥ 0, then we have:

• If 𝑇 = 1 second, then we obtain:


Canonical forms

• It is well known that a physical system may have many state-space representations. In most cases, we consider the canonical forms (controllable form, observable form and the Jordan form). The one we just presented is the controllable form. The other forms are developed here.

• For the Jordan form, remark that:

• which implies that:


• Letting now:

• which implies:

• From this we get the following state space representation:


• For the observable canonical form notice that from:

• we get:

• From this we obtain:

• By letting:
• which gives in turn:

• Finally, we get the following description for our system:

• where
• Another description can be obtained by letting:

• The computations of all the matrices for the discrete-time description can be done in a
similar way as we did for the controllable form.
Time Response and Its Computation

• Consider the state difference equation:

• Before giving the solution of this difference equation, let us show that

• In fact, the z-transform of the transition matrix, $\phi(k) = \phi(kT)$, is given by:


• Pre-multiplying both sides of this relation by $z^{-1}\phi(T)$ (recall that $G = \phi(T)$) and subtracting the result from this relation, we get:

• which can be rewritten as:

• Taking now the inverse z-transform on both sides of this relation, we get:
• Another approach can be used to show this. In fact, the Z-transform of the previous state
description gives:

• Multiplying by $(zI - G)^{-1}$, we get:

• Taking now the Z-inverse transform, we obtain:

• Notice that $G^k = \mathcal{Z}^{-1}\left[(zI - G)^{-1} z\right]$ is the transition matrix.
• Finally, we get the following expression for the solution of the difference equation:

• It is also important to note that the solution can be obtained using a recursive approach:
• $k = 0$:

• $k = 1$:

• $k = 2$:

• $k = N - 1$:

• Substituting the $(N-1)$ equations for $x((N-1)T),\ x((N-2)T), \dots, x(T)$, we get:

• For $N = k$ we get:

• The characteristic equation is:

• Recall that a discrete system is asymptotically stable if and only if the roots of the characteristic equation lie strictly inside the unit circle centered at the origin.
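
• A small Python sketch of this stability test (illustrative G), using the fact that the roots of the characteristic equation are the eigenvalues of G:

```python
import numpy as np

# Illustrative discrete-time system matrix
G = np.array([[0.5, 0.2],
              [0.0, 0.9]])

roots = np.linalg.eigvals(G)             # roots of det(zI - G) = 0
is_stable = np.all(np.abs(roots) < 1.0)  # all roots strictly inside the unit circle

print(roots, is_stable)
```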
Stability (eigenvalues)

• Given a state space description

• for a system, the transfer function is given by:

$$H(s) = C\,(sI - A)^{-1}B + D$$

• Therefore, the poles of $H(s)$ are the uncancelled eigenvalues of $A$.

• Note that the eigenvalues of $A$ appear as exponents in the solution for the state $x(t)$ (although some of them may not appear at the output due to pole-zero cancellations).
• As a result, for a given $(A, B, C, D)$ to be stable (internal stability), all eigenvalues of $A$ should be stable.

• Consider the example

• where

• The eigenvalues of $A$ are $-2$ (stable) and $3$ (unstable).

• The output is equal to the first state, which is decoupled from the second state: $y(t) = x_1(t)$.
• The transfer function of this system:

• The transfer function has only a stable pole (-2) (after the pole-zero cancellation).

• Now let's look at the states


• So the first state and the output are fine; however, $x_2(t)$ will grow unbounded. As a result:

• The transfer function $H(s) = \dfrac{1}{s+2}$ is input/output stable.

• Its state-space realization given above is unstable (an internally unstable realization of a stable transfer function).
• We can actually provide many stable state-space descriptions for the same system, one
of which is:
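
• A Python sketch of this situation using one hypothetical realization consistent with the description (the entries of A, B, C below are chosen for illustration; the slides do not list them): the transfer function reduces to 1/(s+2), yet one internal mode grows unbounded:

```python
import numpy as np
from scipy import signal

# Hypothetical realization consistent with the description (not given in the slides):
# eigenvalues -2 and 3, output equal to the first state, x1 decoupled from x2.
A = np.array([[-2.0, 0.0],
              [1.0, 3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

print(np.linalg.eigvals(A))          # [-2, 3]: internally unstable

num, den = signal.ss2tf(A, B, C, D)  # numerator ~ (s - 3), denominator ~ (s + 2)(s - 3)
print(np.roots(num.flatten()))       # zero at 3
print(np.roots(den))                 # poles at -2 and 3; the pole at 3 is cancelled by
                                     # the zero, leaving the I/O-stable H(s) = 1/(s + 2)
```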
Stability (Lyapunov)

• In this section we present another approach for the stability analysis that was developed
by Lyapunov.

• This method is powerful since it can be applied to both linear and nonlinear systems; it is referred to in the literature as the second method of Lyapunov.

• We assume that the system is unforced ($u(t) = 0,\ \forall t \ge 0$) and responds only to initial conditions.

• The method has the disadvantage that it gives only a sufficient condition and relies on the choice of a Lyapunov function, which can be difficult for nonlinear systems.
• A linear discrete-time system $x(k+1) = Gx(k)$, with $x(k)$ its solution at period $k$, is stable if there exists a scalar function $V(x(k))$, called a Lyapunov function, that satisfies the following conditions:

1. $V(x(k))$ must be positive definite.

2. The variation of $V(x(k))$ between two consecutive samples, $\Delta V(x(k)) = V(x(k+1)) - V(x(k))$, must be negative definite, i.e. it must satisfy $\Delta V(x(k)) < 0$ for all $x(k) \ne 0$.

• For the choice of the Lyapunov function $V(x(k))$, there exist several possibilities. For linear systems, we generally choose the quadratic form:

$$V(x(k)) = x^T(k)\,P\,x(k)$$

• where $P$ is a matrix of appropriate dimension.

1. In order for $V(x(k))$ to be positive definite, it is sufficient that $P$ is a symmetric and positive-definite matrix.

2. Regarding the condition on $\Delta V(x(k))$, since $x(k+1) = Gx(k)$, we have:

$$\Delta V(x(k)) = x^T(k)\left(G^T P G - P\right)x(k) = -x^T(k)\,Q\,x(k), \qquad Q = -\left(G^T P G - P\right)$$

• One way for $\Delta V(x(k))$ to be negative definite is that $Q$ is a symmetric and positive-definite matrix.
• Theorem Consider a linear time-invariant system with the following description:

$$x(k+1) = Gx(k)$$

• The equilibrium point $\tilde{x} = 0$ is asymptotically stable if and only if, for any given symmetric and positive-definite matrix $Q$, there exists a symmetric and positive-definite matrix $P$ that solves the following Lyapunov equation:

$$G^T P G - P = -Q$$

• Then $V(x(k)) = x^T(k)\,P\,x(k)$ is a Lyapunov function, and $\Delta V(x(k)) = -x^T(k)\,Q\,x(k)$.


• It is important to notice that the stability of a linear system depends only on the system itself and not on the inputs; this is reflected in $G^T P G - P = -Q$, since the matrix $G$ alone represents the system.
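
• A minimal Python sketch of this test (illustrative G, assuming SciPy): choose Q = I, solve the Lyapunov equation for P, and check that P is positive definite:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative stable discrete-time system matrix
G = np.array([[0.5, 0.1],
              [0.0, 0.8]])
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) solves a X a^T - X + q = 0,
# so passing a = G^T yields P satisfying G^T P G - P = -Q
P = solve_discrete_lyapunov(G.T, Q)

eigP = np.linalg.eigvalsh(P)
print(P)
print("P positive definite:", np.all(eigP > 0))   # True => asymptotically stable
```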
Controllability and Observability

• These two concepts play an important role in the stabilization problem of any dynamical
system.

• The controllability is in some sense related to the possibility of driving the state of the
system into a particular state, like the origin for instance, by using an appropriate control
signal in a finite time.

• The concept of observability is related to the possibility of determining, through output measurements, the state of a system, which we may then use for control purposes.
• Let us consider the following dynamics:

• with $x_k \in R^{n \times 1}$, $G \in R^{n \times n}$, $H \in R^{n \times 1}$ and $C \in R^{1 \times n}$.


Controllability

• Definition The system is state controllable if there exists a piecewise-constant control signal $u(kT)$, defined over a finite sampling interval $0 \le kT < nT$, such that, starting from any initial state, the state $x(kT)$ can be made zero for $kT \ge nT$.

• Definition If every state is controllable, then the system is said to be completely state
controllable.

• Definition A system

$$x(k+1) = Gx(k) + Hu(k), \qquad x(0) = x_0$$

• is controllable provided that there exists a sequence of inputs $u(0), u(1), \dots, u(N)$ with finite values that transfers the system from any initial state $x(0)$ to any final state $x(N)$ with $N$ finite.
• In fact, notice that
• Therefore, for given 𝑥 0 and 𝑥 𝑁 , we get:

• Since $x(N) \in R^n$, this algebraic equation will give a solution only if the rank of the matrix $\begin{bmatrix} H & GH & \cdots & G^{N-1}H \end{bmatrix}$ is equal to $n$.

• This matrix is known as the controllability matrix, defined by:

$$\mathcal{C} = \begin{bmatrix} H & GH & \cdots & G^{N-1}H \end{bmatrix}$$
• Theorem The system is completely controllable if 𝒞 is of rank n.

• Theorem The system controllability is invariant under an equivalent transformation of the


system description.
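
• A minimal Python sketch (illustrative G and H) that builds the controllability matrix and checks its rank:

```python
import numpy as np

# Illustrative discrete-time system (n = 3 states, single input)
G = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [0.0, 0.0, 0.5]])
H = np.array([[0.0], [0.0], [1.0]])
n = G.shape[0]

# Controllability matrix: [H, GH, ..., G^{n-1} H]
ctrb = np.hstack([np.linalg.matrix_power(G, k) @ H for k in range(n)])

print("rank =", np.linalg.matrix_rank(ctrb), "of", n)  # rank n => completely controllable
```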
Observability

• First of all, by using the dual system, the observability of the original system can be seen as the controllability of its dual.

• The dual system is described by the following dynamics:

• The controllability of this system implies the observability of the system and vice versa.
• Definition The system is said to be observable if every initial state $x(0)$ can be determined from the observation of the output over a finite number of sampling periods.

• Definition The system is completely observable if every state is observable.


• For simplicity, let us consider that the input is equal to zero for all 𝑘 ≥ 0. In this case, we
have:

• Notice that,
• Since $x(0) \in R^n$, this algebraic equation will have a solution only when the matrix

• has a rank equal to n.

• The observability matrix is defined by:

$$\mathcal{O} = \begin{bmatrix} C \\ CG \\ \vdots \\ CG^{n-1} \end{bmatrix}$$

• Theorem The system is completely observable if 𝒪 is of rank n.

• Theorem The system observability is invariant under an equivalent transformation of the


system description.
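
• A minimal Python sketch (illustrative G and C) that builds the observability matrix and checks its rank:

```python
import numpy as np

# Illustrative discrete-time system (n = 3 states, single output)
G = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [0.0, 0.0, 0.5]])
C = np.array([[1.0, 0.0, 0.0]])
n = G.shape[0]

# Observability matrix: rows C, CG, ..., C G^{n-1}
obsv = np.vstack([C @ np.linalg.matrix_power(G, k) for k in range(n)])

print("rank =", np.linalg.matrix_rank(obsv), "of", n)  # rank n => completely observable
```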
