SEEA1602


SCHOOL OF ELECTRICAL AND ELECTRONICS

DEPARTMENT OF ELECTRICAL AND ELECTRONICS

UNIT – I – Advanced Control Systems – SEEA1602

STATE SPACE ANALYSIS
1.1 INTRODUCTION

The state variable approach is a powerful technique for the analysis and design of
control systems. The analysis and design of the following systems can be carried out using the
state space method.

1. Linear system
2. Non-linear system
3. Time invariant system
4. Time varying system
5. Multiple input and multiple output system.

The state space analysis is a modern approach and is also easier to carry out on digital
computers. The conventional (or older) methods of analysis employ the transfer function of the
system. The drawbacks of the transfer function model and analysis are,

1. Transfer function is defined under zero initial conditions.


2. Transfer function is applicable to linear time invariant systems,
3. Transfer function analysis is restricted to single input and single output systems.
4. It does not provide information regarding the internal state of the system.

The state variable analysis can be applied to any type of system. The analysis can be
carried out with initial conditions and on multiple input and multiple output systems.
In this method of analysis, it is not necessary that the state variables represent physical
quantities of the system; variables that do not represent physical quantities, and even variables
that are neither measurable nor observable, may be chosen as state variables.

1.2 STATE SPACE FORMULATION

The state of a dynamic system is a minimal set of variables (known as state variables)
such that the knowledge of these variables at t = t0, together with the knowledge of the inputs
for t ≥ t0, completely determines the behaviour of the system for t > t0. (or) A set of variables
which describes the system at any time instant are called state variables.

In the state variable formulation of a system, in general, a system consists of m-inputs,


p-outputs and n-state variables. The state space representation of the system may be visualized
in Figure 1.1.

Let, State variables = x1(t), x2(t), x3(t), ……, xn(t)

Input variables = u1(t), u2(t), u3(t), ……, um(t)
Output variables = y1(t), y2(t), y3(t), ……, yp(t)

Figure 1.1 State space representation of system

The different variables may be represented by the vectors (column matrix) as shown
below.

STATE EQUATIONS

The state variable representation can be arranged in the form of n first
order differential equations as shown below.

ẋ1(t) = f1(x1, x2, …, xn ; u1, u2, …, um)
ẋ2(t) = f2(x1, x2, …, xn ; u1, u2, …, um)
⋮
ẋn(t) = fn(x1, x2, …, xn ; u1, u2, …, um) …1.1

The n number of differential equations may be written in vector notation as

Ẋ(t) = f(X(t), U(t)) …1.2

The set of all possible values which the input vector U(t) can have (assume) at time t
forms the input space of the system. Similarly, the set of all possible values which the output
vector Y(t) can assume at time t forms the output space of the system and the set of all possible
values which the state vector X(t) can assume at time t forms the state space of the system.

1.3 STATE MODEL OF LINEAR SYSTEM

The state model of a system consists of the state equation and output equation. The state
equation of a system is a function of state variables and inputs as defined by equation (1.2).
For linear time invariant systems the first derivatives of the state variables can be expressed as a
linear combination of state variables and inputs.

ẋ1 = a11x1 + a12x2 + … + a1nxn + b11u1 + b12u2 + … + b1mum
ẋ2 = a21x1 + a22x2 + … + a2nxn + b21u1 + b22u2 + … + b2mum
⋮
ẋn = an1x1 + an2x2 + … + annxn + bn1u1 + bn2u2 + … + bnmum …1.3

where the coefficients aij and bij are constants.

In the matrix form the above equations can be expressed as,

…1.4

The matrix equation (1.4) can also be written as, Ẋ(t) = A X(t) + B U(t) …1.5

where, X(t) = State vector of order (n × 1)

U(t) = Input vector of order (m × 1)
A = System matrix of order (n × n)
B = Input matrix of order (n × m)

Note: For convenience the input, output and state variables are denoted as u1, u2, …, y1, y2, …
and x1, x2, …; but actually they are functions of time, t.

The equation, Ẋ(t) = A X(t) + B U(t) is called the state equation of Linear Time Invariant
(LTI) system.

The outputs at any time are functions of the state variables and inputs.

∴ Output vector, Y(t) = f(X(t), U(t)) …1.6

Hence the output variables can be expressed as a linear combination of state variables
and inputs.

y1 = c11x1 + c12x2 + … + c1nxn + d11u1 + d12u2 + … + d1mum
y2 = c21x1 + c22x2 + … + c2nxn + d21u1 + d22u2 + … + d2mum
⋮
yp = cp1x1 + cp2x2 + … + cpnxn + dp1u1 + dp2u2 + … + dpmum ... 1.7

where the coefficients cij and dij are constants.

In the matrix form the above equations can be expressed as,

…1.8

The matrix equation (1.8) can also be written as, Y(t) = C X(t) + D U(t) …1.9

where, X(t) = State vector of order (n × 1)
U(t) = Input vector of order (m × 1)
Y(t) = Output vector of order (p × 1)
C = Output matrix of order (p × n)
D = Transmission matrix of order (p × m)

The equation Y(t) = C X (t) + D U(t) is called the output equation of Linear Time
Invariant (LTI) system.

The state model of a system consists of the state equation and output equation. (or) The
state equation and output equation together are called the state model of the system. Hence the
state model of a linear time invariant (LTI) system is given by the following equations.

Ẋ(t) = A X(t) + B U(t) …………. State equation.


Y(t) = C X (t) + D U(t) …………. Output equation.
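As an illustration, the state model above can be simulated numerically. The sketch below (Python; the matrices A, B, C, D and the input signal are illustrative assumptions, not values from the text) integrates the state equation with a simple forward-Euler step and then evaluates the output equation.

import numpy as np

# Illustrative matrices for X'(t) = A X(t) + B U(t), Y(t) = C X(t) + D U(t)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # system matrix (n x n)
B = np.array([[0.0], [1.0]])      # input matrix (n x m)
C = np.array([[1.0, 0.0]])        # output matrix (p x n)
D = np.array([[0.0]])             # transmission matrix (p x m)

def simulate(x0, u, t_end=5.0, dt=1e-3):
    """Forward-Euler integration of the state equation from x0."""
    x = np.asarray(x0, float).reshape(-1, 1)
    for k in range(int(t_end / dt)):
        x = x + dt * (A @ x + B @ u(k * dt))   # state equation
    return x, C @ x + D @ u(t_end)             # output equation

x_f, y_f = simulate([1.0, 0.0], u=lambda t: np.array([[1.0]]))
print(x_f.ravel(), y_f.ravel())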

1.4 STATE DIAGRAM

The pictorial representation of the state model of the system is called the state diagram. The
state diagram of the system can be either in block diagram form or in signal flow graph form.

The state diagram describes the relationships among the state variables and provides
physical interpretations of the state variables. The time domain state diagram may be obtained
directly from the differential equation governing the system and this diagram can be used for
simulation of the system in analog computers.

The s-domain state diagram can be obtained from the transfer function of the system.
The state diagram provides a direct relation between time domain and s-domain. [i.e., the time
domain equations can be directly obtained from the s-domain state diagram].

The state diagram (Block diagram and signal flow graph) of a state model is constructed
using three basic elements, Scalar, Adder and Integrator.

Scalar: The scalar is used to multiply a signal by a constant. The input signal x(t) is
multiplied by the scalar a to give the output, a x(t).

Adder: The adder is used to add two or more signals. The output of the adder is the
sum of incoming signals.

Integrator: The integrator is used to integrate the signals. They are used to integrate
the derivatives of the state variables to get the state variables. The initial conditions of the state
variables can be added by using an adder after the integrator.

The time domain and s-domain elements of block diagrams are shown in Table 1.1. The
time domain and s-domain elements of signal flow graphs are shown in Table 1.2.

Table 1.1 Elements of Block Diagram

Element Time domain s-domain

Scalar

Adder

Integrator

Table 1.2 Elements of Signal Flow Diagram

Element Time domain s-domain

Scalar

Adder

Integrator

The state model of a linear time invariant system is given by the following equations.

Ẋ(t) = A X(t) + B U(t) …………. State equation.


Y(t) = C X (t) + D U(t) …………. Output equation.

The time domain block diagram representation of the state model is shown in Figure
1.2 and the time domain signal flow graph representation of the system is shown in Figure 1.3.

Figure 1.2 Block diagram of state model

CONSTRUCTION OF TIME DOMAIN STATE DIAGRAM

In state space modelling, n first order differential equations are formed for
an nth order system. In order to integrate the n first derivatives, the state diagram requires
n integrators. Therefore the first step in constructing the state diagram is to draw the
n integrators. Mark the inputs of the integrators as the first derivatives of the state variables,
so that the outputs of the integrators are the state variables. [If initial conditions are given, then they
can be added at the outputs of the integrators using adders.]

In each state equation, the first derivative of state variable is expressed as a function of
state variables and inputs. Therefore from the knowledge of a state equation, the state variables
and inputs are multiplied by appropriate scalars and then added to get the first derivative of a
state variable. Now, the first derivative of the state variable is given as input to the
corresponding integrator. Similarly the inputs of all other integrators are obtained by considering
the state equations one by one.

Each output equation is a function of state variables and inputs. Therefore from the
knowledge of an output equation, the state variables and inputs are multiplied by appropriate
scalars and then added to get an output. A similar procedure is followed to generate all other
outputs.

1.5 STATE – SPACE REPRESENTATION USING PHYSICAL VARIABLES

In state-space modelling of systems, the choice of state variables is arbitrary. One of
the possible choices of state variables is the physical variables. The physical variables of
electrical systems are the currents and voltages in the R, L and C elements. The physical variables of
mechanical systems are displacement, velocity and acceleration. The advantages of choosing
the physical variables (or quantities) of the system as state variables are the following,

1. The state variables can be utilized for the purpose of feedback.


2. The implementation of design with state variable feedback becomes
straightforward.
3. The solution of state equation gives time variation of variables which have
direct relevance to the physical system.

The drawback in choosing the physical quantities as state variables is that the solution
of state equation may become a difficult task.

In state space modelling using physical variables, the state equations are obtained from
the differential equations governing the system. The differential equations governing a system
are obtained from a basic model of the system which is developed using the fundamental
elements of the system.

ELECTRICAL SYSTEM

The basic model of an electrical system can be obtained by using the fundamental
elements Resistor, Capacitor and Inductor. Using these elements the electrical network or
equivalent circuit of the system is drawn. Then the differential equations governing the
electrical system can be formed by writing Kirchhoff's current law equations at
various nodes in the network or Kirchhoff's voltage law equations around various closed paths in the
network. The current-voltage relations of the basic elements R, L and C are given in Table 1.3.

Table 1.3

Element    Voltage across the element    Current through the element

A minimal number of state variables is chosen for obtaining the state model of the
system. The best choice of state variables in electrical systems are the currents and voltages in the
energy storage elements. The energy storage elements are the inductance and capacitance. The
physical variables in the differential equations are replaced by state variables and the equations
are rearranged as first order differential equations. This set of first order equations constitutes
the state equation of the system.

The inputs to the system are the exciting voltage or current sources. The outputs in an
electrical system are usually the voltages or currents in energy dissipating elements. The resistance
is the energy dissipating element in an electrical network. In general the output variables can be any
voltage or current in the network.

MECHANICAL TRANSLATIONAL SYSTEM

The basic model of a mechanical translational system can be obtained by using three
basic elements: mass, spring and dash-pot. When a force is applied to a mechanical translational
system, it is opposed by opposing forces due to the mass, friction and elasticity of the system. The
forces acting on a body are governed by Newton's second law of motion.

The differential equations governing the system are obtained by writing force balance
equations at various nodes in the system. A node is a meeting point of elements. The Table 1.4
shows the force balance equations of idealized elements.

List of symbols used in mechanical translational systems are

y = Displacement, m
v = dy/dt = Velocity, m/sec

a = dv/dt = d²y/dt² = Acceleration, m/sec²
f = Applied force, N (Newton)
fm = Opposing force offered by mass of the body, N
fk = Opposing force offered by the elasticity of the body (spring), N
fb = Opposing force offered by the friction of the body (dash-pot), N
M = Mass, Kg
K = Stiffness of spring, N/m
B = Viscous friction coefficient, N/(m/sec).

Guidelines to form the state model of mechanical translational systems

1. For each node in the system one differential equation can be framed by equating the
sum of applied forces to the sum of opposing forces. Generally, the nodes are mass
elements.

Table 1.4 Force balance equations of idealized elements

Element Force balance equations

2. Assign a displacement to each node and draw a free body diagram for each node. The
free body diagram is obtained by drawing each mass or node separately and then
marking all the forces acting on it.
3. In the free body diagram, the opposing forces due to mass, spring and dash-pot
always act in a direction opposite to the applied force. The displacement, velocity and
acceleration will be in the direction of the applied force or in the direction opposite to that
of the opposing force.
4. For each free body diagram write one differential equation by equating the sum of
applied forces to the sum of opposing forces.

5. Choose a minimum number of state variables. The choices of state variables are
displacement, velocity or acceleration.
6. The physical variables in the differential equations are replaced by state variables and the
equations are rearranged as first order differential equations. This set of first order
equations constitutes the state equation of the system.
7. The inputs are the applied forces and the outputs are the displacement, velocity or
acceleration of the desired nodes.

MECHANICAL ROTATIONAL SYSTEM

The basic model of a mechanical rotational system can be obtained by using three basic
elements: moment of inertia of mass, rotational dash-pot and rotational spring. When a torque
is applied to a mechanical rotational system, it is opposed by opposing torques due to the moment
of inertia, friction and elasticity of the system. The torques acting on a body are governed by
Newton's second law of motion.

The differential equations governing the system are obtained by writing torque balance
equations at various nodes in the system. A node is a meeting point of elements. The Table 1.5
shows the torque balance equations of the idealized elements.

List of symbols used in mechanical rotational systems are

θ = Angular displacement, rad
ω = dθ/dt = Angular velocity, rad/sec
α = d²θ/dt² = Angular acceleration, rad/sec²
T = Applied torque, N-m
J = Moment of inertia, Kg-m²/rad
B = Rotational frictional coefficient, N-m/(rad/sec)
K = Stiffness of the spring, N-m/rad.

Table 1.5 Torque balance equations of idealized elements

Element Torque balance equations

Guidelines to form the state model of mechanical rotational systems

1. For each node in the system one differential equation can be framed by equating the
sum of applied torques to the sum of opposing torques. Generally the nodes are mass
elements, but in some cases a node may be without a mass element.
2. Assign an angular displacement to each node and draw a free body diagram for each
node. The free body diagram is obtained by drawing each node separately and then
drawing all the torques acting on it.
3. In the free body diagram, the opposing torques due to moment of inertia, spring and
dash-pot always act in a direction opposite to the applied torque. The angular
displacement, velocity and acceleration will be in the direction of the applied torque or in
the direction opposite to that of the opposing torque.
4. For each free body diagram write one differential equation by equating the sum of
applied torque to the sum of opposing torques.
5. Choose a minimum number of state variables. The choices of state variables are angular
displacement, velocity or acceleration.
6. The physical variables in the differential equations are replaced by state variables and the
equations are rearranged as first order differential equations. This set of first order
equations constitutes the state equation of the system.
7. The inputs are the applied torques and the outputs are the angular displacement, velocity
or acceleration of the desired nodes.

EXAMPLE 1.1

Obtain the state model of the electrical network shown in Fig 1.1.1 by choosing a minimum
number of state variables.

Figure 1.1.1 Figure 1.1.2

SOLUTION

Let us choose the currents through the inductances i1, i2 and the voltage across the capacitor
vo as state variables. The assumed directions of the currents and the polarity of the voltage are shown
in Fig 1.1.2.

[Note: The best choice of state variables in an electrical network are the currents and voltages
in the energy storage elements.]

Let the three state variables x1, x2 and x3 be related to the physical quantities as shown below.

x1 = i1 = Current through L1
x2 = i2 = Current through L2
x3 = vo = Voltage across capacitor

At node A, by Kirchhoff's current law (refer Figure 1.1.3),

…1.1.1
On substituting the state variables for the physical variables in Eqn. (1.1.1) we get,

…1.1.2

Figure 1.1.3 Figure 1.1.4

By Kirchhoff's voltage law in the closed path shown in Figure 1.1.4 we get,

..1.1.3
On substituting the state variables for physical variables in Eqn (1.1.3) we get,

Also, let u(t) = e(t) = input to the system

…1.1.4
By Kirchhoff's voltage law in the closed path shown in Figure 1.1.5 we get,

…1.1.5
On substituting the state variables for physical variables in Eqn. (1.1.5) we get,

Figure 1.1.5
…1.1.6
The equations (1.1.2), (1.1.4) and (1.1.6) are the state equations of the system. Hence
the state equations of the system are,

On arranging the state equations in the matrix form we get,

State equation ……1.1.7

Let us choose the voltage across the resistances as output variables and the output
variables are denoted by y1 and y2.

∴ y1 = i1 R1 …1.1.8
and y2 = i2 R2 …1.1.9

On substituting the state variables in equations (1.1.8) and (1.1.9) we get,

(i.e. i1 = x1 and i2 = x2)


y1 = x1R1 ; y2 = x2R2

On arranging the above equations in the matrix form we get

Output equation …1.1.10

The state equation [Eqn (1.1.7)] and output equation [Eqn (1.1.10)] together constitute
the state model of the system.

EXAMPLE 1.2

Obtain the state model of the electrical network shown in Figure 1.2.1 by choosing v1(t)
and v2(t) as state variables.

SOLUTION

Connect a voltage source at the input as shown in Figure 1.2.2. Convert the voltage
source to a current source as shown in Figure 1.2.3. At node 1, by Kirchhoff's current law we
can write (Refer Figure 1.2.4),

Figure 1.2.1 Figure 1.2.2 Figure 1.2.3 Figure 1.2.4

…1.2.1

At node 2, by Kirchhoff's current law, we can write (Refer Figure 1.2.5)

…1.2.2

Let the state variables be x1 and x2 and they are related to the physical variables as shown
below.

v1 = x1 and v2 = x2

Also, let v(t) = u = input.

Figure 1.2.5

On substituting the state variables in equation (1.2.1) and (1.2.2) we get,

…1.2.3

…1.2.4

From equation (1.2.3) we get,

…1.2.5

From equation (1.2.4) we get,

…1.2.6

The equations (1.2.5) and (1.2.6) are the state equations of the system. Hence the state
equations of the system are

On arranging the state equations in the matrix form,

…1.2.7

The output, y = v1(t) = x1

∴ The output equation is …1.2.8

The state equation [Eqn (1.2.7)] and output equation [Eqn (1.2.8)] together constitute
the state model of the system.

EXAMPLE 1.3

Construct the state model of the mechanical system shown in Figure 1.3.1.

SOLUTION

Free body diagram of M1 is shown in Figure 1.3.2

Figure 1.3.1

Figure 1.3.2
By Newton’s second law, the force balance equation at node M1 is

…1.3.1

Free body diagram of M2 is shown in Figure 1.3.3.

Figure 1.3.3
By Newton’s second law, the force balance equation at node M2 is

…1.3.2

Let us choose four state variables x1, x2, x3 and x4. Also, let the input f(t) = u. The state
variables are related to the physical variables as follows.

On substituting the state variables in equation (1.3.1) we get,

…1.3.3

On substituting the state variables in equation (1.3.2) we get,

…1.3.4

The state variable x1 = y1.

On differentiating x1 = y1 with respect to t we get,

…1.3.5

The state variable, x2 = y2.

On differentiating x2 = y2 with respect to t we get,

..1.3.6

The equations (1.3.3) to (1.3.6) are the state equations of the mechanical system. Hence the
state equations of the mechanical system are,

On arranging the state equations in the matrix form, we get,

Let the displacements y1 and y2 be the outputs of the system.

∴ y1 = x1 and y2 = x2.

The output equation in matrix form is given by,

…1.3.8

The state equation [Eqn (1.3.7)] and the output equation [Eqn (1.3.8)] together constitute
the state model of the system.

EXAMPLE 1.4

Obtain the state model of the mechanical system shown in Figure 1.4.1 by choosing a
minimum of three state variables.

Figure 1.4.1

SOLUTION

Let the three state variables be x1, x2 and x3 and they are related to the physical variables
as shown below.

Free body diagram of mass M is shown in Figure 1.4.2


Figure 1.4.2
By Newton’s second law, the force balance equation at node M is,

…1.4.1

…1.4.2

Consider the free body diagram of node 2 (the meeting point of K2 and B).

Writing force balance equation at the meeting point of K2 and B we get,

…1.4.3

The state variable, x1 = y1. On differentiating this expression with respect to t we get,

…1.4.4

The state equations are given by equations (1.4.4), (1.4.3) and (1.4.2).

On arranging the state equations in the matrix form,

…1.4.5

If the desired outputs are y1 and y2, then y1 = x1 and y2 = x2

The output equation in the matrix form is given by

…1.4.6

The state equation [Eqn (1.4.5)] and the output equation [Eqn (1.4.6)] together
constitute the state model of the system.

EXAMPLE 1.5

Determine the state model of armature controlled dc motor.

SOLUTION

The speed of a DC motor is directly proportional to the armature voltage and inversely
proportional to the flux. In an armature controlled DC motor the desired speed is obtained by varying
the armature voltage. This speed control system is an electro-mechanical control system. The
electrical system consists of the armature and the field circuit, but for analysis purposes only the
armature circuit is considered, because the field is excited by a constant voltage. The mechanical
system consists of the rotating part of the motor and the load connected to the shaft of the motor.
The armature controlled DC motor speed control system is shown in Figure 1.5.1.

Figure 1.5.1 Armature controlled DC motor

Let Ra = Armature resistance, Ω
La = Armature inductance, H
ia = Armature current, A
va = Armature voltage, V
eb = Back emf, V
Kt = Torque constant, N-m/A
T = Torque developed by motor, N-m
θ = Angular displacement of shaft, rad
ω = dθ/dt = Angular velocity of the shaft, rad/sec
J = Moment of inertia of motor and load, Kg-m²/rad
B = Frictional coefficient of motor and load, N-m/(rad/sec)
Kb = Back emf constant, V/(rad/sec).

The equivalent circuit of armature is shown in Figure 1.5.2.

By Kirchhoff's voltage law, we can write

…1.5.1

The torque of a DC motor is proportional to the product of the flux and the armature current.
Since the flux is constant in this system, the torque is proportional to ia alone.

…. 1.5.2
Figure 1.5.2 Equivalent circuit of armature

The mechanical system of the motor is shown in Figure 1.5.3. The differential equation
governing the mechanical system of motor is given by

Figure 1.5.3
…1.5.3

The back emf of a DC machine is proportional to the speed (angular velocity) of the shaft.

… 1.5.4

From Eqn (1.5.1) and (1.5.4) we get,

… 1.5.5

From Eqn (1.5.2) and (1.5.3) we get,

… 1.5.6

The equations (1.5.5) and (1.5.6) are the differential equations governing the armature
controlled dc motor.

Let us choose ia, ω and θ as state variables to model the armature controlled dc motor.
The physical variables ia, ω and θ are related to the general notation of state variables x1, x2
and x3 as shown below.

x1 = ia ; x2 = ω = dθ/dt and x3 = θ

The input to the motor is the armature voltage, va and let va = u, where u is the general
notation for input variable.

On substituting the state variables for the physical variables in equation (1.5.5) we get,

…1.5.7

On substituting the state variables for physical variables in Eqn (1.5.6) we get,

..1.5.8

The state variable x3 = θ. On differentiating x3 = θ with respect to t we get,

…1.5.9

The equation (1.5.7), (1.5.8) and (1.5.9) are the state equations of the system.

On arranging the state equations in the matrix form,

…1.5.10

Let the desired outputs be ia, ω and θ. Let us equate the desired output quantities to the
standard notations y1, y2 and y3 as shown below.

y1 = ia ; y2 = ω = dθ/dt and y3 = θ

On relating the outputs to state variables we get,

y1 = x1 ; y2 = x2 ; y3 = x3

The output equation in the matrix form is

…1.5.11

The state equation [Eqn (1.5.10)] and the output equation [Eqn (1.5.11)] together
constitute the state model of the armature controlled dc motor.

Figure 1.5.4 Block diagram representation of the state model of armature controlled
dc motor
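Reading the state equations (1.5.7)–(1.5.9) off in matrix form gives a model like the sketch below (Python). The parameter values are placeholders, and the matrix entries are an assumption consistent with the description above (armature loop, torque balance and ẋ3 = ω), not values taken from the text.

import numpy as np

# Placeholder motor parameters (illustrative only)
Ra, La = 1.0, 0.5      # armature resistance (ohm), inductance (H)
Kt, Kb = 0.05, 0.05    # torque and back-emf constants
J, Bm  = 0.02, 0.1     # inertia and friction of motor + load

# State vector [ia, w, theta], input u = va:
#   ia' = (-Ra ia - Kb w + va) / La
#   w'  = ( Kt ia - Bm w) / J
#   th' = w
A = np.array([[-Ra/La, -Kb/La, 0.0],
              [ Kt/J,  -Bm/J,  0.0],
              [ 0.0,    1.0,   0.0]])
B = np.array([[1.0/La], [0.0], [0.0]])
C = np.eye(3)          # outputs y1 = ia, y2 = w, y3 = theta
print(A)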

EXAMPLE 1.6

Determine the state model of field controlled dc motor.

SOLUTION

The speed of a DC motor is directly proportional to the armature voltage and inversely
proportional to the flux. In a field controlled DC motor the armature voltage is kept constant
and the speed is varied by varying the flux of the machine. Since the flux is directly
proportional to the field current, the flux is varied by varying the field current. This speed control
system is an electromechanical control system. The electrical system consists of the armature and
field circuits, but for analysis purposes only the field circuit is considered, because the armature is

excited by a constant voltage. The mechanical system consists of the rotating part of the motor
and the load connected to the shaft of the motor. The field controlled DC motor speed control
system is shown in Figure 1.6.1.

Figure 1.6.1 Field controlled DC motor

Let Rf = Field resistance, Ω


Lf = Field inductance, H
if = Field current, A
vf = Field voltage, V
θ = Angular displacement of the motor shaft, rad
ω = dθ/dt = Angular velocity of the motor shaft, rad/sec
T = Torque developed by motor, N-m
Ktf = Torque constant, N-m/A
J = Moment of inertia of rotor and load, Kg-m2/rad
B = Frictional coefficient of rotor and load, N-m/(rad/sec).

The equivalent circuit of field is shown in Figure 1.6.2.

By Kirchhoff's voltage law, we can write


Figure 1.6.2

…1.6.1

The torque of a DC motor is proportional to the product of the flux and the armature current.
Since the armature current is constant in this system, the torque is proportional to the flux alone, but
the flux is proportional to the field current.

…1.6.2

The mechanical system of the motor is shown in Figure 1.6.3. The differential equation
governing the mechanical system of the motor is given by

Figure 1.6.3
…1.6.3

From Eqn (1.6.2) and (1.6.3) we get,

…1.6.4

The equations (1.6.1) and (1.6.4) are the differential equations governing the field
controlled dc motor.

Let us choose if, ω and θ as state variables to model the field controlled dc motor. The
physical variables if, ω and θ are related to the general notation of state variables x1, x2 and x3
as shown below.

x1 = if ; x2 = ω = dθ/dt ; x3 = θ

The input to the system is the field voltage vf. Let vf = u, where u is the general notation
for input.

On substituting the state variables and input variables for the physical variables in Eqn
(1.6.1) we get,

…1.6.5

On substituting the state variables for the physical variables in Eqn (1.6.4) we get,

…1.6.6

The state variable x3 = θ. On differentiating x3 = θ with respect to t we get,

…1.6.7

The equations (1.6.5), (1.6.6) and (1.6.7) are the state equations of the system.

On arranging the state equations in the matrix form,

…1.6.8

Let the desired outputs be ω and θ. Let us equate the desired output quantities to the standard
notations y1 and y2 as shown below.

y1 = ω ; y2 = θ

On relating the outputs to the state variables we get,

y1 = x2 ; y2 = x3

The output equation in the matrix form is

…1.6.9

The state equation [Eqn (1.6.8)] and the output equation [Eqn (1.6.9)] together
constitute the state model of the system.

Figure 1.6.4 Block diagram representation of the state model of the field controlled dc motor

1.6 STATE SPACE REPRESENTATION USING PHASE VARIABLES

The phase variables are defined as those particular state variables which are obtained
from one of the system variables and its derivatives. Usually the variable used is the system
output and the remaining state variables are then derivatives of the output. The state model
using phase variables can be easily determined if the system model is already known in the
differential equation or transfer function form. There are three methods of modelling a system
using phase variables and they are explained in the following sections.

Method 1

Consider the following nth order linear differential equation relating the output y(t)
to the input u(t) of a system.

…1.10

By choosing the output y and its derivatives as state variables, we get,

On substituting the state variables in the differential equation governing the system
[Eqn (1.10),] we get,

The state equations of the system are

On arranging the above equations in the matrix form we get,

… 1.11

Or Ẋ = A X + B U

Here the matrix A (system matrix) has a very special form. It has all 1's in the upper
off-diagonal, its last row comprises the negatives of the coefficients of the original
differential equation, and all other elements are zero. This form of matrix A is known as the Bush
form (or) Companion form.

Also note that the B matrix has the special property that all its elements except the last
are zero. The output being y = x1, the output equation is given by,

…1.12

(or) Y = C X

The advantage in using phase variables for state space modelling is that the system state
model can be written directly by inspection from the differential equation governing the
system.
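This inspection rule is easy to mechanize. The sketch below (Python) builds the Bush/companion-form matrices directly from the coefficients; the coefficient layout assumes an equation of the form y⁽ⁿ⁾ + a1 y⁽ⁿ⁻¹⁾ + … + an y = b0 u, which is one common reading of Eqn (1.10).

import numpy as np

def bush_form(a, b0=1.0):
    # State model for y^(n) + a[0] y^(n-1) + ... + a[n-1] y = b0 u
    # with phase variables x1 = y, x2 = y', ..., xn = y^(n-1).
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)              # 1's on the upper off-diagonal
    A[-1, :] = -np.asarray(a, float)[::-1]  # last row: negated coefficients
    B = np.zeros((n, 1)); B[-1, 0] = b0     # only the last element is nonzero
    C = np.zeros((1, n)); C[0, 0] = 1.0     # output y = x1
    return A, B, C

# y''' + 9 y'' + 26 y' + 24 y = u  (an arbitrary illustration)
A, B, C = bush_form([9.0, 26.0, 24.0])
print(A); print(B.T); print(C)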

Method 2

Consider the following nth order differential equation relating the output y(t) to the
input u(t) of a system.

…1.13

Let n = m = 3

…1.14

On taking the Laplace transform of Eqn (1.14) with zero initial conditions we get,

…1.15

From Mason's gain formula, the transfer function of the system is given by

…1.16

where Pk = path gain of the kth forward path

Δ = 1 − (sum of loop gains of all individual loops)
+ (sum of gain products of all possible combinations of two non-touching loops) − …

Δk = Δ for that part of the graph which is not touching the kth forward path.

The transfer function of a system with four forward paths and with three feedback loops
(touching each other) is given by,

…1.17

On comparing equation (1.15) and (1.17) we get,

Hence for this system, represented by the transfer function of equation (1.15), a
signal flow graph can be constructed as shown in Figure 1.4. The signal flow graph is constructed
such that all Δk = 1 and all loops are touching loops.

Let us assign state variables at the output of each integrator in the signal flow graph.
Hence at the input of each integrator, the first derivative of the state variable will be available.
The state equations are formed by summing all the incoming signals to the nodes whose values
correspond to the first derivatives of the state variables.

Figure 1.4 Signal flow graph of the system represented by the equation 1.15

By summing up the incoming signals to node ẋ1 we get, (Refer Figure 1.4a)

…1.18
Figure 1.4a

By summing up the incoming signals to node ẋ2 we get, (Refer Figure 1.4b)

…1.19
Figure 1.4b

By summing up the incoming signals to node ẋ3 we get, (Refer Figure 1.4c)

…1.20
Figure 1.4c

The output equation is given by the sum of the incoming signals to the output node.

∴ y = x1 + b0 u …1.21

On arranging the state equations and the output equations in the matrix form, we get,

…1.22

…1.23

The above results can be generalized for an nth order differential equation, and the
general state model for m = n is given below.

…1.24

…1.25

Method 3

Consider the following nth order differential equation relating the output y(t) to the
input u(t) of a system.

…1.26

Let n = m = 3,

…1.27

On taking the Laplace transform of Eqn (1.27) with zero initial conditions, we get,

…1.28

…1.29

On cross multiplying Eqn (1.28) we get,

…1.30

On taking the inverse Laplace transform of Eqn (1.30) we get,

…1.31

Let the state variables be x1, x2 and x3,

where, x2 = ẋ1

and x3 = ẍ1 = ẋ2 ; ∴ ẋ3 = d³x1/dt³

On substituting the state variables in equation (1.31) we get,

The state equations are,

On cross multiplying Eqn (1.29) we get,

…1.32

On taking the inverse Laplace transform of Eqn (1.32), we get,

…1.33

On substituting the state variables in Eqn (1.33) we get,

…1.34

…1.35

The equation (1.35) is the output equation.

On arranging the state equations and output equations in the matrix form, we get,

…1.36

…1.37

The above results can be generalized for an nth order differential equation and the
general state model for m = n is given below.

…1.38

…1.39

Advantages of Phase Variables

The state space model can be formed directly by inspection from the differential
equations governing the system. The phase variables provide a link between the transfer
function design approach and the time-domain design approach.

Disadvantage of Phase Variables

The phase variables are not physical variables of the system and therefore are not
available for measurement and control purposes.

EXAMPLE 1.7

Construct a state model for a system characterized by the differential equation,

Give the block diagram representation of the state model.

SOLUTION

Let us choose y and its derivatives as state variables. The system is governed by a third
order differential equation and so the number of state variables is three.

The state variables x1, x2 and x3 are related to phase variables as follows.

The state equations are

On arranging the state equations in the matrix form we get,

Here y = output

But, y = x1

The output equation is,

The state equation and output equation together constitute the state model of the system. The
block diagram form of the state diagram of the system is shown in Figure 1.7.1.

Figure 1.7.1 Block diagram form of state diagram

EXAMPLE 1.8

The state diagram of a system is shown in Figure 1.8.1. Assign state variables and
obtain the state model of the system.

Figure 1.8.1

SOLUTION

Since there are 4 integrators in the state diagram, we can assign 4 state variables. The
state variables can be assigned at the outputs of the integrators as shown in Figure 1.8.2. Hence
at the input of each integrator, the first derivative of the state variable will be available. The state
equations are formed by summing all the incoming signals at the input of each integrator and
equating them to the corresponding first derivative of the state variable.

Figure 1.8.2

On adding the signals coming to the 1st integrator we get, (refer Figure 1.8.3).

On adding the signals coming to the 2nd integrator we get, (Refer Figure 1.8.4)

On adding the signals coming to the 3rd integrator we get, (Refer Figure 1.8.5)

On adding the signals coming to the 4th integrator we get, (Refer Figure 1.8.6)

Figure 1.8.3 Figure 1.8.4 Figure 1.8.5 Figure 1.8.6

The state equations are

The output equations are, y1 = x2 and y2 = x4.

The state equations and output equations are arranged in the matrix form as shown
below. The state equations and output equations together constitute the state model of the
system.

EXAMPLE 1.9

The state diagram of a linear system is given below. Assign state variables to obtain
the state model.

Figure 1.9.1

SOLUTION

Since there are three integrators (1/s), we can assign three state variables. The state
variables are assigned at the outputs of the integrators as shown in Figure 1.9.2. At the input of
each integrator we have the first derivative of the state variable. The state equations are formed
by summing all the signals at the input of each integrator and equating them to the corresponding
first derivative of the state variable.

Figure 1.9.2

On adding the signals coming to node-5, we get, (Refer Figure 1.9.3)

ẋ1 = x2

On adding the signals coming to node-4, we get, (Refer Figure 1.9.4)

ẋ2 = −2x2 + x3

Figure 1.9.3 Figure 1.9.4

On adding the signals coming to node-2, we get, (refer Figure 1.9.5).

Figure 1.9.5

The state equations are

The output equation is obtained by adding the signals coming to output node (refer
Figure 1.9.6)

Figure 1.9.6

The state equations and the output equation are arranged in the matrix form as shown
below.

EXAMPLE 1.10

Obtain the state model of the system whose transfer function is given as,

SOLUTION

Method 1

Given that, …1.10.1

On cross multiplying the Eqn (1.10.1) we get,

…1.10.2

On taking the inverse Laplace transform of Eqn (1.10.2) we get,

…1.10.3

Let us define state variables as follows,

On substituting the state variables in equation (1.10.3) we get,

The state equations are

The output equation is y = x1

The state model in the matrix form is,

Method 2

The signal flow graph for the above transfer function can be constructed as shown in
Figure 1.10.1 with a single forward path consisting of three integrators and with path gain 10/s³.
The graph will have three individual loops with loop gains −4/s, −2/s² and −1/s³.

Figure 1.10.1

Assign state variables at the outputs of the integrators (1/s). The state equations are
obtained by summing the incoming signals at the inputs of the integrators and equating them to
the corresponding first derivatives of the state variables. (Refer Figures 1.10.2 to 1.10.4.)

The state equations are

Figure 1.10.2
Figure 1.10.3

The output equation is, y = x1


Figure 1.10.4
The state model in the matrix form is,

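As a numerical cross-check (a sketch, not the matrix form referred to above): reading the signal flow graph as forward path 10/s³ with loops −4/s, −2/s² and −1/s³ gives T(s) = 10/(s³ + 4s² + 2s + 1), an inference from the description rather than a value stated in the text. scipy's tf2ss then returns a companion-form realization whose eigenvalues must equal the denominator roots.

from scipy import signal
import numpy as np

# Transfer function inferred from the signal flow graph description
num = [10.0]
den = [1.0, 4.0, 2.0, 1.0]          # s^3 + 4s^2 + 2s + 1 (assumed)

A, B, C, D = signal.tf2ss(num, den)  # controllable (companion) form
print(A)
print(np.sort(np.linalg.eigvals(A)))  # should match the roots below
print(np.sort(np.roots(den)))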
1.7 STATE SPACE REPRESENTATION USING CANONICAL VARIABLES

In canonical form (or normal form) of state model, the system matrix A will be a
diagonal matrix. The elements on the diagonal are the poles of the transfer function of the
system.

By partial fraction expansion, the transfer function Y(s)/U(s) of the nth order system can
be expressed as shown in Eqn (1.40).

…1.40

where C1, C2, C3, ….., Cn are residues and λ1, λ2, ….., λn are the roots of the denominator
polynomial (or poles of the system).

The equation (1.40) can be rearranged as shown below.

…1.41

The equation (1.41) can be represented by a block diagram as shown in Figure 1.5.

Figure 1.5 Block diagram of canonical state model

Assign state variables at the outputs of the integrators. The input of each integrator will be the
first derivative of the corresponding state variable. The state equations are formed by adding the
incoming signals to each integrator and equating them to the first derivative of the state variable.
The state equations are,

The output equation is,

The canonical form of state model in the matrix form is given below.

…1.41

…1.42

The advantage of canonical form is that the state equations are independent of each
other. The disadvantage is that the canonical variables are not physical variables and so they
are not available for measurement and control.
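The canonical model can be generated mechanically from a transfer function. The sketch below (Python; the example transfer function is an arbitrary illustration) uses scipy's partial-fraction routine and places the poles on the diagonal of A, ones in B, and the residues in C, matching the structure of Eqns (1.41) and (1.42).

from scipy import signal
import numpy as np

def canonical_model(num, den):
    # Diagonal canonical model for a strictly proper transfer
    # function with distinct real poles:
    #   A = diag(poles), B = column of ones, C = row of residues
    r, p, _ = signal.residue(num, den)   # partial fraction expansion
    return np.diag(p.real), np.ones((len(p), 1)), r.real.reshape(1, -1)

# 1/((s+1)(s+2)) = 1/(s+1) - 1/(s+2)
A, B, C = canonical_model([1.0], np.polymul([1, 1], [1, 2]))
print(np.diag(A), C)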

When a pole of the transfer function has multiplicity, the canonical state model will be
in a special form called the Jordan canonical form. In this form the system matrix A will have a
Jordan block of size q × q corresponding to a pole of value λ1 with multiplicity q. In the Jordan
block the diagonal elements are the poles and the elements just above the diagonal are one.

Consider a system with poles λ1, λ1, λ1, λ4, λ5, …, λn, where λ1 has a multiplicity of three.
The input matrix (B) and system matrix for this case will be as shown in Eqn (1.41a). The
system matrix is also denoted as J.

…1.41a

The transfer function of the system for this case is given by Eqn (1.40a) and the block
diagram is shown in Figure 1.5a.

….1.40a

Figure 1.5a Block diagram of Jordan canonical state model

EXAMPLE 1.11

A feedback system has a closed-loop transfer function

Construct three different state models for this system and give block diagram
representation for each state model.

SOLUTION

Model 1

A signal flow graph for the above transfer function can be constructed as shown in Figure
1.11.1 with two forward paths and two individual loops. The forward path gains are 10/s² and
40/s³. The loop gains are −4/s and −3/s².

Assign state variables at the outputs of the integrators as shown in Figure 1.11.1, so that the
input of each integrator is the first derivative of a state variable. The state equations are obtained by
summing all the incoming signals to each integrator and equating them to the corresponding first
derivative of the state variable. [Refer Figures 1.11.2 to 1.11.4]

Figure 1.11.1

The state equations are

Figure 1.11.2 Figure 1.11.3 Figure 1.11.4

The output equation is, y = x1

The state model is obtained by arranging the state equations and the output equation
in the matrix form as shown below. The block diagram representation of this state model is
shown in Figure 1.11.5.

Figure 1.11.5

Model 2

Given that,

…1.11.1

On cross multiplying the Eqn (1.11.1) we get,

…1.11.2

On taking the inverse Laplace transform of Eqn (1.11.2) we get,

…1.11.3

Let the state variables be x1, x2 and x3, where x2 = ẋ1 and x3 = ẍ1 = ẋ2.

On substituting the state variables in Eqn (1.11.3),

The state equations are

Consider the second part of transfer function,

…1.11.4

On cross multiplying Eqn (1.11.4) we get,

… 1.11.5

On taking the inverse Laplace transform of Eqn (1.11.5) we get,

Here, y = 40x1 + 10x2 is the output equation. The state model in the matrix form is
shown below. The block diagram representation of this state model is shown in Figure 1.11.6.

Figure 1.11.6

Model 3

By partial fraction expansion Y(s) / U(s) can be expressed as,

…1.11.6

The equation (1.11.6) can be rearranged as shown below

…1.11.7

The block diagram of the Eqn (1.11.7) is shown in Figure 1.11.7

Figure 1.11.7

Assign state variables at the outputs of the integrators as shown in Figure 1.11.7. At the
inputs of the integrators, the first derivatives of the state variables will be available. The state
equations are obtained by adding the incoming signals to each integrator and equating them to the
corresponding first derivative of the state variable.

The state equations are

The output equation is

The state model in the matrix form is shown below. Figure 1.11.7 is the block
diagram representation of this state model.

EXAMPLE 1.12

Determine the canonical state model of the system, whose transfer function is
T(s) = 2(s+5)/[(s+2) (s+3) (s+4)]

SOLUTION

By partial fraction expansion,

…1.12.1

The equation (1.12.1) can be rearranged as shown below.

…1.12.2

The equation (1.12.2) can be represented by the block diagram in Figure 1.12.1

Assign state variables at the outputs of the integrators as shown in Figure 1.12.1. At the
inputs of the integrators we have the first derivatives of the state variables. The state equations are
formed by adding all the incoming signals to each integrator and equating them to the corresponding
first derivative of the state variable.

The state equations are

Figure 1.12.1

The output equation is, y = 3x1 – 4x2 + x3

The state model in matrix form is given below.
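The residues and poles used above can be checked numerically; a minimal sketch (Python):

from scipy import signal
import numpy as np

# T(s) = 2(s+5) / [(s+2)(s+3)(s+4)]
num = [2.0, 10.0]
den = np.polymul([1.0, 2.0], np.polymul([1.0, 3.0], [1.0, 4.0]))

r, p, _ = signal.residue(num, den)
print(p)   # poles -2, -3, -4: the diagonal of A (order may differ)
print(r)   # residues 3, -4, 1: the output equation y = 3x1 - 4x2 + x3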

1.8 SOLUTION OF STATE EQUATIONS


SOLUTION OF HOMOGENEOUS STATE EQUATIONS
(Solution of State Equations without input or excitation)

Consider a first order differential equation with initial condition x(0) = x0,

ẋ(t) = a x(t) …1.43

On rearranging Eqn (1.43) we get, dx/x = a dt …1.44

On integrating Eqn (1.44) we get,

x = e^(at + C) = e^(at) e^C …1.45

When t = 0, from Eqn (1.45) we get, x = x(0) = e^C

Given that x(0) = x0 ; ∴ e^C = x0

On substituting the initial condition in Eqn (1.45), we get the solution of the first order
differential equation as

x = e^(at) x0. ….1.46

We know that, e^(at) = 1 + at + a²t²/2! + a³t³/3! + … …1.47

From Eqn (1.46) and (1.47) we get,

x(t) = (1 + at + a²t²/2! + a³t³/3! + …) x0 …1.48

Consider the state equations without input vector, (i.e., homogeneous state equation)

Ẋ(t) = A X(t), with initial condition X(0) …1.49

Where X(0) is the initial condition vector.

By analogy with the solution of the first order differential equation [Eqn (1.48)], the solution
of the matrix or vector equation can be assumed as shown in Eqn (1.50).

X(t) = A0 + A1t + A2t² + A3t³ + … + Ai t^i + … ...1.50

Where A0, A1, A2, …. Ai… are matrices and the elements of the matrices are constants.

On differentiating the Eqn (1.50) we get,

Ẋ(t) = A1 + 2A2t + 3A3t² + … + i Ai t^(i−1) + … …1.51

On multiplying Equation (1.50) by A, we get,

A X(t) = A A0 + A A1t + A A2t² + … + A Ai t^i + … …1.52

From Eqn (1.49), we know that Ẋ(t) = A X(t). Therefore we can equate the coefficients
of equal powers of t in equations (1.51) and (1.52) as shown below.

In the above analysis, the matrices A1, A2, A3, etc., are expressed in terms of A and A0.
Hence replace the matrices A1, A2, A3, …, Ai in the assumed solution of X(t) [i.e., Eqn (1.50)]
by the equivalent terms obtained above.

X(t) = [I + At + A²t²/2! + A³t³/3! + …] A0 …1.53

where I is the unit matrix.

It is given that, when t = 0, X(t) = X(0) = X0 …1.54

From Eqn (1.53) when t = 0, we get

X(0) = A0 …1.55

From Equations (1.54) and (1.55) we get,

A0 = X0 … 1.56

On substituting for A0 from Eqn (1.56) in Eqn (1.53) we get,

X(t) = [I + At + A²t²/2! + A³t³/3! + …] X0 …1.57

Each of the terms inside the brackets is an n × n matrix. Because of the similarity of the
entity inside the brackets with the scalar exponential e^(at), we call it a matrix exponential, which
may be written as,

e^(At) = I + At + A²t²/2! + A³t³/3! + … …1.58

Hence the solution of the state equation is

X(t) = e^(At) X0 ...1.59

The matrix e^(At) is called the state transition matrix and is denoted by φ(t). From the solution of
the state equations it is observed that the initial state X0 at t = 0 is driven to the state X(t) at time t
by the state transition matrix.
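Numerically, the state transition matrix is available as the matrix exponential; a small sketch (Python, with an assumed A and X0 used purely for illustration):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # assumed system matrix
X0 = np.array([1.0, 0.0])          # assumed initial state

for t in (0.0, 0.5, 1.0):
    phi = expm(A * t)              # phi(t) = e^(At)
    print(t, phi @ X0)             # X(t) = phi(t) X0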

SOLUTION OF NON HOMOGENEOUS STATE EQUATIONS

(Solution of state equations with input or excitation)

The state equation of nth order system is given by

Ẋ(t) = A X(t) + B U(t) …1.60

where X0 is initial condition vector.

The state equation of Eqn (1.60) can be rearranged as shown below.

Ẋ(t) − A X(t) = B U(t) …1.61

Premultiply both sides of Eqn (1.61) by e^(−At),

e^(−At) [Ẋ(t) − A X(t)] = e^(−At) B U(t) …1.62

Consider the derivative of e^(−At) X(t),

d/dt [e^(−At) X(t)] = e^(−At) Ẋ(t) − e^(−At) A X(t) = e^(−At) [Ẋ(t) − A X(t)] …1.63

On comparing equations (1.62) and (1.63) we can write,

d/dt [e^(−At) X(t)] = e^(−At) B U(t) …1.64

On integrating the equation (1.64) between limits 0 to t we get,

e^(−At) X(t) − X0 = ∫_(0)^(t) e^(−Aτ) B U(τ) dτ …1.65

where X0 = Initial condition vector = Integration constant

τ = Dummy variable substituted for t.

Premultiply both sides of Eqn (1.65) by e^(At),

X(t) = e^(At) X0 + e^(At) ∫_(0)^(t) e^(−Aτ) B U(τ) dτ …1.66

The term e^(At) is independent of the integration variable τ, and so e^(At) can be brought inside the
integral.

X(t) = e^(At) X0 + ∫_(0)^(t) e^(A(t−τ)) B U(τ) dτ …1.67

The equation (1.67) is the solution of the state equation when the initial conditions are
known at t = 0. If the initial conditions are known at t = t0, then the solution of the state equation is
given by Eqn (1.68).

X(t) = e^(A(t−t0)) X(t0) + ∫_(t0)^(t) e^(A(t−τ)) B U(τ) dτ …1.68

The state transition matrix e^(At) is denoted by the symbol φ(t), i.e., φ(t) = e^(At).

Hence, e^(A(t−t0)) can be expressed as, e^(A(t−t0)) = φ(t−t0) …1.69

and e^(A(t−τ)) can be expressed as, e^(A(t−τ)) = φ(t−τ) …1.70

The equations (1.67) and (1.68) can also be expressed as,

X(t) = φ(t) X0 + ∫_(0)^(t) φ(t−τ) B U(τ) dτ, if the initial conditions are known at t = 0 …1.71

X(t) = φ(t−t0) X(t0) + ∫_(t0)^(t) φ(t−τ) B U(τ) dτ, if the initial conditions are known at t = t0 …1.72

PROPERTIES OF STATE TRANSITION MATRIX

1. φ(0) = e^(A·0) = I (unit matrix)

2. φ⁻¹(t) = [e^(At)]⁻¹ = e^(−At) = φ(−t)

3. φ(t1 + t2) = e^(A(t1 + t2)) = φ(t1) φ(t2) = φ(t2) φ(t1)

COMPUTATION OF STATE TRANSITION MATRIX

Method 1: Computation of e^(At) using the matrix exponential.
Method 2: Computation of e^(At) using the Laplace transform.
Method 3: Computation of e^(At) by canonical transformation.
Method 4: Computation of e^(At) using Sylvester's interpolation formula (or computation
based on the Cayley-Hamilton theorem).

The computation of the state transition matrix using the matrix exponential and the Laplace
transform are presented in this section.

Computation of state transition matrix using matrix exponential

In this method, e^(At) is computed using the matrix exponential series of Eqn (1.58), which
is also given below,

e^(At) = I + At + A²t²/2! + A³t³/3! + …

where, e^(At) = State transition matrix of order n × n

A = System matrix of order n × n

I = Unit matrix of order n × n.

The disadvantage of this method is that each term of e^(At) will be an infinite series, and
the convergence of the infinite series is obtained by trial and error.
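The series can still be evaluated numerically by truncating it after enough terms; a sketch (Python, with an illustrative A), compared against scipy's matrix exponential:

import numpy as np
from scipy.linalg import expm

def expm_series(A, t, terms=40):
    # e^(At) = I + At + (At)^2/2! + ... truncated after `terms` terms
    n = A.shape[0]
    total = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ (A * t) / k    # (At)^k / k!
        total = total + term
    return total

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative matrix
print(np.max(np.abs(expm_series(A, 1.0) - expm(A))))   # ~ 0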

Computation of State Transition Matrix by Laplace Transform Method

Consider the state equation without input vector, Ẋ(t) = A X(t) …1.73

On taking the Laplace transform of equation (1.73) we get,

s X(s) − X(0) = A X(s), i.e., (sI − A) X(s) = X(0), where I is a unit matrix.

Premultiplying both sides by (sI − A)⁻¹,

X(s) = (sI − A)⁻¹ X(0)

On taking the inverse Laplace transform we get,

X(t) = L⁻¹[(sI − A)⁻¹] X(0) …1.74

On comparing Eqn (1.74) with the solution of the state equation, X(t) = e^(At) X(0), we get,

e^(At) = L⁻¹[(sI − A)⁻¹] …1.75

We know that, φ(t) = e^(At) = L⁻¹[φ(s)] ...(1.76)

where, φ(s) = (sI − A)⁻¹ and it is called the resolvent matrix.

From the system matrix A, the resolvent matrix φ(s) can be computed. By taking the
inverse Laplace transform of the resolvent matrix, the state transition matrix is computed, from
which the solution of the state equation is obtained.

The solution of the state equation is given by

X(t) = L⁻¹[φ(s)] X(0) …(1.77)

where, φ(s) = (sI − A)⁻¹

Consider the state equation with forcing function (input or excitation)

Ẋ = AX + BU …1.78

On taking the Laplace transform of Eqn (1.78), we get

s X(s) − X(0) = A X(s) + B U(s), i.e., (sI − A) X(s) = X(0) + B U(s), where I is the unit matrix. …1.79

Premultiplying Eqn (1.79) by (sI − A)⁻¹,

X(s) = (sI − A)⁻¹ X(0) + (sI − A)⁻¹ B U(s) …1.80

On taking the inverse Laplace transform of Eqn (1.80) we get,

X(t) = L⁻¹[(sI − A)⁻¹ X(0)] + L⁻¹[(sI − A)⁻¹ B U(s)] …1.81

The equation (1.81) is the solution of the state equation with a forcing function.
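The resolvent-matrix route can be reproduced symbolically; a sketch (Python/sympy, with an assumed A whose e^(At) contains e^(−t) and e^(−2t) terms):

import sympy as sp

s, t = sp.symbols('s t', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])        # assumed system matrix

phi_s = (s * sp.eye(2) - A).inv()        # resolvent matrix (sI - A)^-1
phi_t = phi_s.applyfunc(
    lambda e: sp.inverse_laplace_transform(e, s, t))
# Heaviside(t) factors may appear; they equal 1 for t > 0
sp.pprint(sp.simplify(phi_t))            # state transition matrix e^(At)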

EXAMPLE 1.13

Consider the matrix A. Compute e^(At) by two methods.

SOLUTION

Method 1

Each term in the matrix is an expansion of e^(at). The convergence of the series is obtained
by trial and error. Consider the expansions of e^(−t) and e^(−2t).

Method 2

By partial fraction expansion, φ(s) can be written as,

On taking the inverse Laplace transform of φ(s) we get φ(t), where φ(t) = e^(At).

It is observed that the results of both the methods are the same.

EXAMPLE 1.14

SOLUTION

EXAMPLE 1.15

For a system represented by the state equation Ẋ(t) = A X(t),

determine the system matrix A and the state transition matrix.

SOLUTION

The solution of the state equation is, X(t) = e^(At) X(0) …1.15.1

Premultiplying Eqn (1.15.1) by e^(−At),

…1.15.2

One of the given responses is

On substituting the response in equation (1.15.2) we get,

…1.15.3

…1.15.4

From equation (1.15.3) and (1.15.4) we can write

…1.15.5

On multiplying out equation (1.15.5) we get the following two equations.

…1.15.6

…1.15.7

The second solution of state equation is

On substituting this solution in equation (1.15.2) we get,

…1.15.8

From Eqn (1.15.4) and (1.15.8) we can write,

…1.15.9

On multiplying out equation (1.15.9) we get the following two equations,

….1.15.10

…1.15.11

On subtracting equation (1.15.10) from equation (1.15.6) we get,

…1.15.12

From Eqn (1.15.12) we get

…1.15.13

From equation (1.15.6),

…1.15.14

From Eqn (1.15.14) we get,

From Equation (1.15.11),

e^(At) is the state transition matrix.

We know that,

Where

Determinant of φ(s)

RESULT

EXAMPLE 1.16

A linear time-invariant system is characterized by the homogeneous state equation.

Compute the solution of the homogeneous equation, assuming the initial state vector.

SOLUTION

Here

The solution of the state equation is,

1.9 STATE SPACE REPRESENTATION OF DISCRETE TIME SYSTEMS

The state variable analysis techniques of continuous time systems can be extended to
discrete-time systems. The discrete form of the state space representation is quite analogous to
the continuous form.

In the state variable formulation of a discrete time system, in general, a system consists
of m-inputs, p-outputs and n-state variables. The state space representation of discrete-time
system may be visualized as shown in Figure 1.6.

Figure 1.6 State space representation of discrete time system

The different variables may be represented by the vectors (column matrix) as shown
below.

Note: The simplified notations x(k), y(k) and u(k) are used to denote x(kT), y(kT) and
u(kT) respectively. Also, for convenience the variables are denoted x1, x2, x3, …; y1, y2, y3, …;
and u1, u2, u3, ….

The state equation of a discrete time system is a set of n first order
difference equations, as illustrated in the sketch below.
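A sketch of such a model in Python (the matrices are illustrative assumptions) shows how the difference equations advance the state sample by sample:

import numpy as np

# x(k+1) = A x(k) + B u(k),  y(k) = C x(k) + D u(k)
A = np.array([[0.0, 1.0], [-0.2, 0.9]])   # illustrative matrices
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))                      # x(0)
for k in range(5):
    u = np.array([[1.0]])                 # unit-step input
    y = C @ x + D @ u                     # output equation
    print(k, x.ravel(), y.ravel())
    x = A @ x + B @ u                     # first order difference equations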

SCHOOL OF ELECTRICAL AND ELECTRONICS

DEPARTMENT OF ELECTRICAL AND ELECTRONICS

UNIT – II – Advanced Control Systems – SEEA1602

ANALYSIS AND DESIGN OF CONTROL SYSTEM IN STATE SPACE
2.1 DEFINITIONS INVOLVING MATRICES

Matrix: A matrix is an ordered array of elements which may be real numbers, complex
numbers, functions or operators. In general the array consists of m rows and n columns. When
m = n, the matrix is called square matrix. When n = 1, the matrix is called column matrix or
vector. When m = 1, the matrix is called row matrix or vector.

Diagonal matrix: It is a square matrix whose elements other than the main diagonal are
all zeros.

Unit matrix: It is a diagonal matrix whose diagonal elements are all equal to unity. The
elements other than diagonal are all zeros. It is denoted by I.

Transpose: If the rows and columns of an m × n matrix A are interchanged, then the
resulting n × m matrix is called the transpose of A. The transpose of A is denoted by Aᵀ.

Determinant: A determinant consisting of the elements of a square matrix (in the order
given in the matrix) is called the determinant of the matrix.

Symmetric matrix: A square matrix is symmetric if it is equal to its transpose, i.e.,
Aᵀ = A. If A is a square matrix, then A + Aᵀ is a symmetric matrix.

Skew-symmetric matrix: A square matrix is skew-symmetric if it is equal to the
negative of its transpose, i.e., Aᵀ = −A. If A is a square matrix, then A − Aᵀ is a skew-symmetric
matrix.

Orthogonal matrix: A matrix A is called an orthogonal matrix if it is real and satisfies
the relationship Aᵀ A = A Aᵀ = I.

Minor: If the ith row and jth column of determinant A are deleted, the remaining (n−1)
rows and columns form a determinant Mij. This determinant is called the minor of the element
aij.

Cofactor: The cofactor Cij of element aij of the matrix A is defined as Cij = (−1)^(i+j) Mij,
where Mij is the minor of aij.

Adjoint matrix: The adjoint matrix of a square matrix A is found by replacing each
element aij of matrix A by its cofactor Cij and then transposing.

Singular matrix: A square matrix is called singular if its associated determinant is


zero. If the determinant of the matrix is non zero then the matrix is non singular.

Rank of matrix: A matrix A is said to have rank r if there exists an r × r submatrix of
A which is nonsingular and all q × q submatrices are singular, where q ≥ (r+1).

Conjugate matrix: The conjugate of a matrix A is the matrix in which each element is
the complex conjugate of the corresponding element of A. The conjugate of A is denoted
by A*.

Real matrix: If all the elements of a matrix are real then the matrix is called real matrix.
A real matrix is equal to its conjugate.

2.2 EIGENVALUES AND EIGENVECTORS

A nonzero column vector X is an eigenvector of a square matrix A if there exists a
scalar λ such that AX = λX; then λ is an eigenvalue (or characteristic value) of A. An eigenvalue
may be zero, but the corresponding eigenvector may not be a zero vector.

The characteristic equation of an n × n matrix A is the nth degree polynomial equation
|λI − A| = 0, where I is the unit matrix. Solving the characteristic equation for λ gives the
eigenvalues of A. The eigenvalues may be real, complex or multiples of each other.

Once an eigenvalue is determined, it may be substituted into AX = λX, and then that
equation may be solved for the corresponding eigenvector.
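Numerically, both steps are carried out by one routine; a sketch (Python, with an arbitrary illustrative matrix):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # illustrative matrix

vals, vecs = np.linalg.eig(A)         # roots of |lambda I - A| = 0 and eigenvectors
for lam, x in zip(vals, vecs.T):
    print(lam, x, np.allclose(A @ x, lam * x))   # verify A X = lambda X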

PROPERTIES OF EIGENVALUES AND EIGENVECTORS

1. The sum of the eigenvalues of a matrix is equal to its trace, which is the sum of the
elements on its main diagonal.

2. Eigenvectors corresponding to different eigenvalues are linearly independent.

3. A matrix is singular if and only if it has a zero eigenvalue.

4. If X is an eigenvector of A corresponding to the eigenvalue λ and A is invertible,
then X is an eigenvector of A⁻¹ corresponding to its eigenvalue 1/λ.

5. If X is an eigenvector of a matrix, then KX is also an eigenvector for any nonzero
constant K. Here both X and KX correspond to the same eigenvalue.

6. A matrix and its transpose have the same eigenvalues.

7. The eigenvalues of an upper or lower triangular matrix are the elements on its main
diagonal.

8. The product of the eigenvalues (counting multiplicities) of the matrix equals the
determinant of the matrix.

9. If X is an eigenvector of A corresponding to the eigenvalue λ, then X is an eigenvector
of A − CI corresponding to the eigenvalue λ − C for any scalar C.

DETERMINATION OF EIGENVECTORS

Case 1: Distinct eigenvalues

If the eigenvalues of A are all distinct, then we have only one independent eigenvector
corresponding to any particular eigenvalue λi. The eigenvector corresponding to λi may be
obtained by taking the cofactors of the matrix (λi I − A) along any row.

Let, mi = Eigenvector corresponding to λi

Now the eigenvector mi is given by

…2.1

where Ck1, Ck2, …, Ckn are the cofactors of the matrix (λi I − A) along the kth row.

Case 2: Multiple eigenvalues

In this case the eigenvectors corresponding to the distinct eigenvalues are evaluated as
mentioned in case (i).

If the matrix has a repeated eigenvalue with multiplicity q, then there exists only one
independent eigenvector corresponding to that repeated eigenvalue. If λi is a repeated
eigenvalue, then the independent eigenvector corresponding to λi can be evaluated by taking the
cofactors of the matrix (λi I - A) along any row as mentioned in case (i). The remaining (q-1)
eigenvectors can be obtained as shown in Eqn (2.2).

Let, mp = pth eigenvector corresponding to the repeated eigenvalue λi.

mp = [1/(p-1)!] d(p-1)/dλ(p-1) [Ck1 Ck2 … Ckn]T, evaluated at λ = λi …2.2

where Ck1, Ck2, Ck3, …, Ckn are the cofactors of the matrix (λI - A) along the kth row.

2.3 SIMILARITY TRANSFORMATION

The square matrices A and B are said to be similar if a non singular matrix P exists such
that

P-1 AP = B …2.3

The process of transformation is called similarity transformation and it is a linear
transformation. The matrix P is called the transformation matrix. Also, the matrix A can be obtained
from B by a similarity transformation with the transformation matrix P-1,

i.e., A = P B P-1 …2.4

The similarity transformation can be used for diagonalization of a square matrix. If an
n x n matrix has n linearly independent eigenvectors (i.e., when the eigenvalues are distinct) then it can
be diagonalized by a similarity transformation. If a matrix has multiple eigenvalues then it will
not have a complete set of n linearly independent eigenvectors and so it cannot be diagonalized.
However such a matrix can be transformed into a Jordan matrix (Jordan canonical form).
However such a matrix can be transformed into a Jordan matrix (Jordan canonical form).

The transformation matrix for diagonalization or for converting to Jordan form can be
obtained from the eigenvectors. For a system with n state variables we can find n
eigenvectors m1, m2, m3, ……., mn. The eigenvectors are column vectors of order (n x 1). The
transformation matrix is obtained by arranging the eigenvectors columnwise as shown in Eqn
(2.5). This transformation matrix is also called the Modal matrix and is denoted by M.

Modal matrix, M = [ m1 m2 m3 ………. mn ] …2.5

The similarity transformation will not alter certain properties of the matrix. A property
of a matrix is said to be invariant if it is possessed by all similar matrices. The determinant,
characteristic equation and trace of a matrix are invariant under a similarity transformation.
Since the characteristic equation is invariant, the eigenvalues are also invariant under a linear
or similarity transformation.
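The invariance of these quantities is easy to check numerically; the following minimal sketch uses arbitrary illustrative matrices A and P (not taken from the text):

import numpy as np

# Verify that trace, determinant and eigenvalues are unchanged under the
# similarity transformation B = P^-1 A P for a non-singular P.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = np.array([[1.0, 1.0], [0.0, 2.0]])
B = np.linalg.inv(P) @ A @ P

print(np.isclose(np.trace(A), np.trace(B)))             # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))   # True
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(B))))       # True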

PROOF FOR INVARIANCE OF DETERMINANT

Let A and B are similar matrices and P be the transformation matrix which transforms
A to B by a similarity transformation, P -1 AP = B.

∴ B = P-1 AP …2.6

On taking the determinant of Eqn (2.6) we get,

|B| = |P-1 AP| …2.7

Since the determinant of a product of two or more square matrices is equal to the
product of their individual determinants, Eqn (2.7) can be written as,

|B| = |P-1| |A| |P| = |P-1| |P| |A| = |P-1P| |A| = |I| |A| = |A|

From the above analysis it is evident that the determinant of a matrix is invariant under
a similarity transformation.

PROOF FOR INVARIANCE OF CHARACTERISTIC EQUATION AND EIGENVALUES

Let A and B are similar matrices and P be the transformation matrix which transforms
A to B by a similarity transformation, P -1 AP = B.

The characteristic equation of matrix B is given by

|λI - B| = 0 …2.8

On substituting B = P-1 AP in Eqn (2.8) we get,

|λI - P-1AP| = 0 …2.9

Since the determinant of a product is the product of the determinants, Eqn (2.9) can
be written as,

|λI - P-1AP| = |P-1(λI - A)P| = |P-1| |λI - A| |P| = |λI - A| = 0

From the above analysis it is clear that the characteristic equations of A and B are
identical. Since the characteristic equations are identical, the eigenvalues of A and B are
identical. Hence the eigenvalues are invariant under a similarity (linear) transformation.

PROOF FOR INVARIANCE OF TRACE OF A MATRIX

Let A and B are similar matrices and P be the transformation matrix which transforms
A to B by a similarity transformation, P -1 AP = B.

∴ tr B = tr (P-1 AP) …2.10

For an n x m matrix C and an m x n matrix D, regardless of whether CD = DC or
CD ≠ DC, we have,

tr (CD) = tr (DC) …2.11

Using the property of Eqn (2.11), Eqn (2.10) can be written as,

tr B = tr [P-1(AP)] = tr [(AP)P-1] = tr [A(PP-1)] = tr A

From the above analysis it is clear that the trace of a matrix is invariant under a
similarity transformation.

2.4 CAYLEY – HAMILTON THEOREM

The Cayley-Hamilton theorem states that every square matrix satisfies its own
characteristic equation.

Consider an n x n matrix A and its characteristic equation [Eqn (2.12)].

q(λ) = |λI - A| = λⁿ + a₁λⁿ⁻¹ + a₂λⁿ⁻² + … + aₙ₋₁λ + aₙ = 0 …2.12

By the Cayley-Hamilton theorem, the matrix A has to satisfy its characteristic equation,
hence Eqn (2.12) can be written as,

q(A) = Aⁿ + a₁Aⁿ⁻¹ + a₂Aⁿ⁻² + … + aₙ₋₁A + aₙI = 0 …2.13

PROOF OF CAYLEY-HAMILTON THEOREM

Let A be a square matrix. The characteristic equation of A is given by

q(λ) = |λI - A| = λⁿ + a₁λⁿ⁻¹ + … + aₙ₋₁λ + aₙ = 0 …2.14

We have to prove that A satisfies the characteristic equation,

q(A) = Aⁿ + a₁Aⁿ⁻¹ + … + aₙ₋₁A + aₙI = 0 …2.15

where I is the unit matrix of order (n x n).

Consider the matrix (λI - A). Let the matrix B be the adjoint of (λI - A).

B = adj (λI - A) …2.16

The elements of adj (λI - A) are the cofactors of the elements of (λI - A). Therefore each
element of B will be a polynomial in λ of degree (n-1) or less. We know that every matrix
whose elements are ordinary polynomials can be written as a matrix polynomial. Hence the
matrix B can be written as a matrix polynomial as shown in Eqn (2.17).

B = Bₙ₋₁λⁿ⁻¹ + Bₙ₋₂λⁿ⁻² + … + B₁λ + B₀ …2.17

where B₀, B₁, …, Bₙ₋₁ are (n x n) constant matrices.

From equations (2.16) and (2.17) we get,

adj (λI - A) = Bₙ₋₁λⁿ⁻¹ + Bₙ₋₂λⁿ⁻² + … + B₁λ + B₀ …2.18

We know that, (λI - A)(λI - A)⁻¹ = I

But, (λI - A)⁻¹ = adj (λI - A) / |λI - A|, so that (λI - A) adj (λI - A) = |λI - A| I = q(λ) I

Using equations (2.12) and (2.18), this relation can be written as,

(λI - A)(Bₙ₋₁λⁿ⁻¹ + Bₙ₋₂λⁿ⁻² + … + B₁λ + B₀) = (λⁿ + a₁λⁿ⁻¹ + … + aₙ₋₁λ + aₙ) I …2.19

On equating the coefficients of like powers of λ in Eqn (2.19) we get the following (n+1)
equations:

Bₙ₋₁ = I
Bₙ₋₂ - ABₙ₋₁ = a₁I
Bₙ₋₃ - ABₙ₋₂ = a₂I
⋮
B₀ - AB₁ = aₙ₋₁I
-AB₀ = aₙI

On premultiplying both sides of these (n+1) equations by Aⁿ, Aⁿ⁻¹, Aⁿ⁻², …, A², A and I
respectively and adding, all the left hand side terms cancel and become zero, so that

0 = Aⁿ + a₁Aⁿ⁻¹ + a₂Aⁿ⁻² + … + aₙ₋₁A + aₙI …2.20

The Eqn (2.20) shows that the matrix A satisfies its characteristic equation. Thus the
Cayley-Hamilton theorem is proved.

COMPUTATION OF THE FUNCTION OF A MATRIX USING THE CAYLEY-HAMILTON THEOREM

The Cayley-Hamilton theorem provides a simple procedure for evaluating the function
of a matrix. Consider a matrix A of order (n x n) with eigenvalues λ1, λ2, λ3, …, λn. The
characteristic polynomial q(λ) of matrix A will be as shown in Eqn (2.21).

q(λ) = |λI - A| = (λ - λ1)(λ - λ2) … (λ - λn) …2.21

Let f(A) be a function of the matrix A, where f(A) can be expressed as a matrix polynomial.
Let f(λ) be the scalar polynomial obtained from f(A) after substituting A by λ.

On dividing f(λ) by q(λ), we get

f(λ)/q(λ) = Q(λ) + R(λ)/q(λ) …2.22

where Q(λ) = quotient polynomial
and R(λ) = remainder polynomial

∴ f(λ) = Q(λ) q(λ) + R(λ) …2.23

If we evaluate Eqn (2.23) at the eigenvalues λ1, λ2, λ3, …, λn, then q(λi) = 0 and we have,

f(λi) = R(λi) …2.24

where i = 1, 2, 3, …, n

The remainder polynomial R(λ) will be of the form shown in Eqn (2.25) below.

R(λ) = α₀ + α₁λ + α₂λ² + … + αₙ₋₁λⁿ⁻¹ …2.25

where α₀, α₁, α₂, ….., αₙ₋₁ are constants.

From equations (2.24) and (2.25), when λ = λi we get,

f(λi) = α₀ + α₁λi + α₂λi² + … + αₙ₋₁λiⁿ⁻¹ …2.26

where i = 1, 2, 3, …, n

On substituting the n eigenvalues in Eqn (2.26), one by one, we get n equations. These
equations can be solved to find the constants α₀, α₁, …, αₙ₋₁. On replacing λ by A in Eqn (2.23)
we get,

f(A) = Q(A) q(A) + R(A) …2.27

The Cayley-Hamilton theorem says that every square matrix satisfies its characteristic
equation, so q(A) = 0. Therefore Eqn (2.27) reduces to,

f(A) = R(A) …2.28

From Eqn (2.25), with λ replaced by A, we get,

R(A) = α₀I + α₁A + α₂A² + … + αₙ₋₁Aⁿ⁻¹ …2.29

From equations (2.28) and (2.29) we get,

f(A) = α₀I + α₁A + α₂A² + … + αₙ₋₁Aⁿ⁻¹ …2.30

The Eqn (2.30) can be used to evaluate the function f(A). On substituting the
eigenvalues in Eqn (2.26) we get n-number of linear equations. The constants α0, α1, α2, α3, ….
αn-1 are obtained by solving these n-number of linear equations.

The Eqn (2.26) can be used to form n independent equations only when all
the eigenvalues are distinct. If the matrix A has a multiple eigenvalue with multiplicity m, then
only one independent equation can be obtained by substituting the multiple eigenvalue in Eqn
(2.26). The remaining (m-1) equations are obtained by differentiating Eqn (2.26) after replacing
λi by λ and then evaluating at λ = λp, where λp is the multiple eigenvalue, as shown in Eqn
(2.31). [The equations corresponding to the distinct eigenvalues are obtained by substituting those
eigenvalues in Eqn (2.26)].

[dʲ/dλʲ f(λ)] at λ = λp = [dʲ/dλʲ R(λ)] at λ = λp …2.31

where j = 1, 2, 3, …, (m-1)

The equation (2.30) can also be used to compute the state transition matrix eᴬᵗ of a
continuous time system by taking f(A) = eᴬᵗ, and the state transition matrix Aᵏ of a discrete
time system by taking f(A) = Aᵏ.

Note: In order to evaluate f(A) = eᴬᵗ when the eigenvalues are distinct, the equations (2.26)
and (2.30) can also be obtained by using Sylvester's interpolation formula given below.
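A minimal numerical sketch of the distinct-eigenvalue procedure of Eqns (2.26) and (2.30): the constants α are found by solving a small Vandermonde system, and f(A) is then assembled from powers of A. The matrix A and the time t below are arbitrary illustrative choices.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative matrix, eigenvalues -1, -2
t = 0.5
lam = np.linalg.eigvals(A)                  # distinct eigenvalues
V = np.vander(lam, increasing=True)         # rows [1, lam_i, lam_i^2, ...]
alphas = np.linalg.solve(V, np.exp(lam * t))   # solve Eqn (2.26) for the alphas

fA = sum(a * np.linalg.matrix_power(A, k) for k, a in enumerate(alphas))
print(np.allclose(fA, expm(A * t)))         # True: matches e^(At)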

EXAMPLE 2.1

Find A⁷ for the given matrix A.

SOLUTION

The characteristic equation is given by

The eigenvalues λ1, λ2 are the roots of the characteristic equation.

∴ λ1 = -2, λ2 = -3

Given that, f(A) = A⁷, ∴ f(λ) = λ⁷

f(λ1) = λ1⁷ …2.1.1

f(λ2) = λ2⁷ …2.1.2

f(λi) = R(λi) = α₀ + α₁λi …2.1.3

From Eqn (2.1.1) and (2.1.3), when λi = λ1 = -2, we get,

(-2)⁷ = α₀ - 2α₁, i.e., α₀ - 2α₁ = -128 … 2.1.4

From Eqn (2.1.2) and (2.1.3), when λi = λ2 = -3, we get,

(-3)⁷ = α₀ - 3α₁, i.e., α₁ = (α₀ + 2187)/3 …2.1.5

On substituting for α₁ from Eqn (2.1.5) in Eqn (2.1.4) we get, α₀ = 3990.

On substituting the value of α₀ in Eqn (2.1.5) we get, α₁ = 2059.

∴ A⁷ = f(A) = α₀I + α₁A = 3990 I + 2059 A
ALTERNATE METHOD

EXAMPLE 2.2

For the given matrix A, compute the state transition matrix eᴬᵗ using the Cayley-Hamilton
theorem.

SOLUTION

Given that,

The characteristic equation is given by

The eigenvalues λ1, λ2 are the roots of the characteristic equation. Here λ1 = -1 and λ2 = -2.

f(λ1) = e^(λ1 t) …2.2.1
f(λ2) = e^(λ2 t) …2.2.2

f(λi) = R(λi) = α₀ + α₁λi …2.2.3

From Eqn (2.2.1) and (2.2.3), when λi = λ1 = -1, we get,

e⁻ᵗ = α₀ - α₁ …2.2.4

From Eqn (2.2.2) and (2.2.3), when λi = λ2 = -2, we get,

e⁻²ᵗ = α₀ - 2α₁, i.e., α₁ = (α₀ - e⁻²ᵗ)/2 …2.2.5

On substituting for α₁ from Eqn (2.2.5) in Eqn (2.2.4) we get, α₀ = 2e⁻ᵗ - e⁻²ᵗ.

On substituting the value of α₀ in Eqn (2.2.5) we get, α₁ = e⁻ᵗ - e⁻²ᵗ.

By the Cayley-Hamilton theorem, eᴬᵗ = α₀I + α₁A.

∴ State transition matrix, eᴬᵗ = (2e⁻ᵗ - e⁻²ᵗ) I + (e⁻ᵗ - e⁻²ᵗ) A

EXAMPLE 2.3

The system matrix A of a discrete time system is given by A =

Compute the state transition matrix Aᵏ using the Cayley-Hamilton theorem.

SOLUTION

The characteristic equation is given by

The eigenvalues λ1, λ2 are the roots of the characteristic equation. Here λ1 = -1 and λ2 = -2.

f(λ1) = λ1ᵏ …2.3.1

f(λ2) = λ2ᵏ …2.3.2

f(λi) = R(λi) = α₀ + α₁λi …2.3.3

From Eqn (2.3.1) and (2.3.3), when λi = λ1 = -1, we get,

(-1)ᵏ = α₀ - α₁ …2.3.4

From Eqn (2.3.2) and (2.3.3), when λi = λ2 = -2, we get,

(-2)ᵏ = α₀ - 2α₁, i.e., α₁ = [α₀ - (-2)ᵏ]/2 …2.3.5

On substituting for α₁ from Eqn (2.3.5) in Eqn (2.3.4) we get, α₀ = 2(-1)ᵏ - (-2)ᵏ.

On substituting the value of α₀ in Eqn (2.3.5) we get, α₁ = (-1)ᵏ - (-2)ᵏ.

By the Cayley-Hamilton theorem,

Aᵏ = α₀I + α₁A = [2(-1)ᵏ - (-2)ᵏ] I + [(-1)ᵏ - (-2)ᵏ] A

2.5 TRANSFORMATION OF STATE MODEL

The state model of a system is not unique and it can be formed using physical variables,
phase variables or canonical variables. The physical variables are useful from the application point
of view because they can be measured and used for control purposes. However, the state model
using physical variables is not convenient for investigation of system properties and evaluation
of time response. The canonical state model is the most convenient for time domain analysis.
In the canonical model the system matrix A will be a diagonal matrix. Therefore each component
state variable equation is a first order equation and is decoupled from all other component state
variable equations.

When a non-diagonal system matrix A has distinct eigenvalues, it can be converted to a
diagonal matrix by a similarity transformation using the modal matrix M. Due to this the state
model is transformed to canonical form.

When a non-diagonal system matrix has multiple eigenvalues, it can be converted to a
Jordan matrix by a similarity transformation using the modal matrix M. Due to this the state model
is transformed to Jordan canonical form.

CANONICAL FORM OF STATE MODEL

Consider the state equation of a system, 𝑋̇ = AX + BU, where X is the state variable
vector of order n x 1. Let us define a new state variable vector Z such that X = MZ, where M
is the modal matrix or diagonalization matrix.

The state model of the nth order system is given by

𝑋̇ = AX + BU ; Y = CX

On substituting X = MZ in the state model of the system, we get

𝑋̇ = AMZ + BU …2.32

Y = CMZ …2.33

On premultiplying Eqn (2.32) by M-1 we get,

M-1𝑋̇ = M-1AMZ + M-1BU …2.34

The relation governing X and Z is, X = MZ. …2.35

On differentiating Eqn (2.35), we get, 𝑋̇ = M𝑍̇ …2.36

On premultiplying the Eqn (2.36) by M -1 we get

M-1𝑋̇ = 𝑍̇ …2.37

From Eqn (2.34) and (2.37), we get,

𝑍̇ = M-1 AMZ + M-1 BU …2.38

Let, M-1 AM = Λ (a diagonal matrix whose diagonal elements are the eigenvalues of A) …2.39

M-1 B = 𝐵̃ … 2.40

CM = 𝐶̃ …2.41

∴ The canonical state model is, 𝑍̇ = ΛZ + 𝐵̃U ; Y = 𝐶̃Z
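A minimal numerical sketch of this transformation, using an arbitrary illustrative state model (A, B, C below are not from the text):

import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

eigvals, M = np.linalg.eig(A)        # columns of M are the eigenvectors
Minv = np.linalg.inv(M)

Lam = Minv @ A @ M                   # Eqn (2.39): diagonal system matrix
B_t = Minv @ B                       # Eqn (2.40): B-tilde
C_t = C @ M                          # Eqn (2.41): C-tilde
print(np.allclose(Lam, np.diag(eigvals)))   # True: A is diagonalized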

SCHOOL OF ELECTRICAL AND ELECTRONICS

DEPARTMENT OF ELECTRICAL AND ELECTRONICS

UNIT – III – Advanced Control Systems – SEEA1602

CONCEPTS OF CONTROLLABILITY AND OBSERVABILITY
CONTROLLABILITY
Controllability verifies the usefulness of a state variable. In the controllability test
we find whether the state variables can be controlled to achieve the desired output. The
choice of state variables is arbitrary while forming the state model. After determining the state
model, the controllability of the state variables is verified. If a state variable is not controllable
then we have to go for another choice of state variables.

Definition of controllability

A system is said to be completely state controllable if it is possible to transfer the system
state from any initial state X(t0) to any other desired state X(t1) in a specified finite time by a
control vector U(t).

The controllability of a state model can be tested by Kalman’s test or Gilbert’s test.

Gilbert’s method of testing controllability

Case (i): When the system matrix has distinct eigenvalues

In this case the system matrix can be diagonalized and the state model can be converted
to canonical form.

Consider the state model of the system,

The state model can be converted to canonical form by a transformation, X = MZ, where
M is the modal matrix and Z is the transformed state variable vector.

The transformed state model is given by

where

In this case the necessary and sufficient condition for complete controllability is that,
the matrix 𝐵̃ must have no rows with all zeros. If any row of the matrix 𝐵̃ is zero then the
corresponding state variable is uncontrollable.

Case (ii): When the system matrix has repeated eigenvalues

In this case, the system matrix cannot be diagonalized but can be transformed to Jordan
canonical form.

81
Consider the state model of the system,

The state model can be transformed to Jordan canonical form by a transformation
X = MZ, where M is the modal matrix and Z is the transformed state variable vector.

The transformed state model is given by,

where

In this case, the system is completely controllable if the elements of the rows of 𝐵̃ that
correspond to the last row of each Jordan block are not all zero, and the rows corresponding to
the other state variables do not have all zeros.

Kalman’s method of testing controllability

Consider a system with state equation, 𝑋̇ = AX + BU. For this system, a composite
matrix, Qc can be formed such that,

Qc = [B AB A²B … Aⁿ⁻¹B] …3.1

where n is the order of the system (n is also equal to number of state variables)

In this case the system is completely state controllable if the rank of the composite
matrix, Qc is n.

The rank of the matrix is n if the determinant of the (n x n) composite matrix Qc is
non-zero, i.e., if |Qc| ≠ 0, then the rank of Qc = n and the system is completely state controllable.

The advantage is kalman’s test is that the calculations are simpler. But the disadvantage
in kalman’s test is that, we can’t find the state variable which is uncontrollable. But is Gilbert’s
method we can find the uncontrollable state variable which is the state variable corresponding
to the row of 𝐵̃ which has all zeros.
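A compact sketch of Kalman's test as stated above (the pair A, B is an arbitrary illustrative choice):

import numpy as np

def is_controllable(A, B):
    # Build Qc = [B AB ... A^(n-1)B] and check that its rank equals n.
    n = A.shape[0]
    Qc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(Qc) == n

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))   # True for this pair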

Condition for complete state controllability in the s-plane

A necessary and sufficient condition for complete state controllability is that no


cancellation of poles and zeros occurs in the transfer function of the system. If cancellation
occurs then the system cannot be controlled in the direction of the cancelled mode.

OBSERVABILITY

In observability test we can find whether the state variable is observable or measurable.
The concept of observability is useful in solving the problem of reconstructing unmeasurable

state variables from measurable ones in the minimum possible length of time. In state feedback
control the estimation of unmeasurable state variables is essential in order to construct the
control signals.

Definition of observability

A system is said to be completely observable if every state X(t) can be completely


identified by measurements of the output Y(t) over a finite time interval. The observability of
a system can be tested by either Gilbert’s method or Kalman’s method.

Gilber’s method of testing observability

Consider the state model of an nth order system, 𝑋̇ = AX + BU ; Y = CX + DU

The state model can be transformed to a canonical or Jordan canonical form by a


transformation, X = MZ, where M is the modal matrix and Z is the transformed state variable
vector.

The transformed state model is,

where  = M-1 AM; if eigenvalues are distinct ; 𝐵̃ = M-1 B


J= M-1 AM; if eigenvalues have multiplicity; 𝐶̃ = CM

The necessary and sufficient condition for complete observability is that none of the
columns of the matrix 𝐶̃ be zero. If any of the columns of 𝐶̃ has all zeros then the
corresponding state variable is not observable.

Kalman’s Test for observability

Consider a system with state model, 𝑋̇ = AX + BU ; Y = CX + DU

For this system, a composite matrix Qo can be formed such that,

Qo = [Cᵀ AᵀCᵀ (Aᵀ)²Cᵀ … (Aᵀ)ⁿ⁻¹Cᵀ] …3.2

where n is the order of the system (n is also equal to number of state variables)

In this case, the system is completely observable if the rank of the composite matrix Qo
is n. The rank of the matrix is n if the determinant of the (n x n) composite matrix Qo is non-zero.
The disadvantage of Kalman's test is that the non-observable state variables cannot be
determined.
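The dual computation for Kalman's observability test, again with arbitrary illustrative matrices:

import numpy as np

def is_observable(A, C):
    # Build Qo = [C^T  A^T C^T ... (A^T)^(n-1) C^T] and check its rank.
    n = A.shape[0]
    Qo = np.hstack([np.linalg.matrix_power(A.T, k) @ C.T for k in range(n)])
    return np.linalg.matrix_rank(Qo) == n

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
print(is_observable(A, C))   # True for this pair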

Condition for complete observability in the s-plane

The necessary and sufficient condition for complete observability is that no cancellation
of poles and zeros occurs in the transfer function. If cancellation occurs, the cancelled mode
cannot be observed in the output.

RELATIONSHIPS BETWEEN CONTROLLABILITY, OBSERVABILITY &
TRANSFER FUNCTIONS

The concepts of controllability and observability play an important role in the design
of control systems in state space. They govern the existence of a complete solution to the control
system design problem. The solution to this problem may not exist if the system considered is
not controllable.

It is important to note that all physical systems are controllable and observable.
However, the mathematical models of these systems may not possess the property of
controllability or observability. It is therefore necessary to know the conditions under which a
system is controllable and observable, so that the designer can seek another state model which is
controllable and observable.

Duality property

The concepts of controllability and observability are dual concepts, and this was proposed
by Kalman as the principle of duality. The principle of duality states that a system is completely
state controllable if and only if its dual system is completely observable, and vice versa [i.e., if
the system is observable then its dual is controllable]. Using the principle of duality, the
observability of a given system can be checked by testing the state controllability of its dual, or
vice versa.

Consider the system S1, described by the state model shown below.

Let the dual of system S1 be denoted as S2; the dual system S2 is described by the
following state model.

where, Z = State vector of dual system


V = Input vector of dual system
N = Output vector of dual system

For the system S1 the composite matrix Qc1 for controllability is given by Eqn (3.3)
and the composite matrix Qo1 for observability is given by Eqn (3.4).

Qc1 = [B AB A²B … Aⁿ⁻¹B] …3.3
Qo1 = [Cᵀ AᵀCᵀ (Aᵀ)²Cᵀ … (Aᵀ)ⁿ⁻¹Cᵀ] …3.4

For the dual system S2 the composite matrix Qc2 for controllability is given by Eqn
(3.5) and the composite matrix Qo2 for observability is given by Eqn (3.6).

Qc2 = [Cᵀ AᵀCᵀ (Aᵀ)²Cᵀ … (Aᵀ)ⁿ⁻¹Cᵀ] …3.5

Qo2 = [B AB A²B … Aⁿ⁻¹B] …3.6

From equations (3.3) and (3.6) we get Qc1 = Qo2; hence if the system S1 is controllable
then its dual system S2 is observable.

From equations (3.4) and (3.5) we get Qo1 = Qc2; hence if the system S1 is observable
then its dual system S2 is controllable.

Effect of pole-zero cancellation in transfer function

The concepts of controllability and observability are closely related to the properties of
the transfer function. Consider an nth order system with distinct eigenvalues. The transfer
function of the system can be expressed as a ratio of two polynomials as shown in Eqn
(3.7).

…3.7

By the partial fraction expansion technique, Eqn (3.7) can be written as,

…3.8

where C1, C2, C3,….. Cn are residues.

If the transfer function has an identical pair of pole and zero at βi = λi, then Ci = 0. The
effect of this cancellation on the controllability and observability properties depends on the choice
of state variables [or depends on the method of forming the state model].

In one method of state space modelling using canonical variables, the term with Ci = 0
appears in the input (control) matrix B and the state xi is uncontrollable. In another method of
state space modelling using canonical variables, the term with Ci = 0 appears in the output matrix
C and the state xi is shielded from observation.

From the above discussion we can conclude that if cancellation of pole-zero occurs in
the transfer function of a system, then the system will be either not state controllable or
unobservable, depending on how the state variables are defined (or chosen). If the transfer
function does not have pole-zero cancellation, the system can always be represented by
completely controllable and observable state model.

EXAMPLE 3.6

Write the state equations for the system shown in
Figure 3.1, in which x1, x2 and x3 constitute the state vector.
Determine whether the system is completely controllable and
observable.
Figure 3.1

SOLUTION

To find state model

The state equations are obtained by writing equations for the output of each block and
then taking inverse Laplace transform.

With reference to Figure 3.2 we can write,

Figure 3.2

On taking the inverse Laplace transform,

Figure 3.3
…3.6.1

With reference to Figure 3.3, we can write,

X3(s) = sX1(s)

On taking the inverse Laplace transform,

x3 = 𝑥̇ 1 …3.6.2

With reference to Figure 3.4 we can write


Figure 3.4

On taking the inverse Laplace transform,

…3.6.3

From Eqn (3.6.2), 𝑥̇1 = x3 ; 𝑥̈1 = 𝑥̇3

Put 𝑥̇1 = x3 and 𝑥̈1 = 𝑥̇3 in equation (3.6.1)

The state equations are given by equations (3.6.2), (3.6.3) and (3.6.4)

The output equation is y = x1

The state model in the matrix form is

To find eigenvalues

Here the system matrix,

The characteristic equation is |λI - A| = 0

The eigenvalues are λ1 = -1, λ2 = -1 and λ3 = -4.

To find eigenvectors

Let C11, C12 and C13 be the cofactors along the 1st row of the matrix (λ1 I - A)

Let C11, C12 and C13 be the cofactors along the 1st row of the matrix (λ3 I - A):

To find canonical form of state model

The modal matrix, M is given by

The Jordan canonical form of state model is shown below.

CONCLUSION

It is observed that the elements of the rows of 𝐵̃ are not all zeros. Hence the system is
completely controllable (or state controllable).

It is observed that the elements of the columns of 𝐶̃ are not all zeros. Hence the system
is completely observable [i.e., all the state variables are observable].

ALTERNATE METHOD

KALMAN’S TEST FOR CONTROLLABILITY

The composite matrix for controllability,

Hence the system is completely state controllable.

KALMAN’S TEST FOR OBSERVABILITY

Hence the system is completely observable (or all the state variables of the system are
observable).

3.7 CONTROLLABLE PHASE VARIABLE FORM OF STATE MODEL

A controllable system can be represented by a modified state model called controllable


phase variable form by transforming the system matrix, A into phase variable form (Bush form
or companion form).

Consider the state model of nth order system with single-input and single output as
shown below.

𝑋̇ = AX + Bu …3.9

y = CX + Du …3.10

Let us choose a transformation, Z = Pc X to transform the state model to controllable


phase variable form.

Here Z = Transformed state vector of order (n x 1)

Pc = Transformation matrix of order (n x n)

On premultiplying the equation Z = Pc X by Pc-1 we get

Pc-1Z = Pc-1 Pc X

∴ X = Pc-1 Z

On differentiating the equation X = Pc-1Z we get,

𝑋̇ = Pc-1 𝑍̇

On substituting X = Pc-1Z and 𝑋̇ = Pc-1 𝑍̇ in the state model (equation (3.9) and (3.10))
of the system we get,

Pc-1 𝑍̇ = APc-1 Z + Bu …3.11

y = C Pc-1 Z + Du …3.12

On premultiplying the equations (3.11) by Pc we get,

𝑍̇ = PcAPc-1 Z + PcBu

y = C Pc-1 Z + Du

Let, Pc APc-1 = Ac ; PcB = Bc and CPc-1 = CC

∴ 𝑍̇ = AC Z + BCu …3.13

y = CC Z + Du …3.14

The equations (3.13) and (3.14) are called the controllable phase variable form of state
model of the system.

Note: In controllable phase variable form of state model the matrices AC, BC and CC
will be as shown below.

Determination of transformation matrix, Pc

The n x n transformation matrix Pc can be expressed as n row vectors
(matrices) as shown below.

Pc = [P1 ; P2 ; P3 ; … ; Pn] (the Pi arranged row-wise) …3.15

where P1, P2, …, Pn are (1 x n) row vectors.

The transformation Z = Pc X can be written in the expanded form as,

z1 = P1X, z2 = P2X, z3 = P3X, …, zn = PnX …3.16

From equation (3.16) we get,

∴ z1 = P1X …3.17

On differentiating equation (3.17) we get

𝑧̇ 1 = P1𝑋̇ …3.18

On substituting for 𝑋̇ from equation (3.9) in equation (3.18) we get

𝑧̇1 = P1AX + P1Bu

Since the transformed state variables are functions of the state variables alone, the term P1B
must be zero (i.e., P1B = 0)

𝑧̇1 = P1AX …3.19

We know that, 𝑧̇ 1 = z2

z2 = 𝑧̇ 1 = P1 AX …3.20

On differentiating equation (3.20) we get

𝑧̇ 2 = P1A𝑋̇ …3.21

On substituting for 𝑋̇ from equation (3.9) in equation (3.21) we get,

𝑧̇2 = P1A²X + P1ABu, and with P1AB = 0, 𝑧̇2 = P1A²X

We know that, 𝑧̇2 = z3

∴ z3 = P1A²X …3.22

Similarly, the kth transformed state variable zk can be expressed as zk = P1Aᵏ⁻¹X.

Hence the n transformed state variables can be expressed as shown below.

z1 = P1X, z2 = P1AX, z3 = P1A²X, …, zn = P1Aⁿ⁻¹X

On arranging the above equations in matrix form we get

Z = [P1 ; P1A ; P1A² ; … ; P1Aⁿ⁻¹] X …3.23

provided P1B = P1AB = … = P1Aⁿ⁻²B = 0 and P1Aⁿ⁻¹B = 1.

Since equation (3.23) is the same as Z = PcX, we can write,

Pc = [P1 ; P1A ; P1A² ; … ; P1Aⁿ⁻¹] …3.24

On arranging the conditions on the elements P1B, P1AB, P1A²B, …, P1Aⁿ⁻¹B in matrix form we get

P1 [B AB A²B … Aⁿ⁻¹B] = P1Qc = [0 0 … 0 1], so that P1 = [0 0 … 0 1] Qc⁻¹ …3.25

where, Qc = [B AB A²B … Aⁿ⁻²B Aⁿ⁻¹B] …3.26

Using equations (3.24), (3.25) and (3.26), the transformation matrix Pc can be
evaluated.

Alternate method to find transformation matrix, Pc

Let A be the system matrix of the original state model. Now the characteristic equation
governing the system is given by Eqn (3.27).

λⁿ + a₁λⁿ⁻¹ + a₂λⁿ⁻² + … + aₙ₋₁λ + aₙ = 0 …3.27

Using the coefficients a₁, a₂, …, aₙ₋₂, aₙ₋₁ of the characteristic equation [Eqn (3.27)] we can
form a matrix W as shown in Eqn (3.28).

W = [aₙ₋₁ aₙ₋₂ … a₁ 1 ; aₙ₋₂ aₙ₋₃ … 1 0 ; … ; a₁ 1 … 0 0 ; 1 0 … 0 0] (rows listed top to bottom) …3.28

Now the transformation matrix Pc is given by

Pc = (Qc W)⁻¹ …3.29

(or) Pc⁻¹ = Qc W …3.30

where, Qc = [B AB A²B … Aⁿ⁻²B Aⁿ⁻¹B]
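A numerical sketch of this alternate construction; the controllable pair (A, B) below is an arbitrary illustrative choice, and the printed Ac should come out in companion (phase variable) form:

import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-4.0, -9.0, -6.0]])
B = np.array([[1.0], [0.0], [1.0]])
n = A.shape[0]

c = np.poly(A)                        # [1, a1, a2, ..., an]
W = np.array([[c[n - 1 - i - j] if n - 1 - i - j >= 0 else 0.0
               for j in range(n)] for i in range(n)])   # Eqn (3.28)
Qc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

Pc_inv = Qc @ W                       # Eqn (3.30)
Pc = np.linalg.inv(Pc_inv)            # Eqn (3.29)
Ac = Pc @ A @ Pc_inv                  # phase variable (companion) form
print(np.round(Ac, 6))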

EXAMPLE 3.7

The state model of a system is given by

Convert the state model to controllable phase variable form

SOLUTION

The given state model can be transformed to controllable phase variable form, only if
the system is completely state controllable. Hence check for controllability.

Kalman’s test for controllability

From the given state model we get,

The composite matrix for controllability,

The determinant of Qc is non-zero, i.e., |Qc| ≠ 0.

Since |Qc| ≠ 0, the rank of Qc = 3. Hence the system is completely state controllable.

To find transformation matrix Pc

The system state model can be converted to controllable phase variable form by
choosing a transformation matrix, Pc.

∴ Transformation matrix,

To determine the controllable phase variable form of state model

The controllable phase variable form of state model is given by,

𝑍̇ = AcZ + Bcu

y = CcZ (Here D is not given)

Where Ac = PcAPc-1 ; Bc = PcB and Cc = CPc-1

The transformation matrix,

The controllable phase variable form of state model is given by,

Alternate method to find Pc

From the given state model we get,

The characteristic equation is 1 + 62 + 9 + 4 = 0

The standard form of characteristic equation when n = 3 is given by

3 + a12 + a2 + a3 = 0

On comparing the characteristic equation of the system with standard form we get,

a1 = 6, a2 = 9 and a3 = 4

3.8 CONTROL SYSTEM DESIGN VIA POLE PLACEMENT BY STATE FEEDBACK

In the conventional approach to the design of a single-input, single-output control
system, a controller or compensator is designed such that the dominant closed-loop poles have
a desired damping ratio ζ and undamped natural frequency ωn. In the compensated system the
output alone is used as the feedback signal to achieve the desired performance. In state space design
any inner parameter or variable of a system can be used for feedback. If the state variables
(inner parameters or variables of the system) are used for feedback, then the system can be
optimized to satisfy a desired performance index.

In control system design by pole placement or pole assignment technique, the state
variables are used for feedback, to achieve desired closed loop poles. The advantage in this
system is that the closed loop poles may be placed at any desired locations by means of state
feedback through an appropriate state feedback gain matrix, K. The necessary and sufficient
condition to be satisfied by the system for arbitrary pole placement is that the system be
completely state controllable.

Consider the nth order single – input single-output system with and without state
variable feedback as shown in Figure 3.5. The state model of the system without state feedback
is given by.

𝑋̇ = AX + Bu …3.31

Y = CX …3.32

Figure(a) System without state feedback Figure(b) System with state feedback

Figure 3.5 The nth order single – input single – output system

Let r = System input when state variable feedback is employed.


σ = Feedback signal obtained from state variables.
U = Plant input.

The feedback signal, σ is obtained from state feedback and it is related to the state
variables by the equation,

σ = KX …3.33

where K = State feedback gain matrix of order (1 x n) and

K = (k1 k2 k3 … kn) …3.34

In a system employing state variable feedback, the plant input u is the difference
between the system input r and the feedback signal σ.

∴ Plant input, u = r - σ …3.35

On substituting, σ = KX in equation (3.35) we get,

u = r – KX …3.36

The equation (3.36) is called control law.

The state equation of the system with state variable feedback is obtained by substituting
the expression for u, from equation (3.36) in equation (3.31).

Therefore, the state model of the system with state variable feedback is given by the
following equations [Eqn (3.37) and (3.38)].

𝑋̇ = (A-BK) X + Br …3.37

y = CX …3.38

where, K = [k1, k2, k3 … kn]


and r = u + KX

This design technique starts with the determination of desired closed-loop poles to
satisfy transient response and/or frequency response requirements. By choosing an appropriate
gain matrix, K for state feedback, it is possible to force the system to have closed loop poles at
the desired locations, provided that the original system is completely state controllable. In this
design technique it is assumed that all state variable are measurable and are available for
feedback.

DETERMINATION OF STATE FEEDBACK GAIN MATRIX, K

The state feedback gain matrix can be determined by three methods. In all the three
methods, the system has to be first checked for complete state controllability.

The state model of the original nth order system is given by

𝑋̇ = AX + Bu

Y = CX

To check for controllability of original system, determine the composite matrix for
controllability Qc.

Where, Qc = [B AB A2B … An-1 B]

Then calculate the determinant of Qc. If the determinant of Qc is not equal to zero, then
the rank of Qc is n and the system is completely state controllable (here n is the order of
the system). If the rank is not equal to n then arbitrary pole placement is not possible. When
the system is completely state controllable, any one of the following methods can be used to find
K.

METHOD – I

1. Determine the characteristic polynomial of the original system. The characteristic
polynomial is given by |λI - A| = 0.

2. Determine the desired characteristic polynomial from the specified closed loop poles.
Let the specified or desired closed loop poles be µ1, µ2, µ3, …. µn.

Now the desired characteristic polynomial is given by

(λ - µ1)(λ - µ2) … (λ - µn) = λⁿ + α₁λⁿ⁻¹ + … + αₙ₋₁λ + αₙ

3. Determine the transformation matrix, Pc which transforms the original state model to
controllable phase variable form.

The transformation matrix,

and, P1 = [0 0 … 0 1] Qc⁻¹

4. Determine the state feedback gain matrix from the following equation.

K = [αₙ - aₙ  αₙ₋₁ - aₙ₋₁  …  α₁ - a₁] Pc

Note: If the given system state model is already in controllable phase variable form then Pc = I, the
unit matrix.

METHOD – II

1. Determine the characteristic polynomial of the system with state feedback, which is
given by |λI - (A-BK)| = 0.

Here take, K = [k1, k2, k3 … kn]

Let |λI - (A-BK)| = |λI - A + BK| = λⁿ + b₁λⁿ⁻¹ + b₂λⁿ⁻² + … + bₙ₋₁λ + bₙ.

The coefficients b₁, b₂, b₃, …, bₙ of this polynomial will be functions of k1, k2, k3, …, kn.

2. Determine the desired characteristic polynomial from the specified closed loop poles.
Let the specified or desired closed loop poles be µ1, µ2, µ3, … µn. Now the desired
characteristic polynomial is given by,

(λ - µ1)(λ - µ2) … (λ - µn) = λⁿ + α₁λⁿ⁻¹ + … + αₙ₋₁λ + αₙ

3. By equating the coefficients of the polynomials obtained in step 1 and step 2, we get
n equations,

i.e., b₁ = α₁ ; b₂ = α₂ ; … ; bₙ₋₁ = αₙ₋₁ and bₙ = αₙ.

On solving these equations we get the elements k1, k2, …, kn of the state feedback gain
matrix K.

Note: Method – II is suitable only for low values of n (i.e. for 2 nd and 3rd order systems)
otherwise calculations will be tedious.

METHOD – III

1. Determine the desired characteristic polynomial from the specified closed loop poles.
Let the specified or desired closed loop poles be µ1, µ2, µ3, … µn.

Now the desired characteristic polynomial is given by,

(λ - µ1)(λ - µ2) … (λ - µn) = λⁿ + α₁λⁿ⁻¹ + … + αₙ₋₁λ + αₙ

2. Determine the matrix φ(A) using the coefficients of the desired characteristic polynomial:

φ(A) = Aⁿ + α₁Aⁿ⁻¹ + … + αₙ₋₁A + αₙI

3. Calculate the state feedback gain matrix K using Ackermann's formula given
below.

K = [0 0 … 0 1] Qc⁻¹ φ(A)

where, Qc = [B AB A²B … Aⁿ⁻¹B]
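A minimal sketch of Ackermann's formula as stated above; the function and variable names are illustrative, and (A, B) is assumed to be a completely state controllable pair:

import numpy as np

def ackermann(A, B, poles):
    # K = [0 0 ... 0 1] Qc^-1 phi(A), with phi(A) built from the
    # coefficients of the desired characteristic polynomial.
    n = A.shape[0]
    Qc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    alpha = np.real(np.poly(poles))     # [1, alpha1, ..., alphan]
    phiA = sum(c * np.linalg.matrix_power(A, n - i)
               for i, c in enumerate(alpha))
    e_n = np.zeros((1, n)); e_n[0, -1] = 1.0
    return e_n @ np.linalg.inv(Qc) @ phiA

# Illustrative call for a third order system with the poles of Example 3.9:
# K = ackermann(A, B, [-1 + 2j, -1 - 2j, -6])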

EXAMPLE 3.8

Consider a linear system described by the transfer function

Design a feedback controller with state feedback so that the closed loop poles are
placed at -2, -1 ± j1.

SOLUTION

To determine the state equation of the system

…3.8.1

On cross multiplying the equation (3.8.1) we get,

…3.8.2

On taking the inverse Laplace transform of equation (3.8.2) we get,

… 3.8.3

Let us define state variables as follows,

in equation (3.8.3)

The state equations governing the system are

The state equation in the matrix form is

…3.8.4

Check for controllability

…3.8.5

Since, QC  0, the system is completely state controllable.

To find Qc-1

From equation (3.8.6) and (3.8.7) we get

To find desired characteristic polynomial

The desired closed loop poles are µ1 = -2, µ2 = -1 + j1 and µ3 = -1 - j1.

Hence the desired characteristic polynomial is

(λ + 2)(λ + 1 - j1)(λ + 1 + j1) = 0

∴ The desired characteristic polynomial is λ³ + 4λ² + 6λ + 4 = 0 …3.8.9

To determine the state variable feedback matrix, K

Method – I

The characteristic polynomial of the original system is given by |λI - A| = 0

The characteristic polynomial of the original system is,

λ³ + 3λ² + 2λ = 0 …3.8.10

From Eqn (3.8.9) we get the desired characteristic polynomial as

λ³ + 4λ² + 6λ + 4 = 0 …3.8.11

From equation (3.8.8.) we get,

The state feedback gain matrix, K = [α3 – a3 α2 – a2 α1 – a1 ] Pc

From equation (3.8.11) we get, α3 = 4; α2 = 6; α1 = 4

From equation (3.8.10) we get, a3 = 0; a2 = 2; a1 = 3

∴ K = [4-0  6-2  4-3] Pc = [4 4 1] Pc

Method – II

From the given state model we get,

Let, K = [k1 k2 k3]

The characteristic polynomial of the system with state feedback is given by,

The characteristic polynomial of the system with state feedback is

…3.8.12

From equation (3.8.9) we get the desired characteristic polynomial as,

λ³ + 4λ² + 6λ + 4 = 0 …3.8.13

On equating the coefficients of the λ⁰ term (constant) in equations (3.8.12) and (3.8.13) we
get,

On equating the coefficients of the λ¹ term in equations (3.8.12) and (3.8.13) we get,

On equating the coefficients of the λ² term in equations (3.8.12) and (3.8.13) we get,

The state feedback gain matrix, K = [k1 k2 k3 ] = [0.4 0.4 0.1]

Method – III

From equation (3.8.9) we get the desired characteristic polynomial as,

λ³ + 4λ² + 6λ + 4 = 0 …3.8.14

Here, φ(A) = A³ + α₁A² + α₂A + α₃I

From equation (3.8.14) we get, α₁ = 4; α₂ = 6; α₃ = 4.

From the given state equation and equation (3.8.5) we get,

From equation (3.8.8) we get,

From Ackermann’s formula we get,

The state feedback gain matrix K = [ 0.4 0.4 0.1 ]

Note: It is observed that the values of k1, k2, k3 obtained by all three methods are the
same, because for a given set of poles the values of k1, k2, k3, … are unique.
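As a cross-check of this result, the closed-loop poles of A - BK can be computed numerically. The phase variable realization below (gain 10, plant characteristic polynomial s³ + 3s² + 2s) is an assumption consistent with the working above, since the original matrices appeared only as images:

import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -2.0, -3.0]])
B = np.array([[0.0], [0.0], [10.0]])
K = np.array([[0.4, 0.4, 0.1]])

# Closed-loop poles of A - BK should be -2 and -1 +/- j1.
print(np.sort_complex(np.linalg.eigvals(A - B @ K)))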

EXAMPLE 3.9

A single input system is described by the following state equations.

Design a state feedback controller which will give closed-loop poles at -1 ± j2, -6.

SOLUTION

Check for controllability

Since |Qc| ≠ 0, the system is completely state controllable.

To find QC-1

From equations (3.9.2) and (3.9.3) we get,

…3.9.4

To find desired characteristic polynomial

The desired closed loop poles are µ1 = -1 + j2, µ2 = -1 - j2 and µ3 = -6.

Hence the desired characteristic polynomial is,

(λ + 1 - j2)(λ + 1 + j2)(λ + 6) = 0

∴ The desired characteristic polynomial is λ³ + 8λ² + 17λ + 30 = 0 …3.9.5

To determine the state variable feedback matrix, K

Method – I

The characteristic equation of the original system is given by |λI - A| = 0.

The characteristic polynomial of the original system is,

λ³ + 6λ² + 11λ + 6 = 0 …3.9.6

From equation (3.9.5) we get the desired characteristic polynomial as

λ³ + 8λ² + 17λ + 30 = 0 …3.9.7

From equation (3.9.4) we get,

The state feedback gain matrix, K = [α3 – a3 α2 – a2 α1 – a1 ] Pc

From equation (3.9.7) we get, α3 = 30 ; α2 = 17 ; α1 = 8

From equation (3.9.6) we get, a3 = 6 ; a2 = 11 ; a1 = 6

Method – II

From the given state model we get

The characteristic polynomial of the systems with state feedback is given by,

The characteristic polynomial of system with state feedback is

…3.9.8

From equation (3.9.5) we get the desired characteristic polynomial as,

λ³ + 8λ² + 17λ + 30 = 0 …3.9.9

On equating the coefficients of the λ² term in equations (3.9.8) and (3.9.9) we get,

…3.9.10

On equating the coefficients of the λ¹ term in equations (3.9.8) and (3.9.9) we get,

…3.9.11

On equating the coefficients of the λ⁰ term (constant) in equations (3.9.8) and (3.9.9) we
get,

…3.9.12

The equations (3.9.10), (3.9.11) and (3.9.12) can be arranged in matrix form and k1,
k2 and k3 solved using Cramer's rule as shown below.

The state feedback gain matrix, K = [k1 k2 k3 ] = [ -0.22 4.22 -2 ]

Method – III

From equation (3.9.5) we get the desired characteristic polynomial as,

λ³ + 8λ² + 17λ + 30 = 0 …3.9.13

Here, φ(A) = A³ + α₁A² + α₂A + α₃I

From equation (3.9.13) we get, α1 = 8 ; α2 = 17 ; α3 = 30

From the given state equation and equation (3.9.1) we get,

From equation (3.9.4) we get,

From Ackermann’s formula we get,

The state feedback gain matrix, K = [ -0.22 4.22 -2]

Note: The results obtained from all three methods are the same.

3.9 OBSERVABLE PHASE VARIABLE FORM OF STATE MODEL

An observable system can be represented by a modified state model called the observable
phase variable form by transforming the system matrix A into the transpose of the Bush form or
companion form, as shown in equation (3.34).

…3.34

Consider the state model of an nth order system with single input and single output as
shown below.

𝑋̇ = AX + Bu …3.35

y = CX + Du …3.36

Let us choose a transformation Z = PoX to transform the state model to the observable
phase variable form.

Here, Z = Transformed state vector of order (n x 1)

Po = Transformation matrix of order (n x n)

On premultiplying the equation, Z = Po X by Po-1 we get,

Po-1 Z = Po-1 PoX

X = Po-1 Z

On differentiating the equation X = Po-1 Z we get,

𝑋̇ = Po-1 𝑍̇

On substituting X = Po-1 Z and 𝑋̇ = Po-1 𝑍̇ in the state model [equations (3.35) and (3.36)]
of the system we get,

Po-1 𝑍̇ = A Po-1 Z + B u …3.37

y = C Po-1Z + D u ….3.38

On premultiplying the equation (3.37) by Po we get,

Let, PoAPo-1 = Ao ; PoB = Bo and CPo-1 = Co

𝑍̇ = AoZ + Bo u …3.39

y = Co Z + Du …3.40

The equations (3.39) and (3.40) are called the observable phase variable form of the state
model of the system.

Note: In the observable phase variable form of the state model the matrix Ao has the
transposed companion form shown in equation (3.34).

DETERMINATION OF THE TRANSFORMATION MATRIX Po

Let A be the system matrix of original state model. Now the characteristic equation
governing the system is given by equation (3.41).

λⁿ + a₁λⁿ⁻¹ + a₂λⁿ⁻² + … + aₙ₋₁λ + aₙ = 0 …3.41

Using the coefficients a₁, a₂, …, aₙ₋₂, aₙ₋₁ of the characteristic equation [equation (3.41)] we
can form a matrix W as shown in equation (3.42).

W = [aₙ₋₁ aₙ₋₂ … a₁ 1 ; aₙ₋₂ aₙ₋₃ … 1 0 ; … ; a₁ 1 … 0 0 ; 1 0 … 0 0] (rows listed top to bottom) …3.42

Now the transformation matrix Po is given by

Po = W QoT … 3.43

where Qo = [Cᵀ AᵀCᵀ (Aᵀ)²Cᵀ … (Aᵀ)ⁿ⁻¹Cᵀ]
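A numerical sketch of Eqn (3.43); the observable pair (A, C) is an arbitrary illustrative choice, and Po A Po⁻¹ should print in the transposed companion form of Eqn (3.34):

import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-4.0, -9.0, -6.0]])
C = np.array([[1.0, 0.0, 0.0]])
n = A.shape[0]

c = np.poly(A)                        # [1, a1, ..., an]
W = np.array([[c[n - 1 - i - j] if n - 1 - i - j >= 0 else 0.0
               for j in range(n)] for i in range(n)])   # Eqn (3.42)
Qo = np.hstack([np.linalg.matrix_power(A.T, k) @ C.T for k in range(n)])

Po = W @ Qo.T                         # Eqn (3.43)
Ao = Po @ A @ np.linalg.inv(Po)       # transposed companion form
print(np.round(Ao, 6))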

EXAMPLE 3.10

The state model of a system is given by

Convert the state model to observable phase variable form.

SOLUTION

The given state model can be transformed to observable phase variable form, only if
the system is completely observable. Hence check for observability.

Kalman’s test for observability

From the given state model we get,

Since Qo , the rank of Qo = 3. Hence the system is completely observable.

To find transformation matrix, Po

The characteristic equation is,

λ³ + 6λ² + 9λ + 4 = 0

The standard form of the characteristic equation when n = 3 is given by,

λ³ + a₁λ² + a₂λ + a₃ = 0

On comparing the characteristic equation of the system with the standard form we get,

a₁ = 6, a₂ = 9 and a₃ = 4

To determine the observable phase variable form of state model

The observable phase variable form of state model is given by,

𝑍̇ = Ao Z + Bo u

Y = Co Z (Here D is not given)

Where, Ao = Po A Po-1 ; Bo = Po B and Co = C Po-1

SCHOOL OF ELECTRICAL AND ELECTRONICS

DEPARTMENT OF ELECTRICAL AND ELECTRONICS

UNIT – IV – Advanced Control Systems – SEEA1602

SAMPLED DATA CONTROL SYSTEMS
4.1 INTRODUCTION

When the signal or information at any or some points in a system is in the form of
discrete pulses, then the system is called discrete data system. In control engineering the
discrete data system is popularly known as sampled data system.

The control system becomes a sampled data system in any one of the following
situations.

1. When a digital computer or microprocessor or digital device is employed as a


part of the control loop.
2. When the control components are used on time sharing basis.
3. When the control signals are transmitted by pulse modulation.
4. When the output or input of a component in the system is a digital or discrete
signal.

Controllers are provided in control systems to modify the error signal for better
control action. If the controllers are constructed using analog elements then they are called
analog controllers, and their input and output are analog signals, which are continuous functions
of time. Analog controllers are complex and costlier, and once fabricated it is difficult to alter
the controllers.

A digital controller can be employed to implement complex or time-shared control
functions. [In a time-shared controller, a single controller will perform more than one function.]
Digital controllers are simple, versatile, programmable, fast acting and less costly than
analog controllers.

The digital controller can be a special purpose computer (microprocessor based system)
or a general purpose computer, or it can be constructed using non-programmable digital devices.
When a computer or microprocessor is involved the controller becomes programmable and
it is easier to alter the control functions by modifying the program instructions.

A sampled-data control system using digital controller is shown in Figure 4.1. The input
and output signal in a digital computer will be digital signals, but the error signal (input to the
controller) to be modified by the controller and the control signal (output of the controller) to
drive the plant are analog in nature. Hence a sampler and an analog-to-digital converter (ADC)
are provided at the computer input. A digital to analog converter (DAC) and a hold circuit are
provided at the computer output.

Figure 4.1 Sampled-data control system

The sampler converts the continuous time error signal into a sequence of pulses, and the
ADC produces a binary code (binary number) for each sample. These codes are the input data
to the digital computer, which processes the binary codes and produces another stream of binary
codes as output. The DAC and hold circuit convert the output binary codes to a continuous time
signal (analog signal) called the control signal. This output control signal is used to drive the plant.

ADVANTAGES OF DIGITAL CONTROLLERS

1. The digital controllers can perform large and complex computation with any desired
degree of accuracy at very high speed. In analog controllers the cost of controllers
increases rapidly with the increase in complexity of computation and desired accuracy.
2. The digital controllers are easily programmable and so they are more versatile.
3. Digital controllers have better resolution.

ADVANTAGES OF SAMPLED DATA CONTROL SYSTEMS

1. The sampled data systems are highly accurate, fast and flexible.
2. Use of the time sharing concept of a digital computer results in economy of cost and space.
3. Digital transducers used in the system have better resolution.
4. The digital components used in the system are less affected by noise, non linearities
and transmission errors of noisy channel.
5. The sampled data system require low power instruments which can be built to have
high sensitivity.
6. Digital coded signals can be stored, transmitted, retransmitted, detected, analysed or
processed as desired.
7. The system performance can be modified by compensation techniques.

4.2 SAMPLING PROCESS

Sampling is the conversion of a continuous-time signal (or analog signal) into a
discrete-time signal obtained by taking samples of the continuous time signal at
discrete time instants. Thus if f(t) is the input to the sampler as shown in Figure 4.2, the
output is f(kT), where T is called the sampling interval or sampling period. The reciprocal of T,
i.e., 1/T = Fs, is called the sampling rate (or samples per second or sampling frequency). This
type of sampling is called periodic sampling, since samples are obtained uniformly at intervals
of T seconds.

Fig.a Sampler; Fig.b Analog signal; Fig.c Discrete signal or sequence

Figure 4.2 Periodic sampling of an analog signal

In this book only periodic sampling of signals is considered, because periodic sampling
is most widely used in practice. The other forms of sampling are multiple-order sampling,
multiple-rate sampling and random sampling.

Multiple-order sampling: A particular sampling pattern is repeated periodically.

Multiple-rate sampling: In this method two simultaneous sampling operations with


different time periods are carried out on the signal to produce the sampled output.

Random sampling: In this case the sampling instants are random.

The sampling frequency Fs (= 1/T) must be selected large enough so that the sampling
process does not result in any loss of spectral information (i.e., if the spectrum of the analog
signal can be recovered from the spectrum of the discrete-time signal, there is no loss of
information). A guideline for choosing the sampling frequency is given by the sampling theorem
stated below.

SAMPLING THEOREM: A band limited continuous time signal with highest
frequency (bandwidth) fm hertz can be uniquely recovered from its samples provided that the
sampling rate Fs is greater than or equal to 2fm samples per second.

From the sampling theorem we can infer that the knowledge of frequency content of a
signal is essential while choosing the sampling frequency.
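A small numerical sketch of this guideline (the frequencies below are arbitrary illustrative values): a 10 Hz sinusoid sampled at 15 Hz violates Fs ≥ 2fm, and its samples become indistinguishable from those of a 5 Hz alias.

import numpy as np

fm, Fs = 10.0, 15.0                 # signal frequency and sampling rate
k = np.arange(8)                    # sample indices

samples = np.sin(2 * np.pi * fm * k / Fs)        # sampled 10 Hz sine
alias = np.sin(2 * np.pi * (fm - Fs) * k / Fs)   # -5 Hz alias of 10 Hz
print(np.allclose(samples, alias))               # True: aliasing occurs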

For processing the sampled signals by digital means, they have to be converted to binary
codes, and this conversion process is called quantization and coding. The process of converting
a discrete time continuous valued signal into a discrete time discrete valued signal is called
quantization. In quantization the value of each signal sample is represented by a value selected
from a finite set of possible values called quantization levels. The difference between the
unquantized sample and the quantized output is called the quantization error. Coding is the
process of representing each discrete value by an n-bit binary sequence (or code or number).
The processes of sampling, quantization and coding are performed by a sample/hold circuit and
an ADC.

4.3 ANALYSIS OF SAMPLING PROCESS IN FREQUENCY DOMAIN

The sampling process explained in the previous section is equivalent to multiplying the
analog signal f(t) with an impulse train δT(t) to produce the sampled signal fs(t). Let the impulse
train consist of pulses of unit area. Hence the impulse sampled signal fs(t) can be expressed as,

fs(t) = f(t) δT(t) …4.1

Mathematically, the impulse train δT(t) can be expressed as,

δT(t) = Σk δ(t - kT), for k = -∞ to ∞ …4.2

∴ fs(t) = Σk f(t) δ(t - kT), for k = -∞ to ∞ … 4.3

where T is the sampling period.

A typical analog signal, f(t) [Fig a]; the impulse train, δ T(t) [Fig b] and the impulse
sampled signal, fs(t) [Fig c] are shown in Figure 4.3.

Fig.a. Analog signal; Fig.b. Impulse train; Fig.c. Impulse sampled analog signal

Figure 4.3 Impulse sampling of an analog signals

The frequency content (frequency response) of a signal can be obtained from the Fourier
transform of the signal [i.e., the Fourier transform converts the time domain signal to a frequency
domain signal]. Hence the frequency response of the impulse sampled signal can be obtained
by taking the Fourier transform of Eqn (4.3).

The Fourier transform of a single-valued function f(t) is defined as

F(ω) = ∫ f(t) e^(-jωt) dt, for t = -∞ to ∞ …4.4
On taking the Fourier transform of fs(t) using the definition of the Fourier transform we get,

… 4.5

Mathematically, Eqn (4.5) involves the convolution of the two signals f(t) and δ(t -
kT). The convolution theorem of the Fourier transform says that the convolution of two time
domain signals is equivalent to the product of their individual Fourier transforms. Therefore, the
Fourier transform of fs(t) can be expressed in terms of the Fourier transforms of f(t) and δ(t -
kT).

…4.6

Let, F{f(t)} = F(ω) …4.7

…4.8

where, ωs = 2π/T = sampling frequency in rad/sec.

Using equations (4.7) and (4.8), Eqn (4.6) can be written as,

Since F(ω) ∗ δ(ω - kωs) = F(ω - kωs),

Fs(ω) = (1/T) Σk F(ω - kωs), for k = -∞ to ∞ …4.9

The equation (4.9) gives the frequency spectrum of the impulse sampled signal.

Let f(t) be a band-limited signal with a maximum frequency of ωm. The frequency
spectrum F(ω) is shown in Figure 4.4(a), which is a plot of |F(ω)| vs ω. The frequency
spectrum of the impulse sampled signal, i.e., |Fs(ω)| vs ω, is shown in Figure 4.4(b) when
ωs > 2ωm and in Figure 4.4(c) when ωs < 2ωm.

In Figure 4.4(b) the frequency spectrum of the original signal is repeated periodically with
period ωs and there is no overlapping of the original spectrum. In Figure 4.4(c) the periodic
repetitions of the original spectrum overlap.


Figure 4.4 Fourier spectra of input signal and its impulse sampled version

From Figure 4.4 it is observed that, as long as ωs ≥ 2ωm, the original spectrum is preserved
(since there is no overlapping) in the sampled signal and can be extracted from it by low-pass
filtering. This fact was proposed as Shannon's sampling theorem, which states that the
information contained in a signal is fully preserved in its sampled version as long as the
sampling frequency is at least twice the maximum frequency in the signal.

4.4 RECONSTRUCTION OF SAMPLED SIGNALS USING HOLD CIRCUITS

The hold circuits are popularly used in the process of analog-to-digital conversion
(ADC) and digital-to-analog conversion (DAC). In ADC process the hold circuit is used to
hold the sample until the quantization and coding for the current sample is complete.

In DAC process various types of hold circuits are used to convert the discrete time
signal to analog signal. The simplest hold circuit is the zero order hold (ZOH). In zero order
hold circuits the signal is reconstructed such that the value of reconstructed signal for a
sampling period is same as the value of last received sample. The schematic diagram of sampler
and zero order hold (ZOH) is shown in Fig 4.5. The signal reconstruction by zero order hold
(ZOH) circuit is illustrated in Fig 4.6.

Figure 4.5 Sampler and ZOH

Figure 4.6 Signal reconstruction by ZOH

The high frequencies present in the reconstructed signal are easily filtered out by the
various elements of the control system, because the control system is basically a low-pass filter.

In a first-order hold, the last two signal samples (current and previous sample) are used
to reconstruct the signal for the current sampling period. Similarly, higher order hold circuits
can be devised. First or higher-order hold circuits offer no particular advantage over the zero
order hold. In sampled data control systems, the zero-order hold used in conjunction with a
high sampling rate provides satisfactory performance; a sketch of this reconstruction is given
after the list below. An ideal sample/hold circuit introduces no distortion in the conversion
process. However, in practical sample/hold circuits the following problems may be encountered.

1. Errors in the periodicity of sampling process.


2. Non linear variations in the duration of sampling aperture.
3. Droop (changes) in the voltage held during conversion.
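A minimal sketch of the zero-order hold reconstruction described above (the sampled sine wave is an arbitrary illustrative signal):

import numpy as np

def zoh_reconstruct(samples, T, t):
    # ZOH output at continuous time t >= 0: hold the last received
    # sample for one full sampling period T.
    k = int(t // T)                    # index of the last received sample
    k = min(k, len(samples) - 1)
    return samples[k]

T = 0.1
samples = np.sin(2 * np.pi * 1.0 * np.arange(10) * T)   # sampled 1 Hz sine
print(zoh_reconstruct(samples, T, 0.25))  # equals samples[2]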

4.5 DISCRETE SEQUENCE (DISCRETE TIME SIGNAL)

A discrete sequence or discrete time signal, f(k), is a function of an independent
variable, k, which is an integer. It is important to note that a discrete time signal is not defined
at instants between two successive samples. Also, it is incorrect to think that f(k) is equal to
zero if k is not an integer; the signal f(k) is simply not defined for non-integer values of k. A
discrete-time signal is defined for every integer value of k in the range - ∞ < k < ∞. Since a
digital signal is represented by a set of numbers it is also called a sequence (i.e., the terms
signal and sequence both refer to the digital or discrete time signal).

METHODS OF REPRESENTING A DISCRETE TIME SIGNAL OR SEQUENCE

1. Functional representation

2. Graphical representation

The graphical representation of a discrete sequence is shown in Figure 4.7.

Figure 4.7 Graphical representation of a discrete time signal

3. Tabular representation

4. Sequence representation

An infinite duration signal or sequence with the time origin (k=0) indicated by the
symbol ↑ is represented as

f(k) = {….., 1, 2, 1, 4, 1, 0, 0, …..}

An infinite sequence f(k), which is zero for k < 0, may be represented as

f(k) = {2, 1, 4, 1, 0, 0, …..} (or) f(k) = {2, 1, 4, 1, …..}

A finite duration sequence with the time origin (k=0) indicated by the symbol ↑ is
represented as

f(k) = {3, -1, -2, 5, 0, 4}

A finite duration sequence that satisfies the condition f(k) = 0 for k < 0 may be
represented as

f(k) = {2, 1, 4, 1}


SOME ELEMENTARY DISCRETE TIME SIGNALS

1. Digital impulse signal or unit sample sequence

δ(k) = 1 for k = 0, and δ(k) = 0 for k ≠ 0

Figure 4.8 Digital impulse signal

An impulse delayed by k0 is δ(k - k0), which is 1 for k = k0 and 0 otherwise.

Figure 4.9 Delayed impulse signal

2. Unit step signal

u(k) = 1 for k ≥ 0, and u(k) = 0 for k < 0

Figure 4.10 Unit step signal

A unit step signal delayed by k0 is u(k - k0).

The unit step is related to the digital impulse by the summation relation,
u(k) = Σ δ(j), for j = -∞ to k

Figure 4.11 Delayed unit step signal

3. Ramp signal

r(k) = k for k ≥ 0, and r(k) = 0 for k < 0

Figure 4.12 Ramp signal

4. Exponential signal

f(k) = aᵏ for k ≥ 0, and f(k) = 0 for k < 0

Figure 4.13 Exponential signal

MATHEMATICAL OPERATIONS ON DISCRETE TIME SIGNALS

1. Shifting in time

A signal f(k) may be shifted in time by replacing the independent variable k by (k-m),
where m is an integer. If m is a positive integer, the time shift results in a delay by m units of
time. If m is a negative integer, the time shift results in an advance of the signal by |m| units in
time. The delay results in shifting each sample of f(k) to right. The advance results in shifting
each sample of f(k) to left.

Example

2. Folding or reflection or Transpose

The folding of a signal f(k) is performed by changing the sign of the time base k in the
signal f(k). The folding operation produces a signal f(-k) which is mirror image of f(k) with
respect to time origin k=0.

Example

3. Amplitude scaling or scalar multiplication

Amplitude scaling of a signal by a constant A is accomplished by multiplying the value


of every signal sample by A.

Let c(k) be amplitude scaled signal of f(k), then c(k) = Af(k)

4. Time scaling or down sampling

In a signal f(k), if k is replaced by µk, where µ is an integer, then the operation is called
time scaling or down sampling.

Example: If f(k) = aᵏ for k ≥ 0, then f1(k) = f(2k) = a²ᵏ for k ≥ 0.

5. Signal (or vector) addition

The sum of two signals f1(k) and f2(k) is a signal c(k), whose value at any instant is
equal to the sum of the samples of these two signals at that instant.

Example

6. Signal (or vector) multiplication

Signal multiplication results in the product of two signals on a sample-by-sample basis.


The product of two signals f1(k) and f2(k) is a signal c(k), whose value at any instant is equal
to the product of the sample of these two signals at that instant. The product is also called
modulation.

Example
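The operations above can be summarized in a short numerical sketch on arbitrary finite sequences:

import numpy as np

f = np.array([2, 1, 4, 1])                   # a finite sequence f(k), k = 0..3

delayed = np.concatenate(([0, 0], f))        # f(k-2): shift right by 2
folded = f[::-1]                             # f(-k) over the same support
scaled = 3 * f                               # amplitude scaling, A = 3
down = f[::2]                                # f(2k): down sampling
g = np.array([1, 0, 1, 0])
print(f + g)                                 # signal (vector) addition
print(f * g)                                 # signal (vector) multiplication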

4.6 z-TRANSFORM

Transform techniques are an important tool in the analysis of signals and linear time
invariant systems. The Laplace transforms are popularly used for analysis of continuous time
signals and systems. Similarly z-transform plays an important role in analysis and
representation of linear discrete time systems. The z-transform provides a method for the
analysis of discrete time systems in the frequency domain which is generally more efficient
than its time domain analysis.

DEFINITION OF Z-TRANSFORM

Let, f(k) = Discrete time signal or sequence


F(z) = z{f(k)} = z-transform of f(k)

The z-transform of a discrete time signal or sequence is defined as the power series

F(z) = Σ (k = -∞ to ∞) f(k) z^(-k) …4.10

where z is a complex variable.

The series of equ (4.10) is considered to be two sided and the transform is called the
two sided z-transform, since the time index k is defined for both positive and negative values.
If the sequence f(k) is a one sided sequence (i.e., f(k) is defined only for positive values of k),
then the z-transform is called the one sided z-transform.

The one sided z-transform of f(k) is defined as, F(z) = Σ (k = 0 to ∞) f(k) z^(-k)
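A quick numerical check of this definition (a sketch added for illustration): truncate the
series and compare it with a known closed form, here Z{a^k u(k)} = z/(z-a) for |z| > |a|.

import numpy as np

def one_sided_z(f, z, K=200):
    # truncated series  sum over k = 0 .. K-1 of f(k) z^(-k)
    k = np.arange(K)
    return np.sum(f(k) * z ** (-k.astype(float)))

a, z = 0.5, 2.0                                          # point inside the ROC |z| > a
print(one_sided_z(lambda k: a ** k.astype(float), z))    # series value
print(z / (z - a))                                       # closed form: 1.3333...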

REGION OF CONVERGENCE

Since the z-transform is an infinite power series, it exists only for those values of z for
which the series converges. The region of convergence (ROC) of F(z) is the set of all values
of z for which F(z) attains a finite value. The ROC of a finite-duration signal is the entire
z-plane, except possibly the point z = 0 and/or z = ∞. These points are excluded because z^k
(when k > 0) becomes unbounded for z = ∞ and z^(-k) (when k > 0) becomes unbounded for
z = 0.

The complex variable z can be expressed in polar form as,

z = r e^(jθ) …4.11

where r = |z| and θ = ∠z

On substituting for z from equ (4.11) in equ (4.10) we get,

…4.12

In the ROC of F(z), |F(z)| < ∞.

From equ (4.13) we observe that |F(z)| is finite if the sequence f(k) r^(-k) is absolutely
summable.

To find the ROC, the equ (4.13) can be expressed as,

…4.14

If F(z) converges in some region of the complex plane, both summations in equ (4.14)
must be finite.

If the first sum of equ (4.14) converges, there must exist values of r small enough for
f(-k) r^k to be absolutely summable. Hence the ROC for the first sum consists of all points
inside a circle of radius r1, as shown in Figure 4.14, where r1 > r.

If the second sum of equ (4.14) converges, there must exist values of r large enough for
f(k)/r^k to be absolutely summable. Hence the ROC for the second sum consists of all points
outside a circle of radius r2, as shown in Figure 4.15, where r2 < r.

Therefore, the ROC of F(z) is the annular region in between the two circles of radius r1
and r2, as shown in Figure 4.16, where r2 < r < r1.

Figure 4.14 ROC for the first sum; Figure 4.15 ROC for the second sum; Figure 4.16 ROC for F(z)

Table 4.1 Characteristic families of signals with their corresponding ROC

SIGNAL ROC
Finite-Duration Signals

Infinite-Duration Signals

Table 4.2 Properties of one-sided Z-transform

Property | Discrete sequence | z-transform

Linearity | a1 f1(k) + a2 f2(k) | a1 F1(z) + a2 F2(z)
Shifting, m ≥ 0 | f(k+m) | z^m [F(z) - Σ (i = 0 to m-1) f(i) z^(-i)]
Shifting, m ≥ 0 | f(k-m) | z^(-m) F(z)
Multiplication by k^m (or differentiation in z-domain) | k^m f(k) | (-z d/dz)^m F(z)
Scaling in z-domain (or multiplication by a^k) | a^k f(k) | F(a^(-1) z)
Time reversal | f(-k) | F(z^(-1))
Conjugation | f*(k) | F*(z*)
Convolution | h(k) * r(k) | H(z) R(z)
Initial value | f(0) = lim (z → ∞) F(z)
Final value | lim (k → ∞) f(k) = lim (z → 1) (1 - z^(-1)) F(z)

Table 4.3 Some Common one side Z-transform

f(t) ; t ≥ 0 | f(k) or f(kT) ; k ≥ 0 | F(z)

– | δ(k) | 1
u(t) or 1 | u(k) or 1 | z/(z-1)
– | a^k | z/(z-a)
– | k a^k | az/(z-a)^2
– | (k+1) a^k | z^2/(z-a)^2
t | kT | Tz/(z-1)^2
t^2 | (kT)^2 | T^2 z(z+1)/(z-1)^3
e^(-at) | e^(-akT) | z/(z - e^(-aT))
t e^(-at) | kT e^(-akT) | T z e^(-aT)/(z - e^(-aT))^2
sin ωt | sin ωkT | z sin ωT/(z^2 - 2z cos ωT + 1)
cos ωt | cos ωkT | z(z - cos ωT)/(z^2 - 2z cos ωT + 1)

Note: A two sided sequence can be converted to a one sided sequence by multiplying it by the unit step sequence u(k).

GEOMETRIC SERIES

A geometric series is a series in which consecutive elements differ by a constant ratio.


Such a series can be written in the form,

Σ (k = M1 to M2) C^k …4.17

where C is a constant and M1 and M2 are any two numbers.

If C is a complex number, where |C| < 1, then by Taylor’s series expansion we can
write,

1/(1 - C) = 1 + C + C^2 + C^3 + ….. …4.18

Applying the result in the reverse direction yields the infinite geometric series sum
formula

Σ (k = 0 to ∞) C^k = 1/(1 - C) ; |C| < 1 … 4.19

The equ (4.19) is the infinite geometric series sum formula.

We can also compute the sum of a finite number of elements in a geometric series. Let
us consider the following sum,

S = Σ (k = M1 to M2) C^k ; M2 ≥ M1 ...4.20

The sum of the finite duration sequence in equ (4.20) can be expressed as the difference
between the sum of two infinite duration sequence as shown in equ (4.21).

S = Σ (k = M1 to ∞) C^k - Σ (k = M2+1 to ∞) C^k …4.21

Σ (k = M1 to ∞) C^k = C^(M1)/(1 - C) and Σ (k = M2+1 to ∞) C^k = C^(M2+1)/(1 - C) …4.22

From equations (4.21) and (4.22) we can write,

S = Σ (k = M1 to M2) C^k = (C^(M1) - C^(M2+1))/(1 - C) …4.23

S = Σ (k = 0 to M2) C^k = (1 - C^(M2+1))/(1 - C) …4.24

The equation (4.23) and (4.24) are finite geometric series sum formula.

Note: The infinite geometric series sum formula requires that the magnitude of C be
strictly less than unity, but the finite geometric series sum formula is valid for any value of C (C ≠ 1).
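Both formulas are easy to confirm numerically (an illustrative sketch):

C = 0.8
print(sum(C ** k for k in range(1000)), 1 / (1 - C))    # infinite sum needs |C| < 1

M1, M2, C = 3, 10, 1.7                                  # |C| > 1 is allowed here
lhs = sum(C ** k for k in range(M1, M2 + 1))
rhs = (C ** M1 - C ** (M2 + 1)) / (1 - C)               # finite sum formula
print(lhs, rhs)                                         # the two values agree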

EXAMPLE 4.1

Determine the z-transform and their ROC of the following discrete sequence

(a) f(k) = {3, 2, 5, 7} (b) f(k) = {2, 4, 5, 7, 3}


SOLUTION

(a) Given that, f(k) = {3, 2, 5, 7}


i.e., f(0) = 3 ; f(1) = 2 ; f(2) = 5 ; f(3) = 7
and f(k) = 0 for k < 0 and for k > 3
By the definition of z-transform

The given sequence is a finite duration sequence, hence the limits of summation can be
changed as k = 0 to k = 3.

On expanding the summation we get,

Here F(z) is bounded (i.e., finite) except when z = 0, therefore the ROC is entire z-plane
except z = 0.

(b) Given that, f(k) = {2, 4, 5, 7, 3}



i.e., f(-2) = 2 ; f(-1) = 4 ; f(0) = 5 ; f(1) = 7 ; f(2) = 3
and f(k) = 0 for k < -2 and for k > 2
By the definition of z-transform

The given sequence is a finite duration sequence, hence the limits of summation can be
changed as k = -2 to k = 2.

On expanding the summation we get,

Here F(z) is bounded (i.e., finite) except when z = 0 and z = ∞, therefore the ROC is
entire z-plane except z = 0 and z = ∞.

EXAMPLE 4.2

Determine the z-transform and the ROC of the following discrete sequences.

(a) f(k) = u(k) (b) f(k) = (1/2)^k u(k) (c) f(k) = a^k u(-k-1)

SOLUTION

(a) Given that, f(k) = u(k)


u(k) is a discrete unit step sequence, which is defined as

By the definition of z-transform,

Here, F(z) is an infinite geometric series and it converges if |z| > 1 (i.e., |z^(-1)| < 1). Using the
infinite geometric series sum formula we get,

(b) Given that, f(k) = (1/2)k u(k)

u(k) is a discrete unit step sequence, which is defined as

By the definition of z-transform,

Here, F(z) is an infinite geometric series and it converges if |z| > 1/2 (i.e., |(1/2) z^(-1)| < 1). Using the
infinite geometric series sum formula we get,

(c) Given that, f(k) = a^k u(-k-1)
u(-k-1) is a folded and delayed discrete unit step sequence, which is defined as
u(-k-1) = 0 for k ≥ 0
        = 1 for k ≤ -1
∴ f(k) = 0 for k ≥ 0
       = a^k for k ≤ -1
By the definition of z-transform,

Using infinite geometric series sum formula we get,
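The closed forms obtained in this example can be cross-checked numerically (a sketch, using
the defining series directly):

def series(f, z, K=500):
    return sum(f(k) * z ** (-k) for k in range(K))

z = 3.0
print(series(lambda k: 1.0, z),      z / (z - 1))     # Z{u(k)} = z/(z-1), |z| > 1
print(series(lambda k: 0.5 ** k, z), z / (z - 0.5))   # Z{(1/2)^k u(k)}, |z| > 1/2

a, z = 2.0, 0.5                                       # part (c): anticausal, |z| < |a|
print(sum(a ** k * z ** (-k) for k in range(-500, 0)), -z / (z - a))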

EXAMPLE 4.3

Find the one sided z-transform of the following discrete sequences.

(a) f(k) = k a^(k-1) (b) f(k) = k^2

SOLUTION

(a) Given that f(k) = k a^(k-1)

The one sided z-transform of a^k is given by

…4.3.1

Using infinite geometric series sum formula,

…4.3.2

From equation (4.3.1) and (4.3.2) we get

On expanding the summation in the above equation, we get,

…4.3.3

On differentiating the equation (4.3.3) we get,

…4.3.4

On multiplying the equation (4.3.4) by –(z/a) we get,

…4.3.5

The infinite series on the left hand side of equ (4.3.5) can be expressed as a
summation and the equ (4.3.5) is written as shown below.

…4.3.6

By the definition of z-transform, the one sided z-transform of k a^(k-1) is given by,

…4.3.7

(Because k a^(k-1) = 0 when k = 0)

On comparing equations (4.3.6) and (4.3.7) we get,

(b) Given that, f(k) = k2


Let us multiply the given discrete sequence by a discrete unit step sequence,
f(k) = k2 u(k)
Note: Multiplying a one sided sequence by u(k) will not alter its value.
By the property of z-transform, we get,
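The standard results for these two sequences, Z{k a^(k-1)} = z/(z-a)^2 and
Z{k^2} = z(z+1)/(z-1)^3, can be verified numerically (a sketch added for illustration):

def series(f, z, K=500):
    return sum(f(k) * z ** (-k) for k in range(K))

a, z = 0.8, 1.5
print(series(lambda k: k * a ** (k - 1), z), z / (z - a) ** 2)        # Z{k a^(k-1)}
print(series(lambda k: k ** 2, z),           z * (z + 1) / (z - 1) ** 3)  # Z{k^2}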

EXAMPLE 4.4

Find the one sided z-transform of the discrete sequence generated by mathematically
sampling the following continuous time functions

(a) t2 (b) sin t (c) cos t

SOLUTION

(a) Given that, f(t) = t2

The discrete sequence is generated by replacing t by kT, where T is the sampling time
period.

f(k) = (kT)^2 = T^2 k^2 = T^2 g(k), where g(k) = k^2

By the definition of one sided z-transform we get,

By the property of z-transform we get,

(b) Given that, f(t) = sint


The discrete sequence is generated by replacing t by kT, where T is the sampling time
period.

f(k) = sin (kT)

By the definition of one sided z-transform.

We know that, sin θ = (e^(jθ) - e^(-jθ))/2j

We know that,

We know that, sin θ = (e^(jθ) - e^(-jθ))/2j and cos θ = (e^(jθ) + e^(-jθ))/2

(c) Given that, f(t) = cos ωt

The discrete sequence is generated by replacing t by kT, where T is the sampling
time period.

f(k) = cos (ωkT)

By the definition of one sided z-transform,

We know that, cos θ = (e^(jθ) + e^(-jθ))/2

We know that

We know that, cos θ = (e^(jθ) + e^(-jθ))/2
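The resulting transform pairs, Z{sin ωkT} = z sin ωT/(z^2 - 2z cos ωT + 1) and
Z{cos ωkT} = z(z - cos ωT)/(z^2 - 2z cos ωT + 1), can be checked numerically
(an illustrative sketch):

import math

def series(f, z, K=400):
    return sum(f(k) * z ** (-k) for k in range(K))

w, T, z = 2.0, 0.1, 1.2
d = z * z - 2 * z * math.cos(w * T) + 1                 # common denominator
print(series(lambda k: math.sin(w * k * T), z), z * math.sin(w * T) / d)
print(series(lambda k: math.cos(w * k * T), z), z * (z - math.cos(w * T)) / d)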

EXAMPLE 4.5

Find the one sided z-transform of the discrete sequence generated by mathematically
sampling the following continuous time function,

(a) e-at cos t (b) e-at sin t

SOLUTION

(a) Given that, f(t) = e^(-at) cos ωt

The discrete sequence is generated by replacing t by kT, where T is the sampling time
period.

f(k) = e^(-akT) cos ωkT

By the definition of one sided z-transform we get,

From infinite geometric sum series formula we know that,

(b) Given that, f(t) = e^(-at) sin ωt

The discrete sequence f(k) is generated by replacing t by kT, where T is the sampling
time period.

f(k) = e^(-akT) sin ωkT

By the definition of one sided z-transform we get,

From infinite geometric sum series formula we know that,
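The standard damped-sinusoid pairs being derived here can be confirmed numerically
(a sketch; the closed forms below are the usual table results, with pole radius e^(-aT)):

import math

def series(f, z, K=400):
    return sum(f(k) * z ** (-k) for k in range(K))

a, w, T, z = 0.5, 2.0, 0.1, 1.1
p = math.exp(-a * T)
d = z * z - 2 * z * p * math.cos(w * T) + p * p
print(series(lambda k: math.exp(-a*k*T) * math.cos(w*k*T), z),
      z * (z - p * math.cos(w * T)) / d)
print(series(lambda k: math.exp(-a*k*T) * math.sin(w*k*T), z),
      z * p * math.sin(w * T) / d)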

INVERSE z-TRANSFORM

The following methods are employed to recover the original discrete sequence from
its z-transform.

1. Direct evaluation by contour integration (or) complex inversion integral.


2. Partial fraction expansion.
3. Power series expansion.

The inverse z-transform by partial fraction expansion method and power series
expansion method are presented in this section. The inverse z-transform by contour integration
is beyond the scope of the book.

PARTIAL FRACTION EXPANSION METHOD

Let f(k) = Discrete sequence


and F(z) = Z{f(k)} = z-transform of f(k).

The function F(z) can be expressed as a ratio of two polynomials in z as shown below.

The function F(z) can be expressed as a series of sum terms by partial fraction
expansion technique.

…4.25

where A0 is a constant, A1, A2,…An are residues and p1, p2,….pn are poles of F(z).

Note: Sometimes it will be convenient to express F(z)/z as a series of sum terms instead
of F(z).

Once the function F(z) is expressed as a series of sum terms, the inverse z-transform
of F(z) is given by the sum of the inverse z-transforms of each term in equ (4.25). [The inverse
z-transform of each term of equ (4.25) can be obtained from standard z-transform pairs.]

The coefficients of the polynomials of F(z) are assumed real and so the roots of the
polynomial are real and/or complex conjugate pairs (i.e., complex roots will occur only in
conjugate pairs). Hence on factorizing the denominator polynomial we get the following cases.
(The roots of the denominator polynomial are the poles of F(z).)

Case (i) : When roots (or poles) are real and distinct
Case (ii) : When roots (or poles) have multiplicity
Case (iii) : When roots (or poles) are complex conjugate.

Case (i) : When roots (or poles) are real and distinct

In this case F(z) can be expressed as,

where A0 is a constant ; A1, A2 …. An are residues and P1, P2, …. Pn are poles.

The constant A0 is present when m = n (i.e., when the order of numerator and
denominator polynomial are equal). The value of A0 is obtained by dividing the numerator
polynomial by denominator polynomial.

The residue Ai is evaluated by multiplying both sides of F(z) by (z + pi) and letting
z = -pi.

Case (ii) When roots (or poles) have multiplicity

Let one of the poles have a multiplicity of q (i.e., it repeats q times). In this case F(z) can be
expressed as,

where Ax0, Ax1, ..... Ax(q-1) are residues of repeated root (or pole), z = -px.
The constant A0 and residues of distinct real roots are evaluated as explained in case(i).
The residue Axr of repeated root is obtained as shown below.

Case (iii) When roots (or poles) are complex conjugate

Let F(z) has one pair of complex conjugate pole. In this case F(z) can be expressed as,

The constant A0 and residues of real and non-repeated roots are evaluated as explained
in case (i).

The residue Ax is evaluated as in case (i), and the residue Ax* is the complex conjugate of Ax.
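In practice the residues can also be obtained with SciPy, which expands rational functions of
z^(-1) (an illustrative sketch; scipy.signal.residuez is a library routine, not part of this text).
For F(z) = z/((z-1)(z-0.5)) = z^(-1)/(1 - 1.5 z^(-1) + 0.5 z^(-2)):

from scipy.signal import residuez

b = [0.0, 1.0]             # numerator coefficients in powers of z^(-1)
a = [1.0, -1.5, 0.5]       # denominator coefficients in powers of z^(-1)
r, p, k = residuez(b, a)
print(r, p)                # residues [2, -2] at poles [1, 0.5] (up to ordering)
# F(z) = 2z/(z-1) - 2z/(z-0.5), hence f(k) = 2 u(k) - 2 (0.5)^k, k >= 0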

POWER SERIES EXPANSION METHOD

Let f(k) = Discrete sequence


and F(z) = Z{f(k)} = z-transform of f(k).
By the definition of z-transform we get,

On expanding the summation we get,

…4.26

If the given function F(z) can be expressed as a power series of z^(-1) by long division, then
on comparing the coefficients of the power series with that of equ (4.26), the samples of f(k)
are determined [i.e., the coefficient of z^(-i) is the ith sample f(i) of the sequence f(k)].

Note: The different methods of evaluating the inverse z-transform of a function F(z) may
result in different types of mathematical expressions, but on evaluating the expressions for each
value of k we get the same sequence.
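A minimal long-division sketch (illustrative) that produces the samples f(k) one by one for the
same F(z) used above:

def power_series(num, den, n_terms=8):
    # num, den: coefficient lists in increasing powers of z^(-1)
    num = list(num) + [0.0] * n_terms
    f = []
    for i in range(n_terms):
        c = num[i] / den[0]          # next quotient coefficient = f(i)
        f.append(c)
        for j, d in enumerate(den):  # subtract c * den, shifted by i
            if i + j < len(num):
                num[i + j] -= c * d
    return f

# F(z) = z/((z-1)(z-0.5)) = z^(-1)/(1 - 1.5 z^(-1) + 0.5 z^(-2))
print(power_series([0, 1], [1, -1.5, 0.5]))
# -> [0.0, 1.0, 1.5, 1.75, 1.875, ...] i.e. f(k) = 2 - 2 (0.5)^k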

EXAMPLE 4.6

Determine the inverse z-transform of the following function,

SOLUTION

By partial fraction expansion, F(z) / z can be expressed as

We know that

On taking inverse z-transform of F(z) we get,

f(k) = 2 u(k) + (0.5)^k ; k ≥ 0

(Here we consider only one sided z-transform)

By partial fraction expansion, we can write,

We know that

On taking inverse z-transform of F(z) we get,

By partial fraction expansion, we can write.

On taking inverse z-transform of F(z) we get,

By partial fraction expansion, we can write,

On taking inverse z-transform of F(z) we get,

EXAMPLE 4.7

Determine the inverse z-transform of the following z-domain functions.

SOLUTION

By time shifting property we get,

On taking inverse z-transform of F(z) we get,

Note: The term 2^(k-1) is multiplied by u(k-1), because this term has samples only for
k ≥ 1.

By time shifting property,

On taking inverse z-transform of F(z) we get,

Note: The term δ(k-1) is multiplied by u(k-1), because these terms have samples only for
k ≥ 1.

On taking inverse z-transform of F(z) we get,

Note: Since the term a^(k-1) is valid only for k ≥ 1, it is multiplied by u(k-1).

By time shifting property we get,

On taking inverse z-transform of F(z) we get,

Note: Since the term a^(k-1) is valid only for k ≥ 1, it is multiplied by u(k-1).

EXAMPLE 4.8

Determine the inverse z-transform of

when (a) ROC : |z| > 1.0 and (b) ROC : |z| < 0.5.

SOLUTION

(a) Since the ROC is the exterior of a circle, we expect f(k) to be a causal signal. Hence we
can express F(z) as a power series expansion in negative powers of z. On dividing the
numerator of F(z) by its denominator we get,

…4.8.1

If F(z) is z-transform of f(k) then, by the definition of z-transform we get,

For a causal signal,

On expanding the summation we get,

….4.8.2

On comparing the two power series of F(z) [i.e., equ (4.8.1) & (4.8.2)], we get,

(b) Since the ROC is the interior of a circle, we expect f(k) to be an anticausal signal.
Hence we can express F(z) as a power series expansion in positive powers of z. Therefore,

rewrite the denominator polynomial of F(z) in the reverse order and then divide the numerator
by the denominator as shown below.

…4.8.3

If F(z) is z-transform of f(k) then, by the definition of z-transform we get,

On expanding the summation we get,

…4.8.4

On comparing the two power series of F(z) [i.e., equ (4.8.3) & (4.8.4)], we get,

4.7 LINEAR DISCRETE TIME SYSTEMS

A discrete-time system is a device or algorithm that operates on a discrete-time signal


called the input or excitation, according to some well-defined rule, to produce another discrete-
time signal called the output or the response of the system. We can say that the input signal
r(k) is transformed by the system into a signal c(k) and expressed as

where H denotes the transformation (also called an operator).

Figure 4.17

A discrete time system is linear if it obeys the principle of superposition, and it is time
invariant if its input-output relationship does not change with time.

When the input to a discrete time system is a unit impulse δ(k), the output is called the
impulse response of the system and is denoted by h(k).

Figure 4.18

…4.28

A linear-time invariant discrete time system is characterized by its impulse response


h(k) and so the impulse response h(k) is also called weighting sequence.

The input-output description of a discrete-time system consists of mathematical


expression or a rule, which explicitly defines the relation between the input and output signals
(input-output relationship). It is denoted by

…4.29

The input-output relationship of a linear time invariant discrete time system (LDS) can
be expressed by the Nth order constant coefficient difference equation given below.

…4.30

The integer N is called the order of the system and M ≤ N.

Here c(k-m) are past outputs, r(k-m) are past inputs, r(k) is the present input, and am and bm
are constant coefficients.

ANALYSIS OF LINEAR DISCRETE TIME SYSTEM (LDS)

There are two methods of analysing the behaviour or response of LDS systems.

Method 1

The input-output relation of the LDS system is governed by the constant coefficient
difference equation of the form shown in equ (4.30). Mathematically the direct solution of
equation (4.30) can be obtained to analyse the performance of the system.

Method 2

The given input signal is first decomposed or resolved into a sum of elementary signals.
Then using the linearity property of the system, the responses of the system to the elementary
signals are added to obtain the total response of the system to the given input signals.

Resolution of discrete time signal (or sequence) into impulses

Let r(k) = Discrete time signal


δ(k) = Unit impulse signal
and δ(k-m) = Delayed unit impulse signal
Consider the product of r(k) and δ(k-m),

r(k) δ(k-m) = r(m) δ(k-m) …4.31

…4.32

The product r(k) δ(k-m) has zero everywhere except at k = m. The value of the signal
at k = m is the mth sample of the signal r(k) and it is denoted by r(m). Therefore each
multiplication of the signal r(k) by an unit impulse at some delay m, in essence picks out the
signal value r(m) of the signal r(k) at k = m, where the unit impulse is non zero. Consequently
if we repeat this multiplication over all possible delays in the range of, 0 ≤ m < ∞ and sum all
the product sequences, the result will be a sequence that is equal to the sequence r(k). Hence
r(k) can be expressed as

r(k) = Σ (m = 0 to ∞) r(m) δ(k-m) …4.33

Note: Each product r(k) δ(k-m) is an impulse and the summation of impulses give r(k). Here
r(k) is considered as one sided sequence. If r(k) is two sided sequence then the range of
m is -∞ to +∞.

RESPONSE OF LDS SYSTEM TO ARBITRARY INPUT – THE CONVOLUTION SUM

In an LDS system the response c(k) of the system for an arbitrary input r(k) is given by the
convolution of the input r(k) with the impulse response h(k) of the system. It is expressed as

…4.34

where the symbol * represents convolution operation.

Proof

Let c(k) be the response of the system H for an input r(k). [Let r(k) be a one sided sequence.]

…4.35

The signal r(k) can be expressed as a summation of impulses as,

…4.36

where δ(k-m) is the delayed unit impulses signal.

From equation (4.35) and (4.36) we get,

…4.37

The system H is a function of k and not a function of m. Hence by linearity property


the equ (4.37) can be written as,

…4.38

Let the response of the LDS system to the unit impulse input δ(k) be denoted by h(k).

…4.39

Then by time invariance property the response of the system to the delayed unit
impulse input δ(k-m) is

…4.40

Using equ (4.40), the equ (4.38) can be expressed as

…4.41

The equation of c(k) [equ(4.41)] is called convolution sum. We can say that the input
r(k) is convoluted with the impulse response h(k) to yield the output c(k).

…4.42

PROPERTIES OF CONVOLUTION

Commutative property : r(k) * h(k) = h(k) * r(k)


Associative property : [r(k) * h1(k)] * h2(k) = r(k) * [h1(k) * h2(k)]
Distributive property : r(k) * [h1(k) + h2(k)] = [r(k) * h1(k)] + [r(k) * h2(k)]
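A short numerical illustration of the convolution sum and the commutative property (a sketch
added here, with assumed example sequences):

import numpy as np

r = np.array([1.0, 2.0, 3.0])             # input r(k)
h = np.array([1.0, 0.5, 0.25, 0.125])     # impulse response h(k)
print(np.convolve(r, h))                  # c(k): [1. 2.5 4.25 2.125 1. 0.375]
print(np.convolve(h, r))                  # commutative: identical result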

4.8 TRANSFER FUNCTION OF LDS SYSTEM (PULSE TRANSFER FUNCTION)

The transfer function of LDS system is given by z-transform of its impulse response.
The transfer function of LDS system is also called z-transfer function or pulse transfer function.

Let h(k) = Impulse response of a LDS system

Now, z-transform of h(k) = Z{h(k)} = H(z)

∴ Transfer function of LDS system = H(z) …4.43

The input-output relationship of an LDS system is governed by the convolution sum of
equ (4.42). By taking the z-transform of this convolution sum it can be shown that H(z) is given
by the ratio C(z)/R(z), where C(z) is the z-transform of the output c(k) of the LDS system and
R(z) is the z-transform of the input r(k) to the LDS system.

Proof

By the definition of one sided z-transform.

…4.44

From equ (4.42), we get

On substituting this convolution sum in equ (4.44) we get,

…4.45

The order of summation in equ (4.45) can be interchanged. Therefore equ (4.45) can be
written as

…4.46

Let, p = (k – m), ∴ when k = 0, p = -m

and when k = ∞, p = ∞
Also, k = p + m

On replacing (k – m) by p in equ (4.46) we get

…4.47

By the definition of one sided z-transform,

Hence equation (4.47) can be written as

…4.48

From equ (4.48) we can conclude that the transfer function of the system is given by
the ratio C(z) / R(z).

From the above analysis we can define the transfer function of the LDS system as the
ratio of the z-transform of the output of a system to the z-transform of the input to the system
with zero initial conditions.

Let r(k) = Input of LDS system


and c(k) = Output of a LDS system
Now, Z{r(k)} = R(z) and Z{c(k)} = C(z)
Transfer function of LDS system = C(z)/R(z) …4.49

The input-output relation of LDS system is governed by the constant coefficient


difference equation.

…4.50
where N is the order of the system and M ≤ N.
On taking z-transform of equ (4.50) we get,
[By time shifting property, Z{c(k-m)} = z-m. C(z) and Z{r(k-m)} = z-mR(z)]

…4.51

On expanding the equ (4.51) with M = N, we get,

…4.52

From the above discussions it is evident that the transfer function of the LDS system
can be obtained by taking z-transform of the difference equation governing the system.
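For instance (a sketch with an assumed first-order system, not an example from the text), the
difference equation c(k) - 0.5 c(k-1) = r(k) gives H(z) = 1/(1 - 0.5 z^(-1)) = z/(z - 0.5),
which can be simulated directly:

import numpy as np
from scipy.signal import lfilter

b, a = [1.0], [1.0, -0.5]      # numerator / denominator in powers of z^(-1)
r = np.ones(6)                 # unit step input
print(lfilter(b, a, r))        # -> [1. 1.5 1.75 1.875 1.9375 1.96875]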

EXAMPLE 4.9

The input-output relation of a sampled data system is described by the equation


c(k + 2) + 3 c(k + 1) + 4c(k) = r(k + 1) – r(k).
Determine the z-transfer function. Also obtain the weighting sequence of the system.

SOLUTION

Let R(z) = Z{r(k)} and C(z) = Z{c(k)}


By time shifting property, when initial conditions are zero, we get,
Z{c(k+m)} = zm C(z) and Z{r(k+m)} = zm R(z)
Given that, c(k+2) + 3 c(k+1) + 4c(k) = r(k+1) – r(k)
On taking z-transform of the above equation we get,

The weighting sequence is the impulse response, h(k) of the system. It is given by
inverse z-transform of H(z).

By partial fraction technique H(z) can be expressed as

On taking inverse z-transform of H(z) we get,

EXAMPLE 4.10

Solve the difference equation c(k+2) + 3 c(k+1) + 2 c(k) = u(k)

Given that c(0) = 1 ; c(1) = -3 ; c(k) = 0 for k < 0

SOLUTION

Let Z{c(k)} = C(z) and Z {u(k)} = U(z)

Since u(k) is unit step signal,

We know that, if F(z) = Z{f(k)} then

Given that, c(k+2) + 3 c(k+1) + 2 c(k) = u(k)

On taking z-transform of the above equation we get,

Z{c(k+2)} + Z{3 c(k+1)} + Z{2 c(k)} = Z {u(k)}

On substituting the initial conditions, c(0) = 1 and c(1) = -3, we get,

By partial fraction expansion technique we can write C(z)/z as,

On taking inverse z-transform of C(z) we get,

The above equation of c(k) is the solution of the given difference equation.
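The solution can be cross-checked by simply running the recursion implied by the difference
equation (a sketch; the closed form shown is our own partial-fraction working, included only
for comparison):

c = [1.0, -3.0]                            # c(0) = 1, c(1) = -3
for k in range(8):
    c.append(-3.0 * c[k + 1] - 2.0 * c[k] + 1.0)   # u(k) = 1 for k >= 0

closed = [-(3/2) * (-1)**k + (7/3) * (-2)**k + 1/6 for k in range(10)]
print(c)
print(closed)                              # the two lists agree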

4.9 ANALYSIS OF SAMPLER AND ZERO – ORDER HOLD

Consider a pulse sampler with zero-order hold (ZOH) shown in Figure 4.19. Let the
output of the sampler be a pulse train of finite pulse width. For each input pulse, the ZOH
produces a pulse of duration T, where T is the sampling period.

Figure 4.19 Pulse sampler with ZOH Figure 4.20 Equivalent representation
pulse sampler with ZOH

It can be proved that the output of the pulse sampler with ZOH can be produced by the impulse
sampled f(t) when passed through a transfer function,

…4.53

Hence the pulse sampler with ZOH can be replaced by an equivalent system consisting
of an impulse sampler and a block with transfer function, (1 – e-sT)/s as shown in Figure 4.20.
This equivalent representation offers easier analysis of sampled data control systems.

FREQUENCY RESPONSE CHARACTERISTICS OF ZERO ORDER HOLDING DEVICE

The sinusoidal transfer function of the ZOH can be obtained from G0(s) by replacing
s by jω.
…4.54

We know that, …4.55

Hence from equation (4.54) and (4.55) we get,

…4.56
We know that, sampling frequency, ωs = 2π/T

On substituting T = 2π/ωs in equ (4.56) we get,

…4.57

…4.58

The frequency response characteristics consists of magnitude response and phase


response characteristics. The magnitude and phase response of ZOH device are given by
equations (4.57) and (4.58) respectively. The Figure (4.21) shows the frequency response curve
of ZOH device. From the frequency response curve we can conclude that ZOH device has low
pass filtering characteristics.
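The magnitude and phase of equ (4.54) are easy to evaluate numerically (an illustrative
sketch):

import numpy as np

T = 1.0
w = np.linspace(1e-6, 4 * np.pi / T, 500)       # avoid w = 0
G0 = (1 - np.exp(-1j * w * T)) / (1j * w)       # G0(jw) of the ZOH

print(np.abs(G0)[0])          # -> about T at low frequency (gain T as w -> 0)
print(np.angle(G0)[:3])       # phase is approximately -wT/2 at low frequency

The magnitude falls off with frequency and is zero at multiples of the sampling frequency,
which is consistent with the low pass filtering behaviour noted above.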

Fig.a. Magnitude response of ZOH device Fig.b. Phase response of ZOH device

Figure 4.21 Frequency response of ZOH device

4.10 ANALYSIS OF SYSTEM WITH IMPULSE SAMPLING

Consider a linear continuous time system fed from an impulse sampler as shown in
Figure 4.22a. Let H(s) be the transfer function of the system in s-domain. In such a system we
are interested in reading the output at sampling instants. This can be achieved by means of a
mathematical sampler or read-out sampler.

Fig. 4.22a Fig. 4.22b

Figure 4.22 Linear continuous time system with impulse sampled input

For the system shown in Figure 4.22b, it can be shown that the z-domain transfer
function H(z) can be directly obtained from s-domain transfer function by taking z-transform
of H(s)

i.e ., H(z) = Z{H(s)} …4.59

The Figure 4.23 shows the z-transform equivalent of the s-domain system of Figure 4.22b.

Figure 4.23: The z-transform equivalent of the system shown in Figure 4.22b

The output in z-domain is given by, C(z) = H(z) R(z) …4.60

Procedure to find z-transfer function from s-domain transfer function

1. Determine h(t) from H(s), where h(t) = L^(-1){H(s)}

2. Determine the discrete sequence h(kT) by replacing t by kT in h(t)
3. Take the z-transform of h(kT), which is the z-transfer function of the system (i.e., H(z) =
Z{h(kT)}). A sketch implementing these steps is given below.
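For H(s) = 1/(s + a) (values a = 2, T = 0.1 assumed for illustration): h(t) = e^(-at),
h(kT) = e^(-akT), and H(z) = z/(z - e^(-aT)) by the geometric series. Numerically:

import math

a, T, z = 2.0, 0.1, 1.5
series = sum(math.exp(-a * k * T) * z ** (-k) for k in range(200))
print(series, z / (z - math.exp(-a * T)))      # the two values agree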

Table 4.4 Laplace and Z-transformations

H(s) H(z)

Alternatively, by partial fraction technique if H(s) can be expressed as a summation of


first order terms then using standard transform pairs listed in Table 4.4, the z-transform of H(s)
can be directly obtained.

Fig. 4.24a Fig. 4.24b Fig. 4.24c

Fig. 4.24

Consider a continuous time system with transfer function H(s) as shown in Figure
4.24a. Let the input r(t) be a continuous time input. To read the continuous output at sampling
instants, let us imagine a mathematical sampler at the output stage.

The system shown in Figure 4.24a can be equivalently represented by a block of H(s)
R(s) with impulse input δ(t) as shown in Figure 4.24b. Now the input, and hence the output,
does not change by imagining a fictitious impulse sampler through which δ(t) is applied to
H(s) R(s) as shown in Figure 4.24c. For such a system we can prove that

…4.61

Hence, if C(s) = H(s) R(s) then C(z) = Z{H(s) R(s)} = HR(z) …4.62

The function Z{H(s) R(s)} is also denoted as HR(z).

When the impulse sampled input is applied to two or more s-domain transfer function
in cascade as shown in Figure 4.25a, then z-transfer function of the system is given by

…4.63

…4.64

The function Z{H1(s) H2(s)} is also denoted as H1H2(z). The equivalent z-domain
system is shown in Figure 4.25b.

Figure 4.25a Figure 4.25b

Consider a system in which an impulse sampler is introduced at the input of each block as
shown in Figure 4.26a.

Figure 4.26a

Now the z-transfer function of the system is given by,
Now the z-transfer function of the system is given by,

H(z) = H1(z) H2(z)

where H1(z) = Z{H1((s)} and H2(z) = Z{H2(s)}


and C(z) = H1(z) H2(z) R(z)
where R(z) = Z{R(s)} and R(s) = L[r(t)].

The equivalent z-domain system is shown in Figure 4.26b.

EXAMPLE 4.11

Determine the z-domain transfer function for the following s-domain transfer functions.

SOLUTION

The discrete sequence h(kT) is obtained by letting t = kT in h(t)

h(kT) = a kT e^(-akT)

z-transfer function, H(z) = Z{h(kT)}

Let f(k) = e^(-akT), F(z) = Z{f(k)}

By the definition of z-transform,

The discrete sequence h(kT) is obtained by letting t = kT in h(t)

h(kT) = cos kT

[Refer Table 4.3 and example 4.4(c)]

By partial fraction expansion,

The discrete sequence h(kT) is obtained by letting t = kT in h(t)

By the definition of one sided z-transform,

From infinite geometric sum series formula we know that,

The discrete sequence h(kT) is obtained by letting t = kT in h(t)

h(kT) = e^(-bkT) cos akT

z-transfer function, H(z) = Z{h(kT)} = Z{e^(-bkT) cos akT}

From example 4.5(a) we get,

The discrete sequence h(kT) is obtained by letting t = kT in h(t)

h(kT) = e^(-bkT) sin akT

z-transfer function, H(z) = Z{h(kT)} = Z{e^(-bkT) sin akT}

From example 4.5(b) we get,

4.11 ANALYSIS OF SAMPLED DATA CONTROL SYSTEMS USING z-TRANSFORM

The analysis of sampled data control systems is performed using the concepts
developed in sections 4.9 and 4.10. The following points serve as guidelines to determine the
output in z-domain and hence the z-transfer function of sampled data control systems.

1. The pulse sampling is approximated as impulse sampling.

2. The ZOH is replaced by a block with transfer function, G0(s) = (1 - e^(-sT))/s.

3. When the input to a block is impulse sampled signal then the z-transform of the
output of the block can be obtained from the z-transform of the input and z-
transform of the s-domain transfer function of the block. In determining the
output of a block one may come across the following cases.

Case (i) The impulse sampler is located at the input of a block as shown in Figure 4.27.

Figure 4.27

In this case, C(z) = G(z) R(z) …4.67

Here, G(z) = Z{G(s)} ; R(z) = Z{R’(s)} and R’(s) = L[r’(t)]

Case (ii) The impulse sampler is located at the input of two s-domain cascaded blocks
as shown in Figure 4.28.

Figure 4.28

In this case, C(z) = Z{G1(s) G2(s)} R(z) = G1G2(z) R(z) …4.68

Case (iii) The impulse sampler is located at the input of each blocks as shown in Figure
4.29.

Figure 4.29

In this case, C(z) = G1(z) G2(z) R(z) …4.69

Here, G1(z) = Z{G1(s)} and G2(z) = Z{G2(s)}

Case (iv) The impulse sampler is located at the input of ZOH in cascade with G(s) as
shown in Figure 4.30.

Figure 4.30

In this case, C(z) = Z{G0(s) G(s)} R(z) = (1 - z^(-1)) Z{G(s)/s} R(z) …4.70
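For G(s) = 1/(s + 1), working equ (4.70) by partial fractions gives G(s)/s = 1/s - 1/(s + 1),
so (1 - z^(-1)) Z{G(s)/s} = (1 - e^(-T))/(z - e^(-T)). SciPy's ZOH discretization reproduces
this (an illustrative sketch; cont2discrete is a library routine, not part of this text):

from scipy.signal import cont2discrete

T = 1.0
numd, dend, _ = cont2discrete(([1.0], [1.0, 1.0]), T, method='zoh')
print(numd, dend)     # ~[[0.6321]] and [1. -0.3679]: (1 - e^-T)/(z - e^-T)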

The Table 4.5 shows some configurations of the closed loop sampled data control
systems and their corresponding z-domain outputs.

Table 4.5

Closed loop sampled data control system | Output in z-domain

EXAMPLE 4.12

Find C(z) / R(z) for the following closed loop sampled data control systems. Assume
all the samplers to be of impulse type.

Figure 4.12a Figure 4.12b Figure 4.12c

SOLUTION

(a) The ZOH in the system is replaced by G0(s) as shown in Figure 4.12.1, where G0(s) =
(1 - e^(-sT))/s

Let e(t) = Error signal

e'(t) = Impulse sampled error signal
b(t) = Feedback signal

Figure 4.12.1 Figure 4.12.2a Figure 4.12.2b

The input to the cascaded blocks of G0(s) and G(s) is an impulse sampled signal as
shown in Figure 4.12.2a. It’s z-domain equivalent is shown in Figure 4.12.2b.

From Figure 4.12.2b we get, C(z) = Z{G0(s) G(s)} E(z) …4.12.1

Here, C(z) = Z {C(s)} ; E(z) = Z {E’(s)} ; C(s) = L[c(t)] and E’(s) = L[e’(t)]

The input to the cascaded blocks of G0(s), G(s) and H(s) is an impulse sampled signal
as shown in Figure 4.12.3a. It’s z-domain equivalent is shown in Figure 4.12.3b.

Figure 4.12.3a Figure 4.12.3b

From Figure 4.12.3b we get, B(z) = Z{G0(s) G(s) H(s)} E(z) …4.12.2
Here, B(z) = Z {B(s)} and B(s) = L[b(t)]
With reference to Figure 4.12.1, at the summing point we get,
e(t) = r(t) – b(t) …4.12.3

Since e’(t) = e(kT) is an impulse sampled signal, by superposition principle the equation
(4.12.3) can be written as,

e(kT) = r(kT) – b(kT) …4.12.4


where e(kT), r(kT) and b(kT) are impulse sampled signals of e(t), r(t) and b(t)
respectively.

On taking z-transform of equ (4.12.4) we get,

…4.12.5

Where R(z) = Z{R(s)} and R(s) = L[r(t)]


On substituting for B(z) from equ (4.12.2) in equ (4.12.5) we get,

…4.12.6

From equations (4.12.1) and (4.12.6) the z-transfer function or pulse transfer function,
C(z)/R(z) can be written as,

…4.12.7

Here, Z{G0(s) G(s)}is denoted as G0G(z) and Z{G0(s) G(s) H(s)} is denoted as
G0GH(z).

(b) The input to the block G2(s) is an impulse sampled signal as shown in Figure 4.12.4a.
Its z-domain equivalent is shown in Figure 4.12.4b.

Figure 4.12.4a Figure 4.12.4b

From Figure 4.12.4b we get, C(z) = G2(z) D(z) …4.12.8

where C(z) = Z{C(s)} ; G2(z) = Z{G2(s)} ; D(z) = Z{D'(s)} ; C(s) = L[c(t)] and D'(s)
= L[d'(t)]

The input to the block G1(s) is an impulse sampled signal as shown in Figure 4.12.5a.
Its z-domain equivalent is shown in Figure 4.12.5b.

Figure 4.12.5a Figure 4.12.5b

From Figure 4.12.5b we get, D(z) = G1(z) E(z) …4.12.9

From equations (4.12.8) and (4.12.9) we get,

C(z) = G2(z)G1(z) E(z) …4.12.10

where G1(z) = Z{G1(s)} ; E(z) = Z{E'(s)} and E'(s) = L[e'(t)]

The input to the cascaded blocks G2(s) and H(s) is an impulse sampled signal as shown
in Figure 4.12.6a. It’s z-domain equivalent is shown in Figure 4.12.6.b.

Figure 4.12.6a Figure 4.12.6b

From Figure 4.12.6b we get,


B(z) = Z{G2(s) H(s)} D(z) …4.12.11

On substituting for D(z) from equ (4.12.9) in equ (4.12.11) we get,

B(z) = Z{G2(s) H(s)} G1(z) E(z) …4.12.12

With reference to Figure 4.12b, at the summing point we get,

e(t) = r(t) – b(t) …4.12.13

Since e(t) = e(Kt) is an impulse sampled signal, by superposition principle the equation
(4.12.13) can be written as,

e(kT) = r(kT) – b(kT) …4.12.14

where e(kT), r(kT) and b(kT) are impulse sampled signals of e(t), r(t) and b(t) respectively.

On taking z-transform of equ (4.12.14) we get.


E(z) = R(z) – B(z)

R(z) = E(z) + B(z) …4.12.15

On substituting for B(z) from equ (4.12.12) in equ (4.12.15) we get,

…4.12.16

From equation (4.12.10) and (4.12.16) the z-transfer function or pulse transfer function
C(z)/R(z) can be written as,

…4.12.17

Here Z {G2(s) H(s)} is denoted as G2H(z)

(c) The ZOH in the system is replaced by G0(s) as shown in Figure 4.12.7, where G0(s) =
(1 - e^(-sT))/s.

Figure 4.12.7 Figure 4.12.8a Figure 4.12.8b

The input to the cascaded blocks of G0(s) and G(s) is an impulse sampled signal as
shown in Figure 4.12.8a. It’s z-domain equivalent is shown in Figure 4.12.8b.

From 4.12.8b, we get C(z) = Z{G0(s) G(s)} E(z) …4.12.18

where, C(z) = Z{C(s)}; E(z) = Z{E(s)} ; C(s) = L[c(t)] and E(s) = L(e(t)].

The input to the block H(s) is an impulse sampled signal as shown in Figure 4.12.9a.
Its z-domain equivalent is shown in Figure 4.12.9b.

Figure 4.12.9a Figure 4.12.9b

From Figure 4.12.9b, we get, B(z) = H(z) C(z) …4.12.19

with reference to Figure 4.12.7, at the summing point we get,


e(t) = r(t) – b(t) …4.12.20

Since e(t) = e(kT) is an impulse sampled signal, by principle of superposition the equ
(4.12.20) can be written as,

e(kT) = r(kT) – b(kT) …4.12.21

where e(kT), r(kT) and b(kT) are impulse sampled signals of e(t), r(t) and b(t) respectively.

On taking z-transform of equ (4.12.21) we get,


E(z) = R(z) – B(z) …4.12.22

On substituting for B(z) from equ (4.12.19) in equ (4.12.22) we get,


E(z) = R(z) – H(z) C(z) …4.12.23

On substituting for E(z) from equ (4.12.23) in equ (4.12.18) we get,

…4.12.24

The equation (4.12.24) is the z-transfer function of the system.

Here Z{G0(s) G(s)} is denoted as G0G(z).

EXAMPLE 4.13

Find the output C(z) in z-domain for the closed loop sampled data control system shown
in Figure 4.13.

Figure 4.13 Figure 4.13.1

SOLUTION

The ZOH in Figure 4.13 is replaced by a block with transfer function G0(s) as shown in
Figure 4.13.1, where G0(s) = (1 – e-sT) / s.

Here, d(t) = Impulse sampled signal of d(t).

The input to the cascaded blocks of G0(s) and G2(s) is an impulse sampled signal as
shown in Figure 4.13.2a. It’s z-domain equivalent is shown in Figure 4.13.2b.

Figure 4.13.2a Figure 4.13.2b

From Figure 4.13.2b we get, C(z) = Z{G0(s) G2(s)} D(z) …4.13.1

Where C(z) = Z {C(s)}; D(z) = Z {D(s)} ; C(s) = L[c(t)] and D(s) = L[d(t)].

With reference to Figure 4.13.1 the following s-domain equations can be obtained.

E(s) = R(s) – B(s) …4.13.2


D(s) = E(s) G1(S) …4.13.3
B(s) = G0(s) G2(s) H(s) Ds …4.13.4

On substituting for E(s) from equ (4.13.2) in equ (4.13.3) we get,


D(s) = [R(s) – B(s)] G1(s) = G1(s) R(s) – G1(s) B(s) …4.13.5

On substituting for B(s) from equ (4.13.4) in equ (4.13.5) we get,


D(s) = G1(s) R(s) – G1(s) G0(s) G2(s) H(s) D(s) …4.13.6

On taking z-transform of equ (4.13.6) we get,

…4.13.7

Note: The term G0(s) G1(s) G2(s) H(s) D(s) represents the output of a block with transfer
function G0(s) G1(s) G2(s) H(s) when the input is D(s).

On substituting for D(z) from equ (4.13.7) in equ (4.13.1) we get,

Output in z-domain,

where Z{G0(s) G2(s)} is represented as G0G2(z),

Z{G1(s) R(s)} is represented as G1R(z) and
Z{G0(s) G1(s) G2(s) H(s)} is represented as G0G1G2H(z)

EXAMPLE 4.14

For the sampled data control system shown in Figure 4.14, find the response to unit step
input, where G(s) = 1/(s+1).

Figure 4.14 Figure 4.14.1

SOLUTION

The ZOH in the system is replaced by G0(s) as shown in Fig 4.14.1, where G0(s) =
(1 - e^(-sT))/s.

The input to the cascaded blocks of G0(s) and G(s) is an impulse sampled signal as
shown in Fig 4.14.2a. Its z-domain equivalent is shown in Fig 4.14.2b.

Figure 4.14.2a Figure 4.14.2b

From Figure 4.14.2b we get, C(z) = Z{G0(s) G(s)} E(z) …4.14.1

With reference to Figure 4.14.1, at the summing point we get,

e(t) = r(t) – c(t) …4.14.2

Since e(t) = e(kT) is an impulse sampled signal, the equation (4.14.2) can be written
as,

E(kT) = r(kT) – c(kT) …4.14.3

where e(kT), r(kT) and c(kT) are impulse sampled signals of e(t), r(t) and c(t) respectively.

On taking z-transform of equ (4.14.3) we get,


E(z) = R(z) – C(z) …4.14.4

On substituting for E(z) from equ (4.14.4) in equ (4.14.1) we get,

…4.14.5

By partial fraction expansion,

From standard laplace and z-transform pairs we get,

Here a = 1 and T = 1

…4.14.6

Given that input is unit step

…4.14.7

From equation (4.14.5), (4.14.6) and (4.14.7) we get,

By partial fraction expansion,

…4.14.8

We know that

On taking inverse z-transform of equ (4.14.8) we get,

…4.14.9

The equation (4.14.9) is the response of the given system for a unit step input.
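With T = 1, G0G(z) = (1 - e^(-1))/(z - e^(-1)), so equ (4.14.5) reduces to the recursion
c(k+1) = -(0.2642) c(k) + 0.6321 r(k), which can be run directly (a sketch consistent with the
working above):

import math

p = math.exp(-1.0)                 # e^(-T) with T = 1
g = 1.0 - p
c = [0.0]
for k in range(10):
    c.append(-(g - p) * c[k] + g)  # unit step input r(k) = 1
print([round(x, 4) for x in c])
# -> [0.0, 0.6321, 0.4651, 0.5092, 0.4975, ...] settling towards 0.5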

4.12 THE z AND s-DOMAIN RELATIONSHIP

Let r(kT) be a discrete sequence which has been obtained by sampling r(t) at a sampling
rate of 1/T. On taking z-transform of r(kT) we get,

…4.71

Let, r'(t) = impulse sampled signal of r(t) at the sampling rate of 1/T and R'(s) = L[r'(t)]
= Laplace transform of r'(t).

…4.72

On taking laplace transform of equ (4.72) we get,

…4.73

Let us choose a transformation such that,

z = e^(sT) …4.74

…4.75

On substituting for s from equ (4.75) in equ (4.73) we get,

…4.76

From equ (4.76) it is obvious that the z-transform of a discrete sequence can be obtained
from the Laplace transform of its impulse sampled version by choosing the transformation,
s = (1/T) ln z (or z = e^(sT)).

The transformation, s = (1/T) ln z, maps the s-plane into the z-plane. It can be shown that
every section of the jω-axis of length Nωs, where N is an integer and ωs is the sampling
frequency, maps onto the unit circle in the anticlockwise direction, and every strip of width ωs
in the left half s-plane maps into the interior of the unit circle, as shown in Fig 4.31.

Figure 4.31 Mapping of s-plane into z-plane

The above mapping helps in extending the s-plane stability criterion to the z-plane. For
stability of a system in the s-plane, the poles of the s-domain transfer function should lie in the
left half of the s-plane. In this transformation the left half of the s-plane maps into the interior
of the unit circle. Hence, for stability of the system in the z-domain, the poles of the z-transfer
function should lie inside the unit circle.
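A quick sketch of the mapping z = e^(sT): points with Re(s) < 0 land inside the unit circle.

import cmath

T = 1.0
for s in [-1 + 2j, -0.1 + 5j, 0.5 + 1j]:     # sample s-plane points
    z = cmath.exp(s * T)
    print(s, abs(z))                          # |z| < 1 exactly when Re(s) < 0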

4.13 STABILITY ANALYSIS OF SAMPLED DATA CONTROL SYSTEMS

The sampled data control system is stable if all the poles of the z-transfer function of
the system lie inside the unit circle in the z-plane. The poles of the transfer function are given
by the roots of the characteristic equation. Hence the system stability can be determined from
the roots of the characteristic equation.

The z-transfer function of the sampled data control system can be expressed as a ratio
of two polynomials in z as shown below.

…4.77
Where, A0 = constant
P(z) = Numerator polynomial
Q(z) = Denominator polynomial

The characteristic equation is the denominator polynomial of H(z). [i.e., characteristic


equation is given by Q(z) = 0].

Consider the system shown in Figure 4.32. For this system, the z-transfer function is
given by,

…4.78

Figure 4.32

and the characteristic equation is,

(4.79)

The following methods are available for the stability analysis of sampled data control
systems using the characteristic equation:

1. Jury’s stability test


2. Bilinear transformation
3. Root locus technique

The Jury’s stability test and bilinear transformation are presented in this book.
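Independently of these tests, the stability condition itself is easy to check numerically by
finding the roots of the characteristic polynomial (a sketch with an assumed example
polynomial):

import numpy as np

roots = np.roots([1.0, 0.2, 0.5])       # z^2 + 0.2 z + 0.5 = 0 (hypothetical)
print(np.abs(roots))                    # both magnitudes ~0.707
print("stable" if np.all(np.abs(roots) < 1.0) else "unstable")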

SCHOOL OF ELECTRICAL AND ELECTRONICS

DEPARTMENT OF ELECTRICAL AND ELECTRONICS

UNIT – V – Advanced Control Systems – SEEA1602

NON LINEAR SYSTEMS
5.1 INTRODUCTION TO NON LINEAR SYSTEMS

Nonlinear systems are systems which do not obey the principle of superposition. Linear
systems are systems which satisfy the principle of superposition.

The principle of superposition implies that if a system has responses y1(t) and y2(t) to any
two inputs x1(t) and x2(t) respectively, then the system response to the linear combination of
these inputs, α1x1(t) + α2x2(t), is given by the linear combination of the individual outputs,
i.e., α1y1(t) + α2y2(t), where α1 and α2 are constants.

To satisfy the principle of superposition, y3 = α1 y1 + α2 y2


𝑑𝑥
Example of linear system : y = ax + b 𝑑𝑡

Example of nonlinear system : y = ax2 + ebx

EXAMPLE 5.1
𝑑𝑥
The response of a system is, y = ax + b 𝑑𝑡 . Test whether the system is linear or non
linear.

SOLUTION

Let x1 and x2 be the two inputs to the system and y1 and y2 be their responses,
respectively.
𝑑𝑥
Given that y = ax + b 𝑑𝑡

𝑑𝑥1
When x = x1, y = y1,  y1 = ax1 + b 𝑑𝑡

𝑑𝑥2
When x = x2, y = y2,  y2 = ax2 + b
𝑑𝑡

Consider a linear combination of inputs α1x1 + α2x2 and let the response of the system
for this linear combination of inputs be y3.

When x = α1x1 + α2x2 . y = y3

Consider the same linear combination of output, α1y1 + α2 y2.

It is observed that y3 = α1 y1 + α2 y2 . Hence the system is linear.

EXAMPLE 5.2

The response of a system is y = ax2 + ebx. Test whether the system is linear or nonlinear.

SOLUTION

Let x1 and x2 be two inputs to the system and y1 and y2 be their responses respectively.

Given that y = ax2 + ebx

When x = x1, y = y1, y1 = ax12 + ebxl

When x = x2, y = y2, y2 = ax22 + ebx2

Consider a linear combination of inputs α1x1 + α2x2 and let the response of the system
for this linear combination of inputs be y3.

Consider the same linear combination of outputs, α1y1 + α2y2.

It is observed that y3 ≠ α1y1 + α2y2. Hence the system is nonlinear.
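Both conclusions can be checked numerically by applying the superposition test directly (an
illustrative sketch; np.gradient is used as an approximate dx/dt):

import numpy as np

a, b = 2.0, 3.0
t = np.linspace(0.0, 1.0, 101)
x1, x2 = np.sin(t), t ** 2
a1, a2 = 1.5, -0.7

def sys_linear(x):                  # y = a x + b dx/dt
    return a * x + b * np.gradient(x, t)

def sys_nonlinear(x):               # y = a x^2 + e^(b x)
    return a * x ** 2 + np.exp(b * x)

for f in (sys_linear, sys_nonlinear):
    err = np.max(np.abs(f(a1 * x1 + a2 * x2) - (a1 * f(x1) + a2 * f(x2))))
    print(f.__name__, err)          # ~0 for the linear system, large otherwise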

In all practical engineering systems, there will always be some nonlinearity due to
friction, inertia, stiffness, backlash, hysteresis, saturation and dead-zone. The effect of the non
linear components can be avoided by restricting the operation of the component over a narrow
limited range. Moreover most of the automatic control systems operate within a narrow range,
e.g. the speed controller of an electric drive for constant speed operation of 1500 rpm will be
required to operate between 1450 to 1550 rpm. Similarly, automatic voltage controller will be
operating within  5% of the specified voltage. Thus the characteristics of components may be
considered as linear over this limited range.

Further, some components behave linearly over their working range, e.g., a spring when
loaded gets extended. As the load is increased, the load-displacement curve is linear within
the working range. However, when the load is increased beyond the maximum of the working

range, the spring material starts to yield and it becomes permanently deformed. It can be
concluded that the spring behaves linearly over its working range and beyond this range it is
nonlinear.

Although nonlinearities in systems may generally be due to imperfections of a physical
device, sometimes we deliberately introduce nonlinear devices, or operate linear devices in
nonlinear regions, with a view to improving system performance.

The characteristics of nonlinear systems are given below.

1. The response of a nonlinear system to a particular test signal is no guide to its
behaviour for other inputs, since the principle of superposition does not hold
good for nonlinear systems.
2. The nonlinear system response may be highly sensitive to input amplitude. The
stability study of nonlinear systems requires information about the type and
amplitude of the anticipated inputs, initial conditions, etc., in addition to the
usual requirement of the mathematical model.
3. Nonlinear systems may exhibit limit cycles, which are self-sustained oscillations
of fixed frequency and amplitude.
4. Nonlinear systems may have jump resonance in the frequency response.
5. The output of a nonlinear system will have harmonics and sub-harmonics when
excited by sinusoidal signals.
6. Nonlinear systems will exhibit phenomena like frequency entrainment and
asynchronous quenching.

BEHAVIOUR OF NONLINEAR SYSTEMS

In nonlinear systems, the response (output) depends on the magnitude and type of input
signal. The principle of superposition will not hold good for nonlinear systems. The nonlinear
systems may exhibit various phenomena like jump resonance, sub harmonic oscillation, limit
cycles, frequency entrainment and asynchronous quenching. The various phenomena that occur
in nonlinear system are explained in this section.

Frequency-amplitude dependence

The frequency-amplitude dependence is one of the most fundamental characteristics of


the oscillations of nonlinear systems. The frequency-amplitude dependence can be best studied
by considering the mechanical system shown in Figure 5.1 in which the spring is nonlinear.
The differential equation governing the dynamic of the system may be written as

M x + B x + K x + Kx1 = 0 …5.1

where Kx + Kx1 – Opposing force due to nonlinear spring.

The parameters M, B and K are positive constants. The


parameters K may be positive or negative. If K is positive,
the spring in called hard spring and if K is negative the spring
is called soft spring. The equation (5.1) is nonlinear differential Figure 2.1 Mechanical
equation and it also called Duffing’s equation. system with nonlinear spring

When the system of Figure 5.1 has non-zero initial conditions, the free response (i.e.,
solution of equ 5.1) is damped oscillatory. The frequency of the free oscillations depends on the
amplitude of oscillation. When K′ < 0 (soft spring) the frequency decreases with decreasing
amplitude. When K′ > 0 (hard spring) the frequency increases with decreasing amplitude.
When K′ = 0 (corresponding to a linear system) the frequency remains unchanged as the
amplitude of free oscillation decreases. The frequency-amplitude dependence characteristic of
the nonlinear mechanical system of Fig. 5.1 is shown in Fig. 5.2.

Jump resonance

In the frequency response of nonlinear systems, the amplitude of the response (output)
may jump from one point to another for increasing or decreasing values of frequency, ω. This
phenomenon is called jump resonance and it can be observed in the frequency response of the
system shown in Fig. 5.1, when it is subjected to a sinusoidal input.

Figure 5.2 Amplitude vs frequency curves for free oscillations in the system described by
equation 5.1

Let the mechanical system of Fig. 5.1 be subjected to an input of the type A cos ωt. Now
the differential equation governing the mechanical system is

M ẍ + B ẋ + K x + K′x³ = A cos ωt …5.2

Let X be the amplitude of the response or output of the system. In frequency response
studies, the amplitude A of the input is held constant while its frequency ω is varied, and the
amplitude X of the output is observed. The frequency response curve is plotted between X and
ω. The frequency response curves of the mechanical system of Fig 5.1 are shown in Fig 5.3a
and 5.3b for hard and soft springs respectively.

Figure 5.3 Frequency response curves showing jump resonance

In the frequency response curves shown in Fig 5.3a and b, as the frequency ω is increased,
the amplitude X increases until point-2 is reached. A further increase in frequency will cause
a jump from point-2 to point-3. This phenomenon is called jump resonance. As the frequency
is increased further, the amplitude X follows the curve from point-3 towards point-4.

When the frequency is reduced starting from a high value corresponding to point-4, the
amplitude X slowly increases through point-3, until point-5 is reached. A further decrease in ω
will cause another jump from point-5 to point-6. This phenomenon is also a jump resonance.
After this jump, the amplitude X decreases with ω and follows the curve from point-6 towards
point-1.

For jump resonance to take place, it is necessary that the damping term be small and
the amplitude of the forcing function be large enough to drive the system into a region of
appreciably nonlinear operation.

Subharmonic oscillations

When a nonlinear system is excited by a sinusoidal signal, the response or output may
have steady-state oscillations whose frequency is an integral submultiple of the forcing
frequency. These oscillations are called sub harmonic oscillations. The generation of sub
harmonic oscillations depends on the system parameters and initial conditions. It also depends
on the amplitude and frequency of the forcing function.

Limit cycles

The response (or output) of nonlinear systems may exhibit oscillations with fixed
amplitude and frequency. These oscillations are called limit cycles. Consider a mechanical
system with nonlinear damping and described by the equation,

M ẍ - B(1-x²) ẋ + K x = 0 …5.3

where M, B and K are positive constants. The equation (5.3) is called the van der Pol equation.
For small values of x the damping is negative, which implies that energy is fed into the
system. For large values of x the damping is positive, which implies that the damper absorbs
energy from the system. Thus it can be expected that such a system may exhibit a sustained
oscillation. Since the system described above is not a forced system, this oscillation is called a
self-excited oscillation or a zero input limit cycle.
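A crude Euler simulation of equ (5.3) shows the self-excited oscillation settling onto a fixed
amplitude (a sketch with assumed values M = B = K = 1):

M, B, K, dt = 1.0, 1.0, 1.0, 1e-3
x, v = 0.1, 0.0                         # small initial displacement
peaks = []
for i in range(200000):
    acc = (B * (1 - x * x) * v - K * x) / M
    x, v = x + v * dt, v + acc * dt
    peaks.append(abs(x))
print(max(peaks[-10000:]))              # approaches ~2.0: the limit cycle amplitude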

Frequency entrainment

The phenomenon of frequency entrainment is observed in the frequency response of
nonlinear systems that exhibit limit cycles. Consider a system capable of exhibiting a limit
cycle of frequency ω1. If a periodic input of frequency ω is applied to this system then the
phenomenon of beats is observed. [The beat is the oscillation whose frequency is the difference
between ω1 and ω. This frequency is also called the beat frequency.] In linear systems, the beat
frequency decreases indefinitely as ω approaches ω1. But in nonlinear systems, the frequency
ω1 of the limit cycle falls synchronously with, or is entrained by, the forcing frequency ω
within a certain band of frequencies. This phenomenon is called frequency entrainment. The
band of frequencies in which entrainment occurs is called the zone of frequency entrainment.
In this zone, the frequencies ω and ω1 coalesce and only one frequency, ω, exists. The
relationship between |ω-ω1| and ω is shown in Figure 5.4.

Figure 5.4 |ω-ω1| vs ω curve showing the zone of frequency entrainment

Asynchronous quenching

In a nonlinear system that exhibits a limit cycle of frequency ω1, it is possible to quench
(stop or eliminate) the limit cycle oscillation by forcing the system at a frequency ωq, where
ωq and ω1 are not related to each other. This phenomenon is called signal stabilization or
asynchronous quenching.

INVESTIGATION OF NONLINEAR SYSTEMS

For analysis, a nonlinear system can be approximated by a linear model over a limited
operating region. Alternatively, the nonlinear characteristic can be piecewise approximated,
and each piece analysed by the differential equation governing the system.

The two popular methods of analysing nonlinear systems are phase-plane method and
describing function method.

The phase plane method is basically a graphical method from which information about
transient behaviour and stability is easily obtained by constructing phase trajectories. This
method is restricted to second order systems. Higher order systems may first be approximated
by their second-order equivalent for investigation by the phase plane method.

The describing function method is based on harmonic linearization. Here the input to the
nonlinear component is sinusoidal and, depending upon the filtering properties of the linear part
of the overall system, the output is adequately represented by the fundamental frequency term
in its Fourier series.

The phase-plane and describing function methods use complementary approximations.
The phase-plane method retains the nonlinearity as such and uses a second-order
approximation of a higher-order linear part, while the describing function method retains the
linear part and harmonically linearizes the nonlinearity.

COMMON PHYSICAL NONLINEARITIES

The nonlinearities can be classified as incidental and intentional.

The incidental nonlinearities are those which are inherently present in the system.
Common examples of incidental nonlinearities are saturation, dead-zone, coulomb friction,
striction, backlash, etc.

The intentional nonlinearities are those which are deliberately inserted in the system to
modify system characteristics. The most common example of this type of nonlinearity is a
relay.

SATURATION: In this type of nonlinearity the output is proportional to the input for a
limited range of input signals. When the input exceeds this range, the output tends to become
nearly constant, as shown in Figure 5.5.

All devices when driven by sufficiently large


signals, exhibit the phenomenon of saturation due to
limitations of their physical capabilities. Saturation in the
output of electronic, rotating and flow (hydraulic and
pneumatic) amplifiers, speed and torque saturation in
electric and hydraulic motors, saturation in the output of
sensors for measuring position, velocity, temperature etc.,
are well known examples.

Figure 5.5 Saturation
DEADZONE: The deadzone is the region in which the output is zero for a given range of
input. Many physical devices do not respond to small signals, i.e., if the input amplitude is less than

some small value, there will be no output. The region in which the output is zero is called
deadzone. When the input is increased beyond this deadzone value, the output will be linear.

Figure 5.6: Dead zone nonlinearity Figure 5.7: Dead zone and saturation nonlinearity

The Figure 5.6 shows the deadzone nonlinearity and the Figure 5.7 shows the
combination of dead zone and saturation nonlinearity.

FRICTION: Friction exists in any system when there is relative motion between
contacting surfaces. The different types of friction are viscous friction, coulomb friction and
stiction.

The viscous friction is linear in nature and the frictional force is directly proportional
to relative velocity of the sliding surfaces.

The coulomb friction and stiction are nonlinear frictions. The coulomb friction offers a
constant retarding force once motion is initiated. Due to interlocking of surface
irregularities, more force is required to move an object from rest than to maintain it in motion.
Hence the force of stiction is always greater than that of coulomb friction.

In actual practice, the stiction force gradually decreases with velocity and changes over
to coulomb friction at reasonably low velocities as shown in Figure 5.10. The composite
characteristics of various frictions are shown in Figure 5.8 to 5.11.

Figure 5.8: Viscous friction; Figure 5.9: Ideal stiction and coulomb friction; Figure 5.10:
Actual stiction and coulomb friction; Figure 5.11: Stiction, coulomb friction and viscous
friction
5.2 DESCRIBING FUNCTION

Consider the block diagram of the nonlinear system shown in Figure 5.12

Figure 5.12: A nonlinear system

In the above system the blocks G1(s) and G2(s) represent linear elements and the block
N represents a nonlinear element.

Let x = X sin ωt be the input to the nonlinear element. The output y of the nonlinear
element will in general be a nonsinusoidal periodic function. The Fourier series representation
of the output y can be expressed as (assuming that the nonlinearity does not generate
subharmonics),

y = A0 + A1 sin ωt + B1 cos ωt + A2 sin 2ωt + B2 cos 2ωt + … …5.4

If the nonlinearity is symmetrical the average value of y is zero and hence the output y
is given by

y = A1 sin ωt + B1 cos ωt + A2 sin 2ωt + B2 cos 2ωt + … …5.5

In the absence of an external input (i.e., when r = 0) the output y of the nonlinearity N
is fed back to its input through the linear elements G2(s) and G1(s) in tandem. If G1(s)G2(s) has
low-pass characteristics, then all the harmonics of y are filtered, so that the input x to the
nonlinear element N is mainly contributed by the fundamental component of y and hence x
remains sinusoidal. Under such conditions the harmonics of the output are neglected and the
fundamental component of y alone is considered for the purpose of analysis.

∴ y = y1 = A1 sin ωt + B1 cos ωt = Y1 ∠φ1 = Y1 sin (ωt + φ1) …5.6

Y1 = √(A1² + B1²) …5.7

φ1 = tan⁻¹(B1/A1) …5.8

Y1 = Amplitude of the fundamental harmonic component of the output.


φ1 = Phase shift of the fundamental harmonic component of the output with
respect to the input.

The coefficients A1 and B1 of the Fourier series are given by

A1 = (1/π) ∫₀²π y sin ωt d(ωt) …5.9

B1 = (1/π) ∫₀²π y cos ωt d(ωt) …5.10

When the input x to the nonlinearity is sinusoidal (i.e., x = X sin ωt), the describing
function of the nonlinearity is defined as,

KN(X, ω) = (Y1/X) ∠φ1 …5.11

The nonlinear element N in the system can be replaced by the describing function as
shown in Figure 5.13.

Figure 5.13: Nonlinear system with nonlinearity replaced by describing function

If the nonlinearity is replaced by its describing function, then all the frequency domain
techniques of linear theory can be used for the analysis of the system. Describing functions are
used mainly for stability analysis and are not directly applicable to the optimization of system
design. The describing function is a frequency domain approach and no general correlation is
possible between time and frequency responses.
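The defining relations (5.9)-(5.11) can also be evaluated numerically for any nonlinearity, which
is a convenient cross-check on the closed-form results derived in the following sections. The
sketch below (Python with NumPy; the tool choice and function names are illustrative
assumptions, not part of the original text) drives a nonlinearity with x = X sin ωt for one full
cycle and extracts the fundamental Fourier coefficients A1 and B1.

import numpy as np

def describing_function(nl, X, n=4096):
    """Numerically estimate KN(X) = (Y1/X) with angle phi1 for a nonlinearity nl(x)."""
    wt = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    y = nl(X*np.sin(wt))                  # response to one cycle of X sin(wt)
    A1 = (2.0/n)*np.sum(y*np.sin(wt))     # coefficient of sin(wt), cf. equ (5.9)
    B1 = (2.0/n)*np.sum(y*np.cos(wt))     # coefficient of cos(wt), cf. equ (5.10)
    return (A1 + 1j*B1)/X                 # magnitude Y1/X, angle phi1

# Example: saturation with slope K = 1 and limits +/-1, driven at X = 2
sat = lambda x: np.clip(x, -1.0, 1.0)
KN = describing_function(sat, X=2.0)
print(abs(KN), np.degrees(np.angle(KN)))  # phase is 0 for a single-valued nonlinearity

For memoryless (single-valued) nonlinearities B1 works out to zero, so the describing function
is purely real; multivalued elements such as backlash give a nonzero (lagging) phase.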

5.3 DESCRIBING FUNCTION OF DEAD-ZONE AND SATURATION NONLINEARITY

The input and the output relationship of nonlinearity with dead-zone and saturation is
shown in Figure 5.14.

The dead-zone region is from x = -D/2 to +D/2 and in this region the output is zero.
The input-output relation is linear for inputs between ±D/2 and ±S, and when the input x > S,
the output reaches a saturated value of K(S - D/2).

The output equation for the linear region can be obtained from the general equation of
straight line as shown below.

The equation of a straight line is, y = mx + c …5.12

In the linear region, when x = D/2, y = 0. On substituting these values of x and y in equ
(5.12) we get,

0 = mD/2 + c …5.13

In the linear region, when x = S, y = K(S - D/2). On substituting these values of x and
y in equ (5.12) we get,

K(S - D/2) = mS + c …5.14

Figure 5.14: Input-output characteristic of dead-zone and saturation

Subtracting equ (5.13) from equ (5.14) yields,

m = K …5.15

c = -KD/2 …5.16

From equations (5.12), (5.15) and (5.16) the output equation for the linear region can
be written as,

y = Kx - KD/2 = K(x - D/2) …5.17

The response or output of the nonlinearity when the input is a sinusoidal signal
(x = X sin ωt) is shown in Figure 5.15.

The input x is sinusoidal, x = X sin ωt …5.18

Where X = Maximum value of input.

Figure 5.15: Sinusoidal response of nonlinearity with dead-zone and saturation

The output y of the nonlinearity can be divided into five regions in a period of π, and the
output equations for the five regions are given below.

Let Y1 = Amplitude of the fundamental harmonic component of the output.


φ1 = Phase shift of the fundamental harmonic component of the output with
respect to the input

The describing function is given by

Page No. 154

On substituting for D/2 and S from equations (5.26) and (5.27) in equ (5.25) we get,

…5.28

…5.29

…5.30
The describing function KN(X, ω) = (Y1/X) ∠φ1 …5.31

On substituting for Y1 and φ1 from equations (5.29) and (5.30) in equ (5.31) we get

KN(X) = (2K/π)[β - α + sin β cos β - sin α cos α], where sin α = D/2X and sin β = S/X …5.32

Depending on the maximum value of the input X, the describing function of equ (5.32) can
be written as,

When X ≤ D/2, KN(X) = 0 …5.33

When D/2 < X ≤ S, KN(X) = (2K/π)[π/2 - α - sin α cos α] …5.34

When X > S, KN(X) = (2K/π)[β - α + sin β cos β - sin α cos α] …5.35

5.4 DESCRIBING FUNCTION OF SATURATION NONLINEARITY

The input-output relationship of saturation nonlinearity is shown in Figure 5.16.

The input-output relation is linear for x = 0 to S. When the input x > S, the output
reaches a saturated value of KS.

The response of the nonlinearity when the input is a sinusoidal signal (x = X sin ωt) is
shown in Figure 5.17.

The input x is sinusoidal,

x = X sin ωt …5.36

Where X is the maximum value of input.

In Figure 5.17, when ωt = β, x = S.

Hence equ (5.36) can be written as, S = X sin β


…5.37
Figure 5.16 Input-output
characteristics of saturation
nonlinearity
…5.38

The output y of the nonlinearity can be divided into three regions in a period of π. The
output equations for the three regions are given by equ (5.39).

…5.39

Figure 5.17: Sinusoidal response of saturation nonlinearity

Let Y1 = Amplitude of the fundamental harmonic component of the output.


φ1 = Phase shift of the fundamental harmonic component of the output with
respect to the input.

The describing function is given by, KN(X,) = (Y1 / X)  1

where Y1 = √(A1² + B1²) and φ1 = tan⁻¹(B1/A1)

The output y has half wave and quarter wave symmetries

…5.40

The output, y is given by two different expressions in the period 0 to π/2. Hence equ
(5.40) can be written as shown in equ (5.41).

…5.41

On substituting the values of y from equ (5.39) in equ (5.41) we get,

On substituting x = X sin ωt, we get

..5.42

On substituting for S (i.e., S = X sin β) from equ (5.37) in equ (5.42) we get,

…5.43

…5.44

..5.45

…5.46

Using equations (5.44) and (5.45), the describing function of equ (5.46) can be written
as,

KN(X) = (2K/π)(β + sin β cos β) …5.47

Depending on the maximum value of input X, the describing function can be written
as,

When X ≤ S, KN(X) = K …5.48

When X > S, KN(X) = (2K/π)(β + sin β cos β) …5.49

The equation (5.49) can be expressed in another form as shown below.

sin β = S/X …5.50

On constructing a right-angled triangle with unity hypotenuse as shown in Figure 5.18,
cos β can be evaluated. From Figure 5.18 we get,

cos β = √(1 - (S/X)²) …5.51

In the describing function of equ (5.49), substitute for β, sin β and cos β from equations
(5.38), (5.50) and (5.51):

KN(X) = (2K/π)[sin⁻¹(S/X) + (S/X)√(1 - (S/X)²)], for X > S …5.52
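As a quick sanity check on equ (5.52), the closed form can be coded directly and compared with
the numerical Fourier estimate of Section 5.2. A minimal sketch (Python; the function name and
parameter defaults are illustrative assumptions):

import numpy as np

def kn_saturation(X, K=1.0, S=1.0):
    """Describing function of saturation; real-valued since phi1 = 0."""
    if X <= S:
        return K                          # equ (5.48): saturation never reached
    r = S/X                               # sin(beta), equ (5.50)
    return (2*K/np.pi)*(np.arcsin(r) + r*np.sqrt(1 - r**2))   # equ (5.52)

print(kn_saturation(2.0))                 # ~0.609 for K = 1, S = 1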

5.5 DESCRIBING FUNCTION OF DEAD-ZONE NONLINEARITY

The input-output relationship of dead-zone nonlinearity is shown in Figure 5.19. The
output is zero when the input is less than D/2, and the input-output relationship is linear
when the input is greater than D/2. The response of the nonlinearity when the input is a
sinusoidal signal (x = X sin ωt) is shown in Figure 5.20.

Figure 5.19: Input-output characteristic of dead-zone nonlinearity
Figure 5.20: Sinusoidal response of dead-zone nonlinearity

The input x is sinusoidal, x = X sin t …5.53

Where X is the maximum value of input

In Figure 5.20, when ωt = α, x = D/2

Hence when t = α, the equ (5.53) can be written as, D/2 = X sin α

sin α = D/2X …5.55

α = sin⁻¹(D/2X) …5.56

The output y can be divided into three regions in a period of π. The output equations for
the three regions are given by equ (5.57).

…5.57

Let Y1 = Amplitude of the fundamental harmonic component of the output.


φ1 = Phase shift of the fundamental harmonic component of the output with
respect to the input.

The describing function is given by, KN(X, ω) = (Y1/X) ∠φ1

where Y1 = √(A1² + B1²) and φ1 = tan⁻¹(B1/A1)

The output y has half wave and quarter wave symmetries

...5.58

Since the output y is zero in the range 0 ≤ ωt ≤ α, the limits of integration in equ (5.58)
can be changed to α to π/2 instead of 0 to π/2.

…5.59

Put x = X sin ωt in equ (5.59)

…5.60
From equ (5.55) we get, sin α = D/2X, i.e., D = 2X sin α …5.61

On substituting for D from equ (5.61) in equ (5.60) we get,

…5.65

Using equations (5.63) and (5.64) the describing function of equ (5.65) can be written
as,

…5.66

Depending on the maximum value of input X, the describing function can be written
as,

When X ≤ D/2, KN(X) = 0 …5.67

When X > D/2, KN(X) = (2K/π)[π/2 - α - sin α cos α] …5.68

The equation (5.68) can be expressed in another form as shown below.

From equ (5.55), we get, sin α = D/2X

On constructing a right-angled triangle with unity hypotenuse as shown in Figure 5.21,
cos α can be evaluated.

From Figure 5.21, we get,

cos α = √(1 - (D/2X)²) …5.69

In the describing function of equ (5.68), substitute for α, sin α and cos α from equations
(5.56), (5.55) and (5.69) respectively.

KN(X) = K - (2K/π)[sin⁻¹(D/2X) + (D/2X)√(1 - (D/2X)²)], for X > D/2 …5.70
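Equ (5.70) can likewise be checked numerically against the method of Section 5.2. A minimal
sketch (Python; the function name and defaults are illustrative assumptions):

import numpy as np

def kn_deadzone(X, K=1.0, D=1.0):
    """Describing function of dead-zone; real-valued since phi1 = 0."""
    if X <= D/2:
        return 0.0                        # equ (5.67): input stays in the dead-zone
    a = D/(2*X)                           # sin(alpha), equ (5.55)
    return K - (2*K/np.pi)*(np.arcsin(a) + a*np.sqrt(1 - a**2))   # equ (5.70)

print(kn_deadzone(2.0))                   # ~0.69 for K = 1, D = 1

Note that kn_saturation and kn_deadzone sum to K for a common threshold (S = D/2), which
follows from the complementary shapes of the two characteristics.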

5.6 DESCRIBING FUNCTION OF RELAY WITH DEAD-ZONE AND
HYSTERESIS

The input and the output relationship of a relay with dead-zone and hysteresis is shown
in Figure 5.22.

Due to the dead-zone, the relay responds only after a definite value of input. Due to
hysteresis, the output follows different paths for increasing and decreasing values of input.
When the input x is increased from zero, the output follows the path ABCD, and when the
input is decreased from a maximum value, the output follows the path DCEA.

Figure 5.22: Input-output characteristics of relay with dead-zone and hysteresis

For increasing values of input, the output is zero when x < (D/2) and the output is M
when x > (D/2). For decreasing values of input, the output is M when x > (D/2 - H) and the
output is zero when x < (D/2 - H).

The response or output of the relay when the input is a sinusoidal signal (x = X sin ωt)
is shown in Figure 5.23.

Figure 5.23: Sinusoidal response of relay


with dead-zone and hysteresis

The input x is sinusoidal, x = X sin rt …5.71

Where X = maximum value of input.

In Figure 5.23, when ωt = α, x = D/2

Hence equ (5.71) can be written as, D/2 = X sin α

∴ sin α = D/2X …5.72

and α = sin⁻¹(D/2X) …5.73

In Figure 5.23, when ωt = π - β, x = (D/2) - H

Hence equ (5.71) can be written as,

D/2 - H = X sin (π - β)

D/2 - H = X sin β

sin β = D/2X - H/X …5.74

β = sin⁻¹(D/2X - H/X) …5.75

The output can be divided into five regions in a period of 2π, and the output equations
for the five regions are given by equ (5.76).

…5.76

Let Y1 = Amplitude of the fundamental harmonic component of the output.


φ1 = Phase shift of the fundamental harmonic component of the output with
respect to the input.

The describing function is given by, KN(X, ω) = (Y1/X) ∠φ1

where Y1 = √(A1² + B1²) and φ1 = tan⁻¹(B1/A1)

From equ (5.72) we get, sin α = D/2X. (Figure 5.24)

On constructing a right-angled triangle with unity hypotenuse as shown in Figure 5.24,
cos α can be evaluated.

cos α = √(1 - (D/2X)²) …5.78

From equ (5.74) we get, sin β = (D/2X - H/X).

On constructing a right-angled triangle with unity hypotenuse as shown in Figure 5.25,
cos β can be evaluated.

cos β = √(1 - (D/2X - H/X)²) …5.79 (Figure 5.25)

On substituting for cos α and cos β from equations (5.78) and (5.79) in equ (5.77) we
get,

..5.80

On substituting for sin α and sin β from equ (5.72) and equ (5.74) we get,

…5.81

…5.82

…5.83

The describing function of the relay with dead-zone and hysteresis is given by

KN(X, ω) = (Y1/X) ∠φ1 …5.84

where Y1 is given by equ (5.82) and φ1 is given by equ (5.83).

From the equ(5.84), the describing functions of the following three cases of relay can
be obtained.

1. Ideal relay
2. Relay with dead-zone
3. Relay with hysteresis

1. IDEAL RELAY

In this case D = H = 0.

Figure 5.26: Input-output characteristics of ideal relay

On substituting D = H = 0 in equ (5.82) and equ (5.83) we get,

Y1 = 4M/π and φ1 = 0

Hence the describing function of the ideal relay is given by,

KN(X, ω) = (Y1/X) ∠φ1 = 4M/(πX) …5.85

2. RELAY WITH DEAD-ZONE

In this case H = 0

On substituting H = 0 in equ (5.82) and equ (5.83) we get,

Figure 5.27: Input-output characteristics of relay with dead-zone

Hence the describing function of relay with dead-zone is given by

KN(X) = (4M/πX)√(1 - (D/2X)²), for X > D/2 …5.86

3. RELAY WITH HYSTERESIS

In this case D = H

On substituting D = H in equ (5.82) we get,

Figure 5.28: Input-output characteristics of relay with hysteresis

Y1 = 4M/π …5.87

On substituting D = H in equ (5.83) we get,

φ1 = tan⁻¹[-(H/2X) / √(1 - (H/2X)²)] …5.88

Using the numerator and denominator of equ (5.88) as two sides, we can construct a
right angle triangle as shown in Figure 5.29.

From Figure 5.29 we get,

φ1 = -sin⁻¹(H/2X) …5.89 (Figure 5.29)

Using equations (5.87) and (5.89), the describing function of relay with hysteresis can
be written as,

KN(X) = (4M/πX) ∠-sin⁻¹(H/2X), for X > H/2 …5.90
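Equations (5.82)-(5.84) can be collected into a single complex-valued routine that covers the
ideal relay, the relay with dead-zone and the relay with hysteresis as special cases. A minimal
sketch (Python; assumes X is large enough for the relay to switch, i.e., X > D/2):

import numpy as np

def kn_relay(X, M=1.0, D=0.0, H=0.0):
    """Describing function of a relay with dead-zone D and hysteresis H."""
    sa = D/(2*X)                          # sin(alpha), switch-on point, equ (5.72)
    sb = D/(2*X) - H/X                    # sin(beta), switch-off point, equ (5.74)
    ca, cb = np.sqrt(1 - sa**2), np.sqrt(1 - sb**2)
    A1 = (2*M/np.pi)*(ca + cb)            # in-phase fundamental coefficient
    B1 = (2*M/np.pi)*(sb - sa)            # quadrature coefficient (phase lag)
    return (A1 + 1j*B1)/X

print(abs(kn_relay(1.0)))                 # ideal relay: 4M/(pi X) ~ 1.273, equ (5.85)
print(abs(kn_relay(1.0, D=0.5)))          # relay with dead-zone, equ (5.86)
print(kn_relay(1.0, D=0.5, H=0.5))        # relay with hysteresis, equ (5.90)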

5.7 DESCRIBING FUNCTION OF BACKLASH NONLINEARITY

The input-output relationship of backlash nonlinearity is shown in Figure 5.30.

The response of the nonlinearity when the input is a sinusoidal signal (x = X sin ωt) is
shown in Figure 5.31.

In Figure 5.31, when ωt = (π - β), x = X - b

On substituting this value of x and ωt in the input signal, x = X sin ωt, we get

X - b = X sin (π - β)
X - b = X sin β

Figure 5.30: Input-output characteristic of backlash nonlinearity
sin β = 1 - (b/X) …5.91

β = sin⁻¹(1 - b/X) …5.92

The output can be divided into five regions in a period of 2π, and the output equations
for the five regions are given by equ (5.93).

Figure 5.31: Sinusoidal response of backlash nonlinearity

….5.93

Let Y1 = Amplitude of the fundamental harmonic component of the output.


φ1 = Phase shift of the fundamental harmonic component of the output with
respect to the input.

The describing function is given by, KN(X, ω) = (Y1/X) ∠φ1

where Y1 = √(A1² + B1²) and φ1 = tan⁻¹(B1/A1)

…5.94

The output y is given by three different equations in the range 0 to π, hence equ (5.94)
can be written as

…5.95

Put x = X sin ωt in equ (5.95)

…5.96

In equ (5.96)

…5.97

On substituting sin β = (1 - b/X) from equ (5.91) in equ (5.97) we get,

…5.98

…5.99

The output y is given by three different equations in the range 0 to π, hence equ (5.99)
can be expressed as,

…5.100

Put x = X sin ωt in equ (5.100)

…5.101

Since (1 - b/X) = sin β and cos 2β = 1 - 2 sin²β, the equ (5.98) can be written as

…5.102
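Because backlash has memory, its describing function is most easily verified by simulating the
play between the driving and the driven member and extracting the fundamental numerically.
The sketch below assumes the classical friction-controlled backlash model with total play b (an
illustrative assumption); the result for b/X = 0.4 can be compared with the corresponding entry
of Table 5.2.1 in Example 5.2 (|KN| ~ 0.882, phase ~ -13.4°).

import numpy as np

def backlash(x, b, y0=0.0):
    """Friction-controlled backlash: output follows x - b/2 while driving
    upward, x + b/2 while driving downward, and holds in between."""
    y, yp = np.empty_like(x), y0
    for i, xi in enumerate(x):
        if xi - yp > b/2:
            yp = xi - b/2
        elif yp - xi > b/2:
            yp = xi + b/2
        y[i] = yp
    return y

X, b, n = 1.0, 0.4, 4096
wt = np.linspace(0.0, 4*np.pi, 2*n, endpoint=False)  # two cycles of X sin(wt)
y = backlash(X*np.sin(wt), b)[n:]                    # keep the settled second cycle
th = wt[n:]
A1 = (2.0/n)*np.sum(y*np.sin(th))
B1 = (2.0/n)*np.sum(y*np.cos(th))
KN = (A1 + 1j*B1)/X
print(abs(KN), np.degrees(np.angle(KN)))             # ~0.88 and ~-13 degrees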

5.8 STABILITY ANALYSIS USING DESCRIBING FUNCTIONS

The Nyquist stability criterion can also be extended to the stability analysis of nonlinear
systems. According to the Nyquist stability criterion the system will exhibit sustained
oscillations or limit cycles when,

KN G(jω) = -1 …5.108

The equation (5.108) implies that sustained oscillations or limit cycles will occur if the
KN G(jω) locus passes through the critical point, -1 + j0, in the complex plane.

The equation (5.108) can be modified as shown below

G(j) = - 1/ KN …5.109

The equation (5.109) implies that the critical point -1 + j0 becomes the critical locus,
which is the locus of -1/KN. Hence the intersection point of the G(jω) locus and the -1/KN
locus will give the amplitude and frequency of the limit cycles.
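For a concrete case the intersection can be located analytically. In the minimal sketch below
(Python with SciPy) the plant G(s) = 1/[s(1+s)(1+0.5s)] and an ideal relay with output level M
are assumptions made purely for illustration: the -1/KN locus of an ideal relay is the negative
real axis, so the limit cycle sits at the phase-crossover frequency of G(jω), and its amplitude
follows from KN(X)|G(jω)| = 1, i.e., X = 4M|G(jω)|/π.

import numpy as np
from scipy.optimize import brentq

M = 1.0
G = lambda w: 1.0/(1j*w*(1 + 1j*w)*(1 + 0.5j*w))
phase = lambda w: -np.pi/2 - np.arctan(w) - np.arctan(0.5*w)   # unwrapped phase

w_c = brentq(lambda w: phase(w) + np.pi, 0.5, 5.0)   # phase crossover: -180 degrees
X = 4*M*abs(G(w_c))/np.pi                            # from KN(X)*|G(jw_c)| = 1
print(w_c, X)                                        # ~1.414 rad/s, ~0.42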

In the stability analysis, let us assume that the linear part of the system is stable. To
determine the stability of the system due to the nonlinearity, sketch the -1/KN locus and the
G(jω) locus (polar plot of G(jω)) in the complex plane (use either a polar graph sheet or an
ordinary graph sheet); from the sketches the following conclusions can be drawn.

1. If the -1/KN locus is not enclosed by the G(jω) locus, then the system is stable, i.e., there
is no limit cycle at steady state.

2. If the -1/KN locus is enclosed by the G(jω) locus, then the system is unstable.

3. If the -1/KN locus and the G(jω) locus intersect, then the system output may exhibit a
sustained oscillation or a limit cycle. The amplitude of the limit cycle is given by the
value of the -1/KN locus at the intersection point. The frequency of the limit cycle is given
by the frequency of G(jω) corresponding to the intersection point.

CONCEPT OF ENCLOSURE

In the complex plane, the -1/KN locus is said to be enclosed by the G(jω) locus if it lies in the
region to the right of an observer travelling along the G(jω) locus in the direction of increasing
ω, as shown in Figure 5.33.

In the complex plane, the -1/KN locus is not enclosed by the G(jω) locus if it lies in the region
to the left of an observer travelling along the G(jω) locus in the direction of increasing ω, as
shown in Figure 5.34.

If the -1/KN locus and the G(jω) locus intersect as shown in Figure 5.35, then for an observer
travelling along the G(jω) locus in the direction of increasing ω, the region on the right is the
unstable region and the region on the left is the stable region.

Figure 5.33: Enclosure of the -1/KN locus by the G(jω) locus
Figure 5.34: Non-enclosure of the -1/KN locus by the G(jω) locus
Figure 5.35: Intersection of the -1/KN locus and the G(jω) locus

STABLE AND UNSTABLE LIMIT CYCLES

The -1/KN locus may intersect the G(jω) locus at one or more points. There exists a limit
cycle at every intersection point. These limit cycles can be either stable or unstable,
as shown in Figure 5.36.

If the -1/KN locus travels in the unstable region and crosses the G(jω) locus to enter the stable
region, then the limit cycle corresponding to that intersection point is a stable limit cycle.

If the -1/KN locus travels in the stable region and crosses the G(jω) locus to enter the unstable
region, then the limit cycle corresponding to that intersection point is an unstable limit cycle.

Figure 5.36 Stable and unstable limit cycles

Note: The concept of enclosure can be extended to the db-phase angle plane (i.e., to the Nichols
plot) and it is the same as that of the complex plane.

5.9 REVIEW OF POLAR PLOT AND NICHOLS PLOT

POLAR PLOT

The polar plot of a sinusoidal transfer function G(jω) is a plot of the magnitude of
G(jω) versus the phase angle of G(jω) on polar coordinates as ω is varied from zero to infinity.
Thus the polar plot is the locus of the vector |G(jω)| ∠G(jω) as ω is varied from zero to
infinity. The polar plot is also called the Nyquist plot.

Figure 5.37: Polar graph
The polar plot is usually plotted on a polar graph sheet. The polar graph sheet has
concentric circles and radial lines. The circles represent the magnitude and the radial lines
represent the phase angles. Each point on the polar graph has a magnitude and phase angle.
The magnitude of a point is given by the value of the circle passing through that point and the
phase angle is given by the radial line passing through that point. On a polar graph sheet a
positive phase angle is measured anticlockwise from the reference axis (0°) and a negative
angle is measured clockwise from the reference axis (0°).

Alternatively, if G(j) can be expressed in rectangular coordinates as,


G(j) = GR(j) + jG1(j)
Where, GR(j) = Real part of G(j)
and G1(j) = Imaginary part of G(j)

Then the polar plot can be plotted on an ordinary graph sheet between GR(jω) and GI(jω)
as ω is varied from 0 to ∞.

To plot the polar plot, first compute the magnitude and phase of G(jω) for various
values of ω and tabulate them. Usually the choice of frequencies are the corner frequencies and
frequencies around the corner frequencies. Choose a proper scale for the magnitude circles. Fix
all the points on the polar graph sheet and join the points by a smooth curve. Write the
frequency corresponding to each point of the plot.

To plot the polar plot on an ordinary graph sheet, compute the magnitude and phase for
various values of ω. Then convert the polar coordinates to rectangular coordinates using the
P → R (polar to rectangular) conversion in the calculator. Sketch the polar plot using the
rectangular coordinates.
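The tabulation step is mechanical and is easily scripted. A minimal sketch (Python; the plant is
an assumed type-1 example used purely for illustration):

import numpy as np

w = np.array([0.1, 0.15, 0.2, 0.25, 0.5, 0.75, 1.0, 1.25])
G = 1.0/(1j*w*(1 + 1j*w)*(1 + 0.5j*w))     # assumed G(s) = 1/[s(1+s)(1+0.5s)]

mag = np.abs(G)                            # |G(jw)|
ph = np.degrees(np.angle(G))               # phase of G(jw) in degrees
GR, GI = G.real, G.imag                    # P -> R conversion

for row in zip(w, mag, ph, GR, GI):
    print("w=%5.2f |G|=%6.2f ph=%7.1f Re=%6.2f Im=%6.2f" % row)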

For minimum phase transfer functions with only poles, the type number of the system
determines the quadrant in which the polar plot starts, and the order of the system determines
the quadrant in which the polar plot ends.

Note: The minimum phase systems are systems with all poles and zeros on the left half of the
s-plane.

Figure 5.38: Start of polar plot
Figure 5.39: End of polar plot

NICHOLS PLOT

The Nichols plot is a frequency response plot of the open loop transfer function of a
system. The Nichols plot is a graph between the magnitude of G(jω) in db and the phase of
G(jω) in degrees, plotted on an ordinary graph sheet.

To plot the Nichols plot, first compute the magnitude of G(jω) in db and the phase of
G(jω) in degrees for various values of ω and tabulate them. Usually the choice of frequencies
are the corner frequencies. Choose appropriate scales for the magnitude on the y-axis and the
phase on the x-axis. Fix all the points on an ordinary graph sheet and join the points by a
smooth curve. Write the frequency corresponding to each point of the plot.

In another method, first the Bode plot of G(jω) is sketched. From the Bode plot the
magnitude and phase for various values of the frequency ω are noted and tabulated. Using
these values the Nichols plot is sketched as explained earlier.

In a system, if the zero frequency gain K is varied, then the magnitude of the transfer
function alone will vary and there will not be any change in phase. This results in a vertical
shift of the Nichols plot up or down. The constant K adds 20 log K to every point of the plot.
If 20 log K is positive the plot shifts upwards, and if it is negative the plot shifts downwards.
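The vertical-shift property is easy to confirm numerically: scaling the gain multiplies every
magnitude by K, which adds a constant 20 log K db while leaving the phase column untouched.
A minimal sketch (Python; the same assumed plant as above):

import numpy as np

w = np.array([0.1, 0.25, 0.5, 1.0])
G = 1.0/(1j*w*(1 + 1j*w)*(1 + 0.5j*w))     # K = 1 locus (assumed plant)

db_K1 = 20*np.log10(np.abs(G))
db_K2 = db_K1 + 20*np.log10(2.0)           # K = 2: every point moves up 6 db
print(np.round(db_K1, 1))
print(np.round(db_K2, 1))                  # phase values are unchanged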

EXAMPLE 5.2

A servo system used for positioning a load has backlash characteristics as shown in
Figure 5.2.1. The block diagram of the system is shown in Figure 5.2.2. The magnitude and
phase of the describing function of the backlash nonlinearity for various values of b/X are
listed in Table 5.2.1, where X = Maximum value of the input sinusoidal signal to the
nonlinearity.

Figure 5.2.1 Figure 5.2.2


Table 5.2.1

b/X    0     0.2     0.4     1.0     1.4     1.6     1.8     1.9     2.0
|KN|   1     0.954   0.882   0.592   0.367   0.248   0.125   0.064   0
∠KN    0°    -6.7°   -13.4°  -32.5°  -46.6°  -55.2°  -66°    -69.8°  -90°

Show that the system is stable if K = 1. Also show that limit cycles exist when K = 2.
Investigate the stability of these limit cycles and determine their frequency and b/X.

SOLUTION

The describing function analysis of the system can be carried out using either the polar
plot or the Nichols plot.

METHOD 1: USING POLAR PLOT

Polar plot of G(j) when K = 1

The magnitude and phase of G(j) are calculated for various values of  and tabulated
in Table 5.2.2. Using poloar to rectangular conversion the polar coordinates are converted
rectangular coordinates and listed in Table 5.2.2. The polar plot of G(j) when K = 1 drawn in
an ordinary graph sheet, as shown in Figure 5.2.3.

Figure 5.2.3: Polar plot of G(jω) and -1/KN

Table 5.2.2

ω rad/sec    0.1    0.15   0.2    0.25   0.5    0.75   1.0    1.25
|G(jω)|      9.94   6.57   4.88   3.85   1.74   1.0    0.63   0.42
∠G(jω) deg   -99    -103   -107   -111   -131   -147   -162   -173
GR(jω)       -1.6   -1.5   -1.4   -1.4   -1.1   -0.8   -0.6   -0.4
GI(jω)       -9.8   -6.4   -4.7   -3.6   -1.3   -0.5   -0.2   -0.05

Polar plot of G(j) when K = 2

The magnitude of G(j) when K = 2 is given by

(The phase of G(j) will not change due to a change in the value of K)

The magnitude and phase of G(j) and the real part and imaginary part of G(j) K = 2
are calculated for various values of  and listed in Table 5.2.3. The polar plot of 0 when
K = 2, is drawn on the same graph sheet using the same scales as shown in Figure.

Table 5.2.3

ω rad/sec    0.2    0.25   0.3    0.5    0.75   1.0    1.25
|G(jω)|      9.76   7.7    6.31   3.48   2.0    1.26   0.84
∠G(jω) deg   -107   -111   -115   -131   -147   -162   -173
GR(jω)       -2.9   -2.8   -2.7   -2.3   -1.7   -1.2   -0.8
GI(jω)       -9.3   -7.2   -5.7   -2.6   -1.1   -0.4   -0.1

Polar plot of – 1/KN

The function -1/KN can be written as, -1/KN = (1/|KN|) ∠(-180° - ∠KN).

The values of |KN| and ∠KN are given in the problem, in Table 5.2.1, for various values
of b/X. Using the values of Table 5.2.1, |-1/KN| and ∠(-1/KN) are calculated for various
values of b/X and listed in Table 5.2.4. Then the real and imaginary parts of -1/KN are
calculated using polar to rectangular conversion and listed in Table 5.2.4. The locus of -1/KN
is sketched using the rectangular coordinates on the same graph sheet, as shown in Figure 5.2.3.

Table 5.2.4

b/X                    0      0.2     0.4     1.0     1.4     1.6     1.8     1.9     2.0
|KN|                   1      0.954   0.882   0.592   0.367   0.248   0.125   0.064   0
∠KN                    0°     -6.7°   -13.4°  -32.5°  -46.6°  -55.2°  -66°    -69.8°  -90°
|-1/KN|                1      1.05    1.13    1.69    2.72    4.03    8.0     15.63   ∞
∠(-1/KN)               -180°  -173°   -166°   -148°   -133°   -125°   -114°   -110°   -90°
Real part of -1/KN     -1.0   -1.04   -1.1    -1.4    -1.9    -2.3    -3.3    -5.3    0
Imag. part of -1/KN    0      -0.1    -0.3    -0.9    -2.0    -3.3    -7.3    -14.7   ∞
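The last four rows of Table 5.2.4 follow mechanically from the first three, since -1/KN has
magnitude 1/|KN| and angle -180° - ∠KN. A sketch of the conversion (Python; values taken from
Table 5.2.1, omitting the two end points):

import numpy as np

KN_mag = np.array([0.954, 0.882, 0.592, 0.367, 0.248, 0.125, 0.064])
KN_ang = np.array([-6.7, -13.4, -32.5, -46.6, -55.2, -66.0, -69.8])   # degrees

inv_mag = 1.0/KN_mag                           # |-1/KN|
inv_ang = -180.0 - KN_ang                      # angle of -1/KN in degrees
pts = inv_mag*np.exp(1j*np.radians(inv_ang))   # rectangular coordinates
for m, a, p in zip(inv_mag, inv_ang, pts):
    print("%6.2f %8.1f %7.2f %+6.2fj" % (m, a, p.real, p.imag))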

STABILITY ANALYSIS

Case (i) when K = 1

When K = 1, G(j) locus does not enclose -1/KN locus, hence the system is stable.

Case (ii) K = 2

When K = 2, the G(j) locus, intersects -1/KN locus at two points. From the polar plots,
it is observed that at one intersection point, unstable limit cycle exits and at another intersection
point stable limit cycle exist.

From Figure 5.2.3, the coordinates corresponding to the unstable limit cycle
= -2.6 - j4.4 = 5.11 ∠-120°.

Let 11 = Frequency corresponding to unstable limit cycle.


And b/X1 = The value of b/X corresponding to unstable limit cycle
Now at  = 11 G(j) = 5.11 -120o
At  = 11 G(j) = -120o
By equating the expression for G(j) to -120o, the frequency 11 can be determined.
We know that, G(j) = -90o –tan-1  - tan-1 0.5
At  = 11 -90o – tan-1 11 –tan-1 0.511 = -120o
-90o –tan-1 11 + tan-1 0.511 = 120o
tan-1 11 + tan-1 0.511 = 120o – 90o = 30o
On taking tan on either side we get,
tan (–tan-1 11 + tan-1 0.511) = tan 30o
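The tan-addition identity tan(A + B) = (tan A + tan B)/(1 - tan A tan B) turns each phase
condition into a quadratic in ω. A quick numerical check (Python; the helper name is
illustrative):

import numpy as np

def freq_for_phase(target_deg):
    """Solve tan^-1(w) + tan^-1(0.5w) = target (degrees) for w > 0."""
    t = np.tan(np.radians(target_deg))
    # (1.5 w)/(1 - 0.5 w^2) = t  =>  0.5 t w^2 + 1.5 w - t = 0
    roots = np.roots([0.5*t, 1.5, -t]).real
    return roots[roots > 0][0]

print(freq_for_phase(30.0))   # ~0.36 rad/s (unstable limit cycle)
print(freq_for_phase(75.0))   # ~1.07 rad/s (stable limit cycle)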

From the describing function of the backlash nonlinearity we get,

On substituting ((π/2) + β1 + (1/2) sin 2β1) = 0.577 cos²β1 and then squaring we get,

We know that, β = sin⁻¹(1 - b/X)

∴ β1 = sin⁻¹(1 - b/X1) (or) b/X1 = 1 - sin β1 = 1 - sin 43.1° = 0.316
From Figure 5.2.3,
the coordinates corresponding to the stable limit cycle = -1.1 - j0.3 = 1.14 ∠-165°.

Let ω2 = Frequency corresponding to the stable limit cycle
and b/X2 = The value of b/X corresponding to the stable limit cycle.

Now at ω = ω2, G(jω) = 1.14 ∠-165°
i.e., at ω = ω2, ∠G(jω) = -165°

By equating the expression for ∠G(jω) to -165°, the frequency ω2 can be determined.

We know that, G(j) = -90o –tan-1  - tan-1 0.5
At  = 12 -90o – tan-1 12 –tan-1 0.511 = -165o
-90o –tan-1 12 + tan-1 0.512 = 165o – 90o = 75o
On taking tan on either side we get,
tan (tan-1 12 + tan-1 0.512) = tan 75o

On taking only the positive root we get,

Hence at  = 12, |KN| = 0.877 and KN = -15o.

From the describing function of the backlash nonlinearity we get β2 = 32.4°, and hence
b/X2 = 1 - sin β2 = 1 - sin 32.4° = 0.464.

RESULT

1. The unstable limit cycle exists when b/X = 0.316 and the frequency of oscillation is
0.36 rad/sec.
2. The stable limit cycle exists when b/X = 0.464 and the frequency of oscillation is 1.07
rad/sec.

METHOD 2 : USING NICHOLS PLOT

Nichols plot of G(j) when K = 1

Given that, G(s) = K / [s(1+s)(1+0.5s)]

Let K = 1 and put s = jω

The magnitude of G(j) in db and phase of G(j) are calculated for various values of
 and tabulated in Table 5.2.5. The Nichols plot of G(j) is sketched in an ordinary graph
sheet as shown in Figure 5.2.4.

Figure 5.2.4: Nichols plot of G(jω) and -1/KN

Table 5.2.5

ω rad/sec     0.1    0.15   0.2    0.25   0.5    0.75   1.0    1.25
|G(jω)| db    19.9   16.4   13.8   11.7   4.8    0      -4     -7.5
∠G(jω) deg    -99    -103   -107   -111   -131   -147   -162   -173

Nichols plot of G(j) when K = 2

When K = 2, the magnitude of G(jω) increases by an amount 20 log K = 20 log 2 = 6 db.
The phase of G(jω) is not altered.

The increase in magnitude is independent of frequency. Hence the G(jω) locus when
K = 2 is obtained by shifting the locus of G(jω) when K = 1 upwards by 6 db, as shown in
Figure 5.2.4.

Nichols plot of – 1/KN

The function -1/KN can be written as, -1/KN = (1/|KN|) ∠(-180° - ∠KN).

The magnitude and phase of the describing function of backlash, KN, are listed in the
problem in Table 5.2.1 for various values of b/X. Using the values of |KN| and ∠KN given in
Table 5.2.1, the values of |-1/KN| in db and ∠(-1/KN) are calculated for various values of b/X
and listed in Table 5.2.6. Using these values the locus of -1/KN is sketched as shown in
Figure 5.2.4.

Table 5.2.6

b/X               0      0.2     0.4     1.0     1.4     1.6     1.8     1.9     2.0
|KN|              1      0.954   0.882   0.592   0.367   0.248   0.125   0.064   0
∠KN               0°     -6.7°   -13.4°  -32.5°  -46.6°  -55.2°  -66°    -69.8°  -90°
|-1/KN| in db     0      0.4     1.0     4.6     8.7     12.1    18.1    23.9    ∞
∠(-1/KN) in deg   -180°  -173°   -166°   -148°   -133°   -125°   -114°   -110°   -90°

STABILITY ANALYSIS

Case (i) when K = 1

From the Nichols plot it is observed that when K = 1, the G(jω) locus does not enclose
the -1/KN locus. Hence the system is stable.

Case (ii) when K = 2

From the Nichols plot it is observed that when K = 2, the G(jω) locus intersects the
-1/KN locus at two points. At one intersection point an unstable limit cycle exists and at the
other intersection point a stable limit cycle exists.

The coordinates corresponding to the unstable limit cycle
= (14.2 db, -120°) = 10^(14.2/20) ∠-120° = 5.1 ∠-120°

The coordinates corresponding to the stable limit cycle
= (1.1 db, -165°) = 10^(1.1/20) ∠-165° = 1.14 ∠-165°

Note: It is observed that the coordinates corresponding to the limit cycles are the same as
those obtained from the polar plot; hence, by an analysis similar to that of Method 1, we
can determine the frequency and b/X corresponding to the limit cycles.

RESULT

1. The unstable limit cycle exists when b/X = 0.316 and the frequency of oscillation is
0.36 rad/sec.
2. The stable limit cycle exists when b/X = 0.464 and the frequency of oscillation is 1.07
rad/sec.

EXAMPLE 5.3

Consider the unity feedback system shown in Figure 5.3.1, having a saturating amplifier
with gain K. Determine the maximum value of K for the system to stay stable. What would be
the frequency and nature of the limit cycle for a gain of K = 2.5?

Figure 5.3.1

SOLUTION

The stability of the system can be analysed using the polar plot. The gain K of the
saturating amplifier can be attached to G(jω), and the amplifier is considered to be a unity
gain amplifier.

Polar plot of G(j) when K = 1

The magnitude and phase of G(j) are calculated for various values of  and listed in
Table 5.3.1. Using polar to rectangular conversion the real part and imaginary part of G(j) are
determined and listded in Table 5.3.1. The polar plot of G(j) is sketched in an orindary graph
sheet as shown in Figure 5.3.2.

Table 5.3.1

ω rad/sec   0.4     0.5     0.6     0.8     1.0     1.2
|G(jω)|     1.299   0.868   0.614   0.346   0.216   0.145
∠G(jω)      -159°   -167°   -174°   -184°   -192°   -199°
GR(jω)      -1.21   -0.85   -0.61   -0.35   -0.21   -0.14
GI(jω)      -0.47   -0.2    -0.06   0.02    0.04    0.05

Polar plot of G(j) when K = 2.5

The phase of G(j) is not altered by the term, K. The magnitude and phase of G(j)
when K = 2.5 are calculated for various values of  and listed in Table 5.3.2. Using plot to
rectangular conversion the real part and imaginary part of G(j) when K = 2.5 are determined
and listed in Table 5.3.2. The polar plot of G(j) when K = 2.5 is sketched in the same graph
sheet using the same scale,s as shown in Figure 5.3.2.

Table 5.3.2

ω rad/sec   0.6     0.65    0.75    0.8     1.0     1.2
|G(jω)|     1.535   1.313   0.987   0.865   0.54    0.363
∠G(jω)      -174°   -177°   -182°   -184°   -192°   -199°
GR(jω)      -1.53   -1.31   -0.99   -0.87   -0.53   -0.34
GI(jω)      -0.16   -0.07   0.03    0.06    0.11    0.12

Polar plot of -1/KN

The function -1/KN can be expressed as, -1/KN = (1/|KN|) ∠(-180° - ∠KN).

We know that the describing function KN of the saturation nonlinearity is given by

KN(X) = (2K/π)(β + sin β cos β), where β = sin⁻¹(S/X)

and X = Maximum value of the input sinusoidal signal.

Here, K = 1 and S = 1, hence β = sin⁻¹(1/X).

From the equation of -1/KN we can say that the locus of -1/KN starts at 1 ∠-180° (i.e.,
-1 + j0) and travels along the negative real axis for increasing values of X, as shown in
Figure 5.3.2. The locus of -1/KN is shown as a bold line on the negative real axis.

Figure 5.3.2: Polar plot of G(jω) and -1/KN

STABILITY ANALYSIS

Case (i) when K = 1

When K = 1, the G(j) locus does not encloses the -1/KN locus, hence the system is
stable.

Case (ii) Limiting value of K for stability

When K is increased, the G(jω) locus expands. For a particular value of K, the G(jω)
locus crosses the starting point (i.e., -1 + j0) of the -1/KN locus, and this value of K is the
limiting value of K for stability.

If G(j) cross negative real axis at -1+j0, then G(j) = -1 = 1  -180o

∴ |G(jω)| = 1 and ∠G(jω) = -180°

Let ω1 = Frequency at which G(jω) = -1

At ω = ω1: ∠G(jω) = -90° - tan⁻¹ 0.5ω1 - tan⁻¹ 4ω1 = -180°
tan⁻¹ 0.5ω1 + tan⁻¹ 4ω1 = 90°
On taking tan on either side we get,
tan (tan⁻¹ 0.5ω1 + tan⁻¹ 4ω1) = tan 90° = ∞

For the above equation to be infinity, the denominator should be zero.

Therefore the system remains stable if K < 2.25.
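The limiting gain can be cross-checked numerically: find the phase-crossover frequency of
G0(s) = 1/[s(1+0.5s)(1+4s)] and take Kmax = 1/|G0(jω)| there. A minimal sketch (Python with
SciPy; illustrative):

import numpy as np
from scipy.optimize import brentq

G0 = lambda w: 1.0/(1j*w*(1 + 0.5j*w)*(1 + 4j*w))
phase = lambda w: -np.pi/2 - np.arctan(0.5*w) - np.arctan(4*w)   # unwrapped phase

w_pc = brentq(lambda w: phase(w) + np.pi, 0.1, 5.0)   # phase-crossover frequency
K_max = 1.0/abs(G0(w_pc))
print(w_pc, K_max)                                    # ~0.707 rad/s and ~2.25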

Case (iii) when K = 2.5

When K = 2.5, the G(jω) locus intersects the -1/KN locus at -1.11 + j0. At the
intersection point a stable limit cycle exists.

Coordinate corresponding to the stable limit cycle = -1.11 + j0 = 1.11 ∠-180°

Let, 12 = Frequency of stable limit cycle


At  = 12, G(j) = 1.11  -180o
At = 12,  G(j) = -90o - tan-1 0.5 12 – tan-1 412 = -180o
tan-1 0.5 12 + tan-1 412 = 90o
On taking tan on either side we get,
tan (tan-1 0.512 + tan-1 412) = tan 90o

For the above equation to be infinity, the denominator should be zero.

∴ Frequency of the limit cycle = 1/√2 = 0.707 rad/sec

RESULT

1. When K = 1, the system is stable


2. The system remains stable if K < 2.25
3. When K = 2.5, a stable limit cycle occurs, whose frequency of oscillation is 0.707
rad/sec.

5.10 PHASE PLANE AND PHASE TRAJECTORIES

The phase plane method of analysis is a graphical method for the analysis of linear and
nonlinear systems. The analysis is carried out by constructing phase trajectories. It gives an
idea of the transient behaviour and stability of the system.

The phase plane analysis is usually restricted to second order systems excited by step or
ramp inputs. The technique can be extended to a higher order system if it can be approximated
as a second order system.

The dynamics of control systems can be represented by differential equations. A second
order linear system can be represented by the differential equation

d²x/dt² + 2ζωn (dx/dt) + ωn² x = 0 …5.110

where, x = One of the system variables (e.g., displacement in a mechanical system,
current in an electrical system, etc.)
ζ = Damping ratio
ωn = Natural frequency of oscillation.

The state of the second order system represented by equ (5.110) can be described by
choosing two state variables.

Note: Refer chapter 4 for state, state variables and state space modelling using phase
variables.

In state space modelling using phase variables, we choose one of the system variables
and its derivatives as state variables. Let x1 and x2 be the state variables of the second order
system.

Here x1 = x and x2 = dx/dt …5.111

On substituting the state variables in equ (5.110) we get,

dx2/dt + 2ζωn x2 + ωn² x1 = 0 …5.112

The state equations of the system are obtained from equations (5.111) and (5.112). The
state equations are,

dx1/dt = x2 …5.113

dx2/dt = -ωn² x1 - 2ζωn x2 …5.114

For linear systems the state equations are a set of first order linear differential equations,
and the solutions of the state equations can be easily obtained by integration. But for nonlinear
systems, the state equations are a set of first-order nonlinear differential equations, and solving
the nonlinear differential equations will not be an easy task. Hence for nonlinear systems the
phase plane method of analysis is a useful tool.
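Equations (5.113) and (5.114) are readily integrated numerically to draw a phase trajectory,
which is also the practical route for the nonlinear case. A minimal sketch (Python; the
fixed-step classical RK4 integrator and the parameter values are illustrative assumptions):

import numpy as np

zeta, wn = 0.5, 1.0                     # damping ratio and natural frequency (assumed)

def f(x):                               # state equations (5.113)-(5.114)
    x1, x2 = x
    return np.array([x2, -wn**2*x1 - 2*zeta*wn*x2])

x, h, traj = np.array([1.0, 0.0]), 0.01, []
for _ in range(2000):                   # classical fourth-order Runge-Kutta steps
    traj.append(x.copy())
    k1 = f(x); k2 = f(x + h*k1/2); k3 = f(x + h*k2/2); k4 = f(x + h*k3)
    x = x + (h/6)*(k1 + 2*k2 + 2*k3 + k4)

traj = np.array(traj)                   # columns: x1 (position), x2 (velocity)
print(traj[-1])                         # approaches the origin for 0 < zeta < 1

Plotting x2 against x1 gives the phase trajectory; for 0 < ζ < 1 it is a spiral converging on the
stable focus at the origin.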

QUESTON BANK
PART A

1. Formulate the choice of state variables?


2. Choose the basic elements used to construct the state diagram?
3. Create the general form of state model of nth order system?
4. The drawback of transfer function model compare with state space model.
5. Compose the Phase variables of a linear time invariant system?
6. Estimate how the modal matrix can be determined
7. Construct the bush or companion form of state model
8. A system is characterized by the differential equation d²y/dt² + 10 dy/dt + 7y - u = 0.
Formulate its transfer function.
9. Estimate the path to diagonalise a matrix
10. Estimate the Eigen values and Eigen vector?
11. Examine the solution of homogenous state solutions.
12. List the solution of non-homogenous state equations.
13. What is a resolvent matrix?
14. List the different methods available for computing 𝑒 𝐴𝑡 ?
15. Enumerate the properties of state transition matrix.
16. What is state transition matrix?
17. Define the characteristic equation of a matrix.
18. State cayley-Hamilton theorem.
19. List the disadvantage of state transition matrix using matrix exponential?
20. Illustrate the canonical form of state model?
21. Predict the condition for observability by Gilbert’s method.
22. Predict the condition for controllability by Kalman’s method.
23. Define observability
24. Define Controllability
25. What is pole placement by state feedback?
26. Write the Ackermann’s formula to find the state feedback gain matrix, k.
27. Write the observable phase variable form of state model.
28. Write the controllable phase variable form of state model.
29. Correlate the duality between controllability and observability.
30. What is state observer?
31. Define periodic sampling?
32. Explain Shannon's sampling theorem.
33. Define pulse transfer function?
34. Define Zero order hold?
35. Compare analog and digital controller.
36. Discuss sampled data control systems?
37. Express one sided Z-transform.
38. Compute the infinite and finite geometric series sum formula.
39. Classify the different methods available for inverse Z-transform?
40. List the methods available for the stability analysis of sampled data control systems?
41. Compare the different kinds of nonlinearities. Give examples.
42. List the properties of nonlinear systems.
43. Explain jump resonance?

44. Explain how limit cycles are formed?
45. Define a describing function?
46. List the different types of friction?
47. Explain hysteresis and backlash?
48. Classify the methods available for the analysis of nonlinear system?
49. Explain the nonlinearities that are introduced in the systems?
50. Trace the input-output characteristic of a relay with dead zone and hysteresis.

PART- B

1. Develop the state model of electro mechanical system whose speed can be controlled
below the rated value.

2. Construct the canonical state model of the system, whose transfer function is
2(𝑠+5)
𝑇(𝑠 ) = [(𝑠+2)(𝑠+3)(𝑠+4)]
𝑌(𝑠) 10(𝑠+4)
3. A feedback system has a closed-loop transfer function 𝑈(𝑠) = 𝑠(𝑠+1)(𝑠+3) Construct
state model for this system and give block diagram for the state model.

4. Develop the state model for Ward Leonard system


5. A linear time invariant system is described by the state model Ẋ = AX + BU, y = CX, where
A = [0 1 0; 0 0 1; -6 -11 -6], B = [0; 0; 2], C = [1 0 0].
Formulate this state model into a canonical state model.

6. A linear time invariant system is described by the state model Ẋ = AX + BU, y = CX, where
A = [0 0 1; -2 -3 0; 0 2 -3], B = [0; 0; 2], C = [1 0 0].
Modify this state model into a canonical state model.

7. Given that A1 = [σ 0; 0 σ], A2 = [0 ω; -ω 0] and A = [σ ω; -ω σ], inspect e^(At).

8. A linear time invariant system is described by the state model Ẋ = AX + BU, y = CX, where
A = [0 1 0; 0 0 1; -6 -11 -6], B = [0; 0; 2], C = [1 0 0].
Compute the state transition matrix, e^(At).

9. Discover the solution of Non Homogeneous state equations.

10. For a system represented by the state equation Ẋ(t) = AX(t), the response is
X(t) = [e^(-2t); -2e^(-2t)] when X(0) = [1; -2] and X(t) = [e^(-t); -e^(-t)] when X(0) = [1; -1].
Examine the system matrix A and the state transition matrix.

11. For A = [0 1; -2 -3], determine the state transition matrix e^(At) using the Cayley-Hamilton
theorem.

12. A linear time invariant system is characterised by the homogeneous state equation
[Ẋ1; Ẋ2] = [1 0; 1 1][X1; X2]. Compute the solution of the homogeneous equation, assuming
the initial state vector X0 = [1; 0].

13. Consider a linear system described by the transfer function Y(s)/U(s) = 10/[s(s+1)(s+2)].
Design a feedback controller with state feedback so that the closed loop poles are placed at
-2, -1+j1, -1-j1.

14. The state model of a system is given by Ẋ = AX + BU, y = CX, where
A = [0 0 1; -2 -3 0; 0 2 -3], B = [0; 0; 2], C = [1 0 0].
Formulate the state model to observable phase variable form.

15. Consider the system described by the state model Ẋ = AX, Y = CX, where
A = [-1 1; 1 -2] and C = [1 0]. Design a full-order state observer. The desired eigenvalues
for the observer matrix are µ1 = -5, µ2 = -5.

17. The state model of a system is given by Ẋ = AX + BU, y = CX, where
A = [0 0 1; -2 -3 0; 0 2 -3], B = [0; 0; 2], C = [1 0 0].
Formulate the state model to controllable phase variable form.

18. The state model of a system is given by Ẋ = AX + BU, y = CX, where
A = [0 0 1; -2 -3 0; 0 2 -3], B = [0; 0; 2], C = [1 0 0].
Test whether the system is completely controllable and observable by Kalman's test.

19. A single-input system is described by the state equation Ẋ = AX + BU, where
A = [-1 0 1; 1 -2 0; 2 1 -3], B = [10; 1; 0].
Design a state feedback controller which will give closed-loop poles at -1+j2, -1-j2, -6.

20. Estimate the analysis of sampling process in frequency domain.

21. Determine the Z-transform for the following discrete sequences: (a) f(k) = {3, 2, 5, 7}
(b) f(k) = (1/2)^k u(k) (c) f(k) = k².

22. Determine C(Z)/R(Z) for the given closed loop sampled data control systems. Assume
the sampler to be of impulse type.

23. Evaluate the difference equation c(k+2) + 3c(k+1) + 2c(k) = u(k), given that c(0) = 1,
c(1) = -3 and c(k) = 0 for k < 0.

24. Estimate the stability of sampled data control systems represented by the following
characteristic equation: z^4 - 1.7z^3 + 1.04z^2 + 0.024 = 0.

25. Determine the one sided z-transform of the discrete sequence generated by mathematically
sampling the continuous time function f(t) = cos ωt.

26. Assess the describing function. Derive the describing function of a relay with
hysteresis and dead zone.

27. (a). Explain Liapunov stability and instability theorems.


(b). Determine the stability range for the gain ‘k’ of the system shown in the figure.

28. (a) Determine Krasovskii's theorem of stability.
(b) Consider the nonlinear system ẋ1 = -x1 - x2², ẋ2 = -x2. Justify the stability of the
equilibrium points using Krasovskii's method.

30. Estimate the describing function of Dead-zone and saturation nonlinearity.

31. Consider a unity feedback system as shown in the figure below, having a saturating
amplifier with gain k. Determine the maximum value of k for the system to stay stable.

32. Estimate the describing function of saturation nonlinearity.
