Chapter 3: Linear Time-Invariant Systems
3.1 MOTIVATION
Continuous-time and discrete-time systems that are both linear and time-invariant (LTI) play a central role in
digital signal processing, communication engineering and control applications:
Many physical systems are either LTI or approximately so.
Many efficient tools are available for the analysis and design of LTI systems (e.g. spectral analysis).
Consider the general input-output block diagram of a system. The response y(t) of a system with impulse
response h(t) to an input signal x(t) is found by a convolution operation, which takes into account the complete
history of the signal and the information stored in the system memory:
y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) \, d\tau    (3.1a)

y[n] = x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k] h[n - k]    (3.1b)
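For finite-length sequences the sum in (3.1b) can be evaluated directly. The following minimal MATLAB sketch (the example sequences are arbitrary) implements the convolution sum with an explicit double loop and compares the result with the built-in conv function:

% Direct evaluation of the convolution sum (3.1b) for finite-length
% sequences assumed to start at n = 0.
x = [1 2 3 4];                         % example input sequence
h = [1 -1 2];                          % example unit pulse response
y = zeros(1, length(x)+length(h)-1);
for n = 1:length(y)
    for k = 1:length(x)
        if (n-k+1 >= 1) && (n-k+1 <= length(h))
            y(n) = y(n) + x(k)*h(n-k+1);   % y[n] = sum_k x[k] h[n-k]
        end
    end
end
disp(y)            % same result as the built-in routine below
disp(conv(x, h))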
Note that h[n] is called both impulse response and unit pulse response.
Why impulse response or unit pulse response?
Let the input to the continuous LTI system be x(t) = \delta(t). Then from the definition of the convolution operation
we write:

y(t) = \delta(t) * h(t) = \int_{-\infty}^{\infty} \delta(\tau) h(t - \tau) \, d\tau = h(t)    (3.2)
The last integral above is obtained from the Sifting Theorem definition of the delta function of Chapter 2.
3.2.1 Commutativity Property: The order of the two signals in a convolution can be interchanged:

y(t) = x(t) * h(t) = h(t) * x(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) \, d\tau = \int_{-\infty}^{\infty} h(\tau) x(t - \tau) \, d\tau    (3.3a)

y[n] = x[n] * h[n] = h[n] * x[n] = \sum_{k=-\infty}^{\infty} x[k] h[n - k] = \sum_{k=-\infty}^{\infty} h[k] x[n - k]    (3.3b)
3.2.2 Associativity Property: This property forms the basis for cascade (series) systems:
y(t) = [x(t) * h_1(t)] * h_2(t) = x(t) * [h_1(t) * h_2(t)] = x(t) * h_S(t)    (3.4)

where h_S(t) represents the cascade connection of the two subsystems h_1(t) and h_2(t):

h_S(t) = h_1(t) * h_2(t)    (3.5)
The combined cascade impulse response h_S(t) is equal to the convolution of the impulse responses of the
individual subsystems. This result is easily extended to the series connection of many systems via repeated
application of the associativity property.
3.2.3 Distributivity Property: This property, on the other hand, forms the basis for parallel systems.
y(t) = [x(t) * h_1(t)] + [x(t) * h_2(t)] = x(t) * [h_1(t) + h_2(t)] = x(t) * h_P(t)    (3.6)
As in the previous case, hP(t) corresponds to the parallel combination of two subsystems.
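These three properties are easy to verify numerically in discrete time. A minimal MATLAB sketch with arbitrarily chosen finite-length sequences:

% Numerical check of commutativity, associativity and distributivity.
x  = [1 2 -1 3];  h1 = [1 0.5];  h2 = [2 -1 1];
c1 = conv(x, h1);              c2 = conv(h1, x);              % commutativity
a1 = conv(conv(x, h1), h2);    a2 = conv(x, conv(h1, h2));    % associativity
d1 = conv(x, [h1 0]) + conv(x, h2);                           % distributivity
d2 = conv(x, [h1 0] + h2);                                    % h1 zero-padded to the length of h2
disp([max(abs(c1-c2)) max(abs(a1-a2)) max(abs(d1-d2))])       % all three differences are zero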
3.2.4 Linear Convolution Examples (Elementary):
Example 3.2: Convolution of signals with delta and unit-step functions.
x(t) * \delta(t) = \int_{-\infty}^{\infty} x(\tau) \delta(t - \tau) \, d\tau = x(t)

x(t) * \delta(t - t_0) = \int_{-\infty}^{\infty} \delta(\tau - t_0) x(t - \tau) \, d\tau = x(t - t_0)    (3.7)

x(t) * u(t) = \int_{-\infty}^{\infty} x(\tau) u(t - \tau) \, d\tau = \int_{-\infty}^{t} x(\tau) \, d\tau    (3.8)
Observation:
The convolution of any function with a delta function gives back the original function, and the convolution of any
function with a shifted delta function results in a shifted replica of the original function.
Convolving a signal with a unit-step function is equivalent to passing it through a perfect integrator.
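The same observation holds in discrete time: convolving a finite-length signal with a unit-step sequence produces its running sum. A minimal sketch:

% Convolution with a unit step acts as an accumulator (discrete analog of (3.8)).
x = [3 1 4 1 5];
u = ones(1, length(x));      % unit-step samples over the interval of interest
y = conv(x, u);
disp(y(1:length(x)))         % equals the running sum below
disp(cumsum(x))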
Example 3.3: Time Averaging
Time averaging is frequently employed to find the average behavior, or mean, of signals and data.
x_{ave}(t) = \frac{1}{T} \int_{t}^{t+T} x(\tau) \, d\tau    (3.9)

           = \frac{1}{T} \, x(t) * [u(t + T) - u(t)]    (3.10)
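A discrete-time counterpart of (3.9)-(3.10) is the moving average, obtained by convolving the data with a rectangular window of unit area. A minimal MATLAB sketch (test signal and window length are arbitrary choices):

% Moving average as convolution with a rectangular window (cf. (3.10)).
T  = 10;                                    % averaging window, in samples
n  = 0:99;
x  = sin(2*pi*n/25) + 0.3*randn(size(n));   % noisy test signal
w  = ones(1, T)/T;                          % rectangular window of unit area
xa = conv(x, w);
plot(n, x, n, xa(1:length(n)))
legend('x[n]', 'moving average')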
Example 3.4: Response of a capacitive circuit to a switched DC voltage, where the input and the system impulse
response are simply:

x(t) = V u(t) and h(t) = A e^{-at} u(t), where a > 0    (3.11)
The task is to compute:

y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) \, d\tau
y(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) \, d\tau = \int_{-\infty}^{\infty} V A e^{-a(t-\tau)} u(\tau) u(t - \tau) \, d\tau    (3.12a)

The limits of integration are critical and are determined by the non-zero segment of the product of the two step
functions:

u(\tau) \cdot u(t - \tau)    (3.12b)
Let us now find these non-zero segments with graphical support:
[Sketches of u(\tau) and u(t - \tau): for Case 1 (t < 0) the two steps do not overlap; for Case 2 (t > 0) they overlap on 0 \le \tau \le t.]
Case 1: t < 0
Since the product has no non-zero segment, as is clear from the sketches, the integral in (3.12a) is also zero:

y(t) = AV \int 0 \, d\tau = 0    (3.13a)
Case 2: t > 0
Since the non-zero segment of the product is the region between 0 and t, as shown above, the limits of the
integral this time become 0 and t:

y(t) = AV \int_{0}^{t} e^{-a(t-\tau)} \, d\tau = AV e^{-at} \int_{0}^{t} e^{a\tau} \, d\tau = \frac{AV}{a} e^{-at} (e^{at} - 1) = \frac{AV}{a} (1 - e^{-at})    (3.13b)
When we combine these two results into a single equation using a unit step function we have the final
answer:
y(t) = \frac{AV}{a} (1 - e^{-at}) \, u(t)    (3.14)
% Numerical convolution of a decaying exponential with a unit-step input
% (Example 3.4 with A = V = a = 1).
dt = 0.05;                 % sampling interval
t  = 0:dt:1;
h  = exp(-1*t);            % h(t) = exp(-a*t) with a = 1
x  = ones(size(t));        % x(t) = u(t) over the plotted interval
y  = conv(h, x)*dt;        % scale by dt to approximate the integral
plot(t, y(1:length(t)))
title('Numerical convolution');
xlabel('Time, Seconds'); ylabel('Approximation of y(t)')
grid on
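The numerical result can be checked against the closed-form answer (3.14); with A = V = a = 1 the exact response is 1 - e^{-t}. Continuing the script above:

% Overlay the exact answer from (3.14) on the numerical approximation.
hold on
plot(t, 1 - exp(-1*t), '--')
legend('numerical convolution', 'exact 1 - e^{-t}')
hold off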
Example 3.5: Convolution of two identical rectangular pulses of width 2a, x(t) = h(t) = u(t + a) - u(t - a):

y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} [u(\tau + a) - u(\tau - a)] \cdot [u(t - \tau + a) - u(t - \tau - a)] \, d\tau    (3.15)

As in Example 3.4, we need to determine the segments of \tau over which the two bracketed terms have a non-zero
product. Careful observation, with the following graphical support, shows that there are four distinct cases.
[Sketches of x(\tau) and the sliding pulse h(t - \tau), which occupies t - a \le \tau \le t + a: Case 1 (t < -2a, no overlap) and Case 2 (-2a < t < 0, partial overlap).]
Case 1: t < -2a
There is no overlapping segment of the two pulses, so the integral yields zero:

y(t) = 0 for t < -2a    (3.18a)
Case 2: -2a < t < 0
The interval between -a and t + a is common to both pulses, and the integral becomes:

y(t) = \int_{-a}^{t+a} 1 \, d\tau = t + 2a    (3.18b)
By sliding the lower pulse (the system function) in the figure above to the right we get the two remaining cases.
Case 3: 0 < t < 2a
The interval between t - a and a is common to both pulses, and we get:

y(t) = \int_{t-a}^{a} 1 \, d\tau = 2a - t    (3.19)
Case 4: t > 2a
Again there is no overlapping segment of the two pulses, and the output is zero:

y(t) = 0 for t > 2a
All of these cases can be written in a compact form:

y(t) = \begin{cases} t + 2a & -2a < t < 0 \\ 2a - t & 0 < t < 2a \\ 0 & \text{otherwise} \end{cases}    (3.20)

[Plot of y(t): a triangle of height 2a on the interval -2a \le t \le 2a.]
We can conclude that the convolution of two identical pulses is a triangle. What would be the shape for two
rectangles of different widths?
% Discrete convolution of two rectangular pulses of different widths.
n = 0:60;
x = zeros(size(n)); x(6:15)  = 1;   % pulse of width 10 samples
h = zeros(size(n)); h(11:30) = 1;   % pulse of width 20 samples
y = conv(h, x);
stem(n, y(1:length(n)));
title('Discrete Convolution of Two Pulses');
xlabel('Time, Seconds');
ylabel('Approximation of y[n]')
grid on
Example 3.7: Consider the following signal and system sequences:

h[n] = {2, 2, 0, 1, 1} for -1 \le n \le 3
x[n] = {1, 3, 1, 2} for -1 \le n \le 2

Note that these two sequences have different lengths, as in Example 3.6. It is not difficult to see that the output
sequence y[n] will be eight samples long in the interval -2 \le n \le 5, and zero elsewhere. Let us verify that with a
linear convolution table.
[Linear convolution table for -2 \le n \le 5: the rows x[n+1], x[n], x[n-1], x[n-2], x[n-3] hold shifted copies of the input sequence; the rows h[-1]x[n+1], h[0]x[n], h[1]x[n-1], h[2]x[n-2], h[3]x[n-3] scale those copies by the corresponding h[k]; summing the scaled rows column by column gives y[n].]
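The tabular computation can be checked with the built-in conv function. A minimal sketch using the sequences as listed above (the starting indices are assumptions read off the table row labels):

% Linear convolution of the Example 3.7 sequences.
h = [2 2 0 1 1];        % assumed to hold h[-1] ... h[3]
x = [1 3 1 2];          % assumed to hold x[-1] ... x[2]
y = conv(h, x);         % eight samples, corresponding to n = -2 ... 5
n = -2:5;
disp([n; y])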
When both sequences are periodic with a common period N, the convolution is taken over a single period:

y[n] = x[n] \circledast h[n] = \sum_{k=0}^{N-1} x[k] \, h[n - k]    (3.21)

where \circledast represents this periodic or circular convolution operation and the sum is over N terms. The result
of (3.21) is itself periodic, which follows from the property h[n + rN] = h[n]:
y[n + rN] = \sum_{k=0}^{N-1} x[k] \, h[n + rN - k] = \sum_{k=0}^{N-1} x[k] \, h[n - k] = y[n]    (3.22)
Since the sum is finite, we can write out the full expression as a straightforward expansion:
y[n] = x[0] h[n] + x[1] h[n-1] + x[2] h[n-2] + \cdots + x[N-1] h[n-N+1]    (3.23)
and use the tabular form to compute the circular convolution of two periodic functions.
Example 3.8: Consider the following system and signal sequences:

x[n] = {1, 2, 0, 1}
h[n] = {1, 3, 1, 2}

Note that these two sequences have a common period of 4 samples. It is not difficult to see that the output
sequence y[n] will again be 4 samples long in the interval 0 \le n \le 3 and will repeat itself. Let us verify that with a
circular convolution table.
[Circular convolution table for 0 \le n \le 3: the rows x[n], x[n-1], x[n-2], x[n-3] hold circular shifts of one period of the input; the rows h[0]x[n], h[1]x[n-1], h[2]x[n-2], h[3]x[n-3] scale those shifts by the corresponding h[k]; summing the scaled rows column by column gives one period of y_c[n].]
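The circular convolution sum (3.21) can also be evaluated directly with modulo indexing. A minimal MATLAB sketch using one period of each sequence as listed:

% Circular (periodic) convolution over one period, per (3.21).
x = [1 2 0 1];  h = [1 3 1 2];
N = length(x);
yc = zeros(1, N);
for n = 0:N-1
    for k = 0:N-1
        yc(n+1) = yc(n+1) + x(k+1)*h(mod(n-k, N)+1);   % index n-k taken modulo N
    end
end
disp(yc)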
Consider now an LTI system described by a linear constant-coefficient ordinary differential equation, written in
terms of the differential operator D = d/dt:

D^N y(t) + \sum_{j=0}^{N-1} a_j D^j y(t) = \sum_{i=0}^{M} b_i D^i x(t)    (3.25)
For simplicity and nice symmetrical behavior let us assume that N=M=System Order.
D^N y(t) = \sum_{i=0}^{N} b_i D^i x(t) - \sum_{j=0}^{N-1} a_j D^j y(t)    (3.26)
In order to solve (3.26) we need N initial conditions for the output: { y(0), y'(0), \ldots, y^{(N-1)}(0) }. For EE
applications, it is more common to replace the above ODE with an equivalent integral equation and use the
integral operator D^{-1}.
D^{-N} \{ D^N y(t) \} = D^{-N} \left\{ \sum_{i=0}^{N} b_i D^i x(t) - \sum_{i=0}^{N-1} a_i D^i y(t) \right\}

y(t) = \left\{ \sum_{i=0}^{N} b_i D^{i-N} \right\} x(t) - \left\{ \sum_{i=0}^{N-1} a_i D^{i-N} \right\} y(t)

y(t) = b_N x(t) + \sum_{i=0}^{N-1} \left[ b_i D^{i-N} x(t) - a_i D^{i-N} y(t) \right]    (3.27)
If we collect terms of the same order into pairs we obtain another frequently encountered form:
y(t) = b_N x(t) + D^{-N} [b_0 x(t) - a_0 y(t)] + D^{-N+1} [b_1 x(t) - a_1 y(t)] + \cdots + D^{-1} [b_{N-1} x(t) - a_{N-1} y(t)]    (3.28)
These last two equations, (3.27) and (3.28), lead to canonical (standard) implementation forms built from three
simple building blocks:
1. Integrator D^{-1}:
If y(t_0) = 0 then the system is said to be at rest and we have the usual case:

y(t) = \int_{t_0}^{t} x(\tau) \, d\tau, \quad t \ge t_0    (3.29)

2. Adder (Accumulator):

y(t) = x_1(t) + x_2(t)

3. Scalar Multiplier K:

y(t) = K \cdot x(t)
We will next use these three fundamental building blocks to implement systems expressed in terms of ODE
and/or integral equations.
Example 3.9: Implement and solve the following system using building blocks.
\frac{d}{dt} y(t) + a \, y(t) = b \, x(t)    (3.30)
Since the highest-order derivative is dy/dt, the order of this system is 1. Let us convert this first-order
ODE into operational form:
D y(t) + a \, y(t) = b \, x(t)    (3.31)

D^{-1} [D y(t) + a \, y(t)] = D^{-1} b \, x(t)
y(t) + a D^{-1} y(t) = b D^{-1} x(t)
Finally, we have the form ready for implementation using the set of blocks discussed above:
y(t) = -a D^{-1} y(t) + b D^{-1} x(t)    (3.32)

[Block diagram of (3.32): the input x(t) scaled by b and the output fed back through the gain -a are summed and applied to an integrator D^{-1}, whose output is y(t).]
The ODE has a total solution composed of a homogeneous solution (natural response) y_h(t) and a particular
solution (forced response) y_p(t):

y(t) = y_h(t) + y_p(t)    (3.33)
Homogeneous equation and its solution:

D y_h(t) + a \, y_h(t) = 0    (3.34)
Assume a homogeneous solution of the form y_h(t) = C e^{-at}, and a particular solution of the form:

y_p(t) = \int_{t_0}^{t} e^{-a(t-\tau)} \, b \, x(\tau) \, d\tau, \quad t \ge t_0    (3.35)

so that the total solution is

y(t) = C e^{-at} + b \int_{t_0}^{t} e^{-a(t-\tau)} x(\tau) \, d\tau    (3.36)

Applying the initial condition y(t_0) = Y_0:

Y_0 = C e^{-a t_0} + b \int_{t_0}^{t_0} e^{-a(t_0-\tau)} x(\tau) \, d\tau = C e^{-a t_0} + 0 \;\Rightarrow\; C = Y_0 e^{a t_0}    (3.37)
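As a numerical check of this example, the first-order ODE (3.30) can be integrated with a simple forward Euler step and compared with the closed-form step response obtained from (3.36); the values a = 2, b = 1, a unit-step input and zero initial state are arbitrary choices:

% Euler integration of dy/dt + a*y = b*x for a unit-step input,
% compared with the closed-form step response (b/a)(1 - exp(-a*t)).
a = 2; b = 1;
dt = 1e-3;  t = 0:dt:3;
x = ones(size(t));                 % unit-step input
y = zeros(size(t));                % y(0) = 0: system at rest
for k = 1:length(t)-1
    y(k+1) = y(k) + dt*(-a*y(k) + b*x(k));   % integrator form of (3.32)
end
plot(t, y, t, (b/a)*(1 - exp(-a*t)), '--')
legend('Euler simulation', 'closed form')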
I/O-Bus Form
[Figure: I/O-Bus canonical realization of (3.27)/(3.28): an input bus distributes x(t) through the gains b_0, b_1, \ldots, b_N, an output bus collects the feedback terms through the gains -a_0, -a_1, \ldots, -a_{N-1}, and a chain of integrators D^{-1} connects the two buses to produce y(t).]
Example 3.10: Assume that the following system is at rest at t = 0; i.e., all initial values are zero at t = 0.
\frac{d^2}{dt^2} y(t) - 4 \frac{d}{dt} y(t) + y(t) = 3 \frac{d}{dt} x(t) + 2 x(t)

D^2 y(t) = 3 D x(t) + 2 x(t) + 4 D y(t) - y(t)
D^{-2} [D^2 y(t)] = D^{-2} [3 D x(t) + 2 x(t) + 4 D y(t) - y(t)]
y(t) = D^{-1} [3 x(t) + 4 y(t)] + D^{-2} [2 x(t) - y(t)]
Canonical Form I:
[Block diagram: Canonical Form I realization of y(t) = D^{-1}[3x(t) + 4y(t)] + D^{-2}[2x(t) - y(t)], with an input bus (I-Bus), an output bus (O-Bus), and two integrators D^{-1}.]
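The two-integrator structure can be exercised numerically with a forward Euler step; writing v = D^{-1}[2x - y] and y = D^{-1}[3x + 4y + v] reproduces the equation above. A minimal sketch (the unit-step input is an arbitrary choice, and since the characteristic roots of this system lie in the right half-plane the response grows, so only a short interval is simulated):

% Euler simulation of y'' - 4y' + y = 3x' + 2x via the integral form
% y = D^{-1}[3x + 4y + v],  v = D^{-1}[2x - y],  zero initial state.
dt = 1e-4;  t = 0:dt:1;
x = ones(size(t));                 % unit-step input
y = zeros(size(t));  v = 0;
for k = 1:length(t)-1
    y(k+1) = y(k) + dt*(3*x(k) + 4*y(k) + v);   % outer integrator
    v      = v + dt*(2*x(k) - y(k));            % inner integrator
end
plot(t, y); xlabel('t'); ylabel('y(t)')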
The discrete-time counterpart is a system described by a linear constant-coefficient difference equation:

\sum_{k=0}^{N} a_k y[n-k] = \sum_{k=0}^{M} b_k x[n-k], \quad n \ge 0    (3.38)

In operator notation, with D now denoting the unit-delay operator, D^k x[n] = x[n-k]:

\sum_{k=0}^{N} a_k D^k y[n] = \sum_{k=0}^{M} b_k D^k x[n], \quad n \ge 0    (3.40)
However, in (3.40) the output is expressed only implicitly, buried among the feedback terms. It is usually
rewritten in the following explicit form:
y[n] = \frac{1}{a_0} \left( \sum_{k=0}^{M} b_k D^k x[n] - \sum_{k=1}^{N} a_k D^k y[n] \right), \quad n \ge 0    (3.41)
I/O-Bus Form
[Figure: I/O-Bus canonical realization of (3.41): an input bus distributes x[n] through the gains b_0, b_1, \ldots, b_N, an output bus collects the feedback terms through the gains -a_0, -a_1, \ldots, -a_{N-1}, and a chain of unit delays D^{-1} connects the two buses to produce y[n].]
It is clear from (3.41) and the canonical implementations that the input samples x[n-k] are known at any given
time. If we have done our job correctly, then the past output samples y[n-k] are also known. Then y[0] can be
computed from:
y[0] = \frac{1}{a_0} \left( \sum_{k=0}^{M} b_k x[-k] - \sum_{k=1}^{N} a_k y[-k] \right)    (3.42)
where the y[-k] are the initial conditions (IC). Next, we compute:
y[1] = \frac{1}{a_0} \left( \sum_{k=0}^{M} b_k x[1-k] - \sum_{k=1}^{N} a_k y[1-k] \right)    (3.43)
Similarly, we can compute all future outputs. Note that we need to do this in an iterative (recursive) fashion;
i.e., it is not possible to compute a later output sample without first computing all of the earlier ones.
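In MATLAB this recursion is exactly what the built-in filter function carries out. A minimal sketch with arbitrarily chosen coefficients:

% The recursion (3.41)-(3.43) as implemented by filter(b, a, x):
% a(1)*y(n) = b(1)*x(n) + ... + b(M+1)*x(n-M) - a(2)*y(n-1) - ... - a(N+1)*y(n-N)
b = [1 0.5];            % example feedforward coefficients b_0, b_1
a = [1 -0.9 0.2];       % example feedback coefficients a_0, a_1, a_2
x = [1 zeros(1, 9)];    % a unit-pulse input
y = filter(b, a, x);    % y[0..9], computed recursively with zero initial conditions
disp(y)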
Example 3.11: Given y[-1] = 1 and y[-2] = 0, compute RECURSIVELY a few terms of the following 2nd-order
DE:
y[n] = \frac{3}{4} y[n-1] - \frac{1}{8} y[n-2] + \left( \frac{1}{2} \right)^n

y[0] = \frac{3}{4} y[-1] - \frac{1}{8} y[-2] + \left( \frac{1}{2} \right)^0 = \frac{3}{4} + 0 + 1 = \frac{7}{4}

y[1] = \frac{3}{4} y[0] - \frac{1}{8} y[-1] + \left( \frac{1}{2} \right)^1 = \frac{27}{16}

y[2] = \frac{3}{4} y[1] - \frac{1}{8} y[0] + \left( \frac{1}{2} \right)^2 = \frac{83}{64}
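The same values follow from a short MATLAB loop (a minimal sketch of the recursion):

% Recursive evaluation of y[n] = (3/4)y[n-1] - (1/8)y[n-2] + (1/2)^n
% with y[-1] = 1 and y[-2] = 0.
yprev1 = 1;  yprev2 = 0;            % y[-1] and y[-2]
for n = 0:2
    y = (3/4)*yprev1 - (1/8)*yprev2 + (1/2)^n;
    fprintf('y[%d] = %g\n', n, y);  % 1.75, 1.6875, 1.296875 = 7/4, 27/16, 83/64
    yprev2 = yprev1;  yprev1 = y;
end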
3.7.1 Homogeneous Solution: The homogeneous equation is obtained by setting the input to zero:

\sum_{k=0}^{N} a_k y[n-k] = 0    (3.44)

Assume a solution of the form:

y_h[n] = A \, a^n
Substituting this into (3.44):

\sum_{k=0}^{N} a_k A \, a^{n-k} = 0 \quad \Rightarrow \quad \sum_{k=0}^{N} a_k \, a^{n-k} = 0    (3.45)
The values of a satisfying (3.45) are the characteristic values (eigenvalues), and there are N of them, which may
or may not be distinct. If they are distinct, the corresponding characteristic solutions are independent and the
homogeneous solution is obtained as a linear combination of terms of the form:
y_h[n] = A_1 a_1^n + A_2 a_2^n + \cdots + A_N a_N^n    (3.46)
If any of the roots are repeated, then we can still generate N independent solutions by multiplying the
corresponding characteristic solution by appropriate powers of n. For instance, if a_1 has multiplicity P_1, then we
assume a solution of the form:
y_h[n] = A_1 a_1^n + A_2 n a_1^n + \cdots + A_{P_1} n^{P_1 - 1} a_1^n + A_{P_1+1} a_{P_1+1}^n + \cdots + A_N a_N^n    (3.47)
3.7.2 Particular Solution: Assume that \tilde{y}[n] is a particular solution to the special case

\sum_{k=0}^{N} a_k \tilde{y}[n-k] = x[n]    (3.48)

Then, by superposition, a particular solution of the full equation (3.38) is

y_P[n] = \sum_{k=0}^{M} b_k \tilde{y}[n-k]    (3.49)

To find \tilde{y}[n], we assume it is a linear combination of x[n] and its delayed versions:
If x[n] is a constant, then x[n-k] is also a constant; thus \tilde{y}[n] is another constant.
If x[n] is an exponential of the form \alpha^n, then \tilde{y}[n] is similarly an exponential.
If x[n] is a sinusoid,

x[n] = \sin \Omega_0 n, \quad x[n-k] = \sin \Omega_0 (n-k) = \cos \Omega_0 k \, \sin \Omega_0 n - \cos \Omega_0 n \, \sin \Omega_0 k

then \tilde{y}[n] is a combination of a sine and a cosine at the same frequency:

\tilde{y}[n] = A \sin \Omega_0 n + B \cos \Omega_0 n
Example 3.12: Solve the following difference equation:

y[n] - \frac{3}{4} y[n-1] + \frac{1}{8} y[n-2] = 2 \sin \frac{\pi n}{2}, \quad \text{with IC: } y[-1] = 2 \text{ and } y[-2] = 4
Part A: Particular solution. Assume a solution:

y_P[n] = A \sin \frac{\pi n}{2} + B \cos \frac{\pi n}{2}

y_P[n-1] = A \sin \frac{\pi (n-1)}{2} + B \cos \frac{\pi (n-1)}{2} = -A \cos \frac{\pi n}{2} + B \sin \frac{\pi n}{2}

y_P[n-2] = A \sin \frac{\pi (n-2)}{2} + B \cos \frac{\pi (n-2)}{2} = -A \sin \frac{\pi n}{2} - B \cos \frac{\pi n}{2}
Let us substitute these into the DE, which must be satisfied in order for this to be a solution:

\left( A - \frac{3}{4} B - \frac{1}{8} A \right) \sin \frac{\pi n}{2} + \left( B + \frac{3}{4} A - \frac{1}{8} B \right) \cos \frac{\pi n}{2} = 2 \sin \frac{\pi n}{2}
Let us equate terms of the same form:

A - \frac{3}{4} B - \frac{1}{8} A = 2
B + \frac{3}{4} A - \frac{1}{8} B = 0

which give A = \frac{112}{85} and B = -\frac{96}{85}, so that

y_P[n] = \frac{112}{85} \sin \frac{\pi n}{2} - \frac{96}{85} \cos \frac{\pi n}{2}
Part B: Homogeneous solution. The characteristic equation a^2 - \frac{3}{4} a + \frac{1}{8} = 0 has the roots 1/4
and 1/2, so y_h[n] = A_1 (1/4)^n + A_2 (1/2)^n. If we substitute the given ICs into the total solution
y[n] = y_h[n] + y_P[n], we obtain:

A_1 = -\frac{8}{17} \quad \text{and} \quad A_2 = \frac{13}{5}
y[n] = -\frac{8}{17} \left( \frac{1}{4} \right)^n + \frac{13}{5} \left( \frac{1}{2} \right)^n + \frac{112}{85} \sin \frac{\pi n}{2} - \frac{96}{85} \cos \frac{\pi n}{2}
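As a check, the closed-form answer can be compared with the values produced by iterating the difference equation directly (a minimal MATLAB sketch):

% Closed-form solution of Example 3.12 versus the direct recursion.
n  = 0:10;
yc = -(8/17)*(1/4).^n + (13/5)*(1/2).^n ...
     + (112/85)*sin(pi*n/2) - (96/85)*cos(pi*n/2);
yprev1 = 2;  yprev2 = 4;            % y[-1] and y[-2]
yr = zeros(size(n));
for k = 1:length(n)
    yr(k) = (3/4)*yprev1 - (1/8)*yprev2 + 2*sin(pi*n(k)/2);
    yprev2 = yprev1;  yprev1 = yr(k);
end
disp(max(abs(yc - yr)))             % agreement to machine precision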
The impulse response of the system is found by setting x[n] = \delta[n] in the difference equation with zero initial
conditions:

\sum_{k=0}^{N} a_k y[n-k] = \sum_{k=0}^{M} b_k \delta[n-k], \quad \text{with } y[-1] = y[-2] = \cdots = 0    (3.50)
1. For n > M, the right-hand side is zero, and thus we get a homogeneous equation.
2. The N initial conditions (IC) needed to solve this equation are { y[M], y[M-1], \ldots, y[M-N+1] }.
3. To be meaningful this system must be causal, N \ge M, and we have to compute only the terms
y[0], y[1], \ldots, y[M].
4. By successively letting n be 0, 1, 2, \ldots, M in (3.50) we obtain a set of M+1 equations:
\sum_{k=0}^{j} a_k y[j-k] = b_j, \quad j = 0, 1, 2, \ldots, M    (3.51)
\begin{bmatrix} a_0 & 0 & \cdots & 0 \\ a_1 & a_0 & \cdots & 0 \\ a_2 & a_1 & a_0 & \cdots \\ \vdots & & \ddots & \vdots \\ a_M & a_{M-1} & \cdots & a_0 \end{bmatrix} \begin{bmatrix} y[0] \\ y[1] \\ y[2] \\ \vdots \\ y[M] \end{bmatrix} = \begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_M \end{bmatrix}    (3.52)
After solving this system for the initial conditions y[0], y[1], \ldots, y[M], we obtain the impulse response of the
system as the solution of the homogeneous equation:
\sum_{k=0}^{N} a_k y[n-k] = 0, \quad n > M    (3.53)
Example 3.13: Find the impulse response of the following system:

y[n] = x[n] + \frac{1}{3} x[n-1] + \frac{5}{4} y[n-1] - \frac{1}{2} y[n-2] + \frac{1}{16} y[n-3]
For n > M = 1 the right-hand side vanishes and we have the homogeneous equation

y[n] - \frac{5}{4} y[n-1] + \frac{1}{2} y[n-2] - \frac{1}{16} y[n-3] = 0, \quad n \ge 2

Substituting y[n] = a^n gives the characteristic equation

1 - \frac{5}{4} a^{-1} + \frac{1}{2} a^{-2} - \frac{1}{16} a^{-3} = 0
The initial values follow from (3.52) with M = 1:

\begin{bmatrix} a_0 & 0 \\ a_1 & a_0 \end{bmatrix} \begin{bmatrix} y[0] \\ y[1] \end{bmatrix} = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} \quad \Rightarrow \quad \begin{bmatrix} 1 & 0 \\ -5/4 & 1 \end{bmatrix} \begin{bmatrix} y[0] \\ y[1] \end{bmatrix} = \begin{bmatrix} 1 \\ 1/3 \end{bmatrix}

y[0] = 1; \quad y[1] = 19/12
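These initial values can be verified numerically with the filter function (a minimal sketch):

% Impulse response of Example 3.13:
% y[n] = x[n] + (1/3)x[n-1] + (5/4)y[n-1] - (1/2)y[n-2] + (1/16)y[n-3].
b = [1 1/3];
a = [1 -5/4 1/2 -1/16];
h = filter(b, a, [1 zeros(1, 9)]);   % unit-pulse input, zero initial conditions
disp(h(1:2))                         % h[0] = 1 and h[1] = 19/12 = 1.5833...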