Signals and Systems: A Fresh Look
Chi-Tsong Chen
Stony Brook University
Copyright © 2009 by Chi-Tsong Chen
Everyone is permitted to copy and distribute this book, but changing it is not permitted.
Email: [email protected] or [email protected]
Preface
Presently, there are over thirty texts on continuous-time (CT) and discrete-time (DT)
signals and systems.1 They typically cover CT and DT Fourier series; CT and DT Fourier
transforms; discrete Fourier transform (DFT); two- and one-sided Laplace and z-transforms;
convolutions and differential (difference) equations; Fourier and Laplace analysis of signals
and systems; and some applications in communication, control, and filter design. About one-
third of these texts also discuss state-space equations and their analytical solutions. Many
texts emphasize convolutions and Fourier analysis.
Feedback from graduates, who report that what they learned in university is not used in
industry, prompted me to ponder what to teach in signals and systems. Typical courses on signals and
systems are intended for sophomores and juniors, and aim to provide the necessary background
in signals and systems for follow-up courses in control, communication, microelectronic
circuits, filter design, and digital signal processing. A survey of texts reveals that the important
topics needed in these follow-up courses are as follows:
• Signals: Frequency spectra, which are used for discussing bandwidth, selecting sampling
and carrier frequencies, and specifying systems, especially filters to be designed.
• Systems: Rational transfer functions, which are used in circuit analysis and in design of
filters and control systems. In filter design, we find transfer functions whose frequency
responses meet specifications based on the frequency spectra of signals to be processed.
• Implementation: State-space (ss) equations, which are most convenient for computer
computation, real-time processing, and op-amp circuit or specialized hardware imple-
mentation.
These topics will be the focus of this text. For signals, we develop frequency spectra2 and
their bandwidths and computer computation. We use the simplest real-world signals (sounds
generated by a tuning fork and a piano) as examples. For systems, we develop four mathemat-
ical descriptions: convolutions, differential (difference) equations, ss equations, and transfer
functions. The first three are in the time domain and the last one is in the transform domain.
We give reasons for downplaying the first two and emphasizing the last two descriptions. We
discuss the role of signals in designing systems and the three domains involved (time, frequency, and transform).
We also discuss the relationship between ss equations (an internal description) and transfer
functions (an external description).
Because of our familiarity with CT physical phenomena and examples, this text studies
the CT case first and then the DT case, with one exception. The exception is to use DT
systems with finite memory to introduce some system concepts because simple numerical
examples can be easily developed whereas there is no CT counterpart. This text stresses
basic concepts and ideas and downplays analytical calculation because all computation in
this text can be carried out numerically using MATLAB. We start from scratch and take
nothing for granted. For example, we discuss time and its representation by a real number
line. We give the reason for defining frequency using a spinning wheel rather than
sin ωt or cos ωt. We also explain why we cannot define the frequency of DT sinusoids
directly and must define it using CT sinusoids. We make a distinction between amplitudes and
magnitudes. Even though mathematics is essential in engineering, what is more important,
in our view, is its methodology (critical thinking and logical development) rather than its various
1 See the references at the end of this book.
2 The frequency spectrum is not defined or not stressed in most texts.
topics and calculational methods. Thus we skip many conventional topics and discuss, at a
more thorough level, only those needed in this course. It is hoped that by so doing, the reader
may gain the ability and habit of critical thinking and logical development.
In the table of contents, we box those sections and subsections which are unique in this
text. They discuss some basic issues and questions in signals and systems which are not
discussed in other texts. We discuss some of them below:
1. Even though all texts on signals and systems claim to study linear time-invariant (LTI)
systems, they actually study only a very small subset of such systems which have the
“lumpedness” property. What is true for LTI lumped systems may not be true for
general LTI systems. Thus, it is important to know the limited applicability of what
we study.
2. Even though most texts start with differential equations and convolutions, this text
uses a simple RLC circuit to demonstrate that the state-space (ss) description is eas-
ier to develop than the aforementioned descriptions. Moreover, once an ss equation is
developed, we can discuss directly (without discussing its analytical solution) its com-
puter computation, real-time processing, and op-amp circuit implementation. Thus ss
equations should be an important part of a text on signals and systems.
3. We introduce the concept of coprimeness (no common roots) for rational transfer func-
tions. Without it, the poles and zeros defined in many texts are not necessarily correct.
The concept is also needed in discussing whether or not a system has redundant com-
ponents.
4. We discuss the relationship between the Fourier series and Fourier transform which is
dual to the sampling theorem. We give reasons for stressing only the Fourier transform
in signal analysis and for skipping Fourier analysis of systems.
5. This text discusses model reduction which is widely used in practice and yet not dis-
cussed in other texts. The discussion shows the roles of a system’s frequency response
and a signal’s frequency spectrum. It explains why the same transfer functions can be
used to design seismometers and accelerometers.
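The coprimeness issue in point 3 can be sketched numerically. The following Python fragment (the transfer function and the helper name are invented for illustration, not taken from this text) shows how a common root of the numerator and denominator masquerades as both a pole and a zero until it is cancelled:

```python
# Illustrative sketch of coprimeness: H(s) = (s^2 + 3s + 2)/(s^2 + 4s + 3).
import math

def roots2(a, b, c):
    """Real roots of a*s^2 + b*s + c, sorted (assumes real roots)."""
    d = math.sqrt(b * b - 4 * a * c)
    return sorted([(-b - d) / (2 * a), (-b + d) / (2 * a)])

naive_zeros = roots2(1, 3, 2)   # [-2.0, -1.0]
naive_poles = roots2(1, 4, 3)   # [-3.0, -1.0]

# Numerator and denominator share the root -1, so they are not coprime:
# the factor (s + 1) cancels, leaving H(s) = (s + 2)/(s + 3).
common = set(naive_zeros) & set(naive_poles)
print(common)   # {-1.0}
```

Only after cancelling the common factor (s + 1) does the true zero, -2, and the true pole, -3, emerge; reading poles and zeros off the uncancelled polynomials would wrongly list -1 as both.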
A great deal of thought was put into the selection of the topics discussed in this text.3 It
is hoped that the rationale presented is convincing and compelling and that this new text will
become a standard in teaching signals and systems, just as my book Linear System Theory
and Design has been a standard in teaching linear systems since 1970.
In addition to electrical and computer engineering programs, this text is suitable for
mechanical, bioengineering, and any program which involves analysis and design of systems.
This text contains more material than can be covered in one semester. When teaching a
one-semester course on signals and systems at Stony Brook, I skip Chapter 5 and Section 7.8,
and cover essentially the rest of the book. Clearly other arrangements are also possible.
Many people helped me in writing this book. Ms. Jiseon Kim plotted all the figures in
the text except those generated by MATLAB. Mr. Anthony Oliver performed many op-amp
circuit experiments for me. Dr. Michael Gilberti scrutinized the entire book, picked up many
errors, and made several valuable suggestions. I consulted Professors Amen Zemanian and
John Murray whenever I had any questions and doubts. I thank them all.
C. T. Chen
December, 2009
3 This text is different from Reference [C8] in structure and emphasis. It compares four mathematical
descriptions, and discusses three domains and the role of signals in system design. Thus it is not a minor
revision of [C8]; it is a new text.
Table of Contents
1 Introduction 1
1.1 Signals and systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Physics, mathematics, and engineering . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Electrical and computer engineering . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 A course on signals and systems . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Confession of the author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.6 A note to the reader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Signals 13
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.1 Time – Real number line . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.2 Where are time 0 and time −∞? . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Continuous-time (CT) signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.1 Staircase approximation of CT signals – Sampling . . . . . . . . . . . . 18
2.4 Discrete-time (DT) signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.1 Interpolation – Construction . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5 Impulses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5.1 Pulse amplitude modulation (PAM) . . . . . . . . . . . . . . . . . . . . 23
2.5.2 Bounded variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.6 Digital processing of analog signals . . . . . . . . . . . . . . . . . . . . . . . 25
2.6.1 Real-time and non-real-time processing . . . . . . . . . . . . . . . . . . 26
2.7 CT step and real-exponential functions – time constant . . . . . . . . . . . . 27
2.7.1 Time shifting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.8 DT impulse sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.8.1 Step and real-exponential sequences – time constant . . . . . . . . . 31
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4.2 Time-Limited Bandlimited Theorem . . . . . . . . . . . . . . . . . . . . 71
4.4.3 Time duration and frequency bandwidth . . . . . . . . . . . . . . . . . . 72
4.5 Frequency spectra of CT pure sinusoids in (−∞, ∞) and in [0, ∞) . . . . . . . 75
4.6 DT pure sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.6.1 Can we define the frequency of sin(ω0 nT ) directly? . . . . . . . . . . . . 77
4.6.2 Frequency of DT pure sinusoids – Principal form . . . . . . . . . . . . . 78
4.7 Sampling of CT pure sinusoids – Aliased frequencies . . . . . . . . . . . . . . . 80
4.7.1 A sampling theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.8 Frequency spectra of DT signals . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.9 Concluding remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
7 DT LTI systems with finite memory 138
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.2 Causal systems with memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.2.1 Forced response, initial conditions, and natural response . . . . . . . . . 141
7.3 Linear time-invariant (LTI) systems . . . . . . . . . . . . . . . . . . . . . . . . 142
7.3.1 Finite and infinite impulse responses (FIR and IIR) . . . . . . . . . . . 143
7.3.2 Discrete convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
7.4 Some difference equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
7.4.1 Comparison of convolutions and difference equations . . . . . . . . . . . 147
7.5 DT LTI basic elements and basic block diagrams . . . . . . . . . . . . . . . . . 148
7.6 State-space (ss) equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
7.6.1 Computer computation and real-time processing using ss equations . . . 150
7.7 Transfer functions – z-transform . . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.7.1 Transfer functions of unit-delay and unit-advance elements . . . . . . . 155
7.8 Composite systems: Transform domain or time domain? . . . . . . . . . . . . . 156
7.9 Concluding remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
9 Qualitative analysis of CT LTI lumped systems 209
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
9.1.1 Design criteria – time domain . . . . . . . . . . . . . . . . . . . . . . . . 209
9.2 Poles and zeros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
9.3 Some Laplace transform pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
9.3.1 Inverse Laplace transform . . . . . . . . . . . . . . . . . . . . . . . . . . 215
9.3.2 Reasons for not using transfer functions in computing responses . . . . 218
9.4 Step responses – Roles of poles and zeros . . . . . . . . . . . . . . . . . . . . . 219
9.4.1 Responses of poles as t → ∞ . . . . . . . . . . . . . . . . . . . . . . . . 221
9.5 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
9.5.1 What holds for lumped systems may not hold for distributed systems . 227
9.5.2 Stability check by one measurement . . . . . . . . . . . . . . . . . . . . 227
9.5.3 The Routh test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
9.6 Steady-state and transient responses . . . . . . . . . . . . . . . . . . . . . . . . 230
9.6.1 Time constant of stable systems . . . . . . . . . . . . . . . . . . . . . . 232
9.7 Frequency responses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
9.7.1 Plotting frequency responses . . . . . . . . . . . . . . . . . . . . . . . . 237
9.7.2 Bandwidth of frequency-selective filters . . . . . . . . . . . . . . . . . . 238
9.7.3 Non-uniqueness in design . . . . . . . . . . . . . . . . . . . . . . . . . . 240
9.7.4 Frequency domain and transform domain . . . . . . . . . . . . . . . . . 240
9.7.5 Identification by measuring frequency responses . . . . . . . . . . . . . . 242
9.7.6 Parametric identification . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
9.8 Laplace transform and Fourier transform . . . . . . . . . . . . . . . . . . . . . . 243
9.8.1 Why Fourier transform is not used in system analysis . . . . . . . . . . 244
9.8.2 Phasor analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
9.8.3 Conventional derivation of frequency responses . . . . . . . . . . . . . . 246
9.9 Frequency responses and frequency spectra . . . . . . . . . . . . . . . . . . . . 247
9.9.1 Why modulation is not an LTI process . . . . . . . . . . . . . . . . . . . 248
9.9.2 Resonance – Time domain and frequency domain . . . . . . . . . . . . . 249
9.10 Reasons for not using ss equations in design . . . . . . . . . . . . . . . . . . . . 251
9.10.1 A brief history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
11 DT LTI and lumped systems 287
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
11.2 Some z-transform pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
11.3 DT LTI lumped systems – proper rational functions . . . . . . . . . . . . . . . 291
11.3.1 Rational transfer functions and difference equations . . . . . . . . . . . 292
11.3.2 Poles and zeros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
11.4 Inverse z-transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
11.4.1 Step responses – Roles of poles and zeros . . . . . . . . . . . . . . . . . 296
11.4.2 s-plane and z-plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
11.4.3 Responses of poles as n → ∞ . . . . . . . . . . . . . . . . . . . . . . . . 299
11.5 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
11.5.1 What holds for lumped systems may not hold for distributed systems . 303
11.5.2 The Jury test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
11.6 Steady-state and transient responses . . . . . . . . . . . . . . . . . . . . . . . . 305
11.6.1 Time constant of stable systems . . . . . . . . . . . . . . . . . . . . . . 307
11.7 Frequency responses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
11.8 Frequency responses and frequency spectra . . . . . . . . . . . . . . . . . . . . 313
11.9 Realizations – State-space equations . . . . . . . . . . . . . . . . . . . . . . . . 315
11.9.1 Basic block diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
11.10 Digital processing of CT signals . . . . . . . . . . . . . . . . . . . . . . . . . 319
11.10.1 Filtering of piano’s middle C . . . . . . . . . . . . . . . . . . . . . . . . 321
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
References 327
Index 331
Chapter 1
Introduction
Figure 1.1: (a) Transduced signal of the sound “Signals and systems”. (b) Segment of (a).
key of a piano. It lasts about one second. According to Wikipedia, the theoretical frequency
of the middle-C sound is 261.6 Hz. However, the waveform shown in Figure 1.3(b) is quite erratic.
It is not a sinusoid and cannot have a single frequency. What, then, does 261.6 Hz mean? This
will be discussed in Section 5.5.2.
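Although Section 5.5.2 gives the full answer, a small made-up sketch already shows that a waveform can repeat at 261.6 Hz without being a sinusoid. The harmonic amplitudes below are invented for illustration:

```python
# Made-up sketch: a periodic waveform whose fundamental frequency is
# 261.6 Hz but which is not a sinusoid (it contains harmonics).
import math

F0 = 261.6          # fundamental frequency of middle C (Hz)
T0 = 1.0 / F0       # fundamental period (s)

def tone(t):
    """Fundamental plus two harmonics; periodic with period T0."""
    return (1.0 * math.sin(2 * math.pi * F0 * t)
            + 0.5 * math.sin(2 * math.pi * 2 * F0 * t)
            + 0.3 * math.sin(2 * math.pi * 3 * F0 * t))

# The waveform repeats every T0 even though it is not a pure sinusoid:
print(abs(tone(0.001) - tone(0.001 + T0)) < 1e-9)   # True
```

In this sense, "261.6 Hz" refers to the repetition rate of the waveform, not to a single sinusoidal component.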
An electrocardiogram (EKG or ECG) records electrical voltages (potentials) generated
by a human heart. The heart contracts and pumps blood to the lungs for oxygenation and
then pumps the oxygenated blood into circulation. The signal to induce cardiac contraction
is the spread of electrical currents through the heart muscle. An EKG records the potential
differences (voltages) between a number of spots on a person’s body. A typical EKG has 12
leads, called electrodes, and may be used to generate many cardiographs. We show in Figure
1.4 only one graph, the voltage between an electrode placed at the fourth intercostal space to
the right of the sternum and an electrode placed on the right arm. It is the normal pattern
of a healthy person. Deviation from this pattern may reveal some abnormality of the heart.
From the graph, we can also determine the heart rate of the patient. Standard EKG paper
has one millimeter (mm) square as the basic grid, with a horizontal 1 mm representing 0.04
second and a vertical 1 mm representing 0.1 millivolt. The cardiac cycle in Figure 1.4 repeats
itself roughly every 21 mm, or 21 × 0.04 = 0.84 second. Thus, the heart rate (the number of
heart beats in one minute) is 60/0.84 ≈ 71.
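The heart-rate arithmetic above can be written out as a short computation (a sketch; the constants are the standard EKG grid values quoted in the text):

```python
# Heart rate from the EKG grid, as computed in the text:
# 21 mm per cycle at 0.04 s per horizontal millimeter.
MM_PER_CYCLE = 21          # measured length of one cardiac cycle (mm)
SECONDS_PER_MM = 0.04      # standard EKG paper scale (25 mm/s)

period = MM_PER_CYCLE * SECONDS_PER_MM    # 0.84 s per beat
heart_rate = 60 / period                  # beats per minute
print(round(heart_rate))                  # 71
```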
We show in Figure 1.5 some plots which appear in many daily newspapers. Figure 1.5(a)
shows the temperature at Central Park in New York city over a 24-hour period. Figure 1.5(b)
shows the total number of shares traded each day on the New York Stock Exchange. Figure
1.5(c) shows the range and the closing price of Standard & Poor’s 500-stock Index each day
over three months. Figure 1.5(d) shows the closing price of the index over six months and its
90-day moving average. We often encounter signals of these types in practice.
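A moving average, such as the 90-day average in Figure 1.5(d), is itself a simple system acting on a signal. As a sketch, with made-up prices and a 3-day window for brevity:

```python
# Sketch of an n-day moving average like the one in Figure 1.5(d).
# The closing prices below are invented numbers for illustration.
def moving_average(prices, n):
    """Return the n-point moving average; entry k averages the n most
    recent prices ending at index k."""
    return [sum(prices[k - n + 1 : k + 1]) / n
            for k in range(n - 1, len(prices))]

closing = [10.0, 11.0, 12.0, 11.0, 13.0, 14.0]
print(moving_average(closing, 3))   # averages of each 3-day window
```

Each output point averages the most recent n closing prices, which is why the averaged curve is smoother than the raw one.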
The signals in the preceding figures are all plotted against time, which is called an inde-
pendent variable. A photograph is also a signal and must be plotted against two independent
Figure 1.2: (a) Transduced signal of a 128-Hz tuning fork. (b) Segment of (a). (c) Segment
of (b).
Figure 1.3: (a) Transduced signal of a middle-C sound. (b) Segment of (a).
Figure 1.5: (a) Temperature. (b) Total number of shares traded each day on the New York
Stock Exchange. (c) Price range and closing price of the S&P 500-stock index. (d) Closing price
of S&P 500-stock index and its 90-day moving average.
variables, one for each spatial dimension. Thus a signal may have one or more independent
variables. The more independent variables in a signal, the more complicated the signal. In
this text, we study signals that have only one independent variable. We also assume the
independent variable to be time.
To transform an acoustic wave into an electrical signal requires a transducer. To obtain
the signal in Figure 1.4 requires a voltmeter, an amplifier and a recorder. To obtain the
temperature in Figure 1.5(a) requires a temperature sensor. All transducers and sensors
are systems. There are other types of systems such as amplifiers, filters, and motors. The
computer program which generates the 90-day moving average in Figure 1.5(d) is also a
system.
for engineers. The twenty equations were condensed to the now celebrated four equations in 1883 by Oliver
Heaviside (1850-1925). Heaviside was a self-taught electrical engineer whose one and only job was as a telegraph
operator. He was keenly interested in putting mathematics to practical use and published his results in
the trade magazine Electrician. He was the first to apply the Laplace transform to the study of electric circuits,
but his formulation was dismissed because of its lack of mathematical rigor.
light, ultraviolet, x-rays, and so forth. Light as an EM wave certainly has the properties of a
wave. It turns out that light also has the properties of particles and can be viewed as a
stream of photons.
Physicists were also interested in the basic structure of matter. By the early 1900s, it
was recognized that all matter is built from atoms. Every atom has a nucleus consisting
of neutrons and positively charged protons and a number of negatively charged electrons
circling around the nucleus. Although electrons in the form of static electrical charges were
recognized in ancient times, their properties (the amount of charge and mass of each electron)
were experimentally measured only in 1897. Furthermore, it was experimentally verified
that the electron, just as light, has the dual properties of wave and particle. These atomic
phenomena were outside the reach of Newton’s laws and Einstein’s theory of relativity but
could be explained using quantum mechanics, developed in 1926. By now it is known that
all matter is built from two types of particles: quarks and leptons. They interact with
each other through gravity, electromagnetic interactions, and the strong and weak nuclear forces.
In summary, physics tries to develop physical laws to describe natural phenomena and to
uncover the basic structure of matter.
Mathematics is indispensable in physics. Even though physical laws were inspired by
concepts and measurements, mathematics is needed to provide the necessary tools to make
the concepts precise and to derive consequences and implications. For example, Einstein had
some ideas about the general theory of relativity in 1907, but it took him eight years to find
the necessary mathematics to make it complete. Without Maxwell’s equations, EM theory
could not have been developed. Mathematics, however, has developed into its own subject
area. It started with the counting of, perhaps, cows and the measurement of farm lots. It
then developed into a discipline which has little to do with the real world. It now starts with
some basic entities, such as points and lines, which are abstract concepts (no physical point
has zero area and no physical line has zero width). One then selects a number of assumptions
or axioms, and then develops logical results. Note that different axioms may lead to different
mathematical branches such as Euclidean geometry, non-Euclidean geometry, and Riemann
geometry. Once a result is proved correct, the result will stand forever and can withstand any
challenge. For example, the Pythagorean theorem (the square of the hypotenuse of a right
triangle equals the sum of the squares of the other two sides) was first proved around 500 B.C. and is
still valid today. In 1637, Pierre de Fermat claimed that no positive integer solutions exist
for a^n + b^n = c^n, for any integer n larger than 2. Even though many special cases such as
n = 3, 4, 6, 8, 9, . . . had been established, nobody was able to prove it for all integers n > 2
for over three hundred years. Thus the claim remained a conjecture. It was finally proven
as a theorem in 1994 (see Reference [S3]). Thus, the bottom line in mathematics is absolute
correctness, whereas the bottom line in physics is truthfulness to the physical world.
Engineering is a pragmatic and practical field. Sending the two exploration rovers to
Mars (launched in mid-2003, arrived in early 2004) was an engineering problem. In
this ambitious national project, budgetary concerns were secondary. For most engineering
products, such as motors, CD players and cell phones, cost is critical. To be commercially
successful, such products must be reliable, small in size, high in performance and competitive
in price. Furthermore they may require a great deal of marketing. The initial product
design and development may be based on physics and mathematics. Once a working model
is developed, the model must go through repetitive modification, improvement, and testing.
Physics and mathematics usually play only marginal roles in this cycle. Engineering ingenuity
and creativity play more prominent roles.
Engineering often involves tradeoffs or compromises between performance and cost, and
between conflicting specifications. Thus, there are often similar products with a wide range
in price. The concept of tradeoffs is less prominent in mathematics and physics, whereas it is
an unavoidable part of engineering.
For low-velocity phenomena, Newton’s laws are valid and can never be improved. Maxwell’s
equations describe electromagnetic phenomena and waves and have been used for over one
hundred years. Once all elementary particles are found and a Theory of Everything is devel-
oped, some people anticipate the death of (pure) physics and science (see Reference [L3]). An
engineering product, however, can always be improved. For example, after Faraday demon-
strated in 1821 that electrical energy could be converted into mechanical energy and vice
versa, the race to develop electromagnetic machines (generators and motors) began. Now,
there are various types and power ranges of motors. They are used to drive trains, to move
the rovers on Mars and to point their unidirectional antennas toward the earth. Vast numbers
of toys require motors. Motors are needed in every CD player to spin the disc and to position
the reading head. Currently, miniaturized motors on the order of millimeters or even microm-
eters in size are being developed. Another example is the field of integrated circuits. Discrete
transistors were invented in 1947. It was discovered in 1959 that a number of transistors could
be fabricated on a single chip. A chip contained hundreds of transistors in the 1960s, tens
of thousands in the 1970s, and hundreds of thousands in the 1980s; as of 2007, a chip may
contain several billion transistors. Indeed, technology is open-ended and flourishing.
• O: (2) Optoelectronics.
• S: (19) Sensors, Signal Processing, Software Engineering, Speech and Audio Processing.
• V: (5) Vehicular Technology, Very Large Scale Integration (VLSI) Systems, Visualization and Computer Graphics.
IEEE alone publishes 175 journals on various subjects. Indeed, the subject areas covered in
ECE programs are many and diversified.
Prior to World War II, there were some master’s degree programs in electrical engineering,
but the doctorate programs were very limited. Most faculty members did not hold a Ph.D.
degree and their research and publications were minimal. During the war, electrical engineers
discovered that they lacked the mathematical and research training needed to explore new
fields. This motivated the overhaul of electrical engineering education after the war. Now,
most engineering colleges have Ph.D. programs and every faculty member is required to have
a Ph.D. Moreover, a faculty member must, in addition to teaching, carry out research and
publish. Otherwise he or she will be denied tenure and will be asked to leave. This leads to
the syndrome of “publish or perish”.
Prior to World War II, almost every faculty member had some practical experience and
taught mostly practical design. Since World War II, the majority of faculty members have
been fresh doctorates with limited practical experience. Thus, they tend to teach and stress
theory. In recent years, there has been an outcry that the gap between what universities
teach and what industries practice is widening. How to narrow the gap between theory and
practice is a challenge in ECE education.
the subject area was Signals and Systems, by A. V. Oppenheim and A. S. Willsky, published
by Prentice-Hall in 1983 (see Reference [O1]). Since then, over thirty books on the subject
area have been published in the U.S. See the references at the end of this text. Most
books follow the same outline as the aforementioned book: they introduce the Fourier series,
Fourier transform, two- and one-sided Laplace and z-transforms, and their applications to
signal analysis, system analysis, communication and feedback systems. Some of them also
introduce state-space equations. Those books are developed mainly for those interested in
communication, control, and digital signal processing.
This new text takes a fresh look at the subject area. Most signals are naturally generated,
whereas systems are designed and built to process signals. Thus we discuss the role
of signals in designing systems. The small class of systems studied can be described by four
types of equations. Although they are mathematically equivalent, we show that only one
type is suitable for design and only one type is suitable for implementation and real-time
processing. We use operational amplifiers and simple RLC circuits as examples because they
are the simplest possible physical systems available. This text also discusses model reduction;
thus it is also useful to those interested in microelectronics and sensor design.
my research and publications gave me only personal satisfaction and, more importantly, job
security.
In the latter half of my teaching career, I started to ponder what to teach in classes. First,
I stopped teaching topics that are only of mathematical interest and focused on topics that
seem useful in practice, as evidenced by my cutting in half the first book listed on page ii
of this text from its second to its third edition. I searched for “applications” papers published in the
literature. Such papers often started with mathematics but switched immediately to general
discussion and concluded with measured data that are often erratic and defy mathematical
description. There was hardly any trace of textbook design methods being used. On the other
hand, so-called “practical” systems discussed in most textbooks, including my own, are so
simplified that they don’t resemble real-world systems. The discussion of such “practical”
systems might provide some motivation for studying the subject, but it also gives a false
impression of the reality of practical design. A textbook design problem can often be solved
in an hour or less. A practical design may take months or years to complete; it involves a
search for components, the construction of prototypes, trial-and-error, and repetitive testing.
Such engineering practice is difficult to teach in a lecture setting. Computer simulations or,
more generally, computer-aided design (CAD), help. Computer software has been developed
to the point that most textbook designs can now be completed by typing a few lines. Thus
deciding what to teach in a course such as signals and systems is a challenge.
Mathematics has long been accepted as essential in engineering. It provides tools and
skills to solve problems. Different subject areas clearly require different mathematics. For
signals and systems, there is no argument about the type of mathematics needed. However,
it is not clear how much of that mathematics should be taught. In view of the limited use
of mathematics in practical system design, it is probably sufficient to discuss what is really
used in practice. Moreover, as an engineering text, we should place more emphasis on issues involving
design and implementation. With this realization, I started to question the standard topics
discussed in most texts on signals and systems. Are they really used in practice or are they
introduced only for academic reasons or for ease of discussion in class? Is there any reason
to introduce the two-sided Laplace transform? Is the study of the Fourier series necessary?
During the last few years, I have put a great deal of thought into these issues and will discuss
them in this book.
developing a novel device. When devoted, one will put one’s whole heart or, more precisely,
one’s full focus on the problem. One will engage the problem day in and day out, and try to
think of every possible solution. Perseverance is important. One should not easily give up.
It took Einstein five years to develop the theory of special relativity and another ten years
to develop the theory of general relativity. No wonder Einstein once said, “I am no genius, I
simply stay with a problem longer”.
The purpose of education or, in particular, of studying this text is to gain some knowledge
of a subject area. However, much more important is to learn how to carry out critical thinking,
rigorous reasoning, and logical development. Because of the rapid change of technology, one
can never foresee what knowledge will be needed in the future. Furthermore, engineers may
be assigned to different projects many times during their professional life. Therefore, what
you learn is not important. What is important is to learn how to learn. This is also true even
if you intend to go into a profession other than engineering.
Students taking a course on signals and systems usually take three or four other courses at
the same time. They may also have many distractions: part-time jobs, relationships, or the
Internet. They simply do not have the time to really ponder a topic. Thus, I fully sympathize
with their lack of understanding. When students come to my office to ask questions, I always
insist that they try to solve the problems themselves by going back to the original definitions
and then by developing the answers step by step. Most of the time, the students discover
that the questions were not difficult at all. Thus, if the reader finds a topic difficult, he or she
should go back and think about the definitions and then follow the steps logically. Do not
get discouraged and give up. Once you give up, you stop thinking and your brain gets lazy.
Forcing your brain to work is essential in understanding a subject.
Chapter 2
Signals
2.1 Introduction
This text studies signals that vary with time. Thus our discussion begins with time. Even
though Einstein’s relativistic time is used in the global positioning system (GPS), we show
that time can be considered to be absolute and uniform in our study and be represented by a
real number line. We show that a real number line is very rich and consists of infinitely many
numbers in any finite segment. We then discuss where t = 0 is and show that ∞ and −∞ are
concepts, not numbers.
A signal is defined as a function of time. If a signal is defined over a continuous range of
time, then it is a continuous-time (CT) signal. If a signal is defined only at discrete instants of
time, then it is a discrete-time (DT) signal. We show that a CT signal can be approximated
by a staircase function. The approximation is called the pulse-amplitude modulation (PAM)
and leads naturally to a DT signal. We also discuss how to construct a CT signal from a DT
signal.
We then introduce the concept of impulses. The concept is used to justify PAM
mathematically. We next discuss digital processing of analog signals. Even though the first step in
such processing is to select a sampling period T, we argue that T can be suppressed in
real-time and non-real-time processing. We finally introduce some simple CT and DT signals
to conclude the chapter.
2.2 Time
We are all familiar with time. It was thought to be absolute and uniform. Let us carry out
the following thought experiment to see whether it is true. Suppose a person, named Leo,
is standing on a platform watching a train pass by at a constant speed v as shown in
Figure 2.1. Inside the train, there is another person, named Bill. It is assumed that each
person carries an identical watch. Now we emit a light beam from the floor of the train to the
ceiling. To the person inside the train, the light beam will travel vertically as shown in Figure
2.1(a). If the height of the ceiling is h, then the elapsed time for the light beam to reach the
ceiling is, according to Bill’s watch, tv = h/c, where c = 3 × 10⁸ meters per second is the
speed of light. However, to the person standing on the platform, the time for the same light
beam to reach the ceiling will be different as shown in Figure 2.1(b). Let us use ts to denote
the elapsed time according to Leo’s watch for the light beam to reach the ceiling. Then we
have, using the Pythagorean theorem,

    (c·ts)² = h² + (v·ts)²        (2.1)
Here we have used the fundamental postulate of Einstein’s special theory of relativity that
the speed of light is the same to all observers, whether stationary or traveling at any speed,
even at the speed of light.1 Equation (2.1) implies (c² − v²)ts² = h² and

    ts = h/√(c² − v²) = h/(c·√(1 − (v/c)²)) = tv/√(1 − (v/c)²)        (2.2)

Figure 2.1: (a) A person observing a light beam inside a train that travels with a constant
speed. (b) The same event observed by a person standing on the platform.
We see that if the train is stationary or v = 0, then ts = tv. If the train travels at 86.6% of
the speed of light, then we have

    ts = tv/√(1 − 0.866²) = tv/√(1 − 0.75) = tv/0.5 = 2·tv
It means that for the same event, the time observed or experienced by the person on the
platform is twice the time observed or experienced by the person inside the speeding
train. In other words, the watch on the speeding train ticks at half the speed of a stationary watch.
Consequently, a person on a speeding train will age slower than a person on the platform.
Indeed, time is not absolute.
The location of an object such as an airplane, an automobile, or a person can now be
readily determined using the global positioning system (GPS). The system consists of 24
satellites orbiting roughly 20,200 km (kilometer) above the ground. Each satellite carries
atomic clocks and continuously transmits a radio signal that contains its identification, its
time of emission, its position, and other data. The location of an object can then be determined
from the signals emitted from four satellites or, more precisely, from the distances between
the object and the four satellites. See Problems 1.1 and 1.2. The distances are the products
of the speed of light and the elapsed times. Thus the synchronization of all clocks is essential.
The atomic clocks are orbiting at high speed, which by special relativity makes them run slower
than clocks on the ground by roughly 7 microseconds per day; the weaker gravity at their altitude,
a general-relativistic effect, makes them run faster by roughly 45 microseconds per day, for a net
difference of about 38 microseconds per day. This amount must be corrected each day in order to
maintain the accuracy of the position computed from GPS signals. This is a practical application
of the theory of relativity.
Other than the preceding example, there is no need for us to be concerned with relativistic
time. For example, the man-made vehicle that can carry passengers and has the highest speed
is the space station orbiting the Earth. Its average speed is about 7690 m/s. For this
speed, the time ts experienced on the ground, compared with the time tv experienced by the
astronauts on the space station, is

    ts = tv/√(1 − (7690/(3 × 10⁸))²) = 1.00000000032853·tv
To put this in perspective, the astronauts, after orbiting the Earth for one year (365 × 24 ×
3600s), may feel 0.01 second younger than if they remain on the ground. Even after staying
in the space station for ten years, they will look and feel only 0.1 second younger. No human
can perceive this difference. Thus we should not be concerned with Einstein’s relativistic time
and will consider time to be absolute and uniform.
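The dilation factor in (2.2) is easy to evaluate numerically. The following Python sketch (Python is our choice here, not a language used in this text, and `dilation_factor` is our own name) reproduces both cases above:

```python
import math

def dilation_factor(v, c=3e8):
    """Ratio ts/tv from equation (2.2): 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Train at 86.6% of the speed of light: platform time is twice train time.
print(dilation_factor(0.866 * 3e8))            # close to 2

# Space station at 7690 m/s: the difference accumulated over one year.
seconds_per_year = 365 * 24 * 3600
extra = (dilation_factor(7690.0) - 1.0) * seconds_per_year
print(extra)                                   # roughly 0.01 second
```

The second print confirms the 0.01-second-per-year figure quoted above.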
1 Under this postulate, a man holding a mirror in front of him can still see his own image when he is
traveling with the speed of light. However he cannot see his image according to Newton’s laws of motion.
There are infinitely many of them. We see that all rational numbers can be arranged in order
and be counted as indicated by the arrows shown. If a rational number is expressed in decimal
form, then it must terminate with zeros or continue on without ending but with a repetitive
pattern. For example, consider the real number
x = 8.148900567156715671 · · ·
with the pattern 5671 repeated without ending. We show that x is a rational number. We compute

    10¹⁰·x − 10⁶·x = 81489005671.5671· · · − 8148900.5671· · · = 81480856771

where the two identical repeating tails cancel. Thus x = 81480856771/9999000000 is a ratio of
two integers and hence rational. The irrational numbers, in contrast, cannot be arranged in order
and counted. Suppose all irrational numbers between 0 and 1 could be listed as
    xp = 0.p1 p2 p3 p4 · · ·
    xq = 0.q1 q2 q3 q4 · · ·
    xr = 0.r1 r2 r3 r4 · · ·        (2.3)
    · · ·

Figure 2.3: (a) Infinite real line and its finite segment. (b) Their one-to-one correspondence.
Even though the list is assumed to contain all irrational numbers between 0 and 1, we can still
create a new number

    xn = 0.n1 n2 n3 n4 · · ·

where n1 is any digit between 0 and 9 but different from p1, n2 is any digit different from q2,
n3 is any digit different from r3, and so forth. This number differs from every number in the
list, and it can be chosen to be irrational and to lie between 0 and 1. This contradicts the
assumption that (2.3) contains all irrational numbers. Thus it is not possible to arrange all
irrational numbers in order and then count them; the set of irrational numbers is
uncountably infinite. In conclusion, the real line consists of three infinite sets: integers,
rational numbers, and irrational numbers. We mention that every irrational number occupies
a unique point on the real line, but we cannot pinpoint its exact location. For example, √2
is an irrational number lying between the two rational numbers 1.414 and 1.415, but we don’t
know where it is exactly located.
Because a real line has an infinite length, it is reasonable that it contains infinitely many
real numbers. What is surprising is that any finite segment of a real line, no matter how
small, also contains infinitely many real numbers. Let [a, b] be a segment of nonzero length. We draw
the segment across a real line as shown in Figure 2.3. From the plot we can see that for any
point on the infinite real line, there is a unique point on the segment and vice versa. Thus the
finite segment [a, b] also has infinitely many real numbers on it. For example, in the interval
[0, 1], there are only two integers 0 and 1. But it contains the following rational numbers
    1/n, 2/n, · · · , (n − 1)/n
for every integer n ≥ 2. There are infinitely many of them. In addition, there are infinitely many
irrational numbers in [0, 1], as listed in (2.3). Thus the interval [0, 1] contains infinitely
many real numbers. The interval [0.99, 0.991] also contains infinitely many real numbers such
as

    x = 0.990 n1 n2 n3 · · · nN

where ni can assume any digit between 0 and 9, and N is any positive integer. In conclusion,
any nonzero segment, no matter how small, contains infinitely many real numbers.
A real number line consists of rational numbers (including integers) and irrational numbers.
The set of irrational numbers is much larger than the set of rational numbers. It is said that
if we throw a dart at the real line, the probability of hitting a rational number is zero. Even
so, the set of rational numbers consists of infinitely many numbers and is much more than
enough for our practical use. For example, the number π is irrational. However, it can be
approximated by the rational number 3.14 or 3.1416 in practical applications.
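Such rational approximations of π can be generated with Python’s standard library (a small aside of ours, not part of the text; `Fraction.limit_denominator` returns the closest fraction with a bounded denominator):

```python
from fractions import Fraction
import math

# Best rational approximations of the irrational number pi.
print(Fraction(math.pi).limit_denominator(10))      # 22/7
print(Fraction(math.pi).limit_denominator(1000))    # 355/113

# The error of 355/113 is already below 3e-7.
print(abs(355 / 113 - math.pi))
```

The familiar approximations 22/7 and 355/113 fall out automatically.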
to both sides. At the very end of the right-hand side will be time +∞ and at the very end of
the left-hand side will be time −∞. Because the line can be extended forever, we can never
reach its ends or ±∞.
Where is time 0? According to the consensus of most astronomers, our universe started
with a Big Bang, which occurred roughly 13.7 billion years ago. Thus t = 0 should be at that
instant. Furthermore, the time before the Big Bang is completely unknown. Thus a real line
does not denote the actual time. It is only a model and we can select any time instant as
t = 0. If we select the current time as t = 0 and accept the Big Bang theory, then the universe
started roughly at t = −13.7 × 109 (in years). It is indeed a very large negative number. And
yet it is still very far from −∞; it is no closer to −∞ than the current time. Thus −∞ is
not a number and has neither physical nor mathematical meaning. It is only an abstract
concept. So is t = +∞.
In engineering, where time 0 is depends on the question asked. For example, to track the
maintenance record of an aircraft, the time t = 0 is the time when the aircraft first rolled off
its manufacturing line. However, on each flight, t = 0 could be set as the instant the aircraft
takes off from a runway. The time at which we burn a music CD is the actual t = 0 as far as the CD
is concerned. However, we may also consider t = 0 to be the instant we start to play the CD.
Thus where t = 0 is depends on the application and is not absolute. In conclusion, we can
select the current time, a future time, or even a negative time (for example, two days ago) as
t = 0. The selection is entirely for the convenience of study. If t = 0 is so chosen, then we
will encounter time only for t ≥ 0 in practice.
In signal analysis, mathematical equations are stated for the time interval (−∞, ∞). Most
examples however are limited to [0, ∞). In system analysis, we use exclusively [0, ∞). More-
over, +∞ will be used only symbolically. It may mean, as we will discuss in the text, only
50 seconds away.
Figure 2.6: (a) CT signal approximated by a staircase function with T = 0.3. (b) With T = 0.1.
(c) With T = 0.001.
A CT signal is continuous if its amplitude does not jump from one value to another as t increases. All
real-world signals such as speech, temperature, and the speed of an automobile are continuous
functions of time.
We show in Figure 2.5 an important CT signal, called a clock signal. Such a signal,
generated using a quartz-crystal oscillator, is needed in everything from digital watches to
supercomputers. A 2-GHz (2 × 10⁹ cycles per second) clock signal will repeat its pattern every
0.5 × 10⁻⁹ second or half a nanosecond (ns). A blink of the eye takes about half a second,
within which the clock signal already repeats itself one billion times. Such a signal is beyond
our imagination and
yet is a real-world signal.
Now let us discuss an approximation of x(t). First we select a T > 0, for example T = 0.3.
Figure 2.7: (a) DT signal plotted against time. (b) Against time index.
We then approximate the dotted line by the staircase function denoted by the solid line in
Figure 2.6(a). The amplitude of x(t) for all t in [0, T ) is approximated by x(0); the amplitude
of x(t) for all t in [T, 2T ) is approximated by x(T ) and so forth. If the staircase function is
denoted by xT(t), then we have

    xT(t) = x(nT)   for nT ≤ t < (n + 1)T        (2.4)

for n = 0, 1, 2, . . . , and for some T > 0. The approximation of x(t) by xT(t) with T = 0.3
is poor as shown in Figure 2.6(a). If we select T = 0.1, then the approximation is better as
shown in Figure 2.6(b). If T = 0.001, then we cannot tell the difference between x(t) and
xT (t) as shown in Figure 2.6(c). A staircase function xT (t) changes amplitude only at t = nT ,
for n = 0, 1, 2, . . . . Thus it can be uniquely specified by x(nT ), for n = 0, 1, , 2, . . . . We call
T the sampling period; nT sampling instants or sample times; and x(nT ) the sampled values
or samples of x(t). The number of samples in one second is given by 1/T := fs and should
be called the sampling rate.2 However fs = 1/T is more often called the sampling frequency
with unit in Hz (cycles per second).
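The staircase approximation and its dependence on T can be sketched in a few lines of Python (the sine signal and the error grid are our own example choices, not from the text):

```python
import math

def staircase(x, T, t):
    """Staircase value xT(t) = x(nT) for t in [nT, (n+1)T)."""
    n = math.floor(t / T)
    return x(n * T)

x = lambda t: math.sin(2 * math.pi * t)   # an example CT signal

# Worst-case error over [0, 5] shrinks as the sampling period T shrinks.
for T in (0.3, 0.1, 0.001):
    err = max(abs(staircase(x, T, k * 0.0005) - x(k * 0.0005))
              for k in range(10001))
    print(T, round(err, 4))
```

With T = 0.001 the staircase is visually indistinguishable from x(t), mirroring Figure 2.6(c).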
The approximation of x(t) by xT (t) is called pulse-amplitude modulation (PAM). If the
amplitude of x(nT ) is coded using 1s and 0s, called the binary digits or bits, then the approx-
imation is called pulse-code modulation (PCM). If the amplitude is represented by a pulse
of a fixed height but different width, then it is called pulse-width modulation (PWM). If the
amplitude is represented by a fixed pulse (fixed height and fixed narrow width) at different
positions, then it is called pulse-position modulation (PPM). The signals in Figures 1.1 and
1.2 are obtained using PCM as we will discuss in Section 3.7.1.
A DT signal x(nT) can be written as

    x[n] := x(nT), for n = 0, 1, 2, . . .        (2.5)

and be plotted against time index as shown in Figure 2.7(b). Note the use of brackets and
parentheses. In this book, we adopt the convention that variables inside a pair of brackets can
assume only integers; whereas, variables inside a pair of parentheses can assume real numbers.
A DT signal consists of a sequence of numbers, thus it is also called a DT sequence or a time
sequence.
The DT signal in (2.5) consists of samples of x(t) and is said to be obtained by sampling
a CT signal. Some signals are inherently discrete time such as the one in Figure 1.4(b). The
number of shares in Figure 1.4(b) is plotted against calendar days, thus the DT signal is
not defined during weekends and holidays and the sampling instants are not equally spaced.
However if it is plotted against trading days, then the sampling instants will be equally spaced.
The plot in Figure 1.4(c) is not a signal because its amplitude is specified as a range, not a
unique value. However if we consider only its closing price, then it is a DT signal. If we plot
it against trading days, then its sampling instants will be equally spaced.
If a DT signal is of a finite length, then it can be stored in a computer using two sequences
of numbers. The first sequence denotes time instants and the second sequence denotes the
corresponding values. For example, a DT signal of length N starting from t = 0 and with
sampling period T can be stored in a computer as
n=0:N-1; t=n*T;
x=[x_1 x_2 x_3 · · · x_N]
where n=0:N-1 denotes the sequence of integers from 0 to N − 1. Note that the subscript of
x starts from 1 and ends at N. Thus both t and x have length N.
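The snippet above is in MATLAB-style notation. An equivalent sketch in Python (the sample values here are made up for illustration) stores the same two sequences:

```python
T = 0.5   # sampling period (an assumed example value)
N = 6     # number of samples

t = [n * T for n in range(N)]          # time instants 0, T, ..., (N-1)T
x = [1.8, 2.2, -0.8, 0.5, 0.0, 0.3]    # corresponding sample values (made up)

# Both sequences have length N; x[0] is the sample at t = 0.
assert len(t) == len(x) == N
print(t)   # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
```

Note that Python lists index from 0, so x[0] here corresponds to the book’s x_1.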
A number α can be typed or entered into a computer in decimal form or as a fraction. It
is however coded using 1s and 0s before it is stored and processed in the computer. If the
number of bits used in representing α is B, then the number of available values is 2B . If α is
not one of the available values, it must be approximated by its closest available value. This
is called quantization. The error due to quantization, called quantization error, depends on
the number of bits used. It is on the order of 10⁻⁷|α| using 32 bits or on the order of 10⁻¹⁶|α|
using 64 bits. Generally there is no need to pay any attention to such small errors.
The amplitude of a DT signal can assume any value in a continuous range. There are
infinitely many possible values in any continuous range. If the amplitude is binary coded
using a finite number of bits, then it can assume a value only from a finite set of values. Such
a signal is called a digital signal. Sampling a CT signal yields a DT signal. Binary coding
a DT signal yields a digital signal. All signals processed in computers and microprocessors
are digital signals. The study of digital signals, however, is complicated, and no textbook
carries it out. Thus in analysis and design, we study only DT signals. In implementation,
all DT signals must be binary coded. If quantization errors are large, they are studied using
statistical methods. This topic is outside the scope of this text and will not be discussed.
Note that quantization errors can be reduced by simply increasing the number of bits used.
Figure 2.8: Construction of a CT signal from a DT signal specified by the six dots at time
indices n = 0 : 5: (a) Zero-order hold. (b) First-order hold. (c) Linear interpolation.
Figure 2.9: (a) δa (t − t0 ). (b) Triangular pulse with area 1. (c) δT (t − 0) × T = T δT (t).
later, x(2T) 2T seconds later, and so forth. Let y(t) be the constructed CT signal shown in
Figure 2.8. If the value of y(t) can appear in real time, then it is called real-time processing.3
For the linear interpolation, the value of y(t) for t in [0, T) is unknown before the arrival of
x(T) at time t = T; thus y(t) cannot appear in real time, and linear interpolation is
not real-time processing even though it yields the best result. For the zero-order hold, the
value of y(t) for t in [0, T) is determined only by x(0) and thus can appear in real time. So
can the first-order hold. Because of its simplicity and real-time operation, the zero-order hold
is the most widely used in practice.
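The causality difference between the two constructions can be made concrete in code (a sketch; the sample values are hypothetical):

```python
def zoh(samples, T, t):
    """Zero-order hold: y(t) = x[n] for nT <= t < (n+1)T; uses only past samples."""
    n = int(t // T)
    return samples[n]

def linear_interp(samples, T, t):
    """Linear interpolation: needs the NEXT sample x[n+1], hence not real-time."""
    n = int(t // T)
    frac = t / T - n
    return (1 - frac) * samples[n] + frac * samples[n + 1]

xs = [0.0, 2.0, 1.0, -1.0]    # hypothetical samples with T = 1

print(zoh(xs, 1.0, 0.5))            # 0.0 -- computable the moment x[0] arrives
print(linear_interp(xs, 1.0, 0.5))  # 1.0 -- requires x[1], which arrives at t = 1
```

The zero-order hold needs only `samples[n]`, while the interpolator reaches forward to `samples[n + 1]` — exactly the reason it cannot run in real time.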
2.5 Impulses
In this section we introduce the concept of impulses. It will be used to justify the approxima-
tion of a CT signal by a staircase function. We will also use impulses to derive, very simply,
many results to illustrate some properties of signals and systems. Thus it is a very useful
concept.
Consider the pulse δa(t − t0) defined by

    δa(t − t0) = 1/a   for t0 ≤ t < t0 + a
    δa(t − t0) = 0     for t < t0 or t ≥ t0 + a        (2.6)
and shown in Figure 2.9(a). It is located at t0 and has width a and height 1/a. Its area is 1
3 This issue will be discussed further in Section 2.6.1.
for every a > 0. As a approaches zero, the pulse has practically zero width but infinite height,
and yet its area remains 1. Let us define

    δ(t − t0) := lim(a→0) δa(t − t0)        (2.7)

It is called the Dirac delta function, δ-function, or impulse. The impulse is located at t = t0
and has the property

    δ(t − t0) = ∞  if t = t0,   δ(t − t0) = 0  if t ≠ t0        (2.8)

and

    ∫_{−∞}^{∞} δ(t − t0) dt = ∫_{t0}^{t0+} δ(t − t0) dt = 1        (2.9)
where t0+ is a time infinitesimally larger than t0 . Because δ(t − t0 ) is zero everywhere except
at t = t0 , the integration interval from −∞ to ∞ in (2.9) can be reduced to the immediate
neighborhood of t = t0 . The integration width from t0 to t0+ is practically zero but still
includes the whole impulse at t = t0 . If an integration interval does not cover any impulse,
then its integration is zero such as
    ∫_{−∞}^{2.5} δ(t − 2.6) dt = 0   and   ∫_{3}^{10} δ(t − 2.6) dt = 0
where the impulse δ(t − 2.6) is located at t0 = 2.6. If an integration interval covers wholly an
impulse, then its integration is 1 such as
    ∫_{−10}^{3} δ(t − 2.6) dt = 1   and   ∫_{0}^{10} δ(t − 2.6) dt = 1
However, if an integration interval starts or ends exactly at t0, such as ∫_{a}^{t0} δ(t − t0) dt or
∫_{t0}^{b} δ(t − t0) dt, then ambiguity may occur. If the impulse is defined as in Figure 2.9(a), then the former is 0
and the latter is 1. However, we may also define the impulse using the pulse shown in Figure
2.9(b). Then the former integration is 1 and the latter is 0. To avoid these situations, we
assume that whenever an integration interval touches an impulse, the integration covers the
whole impulse and equals 1. Using this convention, we have
    ∫_{t0}^{t0} δ(t − t0) dt = 1        (2.10)
In contrast, for any ordinary function f(t) we have

    ∫_{t0}^{t0} f(t) dt = 0        (2.11)

where the integration interval is assumed to cover the entire value of f(t0). No matter
what value f(t0) assumes, such as f(t0) = 10, 10¹⁰, or 100¹⁰⁰, the integration is still zero. This
is in contrast to (2.10). Equation (2.11) shows that the value of an ordinary function at any
isolated time instant is immaterial in an integration.
Let A be a real number. Then the impulse Aδ(t − t0 ) is said to have weight A. This can
be interpreted as the pulse in Figure 2.9(a) to have area A by changing the height or width
or both. See Problem 2.8. Thus the impulse defined in (2.7) has weight 1. Note that there
Figure 2.10: (a) Triangular pulse with area 1 located at t0 = 4. (b) sin((t − 4)/0.1)/(π(t − 4)).
are many ways to define the impulse. In addition to the ones in Figures 2.9(a) and (b), we
may also use the isosceles triangle shown in Figure 2.10(a) located at t0 with base width a
and height 2/a to define the impulse. We can also define the impulse as
    δ(t − t0) = lim(a→0) sin((t − t0)/a) / (π·(t − t0))        (2.12)
which is plotted in Figure 2.10(b) for t0 = 4 and a = 0.1. Indeed there are many ways of
defining the impulse.
The mathematics involving impulses is very complex. See Reference [Z1]. All we need to
know in this text are the definition given in (2.7) and the next two properties:

    f(t)·δ(t − t0) = f(t0)·δ(t − t0)        (2.13)

and

    ∫_{−∞}^{∞} f(t) δ(t − t0) dt = ∫_{−∞}^{∞} f(t0) δ(t − t0) dt = f(t0) ∫_{−∞}^{∞} δ(t − t0) dt = f(t0)
or

    ∫_{−∞}^{∞} f(t) δ(t − t0) dt = ∫_{a}^{b} f(t) δ(t − t0) dt = f(t)|_{t−t0=0} = f(t)|_{t=t0}        (2.14)
for any function f (t) that is continuous at t = t0 , any a ≤ t0 , and any b ≥ t0 . Equation
(2.13) follows from the property that δ(t − t0 ) is zero everywhere except at t0 and (2.14)
follows directly from (2.13), (2.9) and (2.10). The integration in (2.14) is called the sifting or
sampling property of the impulse. We see that whenever a function is multiplied by an impulse
in an integration, we simply move the function outside the integration and then replace the
integration variable by the variable obtained by equating the argument of the impulse to
zero.4 The sifting property holds for any integration interval which covers or touches the
impulse. The property will be repeatedly used in this text and its understanding is essential.
Check your understanding with Problem 2.9.
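The sifting property can also be checked numerically by replacing δ(t − t0) with the finite pulse δa(t − t0) of (2.6) and letting a shrink (a sketch of ours; f = cos is an arbitrary choice of continuous function):

```python
import math

def sift(f, t0, a, steps=10000):
    """Integrate f(t) * delta_a(t - t0) over [t0, t0 + a] by the midpoint rule."""
    dt = a / steps
    # delta_a has height 1/a on that interval, so this is the average of f there.
    return sum(f(t0 + (k + 0.5) * dt) / a * dt for k in range(steps))

for a in (1.0, 0.1, 0.001):
    print(a, sift(math.cos, 0.5, a))   # approaches cos(0.5) = 0.8776 as a -> 0
```

As a → 0 the integral converges to f(t0), which is exactly what (2.14) states.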
Figure 2.11: (a) The pulse of xT(t) at t = 0. (b) The pulse at t = T. (c) The pulse at t = 2T.
(Amplitude versus time (s).)
The pulse δT(t) is by definition located at t = 0 with width T and height 1/T. Thus T·δT(t) has height 1
as shown in Figure 2.9(c). Consequently the pulse in Figure 2.11(a) can be expressed as
x(0)δT (t − 0)T . The pulse in Figure 2.11(b) is located at t = T with height x(T ) = 1.325
and width T . Thus it can be expressed as x(T )δT (t − T )T . The pulse in Figure 2.11(c) is
located at t = 2T = 0.6 with height x(2T ) = −2.464 and width T . Thus it can be expressed
as x(2T)δT(t − 2T)T. Proceeding forward, we can express xT(t) as

    xT(t) = Σ(n=0 to ∞) x(nT) δT(t − nT) T        (2.15)

This equation expresses the staircase function in terms of the samples of x(t) at sampling
instants nT.
Let us define τ := nT . As T → 0, τ becomes a continuous variable and we can write T as
dτ . Furthermore, the summation in (2.15) becomes an integration and δT (t − τ ) becomes an
impulse. Thus (2.15) becomes, as T → 0,
    xT(t) = ∫_{0}^{∞} x(τ) δ(t − τ) dτ
This shows that the staircase function equals x(t) as T → 0. Thus PAM is a mathematically
sound approximation of a CT signal.
preceding subsection. However not every CT signal can be so approximated. For example,
consider the signal defined by
    α(t) = 1  if t is a rational number,   α(t) = 0  if t is an irrational number        (2.17)
This signal will oscillate with increasing frequency as t → 0 and has infinitely many maxima
and minima in [0, 1]. Note that β(t) is a continuous function of t for all t > 0. The signals
α(t) and β(t) are mathematically contrived and will not arise in the real world. Thus we
assume from now on that every CT signal encountered in this text is of bounded variation.
Because y(nT ) does not depend on u(nT ), we can start to compute y(nT ) right after receiving
u([n − 1]T ) or right after time instant [n − 1]T .5 Depending on the speed of the processor or
hardware used and on how it is programmed, the amount of time needed to compute y(nT )
will be different. For convenience of discussion, suppose one multiplication requires 30 ns
5 The case where y(nT ) depends on u(nT ) will be discussed in Section 6.8.1.
2.7 CT Step and Real-Exponential Functions - Time Constant
(30 × 10⁻⁹ s) and one addition requires 20 ns. Because (2.19) involves two additions and two
multiplications, and if they are carried out sequentially, then computing y(nT) requires 100
ns or 0.1 µs. If this amount of computing time is less than T, we store y(nT) in memory
and then deliver it to the output terminal at time instant t = nT . If the amount of time
is larger than T , then y(nT ) is not ready for delivery at time instant t = nT . Furthermore
when u(nT ) arrives, we cannot start computing y([n + 1]T ) because the processor is still in
the middle of computing y(nT ). Thus if T is less than the amount of computing time, then
the processor cannot function properly. On the other hand, if T is larger than the computing
time, then y(nT ) can be computed and then stored in memory. Its delivery at nT is controlled
separately. In other words, even in real-time processing, there is no need to pay any attention
to the sampling period T so long as it is large enough to carry out the necessary computation.
In conclusion, in processing DT signals x(nT), the sampling period T does not play any
role. Thus we can suppress T, or assume T = 1, and use x[n], a function of the time index n, in
the study of DT signals and systems.
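The timing argument above reduces to a comparison between the computing time and T (a sketch with the per-operation times assumed in the text; `real_time_ok` is our own name):

```python
MUL_NS, ADD_NS = 30, 20                  # per-operation times assumed in the text
compute_ns = 2 * MUL_NS + 2 * ADD_NS     # two multiplications and two additions
print(compute_ns)                        # 100 ns, i.e. 0.1 microsecond

def real_time_ok(T_seconds):
    """The processor keeps up only if the sampling period exceeds the computing time."""
    return T_seconds > compute_ns * 1e-9

print(real_time_ok(1e-6))    # True:  T = 1 us leaves ample time
print(real_time_ok(50e-9))   # False: T = 50 ns is shorter than 100 ns
```

Any T above 100 ns works, and the exact value of T never enters the computation itself — which is why T can be suppressed.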
Figure 2.15: (a) e^{−at} with a = 0.1 (dotted line) and a = 2 (solid line) for t in [0, 60]. (b)
Segment of (a) for t in [0, 5].
a straight line. Thus the slope of the straight line is 1/a. If we take the derivative of qa (t),
then dqa (t)/dt equals 0 for t < 0 and t > a and 1/a for t in [0, a]. This is the pulse defined
in Figure 2.9(a) with t0 = 0 or δa (t). As a → 0, qa (t) becomes q(t) and δa (t) becomes the
impulse δ(t). Thus we have

    δ(t) = dq(t)/dt        (2.21)

This is another way of defining the impulse.
Consider next the real exponential function e^{−at} for t ≥ 0, where a is real and nonnegative
(zero or positive). If a is negative, the function grows exponentially to infinity. No device can
generate such a signal. If a = 0, the function reduces to the step function in (2.20). If a > 0,
the function decreases exponentially to zero as t approaches ∞. We plot in Figure 2.15(a) two
real exponential functions with a = 2 (solid line) and 0.1 (dotted line). The larger a is, the
faster e^{−at} vanishes. In order to see e^{−2t} better, we replot them in Figure 2.15(b) using
a different time scale.
A question we often ask in engineering is: at what time will e^{−at} become zero? Mathematically
speaking, it becomes zero only at t = ∞. However, in engineering, a function can be considered
to have become zero when its magnitude remains less than 1% of its peak value. For example, on a
scale with readings from 0 to 100, a reading of 1 or less can be considered to be 0. We now
estimate when e^{−at} reaches roughly zero. Let us define tc := 1/a. It is called
the time constant. Because
for all t, the amplitude of e−at decreases to 37% of its original amplitude whenever the time
increases by one time constant tc = 1/a. Because (0.37)5 = 0.007 = 0.7%, the amplitude of
e−at decreases to less than 1% of its original amplitude in five time constants. Thus we often
consider e−at to have reached zero in five time constants. For example, the time constant of
e−0.1t is 1/0.1 = 10 and e−0.1t reaches zero in 5 × 10 = 50 seconds as shown in Figure 2.15(a).
The signal e−2t has time constant 1/2 = 0.5 and takes 5 × 0.5 = 2.5 seconds to reach zero as
shown in Figure 2.15(b).
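The five-time-constant rule is easy to check numerically. The following sketch (in Python rather than MATLAB, purely for illustration) computes the estimate for the two exponentials in Figure 2.15:

```python
import math

def time_to_reach_zero(a):
    # Engineering estimate: e^(-a*t) is considered zero after
    # five time constants, where the time constant is tc = 1/a.
    tc = 1.0 / a
    return 5.0 * tc

# One time constant drops the amplitude to e^-1, about 37%;
# five time constants drop it below 1% of the initial value.
assert abs(math.exp(-1) - 0.37) < 0.005
assert math.exp(-5) < 0.01

print(time_to_reach_zero(0.1))  # 50.0 seconds, as in Figure 2.15(a)
print(time_to_reach_zero(2.0))  # 2.5 seconds, as in Figure 2.15(b)
```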
Figure 2.16: (a) A given function x(t). (b) x(t − 1). (c) x(t + 2).
Consider the signal x(t) shown in Figure 2.16(a). From the plot, we can see that x(0) =
0, x(1) = 1.5, x(2) = 1, x(3) = 0.5, and x(4) = 0. We now plot the signal defined by x(t − 1).
To plot x(t − 1), we select a number of values of t and then find the corresponding values of x(t − 1) from the given x(t). For
example, if t = −1, then x(t − 1) = x(−2), which equals 0 as we can read from Figure 2.16(a).
If t = 0, then x(t − 1) = x(−1), which equals 0. If t = 1, then x(t − 1) = x(0), which equals
0. Proceeding forward, we have the following
t          −1   0   1   2     3   4     5   6
x(t − 1)    0   0   0   1.5   1   0.5   0   0
From these, we can plot in Figure 2.16(b) x(t − 1). Using the same procedure we can plot
x(t + 2) = x(t − (−2)) in Figure 2.16(c). We see that x(t − t0) simply shifts x(t) so that the
point originally at t = 0 moves to t = t0. If t0 > 0, x(t − t0) shifts x(t) to the right and is
positive time if x(t) is positive time. If t0 < 0, x(t − t0) shifts x(t) to the left and may not be
positive time even if x(t) is positive time.
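The tabulation procedure for x(t − t0) can be mimicked in a short Python sketch; the piecewise-linear `x` below is a hypothetical reconstruction from the values read off Figure 2.16(a):

```python
import math

def x(t):
    # Piecewise-linear signal read off Figure 2.16(a):
    # x(0)=0, x(1)=1.5, x(2)=1, x(3)=0.5, x(4)=0; zero elsewhere.
    pts = {0: 0.0, 1: 1.5, 2: 1.0, 3: 0.5, 4: 0.0}
    if t < 0 or t > 4:
        return 0.0
    lo = math.floor(t)
    hi = min(lo + 1, 4)
    return pts[lo] * (1 - (t - lo)) + pts[hi] * (t - lo)

# Tabulate x(t - 1) at t = -1, 0, ..., 6, as in the text:
# shifting by t0 = 1 moves the plot one unit to the right.
table = [x(t - 1) for t in range(-1, 7)]
print(table)  # [0.0, 0.0, 0.0, 1.5, 1.0, 0.5, 0.0, 0.0]
```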
The impulse sequence δd[n − n0] is defined to be 1 for n = n0 and 0 for n ≠ n0,
where n0 is any integer. In other words, we have δd[0] = 1 and δd[n] = 0 for all nonzero integer
n. The sequence is also called the Kronecker delta sequence and is the DT counterpart of the
CT impulse defined in (2.8). Unlike the CT impulse which cannot be generated in practice,
the impulse sequence can be easily generated. Note that −2δd [n] = −2δd [n − 0] is an impulse
sequence at n = 0; it is zero for all integer n except at n = 0 where the amplitude is −2.
A DT signal is defined to be positive time if x[n] = 0 for all n < 0. Thus a DT positive-
time signal can assume nonzero value only for n ≥ 0. Consider the DT positive-time signal
shown in Figure 2.17(a) with x[0] = 1.8, x[1] = 2.2, x[2] = −0.8, x[3] = 0.5, and so forth. Such
a signal can be represented graphically as shown. It can also be expressed in tabular form as
n      0     1     2      3    · · ·
x[n]   1.8   2.2   −0.8   0.5  · · ·
Can we express it as
x[n] = 1.8 + 2.2 − 0.8 + 0.5 + · · ·
for n ≥ 0? The answer is negative. (Why?) Then how do we express it as an equation?
30 CHAPTER 2. SIGNALS
Figure 2.17: (a) DT signal x[n]. (b) x[0]δd [n]. (c) x[1]δd [n − 1]. (d) x[2]δd [n − 2].
Let us decompose the DT signal shown in Figure 2.17(a) as in Figures 2.17(b), (c), (d),
and so forth. That is, the sum of those in Figures 2.17(b), (c), (d), . . . yields Figure 2.17(a).
Note that the summation is carried out at every n.
The sequence in Figure 2.17(b) is the impulse sequence δd [n] with height x[0] or 1.8δd [n−0].
The sequence in Figure 2.17(c) is the sequence δd [n] shifted to n = 1 with height x[1] or
2.2δd [n − 1]. The sequence in Figure 2.17(d) is −0.8δd [n − 2]. Thus the DT signal in Figure
2.17(a) can be expressed as
x[n] = 1.8δd [n − 0] + 2.2δd [n − 1] − 0.8δd [n − 2] + 0.5δd [n − 3] + · · ·
or

x[n] = Σ_{k=0}^{∞} x[k] δd[n − k]        (2.25)
This holds for every integer n. Note that in the summation, n is fixed and k ranges from 0
to ∞. For example, if n = 10, then (2.25) becomes
x[10] = Σ_{k=0}^{∞} x[k] δd[10 − k]
As k ranges from 0 to ∞, every δd [10 − k] is zero except k = 10. Thus the infinite summation
reduces to x[10] and the equality holds. Note that (2.25) is the DT counterpart of (2.16).
To conclude this section, we mention that (2.24) must be modified as
δd([n − n0]T) = δd(nT − n0T) = 1 for n = n0, and 0 for n ≠ n0
if the sampling period T > 0 is to be expressed explicitly. Moreover, (2.25) becomes
x(nT) = Σ_{k=−∞}^{∞} x(kT) δd(nT − kT)        (2.26)
if the signal starts from −∞. This equation will be used in Chapter 4.
2.8. DT IMPULSE SEQUENCES 31
Figure 2.18: (a) bn with b = 0.5 (solid-line stems) and b = 0.9 (dotted-line stems) for n = 0 :
50. (b) Segment of (a) for n = 0 : 10.
The DT counterpart of the step function is the step sequence qd[n], which equals 1 for all n ≥ 0 and 0 for all n < 0. It has amplitude 1 for n ≥ 0. If it has amplitude −2, we write −2qd[n]. The shifting discussed
in Section 2.7.1 is directly applicable to the DT case. Thus −2qd [n − 3] shifts −2qd [n] to the
right by 3 samples. That is, −2qd [n − 3] = 0 for all n < 3 and −2qd [n − 3] = −2 for all n ≥ 3.
The time constant of 0.9^n is t̄c = −T/ln 0.9 = 9.49T, and it takes 5 × 9.49T = 47.45T in
seconds, or 47.45 in samples, for 0.9^n to become zero. This is indeed the case as shown in Figure 2.18(a).
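The sampled exponential b^n corresponds to e−at with b = e−aT, so its time constant in seconds is t̄c = −T/ln b. A Python check of the numbers quoted above:

```python
import math

def dt_time_constant(b, T=1.0):
    # b**n = e**(-a*n*T) with a = -ln(b)/T, so the time constant
    # 1/a equals -T/ln(b) seconds (T = 1 gives it in samples).
    return -T / math.log(b)

tc = dt_time_constant(0.9)        # about 9.49 samples
assert abs(tc - 9.49) < 0.005
assert 0.9 ** (5 * tc) < 0.01     # below 1% after five time constants
assert 0.9 ** (4 * tc) > 0.01     # four time constants are not quite enough
```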
Problems
2.1 Consider a point A on a straight line. If we know the distance of A from a known point,
can we determine the position of A?
2.2 Consider a point A on a plane. Can you determine the position of A from three distances
from three known points? How?
2.4 Verify that the midpoint of any two rational numbers is a rational number. Using the
procedure, can you list an infinite number of rational numbers between (3/5) and (2/3)?
List only three of them.
2.5 Consider x(t) defined by x(t) = 0.5t, for 0 ≤ t ≤ 3 and x(t) = 0, for t < 0 and t ≥ 3.
Plot x(t) for t in [0, 5]. Is x(t) a signal? If not, modify the definition to make it a signal.
2.6 Find the samples of the modified x(t) in Problem 2.5 if the sampling period is T = 0.5.
2.7 Construct, using zero-order, first-order holds, and linear interpolation, CT signals for t
in [0, 3], from the DT signal x[−1] = 0, x[0] = 1, x[1] = 0, x[2] = −1, x[3] = 2, x[4] =
1, x[5] = 1, x[6] = 0 with sampling period T = 0.5.
2.8 What are the mathematical expressions of the pulses shown in Figure 2.19 as a → 0?
2.9 What are the values of the following integrals involving impulses:

1. ∫_{−∞}^{∞} (cos t) δ(t) dt
2. ∫_{0}^{π/2} (sin t) δ(t − π/2) dt
3. ∫_{π}^{∞} (sin t) δ(t − π/2) dt
4. ∫_{0}^{∞} δ(t + π/2) [sin(t − π)] dt
5. ∫_{−∞}^{∞} δ(t)(t³ − 2t² + 10t + 1) dt
6. ∫_{0−}^{0+} δ(t) e^{2t} dt
7. ∫_{0−}^{0+} 10^{10} e^{2t} dt

[Answers: 1, 1, 0, 0, 1, 1, 0.]
2.10 Verify

∫_{−∞}^{∞} x(τ) δ(t − τ − 3) dτ = x(t − 3)

and

∫_{−∞}^{∞} x(t − τ) δ(τ − t0) dτ = x(t − t0)
. . . . . .
2.14 What is the time constant of x(t) = −2e−1.2t ? Use it to sketch roughly the function for
t in [0, 10].
2.15 Consider x(t) shown in Figure 2.21. Plot x(t − 1), x(t + 1), x(t − 1) + x(t + 1), and
x(t − 1)x(t + 1). Note that the addition of two straight lines is a straight line, but their
product is not.
Figure 2.21: The signal x(t) for Problem 2.15 (amplitude versus time in seconds).
2.16 Consider the DT signal x[n] defined by x[0] = −2, x[1] = 2, x[3] = 1, and x[n] = 0 for
all other n. Plot x[n] and express it as an equation using impulse sequences.
2.17 For the x[n] in Problem 2.16, plot x[n − 3] and x[−n].
2.18 What is the sampled sequence of e−0.4t , for t ≥ 0, with sampling period T = 0.5? What
are its time constants in seconds and in samples?
2.19 What is the sampled sequence of −2e−1.2t , for t ≥ 0, with sampling period T = 0.2?
What are its time constants in seconds and in samples?
Chapter 3

Some Mathematics and MATLAB
3.1 Introduction
In this chapter we introduce some mathematics and MATLAB. Even though we encounter
only real-valued signals, we need the concept of complex numbers. Complex numbers will be
used to define frequency and will arise in developing frequency contents of signals. They also
arise in analysis and design of systems. Thus the concept is essential in engineering.
We next introduce matrices and some simple manipulations. We discuss them only to
the extent needed in this text. We then discuss the use of mathematical notations and point
out a confusion which often arises in using Hz and rad/s. A mathematical notation can be
defined in any way we want. However once it is defined, all uses involving the notation must
be consistent.
We discuss a number of mathematical conditions. These conditions are widely quoted in
most texts on signals and systems. Rather than simply stating them as in most texts, we
discuss their implications and differences. It is hoped that the discussion will provide better
appreciation of these concepts. Even if you do not fully understand them, you should proceed
to the next topic. After all, you rarely encounter these conditions in practice, because all
real-world signals automatically meet these conditions.
We then give a proof that a certain number is irrational. The proof will give a flavor of logical
development.
In the remainder of this chapter, we introduce MATLAB. It is an important software
package in engineering and is widely available. For example, every PC at Stony Brook for
students’ use contains MATLAB. The reader is assumed to have access to MATLAB. One
way to learn MATLAB is to pick up a book and to study it throughout. Our approach is to
study only what is needed as we proceed. This chapter introduces MATLAB to the extent of
generating Figures 1.1-1.3, and all figures in Chapter 2. The discussion is self-contained. A
prior knowledge of MATLAB is helpful but not essential.
36 CHAPTER 3. SOME MATHEMATICS AND MATLAB
Figure 3.1: (a) Complex plane. (b) The complex number ejθ .
Complex numbers arise in solving polynomial equations. For example, the polynomial equation x² + 4 = 0 has no real-valued solution but
has the complex numbers ±j2 as its solutions. By introducing complex numbers, every
polynomial equation of degree n has n solutions.
A real number can be represented by a point on a real number line. To represent a complex
number, we need two real number lines, perpendicular to each other, as shown in Figure 3.1.
It is called a complex plane. The horizontal axis, called the real axis, denotes the real part
and the vertical axis, called the imaginary axis, denotes the imaginary part. For example,
the complex number 2 + 3j can be represented by the point denoted by A shown in Figure
3.1(a) and −3 + 2j by the point denoted by B.
Let us draw a line from the origin (0 + j0) toward A with an arrow as shown in Figure
3.1(a). We call the line with an arrow a vector. Thus a complex number can be considered a
vector in a complex plane. The vector can be specified by its length r and its angle or phase
θ as shown. Because the length cannot be negative, r is called magnitude, not amplitude.
The angle is measured from the positive real axis. By convention, an angle is positive if
measured counterclockwise and negative if measured clockwise. Thus a complex number A
can be represented as
A = α + jβ = rejθ (3.1)
The former is said to be in rectangular form and the latter in polar form. Either form can be
obtained from the other. Substituting Euler’s formula
e^{jθ} = cos θ + j sin θ        (3.2)

into (3.1) yields α = r cos θ and β = r sin θ, which implies

r = √(α² + β²) ≥ 0  and  θ = tan⁻¹[β/α]        (3.3)
For example, for A = 2 + 3j, we have

r = √(2² + 3²) = √(4 + 9) = √13 = 3.6

and

θ = tan⁻¹(3/2) = 56°

Thus we also have A = 3.6e^{j56°}. Note that even though r² = 13 has the two solutions
±3.6, we cannot take −3.6 because we require r to be positive. Note that r and θ can also
be obtained by measurement using a ruler and a protractor.
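The conversion between rectangular and polar forms can also be checked with Python's standard cmath module (a quick numerical illustration, not part of the text):

```python
import cmath
import math

A = 2 + 3j
r, theta = cmath.polar(A)        # polar form: A = r * e^(j*theta)
assert abs(r - math.sqrt(13)) < 1e-12           # r = sqrt(2^2 + 3^2)
assert abs(math.degrees(theta) - 56.31) < 0.01  # about 56 degrees

# Converting back recovers the rectangular form.
B = cmath.rect(r, theta)
assert abs(B - A) < 1e-12
```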
3.2. COMPLEX NUMBERS 37
The complex number e^{jθ} has magnitude 1 and phase θ. It is a vector of unit length as shown in Figure 3.1(b) or simply
a point on the unit circle. From the plot we can readily obtain
e^{j0} = 1 + 0 × j = 1
e^{j(π/2)} = 0 + 1 × j = j
e^{−j(π/2)} = 0 + (−1) × j = −j
e^{jπ} = −1 + 0 × j = −1
Of course, we can also obtain them using (3.2). But it is much simpler to obtain them
graphically.
To conclude this section, we introduce the concept of complex conjugation. Consider the
complex number A = α + jβ, where α and β are real. The complex conjugate of A, denoted
by A∗ , is defined as A∗ = α − jβ. Following this definition, we have that A is real if and only
if A∗ = A. We also have AA∗ = |A|² and [AB]∗ = A∗B∗ for any complex numbers A and B.
We now compute the complex conjugate of re^{jθ}, where r and θ are real, that is, r∗ = r and
θ∗ = θ. We have

[re^{jθ}]∗ = r∗[e^{jθ}]∗ = re^{(jθ)∗} = re^{−jθ}

This can also be verified, using Euler's formula, as

[r cos θ + jr sin θ]∗ = r cos θ − jr sin θ = re^{−jθ}

Thus complex conjugation simply changes j to −j or −j to j if all variables involved are real
valued.
Two angles θ1 and θ2 are considered the same angle if they differ by 360 or its integer
multiple or, equivalently, if their difference is divisible by 360. This is expressed
mathematically as

θ1 = θ2 (modulo 360) or (mod 360)        (3.4)

if

θ1 − θ2 = k × 360        (3.5)

for some integer k (negative, 0, or positive). Clearly, we have 56 ≠ −304. But we can write
56 = −304 (mod 360) because 56 − (−304) = 360.
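The modulo test in (3.4)-(3.5) translates directly into code; a Python sketch:

```python
def same_angle(theta1, theta2):
    # theta1 = theta2 (mod 360) iff their difference is k*360
    # for some integer k, as in (3.5).
    return (theta1 - theta2) % 360 == 0

assert same_angle(56, -304)     # 56 - (-304) = 360
assert same_angle(0, 720)       # differ by 2 * 360
assert not same_angle(56, 304)  # differ by 248, not a multiple of 360
```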
3.3 Matrices
In this section we discuss some matrix notations and operations. An m×n matrix has m rows
and n columns of entries enclosed by a pair of brackets and is said to have dimension or order
m by n. Its entries can be numbers or variables. The entry at the kth-row and lth-column is
called the (k, l)th entry. Every matrix will be denoted by a boldface letter. A 1 × 1 matrix is
called a scalar and will be denoted by a regular-face letter. An m × 1 matrix is also called a
column vector and a 1 × n matrix is also called a row vector.
Two matrices equal each other if and only if they have the same dimension and all
corresponding entries equal each other. For matrices, we may define additions, subtractions,
multiplications and divisions. Divisions are defined through inverses which will not be used
in this text and will not be discussed. For additions and subtractions, the two matrices must
be of the same order and the operation is carried out at the corresponding entries in the usual
way.
The multiplication of a scalar and an m × n matrix is an m × n matrix with every of its
entries multiplied by the scalar. For example, if d = 3 and
    M = [ 2    3   −1 ]
        [ 4  −1.5   0 ]
then we have

    dM = Md = [  6    9   −3 ]
              [ 12  −4.5   0 ]
Note that the positional order of d and M can be interchanged.
In order for the multiplication of an m × n matrix M and a p × q matrix P, denoted by
MP, to be defined, we require n = p (their inner dimensions must be the same) and the
resulting MP is an m × q matrix (MP has their outer dimensions). For example, let M be
a 2 × 3 matrix and P be a 3 × 2 matrix as

    M = [ 2    3   −1 ]              [ 2.5    0  ]
        [ 4  −1.5   0 ]    and   P = [  1     2  ]        (3.6)
                                     [ −3    0.8 ]
Then MP is a 2 × 2 matrix defined by

    MP = [ 2 × 2.5 + 3 × 1 + (−1) × (−3)      2 × 0 + 3 × 2 + (−1) × 0.8   ]
         [ 4 × 2.5 + (−1.5) × 1 + 0 × (−3)    4 × 0 + (−1.5) × 2 + 0 × 0.8 ]

       = [ 11    5.2 ]
         [ 8.5   −3  ]
Note that the (k, l)th entry of MP is the sum of the products of the corresponding entries of
the kth row of M and the lth column of P. Using the same rule, we can obtain
    PM = [  5      7.5   −2.5 ]
         [ 10       0     −1  ]
         [ −2.8  −10.2     3  ]
It is a 3 × 3 matrix. In general, we have MP ≠ PM. Thus the positional order of matrices
is important and should not be altered. The only exception is the multiplication by a scalar
such as d = 3. Thus we can write dMP = MdP = MPd.
Example 3.4.1 Consider

    [ x1 ]   [ 2    3   −1 ] [ 2.5 ]   [  0  ]
    [ x2 ] = [ 4  −1.5   0 ] [  1  ] + [  2  ] (−3)        (3.7)
    [ x3 ]   [ 0    5    2 ] [ −3  ]   [ 0.8 ]

We have

    [ x1 ]   [ 2 × 2.5 + 3 × 1 + (−1) × (−3)   ]   [  0 × (−3)  ]
    [ x2 ] = [ 4 × 2.5 + (−1.5) × 1 + 0 × (−3) ] + [  2 × (−3)  ]
    [ x3 ]   [ 0 × 2.5 + 5 × 1 + 2 × (−3)      ]   [ 0.8 × (−3) ]

or

    [ x1 ]   [ 11  ]   [  0   ]   [ 11   ]
    [ x2 ] = [ 8.5 ] + [ −6   ] = [  2.5 ]
    [ x3 ]   [ −1  ]   [ −2.4 ]   [ −3.4 ]

Thus we have x1 = 11, x2 = 2.5, and x3 = −3.4. □
The transpose of M, denoted by a prime as M′, changes the first row into the first column,
the second row into the second column, and so forth. Thus if M is of order n × m, then M′
has order m × n. For example, for the M in (3.6), we have

    M′ = [  2    4   ]
         [  3  −1.5  ]
         [ −1    0   ]
The use of transpose can save space. For example, the 3 × 1 column vector in (3.7) can be
written as [x1 x2 x3]′.
A great deal more can be said about matrices. The preceding discussion is all we need in
this text.
then we have X̄(f ) = X(2πf ) and no confusion will arise. Under the definition of (3.8), we
have
3/(2 − j5ω) = X(−ω)  and  3/(2 + jω) = X(ω/5)
We give one more example. Let us define
X(ω) := ∫_{t=−∞}^{∞} x(t) e^{−jωt} dt = ∫_{τ=−∞}^{∞} x(τ) e^{−jωτ} dτ
There are two variables ω and t, where t is the integration or dummy variable. The dummy
variable can be denoted by any notation such as τ as shown. From the definition, we can
write

X(−ω) = ∫_{t=−∞}^{∞} x(t) e^{jωt} dt
and

X(ω − ωc) := ∫_{t=−∞}^{∞} x(t) e^{−j(ω−ωc)t} dt
all n or t in (−∞, ∞). They are however directly applicable to real-valued signals defined for
n or t in [0, ∞).
We study in this text both CT and DT signals. The mathematics needed to analyze them
are very different. Because the conditions for the latter are simpler than those for the former,
we study first the DT case.
Consider a DT signal or sequence x[n]. The sequence x[n] is defined to be summable in
(−∞, ∞) if

|Σ_{n=−∞}^{∞} x[n]| < M̄ < ∞        (3.13)
for some finite M̄. In other words, if x[n] is not summable, then its sum will approach
either ∞ or −∞. The sequence is defined to be absolutely summable if
Σ_{n=−∞}^{∞} |x[n]| < M < ∞        (3.14)
for some finite M . We first use an example to discuss the difference of the two definitions.
Consider x[n] = 0, for n < 0, and
x[n] = 1 for n even  and  x[n] = −1 for n odd        (3.15)
for n ≥ 0. The sequence assumes 1 and −1 alternatively as n increases from 0 to infinity. Let
us consider
S := Σ_{n=0}^{N} x[n] = 1 − 1 + 1 − 1 + 1 − 1 + · · · + (−1)^N        (3.16)
The sum is unique no matter how large N is and is either 1 or 0. However if N = ∞, the sum
is no longer unique and can assume any value depending on how it is added up. It could be
0 if we group it as
S = (1 − 1) + (1 − 1) + · · ·
or 1 if we group it as
S = 1 − (1 − 1) − (1 − 1) − · · ·
If we write 1 = 9 − 8 and carry out the following grouping
S = (9 − 8) − (9 − 8) + (9 − 8) − · · · = 9 − (8 − 8) − (9 − 9) − · · ·
then the sum in (3.16) with N = ∞ appears to be 9. This is a paradox involving ∞. Even
so, the summation is always finite and the sequence is summable.
Next we consider
Sa = Σ_{n=0}^{∞} |x[n]| = 1 + |−1| + 1 + |−1| + · · · = 1 + 1 + 1 + 1 + · · ·
in which we take the absolute value of x[n] and then sum them up. Because there is no more
cancellation, the sum is infinity. Thus the sequence in (3.15) is summable but not absolutely
summable.
If a sequence x[n] is absolutely summable, then it must be bounded in the sense that

|x[n]| < M1

for some finite M1 and for all n. Roughly speaking, if |x[n1]| = ∞ for some n1, then the sum
including the term will also be infinity and the sequence cannot be absolutely summable.
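The difference between the two definitions can be seen numerically; a Python sketch using the sequence in (3.15):

```python
def x(n):
    # Sequence (3.15): zero for n < 0, alternating +1/-1 for n >= 0.
    if n < 0:
        return 0
    return 1 if n % 2 == 0 else -1

# Partial sums of x[n] stay bounded (every partial sum is 0 or 1) ...
partials = {sum(x(n) for n in range(N + 1)) for N in range(50)}
assert partials == {0, 1}

# ... but partial sums of |x[n]| grow without bound.
assert sum(abs(x(n)) for n in range(1000)) == 1000
```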
Figure 3.3: (a) The signal sin 3t, for t ≥ 0, plotted against time. (b) Its magnitude |sin 3t|.
If a sequence x[n] is absolutely summable, then the magnitude of the sequence must
approach zero as n → ±∞. Its mathematical meaning is that for any arbitrarily small ε > 0,
there exists an N such that

|x[n]| < ε for all |n| ≥ N        (3.17)

We prove this by contradiction. If x[n] does not approach zero as |n| → ∞, then there are
infinitely many1 n with |x[n]| ≥ ε. Thus their sum will be infinity. In conclusion, if x[n] is
absolutely summable, then it is bounded for all n and approaches zero as |n| → ∞.
We now discuss the CT counterpart. A CT signal x(t) is defined to be integrable in
(−∞, ∞) if
|∫_{−∞}^{∞} x(t) dt| < M̄ < ∞

for some finite M̄. Consider x(t) = sin 3t, for t ≥ 0 and x(t) = 0 for t < 0 as plotted in Figure
3.3(a). Its integration is the algebraic sum of all positive and negative areas. Because of the
cancellation of positive and negative areas, the integration cannot be infinity. Thus sin 3t is
integrable. We plot the magnitude of sin 3t or | sin 3t| in Figure 3.3(b). All areas are positive
and there is no more cancellation. Thus its total area is infinity and sin 3t is not absolutely
integrable.
An absolutely summable DT signal is bounded and approaches 0 as n → ±∞. The cor-
responding statement for the CT case however is not necessarily true. For example, consider
the CT signal x(t) shown in Figure 3.4. The triangle located at t = n, where n is a positive
integer, is defined to have base width 2/n3 and height n. The triangle is defined for all n ≥ 2.
The CT signal so defined is not bounded, nor does it approach zero as t → ∞. Because the
area of the triangle located at t = n is 1/n², we have

∫_{−∞}^{∞} |x(t)| dt = 1/2² + 1/3² + · · · + 1/n² + · · ·
The infinite summation can be shown to be finite (Problem 3.18). Thus the CT signal is
absolutely integrable. In conclusion, unlike the DT case, an absolutely integrable CT signal
may not be bounded and may not approach zero as |t| → ∞.
1 If there are only finitely many, then we can find an N to meet (3.17).
3.5. MATHEMATICAL CONDITIONS USED IN SIGNALS AND SYSTEMS 43
Figure 3.4: A CT signal that is absolutely integrable but is neither bounded nor approaches
zero as t → ∞.
Now if x(t) is complex-valued, then the total energy delivered by x(t) is defined as
E = ∫_{t=−∞}^{∞} x(t) x∗(t) dt = ∫_{t=−∞}^{∞} |x(t)|² dt        (3.19)
where x∗ (t) is the complex conjugate of x(t). Note that (3.19) reduces to (3.18) if x(t) is real.
If E in (3.19) is finite, x(t) is defined to be squared absolutely integrable or to have finite total
energy.
Consider a DT real-valued signal x(nT ). It is a sequence of numbers. Such a sequence
cannot drive any physical device and its total energy is zero. Now if we use x(nT ) to construct
a CT signal using a zero-order hold, then the resulting CT signal is a staircase function as
shown in Figure 2.6 with solid lines. If we use xT (t) to denote the staircase function, then
xT (t) has the total energy
E = ∫_{t=−∞}^{∞} [xT(t)]² dt = T Σ_{n=−∞}^{∞} [x(nT)]²        (3.20)
Thus mathematically we may define the total energy of complex-valued x[n] defined for all n
in (−∞, ∞) as

Ed = Σ_{n=−∞}^{∞} x[n] x∗[n] = Σ_{n=−∞}^{∞} |x[n]|²        (3.21)
If Ed is finite, x[n] is defined to be squared absolutely summable or to have finite total energy.
Now we show that if x[n] is absolutely summable, then it has finite total energy. As shown
in the preceding section, if x[n] is absolutely summable, then it is bounded, that is, there exists
a finite M1 such that |x[n]| < M1 for all n. We substitute this into one of |x[n]| × |x[n]| in
(3.21) to yield

Ed := Σ_{n=−∞}^{∞} |x[n]| |x[n]| < M1 Σ_{n=−∞}^{∞} |x[n]|        (3.22)
If x[n] is absolutely summable, the last infinite summation is less than some finite M . Thus
we have Ed < M M1 < ∞. In conclusion, an absolutely summable x[n] has finite total energy.
Do we have a similar situation in the CT case? That is, if x(t) is absolutely integrable,
will it have finite total energy? The answer is negative. For example, consider x(t) = t−2/3
for t in [0, 1] and x(t) = 0 for t outside the range. Using the formula

∫ t^a dt = t^{a+1}/(a + 1)

where a is a real number other than −1,2 we can compute
∫_{−∞}^{∞} |x(t)| dt = ∫_{t=0}^{1} t^{−2/3} dt = [ t^{1/3}/(−2/3 + 1) ]_{t=0}^{1} = 3(1 − 0) = 3        (3.23)

and

∫_{−∞}^{∞} |x(t)|² dt = ∫_{t=0}^{1} t^{−4/3} dt = [ t^{−1/3}/(−4/3 + 1) ]_{t=0}^{1} = −3(1 − ∞) = ∞        (3.24)
Thus the signal is absolutely integrable but has infinite energy. Note that the impulse defined
in (2.7) and the signal in Figure 3.4 are absolutely integrable but have infinite energy. See
Problems 3.22 and 3.23. In conclusion, an absolutely integrable CT signal may not have finite
total energy or absolute integrability does not imply squared absolute integrability.
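The two integrals in (3.23) and (3.24) can also be checked by shrinking the lower limit toward zero; a Python sketch using the closed-form antiderivatives:

```python
def integral_abs(eps):
    # Integral from eps to 1 of t^(-2/3) dt = 3 * (1 - eps^(1/3))
    return 3 * (1 - eps ** (1 / 3))

def integral_energy(eps):
    # Integral from eps to 1 of t^(-4/3) dt = 3 * (eps^(-1/3) - 1)
    return 3 * (eps ** (-1 / 3) - 1)

# The first integral stays below its finite limit 3 ...
assert all(integral_abs(e) < 3 for e in (1e-3, 1e-6, 1e-9))
assert abs(integral_abs(1e-9) - 3) < 0.005
# ... while the energy integral blows up as eps -> 0.
assert integral_energy(1e-9) > integral_energy(1e-6) > 100
```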
Why is the CT case different from the DT case? An absolutely summable DT signal is
always bounded. This is however not the case in the CT case. The signal x(t) = t−2/3 for t
in [0, 1] is not bounded. Nor are the impulse and the signal in Figure 3.4. Thus boundedness
is an important condition. Now we show that if x(t) is absolutely integrable and bounded
for all t in (−∞, ∞), then it has finite total energy. Indeed, if x(t) is bounded, there exists
a finite M1 such that |x(t)| < M1 , for all t. We substitute this into one of |x(t)| × |x(t)| in
(3.19) to yield

E = ∫_{t=−∞}^{∞} |x(t)| |x(t)| dt < M1 ∫_{t=−∞}^{∞} |x(t)| dt
If x(t) is absolutely integrable, the last integral is less than some finite M . Thus we have
E < M M1 < ∞ and x(t) has finite total energy. Note that the proof must use both conditions:
boundedness and absolute integrability. If we use only one of them, then the proof cannot be
completed.
Now we have introduced all mathematical conditions one may encounter in the study of
signals and systems. In practice, a signal must start somewhere and cannot last forever. If
we select the starting time instant as t = 0, then the signal x(t) is defined only for t in [0, L],
for some finite L. The signal is implicitly assumed to be zero outside the range and is said to
have a finite duration. Clearly a practical signal cannot grow to ∞ or −∞, thus the signal is
bounded for all t in [0, L]. If x(t) is defined only for t in [0, L] and is bounded by M , then
∫_{−∞}^{∞} |x(t)| dt = ∫_{0}^{L} |x(t)| dt ≤ ∫_{0}^{L} M dt = ML

and

∫_{−∞}^{∞} |x(t)|² dt = ∫_{0}^{L} |x(t)|² dt ≤ ∫_{0}^{L} M² dt = M²L
Thus every signal that is of finite duration and bounded is automatically absolutely integrable
and has finite total energy. In conclusion, every real-world signal meets all the conditions
mentioned in this and preceding sections.
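The bounds ML and M²L can be confirmed numerically for any concrete bounded, finite-duration signal; a Python sketch with a hypothetical example x(t) = M sin(10t) on [0, L]:

```python
import math

M, L = 2.0, 3.0            # bound and duration (arbitrary choices)
N = 100_000                # midpoint Riemann-sum resolution
dt = L / N

# A concrete signal bounded by M on [0, L], zero elsewhere.
xs = [M * math.sin(10 * (k + 0.5) * dt) for k in range(N)]

area = sum(abs(v) for v in xs) * dt     # approximates the integral of |x(t)|
energy = sum(v * v for v in xs) * dt    # approximates the integral of |x(t)|^2

assert area <= M * L           # bounded by M*L = 6
assert energy <= M * M * L     # bounded by M^2*L = 12
```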
1. Most real-world signals are CT or analog. To process them digitally, we must select a
sampling period. To do so, we need to know frequency contents of CT signals. More-
over, the frequency of DT sinusoids must be defined, as we will discuss in Subsection
4.6.1, through the frequency of CT sinusoids. Thus we must study CT sinusoids before
studying DT sinusoids.
2. All signals in a computer are actually analog signals. However they can be modeled as
1s or 0s and there is no need to pay any attention to their analog behavior. However,
as we can see from Figure 2.12, the interface between the analog and digital worlds requires
an anti-aliasing filter, a sample-and-hold circuit, an ADC, and a DAC. They are all analog systems.
3. Both CT and DT systems can be classified as FIR and IIR. In the CT case, we study
exclusively IIR. See Subsection 8.8.1. In the DT case, we study and design both FIR and
IIR systems. Because there is no CT counterpart, DT FIR systems must be designed
directly. However, direct design of DT IIR systems is complicated. It is simpler to
design first CT systems and then transform them into the DT case. See References [C5,
C7]. This is the standard approach in designing DT IIR systems.
In view of the preceding discussion, we cannot study only DT signals and systems. Thus
we study both cases in this text. Because of our familiarity with CT physical phenomena and
examples, we study first the CT case. The only exception is to use DT systems with finite
memory to introduce some systems’ concepts because there are no simple CT counterparts.
3.7 MATLAB
MATLAB runs on a number of windows. When we start a MATLAB program in MS Windows,
a window together with a prompt “>>” will appear. It is called the command window. It is
the primary window where we interact with MATLAB.
A sequence of numbers, such as -0.6, -0.3, 0, 0.3, 0.6, 0.9, 6/5, can be expressed in
MATLAB as a row vector by typing after the prompt >> as
>> t=[-0.6,-0.3,0,0.3,0.6,0.9,6/5]
or
>> t=[-0.6 -0.3 0 0.3 0.6 0.9 6/5]
Note that the sequence of entries must be bounded by a pair of brackets and entries can be
separated by commas or spaces. If we type “enter” at the end of the line, MATLAB will
execute and store it in memory, and then display it on the monitor as
t= -0.6000 -0.3000 0 0.3000 0.6000 0.9000 1.2000
From now on, it is understood that every line will be followed by “enter”. If we type
>> t=[-0.6 -0.3 0 0.3 0.6 0.9 1.2];
then MATLAB will execute and store it in memory but will not display it on the monitor.
In other words, the semicolon at the end of a statement suppresses the display. Note that we
have named the sequence t for later use.
The MATLAB function a:b:c generates a sequence of numbers starting from a, adding
b repeatedly until c but not beyond. For example, 0:1:5 generates {0, 1, 2, 3, 4, 5}. The
functions 0:2:5 and x=0:3:5 generate, respectively, {0, 2, 4} and x = {0, 3}. If b=1, a:b:c can
be reduced to a:c. Using n=-2:4, the sequence t can be generated as t=0.3*n or t=n*0.3.
It can also be generated as t=-0.6:0.3:1.2.
Suppose we want to compute the values of x(t) = 4e−0.3t cos 4t at t = 0 : 0.3 : 1.5. Typing
in the command window
>> t=0:0.3:1.5;
>> x=4*exp(-0.3*t).*cos(4*t)
will yield
x = 4.0000  1.3247  -2.4637  -2.7383  0.2442  2.4489
These are the values of x(t) at those t, denoted by x. There are six entries in x. MATLAB
automatically assigns the entries by x(k), with k=1:6. We call the integer k the internal
index. Internal indices start from 1 and cannot be zero or negative. If we continue to type
x(0), then an error message will occur because the internal index cannot be 0. If we type
>> x(2),x(7)
then it will yield 1.3247 and the error message “Index exceeds matrix dimensions”. Typing
>> x([1:2:6])
will yield
4.0000 − 2.4637 0.2442
They are the values of x(1), x(3), and x(5). Note that two or more commands or statements
can be typed in one line separated by commas or semicolons. Those followed by commas will
be executed and displayed; those followed by semicolons will be executed but not displayed.
We have used * and .* in expressing x=4*exp(-0.3*t).*cos(4*t). The multiplication ‘*’,
and (right) division ‘/’ in MATLAB (short for MATrix LABoratory) are defined for matrices.
See Section 3.4. Adding a dot such as .* or ./, an operation becomes element by element. For
example, exp(-0.3*t) and cos(4*t) with t=0:0.3:1.5 are both 1 × 6 row vectors, and they
cannot be multiplied in matrix format. Typing .* will change the multiplication into element
by element. All operations in this section involve element by element and all * and / can be
replaced by .* and ./. For example, typing in the command window
>> t=0:0.3:1.5;x=4*exp(-0.3*t)*cos(4*t)
will yield an error message. Typing
>> t=0:0.3:1.5;x=4.*exp(-0.3.*t).*cos(4.*t)
will yield the six values of x. Note that the addition ‘+’ and subtraction ‘-’ of two matrices
are defined element by element. Thus there is no need to introduce ‘.’ into ‘+’ and ‘-’, that
is, ‘.+’ and ‘.-’ are not defined.4
Next we use t and x to generate some plots. MATLAB can generate several plots in a
figure. The function subplot(m,n,k) generates m rows and n columns of plots with k=1:mn
indicating its position, counting from left to right and from top to bottom. For example,
subplot(2,3,5) generates two rows and three columns of plots (for a total of 2 × 3 = 6 plots)
and indicates the 5th or the middle plot of the second row. If there is only one plot, there is
no need to use subplot(1,1,1).
If a program consists of several statements and functions, it is more convenient to develop
a program as a file. Clicking the new file icon on the command window toolbar will open an
edit window. We type in the edit window the following
and then save it with the file name f35 (for Figure 3.5). This is called an m-file because
an extension .m is automatically attached to the file name. All m-files reside in the
subdirectory “work” of MATLAB. For easy reference, we call the file Program 3.1. Now we
explain the program. All text after a percent sign (%) is taken as a comment statement and is
ignored by MATLAB. Thus it can be omitted. The next two lines generate x(t) at t = n ∗ T .
Note the use of .∗ for all multiplications in x. The function subplot(1,2,k) generates one
row and two columns of boxes as shown in Figure 3.5. The third index k denotes the k-th
box, counting from left to right. The first plot or subplot(1,2,1) is titled (a) which is
4 Try in MATLAB >> 2.+3 and >> 2.0.+3. The former yields 5; the latter yields an error message. Why?
Figure 3.5: (a) Plot of a DT signal against time index. (b) Against time.
generated by the function title(’(a)’). The second plot or subplot(1,2,2) is titled (b).
The function stem(p,q,’linetype’), where p and q are two sequences of numbers of the
same length, plots stems with height specified by q at the horizontal positions specified by
p with ends of stems specified by the linetype. If ’linetype’ is missing, the default is small
hollow circle. For example, Figure 3.5(a) is generated by the function stem(n,x,’.’) which
uses stems with dots at their ends to plot x against its time index n as shown. Figure 3.5(b)
is generated by stem(t,x) which uses hollow circles (default) to plot x against time t. We
label their vertical or y-axes as ’Amplitude’ by typing ylabel(’Amplitude’) and label their
horizontal or x-axes by typing xlabel(’text’) where ‘text’ is ‘Time index’ or ’Time (s)’.
We also type axis square. Without it, the plots will be of rectangular shape.
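The element-wise generation of x(t) at t = n*T can be sketched in Python (a hypothetical analogue of the first lines of Program 3.1; the signal 4e^{−0.3t} cos 4t and the values T = 0.3, n = 0:20 are assumptions read off Figure 3.5 and Program 3.3, not the original listing):

```python
import math

# Assumed signal and sampling values (read off Figure 3.5 and Program 3.3):
T = 0.3
n = list(range(21))                   # time indices 0, 1, ..., 20
t = [k * T for k in n]                # sampling instants t = n*T, up to 6 s
x = [4 * math.exp(-0.3 * tk) * math.cos(4 * tk) for tk in t]   # element-wise, like .* in MATLAB

print(len(x), x[0])
```

The element-wise products here are what the MATLAB .* operator performs in one statement.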
Now if we type in the command window the following
>> f35
then the plots as shown in Figure 3.5 will appear in a figure window. If the program contains
errors, then error messages will appear in the command window and no figure window will
appear. Note that the program can also be run from the edit window by clicking ’Debug’. If
there are errors, we must correct them and repeat the process until a figure appears. If the
figure is not satisfactory, we may modify the program as we will discuss shortly. Thus the use
of an edit window is convenient in generating a figure.
Once a satisfactory figure is generated in a figure window, it can be printed as a hard copy
by clicking the printer icon on the figure window. It can also be saved as a file, for example,
an eps file, by typing in the command window
>> print -deps f35.eps
where we have named the file f35.eps.
We next discuss how to generate CT signals from DT signals. This requires interpolation
between sampling instants. The MATLAB function stairs(t,x) carries out zero-order hold.
The function plot(t,x,’linetype’) carries out linear interpolation5 using the specified line
type. Note that the ‘linetype’ is limited to solid line (default, no linetype), dotted line (‘:’),
dashed line (‘- -’), and dash-and-dotted line (‘-.’). If the ‘linetype’ is ‘o’ or ’.’, then no
interpolation will be carried out. We now modify Program 3.1 as
Figure 3.6: (a) A CT signal generated using zero-order hold. (b) Using linear interpolation.
(c) Improvement of (a). (d) Improvement of (b).
subplot(2,2,1)
stairs(t,x),title(’(a)’)
ylabel(’Amplitude’),xlabel(’Time (s)’)
subplot(2,2,2)
plot(t,x,’:’),title(’(b)’)
ylabel(’Amplitude’),xlabel(’Time (s)’)
subplot(2,2,3)
stairs(t,x),title(’(c)’)
ylabel(’Amplitude’),xlabel(’Time (s)’)
axis([0 6 -4 5])
hold on
plot([0 6],[0 0])
subplot(2,2,4)
plot(t,x,’:’,[-1 6],[0 0],[0 0],[-5 5]),title(’(d)’)
ylabel(’Amplitude’),xlabel(’Time (s)’)
axis([-1 6 -5 5])
This program will generate two rows and two columns of plots using subplot(2,2,k) with
k=1:4. The first plot, titled (a), is generated by stairs(t,x) which carries out zero-order
hold as shown in Figure 3.6(a). The second plot, titled (b), is generated by plot(t,x,’:’)
which carries out linear interpolation using dotted lines as shown in Figure 3.6(b). Note
that plot(t,x) without the ’linetype’ carries out linear interpolation using solid lines. See
Problems 3.28 and 3.29.
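The two interpolation schemes can be sketched in Python (a hypothetical stand-in; stairs and plot do the actual drawing in MATLAB, here we only compute the interpolated values):

```python
def zero_order_hold(t, ts, xs):
    """Value at time t of the staircase through samples (ts, xs), as stairs() draws it."""
    held = xs[0]
    for tk, xk in zip(ts, xs):
        if tk <= t:
            held = xk          # hold the most recent sample
        else:
            break
    return held

def linear_interp(t, ts, xs):
    """Value at time t of the broken line through samples (ts, xs), as plot() draws it."""
    for k in range(len(ts) - 1):
        if ts[k] <= t <= ts[k + 1]:
            a = (t - ts[k]) / (ts[k + 1] - ts[k])
            return (1 - a) * xs[k] + a * xs[k + 1]
    raise ValueError("t outside the sample range")

ts = [0.0, 0.3, 0.6]
xs = [2.0, -1.0, 1.5]
print(zero_order_hold(0.45, ts, xs), linear_interp(0.45, ts, xs))
```

Between the samples at 0.3 and 0.6 the staircase holds the value −1, while the broken line passes through the midpoint value 0.25.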
The ranges of the horizontal (x-) and vertical (y-) axes in Figures 3.6(a) and (b) are
automatically selected by MATLAB. We can specify the ranges by typing axis([xmin xmax
ymin ymax]). For example, we added axis([0 6 -4 5]) for subplot(2,2,3) or Figure
3.6(c). Thus its horizontal axis ranges from 0 to 6 and its vertical axis ranges from −4 to
5 as shown. In Figure 3.6(c) we also draw the horizontal axis by typing plot([0 6],[0 0]).
Because the new plot (horizontal axis) is to be superposed on the original plot, we must type
“hold on”. Without it, the new plot will replace the original plot.
The ranges of the horizontal (x-) and vertical (y-) axes in Figure 3.6(d) are specified by
axis([-1 6 -5 5]). The dotted line is generated by plot(t,x,’:’); the horizontal axis by
plot([-1 6],[0 0]); the vertical axis by plot([0 0],[-5 5]). The three plot functions
can be combined into one as in the program. In other words, the plot function may contain
one or more pairs such as plot(x1, y1, x2, y2, ...) with or without ’linetype’ and each
pair must be of the same length.
Next we list the program that generates Figure 2.6:
%Program 3.3(f26.m)
L=5;t=0:0.0001:L;
x=4*exp(-0.3*t).*cos(4*t);
T=0.3;n=0:L/T;
xT=4*exp(-0.3*n*T).*cos(4*n*T);
subplot(3,1,1)
plot(t,x,’:’,[0 5],[0 0]),title(’(a)’)
hold on
stairs(n*T,xT)
T=0.1;n=0:L/T;
xT=4*exp(-0.3*n*T).*cos(4*n*T);
subplot(3,1,2)
plot(t,x,’:’,[0 5],[0 0]),title(’(b)’)
hold on
stairs(n*T,xT)
ylabel(’Amplitude’)
T=0.001;n=0:L/T;
xT=4*exp(-0.3*n*T).*cos(4*n*T);
subplot(3,1,3)
plot(t,x,’:’,[0 5],[0 0]),title(’(c)’)
hold on
stairs(n*T,xT)
xlabel(’Time (s)’)
The first two lines compute the amplitudes of x(t) from t = 0 to t = 5 with increment 0.0001.
The next two lines compute the samples of x(t) or x(nT ) with the sampling period T = 0.3
and the time index from n = 0 up to the integer equal to or less than L/T , which can be
generated in MATLAB as n=0:L/T. The program generates three rows and one column of
plots as shown in Figure 2.6. The first line after subplot(3,1,1) plots in Figure 2.6(a) x(t)
for t in [0, 5] with a dotted line and with linear interpolation. It also plots the horizontal axis
specified by the pair [0 5], [0 0] with a solid line (default). In order to superpose the staircase
function specified by stairs(n*T, xT) on Figure 2.6(a), we type “hold on”. Without it, the
latter plot will replace, instead of superpose on, the previous plot. The programs to generate
Figures 2.6(b) and (c) are identical to the one for Figure 2.6(a) except that T = 0.3 is replaced
by T = 0.1 and then by T = 0.001. In addition, we add “Amplitude” to the vertical axis in
Figure 2.6(b) by typing ylabel(’Amplitude’) and add “Time (s)” to the horizontal axis in
Figure 2.6(c) by typing xlabel(’Time (s)’). In conclusion, MATLAB is a simple and yet
powerful tool in generating figures.
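The sample count implied by n=0:L/T can be sketched in Python (an analogue, not MATLAB; the colon operator stops at the largest index n with n ≤ L/T):

```python
import math

L = 5
counts = []
for T in (0.3, 0.1, 0.001):
    n_max = math.floor(L / T)      # largest integer n with n <= L/T
    n = range(n_max + 1)           # analogue of MATLAB's n = 0:L/T
    counts.append(len(n))
print(counts)
```

With T = 0.3 the index runs only up to 16, since 17 · 0.3 would exceed L = 5.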
3.7. MATLAB 51
and save it as f12.m. The first line uses the function wavread to load the MS file into
MATLAB. Its output is called x which consists of a vertical string of entries. The number
of entries, N , can be obtained by using the function length as shown in the second line.
These N entries are the samples of the sound at n*T with n = 0 : N − 1. The function
plot connects these samples by linear interpolation as shown in Figure 1.2(a). We plot its
small segment from 0.6 to 1.01 seconds in Figure 1.2(b). This is achieved by typing the function
axis([xmin xmax ymin ymax]) as axis([0.6 1.01 -0.2 0.2]). We see that
the function axis can be used to zoom in on the x-axis. Figure 1.2(c) zooms in on an even smaller
segment of Figure 1.2(a) using the function axis([0.9 1.0097 -0.2 0.2]).
In the control window, if we type
>> f12
then N=10600 will appear in the control window and Figure 1.2 will appear in a figure window.
Note that in the second line of the program, we use a comma after N=length(x), thus N is
not suppressed and appears in the control window. This is how Figure 1.2 is generated.
The sound “signals and systems” in Figure 1.1 was recorded using PCM, 24 kHz, 8 Bit,
and mono. We list in the following the program that generates Figure 1.1:
subplot(2,1,2)
plot(n*T,x),title(’(b) Segment of (a)’)
axis([1.375 1.38 -0.2 0.2])
xlabel(’Time (s)’),ylabel(’Amplitude’)
The first line loads the MS file, called f11.wav into MATLAB. The output is called xb. The
number of data in xb is 66000. Because the sampling frequency is 24000 Hz, xb contains
roughly 3 seconds of data. However the sound “signals and systems” lasts only about 2.3
seconds. Thus xb contains some unneeded data. We use x=xb([2001:53500]) to pick out
the part of xb whose internal indices run from 2001 up to 53500, with increment 1. This
range is obtained by trial and error. The rest of the program is similar to Program 3.4.
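The wav-loading step can be sketched with Python’s standard wave module (a hypothetical analogue of wavread; the file here is synthesized on the fly rather than recorded):

```python
import io
import math
import wave

fs = 24000                          # the sampling frequency quoted for Figure 1.1
buf = io.BytesIO()                  # stands in for a .wav file on disk
with wave.open(buf, "wb") as w:
    w.setnchannels(1)               # mono
    w.setsampwidth(1)               # 8 bit (unsigned samples, 0..255)
    w.setframerate(fs)
    w.writeframes(bytes(int(128 + 100 * math.sin(2 * math.pi * 440 * k / fs))
                        for k in range(fs)))   # one second of a 440 Hz tone

buf.seek(0)
with wave.open(buf, "rb") as w:
    N = w.getnframes()              # analogue of N = length(x)
print(N, N / fs)                    # number of samples and the duration in seconds
```

The number of samples divided by the sampling frequency gives the duration in seconds, which is how the 66000-sample recording above is seen to last roughly 3 seconds.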
The piano middle C in Figure 1.3 was recorded using PCM, 22.05 kHz, 8 Bit, and mono.
We list in the following the program that generates Figure 1.3:
Problems
3.1 Plot A = −3 − j4 on a complex plane and then use a ruler and a protractor to express A
in polar form. It is important that when you draw a plot, you select or assign a scale.
3.2 Express the following complex numbers in polar form. Use √2 = 1.4 and draw mentally
each vector on a complex plane.
3.9 The unit matrix In is defined as an n × n square matrix with all its diagonal elements
equal to 1 and the rest zero, such as
I3 = [1 0 0; 0 1 0; 0 0 1]
3.11 Verify
∑_{n=0}^{N} an bn = [a0 a1 · · · aN][b0 b1 · · · bN]′
x1 = 3y1 − 5y2 − 2u
x2 = 0.5y1 + 0.2y2
and
∫_{t=0}^{∞} x(t)e^{−st} dt
3.15 If x(t) ≥ 0, for all t, is there any difference between whether it is integrable or absolute
integrable?
3.17 If a sequence is absolutely summable, it is argued in Section 3.5 that the sequence
approaches zero as n → ∞. Is it true that if a sequence is summable, then the sequence
approaches zero as n → ∞? If not, give a counterexample. Note that ‘absolutely’ is
missing in the second statement.
3.18* Use the integration formula
∫_{t=1}^{∞} (1/t²) dt = [−1/t]_{t=1}^{∞} = 1
to show
1/2² + 1/3² + · · · < 1
Thus the CT signal in Figure 3.4 is absolutely integrable.
3.19* For n ≥ 2, we have 1/n < 1. Thus we have
∑_{n=1}^{∞} (1/n) < ∑_{n=1}^{∞} 1 = ∞
Can we conclude that ∑_{n=1}^{∞} (1/n) < M , for some finite M ? Show that
∑_{n=1}^{∞} (1/n) = ∞    (3.25)
Can we conclude that Ed is finite? Can you give the reason that we substitute |x[n]| <
M1 only once in (3.22)?
3.22 Is the pulse in Figure 2.9(a) absolutely integrable for all a > 0? Verify that its total
energy approaches infinity as a → 0. Thus it is not possible to generate an impulse in
practice.
3.23* Let us define x(t) = n⁴t, for t in [0, 1/n³], for any integer n ≥ 2. It is a right triangle
with base width 1/n³ and height n. Verify
∫_{t=0}^{1/n³} [x(t)]² dt = 1/(3n)
Use this and the result in Problem 3.19 to verify that the signal in Figure 3.4 has infinite
energy.
7 Study the conclusion of a problem with an asterisk. Do the problem only if you are interested.
3.24 Verify that x(t) = 1/t, for t in [1, ∞) is squared absolutely integrable but not absolutely
integrable in [1, ∞). Thus squared absolutely integrable does not imply absolutely
integrable.
3.25 Is the lemma in Section 3.6 still valid if we delete the preamble “Let p be an integer.”?
3.26 In MATLAB, if N = 5, what are n = 0 : 0.5 : N/2? What are n = 0 : N/2?
3.27 Given n = 0 : 5 and x = [2 −1 1.5 3 4 0], type in MATLAB stem(n,x), stem(x,n),
stem(x,x), stem(n,x,’.’), and stem(n,x,’fill’). Study their results.
3.28 Given n = 0 : 5 and x = [2 −1 1.5 3 4 0], type in MATLAB plot(n,x),
plot(n,x,’:’), plot(n,x,’--’), and plot(n,x,’-.’). Study their results. Do they
carry out linear interpolation?
3.29 Given n = 0 : 5 and x = [2 −1 1.5 3 4 0], type in MATLAB plot(n,x,’o’),
plot(n,x,’.’) and study their results. Do they carry out linear interpolation? What
is the result of plot(n,x,n,x,’o’)?
3.30 Given m = 0 : 2 and x = [2 −1 1.5 3 4 0]. In MATLAB what are the results of
typing plot(m,x), plot(m,x(m)), and plot(m,x(m+1))?
3.31 Program 3.1 plots one row of two plots. Modify it so that the two plots are in one
column.
Chapter 4
Frequency Spectra of CT and DT Signals
4.1 Introduction
In this chapter, we introduce the concept of frequency spectra or, simply, spectra for CT
signals. This concept is needed in discussing bandwidths of signals in communication, and in
determining a sampling period if a CT signal is to be processed digitally. It is also needed in
specifying a system to be designed. Thus the concept is important in engineering.
This chapter begins with the definition of frequency using a spinning wheel. Using a
spinning wheel, we can define easily positive and negative frequencies. We then introduce the
Fourier series for periodic signals and then the Fourier transform for periodic and aperiodic
signals. We give reasons for downplaying the former and for stressing the latter. The Fourier
transform of a CT signal, if it exists, is defined as the frequency spectrum.
Frequency spectra of most, if not all, real-world signals, such as those in Figures 1.1 through
1.4, cannot be expressed in closed form and cannot be computed analytically. Thus we
downplay their analytical computation. Instead we stress their general properties and some
concepts which are useful in engineering.
The last part of this chapter discusses the DT counterpart of what has been discussed
for CT signals. We first show that the frequency of DT sinusoids cannot be defined directly
and is defined through CT sinusoids. We then discuss sampling of pure sinusoids which leads
naturally to aliased frequencies and a simplified version of the sampling theorem. Finally we
apply the (CT) Fourier transform to modified DT signals to yield frequency spectra of DT
signals.
58 CHAPTER 4. FREQUENCY SPECTRA OF CT AND DT SIGNALS
Figure 4.1: (a) Mark A spins on a wheel. (b) Plot of A against time.
and the rotation of mark A can be expressed as x(t) = e^{jω0t}. We call e^{jω0t} a complex
exponential function. Because one cycle has 2π radians, we have Pω0 = 2π or
P = 2π/ω0
If we use f0 to denote the frequency, then we have f0 = 1/P with unit in Hz (cycles per
second). One cycle has 2π radians, thus the frequency can also be defined as
ω0 := 2πf0 = 2π/P    (4.1)
with unit in rad/s. In mathematical equations, all frequencies must be expressed in the unit
of rad/s.
When A rotates on a wheel, the independent variable t appears as a parameter on the
wheel. Now we plot the rotation of A against time as shown in Figure 4.1(b). Because A
is represented by a complex number, its plotting against time requires three dimensions as
shown. Such a plot is difficult to visualize. Using Euler’s formula
we can plot the imaginary part sin ω 0 t against time and the real part cos ω 0 t against time as
shown in Figure 4.2 with ω 0 = 3 (rad/s). They are respectively the projections of ejω0 t on the
imaginary and real axes. They are much easier to visualize. They have period 2π/3 = 2.09
or they repeat every 2.09 seconds.
If ω0 > 0, then e^{jω0t} or A in Figure 4.1(a) rotates in the counterclockwise direction and
e^{−jω0t} or A rotates in the clockwise direction. Thus we will encounter both positive and
negative frequencies. Physically there is no such thing as a negative frequency; the negative
sign merely indicates its direction of rotation. In theory, the wheel can spin as fast as desired.
Thus we have
Frequency range of ejω0 t = (−∞, ∞) (4.3)
To conclude this section, we mention that if we use sin ω0t or cos ω0t to define the frequency,
then the meaning of negative frequency is not clear because sin(−ω0t) = − sin ω0t =
sin(ω0t + π) and cos(−ω0t) = cos ω0t. Thus it is best to use a spinning wheel or e^{jω0t} to
define the frequency. However most discussion concerning ejωt is directly applicable to sin ω 0 t
and cos ω 0 t. For convenience, we call ejωt , sin ωt, and cos ωt pure sinusoids.
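The rotation picture can be checked numerically in Python (a sketch, not part of the text’s MATLAB programs):

```python
import cmath

w0 = 3.0                        # rad/s, as in Figure 4.2
dt = 0.1
ccw = cmath.exp(1j * w0 * dt)   # positive frequency: counterclockwise rotation
cw = cmath.exp(-1j * w0 * dt)   # negative frequency: clockwise rotation
P = 2 * cmath.pi / w0           # period 2*pi/3, about 2.09 s

print(cmath.phase(ccw) > 0, cmath.phase(cw) < 0)
print(abs(cmath.exp(1j * w0 * (dt + P)) - ccw) < 1e-12)   # A returns after one period
```

The sign of the phase change over a small time step gives the direction of rotation, and advancing by one period 2π/ω0 returns A to the same point on the wheel.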
Figure 4.2: Plots of sin 3t and cos 3t against time.
A CT signal x(t) is said to be periodic with period P if x(t) = x(t + P ) for all
t. The smallest such P , denoted by P0 , is called the fundamental period and ω0 = 2π/P0 is
called the fundamental frequency of x(t). If x(t) is periodic with fundamental period P0 , then
it will repeat itself every P0 seconds. If a signal never repeats itself or P0 = ∞, then it is said
to be aperiodic. For example, the signals in Figures 1.1 through 1.3 are aperiodic. The clock
signal shown in Figure 2.5 is periodic with fundamental period P as shown. The signals sin 3t
and cos 3t shown in Figure 4.2 are periodic with fundamental period 2π/3 = 2.09 seconds
and fundamental frequency ω 0 = 3 rad/s. For pure sinusoids, there is no difference between
frequency and fundamental frequency.
Let xi(t), for i = 1, 2, be periodic with fundamental period P0i and fundamental frequency
ω0i = 2π/P0i. Is their linear combination x(t) = a1x1(t) + a2x2(t), for any constants a1 and
a2, periodic? The answer is affirmative if x1(t) and x2(t) have a common period. That is, if
there exist integers k1 and k2 such that k1P01 = k2P02 or
P01/P02 = ω02/ω01 = k2/k1    (4.4)
then x(t) is periodic. In words, if the ratio of the two fundamental periods or fundamental
frequencies can be expressed as a ratio of two integers, then x(t) is periodic. We give examples.
Example 4.2.1 Is x(t) = sin 3t − cos πt periodic? The periodic signal sin 3t has fundamental
frequency 3 and cos πt has fundamental frequency π. Because π is an irrational number, 3/π
cannot be expressed as a ratio of two integers. Thus x(t) is not periodic.2
Example 4.2.2 Consider
x(t) = 1.6 sin 1.5t + 3 cos 2.4t (4.5)
Clearly we have
ω02/ω01 = 2.4/1.5 = 24/15 = 8/5 =: k2/k1    (4.6)
Thus the ratio of the two fundamental frequencies can be expressed as a ratio of two integers
24 and 15, and x(t) is periodic. 2
If x(t) in (4.5) is periodic, what is its fundamental period or fundamental frequency? To
answer this question, we introduce the concept of coprimeness. Two integers are said to be
coprime, if they have no common integer factor other than 1. For example, 24 and 15 are not
coprime, because they have integer 3 as their common factor. After canceling the common
Figure 4.3: Plot of the signal in (4.5) for t in [−10, 60].
factor, the pair 8 and 5 becomes coprime. Now if we require k2 and k1 in (4.4) to be coprime,
then the fundamental period of x(t) is P0 = k1P01 = k2P02 and the fundamental frequency is
ω0 = 2π/P0 = 2π/(k1P01) = 2π/(k2P02) = ω01/k1 = ω02/k2    (4.7)
We discuss how to use (4.7) to compute the fundamental frequency.
A real number α is called a divisor of β if β/α is an integer. Thus β has divisors β/n, for
every positive integer n. For example, 2.4 has the following divisors
2.4, 1.2, 0.8, 0.6, 0.48, 0.4, 0.3, 0.2, 0.1, 0.01, . . .    (4.8)
and 1.5 has the following divisors
1.5, 0.75, 0.5, 0.375, 0.3, 0.25, 0.1, 0.01, 0.001, . . .    (4.9)
From (4.8) and (4.9), we see that 2.4 and 1.5 have many common divisors. The largest one
is called the greatest common divisor (gcd). From (4.8) and (4.9), we see that the gcd of 2.4
and 1.5 is 0.3.1
Because k1 and k2 in (4.7) are integers, ω 0 is a common divisor of ω 01 and ω 02 . Now
if k1 and k2 are coprime, then ω 0 is the gcd of ω 01 and ω 02 . For the signal in (4.5), its
fundamental frequency is the gcd of 1.5 and 2.4 or ω 0 = 0.3 and its fundamental period is
P0 = 2π/0.3 = 20.9. We plot in Figure 4.3 the signal in (4.5) for t in [−10, 60]. From the
plot, we can see that its fundamental period is indeed 20.9 seconds.
Indeed, the periodic signal in Figure 4.3 is a linear combination of its fifth and eighth
harmonics. It contains neither fundamental nor dc component.
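The gcd computation can be sketched in Python with exact fractions (a sketch; gcd_rational is a hypothetical helper, not a standard function):

```python
import math
from fractions import Fraction

def gcd_rational(a, b):
    """gcd of two rationals: gcd of the numerators over lcm of the denominators."""
    a, b = Fraction(a), Fraction(b)
    return Fraction(math.gcd(a.numerator, b.numerator),
                    math.lcm(a.denominator, b.denominator))

w01, w02 = Fraction("1.5"), Fraction("2.4")   # the two fundamental frequencies in (4.5)
w0 = gcd_rational(w01, w02)                   # greatest common divisor, 3/10
P0 = 2 * math.pi / w0                         # fundamental period, about 20.9 s
print(w0, w01 / w0, w02 / w0, round(P0, 1))
```

The ratios ω01/ω0 = 5 and ω02/ω0 = 8 confirm that the two terms of (4.5) are the fifth and eighth harmonics.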
1 The gcd of two real numbers can also be obtained using the Euclidean algorithm with integer quotients.
4.2. CT PERIODIC SIGNALS 61
Equation (4.10) is one possible form of Fourier series. Using sin(θ + π/2) = sin(θ + 1.57) =
cos θ, we can write (4.10) as
This is another form of Fourier series. Using cos(θ − π/2) = sin θ, we can write (4.10) also as
where m is an integer ranging from −∞ to ∞ and is called the frequency index. The numbers
cm , called Fourier coefficients, can be computed from
cm = (1/P0) ∫_{t=−P0/2}^{P0/2} x(t)e^{−jmω0t} dt = (1/P0) ∫_{t=0}^{P0} x(t)e^{−jmω0t} dt    (4.15)
Note that the preceding integration can be carried out over any one period. For example, the
periodic signal in Figure 4.3 with fundamental frequency ω 0 = 0.3 has its complex Fourier
series in (4.13) with c5 = −0.8j, c8 = 1.5, c−5 = 0.8j, c−8 = 1.5, and cm = 0 for all integers
m other than ±5 and ±8. Note that these coefficients can be computed using (4.15). Because
x(t) is already expressed in terms of pure sinusoids, the cm can be obtained more easily using
Euler’s equation as we did in (4.13).
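Equation (4.15) can be checked numerically in Python (a sketch; the Riemann sum below is accurate here because the integrand is smooth and periodic):

```python
import cmath
import math

w0 = 0.3
P0 = 2 * math.pi / w0

def x(t):                          # the signal in (4.5)
    return 1.6 * math.sin(1.5 * t) + 3 * math.cos(2.4 * t)

def fourier_coeff(m, steps=20000):
    """Left Riemann sum of (4.15) over one fundamental period."""
    dt = P0 / steps
    s = sum(x(k * dt) * cmath.exp(-1j * m * w0 * k * dt) for k in range(steps))
    return s * dt / P0

c5, c8 = fourier_coeff(5), fourier_coeff(8)
print(round(c5.imag, 6), round(c8.real, 6))
```

The computed values agree with the coefficients obtained from Euler’s formula: c5 = −0.8j and c8 = 1.5, with all other cm essentially zero.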
Example 4.2.3 (Sampling function) Consider the periodic signal shown in Figure 4.4. It
consists of a sequence of impulses with weight 1 as shown. The sequence extends all the way
to ±∞. The impulse at t = 0 can be denoted by δ(t), the impulses at t = ±T can be denoted
by δ(t ∓ T ) and so forth. Thus the signal can be expressed as
r(t) := ∑_{n=−∞}^{∞} δ(t − nT )    (4.16)
Figure 4.4: The sampling function: a train of impulses with weight 1 at t = nT for all integers n.
Using (4.15) and the sifting property of impulses, we have cm = 1/T for all m. Thus the complex Fourier series of the periodic signal r(t) is, with ωs = 2π/T ,
r(t) = (1/T ) ∑_{m=−∞}^{∞} e^{jmωst}    (4.17)
Thus the periodic signal in Figure 4.4 can be expressed as in (4.16) and as in (4.17) or
r(t) = ∑_{n=−∞}^{∞} δ(t − nT ) = (1/T ) ∑_{m=−∞}^{∞} e^{jmωst}    (4.18)
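The way the sum of pure sinusoids in (4.17) builds impulses can be sketched in Python by truncating the sum to |m| ≤ M (a sketch, with T = 1 assumed):

```python
import cmath
import math

T = 1.0                       # sampling period, assumed for the sketch
ws = 2 * math.pi / T

def r_M(t, M=50):
    """Partial sum of (4.17), truncated to |m| <= M."""
    return sum(cmath.exp(1j * m * ws * t) for m in range(-M, M + 1)).real / T

# At an impulse location all 2M+1 terms line up; between impulses they
# nearly cancel.
print(r_M(0.0), round(r_M(0.5), 6))
```

At t = 0 (and at every multiple of T) the partial sum peaks at (2M + 1)/T = 101, while halfway between impulses it stays near 1; as M grows the peaks grow without bound, as an impulse train requires.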
where we have defined X(mω0 ) := P0 cm . This is justified by the fact that cm is associated
with frequency mω0 . We then use X(mω0 ) and ω0 = 2π/P0 or 1/P0 = ω0 /2π to write (4.14)
as
x(t) = (1/P0) ∑_{m=−∞}^{∞} P0 cm e^{jmω0t} = (1/2π) ∑_{m=−∞}^{∞} X(mω0) e^{jmω0t} ω0    (4.20)
4.3. FREQUENCY SPECTRA OF CT APERIODIC SIGNALS 63
A periodic signal with period P0 becomes aperiodic if P0 approaches infinity. Define ω := mω0 .
If P0 → ∞, then we have ω0 = 2π/P0 → 0. Thus, as P0 → ∞, ω = mω0 becomes a continuous
variable and ω0 can be written as dω. Furthermore, the summation in (4.20) becomes an
integration. Thus the modified Fourier series pair in (4.19) and (4.20) becomes, as P0 → ∞,
X(ω) = F[x(t)] := ∫_{t=−∞}^{∞} x(t)e^{−jωt} dt    (4.21)
and its inverse
x(t) = (1/2π) ∫_{ω=−∞}^{∞} X(ω)e^{jωt} dω    (4.22)
for all t in (−∞, ∞). The set of two equations is the CT Fourier transform pair. X(ω)
is the Fourier transform of x(t) and x(t) is the inverse Fourier transform of X(ω). Before
proceeding, we mention that the CT Fourier transform is a linear operator in the sense that
F[a1x1(t) + a2x2(t)] = a1F[x1(t)] + a2F[x2(t)]
for any constants a1 and a2. This can be directly verified from the definition in (4.21).
The Fourier transform is applicable to real- or complex-valued x(t) defined for all t in
(−∞, ∞). However all examples will be real valued and defined only for positive time unless
stated otherwise.
Example 4.3.1 Consider x(t) = e^{2t}, for t ≥ 0. Because e^{2t}
grows unbounded as t → ∞, the value of e^{(2−jω)t} at t = ∞ is not defined. Thus the Fourier
transform of e^{2t} is not defined or does not exist.2
Example 4.3.2 Consider x(t) = 1.5δ(t − 0.8). It is an impulse at t0 = 0.8 with weight 1.5.
Using the sifting property of impulses, we have
X(ω) = ∫_{t=−∞}^{∞} 1.5δ(t − 0.8)e^{−jωt} dt = 1.5 e^{−jωt}|_{t=0.8} = 1.5e^{−j0.8ω}    (4.23)
Example 4.3.3 Consider
x(t) = 3e^{−0.2t}, for t ≥ 0    (4.24)
plotted in Figure 4.5(a). It vanishes exponentially to zero as shown. Its Fourier transform
can be computed as
Figure 4.5: (a) Positive-time function 3e−0.2t . (b) Its magnitude spectrum (solid line) and
phase spectrum (dotted line).
X(ω) = ∫_{t=−∞}^{∞} x(t)e^{−jωt} dt = ∫_{t=0}^{∞} 3e^{−0.2t} e^{−jωt} dt
     = [3/(−0.2 − jω)] e^{(−0.2−jω)t}|_{t=0}^{∞} = [3/(−0.2 − jω)] [e^{(−0.2−jω)t}|_{t=∞} − 1]
     = [3/(−0.2 − jω)] (0 − 1) = 3/(0.2 + jω)    (4.25)
for all ω in (−∞, ∞). This is the Fourier transform of the positive-time signal 3e^{−0.2t}. 2
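The closed form (4.25) can be checked against a numerical integration in Python (a sketch; the integration range and step are ad-hoc choices):

```python
import cmath

def X_numeric(w, t_end=200.0, steps=100000):
    """Midpoint-rule integration of (4.21) for x(t) = 3*exp(-0.2t), t >= 0.
    The tail beyond t_end = 200 contributes about exp(-40), negligible."""
    dt = t_end / steps
    return sum(3 * cmath.exp((-0.2 - 1j * w) * (k + 0.5) * dt)
               for k in range(steps)) * dt

w = 1.0
exact = 3 / (0.2 + 1j * w)            # (4.25)
print(abs(X_numeric(w) - exact) < 1e-3)
```

The numerical value agrees with 3/(0.2 + jω) to well within the stated tolerance.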
Example 4.3.1 shows that not every signal has a Fourier transform. In general, if a signal
grows to ∞ or −∞ as t → ±∞, then its Fourier transform does not exist. Sufficient conditions
for the existence of a Fourier transform are
1. x(t) is of bounded variation and
2. x(t) is absolutely integrable in (−∞, ∞)
See Sections 2.5.2 and 3.5. They are called the Dirichlet conditions.2 If a signal x(t) meets
the Dirichlet conditions, then its Fourier transform X(ω) exists. Moreover X(ω) is bounded
and is a continuous function of ω. Indeed, (4.21) implies
|X(ω)| = |∫_{t=−∞}^{∞} x(t)e^{−jωt} dt| ≤ ∫_{t=−∞}^{∞} |x(t)||e^{−jωt}| dt = ∫_{t=−∞}^{∞} |x(t)| dt < M < ∞
for some constant M . The proof of continuity of X(ω) is more complex. See Reference [C7,
pp. 92-93]. If the X(ω) of a signal x(t) exists, it is called the frequency spectrum or, simply,
the spectrum of x(t).
2 Sufficient conditions for the existence of a Fourier series of a periodic signal are that x(t) is of bounded variation and absolutely integrable over one period.
The impulse in Example 4.3.2 and the signal in Example 4.3.3 meet the Dirichlet condi-
tions. Their Fourier transforms or spectra exist as derived in (4.23) and (4.25). They are
indeed bounded and continuous. All real-world signals are absolutely integrable, as discussed
in Subsection 3.5.1, and of bounded variation. Thus every real-world signal has a bounded
and continuous Fourier transform or spectrum.
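The boundedness claim can be checked for the spectrum in (4.25), whose peak value |X(0)| = 15 equals the total area ∫|x(t)| dt = 3/0.2 (a Python sketch):

```python
# For x(t) = 3*exp(-0.2t), the integral of |x(t)| is 3/0.2 = 15, so the bound
# |X(w)| <= integral of |x(t)| dt reads |3/(0.2 + jw)| <= 15, equality at w = 0.
M = 3 / 0.2
ws = [-10 + 0.01 * k for k in range(2001)]
mags = [abs(3 / (0.2 + 1j * w)) for w in ws]
print(max(mags) <= M + 1e-9, abs(max(mags) - M) < 1e-6)
```

The magnitude never exceeds the area under |x(t)| at any frequency, exactly as the inequality above predicts.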
The signal shown in Figure 4.5(a) can be described by x(t) in (4.24) or by X(ω) in (4.25).
Because x(t) is a function of time t, it is called the time-domain description of the signal.
Because X(ω) is a function of frequency ω, it is called the frequency-domain description.
Mathematically speaking, x(t) and X(ω) contain the same amount of information and either
one can be obtained from the other. However X(ω) will reveal explicitly the distribution of
energy of a signal in frequencies as we will discuss in a later section.
Even though a signal is real-valued and positive-time (x(t) = 0, for all t < 0), its spectrum
X(ω) is generally complex-valued and defined for all ω in (−∞, ∞). For a complex X(ω), we
can express it as
X(ω) = A(ω)ejθ(ω) (4.26)
where A(ω) := |X(ω)| ≥ 0 is the magnitude of X(ω) and is called the magnitude spectrum.
The function θ(ω) := ∠X(ω) is the phase of X(ω) and is called the phase spectrum. Note
that both A(ω) and θ(ω) are real-valued functions of real ω. For example, for the spectrum
in (4.25), using 3 = 3e^{j0} and
0.2 + jω = √(0.2² + ω²) e^{j tan⁻¹(ω/0.2)}
we can write it as
X(ω) = 3/(0.2 + jω) = [3/√(0.04 + ω²)] e^{j(0 − tan⁻¹(ω/0.2))}
Thus its magnitude spectrum is
A(ω) := |X(ω)| = 3/√(0.04 + ω²)    (4.27)
and its phase spectrum is
θ(ω) := − tan⁻¹(ω/0.2)    (4.28)
We next discuss a general property of spectra of real-valued signals. A signal x(t) is real-
valued if its complex conjugate equals itself, that is, x(t) = x∗(t). Substituting (4.26) into
(4.21), taking their complex conjugate, and using x∗(t) = x(t), A∗(ω) = A(ω), θ∗(ω) = θ(ω),
and (e^{jθ})∗ = e^{−jθ}, we have
[A(ω)e^{jθ(ω)}]∗ = [∫_{t=−∞}^{∞} x(t)e^{−jωt} dt]∗
or
A(ω)e^{−jθ(ω)} = ∫_{t=−∞}^{∞} x(t)e^{jωt} dt
See Section 3.2 and Problem 3.8. The right-hand side of the preceding equation equals X(−ω).
See Section 3.4. Thus we have A(−ω) = A(ω) and θ(−ω) = −θ(ω); that is, the magnitude
spectrum of a real-valued signal is an even function of ω and its phase spectrum is an odd
function of ω. The spectrum in Figure 4.5(b) is generated by the following:
% Subprogram 4.1
w=-10:0.001:10;
X=3.0./(j*w+0.2);
plot(w,abs(X),w,angle(X),’:’)
The first line generates ω from −10 to 10 with increment 0.001. Note that every statement in
MATLAB is executed and stored in memory. A semicolon (;) at its end suppresses its display
on the monitor. The second line is (4.25). Note the use of dot division (./) for element
by element division in MATLAB. If we use division without dot (/), then an error message
will appear. The MATLAB function abs computes absolute value or magnitude and angle
computes angle or phase. We see that using MATLAB to plot frequency spectra is very
simple. As shown in Figure 4.5(b), the magnitude spectrum is an even function of ω, and the
phase spectrum is an odd function of ω.
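The even/odd symmetry, together with (4.27) and (4.28), can be checked in Python, where cmath.polar plays the role of MATLAB’s abs and angle (a sketch):

```python
import cmath
import math

def X(w):                        # the spectrum in (4.25)
    return 3 / (0.2 + 1j * w)

for w in (0.5, 2.0, 7.3):
    A, theta = cmath.polar(X(w))                          # magnitude and phase
    assert abs(A - 3 / math.sqrt(0.04 + w * w)) < 1e-12   # (4.27)
    assert abs(theta + math.atan(w / 0.2)) < 1e-12        # (4.28)
    assert abs(abs(X(-w)) - A) < 1e-12                    # A(-w) = A(w): even
    assert abs(cmath.phase(X(-w)) + theta) < 1e-12        # theta(-w) = -theta(w): odd
print("symmetry checks pass")
```

These are precisely the even magnitude and odd phase visible in Figure 4.5(b).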
Note that (4.30) holds only for t in [0, L].3 For the same signal x(t), its Fourier transform
is
X(ω) = ∫_{t=0}^{L} x(t)e^{−jωt} dt    (4.32)
Comparing (4.31) and (4.32) yields
cm = X(mω0)/L    (4.33)
for all m. In other words, the Fourier coefficients are simply the samples of the Fourier
transform divided by L, or X(mω0)/L. On the other hand, the Fourier transform X(ω)
can be computed from the Fourier coefficients as
X(ω) = ∑_{m=−∞}^{∞} cm [e^{j(mω0−ω)L} − 1]/[j(mω0 − ω)]    (4.34)
for all ω. See Problems 4.18 and 4.19. Thus for a signal of a finite duration, there is a
one-to-one correspondence between its Fourier series and Fourier transform.
For a signal of infinite duration, the Fourier series is applicable only if the signal is
periodic. However the Fourier transform is applicable whether the signal is periodic or
not. Thus the Fourier transform is more general than the Fourier series.
2. The time interval of a time function is embedded in its Fourier transform; there is no
need to specify the applicable time interval. Thus there is a unique relationship between
a time function and its Fourier transform. The Fourier transforms of a periodic signal
of one period, two periods and ten periods are all different. This is not the case for the
Fourier series.
3. The Fourier transform is a function of ω alone (does not contain explicitly t) and de-
scribes fully a time signal. Thus it is the frequency-domain description of the time signal.
The Fourier coefficients cannot be used alone, they make sense only in the expression
in (4.14) which is still a function of t. Thus the Fourier series is, strictly speaking, not
a frequency-domain description. However, many texts consider the Fourier series to be
a frequency-domain description and call its coefficients frequency spectrum.
4. The basic result of the Fourier series is that a periodic signal consists of fundamental
and harmonics. Unfortunately, the expression remains the same no matter how long
the periodic signal is. The Fourier transform of a periodic signal will also exhibit fun-
damental and harmonics. Moreover, the longer the periodic signal, the more prominent
the harmonics. See Problem 5.14 and Figure 5.19 of the next chapter. Thus the Fourier
transform is a better description than the Fourier series even for periodic signals.
Because of the preceding reasons, we downplay the Fourier series. The only result in the
Fourier series that will be used in this text is the Fourier series of the sampling function
discussed in Example 4.2.3.
As discussed in Section 3.5.1, the total energy of x(t) defined for t in (−∞, ∞) is
E = ∫_{t=−∞}^{∞} x(t)x∗(t) dt = ∫_{t=−∞}^{∞} |x(t)|² dt    (4.35)
Not every signal has finite total energy. For example, if x(t) = 1, for all t, then its total
energy is infinite. Sufficient conditions for x(t) to have finite total energy are, as discussed
in Section 3.5.1, x(t) is bounded and absolutely integrable. All real-world signals have finite
total energy.
It turns out that the total energy of x(t) can also be expressed in terms of the spectrum
of x(t). Substituting (4.22) into (4.35) yields
E = ∫_{t=−∞}^{∞} x(t)[x(t)]∗ dt = ∫_{t=−∞}^{∞} x(t) [(1/2π) ∫_{ω=−∞}^{∞} X∗(ω)e^{−jωt} dω] dt
Interchanging the order of integrations yields
E = (1/2π) ∫_{ω=−∞}^{∞} X∗(ω) [∫_{t=−∞}^{∞} x(t)e^{−jωt} dt] dω
The term inside the brackets is X(ω). Thus we have, using X∗(ω)X(ω) = |X(ω)|²,
E = ∫_{t=−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{ω=−∞}^{∞} |X(ω)|² dω = (1/π) ∫_{ω=0}^{∞} |X(ω)|² dω    (4.36)
where we have used the evenness of |X(ω)|. Equation (4.36) is called Parseval’s formula.
Note that the energy of x(t) is independent of its phase spectrum. It depends only on its
magnitude spectrum.
Equation (4.36) shows that the total energy of x(t) can also be computed from its mag-
nitude spectrum. More important, the magnitude spectrum reveals the distribution of the
energy of x(t) in frequencies. For example, the energy contained in the frequency range
[ω1, ω2] can be computed from
(1/2π) ∫_{ω=ω1}^{ω2} |X(ω)|² dω    (4.37)
We mention that if the width of [ω1, ω2] approaches zero and if X(ω) contains no impulses,
then (4.37) approaches zero. See (2.11). Thus for a real-world signal, it is meaningless to talk
about its energy at an isolated frequency. Its energy is nonzero only over a nonzero frequency
interval. For the signal in Figure 4.5(a), even though its energy is distributed over frequencies
in (−∞, ∞), most of its energy is contained in the frequency range [−5, 5] in rad/s as we can
see from Figure 4.5(b).
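Parseval’s formula (4.36) can be checked in closed form for x(t) = 3e^{−0.2t}, since |X(ω)|² = 9/(0.04 + ω²) has the antiderivative (9/0.2) tan⁻¹(ω/0.2) (a Python sketch):

```python
import math

# Time-domain energy of x(t) = 3*exp(-0.2t): E = int 9*exp(-0.4t) dt = 9/0.4.
E_time = 9 / 0.4
# Frequency-domain energy, (1/pi) * int_0^inf 9/(0.04 + w^2) dw:
E_freq = (1 / math.pi) * (9 / 0.2) * (math.pi / 2)
# Fraction of the total energy inside the frequency range [-5, 5] rad/s:
frac = ((1 / math.pi) * (9 / 0.2) * math.atan(5 / 0.2)) / E_time
print(E_time, round(E_freq, 6), round(frac, 2))
```

Both computations give E = 22.5, and about 97% of the energy lies in [−5, 5] rad/s, consistent with Figure 4.5(b).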
For any x(t) with Fourier transform X(ω), the definition in (4.21) yields
F[e^{jω0t} x(t)] = X(ω − ω0)    (4.38)
This is called frequency shifting because the frequency spectrum of x(t) is shifted to ω0 if x(t)
is multiplied by e^{jω0t}.
Consider a CT signal x(t). Its multiplication by cos ω c t is called modulation. To be more
specific, we call
xm (t) := x(t) cos ω c t
4.4. DISTRIBUTION OF ENERGY IN FREQUENCIES 69
Figure 4.6: (a) 3e−0.2t cos 2t. (b) 3e−0.2t cos 10t.
the modulated signal; cos ωct, the carrier signal; ωc, the carrier frequency; and x(t), the
modulating signal. For example, if x(t) = 3e^{−0.2t}, for t ≥ 0 as in Figure 4.5(a), and if ωc = 1,
then xm(t) = x(t) cos t is as shown in Figure 4.6(a). Note that, because | cos ωct| ≤ 1,
the plot of xm(t) is bounded by ±x(t) as shown in Figure 4.6(a) with dashed lines. We plot
in Figure 4.6(b) xm(t) for ωc = 10.
We now compute the spectrum of xm (t). Using Euler’s formula, the linearity of the Fourier
transform, and (4.38), we have
Xm(ω) := F[xm(t)] = F[x(t) cos ωct] = F[x(t)(e^{jωct} + e^{−jωct})/2]
       = 0.5[X(ω − ωc) + X(ω + ωc)]    (4.39)
for all ω in (−∞, ∞). This is an important equation. We first give an example.
Example 4.4.1 The spectrum of x(t) = 3e−0.2t , for t ≥ 0, as computed in Example 4.3.3, is
X(ω) = 3/(0.2 + jω). Thus the spectrum of xm (t) = x(t) cos ωc t is
Xm(ω) = 0.5 [ 3/(0.2 + j(ω − ωc)) + 3/(0.2 + j(ω + ωc)) ]
      = 1.5 [ 1/((0.2 + jω) − jωc) + 1/((0.2 + jω) + jωc) ]
      = 1.5 [ ((0.2 + jω + jωc) + (0.2 + jω − jωc)) / ((0.2 + jω)² − (jωc)²) ]

or

Xm(ω) = 3(0.2 + jω) / ((0.2 + jω)² + ωc²)    (4.40)
We plot in Figure 4.7(a) |Xm(ω)| (solid line), 0.5|X(ω − ωc)| (dotted line), and 0.5|X(ω + ωc)|
(dot-and-dashed line) for ωc = 1. We plot in Figure 4.7(b) the corresponding phase spectra.
We repeat the plots in Figure 4.8 for ωc = 10. 2
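The identity (4.39) and the combined form (4.40) can be compared numerically. A short Python/NumPy check (MATLAB is used elsewhere in this text; all names here are ours):

```python
import numpy as np

w = np.linspace(-40, 40, 8001)
wc = 10.0

X = lambda v: 3 / (0.2 + 1j * v)               # spectrum of 3*exp(-0.2 t)
shifted = 0.5 * (X(w - wc) + X(w + wc))        # right-hand side of (4.39)
combined = 3 * (0.2 + 1j * w) / ((0.2 + 1j * w)**2 + wc**2)   # (4.40)

assert np.allclose(shifted, combined)
```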
Recovering x(t) from xm (t) is called demodulation. In order to recover x(t) from xm (t),
the waveform of Xm (ω) must contain the waveform of X(ω). Note that Xm (ω) is the sum of
0.5X(ω − ωc) and 0.5X(ω + ωc). The two shifted copies 0.5X(ω ± ωc) with ωc = 1 overlap
significantly, as shown in Figure 4.7; thus Xm(ω) does not contain the waveform of X(ω)
shown in Figure 4.5(b). On the other hand, for ωc = 10, the overlap of 0.5X(ω ± ωc) is
negligible, as in Figure 4.8, and we have 2Xm(ω) ≈ X(ω − 10) for ω in [0, 20].
We now discuss a condition for the overlapping to be negligible. Consider a signal with
spectrum X(ω). We select ω max such that |X(ω)| ≈ 0 for all |ω| > ω max . In other words,
70 CHAPTER 4. FREQUENCY SPECTRA OF CT AND DT SIGNALS
[Figure: magnitude (top) and phase in rad (bottom) versus frequency, −40 to 40 rad/s; figure not shown.]
Figure 4.7: (a) Plots of |Xm(ω)| (solid line), 0.5|X(ω − ωc)| (dotted line), and 0.5|X(ω + ωc)|
(dot-and-dashed line) for ωc = 1. (b) Corresponding phase plots.
[Figure: magnitude (top) and phase in rad (bottom) versus frequency, −40 to 40 rad/s; figure not shown.]
Figure 4.8: (a) Plots of |Xm(ω)| (solid line), 0.5|X(ω − ωc)| (dotted line), and 0.5|X(ω + ωc)|
(dot-and-dashed line) for ωc = 10. (b) Corresponding phase plots.
most of the nonzero magnitude spectrum, or most of the energy, of the signal lies inside the
frequency range [−ωmax, ωmax]. Now if we select the carrier frequency ωc to meet

ωc ≥ ωmax    (4.41)

then the overlapping of X(ω ± ωc) will be negligible. Under this condition, it is possible to
recover X(ω) from Xm(ω) and, consequently, to recover x(t) from xm(t).
Modulation and demodulation can be easily explained and understood using the concept
of spectra. It is not possible to do so directly in the time domain. Frequency spectra are
also needed in selecting ω c as in (4.41). Thus the Fourier analysis and its associated concept
are essential in the study. However, it is important to mention that once ωc is selected,
modulation and demodulation are carried out entirely in the time domain.
Modulation and demodulation are basic in communication. In communication, different
applications are assigned to different frequency bands as shown in Figure 4.9. For example,
the frequency band from 540 to 1600 kHz is allocated to AM (amplitude modulation) radio
transmission, and the band from 87.5 to 108 MHz to FM (frequency modulation) transmission. In
AM transmission, radio stations are assigned carrier frequencies from 540 to 1600 kHz with
10 kHz increments as shown in Figure 4.10. This is called frequency-division multiplexing.
theorem.
The only exception is the trivial case x(t) = 0 for all t. To establish the theorem, we show that
if x(t) is bandlimited to W and time-limited to b, then it must be identically zero for all t.
Indeed, if x(t) is bandlimited to W , then (4.22) becomes
x(t) = (1/2π) ∫_{−W}^{W} X(ω) e^{jωt} dω    (4.42)
Repeated differentiation with respect to t yields

x^{(k)}(t) = (1/2π) ∫_{−W}^{W} X(ω)(jω)^k e^{jωt} dω
for k = 0, 1, 2, . . . , where x^{(k)}(t) := d^k x(t)/dt^k. Because x(t) is time-limited to b, its
derivatives are identically zero for all |t| > b. Thus, for any fixed a with |a| > b, we have

∫_{−W}^{W} X(ω) ω^k e^{jωa} dω = 0    (4.43)

for every k ≥ 0. Next, for any t, we use e^{jωt} = e^{jω(t−a)} e^{jωa} to rewrite (4.42) as
x(t) = (1/2π) ∫_{−W}^{W} X(ω) e^{jω(t−a)} e^{jωa} dω
     = (1/2π) ∫_{−W}^{W} X(ω) [ Σ_{k=0}^{∞} (jω(t − a))^k / k! ] e^{jωa} dω
     = Σ_{k=0}^{∞} [ (j(t − a))^k / (2πk!) ] ∫_{−W}^{W} X(ω) ω^k e^{jωa} dω
which, following (4.43), is zero for all t. Thus a bandlimited and time-limited CT signal must
be identically zero for all t. In conclusion, no nontrivial CT signal can be both time-limited
and bandlimited. In other words, if a CT signal is time limited, then its spectrum cannot be
bandlimited or its nonzero magnitude spectrum will extend to ±∞.
Every real-world signal is time limited because it must start and stop somewhere. Conse-
quently its spectrum cannot be bandlimited. In other words, its nonzero magnitude spectrum
will extend to ±∞. However the magnitude must approach zero because the signal has finite
total energy.5 If a magnitude is less than, say 10−10 , it is nonzero in mathematics but could be
very well considered zero in engineering. Thus every real-world signal will be considered time
limited and bandlimited. For example, the signals in Figures 1.1 through 1.3 are time limited
and their frequency spectra, as will be computed in the next chapter, are bandlimited. In fact,
the bandlimited property was used in selecting a carrier frequency in the preceding subsection
and will be used again in the next subsection and next chapter. Thus in engineering, there is
no need to be mathematically exact.
5 If a spectrum does not approach zero as |ω| → ∞, then the signal has infinite total energy.
Figure 4.11: (a) Time signal x(t). (aa) Its magnitude spectrum. (b) Time compression x(2t).
(bb) Frequency expansion. (c) Time expansion x(t/2). (cc) Frequency compression.
One way is to define the time duration of x(t) as the width of the smallest time interval
outside which |x(t)| ≤ a·xmax, where xmax is the peak magnitude of x(t) and a is some small
constant. For example, if a = 0.01, then e−0.1t and e−2t have, as discussed in Section 2.7,
time durations of 50 and 2.5 seconds, respectively. If we select a different a, then we will
obtain different time durations.
Another way is to define the time duration as the width of the smallest time interval that
contains, say, 90% of the total energy of the signal. Thus there are many ways of defining the
time duration of a signal.
Likewise, there are many ways of defining the bandwidth of the spectrum of a signal. We
can define the frequency bandwidth as the width of the positive frequency range in which
|X(ω)| ≥ b·Xmax, where Xmax is the peak magnitude of X(ω) and b is a constant such as
0.01. We can also
define the frequency bandwidth as the width of the smallest positive frequency interval which
6 The frequency spectrum of the time signal in Figure 4.11(a) is complex-valued and extends all the way
to ±∞. See Problem 4.14 and the preceding subsection. It is assumed to have the form in Figure 4.11(aa) to
simplify the discussion.
4.5. FREQUENCY SPECTRA OF CT PURE SINUSOIDS IN (−∞, ∞) AND IN [0, ∞) 75
contains 45% of the total energy of the signal.7 In any case, no matter how the time duration
and frequency bandwidth are defined, generally we have
time duration ∼ 1/(frequency bandwidth)    (4.45)
It means that the larger the frequency bandwidth, the smaller the time duration and vice
versa. For example, the time function 1.5δ(t − 0.8) in Example 4.3.2 has zero time duration
and its magnitude spectrum is 1.5, for all ω. Thus the signal has infinite frequency bandwidth.
The time function x(t) = 1, for all t, has infinite time duration; its spectrum (as we will show
in the next section) is F[1] = 2πδ(ω), which has zero frequency bandwidth. It is also consistent
with our earlier discussion of time expansion and frequency compression.
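The reciprocity in (4.45) can also be illustrated with the exponentials e−αt discussed above. Under the threshold definitions with a = b = 0.01, the product of time duration and frequency bandwidth comes out the same for every decay rate α. A Python sketch (the helper names are ours, following the text's thresholds; the book's own programs use MATLAB):

```python
import numpy as np

def duration(alpha, a=0.01):
    # time at which exp(-alpha*t) has decayed to a times its peak value 1
    return np.log(1 / a) / alpha

def bandwidth(alpha, b=0.01):
    # positive frequency at which |X(w)| = 1/sqrt(alpha**2 + w**2)
    # has dropped to b times its peak Xmax = 1/alpha
    return alpha * np.sqrt(1 / b**2 - 1)

products = [duration(al) * bandwidth(al) for al in (0.1, 2.0, 50.0)]
# the product is independent of the decay rate, as (4.45) suggests
assert max(products) - min(products) < 1e-6
print([round(p, 1) for p in products])
```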
A mathematical proof of (4.45) is difficult, if not impossible, if we use any of the afore-
mentioned definitions. However, if we define the time duration as
L = ( ∫_{−∞}^{∞} |x(t)| dt )² / ∫_{−∞}^{∞} |x(t)|² dt
Figure 4.12: Frequency spectra of (a) x(t) = 1, for all t, (b) sin ω 0 t, (c) cos ω 0 t, and (d) q(t).
and

F[cos ω0 t] = F[(e^{jω0 t} + e^{−jω0 t})/2] = πδ(ω − ω0) + πδ(ω + ω0)    (4.50)
The magnitude and phase spectra of (4.49) are shown in Figure 4.12(b). They consist of two
impulses with weight π at ω = ±ω0, with phase −π/2 (rad) at ω = ω0 and π/2 at ω = −ω0,
as denoted by solid dots. The magnitude and phase spectra of (4.50) are plotted in Figure
4.12(c). The frequency spectra of sin ω0 t and cos ω0 t are zero everywhere except at ±ω0 .
Thus their frequency spectra as defined are consistent with our perception of their frequency.
Note that the sine and cosine functions are not absolutely integrable in (−∞, ∞), and yet
their frequency spectra are still defined. Their spectra consist of impulses, which are neither
bounded nor continuous.
We encounter in practice only positive-time signals. However, in discussing spectra of
periodic signals, it is simpler to consider them to be defined for all t. For example, the
4.6. DT PURE SINUSOIDS 77
frequency spectrum of a step function defined by q(t) = 1, for t ≥ 0, and q(t) = 0, for t < 0,
can be computed as

Q(ω) = F[q(t)] = πδ(ω) + 1/(jω)    (4.51)
The spectrum of cos ω0 t, for t ≥ 0, can be computed as
F[cos ω0 t · q(t)] = 0.5π[δ(ω − ω0) + δ(ω + ω0)] + jω/(ω0² − ω²)    (4.52)
Their derivations are fairly complex. See Reference [C8, 2nd ed.]. If the signals are extended
to −∞, then their spectra can be easily computed as in (4.49) and (4.50). This may be a
reason for studying signals defined in (−∞, ∞). Moreover, the spectra of q(t) and cos ω0 t, for
t ≥ 0, are nonzero for all ω; thus, mathematically speaking, they are not a dc signal or a pure
cosine.
Many engineering texts call the set of Fourier coefficients the discrete frequency spectrum.
For example, because of
sin ω0 t = −0.5jejω0 t + 0.5je−jω0 t
the set of the Fourier coefficients ∓0.5j at ±ω0 is called the discrete frequency spectrum of
sin ω0 t in many texts. In this text, frequency spectra refer exclusively to Fourier transforms.
Because
F[sin ω0 t] = −jπδ(ω − ω0 ) + jπδ(ω + ω0 )
the frequency spectrum of sin ω0 t for t in (−∞, ∞) consists of impulses that are nonzero
only at ±ω0; thus we may also call it a discrete frequency spectrum.
To conclude this section, we discuss the term ‘frequency’ in frequency spectrum. A pure
sinusoid has a well defined frequency. A periodic signal can be decomposed as a sum of
pure sinusoids with discrete frequencies. Because the Fourier transform is developed from the
Fourier series, an aperiodic signal may be interpreted as consisting of infinitely many pure
sinusoids with various frequencies in a continuous range. This however is difficult to visualize.
Alternatively, we may consider the frequency spectrum to indicate rates of change of the time
function. The ultimate rate of change is a discontinuity. If a time function has a discontinuity
anywhere in (−∞, ∞), then its nonzero spectrum, albeit very small, will extend to ±∞. For
example, the step function has a discontinuity at t = 0 and its spectrum contains the factor
1/jω which is nonzero for all ω. Thus we may also associate the frequency spectrum of a
signal with its rates of changes for all t.
x[n] = x[n + N ]
for all integers n. If x[n] is periodic with period N , then it is periodic with period 2N, 3N, . . . .
The smallest such N is called the fundamental period.
[Figure: two panels, amplitude versus time 0–30 s; figure not shown.]
Figure 4.13: (a) sin 0.3πnT with T = 1. (b) sin 0.7πnT with T = 1.
The CT sinusoid sin ω0 t is periodic for every ω0. Is the DT sin ω0 nT periodic for every ω0?
If so, then there exists an integer N such that

sin ω0 (n + N)T = sin ω0 nT

for all integers n.
For example, sin 0.3πnT with T = 1 is periodic with fundamental period 20 (in seconds).
Can we define its frequency as 2π/20 = 0.314 (rad/s)? The answer is negative. To see the
reason, consider
consider
sin 0.7πnT
with T = 1 as shown in Figure 4.13(b) and consider
sin 1.2nT,  sin 7.4832nT,  sin 13.7664nT,  sin(−5.0832nT)    (4.55)

for all integers n and T = 1. These four sin ωi nT with different ωi appear to represent
different DT sequences. Is this so? Let us use MATLAB to compute their values for n from
0 to 5:
n 0 1 2 3 4 5
sin 1.2nT 0 0.9320 0.6755 -0.4425 -0.9962 -0.2794
sin(7.4832nT ) 0 0.9320 0.6754 -0.4426 -0.9962 -0.2793
sin(13.7664nT ) 0 0.9320 0.6754 -0.4426 -0.9962 -0.2793
sin(−5.0832nT ) 0 0.9320 0.6755 -0.4425 -0.9962 -0.2795
We see that other than some differences in the last digit due to rounding errors in computer
computation, the four DT sinusoids appear to generate the same DT sequence. Actually this
is the case as we show next.
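This agreement can be reproduced directly. A Python/NumPy check of the table above (the text's own computation used MATLAB):

```python
import numpy as np

n = np.arange(0, 6)
T = 1.0
seqs = [np.sin(w * n * T) for w in (1.2, 7.4832, 13.7664, -5.0832)]

# the four frequencies differ by (near) multiples of 2*pi = 6.2832 when T = 1,
# so up to the rounding of pi they generate the same DT sequence
for s in seqs[1:]:
    assert np.allclose(s, seqs[0], atol=1e-3)
```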
Consider the DT complex exponential e^{jωnT}. We show that

e^{jω1 nT} = e^{jω2 nT}  if ω1 = ω2 (mod 2π/T)    (4.56)

for all integers n. Recall from (3.4) and (3.5) that ω1 = ω2 (mod 2π/T) means

ω1 = ω2 + k(2π/T)

for some integer k. Thus we have

e^{jω1 nT} = e^{j(ω2 + k(2π/T))nT} = e^{jω2 nT} e^{jk(2π)n} = e^{jω2 nT}

for all integers n and k, where we have used e^{jk(2π)n} = 1. See Problem 3.6. This establishes
(4.56). Note that (4.56) also holds if the complex exponentials are replaced by sine functions.
In other words, we have

sin ω1 nT = sin ω2 nT  if ω1 = ω2 (mod 2π/T)

for all n.
We now are ready to show that the four DT sinusoids in (4.55) denote the same DT signal.
If T = 1 and if we use π = 3.1416, then we have 2π/T = 6.2832 and

7.4832 = 1.2 + 6.2832,  13.7664 = 1.2 + 2 × 6.2832,  −5.0832 = 1.2 − 6.2832

or

1.2 = 7.4832 = 13.7664 = −5.0832 (mod 6.2832)

Thus we have, for T = 1,

sin 1.2n = sin 7.4832n = sin 13.7664n = sin(−5.0832n)

for all n.
The preceding example shows that a DT sinusoid can be represented by many sin ω i nT .
If we plot ω i on a real line as shown in Figure 4.14, then all dots which differ by 2π/T or
its integer multiple denote the same DT sinusoid. Thus we may say that ω 0 in sin ω 0 nT
is periodic with period 2π/T . In order to have a unique representation, we must select a
frequency range of 2π/T . The most natural one will be centered around ω = 0. We cannot
select [−π/T, π/T ] because the frequency π/T can also be represented by −π/T . Nor can
we select (−π/T, π/T ) because it does not contain π/T . We may select either (−π/T, π/T ]
or [−π/T, π/T ). We select the former and call (−π/T, π/T ] (in rad/s) the Nyquist frequency
range (NFR). We see that the frequency range is dictated by the sampling period T . The
smaller T is, the larger the Nyquist frequency range.
The Nyquist frequency range can also be expressed in terms of sampling frequency. Let us
define the sampling frequency fs in Hz (cycles per second) as 1/T and the sampling frequency
ω s in rad/s as ω s = 2πfs = 2π/T . Then the Nyquist frequency range can also be expressed
as (−0.5fs , 0.5fs ] in Hz. Thus we have
Nyquist frequency range (NFR) = (−π/T, π/T ] = (−0.5ω s , 0.5ω s ] (in rad/s)
= (−0.5fs , 0.5fs ] (in Hz)
Note that in mathematical equations, we must use frequencies in rad/s, but in text we often
state frequencies in Hz. Note that if T = 1, then fs = 1/T = 1 and the
NFR becomes (−π, π] (rad/s) or (−0.5, 0.5] (Hz). This is the frequency range used in many
texts on Digital Signal Processing. Note that the sampling frequency may be called, more
informatively, the sampling rate, because it is the number of time samples in one second.
We are ready to give a definition of the frequency of DT sinusoids, which include e^{jω0 nT},
sin ω0 nT, and cos ω0 nT. Given a DT sinusoid, we first compute its Nyquist frequency range.
If ω0 lies outside the range, we add or subtract 2π/T, if necessary repeatedly, to shift ω0
inside the range; call the shifted value ω̄. Then the frequency of the DT sinusoid is defined
as ω̄, that is,

frequency of sin ω0 nT := ω̄,  where ω̄ = ω0 (mod 2π/T) and ω̄ lies in (−π/T, π/T]    (4.57)
Example 4.6.1 Consider the DT sinusoid sin 13.7664nT with T = 1. According to our defi-
nition, its frequency is not necessarily 13.7664 rad/s. We first compute its Nyquist frequency
range as (−π/T, π/T ] = (−π, π] = (−3.1416, 3.1416]. The number 13.7664 is outside the
range. Thus we must shift it inside the range. Subtracting 2π/T = 6.2832 from 13.7664
yields 7.4832 which is still outside the range. Subtracting again 6.2832 from 7.4832 yields 1.2
which is inside the range (−3.1416, 3.1416]. Thus the frequency of sin 13.7664nT with T = 1
is 1.2 rad/s. 2
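The shifting procedure of this example is easy to automate. A Python sketch (the helper name `dt_frequency` is ours; the text itself works the steps by hand):

```python
import numpy as np

def dt_frequency(w0, T):
    # fold w0 into the Nyquist frequency range (-pi/T, pi/T]
    half = np.pi / T
    return half - (half - w0) % (2 * half)

# Example 4.6.1: two subtractions of 2*pi/T bring 13.7664 to (about) 1.2 rad/s
assert abs(dt_frequency(13.7664, 1.0) - 1.2) < 1e-3
assert abs(dt_frequency(7.4832, 1.0) - 1.2) < 1e-3
# a frequency already inside the NFR is left alone; the edge pi/T maps to itself
assert abs(dt_frequency(1.2, 1.0) - 1.2) < 1e-9
assert abs(dt_frequency(np.pi, 1.0) - np.pi) < 1e-12
```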
Consider a DT sinusoid sin ω0 nT. We call a CT sinusoid sin ω̂t an envelope of sin ω0 nT if

sin ω̂t |_{t=nT} = sin ω0 nT

for all n. Clearly sin ω0 t is an envelope of sin ω0 nT. However, sin ω0 nT has many other
envelopes. For example, because

sin 1.2n = sin 7.4832n = sin 13.7664n = sin(−5.0832n)

for all n, sin 13.7664nT with T = 1 has envelopes sin 1.2t, sin 7.4832t, sin 13.7664t, sin(−5.0832t) and
many others. Among these envelopes, the one with the smallest ω̂ in magnitude will be called
the primary envelope. Thus the primary envelope of sin 13.7664nT is sin 1.2t which is a CT
sinusoid with frequency 1.2 rad/s. Thus strictly speaking, the frequency of DT sinusoids is
defined using the frequency of CT sinusoids. Moreover, the frequency of DT sinusoids so
defined is consistent with our perception of frequency as we will see in the next subsection.
We call the DT sinusoids
[Figure: (a) amplitude versus time 0–5 s; (b) DT frequency (rad/s) versus CT frequency (rad/s), with break points at ±π/T and ±2π/T; figure not shown.]
Figure 4.15: (a) sin 5t (solid line), its sampled sequence with T = 0.9 (solid dots), and its
primary envelope sin(−1.98t) (dashed line). (b) Relationship between the frequency of a
CT sinusoid and that of its sampled sequence.
sin ω 0 nT equals ω 0 if ω 0 lies inside the NFR (−π/T, π/T ]. If ω 0 lies outside the NFR, then
the frequency of sin ω 0 nT is ω̄ which is obtained by shifting ω 0 inside the range by subtracting
or adding 2π/T , or is defined as in (4.57). Thus the frequency of sin ω 0 t and the frequency
of sin ω 0 nT are related as shown in Figure 4.15(b). For example, if ω 0 = 5 and T = 0.9, then
2π/T = 6.98 and ω̄ = −1.98 can be obtained from the plot as shown.
In digital processing of a CT signal x(t) using its sampled sequence x(nT ), we must select
a sampling period T so that all essential information of x(t) is preserved in x(nT ). In the
case of sin ω 0 t, we must preserve its frequency in sin ω 0 nT . From Figure 4.15(b), we conclude
that T must be selected so that ω 0 lies inside the Nyquist frequency range (−π/T, π/T ].
However if ω 0 is on the edge of the range or T = π/ω 0 , a nonzero CT sinusoid may yield a
sequence which is identically zero. For example, if x(t) = sin 5t and if 5 = π/T or T = π/5,
then x(nT ) = sin 5nT = sin πn = 0, for all n. Thus we must exclude this case and require
ω 0 = 5 to lie inside the range (−π/T, π/T ). In conclusion, the sampled sequence of sin ω 0 t
has frequency ω 0 if |ω 0 | < π/T or
T < π/|ω0|  or  fs > 2f0    (4.58)
where fs = 1/T and f0 = ω0/(2π). Otherwise the sampled DT sinusoid sin ω0 nT has a
frequency different from ω0, called an aliased frequency. Note that the (fundamental) period
of sin ω0 t is P0 = 2π/|ω0|. Thus T < π/|ω0| implies T < P0/2. In other words, the sampled
sequence of sin ω0 t has frequency ω0 if we take more than two samples in one period, and it
has an aliased frequency if we take two or fewer samples in one period.
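This can be checked numerically for the example of Figure 4.15: sampling sin 5t with T = 0.9 produces exactly the samples of its primary envelope sin(−1.98t). A Python/NumPy sketch (the text's own programs are in MATLAB):

```python
import numpy as np

w0, T = 5.0, 0.9           # 2*pi/T ~ 6.98, so w0 = 5 lies outside (-pi/T, pi/T]
wbar = w0 - 2 * np.pi / T  # about -1.98 rad/s: the aliased frequency

n = np.arange(0, 50)
# the samples of sin(5t) coincide with the samples of the primary envelope sin(wbar*t)
assert np.allclose(np.sin(w0 * n * T), np.sin(wbar * n * T))
print(round(wbar, 2))      # -1.98
```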
We give one more explanation of aliased frequencies. Consider the wheel spinning with a
constant speed shown in Figure 4.16(a). The spinning of point A on the wheel can be denoted
by ejω0 t . Let us use a video camera to shoot, starting from t = 0, the spinning wheel. To
simplify the discussion, we assume that the camera takes one frame per second. That is,
we assume T = 1. Then point A on the n-th frame can be denoted by ejω0 n . If A spins
4.7. SAMPLING OF CT PURE SINUSOIDS – ALIASED FREQUENCIES 83
Figure 4.16: (Top) A spins complete cycles per second in either direction but appears station-
ary in its sampled sequence. (Bottom) A spins (3/4) cycle per second in the counterclockwise
direction but appears to spin (1/4) cycle per second in the clockwise direction.
in the counterclockwise direction one complete cycle every second, then we have ω0 = 2π
(rad/s) and A will appear as shown in Figure 4.16(top) for n = 0 : 3. Point A appears to
be stationary or the sampled sequence appears to have frequency 0 even though the wheel
spins one complete cycle every second. Indeed, if T = 1, then the Nyquist frequency range is
(−π, π]. Because ω 0 = 2π is outside the range, we subtract 2π/T = 2π from 2π to yield 0.
Thus the sampled sequence has frequency 0 which is an aliased frequency. This is consistent
with our perception. In fact, the preceding observation still holds if point A spins any integer
number of complete cycles every second in either direction.
Now if point A rotates 3/4 cycle per second in the counterclockwise direction or with
angular velocity ω 0 = 3π/2 (rad/s), then the first four frames will be as shown in Figure
4.16(bottom). To our perception, point A is rotating in the clockwise direction 1/4 cycle every
second or with frequency −π/2 rad/s. Indeed ω 0 = 3π/2 is outside the Nyquist frequency
range (−π, π]. Subtracting 2π from 3π/2 yields −π/2. This is the frequency of the sampled
sequence. Thus sampling ej(3π/2)t with sampling period T = 1 yields a DT sinusoid with
frequency −π/2 rad/s which is an aliased frequency.
If T = 1, the highest frequency of the sampled sequence is π rad/s. This follows directly
from Figure 4.15(b). If the frequency is slightly larger than π, then it appears to have a
negative frequency whose magnitude is slightly less than π. Thus for a selected T , the highest
frequency is π/T (rad/s).
To conclude this section, we mention that the plot in Figure 4.15(b) has an important
application. A stroboscope is a device that periodically emits flashes of light and acts as a
sampler. The frequency of emitting flashes can be varied. We aim a stroboscope at an object
that rotates with a constant speed. We increase the strobe frequency and observe the object.
The object will appear to increase its rotational speed and then suddenly reverse its rotational
direction.9 If we continue to increase the strobe frequency, the speed of the object will appear
to slow down and then to stop rotating. The strobe frequency at which the object appears
stationary is the rotational speed of the object. Thus a stroboscope can be used to measure
rotational velocities. It can also be used to study vibrations of mechanical systems.
[Figure: three panels, amplitude (±5) versus time 0–1 s; figure not shown.]
Figure 4.17: (a) x(t) (solid line), its sampled sequence with T1 = π/35 (solid dots), and
reconstructed CT signal (dotted line). (b) With T2 = π/60. (c) With T3 = π/200.
where we have used cos(−θ) = cos θ. This DT signal contains only one frequency, 50 rad/s.
The original 70 rad/s does not appear in (4.61). The most logically reconstructed CT signal
of (4.61) is shown in Figure 4.17(b) with a dotted line. It is different from x(t).
Finally we select T3 = π/200. Then its NFR is (−200, 200]. Both 50 and 70 are inside the
range and its resulting sampled signal in principal form is
which contains all original frequencies and its reconstructed CT signal is identical to the
original x(t).
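The folding in this example can be verified arithmetically. A Python sketch (names are ours; the frequencies 50 and 70 rad/s are those of this example):

```python
import numpy as np

freqs = (50, 70)          # frequencies of x(t) in rad/s, from this example

# T2 = pi/60: the NFR is (-60, 60], and 2*pi/T2 = 120, so 70 folds to 70 - 120 = -50;
# since cos(-50 t) = cos(50 t), only the frequency 50 rad/s survives
T2 = np.pi / 60
assert abs((70 - 2 * np.pi / T2) - (-50)) < 1e-9

# T3 = pi/200: the NFR is (-200, 200] and contains both frequencies, so no aliasing
T3 = np.pi / 200
assert all(abs(w) < np.pi / T3 for w in freqs)
```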
From the preceding discussion, we can now develop a sampling theorem. Consider a CT
signal x(t) that contains a finite number of sinusoids with various frequencies. Let ω max =
2πfmax be the largest frequency in magnitude. If the sampling period T = 1/fs is selected
so that
T < π/ωmax  or  fs > 2fmax    (4.63)
then
(−π/T, π/T ) contains all frequencies of x(t)
and, consequently, the sampled sequence x(nT ) contains all frequencies of x(t). If a sampling
period does not meet (4.63), then the sampled sequence x(nT ) in principal form contains some
aliased frequencies which are the frequencies of x(t) lying outside the NFR shifted inside the
NFR. This is called frequency aliasing. Thus time sampling may introduce frequency aliasing.
If frequency aliasing occurs, it is not possible to reconstruct x(t) from x(nT ).
On the other hand, if a sampling period meets (4.63), then x(nT ) in principal form contains
all frequencies of x(t). Furthermore x(t) can be constructed exactly from x(nT ). This is called
a sampling theorem. Note that 2fmax is often called the Nyquist rate. Thus if the sampling
rate fs is larger than the Nyquist rate, then there is no frequency aliasing.
The preceding sampling theorem is developed for CT signals that contain only sinusoids.
Actually it is applicable to any CT signal x(t), periodic or not, if ω max can be defined for
x(t). This will be developed in the next chapter.
Because x(nT)e^{−jωnT} contains no impulses and because all integration intervals are prac-
tically zero, we have F[x(nT)] = 0. See (2.11). In other words, applying the (CT) Fourier
transform directly to a DT signal will yield no information.
The DT signal x(nT ) consists of a sequence of numbers and can be expressed as in (2.26)
or
x(nT) = Σ_{k=−∞}^{∞} x(kT) δd(nT − kT)
Its value at kT is x(kT ). Now we replace the value by a CT impulse at kT with weight x(kT )
or replace x(kT )δd (nT − kT ) by x(kT )δ(t − kT ), where the impulse is defined as in (2.7). Let
the sequence of impulses be denoted by xd (t), that is,
xd(t) := Σ_{n=−∞}^{∞} x(nT) δ(t − nT)    (4.64)
The function xd(t) is zero everywhere except at the sampling instants, where it consists of
impulses with weights x(nT). Thus xd(t) can be considered a CT representation of a DT
signal x(nT ). The application of the Fourier transform to xd (t) yields
Xd(ω) := F[xd(t)] = ∫_{−∞}^{∞} [ Σ_{n=−∞}^{∞} x(nT) δ(t − nT) ] e^{−jωt} dt
       = Σ_{n=−∞}^{∞} x(nT) [ ∫_{−∞}^{∞} δ(t − nT) e^{−jωt} dt ] = Σ_{n=−∞}^{∞} x(nT) e^{−jωt} |_{t=nT}
where we have interchanged the order of integration and summation and used (2.14), or
Xd(ω) = Σ_{n=−∞}^{∞} x(nT) e^{−jωnT}    (4.65)
for all ω in (−∞, ∞). This is, by definition, the discrete-time (DT) Fourier transform of
the sequence x(nT ). This is the counterpart of the (CT) Fourier transform of x(t) defined in
(4.21). Just as in the CT case, not every sequence has a DT Fourier transform. For example, if
x(nT ) grows unbounded, then its DT Fourier transform is not defined. A sufficient condition
is that x(nT ) is absolutely summable. See Section 3.5.10 If Xd (ω) exists, it is called the
frequency spectrum or, simply, spectrum of x(nT). In general, Xd(ω) is complex-valued. We
call its magnitude |Xd(ω)| the magnitude spectrum and its phase ∠Xd(ω) the phase spectrum.
Before giving examples, we discuss two formulas. First we have
Σ_{n=0}^{N} r^n = 1 + r + r² + · · · + r^N = (1 − r^{N+1})/(1 − r)    (4.66)
where r is a real or complex constant and N is any positive integer. See Problems 4.28 and
4.29. If |r| < 1, then rN → 0 as N → ∞ and we have
Σ_{n=0}^{∞} r^n = 1/(1 − r)    for |r| < 1    (4.67)
The infinite summation diverges if |r| > 1 or r = 1. If r = −1, the summation in (4.67) is
not well defined. See Section 3.5. Thus the condition |r| < 1 is essential in (4.67). Now we
give an example.
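Both (4.66) and (4.67) are easy to check numerically, even for a complex r. A small Python sketch (the text's own programs are in MATLAB):

```python
# check (4.66) and (4.67) for a complex r with |r| < 1
r, N = 0.8j, 20
finite = sum(r**n for n in range(N + 1))
closed = (1 - r**(N + 1)) / (1 - r)
assert abs(finite - closed) < 1e-12            # (4.66)

partial = sum(r**n for n in range(2000))       # r**2000 is negligible
assert abs(partial - 1 / (1 - r)) < 1e-12      # (4.67)
```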
Example 4.8.1 Consider the DT signal defined by
x(nT) = 3 × 0.8^n for n ≥ 0,  and  x(nT) = 0 for n < 0    (4.68)
with T = 0.5. It is plotted in Figure 4.18(a) for n = 0 : 60. Using (4.67), we can show that
x(nT ) is absolutely summable. Its DT Fourier transform is
Xd(ω) = Σ_{n=−∞}^{∞} x(nT) e^{−jωnT} = Σ_{n=0}^{∞} 3 × 0.8^n e^{−jωnT} = 3 Σ_{n=0}^{∞} (0.8 e^{−jωT})^n

which equals, using (4.67) with r = 0.8e^{−jωT} and |r| = 0.8 < 1,

Xd(ω) = 3/(1 − 0.8 e^{−jωT})    (4.69)
10 This condition corresponds to the requirement that x(t) be absolutely integrable. See the Dirichlet
conditions for the CT case in Section 4.3. Note that every DT sequence is of bounded variation.
4.8. FREQUENCY SPECTRA OF DT SIGNALS 87
[Figure: (a) amplitude versus time 0–30 s; (b) magnitude and phase (rad) versus frequency −25 to 25 rad/s; figure not shown.]
Figure 4.18: (a) Positive-time sequence x(nT) = 3 × 0.8^n, for T = 0.5 and n ≥ 0. (b) Its
magnitude and phase spectra.

Typing in MATLAB
% Subprogram 4.2
w=-25:0.001:25;T=0.5;             % frequency grid (rad/s) and sampling period
Xd=3.0./(1-0.8*exp(-j*w*T));      % closed-form spectrum of the sequence
plot(w,abs(Xd),w,angle(Xd),':')

will yield the magnitude (solid line) and phase (dotted line) spectra in Figure 4.18(b). 2
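The closed form used in Subprogram 4.2 can be cross-checked against a truncated version of the sum (4.65). A Python/NumPy sketch of the same computation (the text uses MATLAB; all names here are ours):

```python
import numpy as np

T = 0.5
w = np.linspace(-25, 25, 2001)

# closed form plotted by Subprogram 4.2
Xd_closed = 3.0 / (1 - 0.8 * np.exp(-1j * w * T))

# truncated sum (4.65); the terms decay like 0.8**n, so 200 terms are ample
n = np.arange(0, 200)
terms = (3 * 0.8**n)[None, :] * np.exp(-1j * np.outer(w, n) * T)
Xd_sum = terms.sum(axis=1)

assert np.allclose(Xd_sum, Xd_closed)
```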
Example 4.8.2 Consider the DT signal x(0) = 1, x(T ) = −2, x(2T ) = 1, x(nT ) = 0, for
n < 0 and n ≥ 3, and T = 0.2. It is a finite sequence of length 3 as shown in Figure 4.19(a).
Its frequency spectrum is
Xd(ω) = Σ_{n=−∞}^{∞} x(nT) e^{−jnωT} = 1 − 2e^{−jωT} + e^{−j2ωT}    (4.70)
Even though (4.70) can be simplified (see Problem 4.34), there is no need to do so in using a
computer to plot its spectrum. Typing in MATLAB
[Figure: (a) amplitude versus time; (b) magnitude and phase (rad) versus frequency −50 to 50 rad/s; figure not shown.]
Figure 4.19: (a) Time sequence of length 3. (b) Its magnitude spectrum (solid line) and phase
spectrum (dotted line).
% Subprogram 4.3
w=-50:0.001:50;T=0.2;             % frequency grid (rad/s) and sampling period
Xd=1-2*exp(-j*w*T)+exp(-j*2*w*T); % spectrum (4.70) of the length-3 sequence
plot(w,abs(Xd),w,angle(Xd),':')
will yield the magnitude (solid line) and phase (dotted line) spectra in Figure 4.19(b).2
If x(nT ) is absolutely summable, then its spectrum is bounded and is a continuous function
of ω in (−∞, ∞).11 Note that even if x(nT ) is defined for n ≥ 0, its spectrum is defined for
positive and negative frequencies.
Because X(ω) and Xd (ω) are the results of the same Fourier transform, all properties of
X(ω) are directly applicable to Xd(ω). For example, if x(nT) is real, its magnitude spectrum
is even (|Xd(ω)| = |Xd(−ω)|) and its phase spectrum is odd (∠Xd(ω) = −∠Xd(−ω)), as
shown in Figures 4.18(b) and 4.19(b). These properties are identical to those for the CT case.
There is however an important difference between the CT and DT cases. Because of (4.56),
we have

Xd(ω1) = Xd(ω2)  if ω1 = ω2 (mod 2π/T)    (4.71)
which implies that Xd (ω) is periodic with period 2π/T . For example, the spectrum in Figure
4.18(b) is periodic with period 2π/0.5 = 12.6 as shown and the spectrum in Figure 4.19(b)
is periodic with period 2π/0.2 = 31.4 as shown. This is similar to the frequencies of DT
sinusoids discussed in Subsection 4.6.2. In order to have a unique representation, we restrict
Xd (ω) to a range of 2π/T . We can select the range as (−π/T, π/T ] or [−π/T, π/T ). As in
Section 4.6.2, we select the former and call it the Nyquist frequency range (NFR). Thus we
have

NFR = (−π/T, π/T] = (−0.5ωs, 0.5ωs]  (in rad/s) = (−0.5fs, 0.5fs]  (in Hz)

where fs = 1/T and ωs = 2πfs = 2π/T. This is the most significant difference between
the spectra of x(t) and x(nT ). Note that the highest frequency of a CT signal has no limit,
whereas the highest frequency of a DT signal is limited to π/T (in rad/s) or 0.5fs (in Hz).
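The periodicity (4.71) can be confirmed numerically for the spectrum (4.70) of Example 4.8.2. A Python/NumPy sketch (the text's own programs are in MATLAB):

```python
import numpy as np

T = 0.2
def Xd(w):
    # spectrum (4.70) of the length-3 sequence in Example 4.8.2
    return 1 - 2 * np.exp(-1j * w * T) + np.exp(-1j * 2 * w * T)

w = np.linspace(-50, 50, 1001)
# (4.71): the spectrum repeats with period 2*pi/T (about 31.4 rad/s here)
assert np.allclose(Xd(w), Xd(w + 2 * np.pi / T))
```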
The DT signal in Example 4.8.1 is a low-frequency signal because the magnitudes of Xd (ω)
in the neighborhood of ω = 0 are larger than those in the neighborhood of the highest
frequency π/T = π/0.5 = 6.3. For the DT signal in Example 4.8.2, the highest frequency is
π/T = π/0.2 = 15.7. It is a high-frequency signal because its magnitude spectrum around
ω = 15.7 is larger than the one around ω = 0.
Even though a great deal more can be said regarding spectra of DT signals, all we need
in this text is what has been discussed so far. Thus we stop its discussion here.
Its application to a periodic or aperiodic CT signal yields the spectrum of the signal. The
spectrum is generally complex valued. Its magnitude spectrum reveals the distribution of the
energy of the signal in frequencies. This information is needed in defining for the time signal
a frequency bandwidth which in turn is needed in selecting a carrier frequency and a sampling
frequency, and in specifying a filter to be designed.
The application of the CT Fourier transform to a DT signal modified by CT impulses
yields the spectrum of the DT signal. Because the spectra of CT and DT signals are obtained
by applying the same transform, they have the same properties. The only difference is that
spectra of CT signals are defined for all frequencies in (−∞, ∞); whereas spectra of DT signals
are periodic with period 2π/T = ω s in rad/s or fs in Hz and we need to consider them only in
the Nyquist frequency range (NFR) (−π/T, π/T ] = (−ω s /2, ω s /2] in rad/s or (−fs /2, fs /2]
in Hz.
The equation in (4.72) is applicable to real- or complex-valued x(t). The only complex-
valued signal used in this chapter is the complex exponential ejωt defined for all t in (−∞, ∞).
All other signals are real valued and positive time. Even though it is possible to define a
Fourier transform for real-valued signals defined for t ≥ 0, the subsequent development will
be no simpler than the one based on (4.72). In fact, computing the spectrum of real-valued
cos ωt or sin ωt will be more complex. Thus we will not introduce such a definition.
Fourier analysis of signals is important. Without it, modulation and frequency-division
multiplexing in communication would not have been developed. On the other hand, the only concept used there was the bandwidth of signals. Thus we must put the Fourier analysis in
perspective.
Problems
4.1 Plot 2ej0.5t on a circle at t = kπ, for k = 0 : 6. What is its direction of rotation, clockwise
or counterclockwise? What are its period and its frequency in rad/s and in Hz?
4.2 Plot 2e−j0.5t on a circle at t = kπ, for k = 0 : 6. What is its direction of rotation,
clockwise or counterclockwise? What are its period and its frequency in rad/s and in
Hz?
4.3 Is the signal sin 3t + sin πt periodic? Give your reasons.
4.4 Is the signal 1.2 + sin 3t periodic? If yes, what are its fundamental period and fundamental frequency?
90 CHAPTER 4. FREQUENCY SPECTRA OF CT AND DT SIGNALS
4.5 Repeat Problem 4.4 for the signal 1.2 + sin πt.
4.6 Is the signal x(t) = −1.2 − 2 sin 2.1t + 3 cos 2.8t periodic? If yes, what is its fundamental
frequency and fundamental period?
4.7 Express x(t) in Problem 4.6 in complex Fourier series with its coefficients in polar form.
4.8 Use the integral formula
\[ \int e^{at}\,dt = \frac{e^{at}}{a} \]
to verify that x(t) = 3e−0.2t , for t ≥ 0, is absolutely integrable in [0, ∞).
4.9 What is the spectrum of x(t) = 2e5t , for t ≥ 0?
4.10 What is the spectrum of x(t) = −2e−5t , for t ≥ 0? Compute the spectrum at ω = 0, ±5,
and ±100, and then sketch roughly its magnitude and phase spectra for ω in [−100, 100].
Is the magnitude spectrum even? Is the phase spectrum odd?
4.11 What is the spectrum of x(t) = δ(t − 0)? Compute its total energy from its magnitude
spectrum. Can an impulse be generated in practice?
4.12 Let X(ω) be the Fourier transform of x(t). Verify that, for any t0 > 0,
and then use it to verify that if x(t) is real, then the real part of X(ω) is even and the
imaginary part of X(ω) is odd. Use the result to verify (4.29). This is the conventional
way of verifying (4.29). Which method is simpler?
4.14 Show that if x(t) is real and even (x(t) = x(−t)), then X(ω) is real and even. Can the
spectrum of a positive-time signal be real? If we consider all real-world signals to be
positive time, will we encounter real-valued spectra in practice?
4.15 Consider a signal x(t). It is assumed that its spectrum is real valued and as shown in
Figure 4.11(a). What are the spectra of x(5t) and x(0.2t)?
4.16 Consider a signal x(t). It is assumed that its spectrum is real valued and of the form
shown in Figure 4.11(a). Plot the spectra of x(t − 0.5), x(0.5(t − 0.5)), x(2(t − 0.5)),
x(t − 0.5) cos 2t, and x(t − 0.5) cos 10t.
4.17* Consider the Fourier series in (4.14) and (4.15). Now if we use P = 2P0 to develop its Fourier series as
\[ x(t) = \sum_{m=-\infty}^{\infty} \bar{c}_m e^{jm\bar{\omega}_0 t} \]
with
\[ \bar{c}_m = \frac{1}{P}\int_{t=-P/2}^{P/2} x(t)e^{-jm\bar{\omega}_0 t}\,dt = \frac{1}{P}\int_{t=0}^{P} x(t)e^{-jm\bar{\omega}_0 t}\,dt \]
we see that half of the computed Fourier coefficients will be zero. If we use P = 3P0 , then two-thirds of the computed Fourier coefficients will be zero. In other words, the final result of the Fourier series remains the same if we use any period to carry out the computation. However, if we use the fundamental period, then the amount of computation will be minimum.
4.18* For a signal x(t) of a finite duration, we can develop its Fourier series if we specify its applicable time interval. Clearly we can also develop its Fourier transform. For convenience, we assume x(t) = 0, for t < −P0 /2 and t > P0 /2. In other words, the signal is nonzero only for t in [−P0 /2, P0 /2]. Then its Fourier transform is
\[ X(\omega) = \int_{t=-P_0/2}^{P_0/2} x(t)e^{-j\omega t}\,dt \]
for all ω. It means that the Fourier coefficients are the frequency samples of X(ω)/P0 , and the Fourier transform can be computed from the Fourier coefficients. Thus the two descriptions are equivalent for any signal of a finite duration. Note that this result is dual to the sampling theorem to be developed in the next chapter. See Problem 5.4.
4.19* Consider a signal x(t) defined for t in [0, L] for some finite L and zero outside the range. Verify that its Fourier transform X(ω) can be computed from its Fourier coefficients as in (4.34).
4.20 Consider a signal x(t). It is assumed that its spectrum X(ω) is real-valued and positive
for all ω. Its squared value or X 2 (ω) is plotted in Figure 4.20. What is the total
energy of the signal? Plot its spectrum and the spectrum of its modulated signal
xm (t) = x(t) cos 10t. What is the total energy of the modulated signal xm (t)?
4.21 Verify that the total energy of a signal x(t) is cut in half in its modulated signal xm (t) =
x(t) cos ω c t for any ω c so long as X(ω − ω c ) and X(ω + ω c ) do not overlap.
4.22 Consider the sequences x1 (nT ) = cos 2πnT , x2 (nT ) = cos 4πnT , and x3 (nT ) = cos 6πnT .
If T = 1, do the three sequences denote the same sequence? If yes, what is its frequency?
What is its principal form?
4.23 Consider the sequences x1 (nT ) = cos 2πnT , x2 (nT ) = cos 4πnT , and x3 (nT ) = cos 6πnT .
If T = 0.5, do the three sequences denote the same sequence? If not, how many different sequences are there? What are their frequencies? What are their principal forms?
4.24 Repeat Problem 4.23 for T = 0.1.
4.25 If we sample x(t) = sin 10t with sampling period T = 1, what is its sampled sequence
in principal form? Does the frequency of its sampled sinusoid equal 10 rad/s?
4.26 Repeat Problem 4.25 for T = 0.5, 0.3, and 0.1.
4.27 Consider x(t) = 2 − 3 sin 10t + 4 cos 20t. What is its sampled sequence in principal form
if the sampling period is T = π/4? What are its aliased frequencies? Can you recover
x(t) from x(nT )?
4.28 Repeat Problem 4.27 for T = π/5, π/10, and π/25.
4.33 Consider the sequence x(nT ) = 1.2, for n = 0, 2, and x(nT ) = 0, for all n other than
0 and 2, with T = 0.2. What is its frequency spectrum? Plot its magnitude and phase
spectra for ω in its NFR.
4.34 Verify that the spectrum of x(nT ) = 3 × 0.98n with T = 0.1 and n ≥ 0 is given by
\[ X_d(\omega) = \frac{3}{1 - 0.98\,e^{-j\omega T}} \]
4.35 Consider the finite sequence in Figure 4.19(a) of length N = 3 with T = 0.2. Its spectrum is given in (4.70). Verify that (4.70) can be simplified as
What are its magnitude and phase spectra? Verify analytically the plots in Figure 4.19(b). What are the magnitudes and phases of Xd (ω) at ω = m(2π/N T ), for m = 0 : 2? Note that in computer computation, if a complex number is zero, then its phase is automatically assigned to be zero. In analytical computation, this may not be the case, as shown in this problem. In practical application, there is no need to be concerned with this discrepancy.
4.36 Consider the sequence in Problem 4.35 with T = 1. Plot its magnitude and phase spectra for ω in [−π, 2π]. What are the magnitudes and phases of Xd (ω) at ω = m(2π/N T ), for m = 0 : 2? Are those values the same as those in Problem 4.35?
Chapter 5
Sampling Theorem and Spectral Computation
5.1 Introduction
This chapter discusses computer computation of frequency spectra. It consists of three parts.
The first part discusses the equation for such computation. We first derive it informally
from the definition of integration and then formally by establishing the sampling theorem.
This justifies mathematically the computation of the spectrum of a CT signal from its time
samples. Even so, the computed phase spectrum may not be correct; we give the reason and then demonstrate it with an example. Thus we plot mostly magnitude spectra.
In practice, we can use only a finite number of data and compute only a finite number of
frequencies. If their numbers are the same, then the DT Fourier transform (DTFT) reduces to
the discrete Fourier transform (DFT). We then discuss how to use the fast Fourier transform
(FFT), an efficient way of computing DFT, to compute spectra. We also discuss how to select
a largest possible sampling period T by downsampling.
In the last part, we show that the FFT-computed spectrum of a step function of a finite
duration looks completely different from the exact one. Even so, we demonstrate that the
FFT-computed result is correct. We also discuss a paradox involving ∞. We then introduce
trailing zeros in using FFT to complete the chapter.
Before proceeding, we mention that, in view of the discussion in Section 2.3.1, the spectrum
of a CT signal x(t) can be computed from its sampled sequence x(nT ) if T is selected to
be sufficiently small. Clearly the smaller T is, the more accurate the computed spectrum.
However it will be more difficult to design digital filters if the signal is to be processed digitally.1 See Subsection 11.10.1. Thus the crux of this chapter is to find a largest possible T
or an upper bound of T . An exact upper bound of T exists only for idealized signals. For
real-world signals, no such T exists. Thus the selection of an upper bound will be subjective.
If x(t) is absolutely integrable, then its spectrum exists and is a bounded and continuous
function of ω. If x(t) is, in addition, bounded, then x(t) has finite total energy (see Section
3.5.1). Every real-world signal, as discussed in Section 3.5.1, is absolutely integrable and
1 A smaller T requires more storage and more computation. This however is no longer an important issue
because of very inexpensive memory chips and very fast computing speed.
94 CHAPTER 5. SAMPLING THEOREM AND SPECTRAL COMPUTATION
bounded. In addition, it is time limited. Thus its nonzero spectrum will extend all the way
to ±∞. However because it has finite total energy, the magnitude spectrum must approach
zero, that is,
|X(ω)| → 0 as |ω| → ∞ (5.2)
See the discussion in Subsections 4.4.2 and 4.4.3. This condition will be used throughout this
chapter.
For real-world signals, analytical computation of (5.1) is not possible. Thus we discuss its numerical computation. Using the following definition of integration
\[ \int f(t)\,dt = \sum_{n} f(nT)\,T \quad \text{as } T \to 0 \]
we can approximate (5.1) as
\[ X(\omega) \approx T\left[\sum_{n=-\infty}^{\infty} x(nT)\,e^{-j\omega nT}\right] \]
where x(nT ) are the samples of x(t) with sampling period T . The term inside the brackets is defined in (4.65) as the frequency spectrum of x(nT ) or
\[ X_d(\omega) := \sum_{n=-\infty}^{\infty} x(nT)\,e^{-j\omega nT} \tag{5.3} \]
Thus we have
\[ X(\omega) = T X_d(\omega) \quad \text{as } T \to 0 \tag{5.4} \]
for all ω in (−∞, ∞).
Note that T → 0 is a concept and we can never achieve it in actual computation. No
matter how small T we select, there are still infinitely many Ti between T and 0. Thus for
a selected T , the equality in (5.4) must be replaced by an approximation. Moreover, the
approximation cannot hold for all ω in (−∞, ∞). Recall that we have assumed |X(ω)| → 0
as |ω| → ∞. Whereas Xd (ω) is periodic with period 2π/T = ω s for all ω in (−∞, ∞), and
we consider Xd (ω) for ω only in the Nyquist frequency range (−π/T, π/T ] = (−ω s /2, ω s /2].
Thus in actual computation, (5.4) must be replaced by
\[ X(\omega) \approx \begin{cases} T X_d(\omega) & \text{for } |\omega| < \pi/T = \omega_s/2 \\ 0 & \text{for } |\omega| \ge \pi/T = \omega_s/2 \end{cases} \tag{5.5} \]
for T sufficiently small. Clearly the smaller T , the better the approximation and the wider
the applicable frequency range. Computer computation of spectra is based on this equation.
The preceding derivation of (5.5) is intuitive. We develop it formally in the next section.
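The approximation (5.5) can be checked numerically on a signal whose spectrum is known in closed form. The sketch below is in Python rather than the MATLAB used elsewhere in this text; the signal x(t) = 3e−0.2t , t ≥ 0, has the exact spectrum X(ω) = 3/(0.2 + jω), and the values of T and the truncation point are choices made here for illustration.

```python
import cmath

def spectrum_exact(w):
    # Exact spectrum of x(t) = 3*exp(-0.2*t), t >= 0:
    # the integral of 3*exp(-(0.2 + jw)t) over [0, inf) is 3/(0.2 + jw).
    return 3.0 / (0.2 + 1j * w)

def spectrum_approx(w, T, N):
    # T * Xd(w), the approximation in (5.5), from N samples x(nT)
    return T * sum(3.0 * cmath.exp(-(0.2 + 1j * w) * n * T)
                   for n in range(N))

T = 0.001    # sampling period, small compared with the time constant 1/0.2
N = 60000    # covers t in [0, 60); x(60) is negligibly small
for w in (0.0, 1.0, 5.0):
    print(w, abs(spectrum_exact(w)), abs(spectrum_approx(w, T, N)))
```

The two magnitude columns agree closely; shrinking T further improves the match, exactly as (5.4) suggests.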
where
\[ x_d(t) := \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t-nT) \]
is the CT representation of the DT signal x(nT ) discussed in (4.64). Using x(nT )δ(t − nT ) = x(t)δ(t − nT ) as shown in (2.13), we can write xd (t) as
\[ x_d(t) = \sum_{n=-\infty}^{\infty} x(t)\,\delta(t-nT) = x(t)\sum_{n=-\infty}^{\infty}\delta(t-nT) \tag{5.8} \]
The infinite summation is the sampling function studied in (4.16) and is plotted in Figure 4.4. It is periodic with fundamental frequency ω s := 2π/T and can be expressed in Fourier series as in (4.18). Substituting (4.18) into (5.8) yields
\[ x_d(t) = x(t)\left[\frac{1}{T}\sum_{m=-\infty}^{\infty} e^{jm\omega_s t}\right] = \frac{1}{T}\sum_{m=-\infty}^{\infty} x(t)\,e^{jm\omega_s t} \tag{5.9} \]
Taking the Fourier transform of (5.9) term by term yields
\[ X_d(\omega) = \frac{1}{T}\sum_{m=-\infty}^{\infty} X(\omega - m\omega_s) \tag{5.10} \]
This equation relates the spectrum X(ω) of x(t) and the spectrum Xd (ω) of x(nT ). We discuss in the next subsection the implication of (5.10).
In other words, the nonzero part of the spectrum of x(t) lies inside the range [−ω max , ω max ]. Note that
if x(t) is bandlimited, then it cannot be, as discussed in Subsection 4.4.2, time-limited. All
real-world signals are time limited, therefore they cannot be bandlimited. Therefore what
will be discussed is applicable only to non-real-world or idealized signals.
For convenience of discussion, we assume that X(ω) is real-valued and is as shown in Figure 5.1(a). We write (5.10) as
\[ T X_d(\omega) = X(\omega) + X(\omega - \omega_s) + X(\omega + \omega_s) + X(\omega - 2\omega_s) + X(\omega + 2\omega_s) + \cdots \]
Note that X(ω ± ω s ) are the shiftings of X(ω) to ∓ω s as shown in Figure 5.1(b) with dotted
lines. Thus the spectrum of x(nT ) multiplied by T is the sum of X(ω) and its repetitive
shiftings to mω s , for m = ±1, ±2, . . .. Because Xd (ω) is periodic with period ω s , we need to
plot it only in the Nyquist frequency range (−0.5ω s , 0.5ω s ]2 bounded by the two vertical dotted
lines shown. Furthermore, because X(ω) is even, the two shiftings immediately outside the
2 If X(ω) contains no impulse, there is no difference in plotting it in [−0.5ω s , 0.5ω s ].
Figure 5.1: (a) Spectrum of x(t). (b) Spectrum of T x(nT ) with ω s > 2ω max . (c) Spectrum
of T x(nT ) with ω s < 2ω max .
range can be obtained by folding X(ω) with respect to ±0.5ω s . Now if the sampling frequency ω s is selected to be larger than 2ω max , then the shiftings will not overlap and we have
\[ X(\omega) = T X_d(\omega) \quad \text{for } \omega \text{ in } (-\omega_s/2,\ \omega_s/2] \tag{5.12} \]
Note that they are different outside the range because X(ω) is 0 and Xd (ω) is periodic outside
the range. Now if ω s < 2ω max , then the repetitive shiftings of X(ω) will overlap as shown in
Figure 5.1(c) and (5.12) does not hold. This is called frequency aliasing. In general, the effect
of frequency aliasing is not as simple as the one shown in Figure 5.1(c) because it involves
additions of complex numbers. See Problem 5.3.
In conclusion, if x(t) is bandlimited to ω max = 2πfmax and if T = 1/fs is selected so that
\[ T < \frac{\pi}{\omega_{max}} \quad \text{or} \quad f_s > 2 f_{max} \tag{5.13} \]
then we have
\[ X(\omega) = \begin{cases} T X_d(\omega) & \text{for } |\omega| < \pi/T \\ 0 & \text{for } |\omega| \ge \pi/T \end{cases} \tag{5.14} \]
Thus the spectrum of x(t) can be computed exactly from its samples x(nT ). Moreover,
substituting (5.7) into (5.14) and then into the inverse Fourier transform in (4.22), we can
express x(t) in terms of x(nT ) as
\[ x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\frac{T\,\sin[\pi(t-nT)/T]}{\pi(t-nT)} \tag{5.15} \]
See Problem 5.4. It means that for a bandlimited signal, the CT signal x(t) can be recovered
exactly from x(nT ). This is called the Nyquist sampling theorem.
5.2. RELATIONSHIP BETWEEN SPECTRA OF X(T ) AND X(N T ) 97
Equation (5.15) is called the ideal interpolator. It is however not used in practice because it requires an infinite number of operations and, more seriously, cannot be carried out in real
time. In practice we use a zero-order hold to construct a CT signal from a DT signal. In the
remainder of this chapter, we focus on how to compute X(ω) from x(nT ).
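The ideal interpolator (5.15) is easy to try numerically. The following Python sketch (the signal cos 10t, the sampling period T = 0.1, the evaluation point, and the symmetric truncation to 2N + 1 terms are all illustrative choices; the book's own programs use MATLAB) reconstructs a bandlimited signal between its samples:

```python
import math

def sinc_interpolate(samples, T, t, N):
    # Truncated version of the ideal interpolator (5.15):
    # x(t) ~ sum over n of x(nT) * T*sin(pi*(t - nT)/T) / (pi*(t - nT))
    total = 0.0
    for n in range(-N, N + 1):
        u = t - n * T
        if abs(u) < 1e-12:
            total += samples[n]      # the interpolating term equals 1 at u = 0
        else:
            total += samples[n] * T * math.sin(math.pi * u / T) / (math.pi * u)
    return total

# x(t) = cos(10 t) is bandlimited to w_max = 10 rad/s, and T = 0.1 gives
# pi/T = 31.4 > w_max, so the condition (5.13) is satisfied.
T = 0.1
N = 20000
samples = {n: math.cos(10 * n * T) for n in range(-N, N + 1)}
t = 0.325                            # a point between two samples
xhat = sinc_interpolate(samples, T, t, N)
print(xhat, math.cos(10 * t))        # reconstruction vs. the true value
```

The truncated sum already matches the true value to a few decimal places; the infinite sum in (5.15) would recover it exactly.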
All real-world signals have finite time duration, thus their spectra, as discussed in Section
4.4.3, cannot be bandlimited. However their spectra do approach zero as |ω| → ∞ because
they have finite energy. Thus in practice we select ω max so that
\[ |X(\omega)| \approx 0 \quad \text{for all } |\omega| > \omega_{max} \tag{5.16} \]
Clearly, the selection is subjective. One possibility is to select ω max so that |X(ω)| is less
than 1% of its peak magnitude for all |ω| > ω max . We then use (5.13) to select a T and
compute the spectrum of x(t) using x(nT ). In this computation, frequency aliasing will be
small. The frequency aliasing can be further reduced if we design a CT lowpass filter to make
the |X(ω)| in (5.16) even closer to zero. Thus the filter is called an anti-aliasing filter; it is
the filter shown in Figure 2.12. Because frequency aliasing is not identically zero, we must
replace the equality in (5.14) by an approximation as
\[ X(\omega) \approx \begin{cases} T X_d(\omega) & \text{for } |\omega| < \pi/T = \omega_s/2 \\ 0 & \text{for } |\omega| \ge \pi/T = \omega_s/2 \end{cases} \tag{5.17} \]
This is the same equation as (5.5) and will be the basis of computing the spectrum of x(t)
from x(nT ).
Figure 5.2: (a) Magnitude spectra of x(t) (solid line) and T x(nT ) (dotted line). (b) Corre-
sponding phase spectra.
and plotted in Figure 4.5(b). From the plot, we see that its magnitude spectrum is practically
zero for |ω| > 20. Thus we may select ωmax = 20 and require T < π/20 = 0.157. Arbitrarily
we select T = 0.1. The sampled sequence of x(t) with T = 0.1 is
x(nT ) = 3e−0.2nT = 3(e−0.2T )n = 3 × 0.98n
Before proceeding, we mention that in order for (5.18) and (5.22) to hold the computed
|T Xd (ω)| must be practically zero in the neighborhood of π/T = 0.5ω s . If the computed
|T Xd (ω)| is large at π/T , then |X(ω)| in (5.22) is discontinuous at ω = π/T . This violates
the fact that the spectrum of every real-world signal is continuous.
In application, we may be given an x(t) and be asked to compute its spectrum. In this case, we cannot use (5.16) to select a T . We now discuss a procedure for selecting a T . We
select arbitrarily a T and use the samples x(nT ) to compute Xd (ω) in (5.7). If the computed
Xd (ω) multiplied by T at ω = π/T is significantly different from zero, the condition in (5.16)
cannot be met and we must select a smaller T . We repeat the process until we find a T so
that the computed T Xd (ω) is practically zero in the neighborhood of ω = π/T , then we have
(5.18) and (5.22).
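The trial-and-error procedure just described can be sketched in a few lines. The following is a Python version (the signal is the one used later in Example 5.3.1; the stopping threshold of 2% of the peak magnitude is an arbitrary tolerance chosen here for illustration):

```python
import cmath
import math

def x(t):
    # the signal of Example 5.3.1: x(t) = 3 e^(-0.2 t) cos(10 t), t >= 0
    return 3.0 * math.exp(-0.2 * t) * math.cos(10.0 * t)

def T_Xd_mag(w, T, L):
    # |T * Xd(w)| computed from the samples x(nT) with nT in [0, L)
    N = int(round(L / T))
    return abs(T * sum(x(n * T) * cmath.exp(-1j * w * n * T)
                       for n in range(N)))

L = 30.0                         # truncation length, as in Example 5.3.1
T = 0.3                          # initial (too large) sampling period
peak = T_Xd_mag(10.0, T, L)      # magnitude near the resonance at w = 10
while T_Xd_mag(math.pi / T, T, L) > 0.02 * peak:
    T = T / 2                    # too much energy at w = pi/T: halve T
print("selected T =", T)
```

Each pass halves T; the loop stops once T |Xd (π/T )| is practically zero relative to the peak, which is the condition for (5.18) and (5.22) to hold.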
The selection of T discussed so far is based entirely on the sampling theorem. In practice,
the selection may involve other issues. For example, the frequency range of audio signals
is generally limited to 20 Hz to 20 kHz, or fmax = 20 kHz. According to the sampling
theorem, we may select fs as 40 kHz. However, in order to design a less stringent and,
consequently, less costly anti-aliasing analog filter, the sampling frequency is increased to
44 kHz. Moreover, in order to synchronize with the TV horizontal video frequency, the
sampling frequency of CD was finally selected as 44.1 kHz. Thus the sampling period of CD
is T = 1/44100 = 2.2676 · 10−5 s or 22.676 microseconds (µs). It is not determined solely by
the sampling theorem.
With a finite number of samples x(nT ), for n = 0 : N − 1, the spectrum in (5.7) becomes
\[ X_d(\omega) = \sum_{n=0}^{N-1} x(nT)\,e^{-j\omega nT} \tag{5.23} \]
where T > 0 and N is an integer. For a selected frequency, (5.23) can be readily computed
using a computer. As an example, we compute Xd (ω) at ω = 128 × 2π for the x(nT ) in
Program 3.4. Recall that it is the samples of the sound generated by a 128-Hz tuning fork
with sampling period T = 1/8000. We type in the control window of MATLAB the following
%Program 5.1
>> x=wavread(’f12.wav’);
>> N=length(x),T=1/8000;n=0:N-1;
>> w=2*pi*128;e=exp(-j*w*n*T);
>> tic,Xd=sum(x’.*e),toc
The first two lines are the first two lines of Program 3.4. The third line specifies the frequency in radians per second and generates the 1 × N row vector
\[ \mathbf{e} = [\,1 \;\; e^{-j\omega T} \;\; e^{-j\omega 2T} \;\; \cdots \;\; e^{-j\omega (N-1)T}\,] \]
The x generated by wavread is an N × 1 column vector. We take its transpose and multiply
it with e term by term using ‘.*’. Thus x’.*e is a 1 × N row vector with its entry equal
to the product of the corresponding entries of x’ and e. The MATLAB function sum sums
up all entries of x’.*e which is (5.23).4 The function tic starts a stopwatch timer and toc
returns the elapsed time. The result of the program is
N = 106000 Xd = −748.64 − 687.72j elapsed time=0.0160
3 If x(t) contains no impulses, as is the case in practice, there is no difference in using [0, L) or [0, L]. We adopt the former for convenience. See Problems 5.4 and 5.5.
4 Note that e is 1 × N and x is N × 1. Thus e ∗ x also yields (5.23). However if we type e*x in MATLAB,
an error message will occur. This may be due to the limitation in dimension in matrix multiplication.
We see that it takes only 0.016 second to compute Xd (ω), which involves over one hundred
thousand additions and multiplications, at a single frequency. For some applications such as
in DTMF (Dual-tone multi-frequency) detection, we need to compute Xd (ω) only at a small
number of frequencies. In this case, we may compute (5.23) directly.
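For such single-frequency computation, the sum in (5.23) is only a few lines in any language. The following Python sketch mirrors Program 5.1 but uses a synthetic 128-Hz tone sampled at 8000 Hz (one second of data) rather than the recorded file:

```python
import cmath
import math

def Xd_at(x, T, w):
    # Xd(w) = sum over n of x[n]*exp(-j*w*n*T), evaluated at one frequency;
    # this is the same sum that Program 5.1 computes in MATLAB.
    return sum(xn * cmath.exp(-1j * w * n * T) for n, xn in enumerate(x))

fs = 8000
T = 1.0 / fs
N = 8000                                        # one second of data
x = [math.sin(2 * math.pi * 128 * n * T) for n in range(N)]   # 128-Hz tone

mag_128 = abs(Xd_at(x, T, 2 * math.pi * 128))   # at the tone frequency
mag_300 = abs(Xd_at(x, T, 2 * math.pi * 300))   # away from the tone
print(mag_128, mag_300)                         # the tone stands out clearly
```

The magnitude at the tone frequency is N/2 = 4000 while the magnitude away from it is essentially zero, which is exactly what a DTMF-style detector exploits.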
If N is a power of 2, we can repeat the process k = log2 N times. By so doing, the number of operations
can be reduced to be proportional to N log2 N . Based on this idea, many methods, called
collectively the fast Fourier transform (FFT), were developed to compute (5.27) efficiently.
They are applicable for any positive integer N . For example, to compute all N = 106000
data of the problem in the preceding subsection, FFT will take, as will be demonstrated in
Section 5.4, only 0.078 second in contrast to 28 minutes by direct computation. Thus FFT is
indispensable in computing frequency spectra of signals. For the development of one version
of FFT, see Reference [C7].
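One classic radix-2 variant of the idea is short enough to sketch. The Python code below (the text relies on MATLAB's built-in fft instead; this simplified version requires N to be a power of 2) splits each N-point transform into two N/2-point transforms and agrees with the direct N²-operation evaluation of (5.27):

```python
import cmath

def dft_direct(x):
    # Direct evaluation of (5.27): proportional to N^2 operations
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N))
            for m in range(N)]

def fft_radix2(x):
    # Recursive radix-2 FFT: proportional to N*log2(N) operations
    N = len(x)
    if N == 1:
        return [x[0] + 0j]
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * N
    for m in range(N // 2):
        tw = cmath.exp(-2j * cmath.pi * m / N) * odd[m]   # twiddle factor
        out[m] = even[m] + tw
        out[m + N // 2] = even[m] - tw
    return out

x = [(n % 5) - 2.0 for n in range(64)]     # arbitrary real test data
err = max(abs(a - b) for a, b in zip(dft_direct(x), fft_radix2(x)))
print(err)                                  # agreement to round-off level
```

Production FFT routines (such as the one behind MATLAB's fft) handle arbitrary N; this sketch only illustrates where the N log2 N operation count comes from.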
We discuss only the use of FFT in this text. The use of FFT in MATLAB is very simple. Given a set of N data x expressed as a row vector, typing Xd=fft(x) will yield N entries which are the values of X[m], for m = 0 : N − 1, in (5.27). For example, typing in MATLAB
>> x=[1 -2 1];Xd=fft(x)
will yield
Xd = 0    1.5000 + 2.5981i    1.5000 − 2.5981i
√
Note that i and j both stand for −1 in MATLAB. What are the meanings of these two sets
of three numbers? The first set [1 -2 1] denotes a time sequence of length N = 3. Note
that the sampling period does not appear in the set, nor in (5.27). Thus (5.27) is applicable
for any T > 0. See Problems 4.34 and 4.35. If T = 0.2, then the sequence is as shown
in Figure 4.19(a) and its spectrum is shown in Figure 4.19(b). Note that the spectrum is
periodic with period 2π/T = 10π = 31.4 and its plot for ω only in the Nyquist frequency
range (NFR) (−π/T, π/T ] = (−15.7, 15.7] is of interest. The second set of numbers Xd is the
three equally-spaced samples of the frequency spectrum of x for ω in [0, 2π/T ) = [0, 31.4) or at ω = 0, 10.47, and 20.94. Note that FFT computes the samples of Xd (ω) for ω in [0, 2π/T ),
rather than in the NFR, for the convenience of indexing. Because the spectrum is complex,
we compute its magnitude by typing abs(Xd) and its phase by typing angle(Xd) which yield,
respectively,
abs(Xd) = 0 3 3
and
angle(Xd) = 0 1.0472 − 1.0472
They are plotted in Figure 4.19(b) with solid and hollow dots. They are indeed the three
samples of Xd (ω) for ω in [0, 31.4). See Problem 4.35.
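These numbers can be double-checked by evaluating (5.27) directly. The Python sketch below computes the same sums that fft computes in MATLAB for the data [1 -2 1]:

```python
import cmath

def dft(x):
    # X[m] = sum over n of x[n]*exp(-j*2*pi*m*n/N), m = 0 : N-1, as in (5.27)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N))
            for m in range(N)]

X = dft([1.0, -2.0, 1.0])
print([abs(v) for v in X])           # magnitudes: approximately [0, 3, 3]
print([cmath.phase(v) for v in X])   # phases: approx. [0, 1.0472, -1.0472]
```

Note that Python's cmath.phase, like MATLAB's angle, returns 0 for a zero complex number, which is the discrepancy with analytical computation mentioned in Problem 4.35.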
In conclusion, if x is a row vector of N entries, then fft(x) will generate a row vector of
N entries. The sampling period T comes into the picture only in placing the result of fft(x)
at ω m = m[2π/(N T )] for m = 0 : N − 1. This will be used to plot spectra in the next two
subsections. Recall that Xd (ω) is a continuous function of ω and X[m], m = 0 : N − 1, yield
only samples of Xd (ω) at ω = mD. Once X[m] are computed, we must carry out interpolation
using, for example, the MATLAB function plot to obtain Xd (ω) for ω in [0, ω s ).
% Subprogram A
X=T*fft(x);
m=0:N-1;D=2*pi/(N*T);
plot(m*D,abs(X))
then it will generate the magnitude of T Xd (ω) for ω in [0, ω s ). Note that the MATLAB
function plot carries out linear interpolation as shown in Figure 2.8(c). Thus the
magnitude so generated is a continuous function of ω in [0, ω s ). The program is not a
complete one, thus we call it a subprogram.
4. If the generated magnitude plot of T Xd (ω) is significantly different from zero in the
neighborhood of ω s /2 (the middle part of the plot), then the effect of frequency aliasing
is significant. Thus we go back to step 1 and select a larger N or, equivalently, a smaller
T . We repeat the process until the generated magnitude plot of T Xd (ω) is practically
zero in the neighborhood of ω s /2 (the middle part of the plot). If so, then frequency
aliasing is generally negligible. Subprogram A generates a plot for ω in [0, ω s ) in which
the second half is not used in (5.22). We discuss next how to generate a plot for ω only
in [0, ω s /2].
5. If m = 0 : N − 1, the frequency range of mD is [0, ω s ). Let us assume N to be even6 and
define mp = 0 : N/2. Because (N/2)D = π/T = ω s /2, the frequency range of mp × D
is [0, ω s /2]. In MATLAB, if we type
% Subprogram B
X=T*fft(x);
mp=0:N/2;D=2*pi/(N*T);
plot(mp*D,abs(X(mp+1)))
then it will generate the magnitude of T Xd (ω) for ω in [0, ω s /2]. Note that MATLAB automatically assigns the N entries of X to X(k) with k = 1 : N . In Subprogram B,
if we type plot(mp*D, abs(X)), then an error message will appear because the two
sequences have different lengths (mp*D has length (N/2) + 1 and X has length N ).
Typing plot(mp*D,abs(X(mp))) will also incur an error message because the index of X cannot be 0. This is the reason for using X(mp+1) in Subprogram B. The
program generates the magnitude of T Xd (ω) for ω in [0, ω s /2]. If the plot decreases to
zero at ω s /2, then frequency aliasing is generally negligible (see Problem 5.13) and we
have ½
T |Xd (ω)| for 0 ≤ ω ≤ ω s /2
|X(ω)| ≈
0 for ω > ω s /2
In most applications, we use Subprogram B; there is no need to use Subprogram A.
6. In item (5) we assumed N to be even. If N is odd, Subprogram B is still applicable
without any modification. The only difference is that the resulting plot is for ω in
[0, ω s /2) instead of [0, ω s /2] as in item (5). See Problems 5.6 and 5.7.
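Since most languages index from 0 rather than from 1 as MATLAB does, the bookkeeping of Subprograms A and B is worth restating. The Python sketch below builds the two frequency grids (N and T are the illustrative values used in Example 5.3.1 below):

```python
import math

N = 100                           # number of time samples (even)
T = 0.3                           # sampling period
D = 2 * math.pi / (N * T)         # frequency resolution, as in Subprogram A

full = [m * D for m in range(N)]            # m = 0 : N-1, covers [0, ws)
half = [m * D for m in range(N // 2 + 1)]   # m = 0 : N/2, covers [0, ws/2]

print(len(half))                       # N/2 + 1 grid points
print(full[-1], 2 * math.pi / T - D)   # last full-grid frequency is ws - D
print(half[-1], math.pi / T)           # last half-grid frequency is ws/2
```

In 0-based indexing the first half of the FFT output is simply X[0 : N//2 + 1], which is exactly what the 1-based X(mp+1) selects in MATLAB.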
Example 5.3.1 We use the preceding procedure to compute the magnitude spectrum of
x(t) = 3e−0.2t cos 10t for t ≥ 0. It is a signal of infinite length. Thus we must truncate it to a
finite length. Arbitrarily we select L = 30. Because e−0.2t decreases monotonically to 0 and
| cos 10t| ≤ 1, we have
|x(t)| ≤ 3e−0.2L = 3e−0.2×30 = 0.0074
for all t ≥ 30, which is practically zero. Thus we may use x(t) for t in [0, 30) to compute the
spectrum of x(t) for t in [0, ∞). Next we select N = 100. We type in an edit window
6 This assumption will be removed in item (6).
5.3. DISCRETE FOURIER TRANSFORM (DFT) AND FAST FOURIER TRANSFORM (FFT) 103
Figure 5.3: (a) Magnitude of T Xd (ω) for ω in [0, ω s ) with T = 0.3. (b) For ω in [0, ω s /2].
%Program 5.2(f53.m)
L=30;N=100;T=L/N;n=0:N-1;
x=3*exp(-0.2*n*T).*cos(10*n*T);
X=T*fft(x);
m=0:N-1;D=2*pi/(N*T);
subplot(1,2,1)
plot(m*D,abs(X))
title(’(a) Spectrum in [0, 2*pi/T)’)
axis square,axis([0 2*pi/T 0 8])
xlabel(’Frequency (rad/s)’),ylabel(’Magnitude’)
mp=0:N/2;
subplot(1,2,2)
plot(mp*D,abs(X(mp+1)))
title(’(b) Spectrum in [0, pi/T]’)
axis square,axis([0 pi/T 0 8])
xlabel(’Frequency (rad/s)’),ylabel(’Magnitude’)
The first two lines generate the N samples of x(t). The third line uses FFT to compute N
frequency samples of T Xd (ω) in [0, 2π/T ) = [0, ω s ) located at frequencies mD for m = 0 :
N − 1. We use linear interpolation to plot in Figure 5.3(a) the magnitude of T Xd (ω) for ω in
[0, ω s ) = [0, 20.9). We plot in Figure 5.3(b) only the first half of Figure 5.3(a). It is achieved
by changing the frequency indices from m=0:N-1 to mp=0:N/2. Let us save Program 5.2
as f53.m. If we type f53 in the command window, then Figure 5.3 will appear in a figure
window.
Does the plot in Figure 5.3(b) yield the spectrum of x(t) for ω in [0, ω s /2] as in (5.22)?
As discussed earlier, in order for (5.22) to hold, the computed |T Xd (ω)| must decrease to
zero at π/T = 10.45. This is not the case. Thus frequency aliasing is appreciable and the
computed T Xd (ω) will not approximate well the spectrum of x(t) for ω in [0, π/T ] = [0, 10.5].
In other words, the sampling period T = L/N = 0.3 used in Program 5.2 is too large and we must select a smaller one.
Example 5.3.2 We repeat Example 5.3.1 by selecting a larger N . Arbitrarily we select
N = 1000. Then we have T = L/N = 30/1000 = 0.03 which is one tenth of T = 0.3.
Program 5.2 with these N and T will yield the plots in Figures 5.4(a) and (b). We see that |T Xd (ω)| in Figure 5.4(a) is practically zero in the middle part of the frequency range or, equivalently, |T Xd (ω)| in Figure 5.4(b) decreases to zero at π/T = 104.7. Thus (5.22) is applicable and the magnitude spectrum of x(t) for 0 ≤ ω ≤ π/T = 104.7 is as shown in Figure 5.4(b) and is zero for ω > 104.7. The FFT-computed magnitude spectrum in Figure 5.4(b) is indistinguishable from the exact one shown in Figure 4.8(a) for ω ≥ 0. This shows that the
Figure 5.4: (a) Magnitude of T Xd (ω) for ω in [0, 2π/T ) = [0, ω s ) = [0, 209.4) with T = 0.03.
(b) For ω in [0, ω s /2] = [0, 104.7] (rad/s).
magnitude spectrum of x(t) can indeed be computed from its samples x(nT ) if T is selected to be sufficiently small.
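As a cross-check of this example, the exact spectrum of x(t) = 3e−0.2t cos 10t, t ≥ 0, is X(ω) = 1.5[1/(0.2 + j(ω − 10)) + 1/(0.2 + j(ω + 10))], a standard transform pair. The Python sketch below compares it with T Xd (ω) for the same L, N , and T as in this example:

```python
import cmath
import math

def X_exact(w):
    # Exact spectrum of x(t) = 3*exp(-0.2*t)*cos(10*t), t >= 0 (a standard
    # transform pair, stated here as the reference for comparison)
    return 1.5 * (1.0 / (0.2 + 1j * (w - 10)) + 1.0 / (0.2 + 1j * (w + 10)))

def T_Xd(w, T, N):
    # T * Xd(w) computed from the N samples x(nT); this is the quantity
    # that Program 5.2 evaluates with FFT at the grid frequencies m*D
    return T * sum(3.0 * math.exp(-0.2 * n * T) * math.cos(10.0 * n * T)
                   * cmath.exp(-1j * w * n * T) for n in range(N))

L, N = 30.0, 1000
T = L / N                                  # T = 0.03, as in Example 5.3.2
for w in (0.0, 5.0, 10.0, 20.0):
    print(w, abs(X_exact(w)), abs(T_Xd(w, T, N)))
```

The two magnitude columns agree to within about 0.1 over this range, consistent with Figure 5.4(b) being indistinguishable from the exact spectrum in Figure 4.8(a).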
With the frequency index mn = −N/2 : N/2 − 1 used below, the first frequency is
\[ -\frac{N}{2}\cdot\frac{2\pi}{NT} = -\frac{\pi}{T} = -0.5\,\omega_s \]
and the last frequency is
\[ \frac{N-2}{2}\cdot\frac{2\pi}{NT} = \frac{(N-2)\pi}{NT} = \frac{N-2}{N}\cdot\frac{\pi}{T} < \frac{\pi}{T} = 0.5\,\omega_s \]
7 This section will be used only in generating Figure 5.12 in a later section and may be skipped.
5.4. MAGNITUDE SPECTRA OF MEASURED DATA 105
Figure 5.5: FFT computed magnitude (solid line) and phase (dotted line) spectra of x(t).
% Subprogram C
X=T*fft(x);
mn=-N/2:N/2-1;D=2*pi/(N*T);
plot(mn*D,abs(fftshift(X)))
then the program will plot the magnitude of T Xd (ω) for ω in [−ω s /2, ω s /2). It is the mag-
nitude spectrum of x(t) if the plot decreases to zero at ±ω s /2.
Let us return to Example 5.3.2. Now we modify Program 5.2 as
%Program 5.3(f55.m)
L=30;N=1000;T=L/N;n=0:N-1;D=2*pi/(N*T);
x=3*exp(-0.2*n*T).*cos(10*n*T);
X=T*fft(x);
mn=-N/2:N/2-1;
plot(mn*D,abs(fftshift(X)),mn*D,angle(fftshift(X)),':')
This program will yield the plot in Figure 5.5. We see that by changing the frequency index
to mn and using fftshift, we can readily plot spectra for positive and negative frequencies.
Note that the last three lines of Program 5.3 can be replaced by
X=fftshift(T*fft(x));mn=-N/2:N/2-1;
plot(mn*D,abs(X),mn*D,angle(X),':')
Figure 5.6: (a) Magnitude spectrum in [0, 0.5fs ] (Hz) of the signal in Figure 1.1. (b) Cor-
responding phase spectrum. (c) Magnitude spectrum in [110, 125] (Hz). (d) Corresponding
phase spectrum.
%Program 5.4(f56.m)
xb=wavread(’f11.wav’);
x=xb([2001:53500]);
N=length(x);fs=24000;T=1/fs;D=fs/N;
X=T*fft(x);
mp=0:N/2;
subplot(2,2,1)
plot(mp*D,abs(X(mp+1))),title(’(a)’)
subplot(2,2,2)
plot(mp*D,angle(X(mp+1))),title(’(b)’)
subplot(2,2,3)
plot(mp*D,abs(X(mp+1))),title(’(c)’)
axis([110 125 0 0.005])
subplot(2,2,4)
plot(mp*D,angle(X(mp+1))),title(’(d)’)
axis([110 125 -4 4])
This is the main part of the program that generates Figure 5.6. The first two lines are those
of Program 3.5. The sampling frequency is fs = 24, 000 Hz. Thus we have T = 1/fs .
The frequency resolution D = 2π/N T defined in (5.25) is in the unit of rad/s. It becomes
D = 1/N T = fs /N in the unit of Hz. Thus the frequency range of mp*D is [0, 1/2T ] = [0, fs /2]
in Hz. Figures 5.6(a) and (b) show the magnitude and phase spectra in [0, fs /2] = [0, 12000]
(Hz) of the time signal shown in Figure 1.1. In order to see them better, we zoom in on their segments in the frequency range [110, 125] (Hz) in Figures 5.6(c) and (d). This is achieved by
Figure 5.7: (a) Magnitude spectrum in [0, 0.5fs ] = [0, 4000] (Hz) of the signal in Figure 1.2
sampled with sampling frequency 8000 Hz. (b) In [126, 130] (Hz).
typing axis([110 125 y min y max]) as shown. We see that the phase spectrum is erratic
and is of no use.
Is the magnitude spectrum correct? Because the computed |T Xd (ω)| is not small in
the neighborhood of π/T (rad/s) or fs /2 = 12000 (Hz) as shown in Figure 5.6(a), the plot
incurs frequency aliasing. Moreover it is known that spectra of speech are limited mostly to [20, 4000] (Hz). The magnitude spectrum shown in Figure 5.6(a) is not so limited. This has a simple explanation. The signal recorded contains background noises. The microphone and
the cable used also generate noises. No wonder, recording requires a sound-proof studio and
expensive professional equipment. In conclusion, the plot in Figure 5.6 shows the spectrum
of not only the sound “signals and systems” uttered by the author but also noises.
We next compute the spectrum of the signal shown in Figure 1.2. We plot only its
magnitude spectrum. Recall that its sampling frequency is 8,000 Hz. The program that
follows
will generate Figure 5.7 in a figure window. Figure 5.7(a) shows the magnitude spectrum
for the entire positive Nyquist frequency range [0, fs/2] = [0, 4000] (Hz); Figure 5.7(b)
shows a small segment for frequencies in [126, 130] (Hz). We see that most of the energy of
the signal is concentrated around roughly 128 Hz. This is consistent with the fact that the
signal is generated by a 128-Hz tuning fork. The peak magnitude, however, occurs at 128.15 Hz.
Because of the use of tic and toc, Program 5.5 also returns the elapsed time, 0.078
second. Direct computation of the spectrum would take over 28 minutes; FFT computation
takes only 0.078 second, an improvement by a factor of over twenty thousand. Thus FFT
is indispensable in computing spectra of real-world signals.
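The speedup reflects the O(N²) cost of a direct DFT versus the O(N log N) cost of the FFT. A small NumPy sketch (with N shrunk to 1024 so the direct version finishes quickly) confirms that the two computations produce the same spectrum:

```python
import numpy as np
import time

N = 1024
rng = np.random.default_rng(0)
x = rng.standard_normal(N)

# Direct DFT: an N-by-N matrix-vector product, about N^2 operations.
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)    # DFT matrix
t0 = time.perf_counter(); X_direct = W @ x; t_direct = time.perf_counter() - t0

# FFT: the same N frequency samples in O(N log N) operations.
t0 = time.perf_counter(); X_fft = np.fft.fft(x); t_fft = time.perf_counter() - t0

err = np.max(np.abs(X_direct - X_fft))
print(err)          # the two spectra agree to rounding error
```

At this small N the timing gap is already visible; at the N of Program 5.5 the direct method becomes impractical.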
108 CHAPTER 5. SAMPLING THEOREM AND SPECTRAL COMPUTATION
Figure 5.8: (a) Magnitude spectrum in [0, 2000] (Hz) of the signal in Figure 1.2 sampled with
sampling frequency 4000 Hz. (b) In [126, 130] (Hz).
5.4.1 Downsampling
The magnitude spectrum of the signal in Figure 1.2, as we can see from Figure 5.7(a), is
practically zero for frequencies larger than fmax = 200 Hz. According to the sampling theorem,
any sampling frequency larger than 2fmax = 400 Hz can be used to compute the spectrum
of x(t) from its sampled sequence x(nT ). Clearly the sampling frequency fs = 8 kHz used
to generate the sampled sequence x is unnecessarily large or the sampling period T is
unnecessarily small. This is called oversampling. We show in this subsection that the same
magnitude spectrum of the signal can be obtained using a larger sampling period. In order
to utilize the data stored in x, a new sampling period must be selected as an integer multiple
of T := 1/8000. For example, if we select the new sampling period as T1 = 2T, then the new
sampled sequence will consist of every other term of x. It can be generated as y=x([1:2:N]),
where N is the length of x. Recall that the internal indices of x range from 1 to N, and 1:2:N
generates the indices 1, 3, 5, . . . , up to at most N. Thus the new sequence y consists of
every other term of x. This is called downsampling. The program that follows
will generate Figure 5.8. Note that for T1 = 2T, the new sampling frequency is fs1 = fs/2 =
4000 (Hz) and the positive Nyquist frequency range is [0, fs1/2] = [0, 2000] (Hz), as shown in
Figure 5.8(a). Its segment in [126, 130] is shown in Figure 5.8(b). We see that the plot in
Figure 5.8(b) is indistinguishable from the one in Figure 5.7(b), even though the sampling
period of the former is twice that of the latter.
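The downsampling step can be sketched in NumPy; a synthetic 128-Hz tone stands in for the recorded tuning-fork data. Taking every other sample halves the sampling frequency but leaves the spectral peak where it was:

```python
import numpy as np

fs = 8000; T = 1 / fs
N = 8000                                  # one second of data
t = np.arange(N) * T
x = np.sin(2 * np.pi * 128 * t)           # stand-in for the tuning-fork signal

def peak_freq(sig, Ts):
    """Frequency (Hz) at which the FFT-computed magnitude spectrum peaks."""
    Ns = len(sig)
    X = Ts * np.fft.fft(sig)
    mp = np.arange(Ns // 2 + 1)
    D = 1 / (Ns * Ts)                     # frequency resolution in Hz
    return mp[np.argmax(np.abs(X[mp]))] * D

y = x[::2]                                # keep every other sample: T1 = 2T
print(peak_freq(x, T), peak_freq(y, 2 * T))   # both peaks at 128.0 Hz
```

The same slicing idea extends to any integer factor a via x[::a], as with a=10 below.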
If we change a=2 in Program 5.6 to a=10, then the program will yield the plots in Figure
5.9. The spectrum in Figure 5.9 is obtained by selecting its sampling period as T2 = 10T or
Figure 5.9: (a) Magnitude spectrum in [0, 400] (Hz) of the signal in Figure 1.2 sampled with
sampling frequency 800 Hz. (b) In [126, 130] (Hz).
its sampling frequency as 800 Hz. Thus it uses only about one tenth of the data in x. The
result also appears acceptable.
The first three lines are taken from Program 3.6. The rest is similar to Program 5.5. The
program will generate in Figure 5.10(a) the magnitude spectrum for the entire positive Nyquist
frequency range [0, 0.5fs] = [0, 11025] Hz and in Figure 5.10(b) for frequencies between 200
and 600 Hz. It is the magnitude spectrum of the middle C shown in Figure 1.3. Its spectrum
has narrow spikes at roughly fc = 260 Hz and kfc, for k = 2, 3, 4, · · ·. Moreover, the energy
around 2fc = 520 Hz is larger than the energy around 260 Hz.8 In any case, it is incorrect to
think that middle C consists of only a single frequency at 261.6 Hz as listed in Wikipedia.
From Figure 5.10(a) we see that the spectrum is practically zero for frequencies larger than
2500 Hz. Thus we may consider middle C to be bandlimited to fmax = 2500 Hz. Clearly the
sampling frequency fs = 22050 Hz used in obtaining Figure 1.3 is unnecessarily large. Thus
we may select a smaller sampling frequency f̄s which is larger than 2fmax = 5000 Hz. In order
to utilize the data obtained using fs = 22050, we select f̄s = fs/4 = 5512.5 or T̄ = 4T = 4/22050.
Using a program similar to Program 5.6 we can obtain the magnitude spectrum shown in Figure
5.11. It is indistinguishable from the one in Figure 5.10(a).
8 The grand piano I used may be out of tune.
Figure 5.10: (a) Magnitude spectrum in [0, 11025] (Hz) of the middle C in Figure 1.3 with
sampling frequency 22050 Hz. (b) In [200, 600] (Hz).
Figure 5.11: Magnitude spectrum in [0, 0.5f¯s ] = [0, 2756.25] (Hz) of the middle C in Figure
1.3 with sampling frequency f¯s = fs /4 = 5512.5 (Hz).
5.5. FFT-COMPUTED MAGNITUDE SPECTRA OF FINITE-DURATION STEP FUNCTIONS 111
2. In recording real-world signals, they are often corrupted by noise from the devices used
and from other sources.
3. In digital recording of real-world signals, we should use the smallest possible sampling
period or the highest possible sampling frequency, and hope that frequency aliasing due
to time sampling will be small. If it is not small, there is nothing we can do, because we
have already used the smallest sampling period available.
4. Once the sampled sequence of a CT signal x(t) is available, we can use Subprogram
B or C to compute and plot the magnitude spectrum of x(t) for ω in [0, ω s /2] or
in [−ω s /2, ω s /2) (rad/s), or for f in [0, fs /2] or in [−fs /2, fs /2) (Hz). Even though
the phase spectrum can also be generated, it may not be correct. Fortunately, we use
mostly magnitude spectra in practice.
5. In digital processing of a CT signal, the smaller the T used, the more difficult the design
of digital filters becomes. See Subsection 11.10.1. Thus it is desirable to use the largest
possible T . Such a T can be found by downsampling as discussed in the preceding subsection.
will generate the magnitude spectrum of xL (t) for ω in [−π/T, π/T ) = [−157, 157) (rad/s).
To see the triangle at ω = 0 better, we use Subprogram C to plot the spectrum for
[Figure 5.12: FFT-computed magnitude spectra of xL (t) for (a) L = 4, (b) L = 16, and (c)
L = 160; horizontal axes: Frequency (rad/s).]
positive and negative frequencies. We plot in Figure 5.12(a) only ω in [−20, 20]. This is
achieved by typing the axis function axis([-20 20 0 5]).
The spectrum in Figure 5.12(a) is a triangle. We explain how it is obtained. Recall that
FFT generates N frequency samples of XL (ω) at ω = mD = m(2π/N T ) = m(2π/L), for
m = 0 : N − 1. In our program, because no semicolon is typed after X=T*fft(x), the program
will generate Figure 5.12(a) in a figure window as well as return 200 numbers in the command
window. They are
4 0 0 ··· 0 0 (5.29)
That is, 4 followed by 199 zeros. If we shift the second half of (5.29) to negative frequencies
and then connect neighboring points with straight lines, we obtain the triangle shown in
Figure 5.12(a). The height of the triangle is 4, or XL (0) = 4 = L. Because D = 2π/L = 1.57,
the base is located at [−1.57, 1.57] or [−2π/L, 2π/L]. Thus the area of the triangle is 2π.
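The 200 numbers in (5.29) are easy to reproduce. With T = 0.02 and N = 200 (so L = NT = 4), the FFT of the all-ones sequence is nonzero only at m = 0; in NumPy:

```python
import numpy as np

T = 0.02; N = 200
L = N * T                         # L = 4
x = np.ones(N)                    # samples of x_L(t) = 1 on [0, L)
X = T * np.fft.fft(x)             # frequency samples at m*(2*pi/L)

print(abs(X[0]))                  # 4.0 = X_L(0) = L
print(np.max(np.abs(X[1:])))      # every other sample is (numerically) zero
```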
Next we select N = 800 and then N = 8000, or L = N T = 16 and then L = 160. Program
5.8 with these N and modified axis functions will yield the magnitude spectra in Figures
5.12(b) and (c). Each consists of a triangle centered at ω = 0 with height L and base
[−2π/L, 2π/L], and is zero elsewhere. As L increases, the triangle becomes higher and its
base narrower; however, its area remains 2π. Thus we may induce from the FFT-computed
results that the magnitude spectrum of the step function xL (t) = 1 with L = ∞ is 2πδ(ω),
an impulse located at ω = 0 with weight 2π.
This conclusion is inconsistent with the mathematical result. The spectrum of x(t) = 1,
for t in [0, ∞), is πδ(ω) + 1/jω. The FFT-computed spectrum yields an impulse at ω = 0 but
with weight 2π and shows no sign of 1/jω. Actually the FFT-computed spectrum leads
to the spectrum of x(t) = 1 for all t in (−∞, ∞). It is most confusing. Is it a paradox
involving ∞ as in (3.16)? Or is there an error in the FFT computation?
We will show in the next section that the FFT-computed spectrum of xL (t) is mathe-
matically correct for any finite L, even though it leads to a questionable result for L = ∞.
Fortunately, in engineering, there is no need to be concerned with ∞.
Figure 5.13: (a) Exact magnitude spectrum of xL (t) for L = 4. (b) L = 16.
Figure 5.14: (a) Exact magnitude spectrum of x4 (t) (dotted line), FFT-computed magnitudes
(dots) using N = 200 samples of x4 (t), and their linear interpolation (solid line). (b) With
additional N trailing zeros. (c) With additional 19N trailing zeros.
yields in Figure 5.13(a) the magnitude spectrum of xL (t) for L = 4. Figure 5.13(b) shows the
magnitude spectrum of xL (t) for L = 16.
Now we show that even though the FFT-generated magnitude spectrum in Figure 5.12
looks very different from the exact magnitude spectrum shown in Figure 5.13, the result in
Figure 5.12 is in fact correct. Recall that FFT computes the samples of XL (ω) at ω m =
2mπ/N T . We superpose Figure 5.12(a) (solid line) and Figure 5.13(a) (dotted line) in Figure
5.14(a) for ω in [0, 10]. Indeed the FFT-computed results are the samples of XL (ω), which
happen to be all zero except at ω = 0.9 In conclusion, the FFT-generated magnitude
spectrum in Figure 5.12 is correct. Figure 5.12 looks different from Figure 5.13 only because
of the poor frequency resolution and the use of linear interpolation. This problem will be
resolved in the next subsection.
9 This can also be verified analytically.
Then it will generate the solid dots and the solid line in Figure 5.14(b). Note that the solid dots
are generated by plot(mp*D,abs(X(mp+1)),'.') and the solid line by plot(mp*D,abs(X(mp+1))).
See Problem 3.29. Note that two or more plot functions can be combined as in the fifth line of
the program. We also plot in Figure 5.14(b), with a dotted line, the exact magnitude spectrum
for comparison. Program 5.10 uses Nb=2*N, that is, it adds 200 trailing zeros; thus its frequency
resolution is half that of Program 5.8. Consequently Program 5.10 generates one extra
point between any two adjacent points in Figure 5.14(a), as shown in Figure 5.14(b). We
see that the exact and computed spectra become closer. If we use Nb=20*N, then the program
generates 19 extra points between any two adjacent points in Figure 5.14(a), and the com-
puted magnitude spectrum (solid line) is indistinguishable from the exact one (dotted line),
as shown in Figure 5.14(c). In conclusion, the exact magnitude spectrum in Figure 5.13(a)
can also be obtained using FFT by introducing trailing zeros.
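The effect of trailing zeros can be verified numerically: padding does not change XL(ω); it only samples it on a finer grid. The NumPy sketch below compares the padded FFT samples against the exact magnitude |XL(ω)| = |2 sin(ωL/2)/ω| for ω up to 10 rad/s:

```python
import numpy as np

T = 0.02; N = 200
L = N * T                               # L = 4, as in Figure 5.14
x = np.ones(N)
Nb = 20 * N                             # N samples plus 19N trailing zeros
X = T * np.fft.fft(x, n=Nb)             # fft zero-pads x to length Nb

D = 2 * np.pi / (Nb * T)                # finer resolution 2*pi/(Nb*T), in rad/s
m = np.arange(1, int(10 / D))           # frequency samples in (0, 10] rad/s
w = m * D
exact = np.abs(2 * np.sin(w * L / 2) / w)   # |X_L(w)| for the length-L step

err = np.max(np.abs(np.abs(X[m]) - exact))
print(err)     # the padded FFT samples fall on the exact spectrum
```

The small residual error comes from time sampling with T = 0.02, not from the zero padding itself.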
Figure 5.15: (a) A dial tone over 6.5 seconds. (aa) Its small segment. (b) Magnitude spectrum
of (a). (bb) Its segment between [200, 600].
accept a telephone number. We use an MS sound recorder (with audio format: PCM, 8.000
kHz, 8 Bit, and Mono) to record a dial tone over 6.5 seconds as shown in Figure 5.15(a). We
plot in Figure 5.15(aa) its small segment for t in [2, 2.05]. We then use a program similar to
Program 5.5 to plot the magnitude spectrum of the dial tone in Figure 5.15(b) for the entire
positive Nyquist frequency range and in Figure 5.15(bb) its small segment for frequency in
[200, 600] (Hz). It verifies that the dial tone is generated by two frequencies 350 and 440 Hz.
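The two spectral lines can be reproduced with a synthesized dial tone; a one-second sum of 350-Hz and 440-Hz sinusoids stands in here for the recording:

```python
import numpy as np

fs = 8000; T = 1 / fs
t = np.arange(0, 1.0, T)                     # one second of synthetic dial tone
x = np.sin(2*np.pi*350*t) + np.sin(2*np.pi*440*t)

N = len(x)
X = T * np.fft.fft(x)
mp = np.arange(N // 2 + 1)
D = fs / N                                   # 1 Hz resolution here
mag = np.abs(X[mp])

top2 = sorted(mp[np.argsort(mag)[-2:]] * D)  # the two strongest frequencies
print(top2)                                  # 350 Hz and 440 Hz
```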
We repeat the process by recording a ringback tone. Figure 5.16(a) shows a ringback
tone for about 8 seconds. We plot in Figure 5.16(aa) its small segment for t in [4, 4.1]. We
then use a program similar to Program 5.5 to plot its magnitude spectrum in Figure 5.16(b)
for the entire positive Nyquist frequency range and in Figure 5.16(bb) its small segment for
frequency in [350, 550] (Hz). It verifies that the ringback tone is generated by two frequencies
440 and 480 Hz.
In a touch-tone telephone, each number generates a tone with two frequencies according
to the following table:
         1209 Hz   1336 Hz   1477 Hz
697 Hz      1         2         3
770 Hz      4         5         6
852 Hz      7         8         9
941 Hz      *         0         #
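One simple way to detect which key was pressed, sketched below in NumPy, is to evaluate the DFT at only the seven touch-tone frequencies and pick the strongest row and column entries. This correlation detector is an illustration, not the book's method; practical receivers often use the Goertzel algorithm for the same purpose.

```python
import numpy as np

fs = 8000; T = 1 / fs
rows = [697, 770, 852, 941]                 # row frequencies (Hz)
cols = [1209, 1336, 1477]                   # column frequencies (Hz)
keys = [['1','2','3'], ['4','5','6'], ['7','8','9'], ['*','0','#']]

def detect(x):
    """Return the key whose row/column frequencies are strongest in x."""
    n = np.arange(len(x))
    def strength(f):
        # magnitude of the DTFT of x evaluated at the single frequency f
        return abs(np.sum(x * np.exp(-2j * np.pi * f * n / fs)))
    r = max(range(len(rows)), key=lambda i: strength(rows[i]))
    c = max(range(len(cols)), key=lambda j: strength(cols[j]))
    return keys[r][c]

# A synthetic '5' keypress: 770 Hz + 1336 Hz, 50 ms long.
t = np.arange(0, 0.05, T)
tone = np.sin(2*np.pi*770*t) + np.sin(2*np.pi*1336*t)
print(detect(tone))                          # '5'
```

Only seven single-frequency evaluations are needed per tone, far less work than a full spectrum.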
In detecting these numbers, there is no need to compute their spectra. All that is needed is to
detect the existence of such frequencies. Thus for each tone, we need to compute its spectrum
Figure 5.16: (a) A ringback tone over 8 seconds. (aa) Its small segment. (b) Magnitude
spectrum of (a) in [0, 4000] (Hz). (bb) Its segment in [350, 550].
5.7. DO FREQUENCY SPECTRA PLAY ANY ROLE IN REAL-TIME PROCESSING? 117
5.7.1 Spectrogram
Even though we use only bandwidths of signals in this text, magnitude spectra of signals
are used in the study of speech. Because the Fourier transform is a global property, it is
difficult to extract local characteristics of a time signal from its spectrum. Thus in speech
analysis, we divide the time interval into small segments of duration, for example, 20 ms,
and compute the spectrum of the time signal in each segment. We then plot the magnitude
spectra against time and frequency. Note that the phase spectra are discarded. Instead of
plotting the magnitude spectra using a third ordinate, magnitudes are represented by different
shades or different colors on the time-frequency plane. The same shade or color is painted over
the entire time segment and over a small frequency range of width, for example, 200 Hz. The
shade is determined by the magnitude at, for example, the midpoint of the range, and the
magnitude is quantized to the limited number of shades. The resulting two-dimensional
plot is called a spectrogram. It reveals the change of the magnitude spectrum with time. It is
used in speech recognition and identification. See References [M4, S1].
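A minimal spectrogram along these lines: chop the signal into 20-ms segments, FFT each segment, and keep only the magnitudes. The test signal below is an assumption for illustration; it switches from 500 Hz to 1000 Hz halfway through, and the peak row moves accordingly.

```python
import numpy as np

def spectrogram(x, seg_len):
    """Magnitude spectra of consecutive segments; one column per segment."""
    n_seg = len(x) // seg_len
    cols = [np.abs(np.fft.fft(x[k*seg_len:(k+1)*seg_len])[:seg_len//2 + 1])
            for k in range(n_seg)]
    return np.array(cols).T          # rows: frequency bins, columns: time

fs = 8000
t = np.arange(0, 0.4, 1/fs)
# A tone that switches from 500 Hz to 1000 Hz halfway through.
x = np.where(t < 0.2, np.sin(2*np.pi*500*t), np.sin(2*np.pi*1000*t))

S = spectrogram(x, seg_len=160)      # 20 ms segments at fs = 8000 Hz
D = fs / 160                         # 50 Hz per frequency bin
print(np.argmax(S[:, 0]) * D, np.argmax(S[:, -1]) * D)   # 500.0 1000.0
```

Displaying S with shades or colors on the time-frequency plane gives the spectrogram described above.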
Problems
5.1 Consider a CT signal x(t). It is assumed that its spectrum is real and as shown in Figure
5.17. What are the spectra of x(nT ) and T x(nT ) if T = π/25? Is there frequency
aliasing?
[Figure 5.17: a real spectrum plotted against Frequency (rad/s) for ω in [−40, 40].]
1. Plot the magnitudes and phases of X(ω), X(ω−8) and X(ω+8), for ω in [−20, 20].
Can the magnitude spectra of X(ω − 8) and X(ω + 8) be obtained from the mag-
nitude spectrum of X(ω) by folding with respect to ±4? Can the phase spectra of
X(ω − 8) and X(ω + 8) be obtained from the phase spectrum of X(ω) by folding
with respect to ±4?
2. Compute analytically the spectrum of T Xd (ω) with sampling period T = π/4 and
then plot the magnitude and phase of T Xd (ω) for ω in the NFR (−π/T, π/T ] =
(−4, 4]. Can they be obtained from the plots in Part 1? Discuss the effect of t0 .
Can you conclude that the effect of frequency aliasing is complicated if X(ω) is
not real valued?
5.4* If x(t) is bandlimited to ω max and if T < π/ωmax , then X(ω) = T Xd (ω) for |ω| < π/T
as shown in (5.14). Substitute (5.7) into (5.14) and then into (4.22) to verify
x(t) = Σ_{n=−∞}^{∞} x(nT) · sin[π(t − nT)/T] / [π(t − nT)/T]
Thus for a bandlimited signal, its time signal x(t) can be computed exactly from its
samples x(nT ). This is dual to Problem 4.18 which states that for a time-limited signal,
its frequency spectrum can be computed exactly from its frequency samples.
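The reconstruction formula of Problem 5.4 can be checked numerically. Truncating the infinite sum to |n| ≤ 2000 and using np.sinc (which computes sin(πu)/(πu)) recovers a bandlimited cosine between its samples, up to a small truncation error:

```python
import numpy as np

wmax = 5.0                      # x(t) = cos(5t) is bandlimited to 5 rad/s
T = np.pi / 20                  # T < pi/wmax, so the formula applies
n = np.arange(-2000, 2001)      # truncation of the infinite sum
samples = np.cos(wmax * n * T)  # the samples x(nT)

def reconstruct(t):
    # x(t) = sum_n x(nT) sin[pi(t - nT)/T] / [pi(t - nT)/T]
    return np.sum(samples * np.sinc((t - n * T) / T))

for t in (0.1, 0.37, 1.234):
    print(reconstruct(t), np.cos(wmax * t))   # the two values nearly agree
```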
5.5 Consider x(t) defined for t in [0, 5). If we select N = 5 equally spaced samples of x(t),
where are x(nT ), for n = 0 : 4? What is its sampling period T ? Do we have L = N T ?
5.6 Consider x(t) defined for t in [0, 5]. If we select N = 5 equally spaced samples of x(t),
where are x(nT ), for n = 0 : 4? What is its sampling period T ? Do we have L = N T ?
5.9 Use FFT to compute the magnitude spectrum of aL (t) = cos 70t, for t defined in [0, L)
with L = 4. Select the sampling period as T = 0.001. Plot the magnitude spectrum
for ω in the positive Nyquist frequency range [0, π/T ] (rad/s) and then zoom in on the
frequency range [0, 200].
5.10 Consider Problem 5.9. From the magnitude spectrum obtained in Problem 5.9, can you
conclude that cos 70t, for t in [0, 4], is roughly band-limited to 200 rad/s? Note that it is
band-limited to 70 rad/s only if cos 70t is defined for all t in (−∞, ∞). Use T = π/200
to repeat Problem 5.9. Is the result roughly the same as the one obtained in Problem
5.9? Compare the numbers of samples used in both computations.
5.11 Use FFT to compute and to plot the magnitude spectrum of
for t defined in [0, L) with L = 50. Use N = 1000 in the computation. Are the peak
magnitudes at the three frequencies 20, 21, and 40 rad/s the same? If you repeat the
computation using L = 100 and N = 2000, will their peak magnitudes be the same?
5.12 Consider Problem 5.11 with L = 50 and N = 1000. Now introduce 4N trailing zeros into
the sequence and use 5N -point FFT to compute and to plot the magnitude spectrum
of x(t) for t in [0, 50). Are the peak magnitudes at the three frequencies 20, 21, and
40 rad/s roughly the same? Note that this and the preceding problem compute spectra
of the same signal. But the frequency resolution in this problem is 2π/5N T , whereas
the frequency resolution in Problem 5.11 is 2π/N T . The former is five times smaller (or
better) than the latter.
5.13 Consider the signal
x(t) = sin 20t cos 21t sin 40t
defined for t in [0, 50) and zero otherwise. It is time limited; therefore its spectrum
cannot be band-limited. Use FFT and N = 1000 to compute and to plot its magnitude
spectrum. At what frequencies are the spikes located? Use the formula
to verify the correctness of the result. Which spike is due to frequency aliasing? In
general if the FFT-computed spectrum approaches zero in the neighborhood of π/T ,
then there is no frequency aliasing. This example shows an exception because the
spectrum of x(t) consists of only spikes. Repeat the computation using N = 2000.
Where are the spikes located? Is frequency aliasing significant?
5.14 Consider the signal shown in Figure 5.18. The signal is defined for t ≥ 0; it equals
x(t) = 0.5t, for 0 ≤ t < 2 and then repeats itself every 2 seconds. Verify that the
[Figure 5.18: the signal of Problem 5.14; Amplitude versus Time (s) for t in [−1, 8].]
MATLAB program
T=0.01;t=0:T:2-T;x=0.5*t;N=length(x);
n=0:N-1;D=2*pi/(N*T);X=T*fft(x);
mp=0:N/2;plot(mp*D,abs(X(mp+1)))
axis([0 50 0 1])
will yield the plot in Figure 5.19(a). It is the magnitude spectrum of x(t) (one period).
Verify that Figure 5.19(b) shows the magnitude spectrum of five periods and Figure
5.19(c) shows the magnitude spectrum of 25 periods. (Hint: Use x1=[x x x x x] and
x2=[x1 x1 x1 x1 x1].)
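The growth of the peaks in Figure 5.19 can be verified exactly: tiling k periods of the sequence multiplies every harmonic sample of T*fft by k. In NumPy:

```python
import numpy as np

T = 0.01
t = np.arange(0, 2, T)           # one period, t in [0, 2)
x = 0.5 * t
X1 = T * np.fft.fft(x)           # spectrum of one period (200 samples)

x5 = np.tile(x, 5)               # five periods, as with x1=[x x x x x]
X5 = T * np.fft.fft(x5)          # 1000 frequency samples

# Harmonics of the 5-period signal sit at every 5th bin and are exactly
# 5 times the corresponding single-period samples.
print(np.max(np.abs(X5[::5] - 5 * X1)))   # zero up to rounding error
```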
Figure 5.19: Magnitude spectra of (a) one period, (b) five periods, and (c) twenty five periods
of the signal in Figure 5.18.
5.15 Let A be real and negative. What are its magnitude and phase? If A → 0, will |A| and
∠A approach zero?
Chapter 6
Systems – Memoryless
6.1 Introduction
Just like signals, systems exist everywhere. There are many types of systems. A system
that transforms a signal from one form to another is called a transducer. There are two
transducers in every telephone set: a microphone that transforms voice into an electrical
signal and a loudspeaker that transforms an electrical signal into voice. Other examples of
transducers are strain gauges, flow-meters, thermocouples, accelerometers, and seismometers.
Signals acquired by transducers often are corrupted by noise. The noise must be reduced or
eliminated. This can be achieved by designing a system called a filter. If a signal level is
too small or too large to be processed by the next system, the signal must be amplified or
attenuated by designing an amplifier. Motors are systems that are used to drive compact
discs, audio tapes, or huge satellite dishes.
The temperature of a house can be maintained automatically at a desired temperature
by designing a home heating system. By setting the thermostat at a desired temperature,
a burner will automatically turn on to heat the house when the temperature falls below the
desired temperature by a threshold and will automatically shut off when it returns to the
desired temperature. The US economy can also be considered a system. Using fiscal policy,
government spending, taxes, and interest rates, the government tries to ensure that the gross
domestic product (GDP) grows at a healthy but not overheated rate, that the unemployment
rate is acceptable, and that inflation is under control. Such a system is very complicated,
because it is affected by globalization, international trade, and political crises or wars, and
because consumers' psychology and rational or irrational expectations also play a role.
To a user, an audio compact disc (CD) player is simply a system; it generates an audio
signal from a CD. A CD player actually consists of many subsystems. A CD consists of one
or more spiral tracks running from the inner rim to the outer rim. The spacing between
tracks is 1.6 µm (10−6 meter); roughly, a human hair placed across the tracks can cover 40
of them. There are pits and lands along the tracks with width 0.6 µm and various lengths. The
transition from a pit to a land and vice versa denotes a 1 and no transition denotes a 0. A
CD may contain over 650 million such binary digits or bits. To translate this stream of bits
impressed on the disc’s plastic substrate into an electrical form requires three control systems.
A control system will spin the CD at various angular speeds, from roughly 500 rpm (revolutions
per minute) at the inner rim to 200 rpm at the outer rim, to maintain a constant linear velocity
of 1.2 m/s; another control system will position the reading head; yet another control system
will focus the laser beam on the track. The reflected beam is picked up by photodiodes to
generate streams of 1s and 0s in electrical form. The stream of bits contains not only the
16-bit data words of an audio signal sampled at 44.1 kHz but also bits for synchronization,
eight-to-fourteen modulation (EFM) (to avoid having long strings of 0s), and error correction
codes (to recover data from scratches and fingerprints). A decoder will extract from this
stream of bits the original audio signal in binary form. This digital signal will go through
a digital-to-analog converter (DAC) to drive a loudspeaker. Indeed a CD player is a very
complicated system. It took the joint effort of Philips and Sony Corporations over a decade
to bring the CD player to the market in 1982.
The word ‘systems’ appears in systems engineering, systems theory, linear systems, and
many others. Although they appear to be related, they deal with entirely different subject
areas. Systems engineering is a discipline dealing with macro design and management of
engineering problems. Systems theory or general systems theory, according to Wikipedia, is
an interdisciplinary field among ontology, philosophy, physics, biology, and engineering. It
focuses on the organization and interdependence of general relationships and is much broader than
systems engineering. Linear systems, on the other hand, deal with a small class of systems
which can be described by simple mathematical equations. The systems we will study in this
text belong to the last group. The study of linear systems began in the early 1960s and the
subject area has now reached maturity. We study in this text only the small part of linear
systems which is basic and relevant to engineering.
and only transfer functions in qualitative analysis. We also give reasons for using only transfer
functions in design.
Not every physical system can be approximated or modeled by an LTI lumped model.
However, a large number of physical systems can be so modeled under some approximations and
limitations, as we will discuss in the text. Using the models, we can then develop mathematical
equations and carry out analysis and design. Note that the design is based on the model
and often does not take the approximations and limitations into consideration. Moreover, the
design methods do not cover all aspects of actual implementation. For example, they do
not discuss, in designing a CD player, how to focus a laser beam on a spiral track of width
0.6 µm, how to detect misalignment, and how to maintain such high-precision control in
a portable player. These issues are far beyond the design methods discussed in most texts.
Solving these problems requires engineering ingenuity, creativity, trial-and-error, and years of
development. Moreover, textbook design methods are not concerned with reliability and cost.
Thus what we will introduce in the remainder of this text provides only the very first
step in the study and design of physical systems.
6.1.1 A terminal does not necessarily mean a physical wire sticking out of the box. It merely
means that a signal can be applied or measured at the terminal. The signal applied at the
input terminal is called an input or an excitation. The signal at the output terminal is called
an output or a response. The only condition for a black box to qualify as a system is that every
input excites a unique output. For example, consider the RC circuit shown in Figure 6.2(a).
It can be modeled as a black box with the voltage source as the input and the voltage across
the capacitor as the output. Figure 6.2(b) shows a block with mass m connected to a wall
through a spring. The connection of the two physical elements can also be so modeled, with
the applied force u(t) as the input and the displacement y(t), measured from the equilibrium
position,2 as the output. It is called a black box because we are mainly concerned with its
terminal properties. This modeling concept is a powerful one because it can be applied to
any system, be it electrical, mechanical or chemical.
A system is called a single-input single-output (SISO) system if it has only one input
terminal and only one output terminal. It is called a multi-input multi-output (MIMO)
system if it has two or more input terminals and two or more output terminals. Clearly, we
can have SIMO or MISO systems. We study in this text mostly SISO systems. However,
most concepts and computational procedures are directly applicable to MIMO systems.3
A SISO system is called a continuous-time (CT) system if the application of a CT signal
excites a CT signal at the output terminal. It is a discrete-time (DT) system if the application
of a DT signal excites a DT signal at the output terminal. The concepts to be introduced are
applicable to CT and DT systems.
2 It is the position where the block is stationary before the application of an input.
3 However the design procedure is much more complex. See Reference [C6].
6.4. CAUSALITY, TIME-INVARIANCE, AND INITIAL RELAXEDNESS 125
Figure 6.3: (a) An input-output pair. (b) If the input is shifted by t1 , then the output is
shifted by t1 .
output, then u(t), for t < t0 , is the past input; u(t0 ), the current input; and u(t), for t > t0 ,
the future input. It is simple to define mathematically a system whose current output depends
on past, current, and future inputs. See Problem 6.1.
A system is defined to be causal4 if its current output depends on past and/or current
inputs but not on future inputs. If a system is not causal, then its current output will depend
on a future input. Such a system can predict what will be applied in the future. No physical
system has such capability. Thus causality is a necessary condition for a system to be built in
the real world. This concept will be discussed further in Section 7.2. This text studies only
causal systems.5
If the characteristics of a system do not change with time, then the system is said to
be time invariant (TI). Otherwise it is time varying. For the RC circuit in Figure 6.2(a),
if the resistance R and the capacitance C are constant, independent of t, then the circuit
is time-invariant. So is the mechanical system if the mass m and the spring constant k are
independent of time. For a time-invariant system, no matter at what time we apply an input,
the output waveform will always be the same. For example, suppose the input u(t) shown on
the left hand side of Figure 6.3(a) excites the output y(t) shown on its right hand side. Now
if we apply the same input t1 seconds later as shown in Figure 6.3(b), and if the system is
time invariant, then the same output waveform will appear as shown in Figure 6.3(b), that
is,
u(t − t1 ), t ≥ t1 → y(t − t1 ), t ≥ t1 (time-shifting) (6.1)
Note that u(t − t1 ) and y(t − t1 ) are the shifting of u(t) and y(t) by t1 , respectively. Thus a
time-invariant system has the time-shifting property.
For a system to be time-invariant, the time-shifting property must hold for all t1 in [0, ∞).
Most real-world systems such as TVs, automobiles, computers, CD and DVD players will
break down after a number of years. Thus they cannot be time invariant in the mathematical
sense. However they can be so considered before their breakdown. We study in this text only
time-invariant systems.
The preceding discussion in fact requires some qualification. If we start to apply an input
at time t0 , we require the system to be initially relaxed at t0 . By this, we mean that the
output y(t), for t ≥ t0 , is excited exclusively by the input u(t), for t ≥ t0 . For example, when
we use the odometer of an automobile to measure the distance of a trip, we must reset the
meter. Then the reading will yield the distance traveled in that trip. Without resetting, the
reading will be incorrect. Thus when we apply to a system the inputs u(t) and u(t−t1 ) shown
in Figure 6.3, the system is required to be initially relaxed at t = 0 and t = t1 respectively.
If the output y(t) lasts until t2 as shown in Figure 6.3(a), then we require t1 > t2 . Otherwise
the system is not initially relaxed at t1 . In general, if the output of a system is identically
zero before we apply an input, the system is initially relaxed. Resetting a system or turning
a system off and then on will make the system initially relaxed.
4 Not to be confused with casual.
5 Some texts on signals and systems call positive-time signals causal signals. In this text, causality is defined
only for systems.
126 CHAPTER 6. SYSTEMS – MEMORYLESS
We study in this text only time-invariant systems. For such a system, we may assume,
without loss of generality, the initial time as t0 = 0. The initial time is not an absolute one;
it is the instant we start to study the system. Thus the time interval of interest is [0, ∞)
6.4.1 DT systems
We discuss the DT counterpart of what has been discussed so far for CT systems. A system is a
DT system if the application of a DT signal excites a unique DT signal. We use u[n] := u(nT )
to denote the input and y[n] := y(nT ) to denote the output, where n is the time index, which
can assume only integer values, and T is the sampling period. As discussed in Section 2.6.1, processing of
DT signals is independent of T so long as T is large enough to carry out necessary computation.
Thus in the study of DT systems, we may assume T = 1 and use u[n] and y[n] to denote the
input and output.6 Recall that variables inside a pair of brackets must be integers.
The current output y[n0 ] of a DT system may depend on past input u[n], for n < n0 ,
current input u[n0 ], and future input u[n], for n > n0 . A DT system is defined to be causal if
the current output does not depend on future input; it depends only on past and/or current
inputs. If the characteristics of a DT system do not change with time, then the system
is time invariant. For a time-invariant DT system, the output sequence will always be the
same no matter at what time instant the input sequence is applied. This can be expressed
as a terminal property using input-output pairs. If a DT system is time invariant, and if
u[n], n ≥ 0 → y[n], n ≥ 0, then
u[n − n1 ], n ≥ n1 → y[n − n1 ], n ≥ n1 (time-shifting) (6.2)
for any u[n] and any n1 ≥ 0. As in the CT case, the system must be initially relaxed when
we apply an input. For a DT time-invariant system, we may select the initial time as n0 = 0;
it is the time instant we start to study the system.
A memoryless system can be described by y(t) = f (u(t), t) (6.3); the function f in (6.3)
is independent of t if the system is time invariant. In this case, we can drop t and write
(6.3) as y = f (u). In this equation, every real number u generates a unique real number as
its output y. Such a function can be represented graphically as
shown in Figure 6.4. For each u, we can obtain graphically the output y. We use ui → yi , for
i = 1, 2, to denote that ui excites yi .
A time-invariant memoryless system is defined to be linear if for any ui → yi , i = 1, 2,
and for any real constant β, the system has the following two properties
u1 + u2 → y1 + y2 (additivity) (6.4)
βu1 → βy1 (homogeneity) (6.5)
6 In practice, T rarely equals 1. See Section 11.10 and its subsection to resolve this problem.
If a system does not have the two properties, the system is said to be nonlinear. In words,
if a system is linear, then the output of the system excited by the sum of any two inputs
should equal the sum of the outputs excited by individual inputs. This is called the additivity
property. If we increase the amplitude of an input by a factor β, the output of the system
should equal the output excited by the original input increased by the same factor. This is
called the homogeneity property. For example, consider a memoryless system whose input u
and output y are related by y = u^2 . If we apply the input ui , then the output is yi = ui^2 , for
i = 1, 2. If we apply the input u3 = u1 + u2 , then the output y3 is
y3 = (u1 + u2 )^2 = u1^2 + 2u1 u2 + u2^2
which is different from y1 + y2 = u1^2 + u2^2 . Thus the system does not have the additivity
property and is nonlinear. Its nonlinearity can also be verified by showing that the system
does not have the homogeneity property.
Let us check the memoryless system whose input and output are related as shown in Figure
6.4(a). From the plot, we can obtain {u1 = 0.5 → y1 = 2} and {u2 = 2 → y2 = 4}. However,
the application of u1 + u2 = 2.5 yields the output 4 which is different from y1 + y2 = 6.
Thus the system does not have the additivity property and is nonlinear. Note that we can
also reach the same conclusion by showing that 5u1 = 2.5 excites 4 which is different from
5y1 = 10. The nonlinearity in Figure 6.4(a) is called saturation. It arises often in practice.
For example, opening a valve reaches saturation when it is completely open. The speed of an
automobile and the volume of an amplifier will also saturate.
The system specified by Figure 6.4(b) is nonlinear as can be verified using the preceding
procedure. We show it here by first developing an equation. The straight line in Figure 6.4(b)
can be described by
y = 0.5u + 1 (6.6)
Indeed, if u = 0, then y = 1 as shown in Figure 6.4(b). If u = −2, then y = 0. Thus
the equation describes the straight line. If u = u1 , then y1 = 0.5u1 + 1. If u = u2 , then
y2 = 0.5u2 + 1. However, if u3 = u1 + u2 , then
y3 = 0.5(u1 + u2 ) + 1 = 0.5u1 + 0.5u2 + 1
which differs from y1 + y2 = 0.5u1 + 0.5u2 + 2, and the additivity property does not hold. Thus
the system specified by Figure 6.4(b) or described by the equation in (6.6) is not a linear system.7
7 It is a common misconception that (6.6) is linear. It however can be transformed into a linear equation.
Consider now the memoryless system specified by Figure 6.4(c). To show that a system is not
linear, all we need is to find two specific input-output pairs which do not meet the additivity
property. However, to show that a system is linear, we must check all possible input-output pairs.
There are infinitely many of them, and it is not possible to check them all. Thus we must use
a mathematical equation to verify it. Because the slope of the straight line in Figure 6.4(c)
is 0.5, the input and output of the system are related by
y(t) = 0.5u(t)
for all t. Using this equation, we can show that the system is linear. Indeed, for any u = ui ,
we have yi = 0.5ui , for i = 1, 2. If u3 = u1 + u2 , then y3 = 0.5(u1 + u2 ) = y1 + y2 . If
u4 = βu1 , then y4 = 0.5(βu1 ) = β(0.5u1 ) = βy1 . Thus the system satisfies the additivity and
homogeneity properties and is linear. In conclusion, a CT time-invariant memoryless system
is linear if and only if it can be described by
y(t) = αu(t)
for some real constant α and for all t. Such a system can be represented as a multiplier or an
amplifier with gain α as shown in Figure 6.5.
All preceding discussion is directly applicable to the DT case. A DT time-invariant
memoryless system is linear if and only if it can be described by
y[n] = αu[n]
for some real constant α and for all time indices n. Such a system can also be represented as
shown in Figure 6.5.
To conclude this section, we mention that the additivity and homogeneity properties can
be combined as follows. A memoryless system is linear if for any {ui → yi } and any real
constants βi , for i = 1, 2, we have
β1 u1 + β2 u2 → β1 y1 + β2 y2
This is called the superposition property. Thus the superposition property consists of addi-
tivity and homogeneity properties.
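As a numerical illustration of these definitions, the following sketch (the helper names are ours, not the text's) checks the superposition property for the two memoryless systems discussed above: y = u^2 fails it, while y = 0.5u satisfies it.

```python
# Numerically check the superposition (additivity + homogeneity) property
# for two memoryless systems: the nonlinear y = u^2 and the linear y = 0.5u.

def square(u):        # the nonlinear system y = u^2 from the text
    return u * u

def half_gain(u):     # the linear system y = 0.5u from the text
    return 0.5 * u

def superposition_holds(f, u1, u2, b1, b2, tol=1e-12):
    """Does the input b1*u1 + b2*u2 excite the output b1*f(u1) + b2*f(u2)?"""
    return abs(f(b1 * u1 + b2 * u2) - (b1 * f(u1) + b2 * f(u2))) < tol

print(superposition_holds(square, 1.0, 2.0, 1.0, 1.0))      # False: (1+2)^2 != 1^2 + 2^2
print(superposition_holds(half_gain, 1.0, 2.0, 3.0, -4.0))  # True for any choice
```

Note that one failing pair proves nonlinearity, but a passing check for a few pairs does not prove linearity; that still requires the algebraic argument given in the text.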
An op amp can be described by vo (t) = f (e+ (t) − e− (t)) (6.10), where f is as shown in
Figure 6.7. The output voltage is a function of the difference of the two
input voltages, or of ed (t) := e+ (t) − e− (t). If the difference between e+ and e− lies within the
range [−a, a] as indicated in Figure 6.7, then vo is a linear function of ed . If |ed | is larger than
a, vo reaches the positive or negative saturation region. The level of saturation, denoted by
±vs , is determined by the power supplies, usually one or two volts below the power supplies.
Clearly an op amp is a nonlinear system.
In Figure 6.7, if ed = e+ − e− is limited to [−a, a], then we have
vo (t) = A ed (t) = A[e+ (t) − e− (t)] (6.11)
where A is a constant called the open-loop gain. It is the slope of the straight line
passing through the origin in Figure 6.7 and is in the range of 10^5 ∼ 10^10 . The value a can
be computed roughly from Figure 6.7 as
a ≈ vs /A
If vs = 14 V and A = 10^5 , then we have a = 14 × 10^−5 V or 0.14 mV (millivolt).
Because of the characteristic shown in Figure 6.7, and because a is on the order of millivolts,
op amps have found many applications. An op amp can be used as a comparator. Let us connect
v1 to the non-inverting terminal and v2 to the inverting terminal. If v1 is slightly larger than
v2 , then vo will reach the saturation voltage vs . If v2 is slightly larger than v1 , then vo becomes
−vs . Thus from vo , we know right away which voltage, v1 or v2 , is larger. Comparators can
be used to trigger an event and are used in building analog-to-digital converters.
Op amps can also be used to change a signal such as a sinusoidal, triangular, or other
signal into a timing signal. For example, consider the op amp shown in Figure 6.8(a) with
its inverting terminal grounded. If we apply a sine function to the non-inverting terminal,
Figure 6.8: (a) Op amp. (b) Changing a sinusoid into a square wave.
then vo (t) will be as shown in Figure 6.8(b). Indeed, the op amp is a very useful nonlinear
element.
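The square-wave behavior in Figure 6.8(b) can be sketched with an idealized saturating model vo = ±vs (a simplification of the characteristic in Figure 6.7; the function names are illustrative):

```python
import math

VS = 14.0  # saturation level in volts; 14 V is the representative value used in the text

def op_amp_open_loop(e_plus, e_minus, vs=VS):
    """Idealized open-loop op amp: with a gain of 10^5 or more, any visible
    difference ed = e+ - e- drives the output to +vs or -vs."""
    ed = e_plus - e_minus
    if ed > 0:
        return vs
    if ed < 0:
        return -vs
    return 0.0

# Inverting terminal grounded, sine applied to the non-inverting terminal:
# the output is a square wave of amplitude vs, as in Figure 6.8(b).
ts = [k * 0.01 for k in range(1, 628)]            # one period of sin t, t in (0, 2*pi)
vo = [op_amp_open_loop(math.sin(t), 0.0) for t in ts]
print(sorted(set(vo)))                            # only the two saturation levels occur
```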
To conclude this section, we mention that an op amp is not a two-input one-output system
even though it has two input terminals and one output terminal. To be so, we must have
vo (t) = f1 (e+ (t)) + f2 (e− (t))
where f1 and f2 are some functions, independent of each other. This is not the case as we
can see from (6.10). Thus the op amp is an SISO system with the single input ed = e+ − e−
and the single output vo .
Consider the op-amp circuit shown in Figure 6.9(a), in which the output is fed back directly
to the inverting terminal and the input vi (t) is applied to the non-inverting terminal as shown.
To develop an equation to relate the input vi (t) and the output vo (t), we substitute
e+ (t) = vi (t) and e− (t) = vo (t) into (6.11) to yield
vo (t) = A[vi (t) − vo (t)] (6.12)
which implies
(1 + A)vo (t) = Avi (t)
and
vo (t) = [A/(1 + A)]vi (t) (6.13)
This equation describes the negative feedback system in Figure 6.9(a). Thus the system is
LTI memoryless and has gain A/(1 + A). For A large, A/(1 + A) practically equals 1 and
(6.13) becomes vo (t) = vi (t) for all t. Thus the op-amp circuit is called a voltage follower.
Equation (6.13) is derived from (6.11) which holds only if |ed | ≤ a. Thus we must check
whether or not |ed | ≤ a ≈ vs /A. Clearly we have
ed = e+ − e− = vi − vo = vi − [A/(1 + A)]vi = [1/(1 + A)]vi (6.14)
If |vi | < vs , then (6.14) implies |ed | < vs /(1 + A), which is less than a ≈ vs /A. Thus for the
voltage follower, if the magnitude of the input vi (t) is less than vs , then the op amp circuit
functions as an LTI memoryless system and vo (t) = vi (t), for all t.
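The closed-loop numbers in (6.13) and (6.14) are easy to check; a small sketch using the text's values A = 10^5 and vs = 14 V:

```python
# Closed-loop numbers for the voltage follower, using the text's A = 1e5, vs = 14 V.
A = 1e5          # open-loop gain
vs = 14.0        # saturation voltage in volts

gain = A / (1 + A)     # closed-loop gain A/(1+A) from (6.13)
vi = 4.0               # an input amplitude well below vs
vo = gain * vi         # output: practically equal to the input
ed = vi / (1 + A)      # differential input e+ - e- from (6.14)
a = vs / A             # approximate linear-region limit, a ~ vs/A

print(round(gain, 6))  # 0.99999 -- practically 1
print(ed < a)          # True: the op amp stays in its linear region
```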
An inquisitive reader may ask: If vo (t) equals vi (t), why not connect vi (t) directly to
the output terminal? When we connect two electrical devices together, the so-called loading
problem often occurs.8 See Section 10.4. Because of loading, a measured or transmitted signal
may be altered or distorted and will not represent the original signal. This is not desirable.
This loading problem can be eliminated or at least reduced by inserting a voltage follower.
Thus a voltage follower is also called a buffer or an isolating amplifier. Figure 6.10 shows a
sample-and-hold circuit in which two voltage followers are used to shield the electronic switch
and the capacitor from outside circuits. Indeed, voltage followers are widely used in electrical
systems.
Figure 6.11: The input vi (t) = 20 sin 3t (dotted line) and its output (solid line) of a voltage
follower.
Now if vi (t) is limited to vs in magnitude, then we have vo (t) = vi (t) for all t. Does this
hold for every vi (t) with its magnitude limited to vs ? Let us build an op amp circuit and
carry out some experiments. If vs = 14, and if vi (t) = 4 sin ω0 t with ω0 = 10 rad/s, then we
can measure vo (t) = 4 sin 10t, for all t. If ω0 = 1000, we can still measure vo (t) = 4 sin 1000t.
However, if ω0 = 10^6 , we measure roughly vo (t) = 2 sin 10^6 t; the amplitude is only about
half of the input amplitude. If ω0 = 10^10 , no appreciable output can be measured, or vo (t) ≈ 0.
It means that a voltage follower cannot track every input, even one whose magnitude is limited to
vs .
Every device has an operational frequency range or a bandwidth. For example, there are
low-frequency, intermediate-frequency (IF), or high-frequency amplifiers. A dc motor is to be
driven by direct current or battery. An ac motor may be designed to be driven by a 120-volt,
60-Hz household electric power supply. The operational frequency range of a voltage follower,
as we will discuss in Subsection 10.2.1, is of the form shown in Figure 6.12. We call [0, ωB ]
the operational frequency range. If the ω0 of vi (t) = sin ω0 t lies inside the range [0, ωB ] shown,
then we have vo (t) = vi (t). If ω0 > ωA , then vo (t) = 0. If ω0 lies roughly in the middle of ωB
and ωA , then we have vo (t) = 0.5vi (t). Real-world signals such as the ones shown in Figures
1.1 and 1.2 are rarely pure sinusoids. Thus it is more appropriate to talk about frequency
spectra. If the frequency spectrum of a signal lies inside the operational frequency range,
then the op-amp circuit can function as a voltage follower.
In conclusion, in order for a voltage follower to function as designed, an input signal must
meet the following two limitations:
1. The magnitude of the signal must be less than the saturation level vs .
2. The magnitude spectrum of the signal must lie inside the operational frequency range
of the voltage follower.
If an input signal does not meet the preceding two conditions, the output of the voltage
follower will not follow faithfully the input. These two limitations on input signals apply not
only to every voltage follower but also to every physical device.
An ideal op amp is assumed to have an infinite open-loop gain and to draw no current at its
two input terminals. Consequently we have
e− (t) = e+ (t) (6.17)
for all t ≥ 0. This is the standing assumption in (6.12). For an ideal op amp, its two
input terminals are virtually open (no current flows into them) and virtually short (their
voltages are equal) at the same time. Using these two
properties, analysis and design of op-amp circuits can be greatly simplified. For example, for
the voltage follower in Figure 6.9(a), we have vi (t) = e+ (t) and vo (t) = e− (t). Using (6.17),
Figure 6.13: (a) Inverting amplifier with gain R2 /R1 . (b) Inverter.
we have immediately vo (t) = vi (t). For the op-amp circuit in Figure 6.9(b), we can also obtain
vo (t) = vi (t) immediately.
How do we know that the ideal op-amp model can be used to design an op-amp circuit?
The simplest way is to build the circuit and test its stability. This can be easily tested as we
will discuss in Subsection 9.5.2. If the circuit is not stable, the circuit must be redesigned.
Even if the circuit is stable, the circuit may require some modification in order to meet other
design criteria. In general, stability is the first requirement in designing most systems. We
mention that simple negative-feedback op-amp circuits are generally stable.
We use the ideal model to analyze the op-amp circuit shown in Figure 6.13(a). The input
voltage vi (t) is connected to the inverting terminal through a resistor with resistance R1 and
the output voltage vo (t) is fed back to the inverting terminal through a resistor with resistance
R2 . The non-inverting terminal is grounded as shown. Thus we have e+ (t) = 0.
Let us develop the relationship between vi (t) and vo (t). Equation (6.17) implies e− (t) = 0.
Thus the currents passing through R1 and R2 are, using Ohm's law,
i1 (t) = [vi (t) − e− (t)]/R1 = vi (t)/R1
and
i2 (t) = [vo (t) − e− (t)]/R2 = vo (t)/R2
respectively. Applying Kirchhoff's current law (i1 (t) + i2 (t) = 0, because no current enters
the ideal op amp), we have
vo (t)/R2 = −vi (t)/R1
Thus we have
vo (t) = −(R2 /R1 )vi (t) (6.19)
This equation describes the op-amp circuit in Figure 6.13(a). The circuit is an LTI memoryless
system. It is called an inverting amplifier with gain R2 /R1 or an amplifier with gain −R2 /R1 .
It is called an inverter if R1 = R2 = R as shown in Figure 6.13(b). An inverter changes only
the sign of the input.
We now consider the DT counterpart. Consider y[n] = αu[n]. Can it be carried out in
real time? Or can y[n] appear at the same time instant as u[n]? The output y[n] is the result
of the multiplication of α and u[n]. Depending on how it is implemented, the speed of the
hardware used, and the number of bits used, the multiplication of two numbers may take,
say, 30 nanoseconds; it is not instantaneous. Thus y[n] = αu[n] cannot hold in real time. The
simplest way to resolve this problem is simply to deliver y at the next sampling instant or,
equivalently, to modify the equation as y[n + 1] = αu[n]. Then the output will be delayed
by one sample. Here we implicitly assume that the sampling period T is large enough to
complete the multiplication. Because T is often very small, on the order of a millisecond for
telephone transmission and much smaller for audio CD, the delay is not noticeable.
9 This was built by Anthony Olivo. The actual values of R1 and R2 are respectively 948 and 9789Ω.
Problems
6.1 Consider a CT system whose input u(t) and output y(t) are related by
y(t) = ∫_{τ =0}^{t+1} u(τ ) dτ
for t ≥ 0. Does the current output depend on future input? Is the system causal?
6.2 Consider a CT system whose input u(t) and output y(t) are related by
y(t) = ∫_{τ =0}^{t} u(τ ) dτ
for t ≥ 0. Does the current output depend on future input? Is the system causal?
6.7 Consider y(t) = (cos ωc t)u(t). It is the modulation discussed in Section 4.4.1. Is it
memoryless? linear? time-invariant?
6.11 Consider a memoryless system whose input u and output y are related as shown in
Figure 6.15. If the input u is known to be limited to the immediate neighborhood of u0
shown, can you develop a linear equation to describe the system? It is assumed that
the slope of the curve at u0 is R. The linear equation is called a small-signal model. It
is a standard technique in developing linear models for electronic circuits.
6.12 Consider the op amp shown in Figure 6.8(a). If the applied input vi (t) is as shown in
Figure 6.16, what will be the output?
6.13 Use the ideal op-amp model to find the relationship between the input signal vi (t) and
the output signal vo (t) shown in Figure 6.17. Is it the same as (6.19)? The circuit
however cannot be used as an inverting amplifier. See Problem 10.2.
6.14 Verify that the op-amp circuit in Figure 6.18 is an amplifier with gain 1 + R2 /R1 .
6.15 Verify that a multiplier with gain α can be implemented as shown in Figure 6.19(a) if
α > 0 and in Figure 6.19(b) if α < 0.
6.16 Consider the op-amp circuit shown in Figure 6.20 with three inputs ui (t), for i = 1 : 3
and one output vo (t). Verify vo (t) = u1 (t) + u2 (t) + u3 (t). The circuit is called an adder.
Figure 6.19: Multiplier with gain (a) α > 0 and (b) α < 0.
7.1 Introduction
In the preceding chapter, we studied linear time-invariant (LTI) memoryless systems. The
input u and output y of such a system can be related graphically. Moreover its mathematical
description is very simple. It is y(t) = αu(t) for the CT case and y[n] = αu[n] for the DT
case, where α is a real constant.
In the remainder of this text, we study systems with memory. There are two types of
memory: finite and infinite. Thus we will encounter four types of systems: CT and DT
systems with finite memory, and CT and DT systems with infinite memory.
For a system with memory, it is not possible to relate its input u and output y graphically.
The only way is to use mathematical equations to relate them. Moreover, its study requires
some new concepts. These concepts can be most easily explained using DT systems with
finite memory. Thus we study first this class of systems.
After introducing the concepts of initial conditions, forced and natural responses, we
extend the concepts of linearity (L) and time-invariance (TI) to systems with memory. We
then introduce the following four types of mathematical equations to describe DT LTI systems:
1. Discrete convolutions
2. Difference equations
3. State-space (ss) equations
4. Transfer functions
The first description is developed for DT systems with both finite and infinite memory. The
remainder however are developed only for finite memory. This introduction is only a preview;
the four types of equations will be further discussed in the next chapter and in Chapter 11.
As discussed in Section 6.4.1, in the study of DT systems we may assume the sampling
period to be 1. This will be the standing assumption throughout this text unless stated
otherwise.
140 CHAPTER 7. DT LTI SYSTEMS WITH FINITE MEMORY
A causal DT system with a memory of N samples can be described by
y[n] = f (u[n], u[n − 1], u[n − 2], . . . , u[n − N ]) (7.1)
where f is some function and is independent of n. The equation reduces to y[n] = f (u[n]) or
y = f (u) if the system is memoryless or N = 0.
Equation (7.1) holds for all integers n. Replacing n by n − 2, n − 1, n + 1, and so forth,
we have
..
.
y[n − 2] = f (u[n − 2], u[n − 3], u[n − 4], . . . , u[n − N − 2])
y[n − 1] = f (u[n − 1], u[n − 2], u[n − 3], . . . , u[n − N − 1])
y[n] = f (u[n], u[n − 1], u[n − 2], . . . , u[n − N ])
y[n + 1] = f (u[n + 1], u[n], u[n − 1], . . . , u[n − N + 1])
..
.
y[n + N ] = f (u[n + N ], u[n + N − 1], u[n + N − 2], . . . , u[n])
y[n + N + 1] = f (u[n + N + 1], u[n + N ], u[n + N − 1], . . . , u[n + 1])
We see that the current input u[n] does not appear in y[n −1], y[n− 2], and so forth. Thus the
current input of a causal system will not affect past output. On the other hand, u[n] appears
in y[n + k], for k = 0 : N . Thus the current input will affect the current output and the next
N outputs. Consequently, even if we stop applying an input at n1 , that is, u[n] = 0 for n ≥ n1 ,
the output will continue to appear for N more samples. If a system has infinite memory, then
the output will continue to appear forever even after we stop applying an input. In conclusion,
if a DT system is causal and has a memory of N samples, then
• its current output does not depend on future input; it depends on current and past N
input samples,
or, equivalently,
• its current input has no effect on past output; it affects current and next N output
samples.
Example 7.2.1 Consider a savings account in a bank. Let u[n] be the amount of money
deposited or withdrawn on the n-th day and y[n] be the total amount of money in the account
at the end of the n-th day. By this definition, u[n] will be included in y[n]. The savings account
can be considered a DT system with input u[n] and output y[n]. It is a causal system because
the current output y[n] does not depend on future input u[m], with m > n. The savings
account has infinite memory because y[n] depends on u[m], for all m ≤ n or, equivalently,
u[n] affects y[m], for all m ≥ n. 2
1 The concept of causality is associated with time. In processing of signals such as pictures, which have two
spatial coordinates rather than time as the independent variable, causality is not an issue.
u[n], n ≥ 0 → y[n], n ≥ 0
to denote an input-output pair. This notation however requires some qualification. If the
system has memory, then the output y[n], for n ≥ 0, also depends on the input applied before
n = 0. Thus if u[n] ≠ 0, for some n < 0, then the input u[n], for n ≥ 0, may not excite a
unique y[n], for n ≥ 0, and the notation is not well defined. Thus when we use the notation,
we must assume u[n] = 0, for all n < 0. Under this assumption, the notation denotes a unique
input-output pair and the output is called a forced response, that is,
• Forced response: the output y[n], for n ≥ 0, excited by an input with u[n] = 0 for all
n < 0 and u[n] ≠ 0 for some or all n ≥ 0
Then we can show that the system is not linear. Indeed, if ui [n] → yi [n], for i = 1, 2, then we have
Thus the DT system described by (7.4) does not have the additivity property and is therefore
not linear.
It turns out that the DT TI system is linear if and only if the output can be expressed as
a linear combination of the inputs with constant coefficients such as
y[n] = 3u[n] − 2u[n − 1] + 0 × u[n − 2] + 5u[n − 3] (7.5)
Indeed, by direct substitution, we can verify that (7.5) meets the additivity and homogeneity
properties. Note that (7.5) is a causal system because its output y[n] does not depend on any
future input u[m] with m > n.
We define an important response, called the impulse response, for a system. It is the
output of the system, which is initially relaxed at n = 0, excited by the input u[0] = 1 and
u[n] = 0, for all n > 0. If the system is initially relaxed at n = 0, we may assume u[n] = 0
for all n < 0. In this case, the input becomes u[0] = 1 and u[n] = 0 for all n ≠ 0. It is the
impulse sequence defined in (2.24) with n0 = 0. Thus the impulse response of a system can
be defined as the output excited by u[n] = δd [n] where
δd [n] = 1 for n = 0, and δd [n] = 0 for n ≠ 0 (7.6)
is the impulse sequence at n = 0. We now compute the impulse response of the system
described by (7.5). Substituting u[n] = δd [n] into (7.5) yields
y[n] = 3δd [n] − 2δd [n − 1] + 0 × δd [n − 2] + 5δd [n − 3]
and y[n] = 0 for all n ≥ 4 and for all n < 0. This particular output sequence is the impulse
response and will be denoted by h[n]. Thus we have h[0] = 3, h[1] = −2, h[2] = 0, h[3] = 5, and
plot in Figure 7.1(a) the impulse sequence δd [n] and in Figure 7.1(b) the impulse response
Figure 7.1: (a) Impulse sequence δd [n]. (b) Impulse response h[n].
h[n], for n ≥ 0. We see that after the input is removed, the output continues to appear for
N = 3 more sampling instants.
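The impulse response just described can be reproduced by feeding the impulse sequence through (7.5); a minimal sketch (the function name is ours):

```python
def output_75(u):
    """Response of y[n] = 3u[n] - 2u[n-1] + 0*u[n-2] + 5u[n-3], equation (7.5),
    to a causal input list u (u[n] = 0 for n < 0 and for n >= len(u))."""
    def at(n):
        return u[n] if 0 <= n < len(u) else 0
    # compute three extra samples so the tail of the memory is visible
    return [3*at(n) - 2*at(n-1) + 0*at(n-2) + 5*at(n-3) for n in range(len(u) + 3)]

delta = [1, 0, 0, 0, 0]     # the impulse sequence delta_d[n]
h = output_75(delta)
print(h)                    # the coefficients 3, -2, 0, 5, then zeros
```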
The impulse response is defined as the output excited by an impulse sequence applied
at n = 0. If a system is initially relaxed and causal, the output must be zero before the
application of an input. Thus we have h[n] = 0 for all n < 0. In fact, h[n] = 0, for all
n < 0, is a necessary and sufficient condition for a system to be causal. This is an important
condition and will be used later.
Example 7.3.1 Consider a DT system defined by
y[n] = (1/5)(u[n] + u[n − 1] + u[n − 2] + u[n − 3] + u[n − 4])
Its current output is a linear combination of its current input and past four inputs with the
same coefficient 0.2. Thus the system is linear (L) and time-invariant (TI), and has a memory
of 4 samples. From the equation, we see that y[n] is the average of the five inputs u[n − k],
for k = 0 : 4, thus it is called a 5-point moving average. If n denotes trading days of a stock
market, then it will be a 5-day moving average. Its impulse response is h[n] = 0.2, for
n = 0 : 4, and h[n] = 0 otherwise. 2
Example 7.3.2 Consider the savings account in Example 7.2.1 with a daily interest rate of
0.015%.3 If we deposit one dollar at n = 0 and nothing thereafter, that is, u[n] = δd [n], then
y[0] = 1
y[1] = y[0] + y[0] × 0.00015 = (1 + 0.00015)y[0] = 1.00015
3 The annual interest rate is 5.63%.
and
y[2] = y[1] + y[1] × 0.00015 = (1.00015)y[1] = (1.00015)^2
In general, we have y[n] = (1.00015)^n . This particular output is the impulse response, that
is,
h[n] = (1.00015)^n (7.7)
See (2.25). Let h[n] be the impulse response of a DT system. Every input sequence can be
expressed as
u[n] = Σ_{k=0}^{∞} u[k] δd [n − k] (7.8)
If the system is linear and time invariant, then we have
Σ_{k=0}^{∞} u[k] δd [n − k] → Σ_{k=0}^{∞} u[k] h[n − k] (7.9)
Because the left-hand side of (7.9) is the input in (7.8), the output excited by the input u[n]
is given by the right-hand side of (7.9), that is,
y[n] = Σ_{k=0}^{∞} h[n − k]u[k] (7.10)
This is called a discrete convolution. The equation will be used to introduce the concept of
transfer functions and a stability condition. However once the concept and the condition are
introduced, the equation will not be used again. Thus we will not discuss further the equation.
We mention only that if an impulse response is given as in Figure 7.1(b), then (7.10) reduces
to (7.5) (Problem 7.9). Thus a discrete convolution is simply a linear combination of the
current and past inputs with constant coefficients.
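Under the stated assumption of the finite impulse response h = {3, −2, 0, 5}, the convolution (7.10) can be checked numerically against the linear combination in (7.5):

```python
# Check that the convolution (7.10) with h = {3, -2, 0, 5} reduces to the
# linear combination (7.5) for an arbitrary causal input.

h = [3, -2, 0, 5]          # impulse response; h[n] = 0 elsewhere
u = [1, 2, -1, 0, 3]       # an arbitrary causal input (u[n] = 0 for n < 0)

def h_at(m):
    return h[m] if 0 <= m < len(h) else 0

def u_at(n):
    return u[n] if 0 <= n < len(u) else 0

# y[n] = sum_k h[n-k] u[k]  -- the discrete convolution (7.10)
y_conv = [sum(h_at(n - k) * u_at(k) for k in range(n + 1)) for n in range(8)]

# y[n] = 3u[n] - 2u[n-1] + 5u[n-3]  -- the linear combination (7.5)
y_direct = [3*u_at(n) - 2*u_at(n - 1) + 5*u_at(n - 3) for n in range(8)]

print(y_conv == y_direct)  # True
```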
A general LTI difference equation has the form
y[n] + a1 y[n − 1] + · · · + aN y[n − N ] = b0 u[n] + b1 u[n − 1] + · · · + bM u[n − M ] (7.11)
where N and M are positive integers, and ai and bi are real constants. Note that the coefficient
a0 associated with y[n] has been normalized to 1. It is called an LTI difference equation of
order max(N, M ). It is called a non-recursive difference equation if ai = 0 for all i and a
recursive difference equation if one or more ai are different from 0.
Example 7.4.1 Consider the 20-point moving average described by
y[n] = 0.05(u[n] + u[n − 1] + · · · + u[n − 19]) (7.12)
Its current output y[n] is the sum of its current and past nineteen inputs divided by 20 or,
equivalently, multiplied by 1/20 = 0.05. The equation is a non-recursive difference equation of
order 19. Computing each y[n] requires nineteen additions and one multiplication.
We next transform the non-recursive difference equation into a recursive one. Equation
(7.12) holds for all n. It becomes, after subtracting 1 from all its indices,
y[n − 1] = 0.05(u[n − 1] + u[n − 2] + · · · + u[n − 20])
Subtracting this from (7.12) yields y[n] − y[n − 1] = 0.05(u[n] − u[n − 20]) or
y[n] = y[n − 1] + 0.05(u[n] − u[n − 20])
It is a recursive difference equation of order 20. Computing each y[n] requires two additions
(including subtractions) and one multiplication. Thus the recursive equation requires less
computation than the non-recursive equation. 2
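The equivalence of the non-recursive form (7.12) and the recursive form just derived can be verified numerically; a sketch with an arbitrary random input:

```python
import random

# Compare the non-recursive 20-point moving average (7.12) with the
# recursive form y[n] = y[n-1] + 0.05*(u[n] - u[n-20]) derived above.
random.seed(1)
u = [random.uniform(-1.0, 1.0) for _ in range(200)]

def u_at(n):
    return u[n] if 0 <= n < len(u) else 0.0

# Non-recursive: nineteen additions and one multiplication per output sample.
y_nonrec = [0.05 * sum(u_at(n - k) for k in range(20)) for n in range(len(u))]

# Recursive: two additions and one multiplication per output sample.
y_rec, prev = [], 0.0
for n in range(len(u)):
    prev = prev + 0.05 * (u_at(n) - u_at(n - 20))
    y_rec.append(prev)

print(max(abs(a - b) for a, b in zip(y_nonrec, y_rec)) < 1e-9)  # True
```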
This example shows that for a long moving average, its non-recursive difference equation
can be changed to a recursive difference equation. By so doing, the number of operations can
be reduced. This reduction is possible because a moving average has the same coefficients. In
general, it is futile to change a non-recursive difference equation into a recursive one.
Example 7.4.2 Consider the savings account studied in Example 7.3.2. Its impulse response
is h[n] = (1.00015)^n , for n ≥ 0 and h[n] = 0, for n < 0. Its convolution description is
y[n] = Σ_{k=0}^{∞} h[n − k]u[k] = Σ_{k=0}^{n} (1.00015)^{n−k} u[k] (7.14)
Note that the upper limit of the second summation is n not ∞. If it were ∞, then the second
equality does not hold because it does not use the causality condition h[n] = 0, for n < 0.
Thus care must be exercised in actual use of the discrete convolution in (7.10).
Using (7.14), we can develop a difference equation. See Problem 7.12. It is however simpler
to develop the difference equation directly. Let y[n − 1] be the total amount of money in the
account on the (n − 1)th day. Then y[n] is the sum of the principal y[n − 1], its one-day
interest 0.00015y[n − 1], and the money deposited or withdrawn on the nth day u[n], that is,
y[n] = y[n − 1] + 0.00015y[n − 1] + u[n] = 1.00015y[n − 1] + u[n] (7.15)
or
y[n] − 1.00015y[n − 1] = u[n] (7.16)
This is a special case of (7.11) with N = 1 and M = 0 and is a first-order recursive difference
equation. 2
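The convolution description (7.14) and the difference equation (7.16) can be checked against each other; a sketch with a short, hypothetical deposit sequence:

```python
# Savings account: the convolution (7.14) and the difference equation (7.16)
# describe the same system. The deposit sequence below is hypothetical.
r = 1.00015                                  # daily growth factor
u = [100.0, 0.0, 0.0, -30.0, 0.0, 50.0]      # deposits (+) and withdrawals (-)

# Convolution (7.14): y[n] = sum_{k=0}^{n} r^(n-k) * u[k]
y_conv = [sum(r**(n - k) * u[k] for k in range(n + 1)) for n in range(len(u))]

# Difference equation (7.16): y[n] = r*y[n-1] + u[n], initially relaxed
y_diff, prev = [], 0.0
for un in u:
    prev = r * prev + un
    y_diff.append(prev)

print(all(abs(a - b) < 1e-9 for a, b in zip(y_conv, y_diff)))  # True
```

Note that the recursion needs only the previous total and the coefficient 1.00015, while the convolution must keep every past deposit, which is the comparison made in the list below.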
We now compare the convolution description in (7.14) with the difference equation in (7.16):
1. The convolution expresses the current output y[n] as a linear combination of all past
and current inputs, whereas the difference equation expresses the current output as a
linear combination of one past output and the current input. Thus the latter expression is
simpler.
2. In using the convolution, we need h[n] and all past inputs. In using the difference
equation, we need only the coefficient 0.00015, y[n − 1], and current input. All past
inputs are no longer needed and can be discarded. Thus the convolution requires more
memory locations than the difference equation.
3. Computing y[n] using the convolution (7.14) requires n additions. Thus computing
y[n], for n = 0 : N , requires a total of
Σ_{n=0}^{N} n = N (N + 1)/2 = 0.5N^2 + 0.5N
additions.
Computing y[n] using (7.15) or (7.16) requires only one addition. Thus to compute
y[n], for n = 0 : N , requires only (N + 1) additions. This number is much smaller than
0.5N^2 + 0.5N for N large. For example, if N = 1000, then the convolution requires more
than half a million additions, whereas the difference equation requires only 1001 additions.
Similar remarks apply to the number of multiplications. Although the number of
operations in computing (7.14) can be reduced using the FFT discussed in Section 5.4,
its number of operations is still larger than the one using (7.16). Moreover, FFT will
introduce some additional numerical errors due to changing real-number computation
into complex-number computation.
4. The difference equation can be easily modified to describe the time-varying (interest
rate changes with time) or nonlinear case (interest rate depends on y[n]). This will be
difficult for using the convolution.
In conclusion, a difference equation is simpler, requires less memory and less computa-
tions, and is more flexible than a convolution. Thus if a system can be described by both
descriptions, we prefer the former. Although all DT LTI systems with infinite memory can be
described by a convolution, only a small number of them can also be described by difference
equations, as we will discuss in Chapter 11. This text, as every other text on signals and sys-
tems, studies only this small class of DT LTI systems. Thus we will downplay convolutions
in this text.
Figure 7.2: Time domain: (a) Multiplier. (b) Adder. (c) Unit-delay element.
The element in Figure 7.2(c) is a unit-sample-delay element or, simply, unit-delay element; its
output y[n] equals the input u[n] delayed by one sample, that is,
We now discuss how the system described by (7.5) can be implemented using these basic
elements. The equation has a memory of three samples.
Thus its implementation requires three unit-delay elements as shown in Figure 7.3. They
are connected in tandem as shown. If we assign the input of the left-most element as u[n],
then the outputs of the three unit-delay elements are respectively u[n − 1], u[n − 2], and
7.6. STATE-SPACE (SS) EQUATIONS 149
u[n − 3] as shown. We then use three multipliers with gains 3, −2, 5 and an adder to generate
y[n] as shown in Figure 7.3. Note that the coefficient of u[n − 2] is zero and no multiplier is
connected to u[n−2]. We see that the procedure is simple and straightforward. The procedure
is applicable to any DT LTI system with finite memory.
x1 [n + 1] = u[n]
x2 [n + 1] = x1 [n] (7.17)
x3 [n + 1] = x2 [n]
y[n] = 3u[n] − 2x1 [n] + 5x3 [n]
This set of equations is called a state-space (ss) equation. State-space equations are custom-
arily expressed in matrix form. Let us write (7.17) as
Then they can be expressed in matrix form (see Section 3.3) as

    [x1[n + 1]]   [0 0 0] [x1[n]]   [1]
    [x2[n + 1]] = [1 0 0] [x2[n]] + [0] u[n]          (7.18)
    [x3[n + 1]]   [0 1 0] [x3[n]]   [0]

                    [x1[n]]
    y[n] = [−2 0 5] [x2[n]] + 3u[n]                   (7.19)
                    [x3[n]]
where

        [x1]        [0 0 0]        [1]
    x = [x2],   A = [1 0 0],   b = [0],   c = [−2 0 5],   d = 3
        [x3]        [0 1 0]        [0]
Note that both Ax[n] and bu[n] are 3 × 1 and their sum equals x[n + 1]. Both cx[n] and
du[n] are 1 × 1 and their sum yields y[n].
The set of two equations in (7.20) and (7.21) is called a (DT) state-space (ss) equation
of dimension 3. We call (7.20) a state equation and (7.21) an output equation. The vector x
is called the state or state vector; its components are called state variables. The scalar d is
called the direct transmission part; it connects u[n] directly to y[n] without passing through
any unit-delay element. Note that the state equation consists of a set of first-order difference
equations.
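The claim that (7.17) reproduces y[n] = 3u[n] − 2u[n − 1] + 5u[n − 3] is easy to verify numerically. The following check is ours, in Python (the book's programs are in MATLAB):

```python
# State-space recursion (7.17): x1[n+1] = u[n], x2[n+1] = x1[n], x3[n+1] = x2[n],
# with output y[n] = 3u[n] - 2x1[n] + 5x3[n] and zero initial state (relaxed at n = 0).
u = [1.0, -2.0, 0.5, 4.0, 3.0, -1.0, 2.0]      # an arbitrary input sequence

x1 = x2 = x3 = 0.0
y_ss = []
for un in u:
    y_ss.append(3*un - 2*x1 + 5*x3)            # output equation
    x1, x2, x3 = un, x1, x2                    # state equation (simultaneous update)

def u_at(n):                                   # u[n] = 0 for n < 0
    return u[n] if 0 <= n < len(u) else 0.0

y_direct = [3*u_at(n) - 2*u_at(n-1) + 5*u_at(n-3) for n in range(len(u))]
print(y_ss == y_direct)                        # prints True
```

Note the simultaneous state update: the right-hand sides are all evaluated before any state variable changes, exactly as (7.17) requires.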
From (7.21) we see that if the state x at time instant n1 is available, all we need to
determine y[n1 ] is the input u[n1 ]. Thus the state x[n1 ] summarizes the effect of u[n], for
all n < n1 , on y[n1 ]. Indeed comparing Figures 7.3 and 7.4, we have x1 [n1 ] = u[n1 − 1],
x2 [n1 ] = u[n1 − 2], and x3 [n1 ] = u[n1 − 3]. For this example, the state vector consists
of simply the past three input samples and is consistent with the fact that the system has
a memory of three samples. Note that the output equation in (7.21) is memoryless. The
evolution, or dynamics, of the system is described by the state equation in (7.20).
To compute the output y[n], for n ≥ 0, excited by an input u[n], for n ≥ 0, we must
specify x[0], called the initial state. The initial state actually consists of the set of initial
conditions discussed in Section 7.2.1. For the system in Figure 7.4, the initial state is x[0] =
[u[−1] u[−2] u[−3]]′, where the prime denotes the transpose. Unlike discrete convolutions
which can be used only if systems are initially relaxed or all initial conditions are zero, ss
equations can be used even if initial conditions are different from zero.
Once x[0] and u[n], for n ≥ 0 are given, we can compute y[n], for n ≥ 0, recursively using
(7.22). We use x[0] and u[0] to compute y[0] and x[1] in (7.22). We then use the computed
x[1] and the given u[1] to compute y[1] and x[2]. Proceeding forward, we can compute y[n],
for n = 0 : nf , where nf is the final time index. The computation involves only additions and
multiplications. Thus ss equations are most suited for computer computation.
As discussed in Subsection 2.6.1, there are two types of processing: real time and non-real
time. We show next that if the sampling period T is large enough, ss equations can
be computed in real time. We first assume x[0] = 0 and d = 0. In this case, we have
automatically y[0] = 0, as we can see from (7.22). Because d = 0, y[n] in (7.22) depends only
on x[n], which in turn depends on x[n − 1] and u[n − 1]. When u[0] arrives at n = 0, we
can send out the output y[0] at the same instant n = 0 and start to compute x[1] and then
y[1] = cx[1] using the given x[0] and u[0]. If this computing time is less than the sampling
period T , we store x[1] and y[1] in memory. We then deliver y[1] at the same time instant
as u[1] arrives. We also start to compute x[2] and then y[2] using u[1] and stored x[1]. We
deliver y[2] at the time instant n = 2 and use just arrived u[2] and stored x[2] to compute x[3]
and y[3]. Proceeding forward, the output sequence y[n] can appear at the same time instant
as u[n].
If d ≠ 0, we can compute y[0] = cx[0] + du[0] only after u[0] arrives. Thus y[0] cannot
appear at the same time instant as u[0]. In this case, we simply delay y[0] by one sampling
instant. Because the sampling period is very small in practice, this delay can hardly be
detected. See Subsection 6.8.1. In conclusion, ss equations can be computed in real time.
In the remainder of this subsection, we use an example to discuss the use of MATLAB
functions to compute system responses. Consider a DT system described by (7.22) with
    A = [ −0.18  −0.8 ],   b = [ 1 ]          (7.23)
        [     1     0 ]        [ 0 ]

    c = [2  −1],   d = 1.2
We use the MATLAB function lsim, abbreviation for linear simulation, to compute the
response of the system excited by the input u[n] = sin 1.2n, for n ≥ 0, and the initial state
x[0] = [−2 3]′, where the prime denotes the transpose. Up to this point, nothing is said about
the sampling period T and we can select any T > 0. On the other hand, if u[n] is given as
u[n] = u(nT) = sin 2.4nT with a specified T , say T = 0.5, then we must select T = 0.5. We
type in an edit window the following:
The first line expresses {A, b, c, d} in MATLAB format. In MATLAB, a matrix is expressed
row by row separated by semicolons and bounded by a pair of brackets as shown. Note that
b has two rows, thus there is a semicolon in b=[1;0], whereas c has only one row and there
is no semicolon in c=[2 -1]. Note also that for a scalar, the pair of brackets may be omitted
as in d=1.2. The second line defines the system. We call the system ‘pig’, which is defined
using the state-space model, denoted by ss. It is important to have the fifth argument T
inside the parentheses; without a T, it defines a CT system. If the application does not specify
a sampling period, we may set T = 1. The third line denotes the number of samples to be computed and the corresponding
time instants. The fourth line is the input and initial state. We then use lsim to compute
the output. Note that the T in defining the system and the T in defining t must be the same,
otherwise an error message will appear. The output is plotted in Figure 7.5(a) using the
function stem and in Figure 7.5(b) using the function stairs. The actual output of the DT
system is the one shown in Figure 7.5(a). The output shown in Figure 7.5(b) is actually the
output of the DT system followed by a zero-order hold. A zero-order hold holds the current
value constant until the arrival of the next value. If it is understood that the output of the
Figure 7.5: (a) Response of (7.23). (b) Response of (7.23) followed by a zero-order hold. (c) Output of lsim(pig,u,t,x0) without left-hand argument.
DT system consists of only the values at sampling instants, then Figure 7.5(b) is easier for
viewing than Figure 7.5(a), especially if the sampling period is very small. Thus all outputs
of DT systems in MATLAB are plotted using the function stairs instead of stem. If the
function lsim does not have the left-hand argument as shown in the last line of Program 7.1,
then MATLAB automatically plots the output as shown in Figure 7.5(c).4 It also automatically
generates the title and x- and y-labels. We save Program 7.1 as an m-file named f75.m. Typing
in the command window >> f75 will generate Figure 7.5 in a figure window. Note that if the
initial state x[0] is zero, there is no need to type x0 in Program 7.1 and the last argument of
lsim may be omitted. Note that we can also run the program by clicking ‘Debug’ on the edit
window.
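Program 7.1 is written in MATLAB. As a cross-check outside MATLAB, the sketch below is ours, assuming scipy is available and assuming n = 0:15 samples (Program 7.1 itself is not reproduced here). It obtains the response with scipy.signal.dlsim, a rough counterpart of lsim, and verifies it against the recursion of (7.22):

```python
# DT system (7.23): x[n+1] = Ax[n] + bu[n], y[n] = cx[n] + du[n], with T = 0.5.
import numpy as np
from scipy.signal import dlsim

A = np.array([[-0.18, -0.8], [1.0, 0.0]])
b = np.array([[1.0], [0.0]])
c = np.array([[2.0, -1.0]])
d = np.array([[1.2]])
T = 0.5

n = np.arange(16)                         # assumed sample range n = 0:15
u = np.sin(1.2 * n)                       # input u[n] = sin 1.2n
x0 = np.array([-2.0, 3.0])                # initial state x[0] = [-2 3]'

tout, y, x = dlsim((A, b, c, d, T), u, x0=x0)

# The same response by direct recursion of (7.22).
xk = x0.reshape(2, 1)
y_manual = []
for uk in u:
    y_manual.append((c @ xk + d * uk).item())
    xk = A @ xk + b * uk
print(np.allclose(y.ravel(), y_manual))   # the two computations agree
```

As with lsim, the sampling period enters only through the fifth entry of the system tuple; the recursion itself never uses T.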
MATLAB contains the functions impulse and step which compute the impulse and step
responses of a system. The impulse response is the output excited by an impulse sequence (u[n] =
δd [n]) and the step response is the output excited by a step sequence (u[n] = 1, for all n ≥ 0).
In both responses, the initial state is assumed to be zero. The program that follows
will generate the impulse and step responses in Figure 7.6. Note that in using lsim, we must
4 It also generates plot(t,u)
7.7. TRANSFER FUNCTIONS – Z-TRANSFORM 153
Figure 7.6: (Top) Impulse response of (7.23) followed by a zero-order hold. (Bottom) Step
response of (7.23) followed by a zero-order hold.
specify the number of samples to be computed. However, impulse and step automatically
select the number of samples to be computed or, more precisely, they automatically
stop computing when the output hardly changes. They also automatically generate titles and
labels. Thus they are very simple to use.
where z is a complex variable. Note that it is defined only for the positive-time part of x[n]
or x[n], for n ≥ 0. Because its negative-time part (x[n], for n < 0) is not used, we often
assume x[n] = 0, for n < 0, or x[n] to be positive time. Before proceeding, we mention that
the z-transform has the following linearity property:
for any constants βi and any sequences xi [n], for i = 1, 2. This is the same superposition
property as the one discussed at the end of Section 6.5.
The z-transform in (7.24) is a power series of z −1 . In this series, z −n indicates the nth
sampling instant. For example, z 0 = 1 indicates the initial time instant n = 0, z −1 indicates
the sampling instant n = 1, and so forth. In this sense, there is not much difference between
a time sequence and its z-transform. For example, the z-transform of the DT signal shown in
Figure 7.1(a) is 1z 0 = 1. The z-transform of the DT signal shown in Figure 7.1(b) is
3z^0 + (−2)z^−1 + 0 × z^−2 + 5z^−3 = 3 − 2z^−1 + 5z^−3
where we have interchanged the order of summations. Let us define l := n − k. Then the
preceding equation can be written as
Y(z) = Σ_{k=0}^{∞} [ Σ_{l=−k}^{∞} h[l]z^−l ] u[k]z^−k
which can be reduced, using the causality condition h[l] = 0 for l < 0, to

Y(z) = Σ_{k=0}^{∞} [ Σ_{l=0}^{∞} h[l]z^−l ] u[k]z^−k = [ Σ_{l=0}^{∞} h[l]z^−l ] [ Σ_{k=0}^{∞} u[k]z^−k ]
Thus we have
Y(z) = H(z) × U(z) = H(z)U(z)          (7.25)
where
H(z) := Z[h[n]] = Σ_{n=0}^{∞} h[n]z^−n          (7.26)
It is called the DT transfer function of the system. It is, by definition, the z-transform of
the impulse response h[n]. Likewise Y (z) and U (z) are respectively the z-transforms of the
output and input sequences. We give an example.
Example 7.7.1 Consider the DT system described by the equation in (7.5). Its impulse
response was computed as h[0] = 3, h[1] = −2, h[2] = 0, h[3] = 5 and h[n] = 0, for n ≥ 4.
Thus its transfer function is
H(z) = 3z^0 + (−2)z^−1 + 0 × z^−2 + 5z^−3 = (3 − 2z^−1 + 5z^−3)/1          (7.27)
     = (3z^3 − 2z^2 + 5)/z^3          (7.28)
The transfer function in (7.27) is a polynomial of negative powers of z or a negative-power
rational function of z with its denominator equal to 1. Multiplying its numerator and
denominator by z^3, we obtain a rational function of positive powers of z as in (7.28). They are
called respectively a negative-power transfer function and a positive-power transfer function.
Either form can be easily obtained from the other. 2
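For finite sequences, relation (7.25) says that convolving h[n] with u[n] and multiplying the polynomials H(z) and U(z) give the same coefficients. A small numerical check of ours (in Python; the book's programs are MATLAB):

```python
import numpy as np

h = [3, -2, 0, 5]              # H(z) = 3 - 2z^-1 + 0z^-2 + 5z^-3, as in (7.27)
u = [1, 2, -1]                 # an arbitrary input with U(z) = 1 + 2z^-1 - z^-2

y_conv = np.convolve(h, u)     # time domain: y[n] = sum_k h[n-k]u[k]
y_poly = np.polymul(h, u)      # transform domain: coefficients of H(z)U(z)
print(np.array_equal(y_conv, y_poly))   # prints True
```

The two computations are identical term by term, which is exactly the content of (7.25).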
Although the transfer function is defined as the z-transform of the impulse response, it
can also be defined without referring to the latter. Using (7.25), we can also define a transfer
function as

H(z) := Z[output]/Z[input] = Y(z)/U(z), with the system initially relaxed          (7.29)
If we use (7.29) to compute a transfer function, then there is no need to compute its impulse
response. Note that if u[n] = δd [n], then U (z) = 1, and (7.29) reduces to H(z) = Y (z) which
is essentially (7.26). In practical applications, we use (7.29) to compute transfer functions. In
computing a transfer function, we may assume u[n] = 0 and y[n] = 0, for all n < 0. This will
ensure the initial relaxedness of the system.
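Definition (7.29) can be illustrated numerically for the FIR system (7.27): from any input-output pair of the relaxed system, polynomial division of the coefficient sequences recovers H(z). The sketch is ours, not the book's:

```python
import numpy as np

h = np.array([3.0, -2.0, 0.0, 5.0])   # true H(z) = 3 - 2z^-1 + 5z^-3
u = np.array([1.0, 1.0])              # some input, U(z) = 1 + z^-1
y = np.convolve(h, u)                 # output of the initially relaxed system

q, r = np.polydiv(y, u)               # Y(z)/U(z): quotient and remainder
print(np.allclose(q, h), np.allclose(r, 0.0))   # prints True True
```

The remainder is zero because the system is exactly FIR; the quotient reproduces the impulse-response coefficients without ever computing h[n] directly.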
Z[x[n − 1]] = z^−1 Σ_{l=−1}^{∞} x[l]z^−l = z^−1 [ x[−1]z + Σ_{l=0}^{∞} x[l]z^−l ]
            = z^−1 Σ_{n=0}^{∞} x[n]z^−n = z^−1 X(z)
where we have changed the summing variable from l to n and used (7.24). We see that the
time delay of one sampling instant is equivalent to the multiplication of z −1 in the z-transform,
denoted as
Unit-sample delay ←→ z −1
Thus a system which carries out a unit-sample time delay has transfer function z −1 = 1/z.
If a positive-time sequence is delayed by two samples, then its z-transform is its original
z-transform multiplied by z^−2. In general, we have

Z[x[n − k]] = z^−k X(z)          (7.30)

for any integer k ≥ 0 and any positive-time sequence x[n]. If k is a negative integer, the
formula does not hold, as we will show shortly.
If it is initially relaxed at n0 = 0, we may assume u[n] = 0 and y[n] = 0 for n < 0. Thus y[n]
and u[n] are both positive time. Applying the z-transform and using (7.30), we have
H(z) := Y(z)/U(z) = 3 − 2z^−1 + 5z^−3          (7.32)
We mention that for TI nonlinear systems, we may use (7.29) to compute transfer func-
tions. However, different input-output pairs will yield different transfer functions. Thus for
nonlinear systems, the concept of transfer functions is useless. For TI linear systems, no
matter what input-output pair we use in (7.29), the resulting transfer function is always the
same. Thus an LTI system can be described by its transfer function.
To conclude this section, we compute the z-transform of x[n + 1], the advance or shifting
to the left of x[n] by one sample. By definition, we have

Z[x[n + 1]] = Σ_{n=0}^{∞} x[n + 1]z^−n = Σ_{n=0}^{∞} x[n + 1]z^−(n+1)+1
            = z Σ_{n=0}^{∞} x[n + 1]z^−(n+1) = z[X(z) − x[0]]          (7.33)
We give an explanation of (7.33). The sequence x[n + 1] shifts x[n] one sample to the left.
Thus x[0] is located at n = −1 and x[n + 1] is no longer positive time even though x[n] is
positive time. The z-transform of x[n + 1] is defined only for the positive-time part of x[n + 1]
and, consequently, will not contain x[0]. Thus we subtract x[0] from X(z) and then multiply
it by z to yield (7.33). Equation (7.34) can be similarly explained. We see that advancing a
sequence by one sample is equivalent to the multiplication of z in its z-transform, denoted as
Unit-sample advance ←→ z
In other words, a system that carries out unit-sample time advance of the input signal has
transfer function z. Note that such a system is not causal and cannot be implemented in
real time. Moreover because (7.33) and (7.34) involve x[0] and x[1], their uses are not as
convenient as (7.30).
This equation is purely algebraic; it does not involve time advance or time delay. For example,
consider the unit-delay element shown in Figure 7.2(c) with y[n] = u[n − 1]. Although the
7.8. COMPOSITE SYSTEMS: TRANSFORM DOMAIN OR TIME DOMAIN? 157
Figure 7.7: Transform domain: (a) Multiplier. (b) Adder. (c) Unit-delay element.
Figure 7.8: (a) Parallel connection. (b) Tandem connection. (c) Positive feedback connection.
Figure 7.9: Transform domain: (a) Basic block diagram of (7.6). (b) Parallel connection of
three paths.
H(z) = 5z −3 − 2z −1 + 3
Let hi [n], for i = 1, 2, be the impulse responses of two subsystems and h[n] be the impulse
response of an overall system. Then we have
in the tandem connection. We see that (7.38) is more complicated than H1 (z)H2 (z) and its
derivation is also more complicated. More seriously, there is no simple way of relating h[n]
of the feedback connection in Figure 7.8(c) with h1 [n] and h2 [n]. Thus convolutions are not
used in describing composite systems, especially, feedback systems. Similar remarks apply to
ss equations and difference equations. See Reference [C6]. In conclusion, we use exclusively
transfer functions in studying composite systems.
1. A discrete convolution.
2. A difference equation.
3. A state-space (ss) equation.
4. A rational transfer function.
Among them, ss equations are best for computer computation6 and real-time processing.
Rational transfer functions are best for studying composite systems.
If N = ∞ or, equivalently, a system has infinite memory, then the situation changes
completely. Even though every DT LTI system with infinite memory can be described by a
convolution, it may not be describable by the other three descriptions. This will be studied
in the remainder of this text.
Problems
7.1 Consider a DT system whose input u[n] and output y[n] are related by
are preferred because they are more efficient, incur less numerical errors, and can be carried out in real time.
See Reference [C7].
7.3 Consider a DT system whose input u[n] and output y[n] are related by
y[n] = u^2[n]/u[n − 1]
if u[n − 1] ≠ 0, and y[n] = 0 if u[n − 1] = 0. Show that the system satisfies the
homogeneity property but not the additivity property.
7.4* Show that if the additivity property holds, then the homogeneity property holds for any
rational number α. Thus if a system has some “continuity” property, then additivity
implies homogeneity. Thus in checking linearity, we may check only the additivity
property. But we cannot check only the homogeneity property, as Problem 7.3 shows.
7.5 Compute the impulse response of a system described by
y[n] = 2u[n − 1] − 4u[n − 3]
Is it FIR or IIR? Recall that the impulse response is the output y[n] = h[n] of the
system, which is initially relaxed at n0 = 0, excited by the input u[n] = δd [n]. If a
system is causal, and if u[n] = 0 and y[n] = 0, for all n < 0, then the system is initially
relaxed at n0 = 0. In substituting u[n] = δd [n], the condition u[n] = 0, for all n < 0, is
already embedded in computing the impulse response.
7.6 Consider a DT memoryless system described by y[n] = 2.5u[n]. What is its impulse
response? Is it FIR or IIR?
7.7 Design a 4-point moving average filter. What is its impulse response? Is it FIR or IIR?
7.8 Compute impulse responses of systems described by
1. 2y[n] − 3y[n − 1] = 4u[n]
2. y[n] + 2y[n − 1] = −2u[n − 1] − u[n − 2] + 6u[n − 3]
Are they FIR or IIR? Note that in substituting u[n] = δd [n], we have used the condition
u[n] = 0, for all n < 0. In addition, we may assume y[n] = 0, for all n < 0, in computing
impulse responses.
7.9 Verify that if h[n] is given as shown in Figure 7.1(b), then (7.10) reduces to (7.5).
7.10* Verify that the discrete convolution in (7.10) has the additivity and homogeneity prop-
erties.
7.11* Verify that (7.10) is time invariant by showing that it has the shifting property in
(7.2). That is, if the input is u[n − n1 ], for any n1 ≥ 0, then (7.10) yields y[n − n1 ].
Recall that when we apply u[n − n1 ], the system is implicitly assumed to be relaxed at
n1 or u[n] = 0 and y[n] = 0 for all n < n1 .
7.12* For the savings account studied in Example 7.3.2, we have h[n] = (1.00015)n , for n ≥ 0
and h[n] = 0, for n < 0. If we write its discrete convolution as
y[n] = Σ_{k=0}^{∞} h[n − k]u[k] = Σ_{k=0}^{∞} (1.00015)^(n−k) u[k]
then it is incorrect because it does not use the condition h[n] = 0, for n < 0 or h[n−k] =
0, for k > n. The correct expression is
y[n] = Σ_{k=0}^{n} h[n − k]u[k] = Σ_{k=0}^{n} (1.00015)^(n−k) u[k]
Use this equation to develop the recursive difference equation in (7.16). Can you obtain
the same difference equation from the incorrect convolution?
7.9. CONCLUDING REMARKS 161
7.13 Consider Figure 1.5(c) in which the smoother curve is obtained using a 90-day moving
average. What is its convolution or non-recursive difference equation description? De-
velop a recursive difference equation to describe the moving average. Which description
requires less computation?
7.14 Let the output of the adder in Figure 7.2(b) be denoted by y[n]. If we write y[n] = cu[n],
where u is a 3 × 1 column vector with ui , for i = 1 : 3, as its entries, what is c? Is the
adder linear and time-invariant?
7.15 The operator shown in Figure 7.10(a) can also be defined as an adder in which each
entering arrow has a positive or negative sign and its output is −u1 + u2 − u3 . Verify
that Figure 7.10(a) is equivalent to Figure 7.10(b) in which every entering arrow with
a negative sign has been changed to a positive sign by inserting a multiplier with gain
−1 as shown.
7.16 Consider the basic block diagram in Figure 7.4. Develop an ss equation by assigning
from left to right the output of each unit-delay element as x3 [n], x2 [n] and x1 [n]. Is the
ss equation the same as the one in (7.18) and (7.19)? Even though the two ss equations
look different, they are equivalent. See Reference [C6].
7.17 Draw a basic block diagram for the five-point moving average defined in Example 7.3.1
and then develop an ss equation to describe it.
7.19 Consider the savings account described by the first-order difference equation in (7.16).
Define x[n] := y[n − 1]. Can you develop an ss equation of the form shown in (7.22) to
describe the account?
7.20 Draw a basic block diagram for (7.16), assign the output of the unit-delay element as a
state variable, and then develop an ss equation to describe it. Is it the same as the one
developed in Problem 7.19? Note that the block diagram has a loop. A block diagram
is said to have a loop if it has a closed unidirectional path on which a point can travel
along the path and come back to the same point. Note that the block diagram in Figure
7.3 has no loop.
7.21 What are the z-transforms of δd [n − 1] and 2δd [n] − 3δd [n − 4]?
7.22 What is the DT transfer function of the savings account described by (7.16)?
7.23 What is the transfer function of the DT system in Problem 7.5? Compute it using
(7.26) and using (7.29).
7.24 What is the DT transfer function of the five-point moving average defined in Example
7.3.1?
7.25 Can the order of the tandem connection of two SISO systems with transfer functions
Hi (z), with i = 1, 2, be interchanged? Can the order of the tandem connection of two
MIMO systems with transfer function matrices Hi (z), with i = 1, 2, be interchanged?
7.26 Consider the feedback connection shown in Figure 7.11(a). Verify that its transfer
function from U (z) to Y (z) is
H(z) = Y(z)/U(z) = H1(z)/(1 + H1(z)H2(z))
as shown in Figure 7.11(b). You can derive it directly or obtain it from Figure 7.8(c)
using Figure 7.10.
Figure 7.11: (a) Negative feedback system. (b) Overall transfer function.
Chapter 8

CT LTI and Lumped Systems
8.1 Introduction
We developed in the preceding chapter
1. discrete convolutions,
2. difference equations,
3. state-space equations, and
4. rational transfer functions
to describe DT LTI systems with finite memory. In this chapter, we will develop the four
descriptions for the general case. Logically, the next step is to study DT LTI systems with
infinite memory. Unfortunately, other than the savings account discussed in Example 7.3.2,
no suitable DT systems are available for explaining what will be discussed, whereas physical
examples are available in the CT case. Thus we study in this chapter CT LTI systems with
finite and infinite memory.
After introducing the concepts of causality, time-invariance, forced responses, and linearity,
we develop an integral convolution to describe a CT LTI system with memory. This part is
similar to the DT case in the preceding chapter. We then discuss how to model RLC circuits as
LTI systems. Using simple circuit laws, we can develop state-space (ss) equations to describe
such systems. We next show that ss equations can be readily used in computer computation
and be implemented using op-amp circuits. We also give reasons for downplaying convolutions
and high-order differential equations in this text.
In order to develop CT transfer functions, we introduce the Laplace transform. We com-
pute transfer functions for RLC circuits, integrators, and differentiators. Their transfer func-
tions are all rational functions.
We then use transfer functions to classify systems as lumped or distributed. We show that
CT LTI systems with finite memory, except for trivial cases, are distributed and their study
is complicated. Thus we study in this text only CT LTI and lumped systems which have
proper rational transfer functions. We then show that such systems can also be described by
ss equations.
Finally we compare ss equations and transfer functions and show that transfer functions
may not describe systems fully. However under some conditions, there is no difference between
the two descriptions and either one can be used to study systems.
The class of LTI systems is very rich. Some can be described by rational transfer functions;
the rest cannot. The latter class is much larger than the former if the analogy between the
set of irrational numbers and the set of rational numbers holds. We study in this text only
LTI and lumped systems. So does every other text on signals and systems. However the
concept of lumpedness is not discussed in those texts.
164 CHAPTER 8. CT LTI AND LUMPED SYSTEMS
In conclusion, a forced response y(t), for t ≥ 0, is excited exclusively by the input u(t), for t ≥ 0;
whereas, a natural response is excited exclusively by the input u(t), for t < 0. For a natural
response, we require the input to be identically zero for all t ≥ 0. For a forced response, we
require the input to be identically zero for all t < 0 which however can be replaced by a set
of initial conditions or an initial state for the systems to be studied in this text. If the initial
state is zero, then the net effect of u(t), for all t < 0, on y(t), for all t ≥ 0, is zero, even though
u(t) may not be identically zero for all t < 0. See Subsection 7.2.1.
The classification of outputs into natural or forced responses will simplify the development
of mathematical equations for the latter. Note that natural responses relate the input applied
before t0 = 0 and the output appearing after t0 = 0. Because their time intervals are different,
developing equations to relate such inputs and outputs will be complicated. In contrast, forced
responses relate the input and output in the same time interval [0, ∞). Moreover, under the
relaxedness condition, every input excites a unique output. Thus developing equations to
relate them will be relatively simple. In the remainder of this text, we study mainly forced
responses. This is justified because in designing systems, we consider only forced responses.
Before proceeding, we mention that for a memoryless system, there is no initial state and
no natural response. A memoryless system is always initially relaxed and the excited output
is a forced response. On the other hand, if a system has infinite memory, then the input
applied at t1 affects the output for all t larger than t1 all the way to infinity. Consequently
if u(t) is different from zero for some t < 0, the system may not be initially relaxed at any
t2 > 0. The only way to make such a system initially relaxed is to turn it off and then
turn it on again.
be any input-output pair and let t1 be any positive number. If the CT system is time invariant,
then we have
u(t − t1 ), t ≥ t1 → y(t − t1 ), t ≥ t1 (time shifting) (8.2)
This is called the time-shifting property. Recall that the system is implicitly assumed to be
initially relaxed at t1 before applying u(t − t1 ).
We next classify a system to be linear or nonlinear. Consider a CT system. Let
for i = 1, 2, be any two input-output pairs, and let β be any real constant. Then the system
is defined to be linear if it has the following two properties
If not, the system is nonlinear. These two properties are stated in terms of inputs and outputs
without referring to the internal structure of a system. Thus they are applicable to any type of
system: electrical, mechanical, or others.
Consider a CT system. If we can find one or two input-output pairs which do not meet
the homogeneity or additivity property, then we can conclude that the system is nonlinear.
However to conclude a system to be linear, the two properties must hold for all possible
input-output pairs. There are infinitely many of them and there is no way to check them
all, not to mention the requirement that the system be initially relaxed before applying an
input. Thus it is not possible to check the linearity of a system by measurement. However
we can use the conditions to develop a general equation to describe LTI systems as we show
in the next subsection.
This is the same as (2.15) except that T is replaced by a. Now if the system is linear and
time invariant, then we have
The left-hand side of (8.6) equals roughly the input, thus the output of the system is given
by
y(t) ≈ Σ_{n=0}^{∞} u(na) h_a(t − na) a
2 Correlations are intimately related to convolutions. Correlations can be used to measure the matching
between a signal and its time-delayed reflected signal. This is used in radar detection and is outside the scope
of this text.
8.3. MODELING LTI SYSTEMS 167
Let us define τ = na. If a → 0, the pulse δa (t) becomes the impulse δ(t), τ becomes a
continuous variable, a can be written as dτ , the summation becomes an integration, and the
approximation becomes an equality. Thus we have, as a → 0,
y(t) = ∫_0^∞ h(t − τ)u(τ) dτ
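The limit argument above can be checked numerically. The sketch below is ours (the text contains no such program): it approximates the integral by the finite sum Σ u(na)h(t − na)a for a small pulse width a, using an assumed h(t) = e^(−t) and a step input, for which y(t) = 1 − e^(−t) exactly.

```python
import math

def h(t):                       # an assumed causal impulse response
    return math.exp(-t) if t >= 0 else 0.0

def u(t):                       # step input
    return 1.0 if t >= 0 else 0.0

def y_approx(t, a=1e-3):        # Riemann-sum approximation of the convolution
    return sum(u(n * a) * h(t - n * a) * a for n in range(int(t / a) + 1))

t = 2.0
exact = 1.0 - math.exp(-t)      # closed-form y(t) for this h and u
print(abs(y_approx(t) - exact)) # small discretization error, shrinking with a
```

Halving a roughly halves the error, consistent with the sum converging to the integral as a → 0.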
We also mention that convolutions introduced in most texts on signals and systems assume
the form

y(t) = ∫_{τ=−∞}^{∞} h(t − τ)u(τ) dτ          (8.7)
in which the input is applied from t = −∞. Although the use of (8.7), as we will discuss
in Chapter 9, may simplify some derivations, it will suppress some important engineering
concepts. Thus we adopt (8.5). Moreover, we stress that convolutions describe only forced
responses or, equivalently, are applicable only if systems are initially relaxed.
The derivation of (8.5) is instructional because it uses explicitly the conditions of time
invariance and linearity. However, the equation is not used in analysis or design. See also
Section 8.4.3. Thus we will not discuss it further.
ẏ(t) := dy(t)/dt. When the block is stationary (its velocity is zero), we need a certain amount
of force to overcome its static friction to start its movement. Once the block is moving, there
is a constant friction, called the Coulomb friction, to resist its movement. In addition, there
is a friction, called the viscous friction, which is proportional to the velocity as shown with a
dashed line in Figure 8.2(b) or
where f is called the viscous friction coefficient or damping coefficient. It is a linear relation-
ship. Because of the viscous friction between its body and the air, a sky diver may reach a
constant falling speed. If we disregard the static and Coulomb frictions and consider only the
viscous friction, then the mechanical system can be modeled as a linear system within the
elastic limit of the spring.
In conclusion, most physical systems are nonlinear and time-varying. However, within the
time interval of interest and a limited operational range, we can model many physical systems
as linear and time-invariant.4 Thus the systems studied in this text, in fact in most texts,
are actually models of physical systems. Modeling is an important problem. If a physical
system is properly modeled, we can predict the behavior of the physical system from its
model. Otherwise, the behavior of the model may differ appreciably from that of the physical
system as the op-amp circuit in Figure 6.9(b) demonstrated.
circuit is also linear and time invariant. Analogous to the DT case, we can develop a number
of mathematical equations to describe the system. In this section we develop its state-space
(ss) equation description.
To develop an ss equation, we must first select state variables. For RLC circuits, state
variables are associated with energy-storage elements. A resistor whose voltage v(t) and
current i(t) are related by v(t) = Ri(t) is a memoryless element; it cannot store energy5
and its variable cannot be selected as a state variable. A capacitor can store energy in its
electric field and its voltage or current can be selected as a state variable. If we select the
capacitor voltage v(t) as a state variable, then its current is Cdv(t)/dt =: C v̇(t) as shown
in Figure 8.1(b). If we instead selected the capacitor current as a state variable, its voltage would be an
integral of the current, which is inconvenient; thus the current is not used. An inductor can store energy in its magnetic field and its current
or voltage can be selected as a state variable. If we select the inductor current i(t) as a state
variable, then its voltage is Ldi(t)/dt = Li̇(t) as shown in Figure 8.1(b). When assigning a state
variable, it is important to specify the polarity of each voltage and the direction of each current;
otherwise the assignment is incomplete. The procedure for developing an ss equation consists of the following three steps:
1. Assign all capacitor voltages and all inductor currents as state variables.6 If the voltage
of a capacitor with capacitance C is assigned as xi (t), then its current is C ẋi (t). If the
current of an inductor with inductance L is assigned as xj (t), then its voltage is Lẋj (t).
2. Use Kirchhoff’s current or voltage law to express the current or voltage of every resistor
in terms of state variables (but not their derivatives) and, if necessary, the input. If the
expression is for the current (voltage), then its multiplication (division) by its resistance
yields the voltage (current).
3. Use Kirchhoff’s current or voltage law to express each ẋi (t) in terms of state variables
and input.
We use an example to illustrate the procedure. Consider the circuit shown in Figure 8.1(a).
We assign the 5-H inductor current as x1 (t). Then its voltage is 5ẋ1 (t) as shown. Next we
assign the 4-H inductor current as x2 (t). Then its voltage is 4ẋ2 (t). Finally, we assign the
2-F capacitor voltage as x3 (t). Then its current is 2ẋ3 (t). Thus the network has three state
variables. Next we express the 3-Ω resistor’s voltage or current in terms of state variables.
Because the resistor is in series connection with the 4-H inductor, its current is x2 (t). Thus
the voltage across the 3-Ω resistor is 3x2 (t). Note that finding the voltage across the resistor
first will be more complicated. This completes the first two steps of the procedure.
Next, applying Kirchhoff's voltage law along the outer loop of the circuit yields

    5ẋ1(t) + x3(t) − u(t) = 0

which implies

    ẋ1(t) = −0.2x3(t) + 0.2u(t)        (8.10)
Applying Kirchhoff's voltage law along the right-hand-side loop yields

    4ẋ2(t) + 3x2(t) − x3(t) = 0

which implies

    ẋ2(t) = −0.75x2(t) + 0.25x3(t)        (8.11)
Finally, applying Kirchhoff's current law to the node denoted by A yields

    x1(t) = x2(t) + 2ẋ3(t)

which implies

    ẋ3(t) = 0.5x1(t) − 0.5x2(t)        (8.12)
From Figure 8.1(a), we have
y(t) = x3 (t) (8.13)
The equations from (8.10) through (8.13) can be arranged in matrix form as
    [ẋ1(t)]   [ 0     0    −0.2] [x1(t)]   [0.2]
    [ẋ2(t)] = [ 0   −0.75  0.25] [x2(t)] + [ 0 ] u(t)        (8.14)
    [ẋ3(t)]   [0.5  −0.5    0  ] [x3(t)]   [ 0 ]

                       [x1(t)]
    y(t) = [0  0  1]   [x2(t)] + 0 × u(t)                    (8.15)
                       [x3(t)]
These equations are in the general form

    ẋ(t) = Ax(t) + bu(t)        (8.16)
    y(t) = cx(t) + du(t)        (8.17)

where A is a 3 × 3 matrix, b is a 3 × 1 column vector, c is a 1 × 3 row vector, and d is a scalar.
The set of two equations in (8.16) and (8.17) is called a (CT) state-space (ss) equation of
dimension 3 and is the CT counterpart of (7.20) and (7.21). They are identical except that
we have the first derivative in the CT case and the first difference or unit-sample advance in
the DT case. As in the DT case, we call (8.16) a state equation and (8.17) an output equation.
The vector x is called the state or state vector and its components are called state variables.
The scalar d is called the direct transmission part. Note that the output equation in (8.17)
is memoryless. The evolution or dynamics of the system is described by the state equation
in (8.16). The preceding procedure is applicable to most simple RLC circuits. For a more
systematic procedure, see Reference [C6].
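The matrices in (8.14) and (8.15) are easy to enter and evaluate numerically. A sketch in Python with NumPy (the text's own programs use MATLAB; the helper names here are mine):

```python
import numpy as np

# State-space matrices of the circuit in Figure 8.1(a), read off (8.14)-(8.15)
A = np.array([[0.0,  0.0,  -0.2],
              [0.0, -0.75,  0.25],
              [0.5, -0.5,   0.0]])
b = np.array([[0.2], [0.0], [0.0]])
c = np.array([[0.0, 0.0, 1.0]])
d = 0.0

def derivative(x, u):
    """State equation: x_dot = A x + b u."""
    return A @ x + b * u

def output(x, u):
    """Memoryless output equation: y = c x + d u."""
    return (c @ x + d * u).item()

x = np.ones((3, 1))
print(derivative(x, 1.0).ravel())   # [ 0.  -0.5  0. ]
print(output(x, 1.0))               # 1.0
```

Evaluating the state equation at a point is all a numerical integrator needs; this is how the discretization in Subsection 8.4.1 proceeds.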
We next consider the mechanical system shown in Figure 8.2(a). It consists of a block with
mass m connected to a wall through a spring. We consider the applied force u the input and
the displacement y the output. If we disregard the Coulomb and static frictions and consider
only the viscous friction, then it is an LTI system within the elastic limit of the spring. The
spring force is ky(t), where k is the spring constant, and the friction is f dy(t)/dt = f ẏ(t),
where f is the damping or viscous-friction coefficient. The applied force must overcome the
spring force and friction and the remainder is used to accelerate the mass. Thus we have,
using Newton’s law,
u(t) − ky(t) − f ẏ(t) = mÿ(t)
or
mÿ(t) + f ẏ(t) + ky(t) = u(t) (8.18)
where ÿ(t) = d2 y(t)/dt2 is the second derivative of y(t). It is a second-order LTI differential
equation.
For the mechanical system, we can select the position and velocity of the mass as state
variables. The energy associated with position is stored in the spring and the kinetic energy
is associated with velocity. Let us define x1(t) := y(t) and x2(t) := ẏ(t). Then we have, using (8.18),

    ẋ1(t) = x2(t)
    ẋ2(t) = ÿ(t) = −(k/m)x1(t) − (f/m)x2(t) + (1/m)u(t)

These two equations and y(t) = x1(t) can be expressed in matrix form as
    [ẋ1(t)]   [  0      1  ] [x1(t)]   [  0 ]
    [ẋ2(t)] = [−k/m  −f/m ] [x2(t)] + [1/m ] u(t)        (8.19)

    y(t) = [1  0] [x1(t)]  + 0 × u(t)
                  [x2(t)]
It is, as will be discussed in the next chapter, a system with infinite memory. Thus the input
applied at any t1 will affect the output for all t ≥ t1 . Conversely the output at t1 will be
affected by all input applied before and up to t1 .
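The ss matrices of the mechanical system depend only on m, f, and k, so they can be generated by a small helper. A Python sketch (the function name and the sample values m = 2, f = 0.5, k = 1 are my own illustrative choices):

```python
import numpy as np

def mass_spring_damper_ss(m, f, k):
    """Matrices of the ss equation for m*y'' + f*y' + k*y = u, with x1 = y, x2 = y'."""
    A = np.array([[0.0, 1.0],
                  [-k / m, -f / m]])
    b = np.array([[0.0], [1.0 / m]])
    c = np.array([[1.0, 0.0]])
    return A, b, c, 0.0

A, b, c, d = mass_spring_damper_ss(m=2.0, f=0.5, k=1.0)
print(A)            # [[ 0.    1.  ]
                    #  [-0.5  -0.25]]
# The characteristic polynomial of A is s^2 + (f/m)s + k/m, i.e. (8.18) divided by m:
print(np.poly(A))   # ~ [1, 0.25, 0.5]
```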
Consider the output equation in (8.19) at t = t1 or

    y(t1) = [1  0] x(t1) + 0 × u(t1)

where x(t1) = [x1(t1) x2(t1)]′. As mentioned earlier, y(t1) depends on u(t1) and u(t), for
all t < t1 . The output equation however depends only on u(t1 ) and x(t1 ). Thus the state
x at t1 must summarize the effect of u(t), for all t < t1 , on y(t1 ). The input u(t), for all
t < t1 contains infinitely many values of u(t), yet its effect on y(t1 ) can be summarized by
the two values in x(t1 ). Moreover, once x(t1 ) is obtained, the input u(t), for all t < t1 , is no
longer needed and can be discarded. This is the situation in actual operation of real-world
systems. For example, consider the RLC circuit shown in Figure 8.1(a). No matter what
input is applied to the circuit, the output will appear instantaneously and in real time. That
is, the output at t1 will appear when u(t1 ) is applied, and no external device is needed to
store the input applied before t1 . Thus ss equations describe actual operation of real-world
systems.
In particular, the initial state x(0) summarizes the net effect of u(t), for all t < 0, on y(t),
for t ≥ 0. Thus if a system can be described by an ss equation, its initial relaxedness can be
checked from its initial state. For example, the RLC circuit in Figure 8.1(a) is initially relaxed
at t = 0 if the initial currents of the two inductors and the initial voltage of the capacitor are
zero. There is no need to be concerned with the input applied before t = 0. Thus the use of
state is extremely convenient.
where I is the unit matrix of the same order as A (see Problem 3.9). Because (1 + ∆A)x
is not defined, we must use Ix = x before summing x and ∆Ax to yield (I + ∆A)x. If we
compute x(t) and y(t) at t = n∆, for n = 0, 1, 2, . . ., then we have

    x((n + 1)∆) = (I + ∆A)x(n∆) + ∆b u(n∆)
    y(n∆) = cx(n∆) + du(n∆)

or, by suppressing ∆,

    x[n + 1] = Āx[n] + b̄u[n]
    y[n] = cx[n] + du[n]        (8.23)

where Ā := I + ∆A and b̄ := ∆b. Equation (8.23) is the same as the DT ss equation in (7.22),
thus it can be computed recursively using only additions and multiplications as discussed in
Section 7.6.1. Moreover it can be computed in real time if the step size is sufficiently large.
However (8.23) will generate only the values of y(t) at t = n∆. Because the output of a
CT system is defined for all t, we must carry out interpolation. This can be achieved using
MATLAB function plot which carries out linear interpolation. See Figure 2.8(c).
The remaining question is how to select a step size. The procedure is simple. We select
an arbitrary ∆1 and carry out the computation. We then select a smaller ∆2 and repeat the
computation. If the result is different from the one using ∆1 , we repeat the process until the
computed result is indistinguishable from the preceding one.
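The step-size selection procedure just described is easy to automate. A Python/NumPy sketch of the discretization (8.23) with repeated halving of ∆ (the mass-spring-damper values m = 1, k = 1, f = 0.5, the input, and the tolerance 10⁻³ are my own illustrative choices):

```python
import numpy as np

def euler_response(A, b, c, d, u, x0, T, dt):
    """Simulate x[n+1] = (I + dt*A) x[n] + dt*b*u(n*dt), y[n] = c x[n] + d*u(n*dt)."""
    Abar = np.eye(A.shape[0]) + dt * A   # Abar = I + dt*A
    bbar = dt * b                        # bbar = dt*b
    x = np.asarray(x0, dtype=float).copy()
    y = []
    for n in range(int(round(T / dt)) + 1):
        un = u(n * dt)
        y.append(float(c @ x) + d * un)
        x = Abar @ x + bbar * un
    return np.array(y)

# Mass-spring-damper in the form (8.19) with m = 1, k = 1, f = 0.5
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
u = lambda t: np.sin(2 * t)

# Halve the step size until two successive runs agree at the final time.
dt, prev = 0.1, None
while True:
    y_end = euler_response(A, b, c, 0.0, u, [0.0, 0.0], 10.0, dt)[-1]
    if prev is not None and abs(y_end - prev) < 1e-3:
        break
    prev, dt = y_end, dt / 2
```

Because the forward-Euler error shrinks linearly with ∆, the loop is guaranteed to terminate for a stable system.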
The preceding discussion is the basic idea of computer computation of CT ss equations.
The discretization procedure used in (8.21) is the simplest and yields the least accurate result
for a given ∆. However it can yield a result as accurate as any method if we select ∆ to be
sufficiently small. The topic of computer computation is a vast one and will not be discussed
further. We discuss in the following only the use of some MATLAB functions.
Consider the ss equation
    [ẋ1(t)]   [ 0     0    −0.2] [x1(t)]   [0.2]
    [ẋ2(t)] = [ 0   −0.75  0.25] [x2(t)] + [ 0 ] u(t)
    [ẋ3(t)]   [0.5  −0.5    0  ] [x3(t)]   [ 0 ]

                       [x1(t)]
    y(t) = [0  0  1]   [x2(t)] + 0 × u(t)                    (8.24)
                       [x3(t)]
We use the MATLAB function lsim, abbreviation for linear simulation, to compute the
response of the system excited by the input u(t) = sin 2t, for t ≥ 0 and the initial state
x(0) = [0 0.2 − 0.1]0 , where the prime denotes the transpose. We type in an edit window
the following:
Figure 8.3: Output of (8.23) computed using ∆1 = 1 (solid line), using ∆2 = 0.1 (dotted
line), and using ∆3 = 0.01 (dash-and-dotted line).
a=[0 0 -0.2;0 -0.75 0.25;0.5 -0.5 0];b=[0.2;0;0];
c=[0 0 1];d=0;
dog=ss(a,b,c,d);
t1=0:1:80;u1=sin(2*t1);x=[0;0.2;-0.1];
y1=lsim(dog,u1,t1,x);
t2=0:0.1:80;u2=sin(2*t2);
y2=lsim(dog,u2,t2,x);
t3=0:0.01:80;u3=sin(2*t3);
y3=lsim(dog,u3,t3,x);
plot(t1,y1,t2,y2,’:’,t3,y3,’-.’,[0 80],[0 0])
xlabel(’Time (s)’),ylabel(’Amplitude’)
The first two lines express {A, b, c, d} in MATLAB format as in Program 7.1. The third
line defines the system. We call the system ‘dog’, which is defined using the state-space
model, denoted by ss. The fourth line t1=0:1:80 indicates the time interval [0, 80] to be
computed with increment 1 or, equivalently, with step size ∆1 = 1 and the corresponding
input u1=sin(2*t1). We then use lsim to compute the output y1=lsim(dog,u1,t1,x). The
solid line in Figure 8.3 is generated by plot(t1,y1) which also carries out linear interpolation.
We then repeat the computation by selecting ∆2 = 0.1 as its step size and the result y2 is
plotted in Figure 8.3 with a dotted line using plot(t2,y2,’:’). It is quite different from
the solid line. Thus the result obtained using ∆1 = 1 is not acceptable. At this point we
don’t know whether the result using ∆2 = 0.1 will be acceptable. We next select ∆3 = 0.01
and repeat the computation. We then use plot(t3,y3,’-.’) to plot the result in Figure 8.3
using the dash-and-dotted line. It is indistinguishable from the one using ∆2 = 0.1. Thus
we conclude that the response of (8.24) can be computed from its discretized equation if the
step size is selected to be 0.1 or smaller. Note that the preceding three plot functions can be
combined into one as in the second line from the bottom in Program 8.1. The combined plot
function also plots the horizontal axis which is generated by plot([0 80],[0 0]). We save
Program 8.1 as an m-file named f83.m. Typing in the command window >> f83 will yield
Figure 8.3 in a figure window.
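For readers not using MATLAB, SciPy's lsim plays the same role. A Python sketch of the computation in Program 8.1 with step size 0.01 (the matplotlib plotting is omitted):

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# The ss equation (8.24) of the circuit in Figure 8.1(a)
A = [[0, 0, -0.2], [0, -0.75, 0.25], [0.5, -0.5, 0]]
b = [[0.2], [0], [0]]
c = [[0, 0, 1]]
d = [[0]]
sys = StateSpace(A, b, c, d)

x0 = [0, 0.2, -0.1]                 # initial state x(0)
t = np.arange(0, 80.01, 0.01)       # step size 0.01, as concluded in the text
tout, y, x = lsim(sys, np.sin(2 * t), t, X0=x0)

print(y[0])                         # c x(0) = x3(0) = -0.1
```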
The impulse response of a system was defined in Section 8.2.1 as the output of the system,
which is initially relaxed at t = 0, excited by an impulse applied at t = 0 or u(t) = δ(t).
Likewise the step response of a system is defined as the output of the system, which is initially
relaxed at t = 0, excited by a step function applied at t = 0 or u(t) = 1, for all t ≥ 0. MATLAB
contains the functions impulse and step which compute the impulse and step responses of
systems. The program that follows
174 CHAPTER 8. CT LTI AND LUMPED SYSTEMS
Figure 8.4: (Top) Step response of (8.23). (Bottom) Impulse response of (8.23).
generates the step response in Figure 8.4 (top) and impulse response in Figure 8.4 (bottom).
In using lsim, we must select a step size and specify the time interval to be computed. The
functions step and impulse however automatically select step sizes and a time interval to be
computed. They also automatically generate titles and labels. Thus their use is very simple.
We mention that both step and impulse are based on lsim. However they require adaptive
selection of step sizes and automatically stop the computation when the responses hardly
change. We also mention that impulse is applicable only for ss equations with d = 0 or,
more precisely, simply ignores d 6= 0. If d = 0, the impulse response can be generated using a
nonzero initial state and u(t) = 0, for all t ≥ 0. See Problem 8.6. Thus no impulse is needed
in generating impulse responses.
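The remark that, when d = 0, the impulse response equals the zero-input response starting from x(0) = b can be checked numerically. A Python/SciPy sketch using the matrices of (8.24):

```python
import numpy as np
from scipy.signal import StateSpace, impulse, lsim

A = [[0, 0, -0.2], [0, -0.75, 0.25], [0.5, -0.5, 0]]
b = [[0.2], [0], [0]]
c = [[0, 0, 1]]
sys = StateSpace(A, b, c, [[0]])    # d = 0

t = np.linspace(0, 70, 701)
_, y_imp = impulse(sys, T=t)                                   # response to u = delta(t)
_, y_zi, _ = lsim(sys, np.zeros_like(t), t, X0=[0.2, 0, 0])    # u = 0, x(0) = b

print(np.max(np.abs(y_imp - y_zi)))   # essentially zero: the two responses coincide
```

An impulse applied at t = 0 transfers the state instantaneously from x(0−) = 0 to x(0+) = b, which is why the two computations agree.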
Note that the integration upper limit ∞ must be changed to t, otherwise the second integration
is incorrect because it does not use the condition h(t) = 0, for t < 0. We see that the
convolution in (8.25) is very complex. More seriously, to compute y(t1 ), we need u(t) for all t
in [0, t1 ]. Thus using (8.25) to compute y(t1 ) requires the storage of all input applied before
t1 . This is not done in practice. Thus even though a convolution describes a system, it does
not describe its actual processing.
We next count the numbers of multiplications needed in computer computation of (8.25).
Before computing, the convolution must be discretized as
    y(N∆) = −2u(N∆) + Σ_{n=0}^{N} 3.02e^{−(N−n)∆} cos[3(N − n)∆ − 0.11] u(n∆)∆        (8.26)
Because the summation consists of (N + 1) terms and each term requires 6 multiplications,
each y(N ∆) requires 6(N + 1) + 1 multiplications. If we compute y(N ∆) for N = 0 : N̄ , then
the total number of multiplications is⁸

    Σ_{N=0}^{N̄} (6N + 7) = 6 N̄(N̄ + 1)/2 + 7(N̄ + 1) = 3N̄² + 10N̄ + 7

Because the total number of multiplications is proportional to N̄², it will increase rapidly as
N̄ increases. For example, computing one thousand points of y requires roughly 3 million
multiplications.
We next count the number of multiplications needed in computer computation of the ss
equation in (8.19). Its discretized ss equation is, following (8.21) through (8.23),

    x((n + 1)∆) = (I + ∆A)x(n∆) + ∆b u(n∆)
    y(n∆) = cx(n∆) + du(n∆)

where ∆ is the step size. Computing x((n + 1)∆) requires four multiplications. Thus each y(n∆)
requires a total of 7 multiplications. This number of multiplications is the same for all n.
Thus the total number of multiplications to compute y(n∆), for n = 0 : N̄, is 7(N̄ + 1), which
is much smaller than 3N̄² for N̄ large. For example, computing one thousand points of y
requires only 7 thousand multiplications using the ss equation but 3 million multiplications
using the convolution.
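The two operation counts are easy to verify by brute force. A Python sketch that tallies the multiplications and confirms the closed forms (the function names are mine):

```python
def conv_mults(n_points):
    """Multiplications to evaluate the discretized convolution (8.26):
    each y(N*dt) costs 6(N + 1) + 1 multiplications."""
    return sum(6 * (N + 1) + 1 for N in range(n_points))

def ss_mults(n_points):
    """Multiplications to step the discretized 2-state ss equation: 7 per output point."""
    return 7 * n_points

Nbar = 999                                     # one thousand points: N = 0 : 999
assert conv_mults(Nbar + 1) == 3 * Nbar**2 + 10 * Nbar + 7
assert ss_mults(Nbar + 1) == 7 * (Nbar + 1)
print(conv_mults(1000), ss_mults(1000))        # 3004000 vs 7000
```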
We now compare ss equations and convolutions in the following:
1. State-space equations are much easier to develop than convolutions to describe CT LTI
systems. To develop a convolution, we must compute first its impulse response. The
impulse response can be obtained, in theory, by measurement but cannot be so obtained
in practice. Its analytical computation is not simple. Even if h(t) is available, its actual
employment is complicated as shown in (8.25). In contrast, developing an ss equation to
describe a system is generally straightforward, as demonstrated in Section 8.4.
2. A convolution relates the input and output of a system, and is called an external de-
scription or input-output description. An ss equation is called an internal description
because it describes not only the input and output relationship but also the internal
structure. A convolution is applicable only if the system is initially relaxed. An ss
equation is applicable even if the system is not initially relaxed.
⁸We use the formula Σ_{N=0}^{N̄} N = N̄(N̄ + 1)/2.
for t < t1 , is not needed in determining y(t1 ) because the effect of u(t) for all t < t1
on y(t1 ) is summarized and stored in the state variables. Thus as soon as the input
u(t1 ) excites y(t1 ) and x(t1 ), it is no longer needed. This is the actual operation of the
RLC circuit in Figure 8.1(a). Thus an ss equation describes the actual processing of
real-world systems. But a convolution does not.
5. An ss equation can be more easily modified to describe time varying or nonlinear systems
than a convolution.
Example 8.4.1 Consider the circuit in Figure 8.1(a) and replotted in Figure 8.5.10 The input
is a voltage source u(t) and the output is the voltage y(t) across the 2-F capacitor with the
polarity shown. Let the current passing through the 2-F capacitor be denoted by i1 (t) and
the current passing through the series connection of the 3-Ω resistor and the 4-H inductor be
denoted by i2(t). Then we have

    i1(t) = 2ẏ(t)        (8.28)

and

    y(t) = 3i2(t) + 4i̇2(t)        (8.29)
The current passing through the 5-H inductor is i1 (t) + i2 (t). Thus the voltage across the
inductor is 5i̇1 (t) + 5i̇2 (t). Applying Kirchhoff’s voltage law around the outer loop of Figure
8.5 yields
5i̇1 (t) + 5i̇2 (t) + y(t) − u(t) = 0 (8.30)
9 About fifteen years ago, I saw a questionnaire from a major publisher’s editor asking whether convolution
can be omitted from a text because it is a difficult topic and has turned off many EE students. I regret to say
that I stopped teaching graphical computation of convolution only after 2006.
10 This example may be skipped without loss of continuity.
8.4. STATE-SPACE (SS) EQUATIONS 177
In order to develop a differential equation to relate u(t) and y(t), we must eliminate i1 (t) and
i2 (t) from (8.28) through (8.30). First we substitute (8.28) into (8.30) to yield
10ÿ(t) + 5i̇2 (t) + y(t) = u(t) (8.31)
where ÿ(t) := d2 y(t)/dt2 , and then differentiate it to yield
10y (3) (t) + 5ï2 (t) + ẏ(t) = u̇(t) (8.32)
where y (3) (t) := d3 y(t)/dt3 . The summation of (8.31) multiplied by 3 and (8.32) multiplied
by 4 yields
40y (3) (t) + 5(4ï2 (t) + 3i̇2 (t)) + 30ÿ(t) + 4ẏ(t) + 3y(t) = 4u̇(t) + 3u(t)
which becomes, after substituting the derivative of (8.29),
40y (3) (t) + 30ÿ(t) + 9ẏ(t) + 3y(t) = 4u̇(t) + 3u(t) (8.33)
This is a third-order linear differential equation with constant coefficients. It describes the
circuit in Figure 8.5. Note that (8.33) can be more easily developed using a different method.
See Section 8.7.1. □
We now compare high-order differential equations and ss equations:
1. For a simple system that has only one or two state variables, there is not much difference
in developing a differential equation or an ss equation to describe it. However, for
a system with three or more state variables, it is generally simpler to develop an ss
equation than a single high-order differential equation because the latter requires eliminating
intermediate variables as shown in the preceding example. Furthermore, the
form of ss equations is more compact than the form of differential equations.
2. A high-order differential equation, just as a convolution, is an external description. An
ss equation is an internal description. It describes not only the relationship between the
input and output but also the internal variables.
3. High-order differential equations are not suitable for computer computation because of
the difficulties in discretizing second, third, and higher-order derivatives. State-space
equations involve only the discretization of the first derivative, thus they are more
suitable for computer computation as demonstrated in Subsection 8.4.1.
4. An ss equation can be easily simulated as a basic block diagram and be implemented
using an op-amp circuit as we will discuss in the next section.
5. State-space equations can be more easily extended than high-order differential equations
to describe nonlinear systems.
In view of the preceding reasons, there seems to be no reason to develop and study high-order
differential equations in this text.
Figure 8.6: Time domain: (a) Multiplier. (b) Adder. (c) Integrator.
The element in Figure 8.6(a) is a multiplier with gain α; its input u(t) and output y(t) are related by

    y(t) = αu(t)        (Multiplier)
Note that α can be positive or negative. If α = 1, it is direct transmission and the arrow
and α may be omitted. If α = −1, it changes only the sign of the input and is called an
inverter. In addition, a signal may branch out to two or more signals with gain αi as shown.
The element denoted by a small circle with a plus sign as shown in Figure 8.6(b) is called an
adder. It has two or more inputs denoted by entering arrows and one and only one output
denoted by a departing arrow. Its inputs ui(t), for i = 1 : 3, and output y(t) are related by

    y(t) = u1(t) + u2(t) + u3(t)        (Adder)
The element in Figure 8.6(c) is an integrator. If we assign its input as u(t), then its output
y(t) is given by
    y(t) = ∫_{τ=0}^{t} u(τ) dτ + y(0)        (Integrator)
This equation is not convenient to use. If we assign its output as y(t), then its input u(t) is
given by u(t) = dy(t)/dt = ẏ(t). This is simpler. Thus we prefer to assign the output of an
integrator as a variable. These elements are the CT counterparts of the DT basic elements in
Figure 7.2. They are all linear and time invariant. The multiplier and adder are memoryless.
The integrator has memory. Its employment requires the specification of the initial condition
y(0). Unless stated otherwise, the initial condition of every integrator will be assumed to be
zero.
Integrators are very useful in practice. For example, the width of a table can be measured
using a measuring tape. The distance traveled by an automobile however cannot be so mea-
sured. But we can measure the speed of the automobile by measuring the rotational speed
of the driving shaft of the wheels.11 The distance can then be obtained by integrating the
speed.12 An airplane cannot measure its own velocity without resorting to outside signals,
however it can measure its own acceleration using an accelerometer which will be discussed
in Chapter 10. Integrating the acceleration yields the velocity. Integrating once again yields
the distance. Indeed, integrators are very useful in practice.
We now discuss op-amp circuit implementations of the three basic elements. Multipliers
and adders are implemented as shown in Figures 6.19 and 6.20. Consider the op-amp circuit
11 The rotational speed of a shaft can be measured using centrifugal force, counting pulses, and other
methods.
12 In automobiles, the integration can be carried out mechanically using gear trains.
8.5. CT LTI BASIC ELEMENTS 179
Figure 8.7: (a) Integrator (RC = 1). (b) Differentiator (RC = 1).
Figure 8.8: Time domain: Basic block diagram of (8.14) and (8.15).
shown in Figure 8.7(a). It consists of two op-amp circuits. The right-hand-side circuit is an
inverter as shown in Figure 6.13(b). If we assign its output as x(t), then its input is −x(t) as
shown. Let us assign the input of the left-hand-side op amp as vi (t). Because e− = e+ = 0,
the current passing through the resistor R and entering the inverting terminal is vi (t)/R
and the current passing through the capacitor and entering the inverting terminal is −C ẋ(t).
Because i− (t) = 0, we have
    vi(t)/R − Cẋ(t) = 0    or    vi(t) = RCẋ(t)        (8.34)
If we select, for example, R = 10 kΩ and C = 10⁻⁴ F, then RC = 1 and vi(t) = ẋ(t).
In other words, if the input of the op-amp circuit in Figure 8.7(a) is ẋ(t), then its output is
x(t). Thus the circuit carries out integration and is called an integrator.
8.5.2 Differentiators
The opposite of integration is differentiation. Let us interchange the locations of the capacitor
C and resistor R in Figure 8.7(a) to yield the op-amp circuit in Figure 8.7(b). Its right-hand
half is an inverter. If we assign its output as vo (t), then its input is −vo (t). Let us assign the
input of the left-hand-side op-amp circuit as x(t). Because e− = e+ = 0, the current passing
through the resistor R and entering the inverting terminal is −vo (t)/R and the current passing
through the capacitor and entering the inverting terminal is C ẋ(t). Because i− (t) = 0, we
have
    −vo(t)/R + Cẋ(t) = 0    or    vo(t) = RCẋ(t)
If RC = 1, then the output of Figure 8.7(b) equals the differentiation of its input. Thus the
op-amp circuit in Figure 8.7(b) carries out differentiation and is called a differentiator.
An integrator is causal. Is a differentiator causal? The answer depends on how we define
differentiation. If we define the differentiation as, for ε > 0,

    y(t) = lim_{ε→0} [u(t + ε) − u(t)]/ε

then the output y(t) depends on the future input u(t + ε) and the differentiator is not causal.
However, if we define the differentiation as, for ε > 0,

    y(t) = lim_{ε→0} [u(t) − u(t − ε)]/ε

then the output y(t) does not depend on any future input and the differentiator is causal.
Note that the differentiator is not memoryless; it has a memory of length ε, which approaches
0. The integrator however has infinite memory as we will discuss in the next chapter.
If a signal contains high-frequency noise, then its differentiation will amplify the noise,
whereas its integration will suppress the noise. This is illustrated by an example.
Example 8.5.1 Consider the signal x(t) = sin 2t corrupted by noise. Suppose the noise can
be represented by n(t) = 0.02 sin 100t. Then we have

    u(t) = x(t) + n(t) = sin 2t + 0.02 sin 100t

We plot u(t) in Figures 8.9(a) and (c). Because the amplitude of n(t) is small, we have
u(t) ≈ x(t) and the presence of the noise can hardly be detected from Figure 8.9(a) and (c).
If we apply u(t) to the differentiator in Figure 8.7(b), then its output is
    du(t)/dt = 2 cos 2t + 0.02 × 100 cos 100t = 2 cos 2t + 2 cos 100t        (8.35)
and is plotted in Figure 8.9(b). We see that the amplitude of the noise is greatly amplified
and we can no longer detect dx(t)/dt from Figure 8.9(b). If we apply u(t) to the integrator
in Figure 8.7(a), then its output is

    ∫_{τ=0}^{t} u(τ) dτ = [−(cos 2τ)/2 − (0.02 cos 100τ)/100] |_{τ=0}^{t}
                        = −0.5 cos 2t + 0.5 − 0.0002 cos 100t + 0.0002
                        = −0.5 cos 2t + 0.5002 − 0.0002 cos 100t        (8.36)
and is plotted in Figure 8.9(d). The plot yields essentially the integration of x(t). □
This example shows that differentiators will amplify high-frequency noise, whereas inte-
grators will suppress it. Because electrical systems often contain high-frequency noise, differ-
entiators should be avoided wherever possible in electrical systems. Note that a differentiator
is not one of the basic elements discussed in Figure 8.6.
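The amplification in (8.35) and the suppression in (8.36) can be reproduced numerically. A Python/NumPy sketch that differentiates and integrates the noisy signal and measures the size of the noise contribution (the time grid is my own choice):

```python
import numpy as np

t = np.linspace(0, 6, 600001)           # 0 <= t <= 6 s, dt = 1e-5
x = np.sin(2 * t)                       # signal
u = x + 0.02 * np.sin(100 * t)          # signal plus high-frequency noise

du = np.gradient(u, t)                  # numerical differentiation
dx = np.gradient(x, t)
iu = np.cumsum(u) * (t[1] - t[0])       # crude running integration
ix = np.cumsum(x) * (t[1] - t[0])

# Differentiation boosts the noise term to 2 cos 100t, as large as the signal term:
print(np.max(np.abs(du - dx)))          # ~ 2, per (8.35)
# Integration shrinks it to 0.0002(1 - cos 100t), at most 0.0004, per (8.36):
print(np.max(np.abs(iu - ix)))          # ~ 4e-4
```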
8.6. TRANSFER FUNCTIONS – LAPLACE TRANSFORM 181
Figure 8.9: (a) Signal corrupted by high-frequency noise. (b) Its differentiation. (c) Signal
corrupted by high-frequency noise. (d) Its integration.
Even though differentiators are rarely used in electrical systems, they are widely used
in electromechanical systems. A tachometer is essentially a generator attached to a shaft
which generates a voltage proportional to the angular velocity of the shaft. The rotation of a
mechanical shaft is fairly smooth and the issue of high-frequency noise amplification will not
arise. Tachometers are widely used in control systems to improve performance.
The Laplace transform of a CT signal x(t) is defined as

    X(s) := L[x(t)] := ∫_{t=0}^{∞} x(t)e^{−st} dt        (8.37)
where s is a complex variable, called the Laplace-transform variable. The function X(s) is
called the Laplace transform of x(t). Note that the Laplace transform is defined only for
the positive-time part of x(t). Because the negative-time part is not used, we often assume
x(t) = 0 for t < 0, or x(t) to be positive time. Before proceeding, we mention that the Laplace
transform has the following linearity property:
    L[β1x1(t) + β2x2(t)] = β1X1(s) + β2X2(s)

for any constants βi. Let us apply the Laplace transform to a CT LTI system described by the convolution

    y(t) = ∫_{τ=0}^{∞} h(t − τ)u(τ) dτ        (8.38)
In this equation, it is implicitly assumed that the system is initially relaxed at t = 0 and the
input is applied from t = 0 onward. If the system is causal, then h(t) = 0 for t < 0 and no
output y(t) will appear before the application of the input. Thus all u(t), h(t), and y(t) in
(8.38) are positive time.
Applying the Laplace transform to (8.38) and interchanging the order of integrations yields

    Y(s) = ∫_{τ=0}^{∞} u(τ)e^{−sτ} [∫_{t=0}^{∞} h(t − τ)e^{−s(t−τ)} dt] dτ        (8.39)
Let us introduce a new variable t̄ as t̄ := t − τ where τ is fixed. Then the term inside the
brackets becomes
    ∫_{t=0}^{∞} h(t − τ)e^{−s(t−τ)} dt = ∫_{t̄=−τ}^{∞} h(t̄)e^{−st̄} dt̄ = ∫_{t̄=0}^{∞} h(t̄)e^{−st̄} dt̄
where we have used the fact that h(t̄) = 0, for all t̄ < 0 because of causality. The last
integration is, by definition, the Laplace transform of h(t), that is,
    H(s) := L[h(t)] := ∫_{t=0}^{∞} h(t)e^{−st} dt        (8.40)
Because it is independent of τ , H(s) can be moved outside the integration in (8.39). Thus
(8.39) becomes

    Y(s) = H(s) ∫_{τ=0}^{∞} u(τ)e^{−sτ} dτ
or
Y (s) = H(s) × U (s) = H(s)U (s) (8.41)
where Y (s) and U (s) are the Laplace transforms of the output and input, respectively. The
function H(s) is the Laplace transform of the impulse response and is called the CT transfer
function. Because the convolution in (8.38) is applicable only if the system is initially relaxed,
so is the transfer function. In other words, whenever we use a transfer function, the system
is assumed to be initially relaxed.
Using (8.41), we can also define the transfer function H(s) as
    H(s) = L[output]/L[input] = Y(s)/U(s) |_{initially relaxed}        (8.42)
If we use (8.42), there is no need to compute the impulse response. This is the equation used
in practice to compute transfer functions. In computing the transfer function of a system, we
may assume u(t) = 0 and y(t) = 0, for all t < 0. This will ensure the initial relaxedness of the
system.
We mention that for nonlinear time-invariant systems, we may use (8.42) to compute
transfer functions. However, different input-output pairs will yield different transfer functions
for the same system. Thus for nonlinear systems, the concept of transfer functions is useless.
For LTI systems, no matter what input-output pair is used in (8.42), the resulting transfer
function is always the same. Thus transfer functions can be used to describe LTI systems.
Integral convolutions, ss equations, and high-order differential equations are called time-
domain descriptions. Transfer functions which are based on the Laplace transform are called
the transform-domain description. Equations (8.41) and (8.42) are the CT counterparts of
(7.25) and (7.29) for DT systems, and all discussion in Section 7.8 is directly applicable. That
is, CT transfer functions are most convenient in the study of composite systems including
feedback systems. They will also be used in qualitative analysis and design as we will discuss
in the remainder of this text.
8.7. TRANSFER FUNCTIONS OF RLC CIRCUITS 183
where we have used e−st = 0 at t = ∞ (this will be discussed in Section 9.3) and de−st /dt =
−se−st . Thus we have, using (8.37),
    L[dx(t)/dt] = sX(s) − x(0)        (8.43)
If x(0) = 0, then L[ẋ(t)] = sX(s). Thus differentiation in the time domain is equivalent to
the multiplication by s in the transform domain, denoted as
    d/dt ←→ s
Consequently a differentiator has transfer function s.
Next we compute the Laplace transform of the integration of a signal. Let us define
    g(t) := ∫_{τ=0}^{t} x(τ) dτ        (8.44)
Because x(t) = dg(t)/dt and g(0) = 0, applying (8.43) yields

    L[x(t)] = L[dg(t)/dt] = sG(s) − g(0) = sG(s)

or G(s) = X(s)/s.
Thus integration in the time domain is equivalent to the multiplication by 1/s = s−1 in the
transform domain, denoted as
    ∫_{τ=0}^{t} ←→ s⁻¹
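Both correspondences can be verified symbolically. A Python/SymPy sketch using the sample signal x(t) = e^(−2t) (my choice):

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)
x = sp.exp(-2 * t)                          # a sample positive-time signal
X = sp.laplace_transform(x, t, s, noconds=True)

# Differentiation rule (8.43): L[dx/dt] = s X(s) - x(0)
lhs = sp.laplace_transform(sp.diff(x, t), t, s, noconds=True)
assert sp.simplify(lhs - (s * X - x.subs(t, 0))) == 0

# Integration rule: L[ integral from 0 to t of x ] = X(s)/s
g = sp.integrate(x.subs(t, tau), (tau, 0, t))
lhs = sp.laplace_transform(g, t, s, noconds=True)
assert sp.simplify(lhs - X / s) == 0
print("rules verified")
```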
Figure 8.10: (a) RLC circuit in the transform domain. (b) Equivalent circuit.
a transfer function to describe the circuit. It is however simpler to develop directly the transfer
function using the concept of impedances as we will show in this section.
The voltage v(t) and current i(t) of a resistor with resistance R, a capacitor with capaci-
tance C, and an inductor with inductance L are related, respectively, by
    v(t) = Ri(t)        i(t) = C dv(t)/dt        v(t) = L di(t)/dt
as shown in Figure 8.1(b). Applying the Laplace transform, using (8.43), and assuming zero
initial conditions, we obtain

    V(s) = RI(s)        I(s) = CsV(s)        V(s) = LsI(s)

where V(s) and I(s) are the Laplace transforms of v(t) and i(t). If we consider the current
as the input and the excited voltage as the output, then the transfer functions of R, C,
and L are R, 1/Cs, and Ls respectively. They are called transform impedances or, simply,
impedances.13 If we consider the voltage as the input and the current as its output, then their
transfer functions are called admittances.
Using impedances, the voltage and current of every circuit element can be written as
V(s) = Z(s)I(s) with Z(s) = R for resistors, Z(s) = 1/Cs for capacitors, and Z(s) = Ls
for inductors. These relations involve only multiplications. In other words, the relationship between
the input and output of a circuit element is algebraic in the (Laplace) transform domain;
whereas it is calculus (differentiation or integration) in the time domain. Thus the former is
much simpler. Consequently, the manipulation of impedances is just like the manipulation
of resistances. For example, the resistance of the series connection of R1 and R2 is R1 + R2 .
The resistance of the parallel connection of R1 and R2 is R1 R2 /(R1 + R2 ). Likewise, the
impedance of the series connection of Z1 (s) and Z2 (s) is Z1 (s) + Z2 (s). The impedance of
the parallel connection of Z1 (s) and Z2 (s) is

    Z1(s) Z2(s) / (Z1(s) + Z2(s))
Using these two simple rules we can readily obtain the transfer function of the circuit in Figure
8.1(a). Before proceeding, we draw the circuit in Figure 8.10(a) in the Laplace transform
domain or, equivalently, using impedances. The impedance of the series connection of the
resistor with impedance 3 and the inductor with impedance 4s is 3 + 4s. The impedance
ZAB (s) between the two nodes A and B shown in Figure 8.10(a) is the parallel connection of
(4s + 3) and the capacitor with impedance 1/2s or
    ZAB(s) = (4s + 3)(1/2s) / (4s + 3 + 1/2s) = (4s + 3) / (8s^2 + 6s + 1)
13 In some circuit analysis texts, impedances are defined as R, 1/jωC, and jωL. The definition here is
Using ZAB (s), we can simplify the circuit in Figure 8.10(a) as in Figure 8.10(b). The current
I(s) around the loop is given by
    I(s) = U(s) / (5s + ZAB(s))
Thus the voltage Y (s) across ZAB (s) is
    Y(s) = ZAB(s) I(s) = [ZAB(s) / (5s + ZAB(s))] U(s)
and the transfer function from u(t) to y(t) is
    H(s) = Y(s)/U(s) = ZAB(s) / (5s + ZAB(s))
or
    H(s) = [(4s + 3)/(8s^2 + 6s + 1)] / [5s + (4s + 3)/(8s^2 + 6s + 1)]
         = (4s + 3) / [5s(8s^2 + 6s + 1) + 4s + 3]
         = (4s + 3) / (40s^3 + 30s^2 + 9s + 3)    (8.47)
This transfer function describes the circuit in Figure 8.2(a) or 8.10(a). It is a ratio of two
polynomials and is called a rational transfer function.
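As a quick numerical cross-check (a Python sketch, not from the text), the impedance-based expression and the final rational function (8.47) can be compared at an arbitrary test point:

```python
# Cross-check: the voltage-divider expression built from impedances agrees
# with the rational transfer function (8.47) at an arbitrary value of s.
# The test point s0 is an arbitrary choice for illustration.

def zab(s):
    za = 3 + 4 * s              # series connection of R = 3 and L = 4s
    zc = 1 / (2 * s)            # capacitor impedance 1/(2s)
    return za * zc / (za + zc)  # parallel combination

def h_circuit(s):
    return zab(s) / (5 * s + zab(s))  # divider against the 5s impedance

def h_rational(s):
    return (4 * s + 3) / (40 * s**3 + 30 * s**2 + 9 * s + 3)

s0 = 1.7
assert abs(h_circuit(s0) - h_rational(s0)) < 1e-12
```

Agreement at generic test points is a cheap way to catch algebra slips when reducing a circuit to a single rational function.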
    s^k ←→ d^k/dt^k    (8.48)
that is, the kth derivative in the time domain is equivalent to the multiplication by sk in
the transform domain. Using (8.48), we can transform a transfer function into a differential
equation and vice versa.
Consider the transfer function in (8.47). We write it as

    (40s^3 + 30s^2 + 9s + 3) Y(s) = (4s + 3) U(s)

or

    40s^3 Y(s) + 30s^2 Y(s) + 9s Y(s) + 3Y(s) = 4s U(s) + 3U(s)

which becomes, in the time domain,

    40 y^(3)(t) + 30 ÿ(t) + 9 ẏ(t) + 3 y(t) = 4 u̇(t) + 3 u(t)
This is the differential equation in (8.33). Thus once a transfer function is obtained, we can
readily obtain its differential equation.
Conversely we can readily obtain a transfer function from a differential equation. For
example, consider the differential equation in (8.18) or

    m ÿ(t) + f ẏ(t) + k y(t) = u(t)

which describes the mechanical system in Figure 8.1(a). Applying the Laplace transform and
assuming zero initial conditions, we obtain

    m s^2 Y(s) + f s Y(s) + k Y(s) = U(s)

or

    (m s^2 + f s + k) Y(s) = U(s)
Thus the transfer function of the mechanical system is
    H(s) := Y(s)/U(s) = 1 / (m s^2 + f s + k)    (8.49)
    (s^3 + 2)/(2s + 3),    s − 2,    s^4/(s^2 + 3s + 1)

are improper. The rational functions

    3,    (2s + 5)/(s + 1),    10/(s^2 + 1.2s),    (3s^2 + s + 5)/(2.5s^3 − 5)
are proper. The first two are also biproper, and the last two are strictly proper. Thus proper
rational functions include both biproper and strictly proper rational functions. Note that if
H(s) = N (s)/D(s) is biproper, so is its inverse H −1 (s) = D(s)/N (s).
Properness of H(s) can also be determined from the value of H(s) at s = ∞. The rational
function H(s) is improper if |H(∞)| = ∞, proper if H(∞) is a zero or nonzero constant,
biproper if H(∞) is a nonzero constant, and strictly proper if H(∞) = 0.
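Because properness depends only on the degrees of the numerator and denominator, the classification can be sketched in a few lines (Python, a hypothetical helper, not from the text; nonzero leading coefficients are assumed):

```python
# Classify a rational function from its coefficient lists (highest power
# first, nonzero leading coefficients assumed), mirroring the H(infinity)
# test described in the text.

def classify(num, den):
    deg_n, deg_d = len(num) - 1, len(den) - 1
    if deg_n > deg_d:
        return "improper"        # |H(inf)| = inf
    if deg_n == deg_d:
        return "biproper"        # H(inf) is a nonzero constant
    return "strictly proper"     # H(inf) = 0

print(classify([1, 0, 0, 2], [2, 3]))   # (s^3+2)/(2s+3) -> improper
print(classify([2, 5], [1, 1]))         # (2s+5)/(s+1)   -> biproper
print(classify([10], [1, 1.2, 0]))      # 10/(s^2+1.2s)  -> strictly proper
```

The three calls reproduce the classifications of the examples above.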
Consider the improper rational function H(s) = (s^3 + 2)/(2s + 3). Carrying out direct
division yields

    H(s) = (s^3 + 2)/(2s + 3) = −1.375/(2s + 3) + 0.5s^2 − 0.75s + 1.125    (8.51)
8.8. LUMPED OR DISTRIBUTED 187
If H(s) is the transfer function of a system, then the output y(t) of the system, as will
be discussed later in (8.66), will contain the terms 0.5ü(t) and −0.75u̇, where u(t) is the
input of the system. As discussed in Subsection 8.5.2, differentiations should be avoided
wherever possible in electrical systems. Thus improper rational transfer functions will not
be studied. We study in the remainder of this text only proper rational transfer functions.
Before proceeding, we mention that PID controllers are widely used in control systems. The
transfer function of a PID controller is
k2
H(s) = k1 + + k3 s (8.52)
s
where k1 denotes a proportional gain, k2 , an integration constant, and k3 , a differentiation
constant. Clearly the differentiator k3 s is improper. However, it is only an approximation.
In practice, it is implemented as
k3 s
(8.53)
1 + s/N
where N is a large number. Signals in control systems are generally of low frequency. For
such signals, (8.53) can be approximated as k3 s as we will discuss in Chapter 10. In any case
we study in the remainder of this text only proper rational transfer functions.
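The quality of the approximation in (8.53) at low frequencies can be checked numerically. The following Python sketch (not from the text; the values of k3 and N are arbitrary choices for illustration) evaluates both transfer functions at s = jω:

```python
# Compare the ideal differentiator k3*s with the practical form
# k3*s/(1 + s/N) of (8.53) at s = jw. For w << N the relative error
# is roughly w/N, so the two nearly coincide at low frequencies.

def ideal(w, k3=2.0):
    return 1j * k3 * w                        # k3*s at s = jw

def practical(w, k3=2.0, N=100.0):
    return (1j * k3 * w) / (1 + 1j * w / N)   # (8.53) at s = jw

for w in (0.1, 1.0, 5.0):                     # low-frequency points, w << N
    rel_err = abs(practical(w) - ideal(w)) / abs(ideal(w))
    print(w, rel_err)                         # roughly w/N
```

For ω = 1 and N = 100 the relative error is about one percent, which is why (8.53) is an acceptable substitute for k3 s on low-frequency control signals.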
can be considered lumped. On the other hand, transmission lines can be hundreds or even
thousands of kilometers long and are distributed.
Consider the clock signal shown in Figure 2.5. The signal has a period of 0.5 ns and will
travel 0.5 × 10^−9 × 3 × 10^8 = 0.15 m or 15 cm (centimeters) in one period. For integrated
circuits (IC) of sizes limited to 2 or 3 cm, the responses of the circuits can be considered to be
instantaneous and the circuits can be considered lumped. However if the period of the clock
signal is reduced to 0.05 ns, then the signal will travel 1.5 cm in one period. In this case, IC
circuits cannot be modeled as lumped.
The mechanical system studied in Figure 8.2(a) has the rational function in (8.49) as its
transfer function. Thus it is also a lumped system. The physical sizes of the block and spring
do not arise in their discussion. If a structure can be modeled as a rigid body, then it can be
modeled as lumped. The robotic arm on a space shuttle is very long and is not rigid, thus it
is a distributed system. Moving fluid and elastic deformation are also distributed.
We list in the following the differences between lumped and distributed systems:

    Lumped                               Distributed
    Variables are functions of time      Variables are functions of time and space
    Classical mechanics                  Relativistic mechanics
    Ohm's and Kirchhoff's laws           Maxwell's wave equations
    Rational transfer functions          Irrational transfer functions
    Differential equations               Partial differential equations
    Finite-dim. ss equations             Infinite-dim. or delay-form ss equations
If the analogy between the set of rational numbers and the set of irrational numbers holds,
then the set of rational functions is much smaller than the set of irrational functions. In other
words, the class of LTI lumped systems is much smaller than the class of LTI distributed
systems. The study of LTI distributed systems is difficult. In this text, we study only LTI
lumped systems.14
14 Even though all existing texts on signals and systems claim to study LTI systems, they actually study
only LTI and lumped systems.
8.9 Realizations
We defined an LTI system to be lumped if its transfer function is a rational function of s. We
show in this section that such a system can also be described by an ss equation.
Consider a proper rational transfer function H(s). The problem of finding an ss equation
which has H(s) as its transfer function is called the realization problem. The ss equation is
called a realization of H(s). The name realization is justified because the transfer function
can be built or implemented through its ss-equation realization using an op-amp circuit.
Furthermore, all computation involving the transfer function can be carried out using the ss
equation. Thus the realization problem is important in practice.
We use an example to illustrate the realization procedure. Consider the proper rational
transfer function
    H(s) = Y(s)/U(s) = (b̄1 s^4 + b̄2 s^3 + b̄3 s^2 + b̄4 s + b̄5) / (ā1 s^4 + ā2 s^3 + ā3 s^2 + ā4 s + ā5)    (8.55)
190 CHAPTER 8. CT LTI AND LUMPED SYSTEMS
with ā1 ≠ 0. The rest of the coefficients can be zero or nonzero. We call ā1 the denominator's
leading coefficient. The first step in realization is to write (8.55) as
    H(s) = (b1 s^3 + b2 s^2 + b3 s + b4) / (s^4 + a2 s^3 + a3 s^2 + a4 s + a5) + d =: N(s)/D(s) + d    (8.56)
                                            1.5
    s^4 + 3s^3 + 7.5s^2 + 6s + 2.5 ) 1.5s^4 + 2.5s^3 + 12s^2 + 11.5s − 2.5
                                     1.5s^4 + 4.5s^3 + 11.25s^2 + 9s + 3.75
                                     --------------------------------------
                                            −2s^3 + 0.75s^2 + 2.5s − 6.25
Now we claim that the following ss equation realizes (8.56) or, equivalently, (8.55):
             [ −a2  −a3  −a4  −a5 ]          [ 1 ]
    ẋ(t) =  [   1    0    0    0 ] x(t) +   [ 0 ] u(t)    (8.59)
             [   0    1    0    0 ]          [ 0 ]
             [   0    0    1    0 ]          [ 0 ]

    y(t) = [b1 b2 b3 b4] x(t) + d u(t)
with x(t) = [x1 (t) x2 (t) x3 (t) x4 (t)]′, where the prime denotes the transpose. The number of state variables equals the degree of the
denominator of H(s). This ss equation can be obtained directly from the coefficients in (8.56).
We place the denominator’s coefficients, except its leading coefficient 1, with sign reversed in
the first row of A, and place the numerator’s coefficients, without changing sign, directly as
c. The constant d in (8.56) is the direct transmission part. The rest of the ss equation has a
fixed pattern. The second row of A is [1 0 0 · · ·]. The third row of A is [0 1 0 · · ·] and so
forth. The column vector b is all zero except its first entry which is 1. See Problems 8.19 and
8.20 for a different way of developing (8.59).
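The recipe just described — normalize the leading denominator coefficient, reverse the signs of the remaining denominator coefficients into the first row of A, and read the numerator coefficients into c — can be sketched as follows (Python, a hypothetical helper, not the book's MATLAB):

```python
# Build the controllable-form matrices (A, b, c, d) of (8.59) from
# numerator/denominator coefficient lists, highest power first.

def controllable_form(num, den):
    a1 = den[0]                          # leading denominator coefficient
    den = [x / a1 for x in den]          # normalize so D(s) is monic
    num = [x / a1 for x in num]
    n = len(den) - 1                     # system dimension
    num = [0.0] * (len(den) - len(num)) + num   # pad numerator if needed
    d = num[0]                           # direct-transmission part
    c = [num[i] - d * den[i] for i in range(1, n + 1)]
    A = [[-den[j + 1] for j in range(n)]]       # first row, signs reversed
    for i in range(1, n):                        # shifted-identity rows
        A.append([1.0 if j == i - 1 else 0.0 for j in range(n)])
    b = [1.0] + [0.0] * (n - 1)
    return A, b, c, d

A, b, c, d = controllable_form([3, 5, 24, 23, -5], [2, 6, 15, 12, 5])
print(A[0])  # [-3.0, -7.5, -6.0, -2.5]
print(c)     # [-2.0, 0.75, 2.5, -6.25]
print(d)     # 1.5
```

The printed values agree with the direct division above: d = 1.5 and the remainder coefficients −2, 0.75, 2.5, −6.25 appear in c.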
To show that (8.59) is a realization of (8.56), we must compute the transfer function of
(8.59). We first write the matrix equation explicitly as

    ẋ1(t) = −a2 x1(t) − a3 x2(t) − a4 x3(t) − a5 x4(t) + u(t)
    ẋ2(t) = x1(t)
    ẋ3(t) = x2(t)                                                  (8.60)
    ẋ4(t) = x3(t)
    y(t)  = b1 x1(t) + b2 x2(t) + b3 x3(t) + b4 x4(t) + d u(t)

We see that the four-dimensional state equation in (8.59) actually consists of four first-order
differential equations as shown in (8.60). Applying the Laplace transform and assuming zero
initial conditions yield

    X2(s) = X1(s)/s,    X3(s) = X1(s)/s^2,    X4(s) = X1(s)/s^3

and sX1(s) = −a2 X1(s) − a3 X2(s) − a4 X3(s) − a5 X4(s) + U(s), which implies

    X1(s) = s^3 U(s) / (s^4 + a2 s^3 + a3 s^2 + a4 s + a5)

Substituting these into Y(s) = b1 X1(s) + b2 X2(s) + b3 X3(s) + b4 X4(s) + d U(s) yields

    Y(s) = [(b1 s^3 + b2 s^2 + b3 s + b4) / (s^4 + a2 s^3 + a3 s^2 + a4 s + a5) + d] U(s)
This shows that the transfer function of (8.59) equals (8.56). Thus (8.59) is a realization of
(8.55) or (8.56). The ss equation in (8.59) is said to be in controllable form.
Before proceeding, we mention that if H(s) is strictly proper, then no direct division is
needed and we have d = 0. In other words, there is no direct transmission part from u to y.
Example 8.9.2 Find a realization for the transfer function in Example 8.9.1 or

    H(s) = (3s^4 + 5s^3 + 24s^2 + 23s − 5) / (2s^4 + 6s^3 + 15s^2 + 12s + 5)
         = (−2s^3 + 0.75s^2 + 2.5s − 6.25) / (s^4 + 3s^3 + 7.5s^2 + 6s + 2.5) + 1.5

Using (8.59), its controllable-form realization is

             [ −3  −7.5  −6  −2.5 ]          [ 1 ]
    ẋ(t) =  [  1    0     0    0 ] x(t) +   [ 0 ] u(t)
             [  0    1     0    0 ]          [ 0 ]
             [  0    0     1    0 ]          [ 0 ]

    y(t) = [−2 0.75 2.5 −6.25] x(t) + 1.5 u(t)

We see that the realization can be read out from the coefficients of the transfer function. 2
Example 8.9.3 Find a realization for the transfer function in (8.47). We first normalize its
denominator’s leading coefficient to 1 to yield
    H(s) = (4s + 3)/(40s^3 + 30s^2 + 9s + 3) = (0.1s + 0.075)/(s^3 + 0.75s^2 + 0.225s + 0.075)    (8.64)
It is strictly proper and d = 0.
Using (8.59), we can obtain its realization as
             [ −0.75  −0.225  −0.075 ]          [ 1 ]
    ẋ(t) =  [   1       0       0   ] x(t) +   [ 0 ] u(t)    (8.65)
             [   0       1       0   ]          [ 0 ]

    y(t) = [0 0.1 0.075] x(t) + 0 · u(t)
This three-dimensional ss equation is a realization of the transfer function of the RLC circuit
in Figure 8.2(a) and describes the circuit. 2
The ss equation in (8.14) and (8.15) also describes the circuit in Figure 8.2(a); it however
looks completely different from (8.65). Even so, they are mathematically equivalent. In fact,
it is possible to find infinitely many other realizations for the transfer function. See Reference
[C6]. However the controllable-form realization in (8.65) is most convenient to develop and
to use.
The MATLAB function tf2ss, an acronym for transfer function to ss equation, carries
out realizations. For the transfer function in (8.57), typing in the control window
>> n=[3 5 24 23 -5];de=[2 6 15 12 5];
>> [a,b,c,d]=tf2ss(n,de)
will yield

    a = [-3 -7.5 -6 -2.5; 1 0 0 0; 0 1 0 0; 0 0 1 0]
    b = [1; 0; 0; 0]
    c = [-2 0.75 2.5 -6.25]
    d = 1.5
This is the controllable-form realization in Example 8.9.2. In using tf2ss, there is no need
to normalize the leading coefficient and to carry out direct division. Thus its use is simple
and straightforward.
We mention that the MATLAB functions lsim, step and impulse discussed in Section
8.4.1 are directly applicable to transfer functions. For example, for the transfer function in
(8.63), typing in the command window
>> n=[3 5 24 23 -5];d=[2 6 15 12 5];
>> step(n,d)
will generate in a figure window its step response. However the response is not computed
directly from the transfer function. It is computed from its controllable-form realization.
Using the realization, we can also readily draw a basic block diagram and then implement it
using an op-amp circuit.
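As a rough illustration of computing a response through a realization (a Python sketch, not the book's MATLAB), the step response of (8.65) can be obtained by forward-Euler integration of the state equation. Since H(0) = 0.075/0.075 = 1 in (8.64), the step response should settle near 1:

```python
# Forward-Euler simulation of the step response of the realization (8.65).
# The step size and horizon are arbitrary choices for illustration.

A = [[-0.75, -0.225, -0.075],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
b = [1.0, 0.0, 0.0]
c = [0.0, 0.1, 0.075]

dt, T = 0.001, 100.0
x = [0.0, 0.0, 0.0]          # zero initial state
for _ in range(int(T / dt)):
    # dx/dt = A x + b u with step input u = 1
    dx = [sum(A[i][j] * x[j] for j in range(3)) + b[i] for i in range(3)]
    x = [x[i] + dt * dx[i] for i in range(3)]

y_final = sum(c[i] * x[i] for i in range(3))
print(round(y_final, 2))     # close to 1.0, the value of H(0)
```

This is essentially what step does internally, except that MATLAB uses a far more accurate integration scheme than forward Euler.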
Every proper rational transfer function can be realized as an ss equation of the form in
(8.67). Such an ss equation can be implemented without using differentiators. If a transfer
function is not proper, then the use of differentiators is not avoidable. For example, consider
the improper transfer function in (8.51). We write it as
    H(s) = (s^3 + 2)/(2s + 3) = −0.6875/(s + 1.5) + 1.125 − 0.75s + 0.5s^2
Note that the denominator’s leading coefficient is normalized to 1. Its controllable-form
realization is
ẋ(t) = −1.5x(t) + u(t)
y(t) = −0.6875x(t) + 1.125u(t) − 0.75u̇(t) + 0.5ü(t) (8.66)
The output contains the first and second derivatives of the input. If an input contains high-
frequency noise, then the system will amplify the noise and is not used in practice.
    H(s) = (b1 s^3 + b2 s^2 + b3 s + b4) / (a1 s^3 + a2 s^2 + a3 s + a4)
If we use a realization of H(s) to study the system, then the realization has dimension 3 and
the system has three initial conditions. One may then wonder what the three initial conditions
are. Depending on the coefficients bi and ai , the initial conditions could be u(0), u̇(0), ü(0)
or y(0), ẏ(0), ÿ(0), or their three independent linear combinations. The relationship is
complicated. The interested reader may refer to the second edition of Reference [C6]. The
relationship is not needed in using the ss equation. Moreover, we study only the case where
initial conditions are all zero.
In program 8.1 of Subsection 8.4.2, a system, named dog, is defined as dog=ss(a,b,c,d)
using the ss-equation model. In MATLAB, we may also define a system using its transfer
function. For example, the circuit in Figure 8.1(a) has the transfer function in (8.47). Using
its numerator’s and denominator’s coefficients, we can define
nu=[4 3];de=[40 30 9 3];cat=tf(nu,de);
If we replace dog in Program 8.1 by cat, then the program will generate a figure similar to,
but different from, the one in Figure 8.3. Note that all computations involving transfer functions are carried out
using their controllable-form realizations. However when we use a transfer function, all its
initial conditions must be assumed to be zero. Thus in using cat=tf(nu,de), MATLAB will
automatically ignore the initial conditions or set them to zero. The initial state in Program
8.1 is different from 0, thus the results using dog and cat will be different. They will yield
the same result if the initial state is zero.
Consider the polynomial of degree 4

    D(s) = a4 s^4 + a3 s^3 + a2 s^2 + a1 s + a0    (8.71)

where ai are real numbers, zero or nonzero. The degree of D(s) is defined as the highest
power of s with a nonzero coefficient. Thus the polynomial in (8.71) has degree 4 and we call
a4 its leading coefficient.
A real or complex number λ is defined as a root of D(s) if D(λ) = 0. For a polynomial
with real coefficients, if a complex number λ = α + jβ is a root, so is its complex conjugate
λ∗ = α − jβ. The number of roots equals the degree of its polynomial.
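The conjugate-pair property follows from the fact that, for real coefficients, D(λ*) = D(λ)*. A quick numerical illustration (a Python sketch, not from the text; the test point is arbitrary):

```python
# For a polynomial with real coefficients, evaluating at the conjugate of a
# point gives the conjugate of the value. Hence if D(lam) = 0 then
# D(conj(lam)) = 0 as well: nonreal roots come in conjugate pairs.

def polyval(coeffs, s):
    """Evaluate a polynomial (highest power first) by Horner's rule."""
    r = 0
    for c in coeffs:
        r = r * s + c
    return r

D = [4, -3, 0, 5]   # D(s) = 4s^3 - 3s^2 + 5
lam = 2 + 3j        # arbitrary test point
assert polyval(D, lam.conjugate()) == polyval(D, lam).conjugate()
```

Because conjugation commutes with addition and multiplication, the identity holds exactly, not just approximately.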
The roots of a polynomial of degree 1 or 2 can be readily computed by hand. Computing
the roots of a polynomial of degree 3 or higher is complicated. It is best delegated to a
computer. In MATLAB, the function roots computes the roots of a polynomial. For example,
consider the polynomial of degree 3

    D(s) = 4s^3 − 3s^2 + 5

Typing in MATLAB

>> roots([4 -3 0 5])

will yield the three roots, approximately,

    −0.8766,    0.8133 + 0.8743j,    0.8133 − 0.8743j

The polynomial has one real root and one pair of complex-conjugate roots. Thus the polynomial D(s) can be factored as

    D(s) = 4(s + 0.8766)(s − 0.8133 − 0.8743j)(s − 0.8133 + 0.8743j)

Do not forget the leading coefficient 4 because MATLAB actually computes the roots of
D(s)/4. Note that D(s), −2D(s), 0.7D(s), and D(s)/4 all have, by definition, the same set
of roots.
A rational function is a ratio of two polynomials. Its degree can be defined as the larger
of its denominator’s and numerator’s degrees. If a rational function is proper, then its degree
reduces to the degree of its denominator. This definition however will not yield a unique
degree without qualification. For example, consider the proper rational functions
    N(s) = (s + 2)(s − 1) = s^2 + s − 2    and    D(s) = 2s^3 + 4s + 1
The polynomial N (s) has roots −2 and 1. We compute D(−2) = 2(−2)^3 + 4(−2) + 1 = −23 ≠ 0
and D(1) = 2 + 4 + 1 = 7 ≠ 0. Thus D(s) has no root of N (s), and N (s) and D(s) are coprime.
Another way is to use the Euclidean algorithm which is often taught in high schools. Yet
another way is to check the non-singularity of a square matrix, called Sylvester resultant. See
Problem 8.29.
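The Euclidean-algorithm test mentioned above can be sketched as follows (Python, hypothetical helper names, not the book's code; polynomials are coefficient lists, highest power first):

```python
# Coprimeness test by the Euclidean algorithm: repeatedly replace the pair
# (p, q) by (q, remainder of p/q) until the remainder vanishes; p and q are
# coprime exactly when the last nonzero "gcd" is a constant.

def polydiv(num, den):
    """Polynomial long division; returns (quotient, remainder)."""
    num = [float(x) for x in num]
    den = [float(x) for x in den]
    quot = []
    while len(num) >= len(den):
        coef = num[0] / den[0]
        quot.append(coef)
        padded = den + [0.0] * (len(num) - len(den))
        num = [a - coef * b for a, b in zip(num, padded)][1:]
    return quot, num

def coprime(p, q):
    """True if gcd(p, q) is a nonzero constant (no common roots)."""
    while True:
        _, r = polydiv(p, q)
        while r and abs(r[0]) < 1e-9:   # strip (near-)zero leading terms
            r = r[1:]
        if not r:                        # q divides p: gcd is q
            return len(q) == 1           # constant gcd <=> coprime
        p, q = q, r

N = [1, 1, -2]             # N(s) = (s + 2)(s - 1)
D = [2, 0, 4, 1]           # D(s) = 2s^3 + 4s + 1
print(coprime(D, N))       # True: no common roots
print(coprime(N, [1, 2]))  # False: both have the root s = -2
```

In exact arithmetic this is the textbook Euclidean algorithm; the small tolerance guards against floating-point residue when stripping leading coefficients.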
It is strictly proper and its denominator’s leading coefficient is 1. Thus its realization can be
obtained as, using (8.59),
             [ −5  −4   2   8 ]          [ 1 ]
    ẋ(t) =  [  1   0   0   0 ] x(t) +   [ 0 ] u(t)    (8.74)
             [  0   1   0   0 ]          [ 0 ]
             [  0   0   1   0 ]          [ 0 ]

    y(t) = [3 8 10 4] x(t)
Figure 8.11: (a) Unobservable circuit. (aa) Reduced circuit. (b) Uncontrollable circuit. (bb)
Reduced circuit. (c) Uncontrollable and unobservable circuit. (cc) Reduced circuit. (d)
Unobservable circuit. (dd) Reduced circuit.
across the capacitor is zero, the voltage will remain zero no matter what input is applied.
Thus the transfer functions of the two circuits in Figures 8.11(c) and 8.11(cc) are the same.
The circuit in Figure 8.11(d) is the dual of Figure 8.11(a). The capacitor with capacitance
C and the inductor with inductance L connected in parallel with the voltage source u(t) can
be deleted in computing the transfer function of the network. In conclusion, all circuits in
Figures 8.11(a) through (d) cannot be described fully by their transfer functions.
    Ha(s) = R(1/C1 s) / (R + 1/C1 s) = R / (R C1 s + 1)    (8.77)
It has degree 1, which equals the number of energy storage elements in the RC circuit in Figure
8.11(aa). Thus the circuit is completely characterized by the transfer function. The circuit in
Figure 8.11(a) also has (8.77) as its transfer function but has three energy storage elements
(two capacitors and one inductor). Thus the circuit is not completely characterized by its
transfer function.
For the circuit in Figure 8.11(bb), we have Y (s) = Hb (s)U (s) with
    Hb(s) = (1/C1 s) / (R + 1/C1 s) = 1 / (R C1 s + 1)    (8.78)
It has degree one. Thus the transfer function characterizes completely the circuit in Figure
8.11(bb). To find the transfer function of the circuit in Figure 8.11(b), we first assume that
the circuit is initially relaxed, or the current in the LC loop is zero. Now no matter what input
is applied, the current in the LC loop will remain zero. Thus the circuit in Figure 8.11(b)
also has (8.78) as its transfer function but is not completely characterized by it. Likewise, the
circuits in Figures 8.11(d) and (dd) have (8.78) as their transfer function. The latter is
completely characterized by (8.78), but the former is not.
For the circuit in Figure 8.11(cc), we have Y (s) = Hc (s)U (s) with
    Hc(s) = (2 × 2)/(2 + 2) = 4/4 = 1    (8.79)
It has degree zero and characterizes completely the circuit in Figure 8.11(cc) because it has
no energy storage element. On the other hand, the circuit in Figure 8.11(c) has one energy
storage element and is not completely characterized by its transfer function given in (8.79).
What is the physical significance of complete characterization? In practice, we try to
design a simplest possible system to achieve a given task. As far as the input and output
are concerned, all circuits on the left-hand side of Figure 8.11 can be replaced by the simpler
circuits on the right-hand side. In other words, the circuits on the left-hand side have some
redundancies or some unnecessary components, and we should not design such systems. It
turns out that such redundancies can be detected from their transfer functions. If the number
of energy storage elements in a system is larger than the degree of its transfer function, then
the system has some redundancy. Most practical systems, unless inadvertently designed, have
no redundant components and are completely characterized by their transfer functions.18
To conclude this subsection, we mention that complete characterization deals only with
energy-storage elements; it pays no attention to non-energy storage elements such as resistors.
18 The redundancy in this section is different from the redundancy purposely introduced for safety reasons,
such as using two or three identical systems to achieve the same control in the space station.
8.11. DO TRANSFER FUNCTIONS DESCRIBE SYSTEMS FULLY? 199
For example, the circuit in Figure 8.11(cc) is completely characterized by its transfer function
H(s) = 1. But the four resistors with resistance 1 can be replaced by a single resistor.
Complete characterization does not deal with this problem.
1. Integral convolutions
2. High-order differential equations
3. State-space (ss) equations
4. Rational transfer functions
The first three are in the time domain. The last one is in the transform domain. The ss
equation is an internal description; the other three are external descriptions. If a system is
completely characterized by its transfer function, then the transfer function and state-space
descriptions of the system are equivalent.
State-space (ss) equations can be
The preceding four merits were developed without using any analytical property of ss equations. Thus the discussion was self-contained. The other three descriptions each lack at least one of
the four merits. Thus ss equations are most important in computation and implementation.
Are ss equations important in discussing general properties of systems and in design? This
will be answered in the next chapter.
A system is modeled as a black box as shown in Figure 6.1. If the system is LTI and
lumped, then it can be represented as shown in Figure 8.13(a) in the transform domain and
in Figure 8.13(b) in the time domain. The latter is a graphical representation of an ss equation
in which a single line denotes a single variable and double lines denote two or more variables.
The output y(t1 ) in Figure 8.13(b) depends on u(t1 ) and x(t1 ) and will appear at the same
time instant as u(t1 ). If we use a convolution, then the output y(t1 ) will depend on u(t),
for t ≤ t1 and the system requires an external memory to memorize past input. If we use a
high-order differential equation, then the representation will involve feedback from the output
and its derivatives. This is not the case in using an ss equation. Thus the best representation
of a system in the time domain is the one shown in Figure 8.13(b).
8.12. CONCLUDING REMARKS 201
Problems
8.1 Consider the RLC circuit in Figure 8.1(a). What is its ss equation if we consider the
voltage across the 5-H inductor as the output?
8.2 Consider the RLC circuit in Figure 8.1(a). What is its ss equation if we consider the
current passing through the 2-F capacitor as the output?
8.3 Find ss equations to describe the circuits shown in Figure 8.14. For the circuit in Figure
8.14(a), choose the state variables x1 and x2 as shown.
Figure 8.14: (a) Circuit with two state variables. (b) Circuit with three state variables.
8.4 Use the MATLAB function lsim to compute the response of the circuit in Figure 8.14(a)
excited by the input u(t) = 1+e−0.2t sin 10t for t in [0, 30] and with the initial inductor’s
current and initial capacitor’s voltage zero. Select at least two step sizes.
8.5 Use the MATLAB functions step and impulse to compute the step and impulse responses
of the circuit shown in Figure 8.14(b).
8.6* The impulse response of the ss equation in (8.16) and (8.17) with d = 0 is, by definition,
the output excited by u(t) = δ(t) and x(0) = 0. Verify that it equals the output excited
by u(t) = 0, for all t ≥ 0 and x(0) = b. Note that if d ≠ 0, then the impulse response
of the ss equation contains dδ(t). A computer however cannot generate an impulse.
Thus in computer computation of impulse responses, we must assume d = 0. See also
Problem 9.3.
8.7 Develop basic block diagrams to simulate the circuits in Figure 8.14.
8.8 If we replace all basic elements in the basic block diagram for Figure 8.14(a) obtained in
Problem 8.7 by the op-amp circuits shown in Figures 6.19, 6.20, and 8.7(a), how many
op amps will be used in the overall circuit?
8.9 Consider the basic block diagram shown in Figure 8.15. Assign the output of each
integrator as a state variable and then develop an ss equation to describe the diagram.
8.10 Consider the op-amp circuit shown in Figure 8.16 with RC = 1 and a, b, and c are
positive real constants. Let the inputs be denoted by vi (t), for i = 1 : 3. Verify that if
the output is assigned as x(t), then we have
Figure 8.16: Op-amp circuit that acts as three inverting amplifiers, one adder and one integrator.
8.11 In Problem 8.10, if the output of the op-amp circuit in Figure 8.16 is assigned as −x(t),
what is its relationship with vi (t), for i = 1 : 3?
8.12 Verify that the op-amp circuit in Figure 8.17 with RC = 1 simulates the RLC circuit
shown in Figure 8.14(a). Note that x1 and x2 in Figure 8.17 correspond to those in
Figure 8.14(a). Compare the number of op amps used in Figure 8.17 with the number used
in Problem 8.8. This shows that op-amp implementations of a system are not unique.
8.13 Use impedances to compute the transfer functions of the circuits in Figure 8.14.
8.14 Classify the following rational functions:
8.15 Find a realization of the transfer function of the circuit in Figure 8.14(a) computed
in Problem 8.13. Compare this ss equation with the one in Problem 8.3. Note that
although they look different, they describe the same RLC circuit and are equivalent.
See Reference [C6]. Implement the ss equation using the structure in Figure 8.17.
Which implementation uses a smaller total number of components? Note that the use
of controllable-form realization for implementation will generally use a smaller number
of components. However the physical meaning of its state variables may not be
transparent.
Figure 8.17: Op-amp circuit implementation of the RLC circuit in Figure 8.14(a).
8.16 Use the transfer function obtained in Problem 8.13 to find a differential equation to
describe the circuit in Figure 8.14(b).
8.17 Find realizations for the transfer functions
    H1(s) = (3s^2 + 1)/(2s^2 + 4s + 5)    and    H2(s) = 1/(2s^3)
8.18 Verify that the one-dimensional ss equation

    ẋ(t) = −a x(t) + u(t)
    y(t) = k x(t) + d u(t)

realizes the transfer function H(s) = k/(s + a) + d. Note that the ss equation is a special
case of (8.59) with dimension 1.
8.19 Consider the transfer function

    H(s) = b / (s^3 + a2 s^2 + a3 s + a4) = Y(s)/U(s)

or, equivalently, the differential equation

    y^(3)(t) + a2 ÿ(t) + a3 ẏ(t) + a4 y(t) = b u(t)

where y^(3)(t) = d^3 y(t)/dt^3 and ÿ(t) = d^2 y(t)/dt^2. Verify that by defining x1(t) := ÿ(t),
x2(t) := ẏ(t), and x3(t) := y(t), we can transform the differential equation into the
following ss equation
following ss equation
    [ ẋ1(t) ]   [ −a2  −a3  −a4 ] [ x1(t) ]   [ b ]
    [ ẋ2(t) ] = [   1    0    0 ] [ x2(t) ] + [ 0 ] u(t)
    [ ẋ3(t) ]   [   0    1    0 ] [ x3(t) ]   [ 0 ]

                        [ x1(t) ]
    y(t) = [0  0  1]    [ x2(t) ]
                        [ x3(t) ]
What will be the form of the ss equation if we define x1(t) = y(t), x2(t) = ẏ(t), and
x3(t) = ÿ(t)?
8.20* Consider the transfer function
    H(s) = (b1 s^2 + b2 s + b3) / (s^3 + a2 s^2 + a3 s + a4) = Y(s)/U(s)    (8.82)
If we define state variables as in Problem 8.19, then the resulting ss equation will involve
u̇(t) and ü(t) and is not acceptable. Let us introduce a new variable v(t) defined by
    V(s) := U(s) / (s^3 + a2 s^2 + a3 s + a4)    (8.83)

or, equivalently, by the differential equation

    v^(3)(t) + a2 v̈(t) + a3 v̇(t) + a4 v(t) = u(t)    (8.84)
Verify
    Y(s)/V(s) = b1 s^2 + b2 s + b3
and
y(t) = b1 v̈(t) + b2 v̇(t) + b3 v(t) (8.85)
Note that (8.85) can be directly verified from (8.83) and (8.84) without using transfer
functions but will be more complex. Verify that by defining x1 (t) = v̈(t), x2 (t) = v̇(t),
and x3 (t) = v(t), we can express (8.84) and (8.85) as
    [ ẋ1(t) ]   [ −a2  −a3  −a4 ] [ x1(t) ]   [ 1 ]
    [ ẋ2(t) ] = [   1    0    0 ] [ x2(t) ] + [ 0 ] u(t)
    [ ẋ3(t) ]   [   0    1    0 ] [ x3(t) ]   [ 0 ]

                           [ x1(t) ]
    y(t) = [b1  b2  b3]    [ x2(t) ]
                           [ x3(t) ]
This is the ss equation in (8.59) with d = 0. Note that the relationships between xi (t)
and {u(t), y(t)} are complicated. Fortunately, we never need the relationships in using
the ss equation.
8.21 Find realizations of

    H1(s) = 2(s − 1)(s + 3) / (s^3 + 5s^2 + 8s + 6)    and    H2(s) = s^3 / (s^3 + 2s − 1)
Are they minimal? If not, find a minimal one.
8.22* The transfer function of (8.19) was computed using MATLAB in Subsection 8.9.1 as
    H(s) = (−2s^2 − s − 16) / (s^2 + 2s + 10)
Now verify it analytically using (8.70) and the following inversion formula

    [ a  b ]^(-1)      1      [  d  -b ]
    [ c  d ]       = -------  [ -c   a ]
                     ad - bc
8.23 What are the transfer functions from u to y of the three circuits in Figure 8.18? Is every
circuit completely characterized by its transfer function? Which circuit is the best to
be used as a voltage divider?
8.24 Consider the circuit shown in Figure 8.19(a) where the input u(t) is a current source
and the output can be the voltage across or the current passing through the impedance
Z4 . Verify that either transfer function is independent of the impedance Z1 . Thus in
circuit design, one should not connect any impedance in series with a current source.
Figure 8.19: (a) Circuit with redundant Z1 . (b) Circuit with redundant Z1 .
8.25 Consider the circuit shown in Figure 8.19(b) where the input u(t) is a voltage source
and the output can be the voltage across or the current passing through the impedance
Z4 . Verify that either transfer function is independent of the impedance Z1 . Thus in
circuit design, one should not connect any impedance in parallel with a voltage source.
8.26 Consider the circuit shown in Figure 8.20(a). (a) Because the two capacitor voltages are
identical, assign only one capacitor voltage as a state variable and then develop a one-
dimensional ss equation to describe the circuit. Note that if we assign both capacitor
voltages as state variables xi (t), for i = 1, 2, then x1 (t) = x2 (t), for all t, and they
cannot act independently. (b) Compute the transfer function of the circuit. Is the
circuit completely characterized by its transfer function?
Figure 8.20: (a) Circuit in which not all capacitor voltages are assigned as state variables.
(b) Circuit in which not all inductor currents are assigned as state variables.
8.27 Consider the circuit shown in Figure 8.20(b). (a) Because the two inductor currents
are identical, assign only one inductor current as a state variable and then develop a
two-dimensional ss equation to describe the circuit. (b) Compute the transfer function
of the circuit. Is the circuit completely characterized by its transfer function?
8.28* Consider the circuit shown in Figure 8.21. (a) Verify that if we assign the 1-F capacitor
voltage as x1 (t), the inductor current as x2 (t), and the 2-F capacitor voltage as x3 (t),
8.30 Verify that the RLC circuit in Figure 8.11(a) with L = 2H, C = 4F, C1 = 1F, and
R = 5Ω can be described by the ss equation in (8.80).
8.31 To find the transfer function of the ss equation in (8.80), typing in MATLAB

>> [n,d]=ss2tf(a,b,c,d)

will yield n=0 1 0 0.125 and d=1 0.2 0.125 0.025. What is the transfer function? What
is its degree?
8.32 The convolution in (8.5) can be reduced as

    y(t) = ∫_{τ=0}^{t} h(t − τ) u(τ) dτ

by using the causality condition h(t − τ) = 0, for τ > t. Verify the following commutative
property

    y(t) = ∫_{τ=0}^{t} h(t − τ) u(τ) dτ = ∫_{τ=0}^{t} u(t − τ) h(τ) dτ
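The commutative property is easy to confirm numerically. The following Python sketch (not from the text; the choices h(t) = e^−t and a unit-step input are arbitrary examples) approximates both integrals by a Riemann sum:

```python
import math

# Approximate both orderings of the convolution integral by a left Riemann
# sum and compare them, using h(t) = e^(-t) and a unit-step input.

def conv(f, g, t, dt=1e-3):
    """Left Riemann sum of the convolution integral from 0 to t."""
    total = 0.0
    for k in range(int(t / dt)):
        tau = k * dt
        total += f(t - tau) * g(tau) * dt
    return total

h = lambda t: math.exp(-t)      # impulse response (example)
u = lambda t: 1.0               # unit-step input

y1 = conv(h, u, 2.0)            # integral of h(t - tau) u(tau)
y2 = conv(u, h, 2.0)            # integral of u(t - tau) h(tau)
exact = 1.0 - math.exp(-2.0)    # closed-form value for this example
assert abs(y1 - y2) < 1e-2
assert abs(y1 - exact) < 1e-2
```

The two sums agree up to the discretization error of the Riemann sum, consistent with the commutativity to be verified.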
8.33 Let yq (t) be the output of a system excited by a step input u(t) = 1 for all t ≥ 0. Use
the second integration in Problem 8.32 to verify
    h(t) = dyq(t)/dt
where h(t) is the impulse response of the system.
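As a numerical illustration of this relation (a Python sketch, not from the text), take the example system with impulse response h(t) = e^−t, whose step response is yq(t) = 1 − e^−t; differentiating yq recovers h:

```python
import math

# For h(t) = e^(-t) the step response is yq(t) = 1 - e^(-t); a central
# difference of yq at t0 should recover h(t0).

yq = lambda t: 1.0 - math.exp(-t)   # step response of the example system
h = lambda t: math.exp(-t)          # its impulse response

t0, delta = 1.0, 1e-6
h_est = (yq(t0 + delta) - yq(t0 - delta)) / (2 * delta)  # central difference
assert abs(h_est - h(t0)) < 1e-6
```

This is also a practical way to obtain h(t) in the laboratory: a step input is easy to generate, whereas an impulse is not.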
8.34 Consider the circuit shown in Figure 8.22(a) which consists of two ideal diodes connected
in parallel but in opposite directions. The voltage v(t) and the current i(t) of an ideal
diode are shown in Figure 8.22(b). If the diode is forward biased or v(t) > 0, the diode
acts as short circuit and the current i(t) will flow. If the diode is reverse biased or
v(t) < 0, the diode acts as open circuit and no current can flow. Thus the current of a
diode can flow only in one direction. Is an ideal diode a linear element? Is the system
from u to y linear? What is its transfer function?
Figure 8.22: (a) Linear system (from u to y) that contains nonlinear elements. (b) Charac-
teristic of ideal diode.
Chapter 9
9.1 Introduction
The study of systems consists of four parts: modeling, developing mathematical equations,
analysis, and design. As an introductory course, we study only LTI lumped systems. Such
systems can be described by, as discussed in the preceding chapter, convolutions, high-order
differential equations, ss equations, and rational transfer functions.
The next step in the study is to carry out analysis. There are two types of analyses:
quantitative and qualitative. In quantitative analysis, we are interested in the outputs excited
by some inputs. This can be easily carried out using ss equations and MATLAB as discussed
in the preceding chapter. In qualitative analysis, we are interested in general properties of
systems. We use transfer functions to carry out this study because they are the simplest and
most transparent of the four descriptions. We will introduce the concepts of poles, zeros,
stability, transient and steady-state responses, and frequency responses. These concepts are
essential in design. We will then discuss why we do not use ss equations in this study.
The transfer functions studied in this chapter are limited to proper rational functions with
real coefficients. When we use a transfer function to study a system, the system is implicitly
assumed to be initially relaxed (all its initial conditions are zero) at t = 0 and the input is
applied from t = 0 onward. Recall that t = 0 is the instant we start to study the system and
is selected by us.
210 CHAPTER 9. QUALITATIVE ANALYSIS OF CT LTI LUMPED SYSTEMS
[Figure 9.1: three possible waveforms of the output y(t), amplitude versus time.]
9.1(b). An important question about the two systems is the correctness of the final reading.
Does the final reading give the body's actual temperature or weight? This depends entirely
on the calibration. If the calibration is correct, then the final reading will be accurate.
Another question regarding the two systems is the time for the output to reach the final
reading. We call this time the response time. The response time of the thermometer is roughly
60 seconds and that of the spring scale is about 2 seconds. If a weighing scale took 60 seconds
to reach a final reading, no one would buy such a scale. The response time of mercury-in-glass
thermometers is very long. If we use another technology such as infrared sensing, then the
response time can be shortened significantly. For example, thermometers now used in hospitals
take only 4 seconds or less to give a final digital reading.
Yet another question regarding the two systems is the waveform of y(t) before reaching
the final reading. Generally, y(t) may assume one of the waveforms shown in Figure 9.1. The
reading of the thermometer takes the form in Figure 9.1(a). The spring scale takes the form
in Figure 9.1(b); it oscillates and then settles down to the final value. For some systems, y(t)
may go over and then come back to the final value, without appreciable oscillation, as shown
in Figure 9.1(c). If y(t) goes over the final reading as shown in Figures 9.1(b) and (c), then
the response is said to have overshoot.
The accuracy of the final reading, response time, and overshoot are in fact three standard
specifications in designing control systems. For example, in designing an elevator control,
the elevator floor should line up with the floor of the intended story (accuracy). The speed
of response should be fast but not so fast as to cause discomfort for the passengers. The
overshoot of an elevator will cause a feeling of nausea.
The preceding three specifications are in the time domain. In designing filters, the speci-
fications are given mainly in the frequency domain as we will discuss later.
Keeping the preceding discussion in mind, we will start our qualitative analysis of LTI
lumped systems.
[Figure: poles plotted in the complex plane (real part versus imaginary part); [2] denotes multiplicity 2.]
Figure 9.3: (a) The function e^{1.2t}, for t ≥ 0. (b) The region of convergence of its Laplace
transform (Re s > 1.2).
where c is a real constant and will be discussed shortly. We first use an example to discuss
the issue involved in using (9.5).
Example 9.3.1 Consider the signal x(t) = e^{1.2t}, for t ≥ 0. This function grows exponentially
to ∞ as t → ∞, as shown in Figure 9.3(a). Its Laplace transform is
$$X(s) = \int_0^{\infty} e^{1.2t}e^{-st}\,dt = \int_0^{\infty} e^{-(s-1.2)t}\,dt = \frac{-1}{s-1.2}\,e^{-(s-1.2)t}\Big|_{t=0}^{t=\infty}$$
9.3. SOME LAPLACE TRANSFORM PAIRS 213
$$= \frac{-1}{s-1.2}\left[\,e^{-(s-1.2)t}\Big|_{t=\infty} - e^{-(s-1.2)t}\Big|_{t=0}\,\right] \qquad (9.7)$$
The value of e^{−(s−1.2)t} at t = 0 is 1 for all s. However, the value of e^{−(s−1.2)t} at t = ∞ can
be, depending on s, zero, infinity, or undefined. To see this, we express the complex variable
as s = σ + jω, where σ = Re s and ω = Im s are, respectively, the real and imaginary parts
of s. Then we have
$$e^{-(s-1.2)t} = e^{-(\sigma-1.2)t}\,e^{-j\omega t} = e^{-(\sigma-1.2)t}(\cos\omega t - j\sin\omega t)$$
If σ = 1.2, the function reduces to cos ωt − j sin ωt whose value is not defined as t → ∞. If
σ < 1.2, then e−(σ−1.2)t approaches infinity as t → ∞. If σ > 1.2, then e−(σ−1.2)t approaches
0 as t → ∞. In conclusion, we have
$$e^{-(s-1.2)t}\Big|_{t=\infty} = \begin{cases} \infty \text{ or } -\infty & \text{for Re } s < 1.2 \\ \text{undefined} & \text{for Re } s = 1.2 \\ 0 & \text{for Re } s > 1.2 \end{cases}$$
Thus (9.7) is not defined for Re s ≤ 1.2. However, if Re s > 1.2, then (9.7) reduces to
$$X(s) = \mathcal{L}[e^{1.2t}] = \int_0^{\infty} e^{1.2t}e^{-st}\,dt = \frac{-1}{s-1.2}\,[0-1] = \frac{1}{s-1.2} \qquad (9.8)$$
This is the Laplace transform of e^{1.2t}. □
The Laplace transform in (9.8) is, strictly speaking, defined only for Re s > 1.2. The region
Re s > 1.2 or the region on the right-hand side of the dotted vertical line shown in Figure
9.3(b) is called the region of convergence. The region of convergence is important if we use the
integration formula in (9.6) to compute the inverse Laplace transform of X(s) = 1/(s − 1.2).
For the example, if c is selected to be larger than 1.2, then the formula will yield x(t) = e^{1.2t},
for t ≥ 0 and x(t) = 0, for t < 0, a positive-time function. If c is selected incorrectly, the
formula will yield a signal x(t) = 0, for t > 0 and x(t) ≠ 0, for t ≤ 0, a negative-time
signal. In conclusion, without specifying the region of convergence, (9.6) may not yield the
original x(t), for t ≥ 0. In other words, the relationship between x(t) and X(s) is not one-
to-one without specifying the region of convergence. Fortunately in practical application,
we study only positive-time signals and consider the relationship between x(t) and X(s) to
be one-to-one. Once we obtain a Laplace transform X(s) from a positive-time signal, we
automatically assume the inverse Laplace transform of X(s) to be the original positive-time
signal. Thus there is no need to consider the region of convergence and to use the inversion
formula in (9.6). Indeed when we developed transfer functions in the preceding chapter, the
region of convergence was not mentioned. Nor will it appear again in the remainder of this
text. Moreover, we will automatically set the value of e^{(a−s)t}, for any real or complex a, to
zero at t = ∞.
Before proceeding, we mention that the two-sided Laplace transform of x(t) is defined as
$$X_{II}(s) := \mathcal{L}_{II}[x(t)] := \int_{t=-\infty}^{\infty} x(t)e^{-st}\,dt$$
For such a transform, the region of convergence is essential because the same X_{II}(s) may
correspond to many different two-sided x(t). Thus its study is much more complicated than that of
the (one-sided) Laplace transform introduced in (9.5). Fortunately, the two-sided Laplace transform
is rarely, if ever, used in practice. Thus its discussion is omitted.
In the following we compute the Laplace transforms of some positive-time functions. Their
inverse Laplace transforms are the original positive-time functions. Thus they form one-to-one
pairs.
Example 9.3.2 Find the Laplace transform of the impulse δ(t). By definition, we have
$$\Delta(s) = \int_0^{\infty} \delta(t)e^{-st}\,dt$$
Table 9.1: Laplace transform pairs

    x(t), t ≥ 0                      X(s)
    δ(t)                             1
    1 or q(t)                        1/s
    t                                1/s^2
    t^k (k: positive integer)        k!/s^{k+1}
    e^{-at} (a: real or complex)     1/(s+a)
    t^k e^{-at}                      k!/(s+a)^{k+1}
    sin ω_0 t                        ω_0/(s^2 + ω_0^2)
    cos ω_0 t                        s/(s^2 + ω_0^2)
    t sin ω_0 t                      2ω_0 s/(s^2 + ω_0^2)^2
    t cos ω_0 t                      (s^2 − ω_0^2)/(s^2 + ω_0^2)^2
    e^{-at} sin ω_0 t                ω_0/((s+a)^2 + ω_0^2)
    e^{-at} cos ω_0 t                (s+a)/((s+a)^2 + ω_0^2)
which is, by definition, the Laplace transform of −tx(t). This establishes the formula.
Using the formula and L[e^{−at}] = 1/(s + a), we can establish
$$\mathcal{L}[te^{-at}] = -\frac{d}{ds}\left[\frac{1}{s+a}\right] = \frac{1}{(s+a)^2}$$
$$\mathcal{L}[t^2 e^{-at}] = -\frac{d}{ds}\left[\frac{1}{(s+a)^2}\right] = \frac{2!}{(s+a)^3}$$
$$\vdots$$
$$\mathcal{L}[t^k e^{-at}] = \frac{k!}{(s+a)^{k+1}}$$
where k! = 1 · 2 · 3 · · · · k. We list the preceding Laplace transform pairs in Table 9.1. Note
that all Laplace transforms in the table are strictly proper rational functions except the first
one which is biproper.
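A few entries of Table 9.1 can be spot-checked by evaluating the defining integral numerically at a sample value of s inside the region of convergence. A sketch using scipy (the sample values a = 2, s = 3, ω0 = 4 are arbitrary choices, not from the text):

```python
import numpy as np
from scipy.integrate import quad

a, s, w0 = 2.0, 3.0, 4.0   # arbitrary sample point, s inside the region of convergence

# L[t e^{-at}] should equal 1/(s+a)^2
lhs1, _ = quad(lambda t: t * np.exp(-a * t) * np.exp(-s * t), 0, np.inf)
print(lhs1, 1.0 / (s + a) ** 2)        # both are 0.04

# L[sin(w0 t)] should equal w0/(s^2 + w0^2)
lhs2, _ = quad(lambda t: np.sin(w0 * t) * np.exp(-s * t), 0, np.inf)
print(lhs2, w0 / (s ** 2 + w0 ** 2))   # both are 0.16
```

The numerical integral and the table entry agree to the quadrature tolerance.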
$$H(s) = \frac{s^2 - 10}{2s^2 - 4s - 6}$$
We compute its step response, that is, the output excited by the input u(t) = 1, for t ≥ 0.
The Laplace transform of the input is U (s) = 1/s. Thus the output in the transform domain
is
$$Y(s) = H(s)U(s) = \frac{s^2-10}{2(s^2-2s-3)}\cdot\frac{1}{s} =: \frac{N_y(s)}{D_y(s)}$$
To find the output in the time domain, we must compute the inverse Laplace transform of
Y (s). The procedure consists of two steps: expanding Y (s) as a sum of terms whose inverse
Laplace transforms are available in a table such as Table 9.1, and then looking up the table. Before
proceeding, we must compute the roots of the denominator Dy (s) of Y (s). We then expand
Y (s) as
$$Y(s) = \frac{s^2-10}{2(s+1)(s-3)s} = k + r_1\,\frac{1}{s+1} + r_2\,\frac{1}{s-3} + r_3\,\frac{1}{s} \qquad (9.10)$$
This is called partial fraction expansion. We call k the direct term, and ri , for i = 1 : 3, the
residues. In other words, every residue is associated with a root of Dy (s) and the direct term
is not. Once all ri and k are computed, then the inverse Laplace transform of Y (s) is, using
Table 9.1,
$$y(t) = k\delta(t) + r_1 e^{-t} + r_2 e^{3t} + r_3$$
for t ≥ 0. Note that the inverse Laplace transform of r_3/s = r_3/(s − 0) is r_3 e^{0·t} = r_3, for all
t ≥ 0, which is a step function with amplitude r_3.
We next discuss the computation of k and ri . They can be computed in many ways. We
discuss in the following only the simplest. Equation (9.10) is an identity and holds for any s.
If we select s = ∞, then the equation reduces to
Y (∞) = 0 = k + r1 × 0 + r2 × 0 + r3 × 0
which implies
k = Y (∞) = 0
Thus the direct term is simply the value of Y(∞). It is zero if Y(s) is strictly proper.
Next we select s = −1; then we have
$$Y(-1) = -\infty = k + r_1\times\infty + r_2\times\frac{1}{-1-3} + r_3\times\frac{1}{-1}$$
This equation contains ∞ on both sides of the equality and cannot be used to solve for any ri.
This equation contains ∞ on both sides of the equality and cannot be used to solve any ri .
However, multiplying (9.10) by (s + 1) yields
$$Y(s)(s+1) = \frac{s^2-10}{2(s-3)s} = k(s+1) + r_1 + r_2\,\frac{s+1}{s-3} + r_3\,\frac{s+1}{s}$$
and then substituting s by −1, we obtain
$$r_1 = Y(s)(s+1)\Big|_{s+1=0} = \frac{s^2-10}{2(s-3)s}\bigg|_{s=-1} = \frac{(-1)^2-10}{2(-4)(-1)} = \frac{-9}{8} = -1.125$$
Using the same procedure, we can obtain
$$r_2 = Y(s)(s-3)\Big|_{s-3=0} = \frac{s^2-10}{2(s+1)s}\bigg|_{s=3} = \frac{9-10}{2(4)(3)} = \frac{-1}{24} = -0.0417$$
and
$$r_3 = Y(s)\,s\Big|_{s=0} = \frac{s^2-10}{2(s+1)(s-3)}\bigg|_{s=0} = \frac{-10}{2(1)(-3)} = \frac{-10}{-6} = \frac{5}{3} = 1.6667$$
This completes the solution for k and the ri. Note that the MATLAB function residue carries out
partial fraction expansions. The numerator's coefficients of Y(s) can be expressed in MATLAB as
ny=[1 0 -10]; its denominator's coefficients can be expressed as dy=[2 -4 -6 0]. Typing
in the command window
>> ny=[1 0 -10];dy=[2 -4 -6 0];
>> [r,p,k]=residue(ny,dy)
will yield r=-0.0417 -1.125 1.6667, p=3 -1 0 and k=[ ]. They are the same as com-
puted above. In conclusion, (9.10) can be expanded as
$$Y(s) = -1.125\,\frac{1}{s+1} - 0.0417\,\frac{1}{s-3} + 1.6667\,\frac{1}{s}$$
and its inverse Laplace transform is
$$y(t) = -1.125\,e^{-t} - 0.0417\,e^{3t} + 1.6667$$
for t ≥ 0.
Consider next a system with transfer function
$$H(s) = \frac{-2s^2-s-16}{s^2+2s+10} = \frac{-2s^2-s-16}{(s+1-j3)(s+1+j3)} \qquad (9.11)$$
We compute its impulse response or the output excited by the input u(t) = δ(t). The Laplace
transform of δ(t) is 1. Thus the impulse response in the transform domain is
$$Y(s) = H(s)\cdot 1 = H(s)$$
which implies that the impulse response in the time domain is simply the inverse Laplace
transform of the transfer function. This also follows directly from the definition that the
transfer function is the Laplace transform of the impulse response.
The denominator of (9.11) has the pair of complex conjugate roots −1 ± j3. Thus H(s)
can be factored as
$$H(s) = \frac{-2s^2-s-16}{(s+1-j3)(s+1+j3)} = k + \frac{r_1}{s+1-j3} + \frac{r_2}{s+1+j3}$$
with
$$k = H(\infty) = -2$$
and
$$r_1 = \frac{-2s^2-s-16}{s+1+j3}\bigg|_{s=-1+j3} = 1.5 - j0.1667 = 1.51e^{-j0.11}$$
The computation involves complex numbers and is complicated. It is actually obtained using
the MATLAB function residue. Because all coefficients of (9.11) are real numbers, the
residue r2 must equal the complex conjugate of r1, that is, r2 = r1* = 1.51e^{j0.11}. Thus H(s)
in (9.11) can be expanded in partial fraction expansion as
$$H(s) = -2 + \frac{1.51e^{-j0.11}}{s+1-j3} + \frac{1.51e^{j0.11}}{s+1+j3}$$
and its inverse Laplace transform is, using Table 9.1,
$$h(t) = -2\delta(t) + 3.02\,e^{-t}\cos(3t - 0.11)$$
for t ≥ 0. This is the impulse response used in (8.25). Because h(t) has infinite duration, the
system has infinite memory. □
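The expansion above can be reproduced in Python, where scipy.signal.residue plays the role of MATLAB's residue (a sketch, not part of the original text):

```python
import numpy as np
from scipy.signal import residue

# H(s) = (-2s^2 - s - 16)/(s^2 + 2s + 10), as in (9.11)
r, p, k = residue([-2, -1, -16], [1, 2, 10])

print(r)   # residues 1.5 -/+ j0.1667, i.e., 1.51 e^{-/+ j0.11}
print(p)   # poles -1 +/- j3
print(k)   # direct term [-2]

# Real coefficients force the two residues to be complex conjugates
assert np.isclose(r[0], np.conj(r[1]))
```

The residues come out as a complex-conjugate pair, as the real coefficients of (9.11) require.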
A great deal more can be said regarding the procedure. We will not discuss it further for
the reasons to be given in the next subsection. Before proceeding, we mention that a system
with a proper rational transfer function of degree 1 or higher has infinite memory. Indeed,
the rational transfer function can be expressed as a sum of terms in Table 9.1. The inverse
Laplace transform of each term is of infinite duration. For example, the transfer function of
an integrator is H(s) = 1/s; its impulse response is h(t) = 1, for all t ≥ 0. Thus the integrator
has infinite memory.
are −0.998, −2.2357, and −1.8837 ± 0.1931j. We see that the two coefficients of D(s)
change by less than 0.1%, but all roots of D(s) change their values greatly. Thus the
roots of a polynomial are very sensitive to its coefficients. Any procedure that requires
computing roots is undesirable in computer computation.
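The perturbed polynomial from the truncated passage above is not reproduced here, but the sensitivity is easy to demonstrate with any polynomial having clustered roots. A sketch (the choice (s + 2)^4 is an arbitrary assumption):

```python
import numpy as np

# (s+2)^4 = s^4 + 8s^3 + 24s^2 + 32s + 16 has a quadruple root at -2
d = np.array([1.0, 8.0, 24.0, 32.0, 16.0])
d_pert = d.copy()
d_pert[-1] *= 1.001                     # change one coefficient by 0.1%

roots = np.roots(d)
roots_pert = np.roots(d_pert)

# Distance from each perturbed root to the nearest original root
shift = max(min(abs(rp - r) for r in roots) for rp in roots_pert)
print(shift)   # roughly 0.35: a 0.1% coefficient change moves the roots substantially
```

For a root of multiplicity m, a coefficient perturbation of size ε moves the roots by roughly ε^{1/m}, which is why clustered roots are so sensitive.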
In conclusion, transfer functions are not used directly in computer computation. In con-
trast, computation using ss equations can be carried out in real time, does not involve any
transformation, and involves only additions and multiplications. Thus ss equations are most
suited for computer computation. Consequently computation involving transfer functions
should be transformed into computation involving ss equations. This is the realization prob-
lem discussed in Section 8.9.
9.4. STEP RESPONSES – ROLES OF POLES AND ZEROS 219
for t ≥ 0. Note that k_0 = Y(∞) = 0 and k_u can be computed, using (9.14), as
$$k_u = Y(s)\,s\Big|_{s=0} = H(s)\,\frac{1}{s}\,s\bigg|_{s=0} = H(s)\Big|_{s=0} = H(0) = \frac{-10}{2^3} = -1.25 \qquad (9.20)$$
But we cannot use (9.14) to compute r1 because −2 is not a simple pole. There are other
formulas for computing ri , for i = 1 : 3. They will not be discussed because we are interested
only in the form of the response.2
In conclusion, if H(s) contains a repeated pole at pi with multiplicity 3, then its step
response Y (s) will contain the terms
$$\frac{r_1 s^2 + r_2 s + r_3}{(s-p_i)^3} = \mathcal{L}\left[k_1 e^{p_i t} + k_2 t e^{p_i t} + k_3 t^2 e^{p_i t}\right] \qquad (9.21)$$
for some real ri and ki .
With the preceding discussion, we can now discuss the role of poles and zeros. Consider
the following transfer functions
$$H_1(s) = \frac{4(s-3)^2(s+1+j2)(s+1-j2)}{(s+1)(s+2)^2(s+0.5+j4)(s+0.5-j4)} \qquad (9.22)$$
$$H_2(s) = \frac{-12(s-3)(s^2+2s+5)}{(s+1)(s+2)^2(s^2+s+16.25)} \qquad (9.23)$$
$$H_3(s) = \frac{-20(s^2-9)}{(s+1)(s+2)^2(s^2+s+16.25)} \qquad (9.24)$$
$$H_4(s) = \frac{60(s+3)}{(s+1)(s+2)^2(s^2+s+16.25)} \qquad (9.25)$$
$$H_5(s) = \frac{180}{(s+1)(s+2)^2(s^2+s+16.25)} \qquad (9.26)$$
They all have the same set of poles. In addition, they have Hi (0) = 180/65 = 2.77, for
i = 1 : 5. Even though they have different sets of zeros, their step responses, in the transform
domain, are all of the form
$$Y_i(s) = H_i(s)\,\frac{1}{s} = k_0 + \frac{k_1}{s+1} + \frac{r_2 s + r_3}{(s+2)^2} + \frac{r_4 s + r_5}{(s+0.5)^2 + 4^2} + \frac{k_u}{s}$$
with k_0 = Y_i(∞) = 0 and
$$k_u = H_i(s)\,\frac{1}{s}\,s\bigg|_{s=0} = H_i(0) = 2.77$$
Thus their step responses in the time domain are all of the form
$$y_i(t) = k_1 e^{-t} + k_2 e^{-2t} + k_3 t e^{-2t} + k_4 e^{-0.5t}\sin(4t + k_5) + 2.77 \qquad (9.27)$$
for t ≥ 0. This form is determined entirely by the poles of Hi (s) and U (s). The pole −1
generates the term k1 e−t . The repeated pole −2 generates the terms k2 e−2t and k3 te−2t . The
pair of complex-conjugate poles at −0.5 ± j4 generates the term k4 e−0.5t sin(4t + k5 ). The
input is a step function with Laplace transform U (s) = 1/s. Its pole at s = 0 generates a
step function with amplitude Hi (0) = 2.77. We see that the zeros of Hi (s) do not play any
role in determining the form in (9.27). They affect only the coefficients ki .
We plot the step responses of Hi (s), for i = 1 : 5 in Figure 9.4. They are obtained in
MATLAB by typing in an edit window the following
Figure 9.4: Step responses of the transfer functions in (9.22) through (9.26).
It is then saved with the file name f94.m. Typing >> f94 in the command window will yield
Figure 9.4 in a figure window. Note that the responses are not computed directly from
the transfer functions. They are computed in the time domain using their controllable-form
realizations discussed in Section 8.9. We see that all responses approach 2.77 as t → ∞, but
the responses right after the application of the input are all different, even though they are
all of the form in (9.27). This is due to the different sets of ki. In conclusion, poles
dictate the general form of responses; zeros affect only the coefficients ki. Thus we conclude
that zeros play a lesser role than poles in determining the responses of systems.
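The computation behind Figure 9.4 can be mirrored in Python with scipy.signal (a sketch, not the MATLAB file f94.m; only H5(s) of (9.26) is shown, since all five transfer functions share the same poles and dc gain):

```python
import numpy as np
from scipy import signal

# Common denominator (s+1)(s+2)^2(s^2+s+16.25)
den = np.polymul(np.polymul([1.0, 1.0], np.polymul([1.0, 2.0], [1.0, 2.0])),
                 [1.0, 1.0, 16.25])

# H5(s) = 180/den, from (9.26)
t, y = signal.step(signal.TransferFunction([180.0], den),
                   T=np.linspace(0.0, 20.0, 2000))

print(y[-1])   # approaches H5(0) = 180/65 = 2.77
```

Repeating the computation with the numerators of (9.22) through (9.25) gives responses with the same final value but different transients, exactly as Figure 9.4 shows.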
Figure 9.5: (a) e^{−0.5t} sin(6t + 0.3). (b) sin(6t + 0.3). (c) e^{0.5t} sin(6t + 0.3).
[Figure 9.6: (a) t^2 e^{−0.5t}, which approaches 0 as t → ∞. (b) a t sin(βt + k)-type response, which grows unbounded oscillatorily.]
If α > 0, both e^{αt} and te^{αt} approach ∞ as t → ∞. Thus if a pole, simple or repeated, real or
complex, lies inside the RHP, then its time function approaches infinity as t → ∞.
If α < 0 such as α = −0.5, then L−1 [1/(s + 0.5)2 ] = te−0.5t is a product of ∞ and zero as
t → ∞. To find its value, we must use l’Hôpital’s rule as
$$\lim_{t\to\infty} te^{-0.5t} = \lim_{t\to\infty} \frac{t}{e^{0.5t}} = \lim_{t\to\infty} \frac{1}{0.5e^{0.5t}} = 0$$
Using the same procedure, we can show that L^{−1}[1/(s + 0.5)^3] = 0.5t^2 e^{−0.5t} → 0 as t → ∞. Thus
we conclude that the time function of a pole lying inside the LHP, simple or repeated, real or
complex, will approach zero as t → ∞. This is indeed the case as shown in Figure 9.6(a) for
t2 e−0.5t .3
The situation for poles with α = 0, or pure imaginary poles, is more complex. The time
function of a simple pole at s = 0 is a constant for all t ≥ 0; it will not grow unbounded
nor approach zero as t → ∞. The time response of a simple pair of pure-imaginary poles is
k1 sin(βt + k2), which is a sustained oscillation for all t ≥ 0, as shown in Figure 9.5(b). However,
the time function of a repeated pole at s = 0 is t which approaches ∞ as t → ∞. The time
response of a repeated pair of pure-imaginary poles contains k3 t sin(βt+k4 ) which approaches
∞ or −∞ oscillatorily as t → ∞ as shown in Figure 9.6(b). In conclusion, we have
3 Note that the exponential function e^{αt}, for any negative real α, approaches 0 much faster than t^k,
for any negative integer k, approaches 0 as t → ∞. Likewise, the exponential function e^{αt}, for any positive
real α, approaches ∞ much faster than t^k, for any positive integer k, approaches ∞ as t → ∞. Thus we
have t^{10} e^{−0.0001t} → 0 as t → ∞ and t^{−10} e^{0.0001t} → ∞ as t → ∞. In other words, real exponential
functions dominate polynomial functions as t → ∞.
9.5. STABILITY 223
Figure 9.7: (a) Unstable circuit. (b) and (c) Stable circuits.
We see that the response of a pole approaches 0 as t → ∞ if and only if the pole lies inside
the LHP.
9.5 Stability
This section introduces the concept of stability for systems. In general, if a system is not
stable, it may burn out or saturate (for an electrical system), disintegrate (for a mechanical
system), or overflow (for a computer program). Thus every system designed to process signals
is required to be stable. Let us give a formal definition.
Definition 9.1 A system is BIBO (bounded-input bounded-output) stable or, simply, sta-
ble if every bounded input excites a bounded output. Otherwise, the system is said to be
unstable.4 2
A signal is bounded if it does not grow to ∞ or −∞. In other words, a signal u(t) is
bounded if there exists a constant M1 such that |u(t)| ≤ M1 < ∞ for all t ≥ 0. We first give
an example to illustrate the concept.5
Example 9.5.1 Consider the circuit shown in Figure 9.7(a). The input u(t) is a current
source; the output y(t) is the voltage across the capacitor. The impedances of the inductor
and capacitor are s and 1/(4s), respectively. The impedance of their parallel connection is
4 Some texts define a system to be stable if every pole of its transfer function has a negative real part.
That definition is applicable only to LTI lumped systems. Definition 9.1 is applicable to LTI lumped and
distributed systems and is more widely adopted.
5 Before the application of an input, the system is required to be initially relaxed. Thus the BIBO stability
is defined for forced or zero-state responses. Some texts introduce the concept of stability using a pendulum
and an inverted pendulum or a bowl and an inverted bowl. Strictly speaking, such stability is defined for
natural or zero-input responses and should be defined for an equilibrium state. Such type of stability is called
asymptotic stability or marginal stability. See Reference [C6]. A pendulum is marginally stable if there is no
air friction, and is asymptotically stable if there is friction. An inverted pendulum is neither marginally nor
asymptotically stable.
s · (1/(4s))/(s + 1/(4s)) = s/(4s^2 + 1) = 0.25s/(s^2 + 0.25). Thus the input and output of the
circuit are related by
$$Y(s) = \frac{0.25s}{s^2+0.25}\,U(s) = \frac{0.25s}{s^2+(0.5)^2}\,U(s)$$
If we apply a step input (u(t) = 1, for t ≥ 0), then its output is
$$Y(s) = \frac{0.25s}{s^2+(0.5)^2}\cdot\frac{1}{s} = \frac{0.25}{s^2+(0.5)^2}$$
which implies y(t) = 0.5 sin 0.5t. This output is bounded. If we apply u(t) = sin 3t, then the
output is
$$Y(s) = \frac{0.25s}{s^2+(0.5)^2}\cdot\frac{3}{s^2+9}$$
which implies
y(t) = k1 sin(0.5t + k2 ) + k3 sin(3t + k4 )
for some constants ki . This output is bounded. Thus the outputs excited by the bounded
inputs u(t) = 1 and u(t) = sin 3t are bounded. Even so, we cannot conclude that the circuit
is BIBO stable because we have not yet checked all possible bounded inputs.
Now let us apply the bounded input u(t) = sin 0.5t. Then the output is
$$Y(s) = \frac{0.25s}{s^2+0.25}\cdot\frac{0.5}{s^2+0.25} = \frac{0.125s}{(s^2+0.25)^2}$$
Now because Y(s) has repeated imaginary poles at ±j0.5, its time response is of the form
y(t) = k1 sin(0.5t + k2 ) + k3 t sin(0.5t + k4 )
for some constants ki and with k3 6= 0. The response grows unbounded. In other words, the
bounded input u(t) = sin 0.5t excites an unbounded output. Thus the circuit is not BIBO
stable.6 2
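The growing response can be observed by simulation. A sketch with scipy.signal.lsim, using the transfer function 0.25s/(s^2 + 0.25) derived above:

```python
import numpy as np
from scipy import signal

# H(s) = 0.25s/(s^2 + 0.25): the LC circuit of Figure 9.7(a)
H = signal.TransferFunction([0.25, 0.0], [1.0, 0.0, 0.25])

t = np.linspace(0.0, 200.0, 4001)
u = np.sin(0.5 * t)              # bounded input at the pole frequency
_, y, _ = signal.lsim(H, U=u, T=t)

# The response is 0.125 t sin(0.5t): its envelope grows linearly with t
early = np.max(np.abs(y[t <= 50.0]))
late = np.max(np.abs(y[t >= 150.0]))
print(early, late)               # the late amplitude is several times the early one
```

Any other bounded sinusoidal input, away from the pole frequency, would instead produce a bounded sum of two sinusoids.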
Example 9.5.2 Consider the op-amp circuits in Figures 6.9(a) and (b). Because vo (t) = vi (t),
if an input vi (t) is bounded, so is the output. Thus the two circuits are BIBO stable. Note
that they are stable based on the memoryless model of the op amp. If the op amp is modeled
to have memory, then the former is stable and the latter is not as we will discuss in the next
chapter. 2
Every LTI memoryless system can be described by y(t) = αu(t), for some finite α. Clearly
if u(t) is bounded, so is y(t). Thus such a system is always stable. Other than memoryless
systems, we cannot use Definition 9.1 to check the stability of a system because there are
infinitely many bounded inputs to be checked. Fortunately, stability is a property of a system
and can be determined from its mathematical descriptions.
Theorem 9.1 An LTI system with impulse response h(t) is BIBO stable if and only if h(t)
is absolutely integrable in [0, ∞); that is,
$$\int_0^{\infty} |h(t)|\,dt \le M < \infty \qquad (9.28)$$
The integrand h(t − τ)u(τ) in the first integration in (9.30) may be positive or negative, and
its contributions to the integration may cancel each other out. This cannot happen in the second
integration because its integrand |h(t − τ)||u(τ)| is nonnegative for all t. Thus we have the first inequality
in (9.30). The second inequality follows from |u(t)| ≤ M1 . Let us introduce a new variable
τ̄ := t − τ , where t is fixed. Then we have dτ̄ = −dτ and (9.30) becomes
$$|y(t)| \le -M_1\int_{\bar\tau=t}^{-\infty} |h(\bar\tau)|\,d\bar\tau = M_1\int_{\bar\tau=-\infty}^{t} |h(\bar\tau)|\,d\bar\tau = M_1\int_{\bar\tau=0}^{t} |h(\bar\tau)|\,d\bar\tau \le M_1\int_{\bar\tau=0}^{\infty} |h(\bar\tau)|\,d\bar\tau = M_1 M \qquad (9.31)$$
where we have used the causality condition h(t) = 0, for all t < 0. Because (9.31) holds for all
t ≥ 0, the output is bounded. This shows that under the condition in (9.28), every bounded
input will excite a bounded output. Thus the system is BIBO stable.
Next we show that if h(t) is not absolutely integrable, then there exists a bounded input
that will excite an output which will approach ∞ or −∞ as t → ∞. Because ∞ is not a
number, the way to show |y(t)| → ∞ is to show that no matter how large M2 is, there exists
a t1 with y(t1 ) > M2 . If h(t) is not absolutely integrable, then for any arbitrarily large M2 ,
there exists a t_1 such that
$$\int_{t=0}^{t_1} |h(t)|\,dt \ge M_2$$
If we apply the bounded input u(τ) = sgn[h(t_1 − τ)], then the output at t = t_1 is
$$y(t_1) = \int_{\tau=0}^{t_1} h(t_1-\tau)u(\tau)\,d\tau = \int_{\tau=0}^{t_1} |h(t_1-\tau)|\,d\tau = \int_{\bar\tau=0}^{t_1} |h(\bar\tau)|\,d\bar\tau \ge M_2$$
where we have introduced τ̄ = t_1 − τ, dτ̄ = −dτ, and used the causality condition. This shows
that if h(t) is not absolutely integrable, then there exists a bounded input that will excite an
output with an arbitrarily large magnitude. Thus the system is not stable. This establishes
the theorem.2
Example 9.5.3 Consider the circuit shown in Figure 9.7(a). Its transfer function was com-
puted in Example 9.5.1 as H(s) = 0.25s/(s2 + 0.52 ). Its inverse Laplace transform or the
impulse response of the circuit is, using Table 9.1, h(t) = 0.25 cos 0.5t. Although
$$\int_{t=0}^{\infty} 0.25\cos 0.5t\,dt$$
is finite because the positive and negative areas will cancel out, we have
$$\int_{t=0}^{\infty} |0.25\cos 0.5t|\,dt = \infty$$
See the discussion regarding Figure 3.3. Thus the circuit is not stable according to Theorem
9.1. □
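The criterion of Theorem 9.1 can also be probed numerically: the running integral of |0.25 cos 0.5t| keeps growing, whereas that of a decaying impulse response converges. A sketch (the comparison response e^{−t} is an assumed example, not from the text):

```python
import numpy as np

t = np.linspace(0.0, 400.0, 400001)
dt = t[1] - t[0]

# Running integrals of |h(t)| for the two impulse responses
acc_lc = np.cumsum(np.abs(0.25 * np.cos(0.5 * t))) * dt  # circuit of Figure 9.7(a)
acc_st = np.cumsum(np.abs(np.exp(-t))) * dt              # a stable example, h(t) = e^{-t}

print(acc_lc[len(t) // 2], acc_lc[-1])  # keeps growing without bound
print(acc_st[-1])                       # converges to 1
```

Doubling the integration interval roughly doubles the first integral but leaves the second one unchanged, which is the numerical signature of (non-)absolute integrability.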
Theorem 9.1 is applicable to every LTI lumped or distributed system. However, it is not
used in practice because the impulse responses of systems are generally not available. We will
use it to develop the next theorem.
Theorem 9.2 An LTI lumped system with proper rational transfer function H(s) is stable
if and only if every pole of H(s) has a negative real part or, equivalently, all poles of H(s) lie
inside the left half s-plane.2
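Applied numerically, Theorem 9.2 amounts to a root-finding check on the denominator of H(s). A minimal sketch, using H(s) of (9.11) and the circuit of Figure 9.7(a) as test cases:

```python
import numpy as np

def is_stable(den):
    """Theorem 9.2: an LTI lumped system is stable iff every pole
    (root of the denominator of H(s)) has a negative real part."""
    return bool(np.all(np.roots(den).real < -1e-9))  # small tolerance for roundoff

print(is_stable([1, 2, 10]))     # poles -1 +/- j3 in the LHP: True
print(is_stable([1, 0, 0.25]))   # poles +/- j0.5 on the jw-axis: False
```

Note that this check inherits the root-sensitivity problem discussed earlier, which is one motivation for the Routh test introduced later in the chapter.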
We discuss only the basic idea in developing Theorem 9.2 from Theorem 9.1. The impulse
response of a system is the inverse Laplace transform of its transfer function. If the system
is lumped, its transfer function H(s) is a rational function.
If H(s) has a pole inside the right half s-plane (RHP), then its impulse response grows to
infinity and cannot be absolutely integrable. If H(s) has simple poles on the jω-axis, then
its impulse response contains a step function or a sinusoid. Neither is absolutely integrable. In
conclusion, if H(s) has one or more poles on the jω-axis or inside the RHP, then the
system is not stable.
Next we argue that if all poles lie inside the left half s-plane (LHP), then the system
is stable. To simplify the discussion, we assume that H(s) has only simple poles and can be
expanded as
$$H(s) = \sum_i \frac{r_i}{s-p_i} + k_0$$
where the poles p_i can be real or complex. Then the impulse response of the system is
$$h(t) = \sum_i r_i e^{p_i t} + k_0\delta(t) \qquad (9.32)$$
for t ≥ 0. We compute
$$\int_0^{\infty} |h(t)|\,dt = \int_0^{\infty} \Big|\sum_i r_i e^{p_i t} + k_0\delta(t)\Big|\,dt \le \sum_i \int_0^{\infty} |r_i e^{p_i t}|\,dt + |k_0| \qquad (9.33)$$
Corollary 9.2 An LTI lumped system with impulse response h(t) is BIBO stable if and only
if h(t) → 0 as t → ∞. 2
The impulse response is the inverse Laplace transform of H(s). As discussed in Subsection
9.4.1, the response of a pole approaches zero as t → ∞ if and only if the pole lies inside the
LHP. Now if all poles of H(s) lie inside the LHP, then all its inverse Laplace transforms
approach zero as t → ∞. Thus we have the corollary. This corollary will be used to check
stability by measurement.
9.5.1 What holds for lumped systems may not hold for distributed systems
We mention that the results for LTI lumped systems may not be applicable to LTI distributed
systems. For example, consider an LTI system with impulse response
$$h(t) = \frac{1}{t+1} \qquad (9.34)$$
for t ≥ 0 and h(t) = 0 for t < 0. Note that its Laplace transform can be shown to be an
irrational function of s. Thus the system is distributed. Even though h(t) approaches zero as
t → ∞, it is not absolutely integrable because
$$\int_0^{\infty} |h(t)|\,dt = \int_0^{\infty} \frac{1}{t+1}\,dt = \log(t+1)\Big|_0^{\infty} = \infty$$
Thus the system is not BIBO stable and Corollary 9.2 cannot be used.
We give a different example. Consider an LTI system with impulse response
for t ≥ 0 and h1 (t) = 0 for t < 0. Its Laplace transform is not a rational function of s and
the system is distributed. Computing its Laplace transform is difficult, but it is shown in
Reference [K5] that its Laplace transform satisfies a differential equation and is analytic
(has no singular point or pole) inside the RHP and on the imaginary axis. Thus the system would be
BIBO stable if Theorem 9.2 were applicable. The system is actually not BIBO stable according
to Theorem 9.1 because h1 (t) is not absolutely integrable. In conclusion, Theorem 9.2 and
its corollary are not applicable to LTI distributed systems. Thus care must be exercised in
applying theorems; we must check every condition or word in a theorem.
$$Y(s) = H(s)U(s) = H(s)\,\frac{1}{s} = \frac{H(0)}{s} + \text{terms due to all poles of } H(s) \qquad (9.35)$$
Note that the residue associated with every pole of H(s) will be nonzero because U (s) = 1/s
has no zero to cancel any pole of H(s). In other words, the step input will excite every pole
of H(s). See Problem 9.7. The inverse Laplace transform of (9.35) is of the form
$$y(t) = H(0) + \text{(time responses of all poles of } H(s))$$
for t ≥ 0. If H(s) is stable, then the time responses of all its poles approach zero as t → ∞.
Thus we have y(t) → H(0) as t → ∞.
Next we consider the case where H(s) is not stable. If H(s) has a pole at s = 0, then
Y (s) = H(s)U (s) = H(s)/s has a repeated pole at s = 0 and its step response will approach
∞ as t → ∞. If H(s) contains a pair of pure imaginary poles at ±jω0 , then its step response
contains k1 sin(ω0 t + k2 ) which does not approach a constant as t → ∞. If H(s) has one
or more poles in the RHP, then its step response will approach ∞ or −∞. In conclusion, if
H(s) is not stable, then its step response will not approach a constant as t → ∞; it will grow
unbounded or remain in oscillation. This establishes the theorem.2
If a system is known to be linear, time-invariant and lumped such as an op-amp circuit,
then its stability can be easily checked by measurement. If its step response grows unbounded,
saturates, or remains in oscillation, then the system is not stable. In fact, we can apply any
input for a short period of time to excite all poles of the system. We then remove the input. If
the response does not approach zero as t → ∞, then the system is not stable. Thus stability
of a system can be easily detected in practice. If a system is not stable, it must be redesigned.
This may explain why practical systems were successfully designed even before the advent of
all mathematical stability results.
Resistors with positive resistances, inductors with positive inductances, and capacitors
with positive capacitances are called passive elements. Any circuit built with these three
types of elements is an LTI lumped system. Resistors dissipate energy. Although inductors
and capacitors can store energy, they cannot generate energy. Thus when an input is removed
from any RLC network, the energy stored in the inductors and capacitors will eventually
dissipate in the resistors and consequently the response eventually approaches zero. Note
that the LC circuit in Figure 9.7(a) is a model. In reality, every physical inductor has a small
series resistance as shown in Figure 9.7(b), and every physical capacitor has a large parallel
resistance as shown in Figure 9.7(c). Thus all practical RLC circuits are stable, and their
stability study is unnecessary. For this reason, before the advent of active elements such as
op amps, stability was not studied in RLC circuits. It was studied only in designing feedback
systems. This practice appears to remain to this date. See, for example, References [H3, I1,
S1].
Table 9.2: The Routh table (for a polynomial of degree 6)

                  s^6 | a0  a2  a4  a6
    k1 = a0/a1    s^5 | a1  a3  a5        [b0 b1 b2 b3] = (1st row) − k1 (2nd row)
    k2 = a1/b1    s^4 | b1  b2  b3        [c0 c1 c2] = (2nd row) − k2 (3rd row)
    k3 = b1/c1    s^3 | c1  c2            [d0 d1 d2] = (3rd row) − k3 (4th row)
    k4 = c1/d1    s^2 | d1  d2            [e0 e1] = (4th row) − k4 (5th row)
    k5 = d1/e1    s   | e1                [f0 f1] = (5th row) − k5 (6th row)
                  s^0 | f1
coefficients to form the first two rows of the table, called the Routh table, in Table 9.2. They
are placed, starting from a0, alternately in the first and second rows. Next we compute
k1 = a0 /a1 , the ratio of the first entries of the first two rows. We then subtract from the first
row the product of the second row and k1 . The result [b0 b1 b2 b3 ] is placed at the right-hand
side of the second row, where
    b0 = a0 − k1·a1 = 0,   b1 = a2 − k1·a3,   b2 = a4 − k1·a5,   b3 = a6 − k1·0 = a6

Note that b0 is always zero because k1 = a0/a1. We discard b0 and place [b1 b2 b3] in the
third row as shown in the table. The fourth row is obtained using the same procedure from
its previous two rows. That is, we compute k2 = a1 /b1 , the ratio of the first entries of the
second and third rows. We subtract from the second row the product of the third row and k2 .
The result [c0 c1 c2 ] is placed at the right-hand side of the third row. We discard c0 = 0 and
place [c1 c2 ] in the fourth row as shown. We repeat the process until the row corresponding
to s0 = 1 is computed. If the degree of D(s) is n, the table contains n + 1 rows.7
We discuss the size of the Routh table. If the degree n of D(s) is even, the first row has
one more entry than the second row. If n is odd, the first two rows have the same number of
entries. In both cases, the number of entries decreases by one at odd powers of s. For example,
the numbers of entries in the rows of s^5, s^3, and s^1 decrease by one from their previous rows.
last entries of all rows corresponding to even powers of s are the same. For example, we have
a6 = b3 = d2 = f1 in Table 9.2.
Theorem 9.4 A polynomial with a positive leading coefficient is CT stable if and only if
every entry in the Routh table is positive. If a zero or a negative number appears in the table,
then D(s) is not a CT stable polynomial.
A proof of this theorem can be found in Reference [2nd ed. of C6]. We discuss here only
its employment. A necessary condition for D(s) to be CT stable is that all its coefficients are
positive. If D(s) has missing terms (coefficients are zero) or negative coefficients, then the
first two rows of its Routh table will contain 0’s or negative numbers. Thus it is not a CT
stable polynomial. For example, the polynomials
    D(s) = s^4 + 2s^2 + 3s + 10
and
    D(s) = s^4 + 3s^3 − s^2 + 2s + 1
are not CT stable. On the other hand, a polynomial with all positive coefficients may not
be CT stable. For example, the polynomial in (9.36) with all positive coefficients is not CT
stable as discussed earlier. We can also verify this by applying the Routh test. We use the
coefficients of
    D(s) = 2s^5 + s^4 + 7s^3 + 3s^2 + 4s + 2
7 This formulation of the Routh table is different from the conventional cross-product formulation. This
formulation might be easier to grasp and is more in line with its DT counterpart.
230 CHAPTER 9. QUALITATIVE ANALYSIS OF CT LTI LUMPED SYSTEMS
to form
              s^5   2   7   4
k1 = 2/1      s^4   1   3   2    [0 1 0]
              s^3   1   0
A zero appears in the table. Thus we conclude that the polynomial is not CT stable. There
is no need to complete the table.
As another example, consider D(s) = 4s^5 + 2s^4 + 14s^3 + 6s^2 + 8s + 3, whose coefficients
are all positive. We form

              s^5   4    14   8
k1 = 4/2      s^4   2    6    3     [0 2 2]
k2 = 2/2      s^3   2    2          [0 4 3]
k3 = 2/4      s^2   4    3          [0 0.5]
k4 = 4/0.5    s^1   0.5             [0 3]
              s^0   3
Every entry in the table is positive; thus the polynomial is CT stable.
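The tabulation can be carried out mechanically. The sketch below (in Python; `routh_stable` is our own helper name, not from the text) implements the subtraction formulation above and decides stability:

```python
def routh_stable(coeffs):
    """Routh test, subtraction formulation: coeffs lists the polynomial
    coefficients [a0, a1, ..., an], highest power first, with a0 > 0.
    Returns True exactly when every entry of the Routh table is positive."""
    # First two rows: coefficients placed alternately, as in Table 9.2.
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    while rows[-1]:
        prev, cur = rows[-2], rows[-1]
        if cur[0] <= 0:
            return False          # a zero or negative entry appears
        k = prev[0] / cur[0]
        padded = cur + [0.0] * (len(prev) - len(cur))
        # (previous row) - k*(current row); the first entry is always 0
        rows.append([p - k * c for p, c in zip(prev, padded)][1:])
    return all(e > 0 for row in rows[:-1] for e in row)
```

On the two examples above, `routh_stable([2, 1, 7, 3, 4, 2])` returns False and `routh_stable([4, 2, 14, 6, 8, 3])` returns True.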
the transient (tr) response of the system excited by the input u(t). We first give examples.
Example 9.6.1 Consider

    H(s) = 6/[(s + 3)(s − 1)]                                          (9.40)

Its step response in the transform domain is

    Y(s) = H(s)(1/s) = 6/[(s + 3)(s − 1)s] = 0.5/(s + 3) + 1.5/(s − 1) − 2/s

Thus the step response is

    y(t) = 0.5e^{−3t} + 1.5e^{t} − 2                                   (9.41)

for t ≥ 0. The term 0.5e^{−3t} approaches zero as t → ∞, and the remaining part is the
steady-state response yss(t) = 1.5e^{t} − 2.
Figure 9.8: (a) Step response of (9.42). (b) Its transient response.
yss(t) → ∞ as t → ∞. Note that H(s) in (9.40) is not stable, and its steady-state response
generally approaches infinity no matter what input is applied.
Example 9.6.2 Consider a system with transfer function
    H(s) = 24/(s^3 + 3s^2 + 8s + 12) = 24/[(s + 2)(s + 0.5 + j2.4)(s + 0.5 − j2.4)]    (9.42)
Its step response in the transform domain is
    Y(s) = H(s)U(s) = 24/[(s + 2)(s + 0.5 + j2.4)(s + 0.5 − j2.4)] · (1/s)
To find its time response, we expand it as
    Y(s) = k1/(s + 2) + (r1 s + r2)/[(s + 0.5)^2 + 2.4^2] + k4/s
with
    k4 = sY(s)|s=0 = H(0) = 2
Thus the step response of the system is of the form
    y(t) = H(0) + k1 e^{−2t} + k2 e^{−0.5t} sin(2.4t + k3)             (9.43)
for t ≥ 0 and is plotted in Figure 9.8(a).
Because e^{−2t} and e^{−0.5t} approach zero as t → ∞, the steady-state response of (9.43) is

    yss(t) = H(0)                                                      (9.44)
and the transient response is
    ytr(t) = k1 e^{−2t} + k2 e^{−0.5t} sin(2.4t + k3)                  (9.45)
We see that the steady-state response is determined by the pole of the input alone and the
transient response is due to all poles of the transfer function in (9.42). We plot in Figure
9.8(b) −ytr(t) for easier viewing.
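The settling of this step response to H(0) = 2 can be checked numerically. A sketch, assuming only the factorization (s + 2)(s^2 + s + 6) of the denominator of (9.42) (the pair −0.5 ± j2.4 is the rounded root pair of s^2 + s + 6); each residue of Y(s) = H(s)/s at a simple pole p is 24/D′(p):

```python
import cmath

# Poles of Y(s) = H(s)/s, with H(s) = 24/((s + 2)(s^2 + s + 6)).
d = cmath.sqrt(1 - 4 * 6)                    # discriminant of s^2 + s + 6
poles = [0.0, -2.0, (-1 + d) / 2, (-1 - d) / 2]

def Dprime(s):
    # derivative of D(s) = s^4 + 3s^3 + 8s^2 + 12s, the denominator of Y(s)
    return 4 * s**3 + 9 * s**2 + 16 * s + 12

def y(t):
    # step response as a sum of residue terms res_i * exp(p_i * t)
    return sum(24 / Dprime(p) * cmath.exp(p * t) for p in poles).real
```

y(0) is 0, and for large t only the residue at s = 0 survives, namely 24/12 = H(0) = 2.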
Systems are designed to process signals. The most common processing operations are amplification
and filtering. The processed signals, that is, the outputs of the systems, clearly should be dictated
by the signals to be processed. If a system is not stable, its output will generally grow
unbounded. Clearly such a system cannot be used. On the other hand, if a system is stable,
after its transient response dies out, the processed signal is dictated by the applied signal.
Thus the transient response is an important criterion in designing a system. The faster the
transient response vanishes, the faster the response reaches steady state or the smaller the
response time.
Figure 9.9: (a) The function e−2t . (b) The function e−0.5t sin(2.4t).
for all t. Thus the response will decrease to less than one percent of its peak value in
five time constants. For example, the time constant of the poles −0.5 ± j2.4 is 1/|−0.5| = 2, and the
response due to the pair of poles vanishes in five time constants, or 10 seconds, as shown in
Figure 9.9(b).
With the preceding discussion, we may define the time constant of a stable H(s) as

    tc = 1/(smallest real part in magnitude of all poles)
       = 1/(smallest distance from all poles to the jω-axis)           (9.46)
If H(s) has many poles, the larger the real part in magnitude, the faster its time response
approaches zero. Thus the pole with the smallest real part in magnitude dictates the time
for the transient response to approach zero. For example, the transfer function in (9.42) has
three poles −2 and −0.5 ± j2.4. The smallest real part in magnitude is 0.5. Thus the time
constant of (9.42) is 1/0.5 = 2. Indeed its transient response due to a step input vanishes in
10 seconds as shown in Figure 9.8(b) or, equivalently, its step response approaches its steady
state H(0) = 2 in 10 seconds as shown in Figure 9.8(a).
The time constant of a stable transfer function is defined from its poles. Its zeros do not
play any role. For example, the transfer functions from (9.22) through (9.26) have the same
set of poles but different zeros. Their poles are −1, −2, −0.5 ± j4. The smallest real part in
magnitude is 0.5. Thus their time constants all equal 1/0.5 = 2 and their step responses all
9.7. FREQUENCY RESPONSES 233
reach their steady state in roughly 10 seconds as shown in Figure 9.4. Indeed, the rule of five
time constants appears to be widely applicable. Even so, the rule should be used only as a
guide. It is possible to construct examples whose transient responses do not decrease to less
than 1% of their peak values in five time constants. The transfer function 2/(s + 0.5)^3 has
time constant 2 and its time response t^2 e^{−0.5t} does not approach zero in 10 seconds as shown
in Figure 9.6(a). See also Problem 9.16. However, it is generally true that the smaller the
time constant, the faster a system reaches steady state.
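Equation (9.46) is a one-liner in code; the sketch below applies it to the poles of (9.42) and checks the five-time-constant rule for the slowest mode:

```python
import math

def time_constant(poles):
    # 1 / (smallest distance from the poles to the jw-axis), per (9.46)
    return 1.0 / min(abs(p.real) for p in poles)

tc = time_constant([-2 + 0j, -0.5 + 2.4j, -0.5 - 2.4j])  # poles of (9.42)
# The slowest mode decays like e^{-0.5 t}; after five time constants it
# is below 1% of its initial size, since e^{-5} < 0.01.
print(tc, math.exp(-0.5 * 5 * tc))
```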
The final reading discussed in Subsection 9.1.1 corresponds to the steady-state response
and the response time is the time for the response to reach steady state. These are time-
domain specifications. In the design of filters, the specifications are mainly in the frequency
domain as we discuss next.
Figure 9.10: (a) Magnitude responses of H(s) (solid line) and H1 (s) (dotted line). (b) Phase
responses of H(s) (solid line) and H1 (s) (dotted line).
Note that as ω → ∞, the frequency response can be approximated by 2/jω. Thus its magnitude
r = 2/ω approaches zero but its phase approaches −90°. From the preceding computation,
we can plot the magnitude and phase responses as shown in Figures 9.10(a) and (b)
with solid lines. They are actually obtained in MATLAB by typing in an edit window the
following

%Program 9.1
w=0:0.1:40;
H=2./(j*w+2);
subplot(1,2,1)
plot(w,abs(H))
subplot(1,2,2)
plot(w,angle(H))
The first two lines compute the values of H(s) at s = jω, with ω ranging from 0 to 40 rad/s
with increment 0.1. The function abs computes the absolute value or magnitude and angle
computes the angle or phase. We save Program 9.1 as an m-file named f910.m. Typing in
the command window >> f910 will yield Figure 9.10 in a figure window.
Example 9.7.2 We repeat Example 9.7.1 for H1(s) = 2/(s − 2). The result is shown in
Figure 9.10 with dotted lines. Note that the dotted line in Figure 9.10(a) overlaps with the
solid line. See Problem 9.18.
We now discuss the implication of stability and physical meaning of frequency responses.
Consider a system with stable transfer function H(s) with real coefficients. Let us apply
the input u(t) = ae^{jω0t}, where a is a real constant. Note that this input is not real-valued.
However, if we use the real-valued input u(t) = sin ω0t or cos ω0t, then the derivation will be
more complex. The Laplace transform of ae^{jω0t} is a/(s − jω0). Thus the output of the system
is given by

    Y(s) = H(s)U(s) = H(s)·a/(s − jω0) = ku/(s − jω0) + terms due to poles of H(s)

Because H(s) is stable, it has no poles on the imaginary axis. Thus jω0 is a simple pole of
Y(s) and its residue ku can be computed as

    ku = (s − jω0)Y(s)|s=jω0 = aH(jω0)
If H(s) is stable, then all its poles have negative real parts and their time responses all
approach 0 as t → ∞. Thus we conclude that if H(s) is stable and if u(t) = ae^{jω0t}, then we
have

    yss(t) := lim_{t→∞} y(t) = aH(jω0)e^{jω0t}                         (9.50)
and, in particular, writing H(jω0) = A(ω0)e^{jθ(ω0)},

    yss(t) = aA(ω0)e^{j(ω0t + θ(ω0))}                                  (9.51)
The steady-state response of H(s) in (9.50) is excited by the input u(t) = ae^{jω0t}. If ω0 = 0,
then the input is a step function with amplitude a, and the output approaches a step function
with amplitude aH(0). If u(t) = a sin ω0t = Im[ae^{jω0t}], where Im stands for the imaginary
part, then the output approaches the imaginary part of (9.51) or aA(ω0) sin(ω0t + θ(ω0)).
Using the real part of ae^{jω0t}, we can obtain the third equation. In conclusion, if we apply
a sinusoidal input to a system, then the output will approach a sinusoidal signal with the
same frequency, but its amplitude will be modified by A(ω0) = |H(jω0)| and its phase by
θ(ω0) = ∠H(jω0). We stress that the system must be stable. If it is not, then the output
generally grows unbounded and H(jω0 ) has no physical meaning as we will demonstrate
shortly.
Example 9.7.3 Consider a system with transfer function H(s) = 2/(s + 2) and consider the
signal
u(t) = cos 0.2t + 0.2 sin 25t (9.52)
The signal consists of two sinusoids, one with frequency 0.2 rad/s and the other 25 rad/s. We
study the eventual effect of the system on the two sinusoids.
In order to apply Theorem 9.5, we must compute

    H(j0.2) = 2/(2 + j0.2) = 0.995e^{−j0.0997}

and

    H(j25) = 2/(2 + j25) = 0.079e^{−j1.491}
They can also be read from Figure 9.10 but the reading cannot be very accurate. Note that
the angles must be expressed in radians. Then Theorem 9.5 implies

    yss(t) = 0.995 cos(0.2t − 0.0997) + 0.2 × 0.079 sin(25t − 1.491)
           = 0.995 cos(0.2t − 0.0997) + 0.0159 sin(25t − 1.491)

We see that the system attenuates greatly the high-frequency component (its amplitude drops
from 0.2 to 0.0159) and passes the low-frequency component with only a small attenuation
(from 1 to 0.995). Thus the system is called a lowpass filter.
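The two frequency-response values in this example are easy to reproduce; a sketch (the function name is ours):

```python
import cmath

def H(s):
    # lowpass filter of Example 9.7.3
    return 2 / (s + 2)

# Each sinusoid in (9.52) is scaled by |H(jw)| and shifted by the phase.
for w, amp in [(0.2, 1.0), (25.0, 0.2)]:
    g = H(1j * w)
    print(w, amp * abs(g), cmath.phase(g))
```

The printed amplitudes are about 0.995 and 0.0159, and the phases about −0.0997 and −1.491 rad, matching the values above.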
Figure 9.11: (a) Ideal lowpass filter with cutoff frequency ωc. (b) Ideal bandpass filter with
upper and lower cutoff frequencies ωu and ωl. (c) Ideal highpass filter with cutoff frequency
ωc.
Example 9.7.4 Consider a system with transfer function H(s) = 2/(s − 2). We compute its
step response. If u(t) = 1, for t ≥ 0, then U(s) = 1/s and

    Y(s) = H(s)U(s) = 2/[(s − 2)s] = 1/(s − 2) − 1/s

Thus its step response is

    y(t) = e^{2t} − 1 = e^{2t} + H(0)
for t ≥ 0, where we have used H(0) = −1. Even though the output contains the step function
with amplitude H(0), it also contains the exponentially increasing function e^{2t}. As t → ∞,
the former is buried by the latter. Thus the output approaches ∞ as t → ∞ and Theorem
9.5 does not hold. In conclusion, the stability condition is essential in using Theorem 9.5.
Moreover, the frequency response of an unstable H(s) has no physical meaning.
In view of Theorem 9.5, if we can design a stable system with magnitude response as
shown in Figure 9.11(a) with solid line and phase response with dotted line, then the system
will pass sinusoids with frequency |ω| < ωc and stop sinusoids with frequency |ω| > ωc. We
require the phase response to be linear to avoid distortion as we will discuss in a later section.
We call such a system an ideal lowpass filter with cutoff frequency ωc. The frequency range
[0, ωc] is called the passband, and [ωc, ∞) is called the stopband. Figures 9.11(b) and (c)
show the characteristics of ideal bandpass and highpass filters. The ideal bandpass filter will
pass sinusoids with frequencies lying inside the range [ωl, ωu], where ωl and ωu are the lower
and upper cutoff frequencies, respectively. The ideal highpass filter will pass sinusoids with
frequencies larger than the cutoff frequency ωc. They are called frequency selective filters and
are special types of systems.
The impulse response h(t) of the ideal lowpass filter with linear phase −ωt0 can be computed
as

    h(t) = sin[ωc(t − t0)]/[π(t − t0)]
for all t in (−∞, ∞). See Problem 9.20. It is plotted in Figure 2.10(b) with ωc = 1/a = 10
and t0 = 4. We see that h(t) is nonzero for t < 0; thus the ideal lowpass filter is not causal and
cannot be built in the real world. In practice, we modify the magnitude responses in Figure
9.11 to the ones shown in Figure 9.12. We insert a transition band between the passband and
stopband. Furthermore, we specify a passband tolerance and stopband tolerance as shown with
shaded areas. The transition band is generally not specified and is the “don’t care” region.
We also introduce the group delay defined as

    Group delay = τ(ω) := −dθ(ω)/dω                                    (9.54)
Figure 9.12: Specifications of practical (a) lowpass filter, (b) bandpass filter, and (c) highpass
filter.
For an ideal filter with linear phase such as θ(ω) = −t0ω, its group delay is t0, a constant.
Thus instead of specifying the phase response to be a linear function of ω, we specify the group
delay in the passband to be roughly constant, or t0 − ε < τ(ω) < t0 + ε, for some constants t0
and ε. Even with these more relaxed specifications on the magnitude and phase responses, if
we specify both of them, it is still difficult to find causal filters to meet both specifications.
Thus in practice, we often specify only magnitude responses as shown in Figure 9.12. The
design problem is then to find a proper stable rational function of a degree as small as possible
to have its magnitude response lying inside the specified region. See, for example, Reference
[C7].
Consider the transfer function9

    H1(s) = 1000s^2/(s^4 + 28.28s^3 + 5200s^2 + 67882.25s + 5760000)   (9.55)
Similar to Program 9.2, we type in an edit window the following

%Program 9.3
w=0:0.01:200;
n=[1000 0 0];d=[1 28.28 5200 67882.25 5760000];
H=freqs(n,d,w);
plot(w,abs(H))
9 It is a Butterworth filter. See Reference [C7].
Figure 9.13: (a) Magnitude response and bandwidth of (9.55). (b) Magnitude response and
bandwidth of (9.57).
will also generate Figure 9.13(a). Note that n and d are the numerator’s and denominator’s
coefficients of (9.55). Thus H=freqs(n,d,w) computes the frequency response of the CT
transfer function at the specified frequencies and stores them in H. Note that the last character
‘s’ in freqs denotes the Laplace transform variable s. If we continue to type in the command
window
>> freqs(n,d)
then it will generate the plots in Figure 9.14. Note that freqs(n,d) contains no output H
and no frequencies w. In this case, it automatically selects 200 frequencies in [0, ∞) and then
generates the plots as shown. Note that the scales used in frequency and magnitude are not
linear; they are in logarithmic scales. Such plots are called Bode plots. Hand plotting of
Bode plots is discussed in some texts on signals and systems because for a simple H(s) with
real poles and real zeros, its frequency response can be approximated by sections of straight
lines. We will not discuss their plotting because they can now be plotted exactly using a
computer.10
Figure 9.14: Top: Magnitude response of (9.55). Bottom: Phase response of (9.55).
then

    a = 20 log10 0.707 = −3 dB
The 3-dB bandwidth of a passband is then defined as the width of the frequency range in [0, ∞)
in which the magnitude response of H(s) is −3 dB or larger. Note that we can also define a
2-dB or 1-dB bandwidth. We mention that the 3-dB bandwidth is also called the half-power
bandwidth. The power of y(t) is y^2(t) and, consequently, is proportional to |H(jω)|^2. If
the power at Hmax is A, then the power at 0.707Hmax is (0.707)^2 A = 0.5A. Thus the 3-dB
bandwidth is also called the half-power bandwidth.
We give some examples. Consider H(s) = 2/(s + 2) whose magnitude response is shown
in Figure 9.10(a). Its peak magnitude Hmax is 1 as shown. From the plot we see that the
magnitude is 0.707 or larger for ω in [0, 2]. Thus the 3-dB bandwidth of 2/(s + 2) is 2 rad/s.
We call ωp = 2 the passband edge frequency. Consider the transfer function in (9.55). Its
magnitude response is shown in Figure 9.13(a). Its peak magnitude Hmax is 2.5. From the
plot, we see that for ω in [40, 60], the magnitude is 0.707 × 2.5 = 1.77 or larger as shown. We
call 40 the lower passband edge frequency and 60 the upper passband edge frequency. Thus
the bandwidth of the filter is 60 − 40 = 20 rad/s.
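The numbers read off Figure 9.13(a) can be confirmed by scanning the magnitude response of (9.55) on a frequency grid; a sketch:

```python
def H1_mag(w):
    # |H1(jw)| for the bandpass filter in (9.55)
    s = 1j * w
    return abs(1000 * s**2 /
               (s**4 + 28.28*s**3 + 5200*s**2 + 67882.25*s + 5760000))

ws = [k * 0.01 for k in range(1, 20001)]        # 0.01 to 200 rad/s
mags = [H1_mag(w) for w in ws]
peak = max(mags)                                 # Hmax, about 2.5
# 3-dB passband: frequencies where the magnitude is >= 0.707 * Hmax
band = [w for w, m in zip(ws, mags) if m >= 0.707 * peak]
print(peak, band[0], band[-1], band[-1] - band[0])
```

The peak comes out near 2.5 and the band edges near 40 and 60 rad/s, so the computed bandwidth is about 20 rad/s.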
We give one more example. Consider the transfer function11

    H2(s) = 921.27s^2/(s^4 + 38.54s^3 + 5742.73s^2 + 92500.15s + 5760000)   (9.57)
Its magnitude response is plotted, using MATLAB, in Figure 9.13(b). For the transfer function
in (9.57), we have H2max = 1 and the magnitude response is 0.707 or larger in the frequency
range [35, 70]. Thus its bandwidth is 70 − 35 = 35 rad/s.
    H(ja) = a/(ja + a) = 1/(1 + j) = 1/(1.414e^{jπ/4}) = 0.707e^{−jπ/4}
We see that the smaller a is, the more severely the signal sin 80t is attenuated. However,
the time constant of the filter is 1/a. Thus the smaller a is, the longer for the response
to reach steady state. For example, if we select a = 0.1, then the amplitude of sin 80t is
reduced to 0.00025, but it takes 5/0.1 = 50 seconds to reach steady state as shown in Figure
9.15(a). If we select a = 1, then the amplitude of sin 80t is reduced to 0.0025, but it takes
only 5/1 = 5 seconds to reach steady state as shown in Figure 9.15(b). In conclusion, if the
speed of response is also important, then the selection of a must be a compromise between
filtering and the response time. Other practical issues such as cost and implementation may
also play a role. Thus most designs are not unique in practice.
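The compromise can be tabulated directly. A sketch, assuming the first-order lowpass H(s) = a/(s + a) of (9.58) and the 80 rad/s component being suppressed:

```python
def attenuation(a, w=80.0):
    # |H(jw)| for the first-order lowpass H(s) = a/(s + a)
    return abs(a / (1j * w + a))

for a in (0.1, 1.0):
    # smaller a: better suppression of the 80 rad/s component, but a
    # longer time constant 1/a, hence roughly 5/a seconds to settle
    print(a, attenuation(a), 5.0 / a)
```

For a = 0.1 the attenuation factor is about 0.00125 with a 50-second settling time; for a = 1 it is about 0.0125 with a 5-second settling time, the numbers used above.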
    M(ω) = 1/√(aω^4 + (1 − a)ω^2 + 1)                                  (9.60)

where a is a positive real number. Clearly, we have M(0) = 1 or 20 log M(0) = 0 dB and

    M(1) = 1/√(a + (1 − a) + 1) = 1/√2 = 0.707
Figure 9.15: (a) The output of (9.58) excited by (9.59) with a = 0.1. (b) With a = 1.
Figure 9.16: (a) Magnitude of (9.60) with a = 0.2, 0.5, 1, 1.5. (b) Plots of (9.61) (solid line)
and (9.62) (dotted line).
or 20 log M (1) = −3 dB, for any a. We plot M (ω) in Figure 9.16(a) for various a. If a = 1.5,
then M (ω) goes outside the permissible passband region. If a = 0.2, 0.5, and 1, then all M (ω)
remain inside the permissible passband region. However for a = 1, M (ω) rolls off most steeply
to zero. Thus we conclude that
    M1(ω) = 1/√(ω^4 + 1)                                               (9.61)
with a = 1, is the best among all (9.60) with various a. We plot in Figure 9.16(b) M1(ω) in
(9.61) with a solid line and

    M2(ω) = 1/√(ω^5 + 1)                                               (9.62)

with a dotted line. We see that M2(ω) rolls off more steeply than M1(ω) after ωp = 1 and is
a better magnitude response.
The preceding search of Mi(ω) is carried out entirely in the frequency domain. This is
however not the end of the design. The next step is to find a stable proper rational function
H(s) so that |H(jω)| = Mi(ω). It turns out that no proper rational function H(s) exists to
meet |H(jω)| = M2(ω), because M2^2(ω) = 1/(ω^5 + 1) is not a rational function of ω^2. For
M1(ω), however, the stable transfer function H1(s) = 1/(s^2 + √2·s + 1) meets |H1(jω)| = M1(ω)
or H1(jω)H1(−jω) = M1^2(ω). Indeed we have

    1/(−ω^2 + j√2ω + 1) × 1/(−ω^2 − j√2ω + 1) = 1/[(1 − ω^2)^2 + 2ω^2] = 1/(ω^4 + 1)

Such an M1(ω) is said to be spectral factorizable. This completes the design. See Reference
[C7].
The purpose of this subsection is to show the roles of frequency and transform domains in
filter design. The search of M (ω), limited to those spectral factorizable, is carried out in the
frequency domain. Once such an M (ω) is found, we must find a proper stable rational transfer
function H(s) to meet H(jω)H(−jω) = M 2 (ω). We can then find a minimal realization of
H(s) and implement it using an op-amp circuit. We cannot implement a filter directly from
the magnitude response in (9.61).
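The factorization can be verified numerically. The sketch below takes H1(s) = 1/(s^2 + √2 s + 1), the second-order function whose product H1(jω)H1(−jω) equals 1/(ω^4 + 1), and compares |H1(jω)| with M1(ω) on a grid:

```python
import math

def H1(s):
    # candidate stable function with H1(jw) H1(-jw) = 1/(w^4 + 1)
    return 1 / (s**2 + math.sqrt(2) * s + 1)

def M1(w):
    return 1 / math.sqrt(w**4 + 1)

for w in (0.0, 0.5, 1.0, 2.0, 10.0):
    print(w, abs(H1(1j * w)), M1(w))   # the two columns agree
```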
complex. See Reference [C5]. In conclusion, it is possible to develop transfer functions from
measurements for simple systems. Commercial devices, such as the HP 3563A Control Systems
Analyzer, can carry out this task automatically.
Consider the transfer function of a DC motor, from the applied voltage to the motor shaft's
angular position:

    H(s) = Y(s)/U(s) = km/[s(τm s + 1)]
where km is called the motor gain constant and τm the motor time constant. It is obtained
by ignoring some pole. See Reference [C5] and Subsection 10.2.1. The parameters km and τm
can be obtained from the specification of the motor and the moment of inertia of the load.
Because the load such as an antenna may not be of regular shape, computing analytically its
moment of inertia may not be simple. Thus we may as well obtain km and τm directly by
measurement. The preceding transfer function is from applied voltage to the motor shaft’s
angular position. Now if we consider the motor shaft’s angular velocity v(t) = dy(t)/dt or
V(s) = sY(s) as the output, then the transfer function becomes

    Hv(s) = V(s)/U(s) = sY(s)/U(s) = km/(τm s + 1)
If we apply a step input, that is, u(t) = 1, for t ≥ 0, then the output in the transform domain
is

    V(s) = km/[(τm s + 1)s] = km/s − km τm/(τm s + 1) = km/s − km/(s + 1/τm)

Thus the output in the time domain is

    v(t) = km − km e^{−t/τm}
for t ≥ 0. The velocity will increase and finally reach the steady state km. The time to reach
steady state equals five time constants, or 5τm. Thus, from the step response, we can obtain
km and τm.
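The two-parameter identification described above can be sketched as follows (`identify` and the sampled data are hypothetical illustrations, not from the text): km is the final value of the step response, and τm is the time at which the velocity first reaches 63.2% of it, since v(τm) = km(1 − e^{−1}):

```python
import math

def identify(ts, vs):
    """Estimate (km, tau_m) from step-response samples of the velocity
    v(t) = km (1 - e^{-t/tau_m})."""
    km = vs[-1]                        # steady-state value, after ~5 tau_m
    target = km * (1 - math.exp(-1))   # 63.2% of the final value
    tau = next(t for t, v in zip(ts, vs) if v >= target)
    return km, tau

# synthetic, noise-free data with km = 2 and tau_m = 0.5 (illustrative)
ts = [k * 0.001 for k in range(10001)]              # 0 to 10 s
vs = [2 * (1 - math.exp(-t / 0.5)) for t in ts]
print(identify(ts, vs))
```

On real measurements the noise problem mentioned below would of course have to be addressed, for instance by averaging.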
Our discussion ignores completely the noise problem. Identification under noise is a vast
subject area.
Note the use of different notations, otherwise confusion may arise. See Section 3.4. From
(9.64) and (9.65), we see immediately
Does this equation hold for all x(t)? The answer is negative as we discuss next.
Consider the positive function x(t) = e^{2t}, for t ≥ 0. Its Laplace transform is X(s) =
1/(s − 2). Its Fourier transform however, as discussed in Example 4.3.1, does not exist and
thus cannot equal X(jω) = 1/(jω − 2). Next we consider the step function q(t) = 1, for t ≥ 0.
Its Laplace transform is Q(s) = 1/s. Its Fourier transform is, as discussed in (4.51),

    Q̄(ω) = πδ(ω) + 1/(jω)                                             (9.67)
It contains an impulse at ω = 0 and is different from Q(jω). Thus (9.66) does not hold for a
step function. Note that both e^{2t} and q(t) = 1, for t ≥ 0, are not absolutely integrable. Thus
(9.66) does not hold if x(t) is not absolutely integrable.
On the other hand, if x(t) is absolutely integrable, then (9.66) does hold. In this case, its
Fourier transform X̄(ω) is bounded and continuous for all ω, as discussed in Section 4.3, and
its Laplace transform can be shown to contain the jω-axis in its region of convergence.
Thus replacing s by jω in X(s) will yield X̄(ω). Because tables of Laplace transform pairs
are more widely available, we may use (9.66) to compute frequency spectra of signals.
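As a spot check of (9.66), the sketch below compares X(jω) for the absolutely integrable signal x(t) = e^{−2t}, whose Laplace transform is X(s) = 1/(s + 2), with a crude numerical Fourier integral:

```python
import cmath

def X(s):
    # Laplace transform of x(t) = e^{-2t}, t >= 0
    return 1 / (s + 2)

def fourier(w, T=10.0, n=100000):
    # rectangle-rule approximation of the Fourier integral of x(t)
    # over [0, T]; the tail beyond T is negligible (e^{-20} ~ 2e-9)
    dt = T / n
    return sum(cmath.exp(-(2 + 1j * w) * k * dt) for k in range(n)) * dt

print(X(3j), fourier(3.0))   # nearly equal, as (9.66) predicts
```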
Example 9.8.2 Consider the network shown in Figure 9.17. It consists of a capacitor with
capacitance 1F and a resistor with resistance −1Ω. Note that such a negative resistance can
be generated using an op-amp circuit. See Problem 9.25. The transfer function of the network
is H(s) = s/(s − 1). If we apply u(t) = cos 2t, then its output can be computed as

    y(t) = 0.2e^{t} + |H(j2)| cos(2t + ∠H(j2))

for t ≥ 0. Even though the output contains the phasor |H(j2)|∠H(j2), it will be buried by
the exponentially increasing function 0.2e^{t}. Thus the circuit will burn out or saturate and
the phasor analysis cannot be used.
In conclusion, real-world RLC circuits with positive R, L, and C, are, as discussed earlier,
automatically stable. In this case, phasor analysis can be employed to compute their sinusoidal
steady-state responses. Furthermore, their impedances can be defined as R, jωL, and 1/jωC.
However, if a system is not stable, phasor analysis cannot be used.
In this equation, the system is assumed to be initially relaxed at t = 0 and the input is applied
from t = 0 onward. Now if the system is assumed to be initially relaxed at t = −∞ and the
input is applied from t = −∞, then (9.75) must be modified as

    y(t) = ∫_{−∞}^{∞} h(t − τ)u(τ) dτ                                  (9.76)
By a change of the integration variable, (9.76) can also be written as

    y(t) = ∫_{−∞}^{∞} h(t − τ)u(τ) dτ = ∫_{−∞}^{∞} h(τ)u(t − τ) dτ     (9.77)

Because the roles of h and u can be interchanged, (9.77) is said to have a commutative property.
Note that (9.75) does not have the commutative property as in (9.77) and cannot be used in
subsequent development. This is one of the reasons for extending the time to −∞.
Now if we apply u(t) = e^{jω0t} from t = −∞, then the second form of (9.77) becomes

    y(t) = ∫_{−∞}^{∞} h(τ)e^{jω0(t−τ)} dτ = (∫_{−∞}^{∞} h(τ)e^{−jω0τ} dτ) e^{jω0t}
         = (∫_{0}^{∞} h(τ)e^{−jω0τ} dτ) e^{jω0t} = H(jω0)e^{jω0t}       (9.78)
where we have used h(t) = 0 for t < 0 and (9.64) with s replaced by jω0 . This equation is
similar to (9.50) with a = 1 and is the way of introducing frequency response in most texts
on signals and systems.
Even though the derivation of (9.78) appears to be simple, the equation holds, as discussed
in the preceding sections, only if the system is stable or h(t) is absolutely integrable. This
stability condition is not used explicitly in its derivation and is often ignored. More seriously,
the equation in (9.78) holds for all t in (−∞, ∞). In other words, there is no transient response
in (9.78). Thus (9.78) describes only steady-state responses. In reality, it is not possible to
apply an input from time −∞. An input can be applied only from some finite time where we
may call it time zero. Recall that time zero is a relative one and is defined by us. Thus there
is always a transient response. In conclusion, the derivation of (9.50), that uses explicitly the
stability condition and is valid only for t → ∞, is more revealing than the derivation of (9.78).
It also describes the real-world situation.
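A discrete approximation of the convolution in (9.77) makes this concrete. A sketch, taking the stable example H(s) = 2/(s + 2), whose impulse response is h(t) = 2e^{−2t}, with the input applied from time zero:

```python
import cmath, math

w0, dt, T = 3.0, 0.001, 8.0
n = int(T / dt)
h = [2 * math.exp(-2 * k * dt) for k in range(n)]     # h(t) = 2 e^{-2t}
u = [cmath.exp(1j * w0 * k * dt) for k in range(n)]   # input e^{j w0 t}

def y(k):
    # truncated convolution sum approximating (9.77):
    # y(t) = sum over tau of h(tau) u(t - tau) dt
    return sum(h[m] * u[k - m] for m in range(k + 1)) * dt

H0 = 2 / (1j * w0 + 2)          # H(j w0) for H(s) = 2/(s + 2)
print(abs(y(0)), abs(y(n - 1)), abs(H0))
```

At t = 0 the output is nearly zero (the transient is present); by the end of the window it has settled to H(jω0)e^{jω0t}, as (9.78) describes for the steady state.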
where we have also introduced a linear phase −ωt0 , for some constant t0 , in the passband.
Now if u(t) = u1 (t) + u2 (t) and if the magnitude spectra of u1 (t) and u2 (t) are as shown in
Figure 9.18, then the output’s frequency spectrum is given by
where H(jω)U2(jω) is identically zero because their nonzero parts do not overlap. As derived
in Problem 4.12, U1(jω)e^{−jωt0} is the frequency spectrum of u1(t − t0). Thus the output of
the ideal lowpass filter is
y(t) = u1 (t − t0 ) (9.83)
That is, the filter stops the signal u2 (t) and passes u1 (t) with only a delay of t0 seconds. This
is called a distortionless transmission of u1 (t). Note that t0 is the group delay of the ideal
lowpass filter defined in (9.54). We see that filtering is based entirely on (9.80).
If U (jω) = 0, for ω in some frequency range, then the equation implies Y (jω) = 0 for the same
frequency range. It means that the output of an LTI stable system cannot contain nonzero
9.9. FREQUENCY RESPONSES AND FREQUENCY SPECTRA 249
frequency components other than those contained in the input or, equivalently, an LTI system
can only modify the nonzero frequency spectrum of an input but cannot generate new frequency
components. Modulation however always creates new frequencies and thus cannot be a linear
time-invariant process. Recall that the modulation discussed in Section 4.4.1 is a linear but
time-varying process.
Then why bother to discuss LTI systems in communication texts? In modulation, the
frequency spectrum of a signal will be shifted to new locations. This is achieved using mostly
nonlinear devices. The output of a nonlinear device will generate the desired modulated
signal as well as some undesired signals. The undesired signals must be eliminated using
LTI lumped filters. Demodulators such as envelope detectors are nonlinear devices and must
include LTI lowpass filters. Thus filters are integral parts of all communication systems.
Communication texts however are concerned only with the effects of filtering on frequency
spectra based on (9.80). They are not concerned with actual design of filters. Consequently
many communication texts introduce only the Fourier transform (without introducing the
Laplace transform) and call H(jω) or H̄(ω) a transfer function.
Consider a system with transfer function

    H(s) = 20/(s^2 + 0.4s + 400.04)                                    (9.84)

It is stable and has its impulse response (inverse Laplace transform of H(s)) shown in Figure
9.19(a) and its magnitude response (magnitude of H(jω)) shown in Figure 9.19(aa). The
magnitude response shows a narrow spike at frequency roughly ωr = 20 rad/s. We call ωr
the resonance frequency.
Let us study the outputs of the system excited by the inputs

    ui(t) = e^{−0.5t} cos ωi t,   i = 1, 2, 3

with ω1 = 5, ω2 = 20, and ω3 = 35. The inputs ui(t) against t are plotted in Figures 9.19(b),
(c), and (d). Their magnitude spectra |Ui (jω)| against ω are plotted in Figures 9.19(bb),
(cc), and (dd). Figures 9.19(bbb), (ccc), and (ddd) show their excited outputs yi (t). Even
though the three inputs have the same peak magnitude 1 and roughly the same total amount
of energy16 , their excited outputs are very different as shown in Figure 9.19(bbb), (ccc), and
(ddd). The second output has the largest peak magnitude and the most energy among the
three outputs. This will be difficult to explain in the time domain. However it becomes
transparent in the frequency domain. The nonzero portion of the magnitude response of the
system and the nonzero portions of the magnitude spectra of the first and third inputs do
not overlap. Thus their products are practically zero for all ω and the corresponding outputs
are small. On the other hand, the nonzero portion of the magnitude response of the system
and the nonzero portion of the magnitude spectrum of the second input coincide. Thus their
product is nonzero in the neighborhood of ω = 20 rad/s and the corresponding output is large
as shown.
No physical system is designed to be completely rigid. Every structure or mechanical
system will vibrate when it is subjected to a shock or an oscillating force. For example, the
wings of a Boeing 747 vibrate fiercely when it flies into a storm. Thus in designing a system, if
its magnitude response has a narrow spike, it is important that the spike should not coincide
with the most significant part of the frequency spectrum of possible external excitation.
15 This subsection may be skipped without loss of continuity.
16 Because their magnitude spectra have roughly the same form and magnitude, it follows from (4.36) that
the three inputs have roughly the same total amount of energy.
250 CHAPTER 9. QUALITATIVE ANALYSIS OF CT LTI LUMPED SYSTEMS
Figure 9.19: (a) The impulse response h(t) of the system in (9.84). (aa) The magnitude
response |H(jω)| of the system in (9.84). (b) u1(t) = e^{-0.5t} cos 5t. (bb) Its magnitude spectrum.
(bbb) Its excited output y1(t). (c) u2(t) = e^{-0.5t} cos 20t. (cc) Its magnitude spectrum. (ccc) Its
excited output y2(t). (d) u3(t) = e^{-0.5t} cos 35t. (dd) Its magnitude spectrum. (ddd) Its excited
output y3(t).
9.10. REASONS FOR NOT USING SS EQUATIONS IN DESIGN 251
Otherwise, excessive vibration may occur and cause eventual failure of the system. The most
infamous such failure was the collapse of the first Tacoma Narrows suspension bridge, in
Tacoma, Washington, in 1940 due to wind-induced resonance.17
To conclude this section, we discuss how the plots in Figures 9.19 are generated. The
inverse Laplace transform of (9.84) is h(t) = e−0.2t sin 20t. Typing
t=0:0.05:10;h=exp(-0.2*t).*sin(20*t);plot(t,h)
yields the impulse response of (9.84) in Figure 9.19(a). Typing
n=20;d=[1 0.4 400.04]; [H,w]=freqs(n,d);plot(w,abs(H))
yields the magnitude response of (9.84) in Figure 9.19(aa). Note that the function [H,w]=freqs(n,d)
selects automatically 200 frequencies, denoted by w, and then computes the frequency re-
sponses at those frequencies and stores them in H. Typing
t=0:0.05:10;u1=exp(-0.5*t).*cos(5*t);plot(t,u1)
yields the input u1 in Figure 9.19(b). Because the Laplace transform of u1(t) = e^{-0.5t} cos 5t is U1(s) = (s + 0.5)/((s + 0.5)^2 + 25), its magnitude spectrum in Figure 9.19(bb) can be computed with freqs in the same way.
1. A proper rational transfer function of degree n has at most 2(n+1) nonzero parameters.
For a general ss equation of dimension n, the matrix A is of order n × n and has n2
entries. The vectors b and c each have n entries. Thus the set {A, b, c, d} has a total
of n2 + 2n + 1 parameters. For example, if n = 5, then a transfer function may have 12
parameters, whereas an ss equation may have 36 parameters. Thus the use of transfer
functions in design is considerably simpler than the use of ss equations.
2. Filters to be designed are specified in the frequency domain as shown in Figure 9.12.
The design consists of searching for a proper stable transfer function of the smallest possible
degree so that its magnitude response meets the specifications. In this search, ss
equations cannot be used. Thus we use transfer functions exclusively in filter design.
17 Some texts, for example Reference [D1], use the collapse of the bridge to illustrate the concept of stability.
3. In control systems, design often involves feedback. Transfer functions of feedback sys-
tems can be easily computed as in Figures 7.8(c) and 7.11. Note that those formulas are
directly applicable to the CT case if z is replaced by s. State-space equations describing
feedback systems are complicated. Thus it is simpler to use transfer functions to design
control systems. See the next chapter and References [C5, C6].
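The parameter counts in item 1 above can be tabulated with a short script (a Python sketch; the formulas are exactly those stated in item 1):

```python
def tf_params(n):
    # proper rational transfer function of degree n:
    # numerator and denominator each have at most n + 1 coefficients
    return 2 * (n + 1)

def ss_params(n):
    # {A, b, c, d}: n*n entries in A, n each in b and c, plus the scalar d
    return n * n + 2 * n + 1

counts = {n: (tf_params(n), ss_params(n)) for n in (2, 5, 10)}
# For n = 5 this gives (12, 36), the numbers quoted above.
```

The gap widens quickly with n, which is one reason transfer functions are preferred in design.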
In conclusion, we encounter in this and the preceding chapters three domains:
• Transform domain: where we carry out design. In filter design, we search for a transfer
function whose frequency response meets the specification given in the frequency do-
main. In control system design, we search a compensator, a transfer function, so that
the feedback system meets the specifications in the time domain such as overshoot and
response time. Once a transfer function is found, we use its minimal realization to carry
out implementation.
• Time domain: where actual processing is carried out. The state-space equation is most
convenient in real-time processing. Systems’ frequency responses and signals’ frequency
spectra do not play any role in actual real-time processing.
• Frequency domain: where the specification is developed from frequency spectra of signals
to be processed.
It is important to understand the role of these three domains in engineering.
possible. Amplifiers for audio signals and for radio signals, power amplifiers, tuners, filters,
and oscillators were developed to build radio transmitters and receivers. By 1922, there were
more than 500 radio stations in the US and a large number of transmitters and receivers. By
then, it is fair to say that the design of electronic circuits had become routine. The design
was probably all carried out using empirical methods.
Even though the Laplace transform was developed by Pierre-Simon Laplace in 1782, its
application to the study of circuits began only in 1926. See Reference [C3]. Although filters were widely
used by 1922, the Butterworth filter was first suggested in 1930. Chebyshev and elliptic filters
appeared later. They were all based on transfer functions.
In order to explain the dancing of flyballs and jerking of machine shafts in steam engines,
James Maxwell developed in 1868 a third-order linearized differential equation to raise the
issue of stability. He concluded that in order to be stable, the characteristic polynomial of
the differential equation must be CT stable. This marked the beginning of the mathematical
study of control systems. In 1877, Edward Routh developed the Routh test (Section 9.5.3).
In 1932, Harry Nyquist developed a graphical method of checking the stability of a feedback
system from its open-loop system. The method however is fairly complex and is not suitable
for design. In 1940, Hendrik Bode simplified the method to use the phase margin and gain
margin of the open-loop system to check stability. However the method is applicable only
to a small class of open-loop systems. Moreover, the relationship between phase and gain
margins and system performances is vague. In 1948, W. R. Evans developed the root-locus
method to carry out design of feedback systems. The method is general but the compensator
used is essentially limited to degree 0. The aforementioned methods are all based on transfer
functions and constitute the entire bulk of most texts on control systems published before
1970.
State-space equations first appeared in the engineering literature in the early 1960s. The
formulation is precise: it first gives definitions, and then develops conditions and finally
establishes theorems. Moreover its formulation for SISO and MIMO systems is the same
and all results for SISO systems can be extended to MIMO systems. The most celebrated
results are: If an ss equation is controllable, then state feedback can achieve arbitrary pole-
placement. If an ss equation is observable, then a state estimator with any desired poles can
be constructed. By 1980, ss equations and designs were introduced into many undergraduate
texts on control.
With the impetus of ss results, researchers took a fresh look at transfer functions in the
1970s. By considering a rational function as a ratio of two polynomials, the polynomial-fraction
approach was born. In this way, results for SISO systems can also be extended
to MIMO systems. The important concept in this approach is coprimeness. Under the
coprimeness assumption, it is possible to achieve pole-placement and model-matching designs.
The method is simpler and the results are more general than those based on ss equations. See
References [C5, C6].
From the preceding discussion, we see that mathematical methods of designing feedback
control systems started to appear in the 1940s. By then, feedback control systems had been
in existence for over 150 years, counting from the feedback control of steam engines. The
aforementioned design methods are applicable only to LTI lumped systems, whereas practical
systems have gone through many iterations of improvement, are generally very good,
and cannot be described by simple mathematical equations. Thus it is not clear to this author
how much of the design methods in control texts are used in practice.
The role of transfer functions has been firmly established in ECE curricula. Now transfer
functions appear in texts on circuit analysis, analog filter design, microelectronic circuits,
digital signal processing, and, to a lesser extent, communication. The role of ss equations,
however, is still not clear. State-space equations are now introduced in every undergraduate
control text. See, for example, Reference [D1]. Such texts stress analytical solutions and discuss
some design.
The class of systems studied in most texts on signals and systems is limited to linear time-
invariant and lumped systems. Such a system can be described by a convolution, higher-order
differential equation, ss equation, and rational transfer function. The first three are in the time
domain and the last one is in the frequency domain. Many existing texts stress convolutions
and Fourier analysis of systems and ignore ss equations. This text gives the reasons for not
doing so. We stress the use of ss equations in computer computation, real-time processing,
and op-amp circuit implementations and the use of transfer functions in qualitative analysis
of systems. This selection of topics may be more useful in practice.
Problems
9.1 What are the poles and zeros of the following transfer functions:
H1(s) = (s^2 − 1)/(3s^2 + 3s − 6)

H2(s) = (2s + 5)/(3s^2 + 9s + 6)

H3(s) = (s^2 − 2s + 5)/((s + 0.5)(s^2 + 4s + 13))
Plot the poles and zeros of H3(s) on the complex s-plane. Note that you are free to select
the scales of the plot.
9.2 Find the impulse and step responses of a system with transfer function
H(s) = (2s^2 − 10s + 1)/(s^2 + 3s + 2)
9.3 Verify that if H(s) is proper, then its impulse response equals the step response of sH(s).
Note that in computer computation, a transfer function is first realized as an ss equation
and the computation is then carried out on that equation. If H(s) is biproper, then sH(s) is improper and
cannot be realized in a standard-form ss equation. Thus if we use a computer to compute
the impulse response of H(s), we require H(s) to be strictly proper, which implies d = 0
in its ss-equation realization. If H(s) is biproper, the MATLAB function impulse
will simply ignore d. For example, for H1 (s) = 1/(s + 1) and H2 (s) = H1 (s) + 2 =
(2s+3)/(s+1), typing in MATLAB n1=1;d=[1 1];impulse(n1,d) and n2=[2 3];d=[1
1];impulse(n2,d) will yield the same result. See also Problem 8.6.
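The claim in Problem 9.3 can also be checked numerically. A Python/scipy sketch (assuming scipy is available) compares the impulse response of H1(s) = 1/(s + 1) with the step response of sH1(s) = s/(s + 1):

```python
import numpy as np
from scipy import signal

t = np.linspace(0.0, 5.0, 501)

# impulse response of the strictly proper H1(s) = 1/(s + 1)
_, y_imp = signal.impulse(([1.0], [1.0, 1.0]), T=t)

# step response of s*H1(s) = s/(s + 1)
_, y_step = signal.step(([1.0, 0.0], [1.0, 1.0]), T=t)

# both responses equal e^{-t}, so their difference is essentially zero
max_diff = float(np.max(np.abs(y_imp - y_step)))
```

Both curves are e^{-t}, in agreement with the problem statement.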
9.4* Consider the heating system shown in Figure 9.20. Let y(t) be the temperature of the
chamber and u(t) be the amount of heat pumping into the chamber. Suppose they are
related by
ẏ(t) + 0.0001y(t) = u(t)
If no heat is applied and if the temperature is 80°, how long will it take for the temperature
to drop to 70°? [Hint: Use (8.43).]
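A quick check on Problem 9.4 (a sketch; it assumes, per the hint, that the zero-input response is y(t) = y(0)e^{−0.0001t}):

```python
import math

a = 0.0001               # from y'(t) + 0.0001 y(t) = u(t) with u = 0
y0, y_target = 80.0, 70.0

# solve y_target = y0 * exp(-a t) for t
t_drop = math.log(y0 / y_target) / a   # about 1335 seconds (~22 minutes)
```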
9.5 What is the general form of the step response of a system with transfer function
H(s) = 10(s − 1)/((s + 1)^3 (s + 0.1))
the form in Problem 9.21? Is it lowpass? What is its 3-dB bandwidth? What is its
steady state response excited by the input
u(t) = sin 0.1t + sin 100t
9.23 Consider the network shown in Figure 9.21(b). Compute its transfer function and plot
its magnitude and phase responses. What is its steady-state response excited by the
input
u(t) = sin 0.1t + sin 100t
Is it lowpass or highpass? What is its 3-dB bandwidth? What is its 3-dB passband
edge frequency?
9.24 Verify that for a properly designed transfer function of degree 2, a lowpass, bandpass,
and highpass filter must assume, respectively, the following forms:

Hl(s) = b/(s^2 + a2 s + a3)

Hb(s) = (bs + d)/(s^2 + a2 s + a3)

Hh(s) = (bs^2 + cs + d)/(s^2 + a2 s + a3)
9.26* Let x(t) be two-sided and absolutely integrable in (−∞, ∞). Let x+(t) = x(t), for
t ≥ 0 and x+(t) = 0, for t < 0. In other words, x+(t) denotes the positive-time part
of x(t). Let x−(t) = x(t), for t < 0 and x−(t) = 0, for t ≥ 0. In other words, x−(t)
denotes the negative-time part of x(t). Verify

X(jω) = L[x+(t)]|_{s=jω} + L[x−(−t)]|_{s=−jω}

where L denotes the (one-sided) Laplace transform.
Using this formula, the Fourier transform of a two-sided signal can be computed using
the (one-sided) Laplace transform. Note that x−(t) is negative time, but x−(−t) is
positive time.
9.27 Let X(s) be the Laplace transform of x(t) and be a proper rational function. Show that
x(t) approaches a constant (zero or nonzero) if and only if all poles, except possibly a
simple pole at s = 0, of X(s) have negative real parts. In this case, we have

lim_{t→∞} x(t) = lim_{s→0} s X(s)

This is called the final-value theorem. To use the theorem, we must first check, essentially,
the stability of X(s). It is not as useful as Theorem 9.5.
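The final-value theorem in Problem 9.27 is easy to illustrate numerically. Take X(s) = 1/(s(s + 1)), i.e., x(t) is the step response of the stable H(s) = 1/(s + 1); all poles of X(s) except the simple pole at s = 0 have negative real parts, so the theorem applies (a Python/scipy sketch):

```python
import numpy as np
from scipy import signal

# x(t) with X(s) = 1/(s (s + 1)): the step response of 1/(s + 1)
t = np.linspace(0.0, 20.0, 2001)
_, x = signal.step(([1.0], [1.0, 1.0]), T=t)

final_from_response = float(x[-1])   # x(t) for large t
final_from_theorem = 1.0             # s X(s) = 1/(s + 1) evaluated at s = 0
```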
Chapter 10

Model Reduction and Some Feedback Designs
10.1 Introduction
This chapter introduces three independent topics.1 The first topic, consisting of Sections
10.2 and 10.3, discusses model reduction which includes the concept of dominant poles as a
special case. Model reduction is widely used in engineering and yet rarely discussed in most
texts. Because it is based on systems’ frequency responses and signals’ frequency spectra,
its introduction in this text is most appropriate. We introduce the concept of operational
frequency ranges of devices and apply it to op-amp circuits, seismometers, and accelerometers.
The second topic, consisting of Sections 10.4 and 10.5, discusses composite systems, in
particular, feedback systems. Feedback is used in refrigerators, ovens, and homes to maintain
a set temperature and in auto-cruise of automobiles to maintain a set speed. We use examples
to demonstrate the necessity and advantage of using feedback. We then discuss pole-placement
design and feedback design of inverse systems.
The last topic discusses the Wien-bridge oscillator. The circuit will generate a sustained
oscillation once it is excited and the input is removed. We first design it directly. We then
develop a feedback model for an op-amp circuit and then use it to design the oscillator. We
also relate the conventional oscillation condition with pole location.
260 CHAPTER 10. MODEL REDUCTION AND SOME FEEDBACK DESIGNS
Figure 10.2: (a) Magnitude response of an op amp. (b) Its frequency response.
through the gain at 2 × 10^5. Thus the transfer function of the op amp is

A(s) = (2 × 10^5) / [(1 + s/(16π))(1 + s/(6π × 10^6))]   (10.1)

See Subsection 9.7.5 and References [C5, C8]. It has two poles, one at −16π and the other
at −6π × 10^6. The response due to the latter pole will vanish much faster than that of the
former; thus the response of (10.1) is dominated by the pole −16π and the transfer function
in (10.1) can be reduced to or approximated by

A(s) = (2 × 10^5)/(1 + s/(16π)) = (32π × 10^5)/(s + 16π) ≈ 10^7/(s + 50.3)   (10.2)
This is called a single-pole or dominant-pole model of the op amp. This simplification or model
reduction will be discussed in the next subsection.
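The quality of the dominant-pole approximation can be checked numerically. The sketch below (Python; the models are (10.1) and (10.2) above) compares the two frequency responses well below the fast pole at 6π × 10^6 rad/s:

```python
import numpy as np

def A_full(s):
    # two-pole model (10.1)
    return 2e5 / ((1 + s / (16 * np.pi)) * (1 + s / (6 * np.pi * 1e6)))

def A_one(s):
    # dominant-pole model (10.2)
    return 2e5 / (1 + s / (16 * np.pi))

w = np.logspace(0, 4, 200)   # 1 to 10^4 rad/s
err = np.abs(A_full(1j * w) - A_one(1j * w)) / np.abs(A_full(1j * w))
max_rel_err = float(np.max(err))   # well under one percent in this range
```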
If an op amp is modeled as memoryless and operates in its linear region, then its inputs
and output are related by
vo (t) = A[e+ (t) − e− (t)]
in the time-domain, or
Vo (s) = A[E+ (s) − E− (s)] (10.3)
in the transform-domain, where A is a constant and capital letters are the Laplace transform
of the corresponding lower-case letters. Now if the op amp is modeled to have memory and
has the transfer function A(s), then (10.3) must be modified as
Vo (s) = A(s)[E+ (s) − E− (s)] (10.4)
Now we use this equation with A(s) in (10.2) to study the stability of the op-amp circuits.
The use of (10.1) will yield the same conclusion but the analysis will be more complicated.
Consider the circuit in Figure 10.1(a). Substituting Vi (s) = E+ (s) and Vo (s) = E− (s)
into (10.4) yields
Vo (s) = A(s)(Vi (s) − Vo (s))
10.2. OP-AMP CIRCUITS BASED ON A SINGLE-POLE MODEL 261
which implies
(1 + A(s))Vo (s) = A(s)Vi (s)
Thus the transfer function of the circuit in Figure 10.1(a) is

H(s) = Vo(s)/Vi(s) = A(s)/(A(s) + 1)   (10.5)

or, substituting (10.2),

H(s) = [10^7/(s + 50.3)] / [1 + 10^7/(s + 50.3)] = 10^7/(s + 50.3 + 10^7) ≈ 10^7/(s + 10^7)   (10.6)
It is stable. If we apply a step input, the output vo (t), as shown in Theorem 9.5, will approach
H(0) = 1. If we apply vi(t) = sin 10t, then, because

H(j10) = 10^7/(j10 + 10^7) ≈ 1 · e^{j0}

the output will approach sin 10t. Furthermore, because the time constant is 1/10^7, it takes
roughly 5 × 10^{−7} seconds to reach steady state. In other words, the output will follow the
input almost instantaneously. Thus the circuit is a very good voltage follower.
Now we consider the op-amp circuit in Figure 10.1(b). Substituting Vi(s) = E−(s) and
Vo(s) = E+(s) into (10.4) yields

Vo(s) = A(s)[Vo(s) − Vi(s)]

which implies

(1 − A(s))Vo(s) = −A(s)Vi(s)
Thus the transfer function of the circuit is

H(s) = Vo(s)/Vi(s) = −A(s)/(1 − A(s)) = A(s)/(A(s) − 1)   (10.7)

or, substituting (10.2),

H(s) = [10^7/(s + 50.3)] / [10^7/(s + 50.3) − 1] = 10^7/(10^7 − s − 50.3) ≈ 10^7/(10^7 − s)   (10.8)
It has a pole in the RHP. Thus the circuit is not stable and its output will grow unbounded
when an input is applied. Thus the circuit will either burn out or run into a saturation region
and cannot be used as a voltage follower.
To conclude this section, we mention that the instability of the circuit in Figure 10.1(b)
can also be established using a memoryless but nonlinear model of the op amp. See Reference
[C9, pp. 190–193]. But our analysis is simpler.
Figure 10.3: (a) Magnitude response and (b) phase response of the voltage follower in (10.6), plotted against log10 ω (rad/s).
and
Ys (jω) = Hs (jω)U (jω)
As discussed in Section 9.9, (10.9) has physical meaning only if the system is stable. If
H(jω) = Hs (jω) for ω in B, then Y (jω) = Ys (jω) for ω in B. If the spectrum of u(t) is zero
outside B or U (jω) = 0 for ω lying outside B, then Y (jω) = Ys (jω) = 0 for ω outside B.
Thus we have Y (jω) = Ys (jω) for all ω. Consequently we conclude y(t) = ys (t) for all t. This
establishes the theorem. □
An important implication of Theorem 10.1 is that a stable transfer function can often be
simplified for a class of inputs. For example, consider the voltage follower in Figure 10.1(a)
with transfer function in (10.6). The magnitude and phase responses of (10.6) are plotted in
Figure 10.3 against log10 ω for ω from 10^0 = 1 to 10^11. We see that the magnitude response
is 1 and the phase response is practically 0 in the frequency range [0, 10^6]. In this frequency
range, the frequency response of (10.6) is the same as the frequency response of Hs (s) = 1.
The frequency spectra of the step input and u(t) = sin 10t lie inside the range and for these
two inputs, the transfer function in (10.6) can be simplified as Hs (s) = 1 as discussed earlier.
However if the spectrum of a signal lies outside the range, then the simplified model cannot
be used. For example, consider u(t) = cos 10^20 t, whose nonzero spectrum lies outside [0, 10^6].
For this input, the output of Hs(s) = 1 is ys(t) = cos 10^20 t, for t ≥ 0. To find the output of
H(s) in (10.6), we compute

H(j10^20) = 10^7/(j10^20 + 10^7) ≈ 10^7/(j10^20) ≈ 0 · e^{−jπ/2}
C(s) = kp + ki/s + kd s

where kp is a proportional gain, ki is the parameter associated with 1/s, an integrator, and
kd is associated with s, a differentiator. Thus the transfer function is called a PID controller
and is widely used in control systems. See Reference [C5].
The differentiator kd s is actually a simplified model. In reality, it is designed as

H(s) = kd s/(1 + s/N)   (10.10)
10.3. SEISMOMETERS 263
Figure 10.4: (a) Magnitude responses of 2s (solid line) and (10.10) with N = 20 (dotted line).
(b) Outputs of 2s (solid line) and (10.10) (dotted line).
where N is a constant, called a taming factor. The transfer function is biproper and can be
implemented without using any differentiator. We plot in Figure 10.4(a) with a solid line the
magnitude response of kd s with kd = 2 and with a dotted line the magnitude response of
(10.10) with N = 20. We see that they are close for low frequencies or for frequency in [0, 10].
Now if we apply the signal cos 1.5t whose spectrum lies inside the range, then the outputs of
(10.10) and kd s are indistinguishable as shown in Figure 10.4(b) except in the neighborhood of
t = 0. Note that the transfer function in (10.10) with N = 20 has time constant 1/20 = 0.05
and it takes roughly 0.25 second for the output of (10.10) to reach steady-state as shown in
Figure 10.4(b). Thus the transfer function in (10.10) acts as a differentiator for low frequency
signals.
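The agreement between kd s and (10.10) at low frequencies can be quantified directly from the two frequency responses (a Python sketch with kd = 2 and N = 20, the values used above):

```python
kd, N = 2.0, 20.0

def ideal(s):
    return kd * s                  # pure differentiator

def tamed(s):
    return kd * s / (1 + s / N)    # the biproper model (10.10)

w_low = 1.5                        # frequency of the test signal cos(1.5 t)
low_err = abs(ideal(1j * w_low) - tamed(1j * w_low)) / abs(ideal(1j * w_low))

w_high = 200.0                     # far above the pole at s = -N
high_err = abs(ideal(1j * w_high) - tamed(1j * w_high)) / abs(ideal(1j * w_high))
# low_err is a few percent; high_err is nearly 1 (the taming takes over)
```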
Model reduction is widely used in practice. For example, an amplifier with gain A could
be a simplified model of A/(1 + s/N). In practical applications, the conditions in Theorem
10.1 are replaced by |H(jω)| ≈ |Hs(jω)| for ω in B and U(jω) ≈ 0 for ω outside B, and B
need not be precisely specified.
To conclude this section, we mention that a simplified model of H(s) can often be obtained
by inspection. For example, consider H(s) = 10^7/(s + 10^7). If |jω| ≤ 10^6, then jω + 10^7 ≈ 10^7.
Thus for low-frequency signals, we can reduce (10.6) to

Hs(s) = 10^7/10^7 = 1

Note that even though we have 10^7 − jω ≈ 10^7, for ω small, we cannot reduce (10.8) to

Hs(s) = 10^7/10^7 = 1

because the transfer function in (10.8) is not stable.
10.3 Seismometers
A seismometer is a device to measure and record vibratory movements of the ground caused
by earthquakes or man-made explosions. There are many types of seismometers. We consider
the one based on the model shown in Figure 10.5(a). A block with mass m, called a seismic
mass, is supported inside a case through a spring with spring constant k and a dashpot as
shown. The dashpot generates a viscous friction with viscous friction coefficient f. See the
discussion pertaining to Figure 8.2. The case is rigidly attached to the ground. Let u and z
be, respectively, the displacements of the case and the seismic mass relative to inertial space.
They are measured from the equilibrium position. By this, we mean that if u = 0 and z = 0,
then the gravity of the seismic mass and the spring force cancel out and the mass remains
stationary. The input u in Figure 10.5(a) is the movement of the ground.
Now if the ground vibrates and u(t) becomes nonzero, the spring will exert a force on
the seismic mass and cause it to move. If the mass is rigidly attached to the case, then
z(t) = u(t). Otherwise, we generally have z(t) ≠ u(t). Let us define y(t) := u(t) − z(t). It
is the displacement of the mass with respect to the case and can be read from the scale on
the case as shown. It can also be transformed into a voltage signal using, for example, a
potentiometer. Note that the direction of y(t) in Figure 10.5(a) is opposite to that of u(t).
Now the acceleration force of m must be balanced out by the spring force ky(t) and the
viscous friction f ẏ(t). Thus we have

m z̈(t) = f ẏ(t) + ky(t)   (10.11)

or

m ÿ(t) + f ẏ(t) + ky(t) = m ü(t)   (10.12)

Applying the Laplace transform and assuming zero initial conditions, we obtain

H(s) = Y(s)/U(s) = ms^2/(ms^2 + fs + k)   (10.13)
This system is always stable for any positive m, f , and k (see Problem 9.11).
The reading y(t) of a seismometer should be proportional to the actual ground movement
u(t). This can be achieved if (10.13) can be reduced or simplified to

Hs(s) = ms^2/(ms^2) = 1

Thus the design problem is to find f, k, and m so that the frequency range B in which the
frequency response of H(s) equals that of Hs(s) is the largest.
In order for (10.13) to reduce to Hs(s) = ms^2/(ms^2), we require |jωf + k| ≪ m|jω|^2.
This provides us with a way of selecting the parameters. First, there is no loss of generality
in assuming m = 1. In order for the inequality to hold over a large range B, we should select k to
be small. Arbitrarily, we select k = 0.2. We then carry out the selection of f by computer
simulation. We plot in Figure 10.6 the magnitude responses of (10.13) for f = 0.2, 0.4, 0.63,
and 0.8. It is obtained in MATLAB by typing
w=0:0.01:5;n=[1 0 0];
d1=[1 0.2 0.2];H1=freqs(n,d1,w);
d2=[1 0.4 0.2];H2=freqs(n,d2,w);
d3=[1 0.63 0.2];H3=freqs(n,d3,w);
d4=[1 0.8 0.2];H4=freqs(n,d4,w);
plot(w,abs(H1),’:’,w,abs(H2),w,abs(H3),’--’,w,abs(H4),’-.’)
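For readers without MATLAB, the same comparison can be sketched in Python using scipy.signal.freqs, which plays the role of the MATLAB freqs call above (an assumption: numpy and scipy are available):

```python
import numpy as np
from scipy import signal

w = np.arange(0.01, 5.0, 0.01)
num = [1.0, 0.0, 0.0]                                 # m s^2 with m = 1

mags = {}
for f in (0.2, 0.4, 0.63, 0.8):
    _, H = signal.freqs(num, [1.0, f, 0.2], worN=w)   # k = 0.2
    mags[f] = np.abs(H)

# With f = 0.63 the magnitude response stays within about 1% of 1
# over the band [1.25, 5]; with f = 0.2 it deviates much more.
band = w >= 1.25
dev_063 = float(np.max(np.abs(mags[0.63][band] - 1.0)))
dev_020 = float(np.max(np.abs(mags[0.2][band] - 1.0)))
```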
Figure 10.6: Magnitude responses of (10.13) for m = 1, k = 0.2, and f = 0.2, 0.4, 0.63, 0.8.
We see that for f = 0.63, the magnitude response approaches 1 at the smallest ω, or has
the largest frequency range, ω > 1.25, in which the magnitude response equals 1. Thus we
conclude that (10.13) with m = 1, f = 0.63, and k = 0.2 is a good seismometer. However, it
will yield accurate results only for signals whose nonzero frequency spectra lie inside the range
B = [1.25, ∞). In other words, in the frequency range B, the transfer function in (10.13) can
be reduced to

Hs(s) = Ys(s)/U(s) = ms^2/(ms^2) = 1

We see that y(t) = u1(t), for all t ≥ 0. Indeed the transfer function in (10.13) can be reduced
to Hs(s) = 1.
Next we consider the signal u2(t) = e^{-0.3t} sin t shown in Figure 10.7(b). Its magnitude
spectrum is shown in Figure 10.8(b). It is significantly different from zero outside B. Thus
we can no longer expect the output to equal u2 (t). This is indeed the case as shown in
Figure 10.7(bb). This shows the importance of the concepts of operational frequency ranges
of systems and frequency spectra of signals.
To conclude this section, we mention that even for the simple model in Figure 10.5(a), we
may raise many design questions. Can we obtain a larger B by selecting a smaller k? Can we
select k = 0 or f = 0? What are the real-world constraints on f and k? Once a seismometer is
designed and installed, it will always give some reading even if the input’s frequency spectrum
is not inside B. Thus we may also question the importance of B. In conclusion, a real-world
design problem is much more complicated than the discussion in this section.
266 CHAPTER 10. MODEL REDUCTION AND SOME FEEDBACK DESIGNS
Figure 10.7: (a) u1(t) = e^{-0.3t} sin t cos 20t. (aa) Output of the seismometer excited by u1.
(b) u2(t) = e^{-0.3t} sin t. (bb) Output of the same seismometer excited by u2.
Figure 10.8: (a) Magnitude spectrum of u1(t) = e^{-0.3t} sin t cos 20t. (b) Magnitude spectrum
of u2(t) = e^{-0.3t} sin t.
10.3.1 Accelerometers
Consider next the system shown in Figure 10.5(b). It is a model of a type of accelerometer.
An accelerometer is a device that measures the acceleration of the object to which it is
attached. By integrating the acceleration twice, we obtain the velocity and distance (position)
of the object. It is used in inertial navigation systems on airplanes and ships. It is now widely
used to trigger airbags in automobiles.
The model consists of a block with seismic mass m attached to a case through two springs
as shown in Figure 10.5(b). The case is filled with oil to create viscous friction and is attached
rigidly to an object such as an airplane. Let u and z be, respectively, the displacements of
the case and the mass with respect to inertial space. Because the mass is floating inside
the case, u may not equal z. We define y := u − z. It is the displacement of the mass with
respect to the case and can be transformed into a voltage signal. Note that the direction of
y(t) in Figure 10.5(b) is opposite to that of u(t). The input of the accelerometer is u and
the output is y. Let the spring constant of each spring be k/2 and let the viscous friction
coefficient be f. Then we have, as in (10.12) and (10.13),

m ÿ(t) + f ẏ(t) + ky(t) = m ü(t)

and

H(s) = Y(s)/U(s) = ms^2/(ms^2 + fs + k)   (10.14)
It is stable for any positive m, f , and k. Note that the transfer function in (10.14) is identical
to (10.13) which is designed to act as a seismometer. Now for the same transfer function, we
will design it so that it acts as an accelerometer.
The reading y(t) of an accelerometer should be proportional to the acceleration of the case,
d^2u(t)/dt^2. This can be achieved if (10.14) can be simplified to Hs(s) = ms^2/k. Indeed,
the output ys(t) of Hs(s) excited by u(t) is governed by

Hs(s) = Ys(s)/U(s) = ms^2/k   (10.15)
Figure 10.9: Magnitude responses of (10.14) for m = 1, k = 10, and f = 3, 3.75, 4, 4.75 and
the reduced model.
The subject of sensors, including seismometers and accelerometers, is a vast one. There
are piezoelectric, micro-mechanical, microthermal, magnetic, capacitive, optical, chemical,
and biological sensors and transducers. See, for example, Reference [F1]. Mathematical
modeling of those real-world devices can be complicated. The models in Figure 10.5 are
used as examples in many books. However, the discussions often stop after developing the
transfer function; no reasons are given why the device can act as a seismometer or an accelerometer.
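The accelerometer reduction can be checked the same way the seismometer was (a Python sketch using m = 1, k = 10, and f = 3.75, one of the damping values in Figure 10.9; only magnitudes are compared):

```python
m, k, f = 1.0, 10.0, 3.75

def H(s):
    return m * s**2 / (m * s**2 + f * s + k)   # full model (10.14)

def Hs(s):
    return m * s**2 / k                        # reduced model ms^2/k

w_low = 0.5   # inside the operational frequency range
low_err = abs(abs(H(1j * w_low)) - abs(Hs(1j * w_low))) / abs(Hs(1j * w_low))

w_high = 30.0  # far outside the range
high_err = abs(abs(H(1j * w_high)) - abs(Hs(1j * w_high))) / abs(Hs(1j * w_high))
# low_err is under 1%; high_err shows the reduction fails at high frequency
```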
where Hi (s), for i = 1, 2, are proper rational functions. In the parallel connection shown
in Figure 10.10(a), we have U1 (s) = U2 (s) = U (s) and Y (s) = Y1 (s) + Y2 (s). By direct
substitution, we have
Y (s) = H1 (s)U1 (s) + H2 (s)U2 (s) = (H1 (s) + H2 (s))U (s) =: Hp (s)U (s)
Thus the overall transfer function of the parallel connection is simply the sum of the two
individual transfer functions or Hp (s) = H1 (s) + H2 (s).
In the tandem or cascade connection shown in Figure 10.10(b), we have U(s) = U1(s),
Y1(s) = U2(s), and Y2(s) = Y(s). Its overall transfer function is the product of the two
individual transfer functions, or Ht(s) = H1(s)H2(s) = H2(s)H1(s).
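These composition rules reduce to polynomial arithmetic on the numerators and denominators. A Python sketch using the two RC transfer functions of (10.20) below, H1(s) = 1/(s + 1) and H2(s) = 2s/(2s + 1):

```python
import numpy as np

# H1(s) = 1/(s + 1) and H2(s) = 2s/(2s + 1), the circuits of (10.20)
n1, d1 = [1.0], [1.0, 1.0]
n2, d2 = [2.0, 0.0], [2.0, 1.0]

# tandem: Ht = H1 H2
num_t = np.polymul(n1, n2)   # 2s
den_t = np.polymul(d1, d2)   # 2s^2 + 3s + 1

# parallel: Hp = H1 + H2 = (n1 d2 + n2 d1) / (d1 d2)
num_p = np.polyadd(np.polymul(n1, d2), np.polymul(n2, d1))
den_p = np.polymul(d1, d2)
```

This gives Ht(s) = 2s/(2s^2 + 3s + 1) and Hp(s) = (2s^2 + 4s + 1)/(2s^2 + 3s + 1), as hand computation confirms.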
In the positive-feedback connection shown in Figure 10.10(c), we have U1(s) = U(s) + Y2(s)
and Y1(s) = Y(s) = U2(s). By direct substitution, we have

Y(s) = H1(s)U1(s)

and

U1(s) = U(s) + Y2(s) = U(s) + H2(s)H1(s)U1(s)

which implies

[1 − H2(s)H1(s)]U1(s) = U(s)   and   U1(s) = U(s)/(1 − H2(s)H1(s))
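The positive-feedback formula can likewise be reduced to polynomial arithmetic: with Hi = ni/di, the overall transfer function Y(s)/U(s) = H1/(1 − H2H1) has numerator n1 d2 and denominator d1 d2 − n1 n2. A hypothetical example (not from the text) with H1(s) = 1/(s + 1) and H2(s) = 1/(s + 2):

```python
import numpy as np

n1, d1 = [1.0], [1.0, 1.0]   # H1(s) = 1/(s + 1), a hypothetical choice
n2, d2 = [1.0], [1.0, 2.0]   # H2(s) = 1/(s + 2), a hypothetical choice

# Hpf = H1/(1 - H2 H1) = n1 d2 / (d1 d2 - n1 n2)
num = np.polymul(n1, d2)                                  # s + 2
den = np.polysub(np.polymul(d1, d2), np.polymul(n1, n2))  # s^2 + 3s + 1
```

The result, Hpf(s) = (s + 2)/(s^2 + 3s + 1), has degree 2, the sum of the degrees of H1 and H2.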
10.4. COMPOSITE SYSTEMS – LOADING PROBLEM 269
Figure 10.10: (a) Parallel connection. (b) Tandem connection. (c) Negative-feedback connec-
tion. (d) Positive-feedback connection.
Example 10.4.1 Consider the two RC circuits shown in Figure 10.11(a). Using impedances,
we can find their transfer functions as
H1(s) = (1/s)/(1 + 1/s) = 1/(s + 1)   and   H2(s) = 1/(1 + 1/(2s)) = 2s/(2s + 1)   (10.20)
Let us connect them in tandem as shown with dashed lines. We compute its transfer function
from u to y. The impedance of the parallel connection of 1/s and 1 + 1/2s is
Z1(s) = (1/s)(1 + 1/(2s)) / (1/s + 1 + 1/(2s)) = (2s + 1)/(s(2s + 3))
Thus the voltage V1 (s) shown is given by
V1(s) = [Z1(s)/(1 + Z1(s))] U(s) = [(2s + 1)/(2s^2 + 5s + 1)] U(s)
The preceding example shows that Ht (s) = H1 (s)H2 (s) may not hold in practice. If this
happens, we say that the connection has a loading problem. The loading in Figure 10.11(a)
is due to the fact that the current i(t) shown is zero before connection and becomes nonzero
after connection. This provides a method of eliminating the loading.
Let us insert an amplifier with gain 1 or a voltage follower between the two circuits as
shown in Figure 10.11(b). If the amplifier has a very large input resistance Ri as is usually
the case, the current i(t) will remain practically zero. If the output resistance Ro is zero or
very small, then there will be no internal voltage drop in the amplifier. Thus the amplifier is
called an isolating amplifier or a buffer and can eliminate or reduce the loading problem. In
conclusion, in electrical systems, loading can often be eliminated by inserting voltage followers
as shown in Figure 6.10. Because op amps have large input resistances, the loading problem
in op-amp circuits is generally negligible.
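The loading effect in Example 10.4.1 can be checked symbolically. The following sketch (assuming Python with sympy is available; not part of the original text) compares the naive product H1(s)H2(s) with the transfer function actually obtained after connection:

```python
from sympy import symbols, simplify

s = symbols('s')

# Unloaded transfer functions of the two RC circuits, as in (10.20)
H1 = (1/s) / (1 + 1/s)        # = 1/(s + 1)
H2 = 1 / (1 + 1/(2*s))        # = 2s/(2s + 1)

# After connection, the first circuit sees the second one as a load:
# 1/s in parallel with 1 + 1/(2s), then a voltage divider at node v1.
Z1 = (1/s)*(1 + 1/(2*s)) / ((1/s) + 1 + 1/(2*s))
V1_over_U = Z1 / (1 + Z1)
Ht_actual = simplify(V1_over_U * H2)     # y = H2 * v1

print(simplify(H1*H2))                   # naive product, ignoring loading
print(Ht_actual)                         # actual overall transfer function
print(simplify(Ht_actual - H1*H2) == 0)  # False: loading changes the result
```

The actual transfer function is 2s/(2s^2 + 5s + 1), while the naive product is 2s/(2s^2 + 3s + 1); they differ precisely because the current i(t) becomes nonzero after connection.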
Suppose S1 and S2 contain N1 and N2 energy storage elements, respectively, so that a composite connection of them contains a total number of N1 + N2 energy storage elements. Let Ho(s) be the transfer function of an overall system. Then the overall system is completely characterized by Ho(s) if and only if Ho(s) has a degree equal to N1 + N2. This condition can be stated in terms of the poles and zeros of Hi(s) as in the next theorem.
Theorem 10.2 Consider two systems S1 and S2 which are completely characterized by their proper rational transfer functions H1(s) and H2(s).

1. The parallel connection of S1 and S2 is completely characterized by its transfer function Hp(s) = H1(s) + H2(s) if and only if H1(s) and H2(s) have no pole in common.

2. The tandem connection of S1 and S2 is completely characterized by its transfer function Ht(s) = H1(s)H2(s) if and only if there is no pole-zero cancellation between H1(s) and H2(s).

3. The feedback connection of S1 and S2 is completely characterized by its transfer function Hf(s) = H1(s)/(1 ± H1(s)H2(s)) if and only if no pole of H2(s) is canceled by any zero of H1(s).
We first use examples to demonstrate the case for feedback connection.
Example 10.4.2 Consider two systems Si with transfer functions
H1(s) = 2/(s − 1)   and   H2(s) = (s − 1)/((s + 3)(s + 1))
where no pole of H2 (s) is canceled by any zero of H1 (s). Thus the feedback system is
completely characterized by its overall transfer function. Indeed, the transfer function of
their positive-feedback connection is
Hpf(s) = [2/(s − 1)] / [1 − (2/(s − 1)) · ((s − 1)/((s + 3)(s + 1)))]
       = [2/(s − 1)] / [1 − 2/((s + 3)(s + 1))]
       = [2/(s − 1)] · [(s + 3)(s + 1)/((s + 3)(s + 1) − 2)] = 2(s + 1)(s + 3)/[(s − 1)(s^2 + 4s + 1)]
It has degree 3, the sum of the degrees of H1(s) and H2(s). Thus the feedback system is completely
characterized by Hpf (s). Note that the cancellation of the zero of H2 (s) by the pole of H1 (s)
does not affect the complete characterization. 2
Example 10.4.3 Consider two systems Si with transfer functions
H1(s) = (s − 1)/((s + 1)(s + 3))   and   H2(s) = 2/(s − 1)
where the pole of H2 (s) is canceled by the zero of H1 (s). Thus the feedback system is not
completely characterized by its overall transfer function. Indeed, the transfer function of their
positive-feedback connection is
Hpf(s) = [(s − 1)/((s + 1)(s + 3))] / [1 − (2/(s − 1)) · ((s − 1)/((s + 1)(s + 3)))]
       = [(s − 1)/((s + 1)(s + 3))] / [1 − 2/((s + 1)(s + 3))]
       = (s − 1)/[(s + 1)(s + 3) − 2] = (s − 1)/(s^2 + 4s + 1)
It has degree 2, less than the sum of the degrees of H1 (s) and H2 (s). Thus the feedback
system is not completely characterized by Hpf (s). 2
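The two examples can be verified symbolically; the following sketch (assuming sympy; not part of the original text) computes the positive-feedback transfer function and the degree of its denominator in each case:

```python
from sympy import symbols, cancel, fraction, degree

s = symbols('s')

def positive_feedback(H1, H2):
    """Overall transfer function H1/(1 - H1*H2), cancellations carried out."""
    return cancel(H1 / (1 - H1*H2))

# Example 10.4.2: no pole of H2 is canceled by a zero of H1
H1a = 2/(s - 1)
H2a = (s - 1)/((s + 3)*(s + 1))
num_a, den_a = fraction(positive_feedback(H1a, H2a))
print(degree(den_a, s))   # 3 = deg H1 + deg H2: completely characterized

# Example 10.4.3: the pole of H2 at s = 1 is canceled by the zero of H1
H1b = (s - 1)/((s + 1)*(s + 3))
H2b = 2/(s - 1)
num_b, den_b = fraction(positive_feedback(H1b, H2b))
print(degree(den_b, s))   # 2 < 3: not completely characterized
```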
The preceding two examples show the validity of the feedback part of Theorem 10.2. We
now argue formally the first part of Theorem 10.2. The overall transfer function of the parallel
connection is
Hp(s) = N1(s)/D1(s) + N2(s)/D2(s) = [N1(s)D2(s) + N2(s)D1(s)]/[D1(s)D2(s)]   (10.21)
If D1 (s) and D2 (s) have the same factor s + a, then the same factor will also appear in
(N1 (s)D2 (s) + N2 (s)D1 (s)). Thus the degree of Hp (s) is less than the sum of the degrees
of D1 (s) and D2 (s) and the parallel connection is not completely characterized by Hp (s).
Suppose D1 (s) has the factor s + a but D2 (s) does not. Then the only way for N1 (s)D2 (s) +
N2 (s)D1 (s) to have the factor s + a is that N1 (s) contains s + a. This is not possible because
D1 (s) and N1 (s) are coprime. Thus we conclude that the parallel connection is completely
characterized by Hp (s) if and only if H1 (s) and H2 (s) have no common poles. The cases for
tandem and feedback connections can be similarly argued.
Next we consider the tandem connection of H2 (s) = N2 (s)/D2 (s) and H1 (s). Its overall
transfer function is
Ht(s) = H1(s)H2(s) = N1(s)N2(s)/[D1(s)D2(s)]   (10.24)
We see that the pole of H1 (s) remains in Ht (s) unless it is canceled by a zero of H2 (s) as we
discuss in the next example.
Example 10.4.5 Consider the tandem connection shown in Figure 10.12(b). The overall
transfer function is
Ht(s) = [(s − 1)/(s + 1)] · [1/(s − 1)] = 1/(s + 1)   (10.25)
and is a good transfer function. However, all discussion in the preceding example applies directly to this design. That is, the design is not acceptable in practice or in theory. 2
Next we consider the negative-feedback system shown in Figure 10.10(d). Its overall
transfer function is
Hnf(s) = H1(s)/[1 + H1(s)H2(s)]

which becomes, after substituting Hi(s) = Ni(s)/Di(s),

Hnf(s) = [N1(s)/D1(s)] / [1 + N1(s)N2(s)/(D1(s)D2(s))]
       = N1(s)D2(s)/[D1(s)D2(s) + N1(s)N2(s)]   (10.26)
We see that the poles of (10.26) are different from the poles of Hi (s), for i = 1, 2. Thus
feedback will introduce new poles. This is in contrast to the parallel and tandem connections
where the poles of Hi (s) remain unchanged as we can see from (10.21) and (10.24).
Now for the transfer function H1 (s) in (10.22), if we introduce a negative feedback with
H2 (s) = 2, then the feedback transfer function is
Hnf(s) = H1(s)/[1 + H1(s)H2(s)] = [1/(s − 1)]/[1 + 2/(s − 1)] = 1/[(s − 1) + 2] = 1/(s + 1)   (10.27)
The unstable pole of H1(s) at s = 1 is shifted to s = −1 in Hnf(s). Thus the feedback system is stable. The design does not involve any cancellation. Furthermore, the feedback
system is completely characterized by (10.27) because its degree equals the total number of
energy-storage elements in the system. In conclusion, the only way to stabilize an unstable
system is to introduce feedback.
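The pole-shifting effect of feedback in (10.27) can be checked symbolically; a sketch (assuming sympy; not from the original text) with the feedback gain H2(s) = k left as a parameter:

```python
from sympy import symbols, cancel, fraction, solve

s, k = symbols('s k')

H1 = 1/(s - 1)                    # unstable: pole at s = 1
H2 = k                            # constant gain in the feedback path

Hnf = cancel(H1 / (1 + H1*H2))    # negative feedback: 1/(s - 1 + k)
num, den = fraction(Hnf)

print(solve(den, s))              # closed-loop pole at s = 1 - k
print(solve(den.subs(k, 2), s))   # k = 2 shifts it to s = -1, as in (10.27)
```

Any k > 1 yields a stable closed-loop pole, which is the pole-shifting property that parallel and tandem connections lack.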
More generally, suppose we are required to design a composite system to improve the performance of a system which has some undesirable poles. If we use the parallel or tandem connections, the only way to remove those poles is by direct cancellation. Such a design, however, is not acceptable as discussed above. If we introduce feedback, then those poles, as we will show in a later section, can be shifted to any desired positions. Thus the only way to improve the performance of the system is to introduce feedback.
Figure 10.13: (a) Inverting amplifier. (b) Inverting feedback amplifier. (c) Op-amp implementation of (b).
All systems in Figure 10.13 are memoryless. The transfer function of a memoryless system
with gain A is simply A. Let −Af be the gain from u to y of the positive-feedback system in
Figure 10.13(b). Then we have, using (10.18),
−Af = (−A)^3/[1 − β(−A)^3] = −A^3/(1 + βA^3)

or

Af = A^3/(1 + βA^3)   (10.28)
Next we will find a β so that Af = 10. We solve
10 = 10^3/(1 + 10^3 β)

which implies 10 + 10^4 β = 10^3 and

β = (10^3 − 10)/10^4 = 0.099
In other words, if β = 0.099, then the feedback system in Figure 10.13(b) is also an inverting
amplifier with gain 10. The feedback gain β can be implemented as shown in Figure 10.13(c)
with Rf = R/β = 10.101R (Problem 10.9).
Even though the inverting feedback amplifier in Figure 10.13(b) uses three times more
components than the one in Figure 10.13(a), it is the preferred one. We give the reason. To
dramatize the effect of feedback, we assume that A decreases 10% each year due to aging or
whatever reason. In other words, A is 10 in the first year, 9 in the second year, and 8.1 in the
third year as listed in the second row of Table 10.1. Next we compute Af from (10.28) with
β = 0.099 and A = 9:
Af = 9^3/(1 + 0.099 × 9^3) = 9.963
We see that even though A decreases 10%, Af decreases only (10 − 9.963)/10 = 0.0037 or less
than 0.4%. If A = 8.1, then
Af = 8.1^3/(1 + 0.099 × 8.1^3) = 9.913
10.5. DESIGN OF CONTROL SYSTEMS – POLE PLACEMENT 275
Table 10.1: The open-loop gain A and the feedback gain Af in year n.

n    1     2     3     4     5     6     7     8     9     10
A    10    9.0   8.1   7.29  6.56  5.9   5.3   4.78  4.3   3.87
Af   10    9.96  9.91  9.84  9.75  9.63  9.46  9.25  8.96  8.6
and so forth. They are listed in the third row of Table 10.1. We see that the inverting feedback
amplifier is much less sensitive to the variation of A.
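The entries of Table 10.1 follow from (10.28); a short computation (plain Python, no external packages; not part of the original text) reproduces them:

```python
# Reproduce Table 10.1: the open-loop gain A decays 10% per year,
# while the feedback gain Af follows from (10.28) with beta = 0.099.
beta = 0.099

for n in range(1, 11):
    A = 10 * 0.9**(n - 1)           # open-loop gain in year n
    Af = A**3 / (1 + beta * A**3)   # gain of the feedback amplifier
    print(f"n={n:2d}  A={A:5.2f}  Af={Af:5.2f}")
```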
If an amplifier is to be replaced when its gain decreases to 9 or less, then the open-loop amplifier in Figure 10.13(a) lasts only one year, whereas the feedback amplifier in Figure 10.13(b) lasts almost nine years. Thus even though the feedback amplifier uses three times as many components, it lasts nine times longer and is more cost effective, not to mention avoiding the inconvenience and cost of replacing the open-loop amplifier every year. In conclusion, a properly designed feedback system is much less sensitive to parameter variations and external disturbances. Thus feedback is widely used in practice.
The first step in design is to find a C(s) to make the feedback system stable. If not, the
output y(t) will grow unbounded for any applied r(t) and cannot track any step reference
input. We first try C(s) = k, where k is a real constant, a compensator of degree 0. Then
the transfer function from r to y is
Hf(s) = A · C(s)P(s)/[1 + C(s)P(s)] = A · [k(s − 2)/((s + 0.5)(s − 1))]/[1 + k(s − 2)/((s + 0.5)(s − 1))]
      = A · k(s − 2)/[(s + 0.5)(s − 1) + k(s − 2)]
      = A · k(s − 2)/[s^2 + (k − 0.5)s − 0.5 − 2k]   (10.30)
The condition for Hf(s) to be stable is that the polynomial

s^2 + (k − 0.5)s + (−0.5 − 2k)   (10.31)

be CT stable. The conditions for (10.31) to be stable are k − 0.5 > 0 and −0.5 − 2k > 0 (see Problem 9.11). Any k > 0.5 meeting the first inequality will not meet the second inequality.
Thus the feedback system cannot be stabilized using a compensator of degree 0.
Next we try a compensator of degree 1 or
C(s) = (N1 s + N0)/(D1 s + D0)   (10.32)
where Ni and Di are real numbers. Using this compensator, Hf (s) becomes, after some simple
manipulation,
Hf(s) = A(N1 s + N0)(s − 2)/[(D1 s + D0)(s + 0.5)(s − 1) + (N1 s + N0)(s − 2)]
      =: A(N1 s + N0)(s − 2)/Df(s)   (10.33)

where

Df(s) := (D1 s + D0)(s + 0.5)(s − 1) + (N1 s + N0)(s − 2)   (10.34)
Now Hf (s) is stable if the polynomial of degree 3 in (10.34) is CT stable. Because the four
parameters of the compensator in (10.32) appear in the four coefficients of Df (s), we can
easily find a compensator to make Hf (s) stable. In fact we can achieve more. We can find a
C(s) to place the poles of Hf (s) or the roots of Df (s) in any positions so long as complex-
conjugate poles appear in pairs. For example, we can select the poles arbitrarily at −1, −1, −1
or, equivalently, select Df (s) as
Df(s) = (s + 1)^3 = s^3 + 3s^2 + 3s + 1   (10.35)

Matching the coefficients of Df(s) in (10.34) with those of (10.35) yields

D1 = 1
D0 − 0.5D1 + N1 = 3
−0.5D1 − 0.5D0 − 2N1 + N0 = 3   (10.36)
−0.5D0 − 2N0 = 1
Substituting D1 = 1 into the other equations and then carrying out elimination, we finally obtain

D1 = 1,  D0 = 8.8,  N1 = −5.3,  N0 = −2.7
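The elimination can also be carried out numerically; a sketch (assuming numpy; not part of the original text) that solves the linear equations (10.36) for the compensator coefficients:

```python
import numpy as np

# Solve the linear equations (10.36) for [D1, D0, N1, N0], obtained by
# matching the coefficients of Df(s) with those of (s + 1)^3.
M = np.array([
    [ 1.0,  0.0,  0.0,  0.0],   # D1 = 1
    [-0.5,  1.0,  1.0,  0.0],   # D0 - 0.5*D1 + N1 = 3
    [-0.5, -0.5, -2.0,  1.0],   # -0.5*D1 - 0.5*D0 - 2*N1 + N0 = 3
    [ 0.0, -0.5,  0.0, -2.0],   # -0.5*D0 - 2*N0 = 1
])
b = np.array([1.0, 3.0, 3.0, 1.0])

D1, D0, N1, N0 = np.linalg.solve(M, b)
print(D1, D0, N1, N0)   # approximately 1.0, 8.8, -5.3, -2.7
```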
Figure 10.15: (a) Step response of Figure 10.14 with all three poles selected at −1. (b) With
all three poles selected at −0.4.
output is yss (t) = 0.9a, then the tracking has 10% error ((a − 0.9a)/a = 0.1). If yss (t) = a,
then the system can track any step reference input r(t) = a without any error. This design
can be easily achieved by selecting the gain A in Figure 10.14 so that Hf (0) = 1 as in (10.39).
Thus accuracy is easy to achieve in designing control systems.
The transient specification is concerned with response time and overshoot. For the pole-
placement design in the preceding section, by selecting all three poles at −1, we obtain the
response shown in Figure 10.15(a). If we select all three poles at −0.4, then the step response
of the resulting system will be as shown in Figure 10.15(b). The response has no overshoot; it
has a smaller undershoot but a larger response time than the one in Figure 10.15(a). Which design we select depends on which is more important: response time or overshoot.
The selection of all three poles at −1 or at −0.4 is carried out by computer simulation.
If we select a real pole and a pair of complex-conjugate poles in the neighborhood of −1 or
−0.4, we will probably obtain a comparable result. Thus the design is not unique. How to
select a set of desired poles is not a simple problem. See Reference [C5].
This problem may find application in oil exploration: To find the source from measured data.
Every system, including every inverse system, involves two issues: realizability and stabil-
ity. If the transfer function of a system is not a proper rational function, then its implemen-
tation requires the use of differentiators. If the transfer function is not stable, then its output
will grow unbounded for any input. Thus we require every inverse system to have a proper
rational transfer function and to be stable.
Consider a system with transfer function H(s). If H(s) is biproper, then its inverse
Hin (s) = H −1 (s) is also biproper. If all zeros of H(s) lie inside the left half s-plane,2 then
H −1 (s) is stable. Thus if a stable biproper transfer function has all its zeros lying inside the
LHP, then its inverse system exists and can be easily built. If it has one or more zeros inside
the RHP or on the jω-axis, then its inverse system is not stable and cannot be used.
If H(s) is strictly proper, then H −1 (s) is improper and cannot be realized without using
differentiators. In this case, we may try to implement its inverse system as shown in Figure
10.16 where A is a very large positive gain. The overall transfer function of the feedback
system in Figure 10.16 is
Ho(s) = A/[1 + AH(s)]
2 Such a transfer function is called a minimum-phase transfer function. See Reference [C5].
10.6. INVERSE SYSTEMS 279
Now if A is very large such that |AH(jω)| ≫ 1, then the preceding equation can be approximated by

Ho(s) = A/[1 + AH(s)] ≈ A/[AH(s)] = H^-1(s)   (10.40)
Thus the feedback system can be used to implement approximately an inverse system. Furthermore, for A very large, Ho(s) is practically independent of A, so the feedback system is insensitive to the variations of A. It is therefore often suggested in the literature to implement an inverse system as shown in Figure 10.16. See, for example, Reference [O1, pp. 820-821].
The problem is not as simple as suggested. It is actually the model reduction problem
discussed in Section 10.2.1. In order for the approximation in (10.40) to hold, the overall
system Ho (s) must be stable. Moreover, the approximation holds only for signals whose
spectra are limited to some frequency range. This is illustrated with examples.
Example 10.6.1 Consider H(s) = 0.2/(s + 1). Because H −1 (s) = 5(s + 1) is improper, it
cannot be implemented without using differentiators. If we implement its inverse system as
shown in Figure 10.16, then the overall transfer function is
Ho(s) = A/[1 + AH(s)] = A(s + 1)/(s + 1 + 0.2A)

It is stable for any A > −5. Thus if A is very large, Ho(s) can be reduced to

Hos(s) ≈ A(s + 1)/(0.2A) = 5(s + 1) = H^-1(s)
in the frequency range |jω + 1| ≪ 0.2A. Thus the feedback system can be used to implement
approximately the inverse system of H(s). The approximation, however, is valid only for
low-frequency signals as demonstrated in the following.
Consider the signal u1 (t) = e−0.3t sin t shown in Figure 10.17(a). Its nonzero magnitude
spectrum, as shown in Figure 10.8(b), lies inside [0, 5]. If we select A = 100, then we have
|jω + 1| ≪ 0.2A for ω in [0, 5]. The output y1(t) of H(s) excited by u1(t) is shown in Figure
10.17(aa). The output of Ho (s) with A = 100 excited by y1 (t) is shown in Figure 10.17(aaa).
It is close to u1 (t). Thus the feedback system in Figure 10.16 implements the inverse of H(s)
for u1 (t).
Next we consider u2 (t) = e−0.3t sin 20t and select A = 100. The corresponding results are
shown in Figures 10.17(b), (bb), and (bbb). The output of the feedback system has the same
wave form as the input u2 (t) but its amplitude is smaller. This is so because we do not have
|jω + 1| ≪ 0.2A = 20. Thus the feedback system is not exactly an inverse system of H(s) for high-frequency signals. 2
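The frequency-range condition can be made quantitative; a sketch (plain Python; not part of the original text) comparing the magnitude of the feedback implementation Ho(jω) with that of the exact inverse H^-1(jω) = 5(jω + 1) at the two signal frequencies used above:

```python
A = 100.0

def H(s):      # plant H(s) = 0.2/(s + 1)
    return 0.2 / (s + 1)

def Ho(s):     # feedback implementation of the inverse: Ho(s) = A/(1 + A*H(s))
    return A / (1 + A * H(s))

def Hinv(s):   # exact inverse H^-1(s) = 5(s + 1)
    return 5 * (s + 1)

for w in (1.0, 20.0):
    s = 1j * w
    ratio = abs(Ho(s)) / abs(Hinv(s))
    print(f"w = {w:5.1f}: |Ho|/|Hinv| = {ratio:.3f}")
```

At ω = 1 the ratio is close to 1 (the approximation holds), while at ω = 20 it drops noticeably, which is why the output in Figure 10.17(bbb) has a smaller amplitude.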
Figure 10.17: (a) Input u1 (t) of H(s). (aa) Output y1 (t) of H(s). (aaa) Output of Ho (s)
excited by y1 (t). (b) Input u2 (t) of H(s). (bb) Output y2 (t) of H(s). (bbb) Output of Ho (s)
excited by y2 (t).
Figure 10.18: (a) Input u(t) of H(s) in (10.41). (b) Its output y(t). (c) Output of Ho (s)
excited by y(t).
10.7. WIEN-BRIDGE OSCILLATOR 281
Even though feedback implementation of inverse systems is often suggested in the litera-
ture, such implementation is not always possible as shown in the preceding examples. Thus
care must be exercised in using the implementation.
and
(R2 Z3 − R1 Z4 )Vo (s) = (Z3 + Z4 )R2 Vi (s)
Thus the transfer function from vi to vo is
H(s) = Vo(s)/Vi(s) = (Z3 + Z4)R2/(R2 Z3 − R1 Z4)   (10.44)
Substituting (10.42) and (10.43) into (10.44) and after simple manipulation, we finally obtain
the transfer function as
H(s) = −[(RCs)^2 + 3RCs + 1]R2/[R1(RCs)^2 + (2R1 − R2)RCs + R1]   (10.45)
It has two poles and two zeros.
We now discuss the condition for the circuit to maintain a sustained oscillation once it
is excited and then the input is removed. If H(s) has one or two poles inside the RHP, its
output will grow unbounded once the circuit is excited. If the two poles are inside the LHP,
then the output will eventually vanish once the input is removed. Thus the condition for the
circuit to maintain a sustained oscillation is that the two poles are on the jω-axis. This is
the case if
2R1 = R2
Under this condition, the denominator of (10.45) becomes R1[(RCs)^2 + 1] and its two roots are located at ±jω0 with

ω0 := 1/(RC)
They are pure imaginary poles of H(s). After the circuit is excited and the input is removed,
the output vo (t) is of the form
vo (t) = k1 sin(ω0 t + k2 )
for some constants k1 and k2 . It is a sustained oscillation with frequency 1/RC rad/s. Because
of its simplicity in structure and design, the Wien-bridge oscillator is widely used.
Although the frequency of oscillation is fixed by the circuit, the amplitude k1 of the
oscillation depends on how it is excited. Different excitation will yield different amplitude. In
order to have a fixed amplitude, the circuit must be modified. First we select an R2 slightly larger than 2R1. Then the transfer function in (10.45) becomes unstable. In this case, even though no input is applied, once the power is turned on, the circuit will start to oscillate due to thermal noise or power-supply transients. The amplitude of oscillation will increase with time. When it reaches A, a value predetermined by a nonlinear circuit called a limiter, the circuit will maintain A sin(ωr t), where ωr is the imaginary part of the unstable poles and is very close to ω0. See Reference [S1]. Thus an actual Wien-bridge oscillator is more complex than the one shown in Figure 10.19. However, the preceding linear analysis does illustrate the basic design of the oscillator.3
The current I1 passing through the impedance Z1 is (V1 − E− )/Z1 , and the current I2
passing through Z2 is (Vo − E− )/Z2 . Because I− = 0, we have I1 = −I2 or
(V1 − E−)/Z1 = −(Vo − E−)/Z2
which implies
E− = (Z2 V1 + Z1 Vo)/(Z1 + Z2) = [Z2/(Z1 + Z2)]V1 + [Z1/(Z1 + Z2)]Vo   (10.46)
This equation can also be obtained using a linearity property. If Vo = 0 or the output terminal
is grounded, then voltage E− at the inverting terminal excited by V1 is [Z2 /(Z1 + Z2 )]V1 . If
V1 = 0 or the inverting terminal is grounded, then the voltage E− excited by Vo is [Z1 /(Z1 +
Z2 )]Vo . Using the additivity property of a linear circuit, we obtain (10.46).
Likewise, because I+ = 0, we have, at the noninverting terminal,
(V2 − E+)/Z3 = −(Vo − E+)/Z4
which implies
E+ = (Z4 V2 + Z3 Vo)/(Z3 + Z4) = [Z4/(Z3 + Z4)]V2 + [Z3/(Z3 + Z4)]Vo   (10.47)
This equation can also be obtained using the additivity property as discussed above. Using
(10.46), (10.47), and Vo = A(E+ − E− ), we can obtain the block diagram in Figure 10.20(b).
It is a feedback model of the op-amp circuit in Figure 10.20(a). If V1 = 0 and V2 = 0, the
feedback model reduces to the one in Reference [H4, p. 847].
H1(s) := (R1 + R2)/R1   (10.48)

and

H2(s) := Z3/(Z3 + Z4) = RCs/[(RCs)^2 + 3RCs + 1]   (10.49)
Figure 10.21: (a) Feedback model of Wien-bridge oscillator with finite A. (b) With A = ∞.
where we have substituted (10.42) and (10.43). Because the input Vi enters into the inverting
terminal and feedback enters into the noninverting terminal, the adder has the negative and
positive signs shown. This is essentially the feedback model used in References [H4, S1]. Note
that the system in Figure 10.21(b) is a positive feedback system.
The transfer function of the feedback system in Figure 10.21(b) can be computed as
H(s) = Vo(s)/Vi(s) = [−R2/(R1 + R2)] × H1(s)/[1 − H1(s)H2(s)]   (10.50)
Substituting (10.48) and (10.49) into (10.50), we will obtain the same transfer function in
(10.45) (Problem 10.17).
We discuss an oscillation condition, called the Barkhausen criterion. The criterion states
that if there exists an ω0 such that
H1 (jω0 )H2 (jω0 ) = 1 (10.51)
then the feedback system in Figure 10.21(b) will maintain a sinusoidal oscillation with fre-
quency ω0 . Note that once the circuit is excited, the input is removed. Thus the oscillation
condition depends only on H1 (s) and H2 (s). Their product H1 (s)H2 (s) is called the loop
gain. It is the product of the transfer functions along the loop.
The criterion in (10.51) can be established as follows. Suppose the feedback system in Figure 10.21(b) maintains the steady-state oscillation Re(A e^(jω0 t)) at the output after the input is removed (vi = 0). Then the steady-state output of H2(s) is Re(A H2(jω0) e^(jω0 t)) (see (9.50)). This will be the input of H1(s) because vi = 0. Thus the steady-state output of H1(s) is

Re(H1(jω0) A H2(jω0) e^(jω0 t))

If the condition in (10.51) is met, then this output reduces to Re(A e^(jω0 t)), the sustained oscillation. This establishes (10.51). The criterion is widely used in designing Wien-bridge oscillators. See References [H4, S1].
We mention that the criterion is the same as checking whether or not H(s) in (10.50) has
a pole at jω0 . We write (10.51) as
1 − H1 (jω0 )H2 (jω0 ) = 0
which implies H1(jω0) ≠ 0 and H(jω0) = ∞ or −∞. Thus jω0 is a pole of H(s) in (10.50). In
other words, the criterion in (10.51) is the condition for H(s) to have a pole at jω0 . Because
H(s) has only real coefficients, if jω0 is a pole, so is −jω0 . Thus the condition in (10.51) checks
the existence of a pair of complex conjugate poles at ±jω0 in H(s). This is the condition
used in Section 10.7. In conclusion, the Wien-bridge oscillator can be designed directly as in
Section 10.7 or using a feedback model as in this subsection.
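The Barkhausen condition can be checked directly; a sketch (assuming sympy; not part of the original text) substituting (10.48) and (10.49) with R2 = 2R1 and ω0 = 1/RC into the loop gain:

```python
from sympy import symbols, I, simplify

R, C, R1 = symbols('R C R1', positive=True)

R2 = 2*R1                                # oscillation condition from Section 10.7
w0 = 1/(R*C)                             # oscillation frequency from Section 10.7

H1 = (R1 + R2)/R1                        # (10.48): equals 3 when R2 = 2*R1
s = I*w0
H2 = R*C*s / ((R*C*s)**2 + 3*R*C*s + 1)  # (10.49) evaluated at s = j*omega0

loop = simplify(H1*H2)                   # loop gain at omega0
print(loop)                              # 1: the criterion (10.51) is satisfied
```

At s = jω0 we have RCs = j, so H2(jω0) = j/(3j) = 1/3 and H1 = 3, giving a loop gain of exactly 1.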
Problems
10.8. FEEDBACK MODEL OF GENERAL OP-AMP CIRCUIT 285
10.1 Consider the voltage follower shown in Figure 10.1(a). What is its overall transfer
function if the op amp is modeled to have memory with transfer function
A(s) = 10^5/(s + 100)
Is the voltage follower a lowpass filter? Find its 3-dB passband. We may consider the
passband as its operational frequency range.
10.2 Consider the positive-feedback op-amp circuit shown in Figure 6.17 with R2 = 10R1 .
Show that it has transfer function Hs (s) = −10 if its op amp is modeled as ideal.
Suppose the op amp is modeled to have memory with transfer function in (10.2). What
is its transfer function H(s)? Verify that for ω in [0, 1000], the frequency responses
of H(s) and Hs (s) are almost identical. For low-frequency signals with spectra lying
inside [0, 1000], can H(s) be reduced to Hs (s)? What are the step responses of H(s)
and Hs (s)? Note that although the circuit in Figure 6.17 cannot be used as a linear
amplifier, it can be used to build a very useful nonlinear device, called a Schmitt trigger.
10.3 Consider the noninverting amplifier shown in Figure 6.18 with R2 = 10R1 . What is its
transfer function if the op amp is modeled as ideal? What is its transfer function if the
op amp is modeled to have memory with transfer function in (10.2)? Is it stable? What
is its time constant? What is its operational frequency range in which the op amp can
be modeled as ideal?
10.4 Find a realization of (10.10) with kd = 2 and N = 20 and then implement it using an
op-amp circuit.
10.5 Consider the transfer function in (10.13). If m = 1 and k = 2, find the f so that
the operational frequency range for (10.13) to act as a seismometer is the largest. Is
this operational frequency range larger or smaller than [1.25, ∞) computed for m = 1,
k = 0.2, and f = 0.63?
10.6 Repeat Problem 10.5 for m = 1 and k = 0.02.
10.7 Consider the negative feedback system shown in Figure 10.10(d) with
H1(s) = 10/(s + 1)   and   H2(s) = −2
Check the stability of H1(s), H2(s), and the feedback system. Is it true that if all subsystems are stable, then their negative-feedback connection is stable? Is the statement that negative feedback can stabilize a system necessarily correct?
10.8 Consider the positive feedback system shown in Figure 10.10(c) with

H1(s) = −2/(s − 1)   and   H2(s) = (3s + 4)/(s − 2)
Check the stability of H1(s), H2(s), and the feedback system. Is it true that if all subsystems are unstable, then their positive-feedback connection is unstable? Is the statement that positive feedback can destabilize a system necessarily correct?
10.9 Verify that if Rf = R/β, then Figure 10.13(c) implements the β in Figure 10.13(b).
10.10 Consider the unity feedback system shown in Figure 10.14 with
P(s) = 2/(s(s + 1))
Can you find a gain C(s) = k so that the poles of the feedback system are located at
−0.5 ± j2? Can you find a gain C(s) = k so that the poles of the feedback system are
located at −1 ± j2?
10.11 For the problem in Problem 10.10, find a proper rational function C(s) of degree 1 to
place the three poles of the feedback system at −2, −1 + j2, and −1 − j2. What is the
gain A in order for the output to track any step reference input?
10.12 Consider the op-amp circuit shown in Figure 10.22 where the op amp is modeled as
Vo (s) = A(s)[E+ (s) − E− (s)] and I− = −I+ = 0. Verify that the transfer function from
Vi (s) to Vo (s) is
H(s) = Vo(s)/Vi(s) = −A(s)Z2(s)/[Z1(s) + Z2(s) + A(s)Z1(s)]
Also derive it using the block diagram in Figure 10.20(b).
10.13 Consider the op-amp circuit in Figure 10.22 with Z1 = R and Z2 = 10R. Find its
transfer functions if A = 105 and A = 2 × 105 . The two open-loop gains differ by 100%.
What is the difference between the two transfer functions? Is the transfer function
sensitive to the variation of A?
10.14 Use the Routh test to verify that the polynomial s3 + 2s2 + s + 1 + A is a CT stable
polynomial if and only if −1 < A < 1.
10.15 Can you use Figure 10.16 to implement the inverse system of H(s) = (s + 1)/(s^2 + 2s + 5)? How about H(s) = (s − 1)/(s^2 + 2s + 5)?
10.16 Consider the stable biproper transfer function H(s) = (s − 2)/(s + 1). Is its inverse
system stable? Can its inverse system be implemented as shown in Figure 10.16?
10.17 Verify that the transfer function in (10.50) equals the one in (10.45).
10.18 Use the ideal model to compute the transfer function from vi to vo of the op-amp
circuit in Figure 10.23. What is the condition for the circuit to maintain a sinusoidal
oscillation once it is excited, and what is its frequency of oscillation? Are the results
the same as those obtained in Section 10.7?
Figure 10.23:
10.19 Use Figure 10.20(b) to develop a feedback block diagram for the circuit in Figure 10.23,
and then compute its transfer function. Is the result the same as the one computed in
Problem 10.18?
Chapter 11

DT LTI and Lumped Systems
11.1 Introduction
We introduced in Chapter 7 discrete convolutions for DT LTI systems with finite or infinite
memory and then (non-recursive) difference equations, state-space (ss) equations, and rational
transfer functions for DT systems with finite memory. Now we will extend the results to DT
systems with infinite memory. The extension is possible only for a small subset of such
systems. For this class of systems, we will discuss their general properties. The discussion is
similar to the CT case in Chapters 8 and 9 and will be brief.
As discussed in Section 6.4.1, in the study of DT systems we may assume the sampling
period to be 1. This will be the standing assumption for all sections except the last where
DT systems will be used to process CT signals.
Before proceeding, we need the formula in (4.67) or

Σ_{n=0}^{∞} r^n = 1/(1 − r)   (11.1)

where |r| < 1 and r can be real or complex. Note that the infinite summation in (11.1) diverges if |r| > 1 or r = 1 and is not defined if r = −1. See Section 3.5.
The (one-sided) z-transform of a DT signal x[n] is defined as

X(z) := Z[x[n]] := Σ_{n=0}^{∞} x[n] z^-n   (11.2)

where z is a complex variable. Note that it is defined only for the positive-time part of x[n] or x[n], for n ≥ 0. Because its negative-time part (x[n], for n < 0) is not used, we often assume x[n] = 0, for n < 0, or x[n] to be positive time. Using the definition alone, we have developed the concept of transfer functions and discussed its importance in Section 7.8. In this section, we discuss further the z-transform.
Example 11.2.1 Consider the DT signal x[n] = 1.3^n, for n ≥ 0. This signal grows unbounded as n → ∞. Its z-transform is

X(z) = Z[1.3^n] = Σ_{n=0}^{∞} 1.3^n z^-n
288 CHAPTER 11. DT LTI AND LUMPED SYSTEMS
This is an infinite power series and is not very useful. Fortunately, for the given x[n], we can cast it into the form of (11.1) with r = 1.3z^-1 as

X(z) = Σ_{n=0}^{∞} (1.3z^-1)^n = 1/(1 − 1.3z^-1) = z/(z − 1.3)   (11.3)
which is a simple rational function of z. This is the z-transform we will use throughout this chapter. However (11.3) holds only if |1.3z^-1| < 1 or |z| > 1.3. For example, if z = 1, the infinite sum in (11.3) diverges (approaches ∞) and does not equal 1/(1 − 1.3) = −10/3. Thus we have
Z[1.3^n] = 1/(1 − 1.3z^-1) = z/(z − 1.3)

only if |z| > 1.3. We call z/(z − 1.3) the z-transform of 1.3^n and 1.3^n the inverse z-transform of z/(z − 1.3), denoted as

Z^-1[z/(z − 1.3)] = 1.3^n

for n ≥ 0. 2
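The role of the region of convergence can be seen numerically; a sketch (plain Python; not part of the original text) comparing partial sums of the defining series with the closed form z/(z − 1.3):

```python
# Check (11.3) numerically: partial sums of sum_{n>=0} 1.3^n z^(-n)
# converge to z/(z - 1.3) when |z| > 1.3 and diverge otherwise.
def partial_sum(z, N):
    return sum(1.3**n * z**(-n) for n in range(N))

z = 2.0                        # inside the region of convergence (|z| > 1.3)
print(partial_sum(z, 200))     # approximately 20/7
print(z / (z - 1.3))           # closed form, also 20/7

z = 1.0                        # outside the region of convergence
print(partial_sum(z, 200))     # enormous: the series diverges, not -10/3
```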
The z-transform in (11.3) is, strictly speaking, defined only for |z| > 1.3, or the region
outside the dotted circle with radius 1.3 shown in Figure 11.1. The region is called the region
of convergence. The region of convergence is important if we use the integration formula
x[n] := Z^-1[X(z)] := (1/(2πj)) ∮ X(z) z^(n-1) dz   (11.4)

or, in particular, by selecting z = ce^(jω),

x[n] := Z^-1[X(z)] = (1/(2πj)) ∫_{ω=0}^{2π} X(ce^(jω)) (ce^(jω))^(n-1) cje^(jω) dω
      = (c^n/(2π)) ∫_{ω=0}^{2π} X(ce^(jω)) e^(jnω) dω   (11.5)
to compute the inverse z-transform. For our example, if we select c = 2, then the integration
contour lies inside the region of convergence and (11.5) will yield
x[n] = 1.3^n for n ≥ 0 and x[n] = 0 for n < 0
which is the original positive-time signal. However, if we select c = 1, then the integration
contour lies outside the region of convergence and (11.5) will yield the following negative-time
signal

x[n] = 0 for n ≥ 0 and x[n] = −1.3^n for n < 0
In conclusion, without specifying the region of convergence, the inversion formula in (11.5)
may not yield the original x[n]. In other words, the relationship between x[n] and X(z) is not
one-to-one without specifying the region of convergence. Fortunately in practical application,
we study only positive-time signals and consider the relationship between x[n] and X(z) to be
one-to-one. Once we obtain a z-transform X(z) from a positive-time signal, we automatically
assume its inverse z-transform to be the original positive-time signal. Thus there is no need
to consider the region of convergence and the inversion formula. Indeed when we develop
transfer functions in Section 7.7, the region of convergence was not mentioned. Nor will it
appear again in the remainder of this chapter.
Before proceeding, we mention that if x[n] is not positive time, we may define its two-sided
z-transform as
X_II(z) := Z_II[x[n]] := Σ_{n=-∞}^{∞} x[n] z^-n
11.2. SOME Z-TRANSFORM PAIRS 289
Figure 11.1: The region of convergence |z| > 1.3, the region outside the dotted circle of radius 1.3 in the z-plane.
For such a transform, the region of convergence is essential because the same XII (z) may
have many different two-sided x[n]. Thus its study is much more complicated than the (one-
sided) z-transform introduced in (11.2). Fortunately, the two-sided z-transform is rarely, if
ever, used in practice. Thus its discussion is omitted.
We now extend 1.3^n to x[n] = b^n, for n ≥ 0, where b can be a real or complex number.
Note that b^n is an exponential sequence as discussed in Subsection 2.8.1. Its z-transform is

X(z) = Z[b^n] = Σ_{n=0}^{∞} b^n z^{−n} = Σ_{n=0}^{∞} (bz^{−1})^n = 1/(1 − bz^{−1}) = z/(z − b)      (11.6)

In particular, setting b = 0 (with the convention 0⁰ = 1) gives b^n = δd[n], whose z-transform is

Z[δd[n]] = 1/(1 − 0·z^{−1}) = 1/1 = 1

and

Z[step sequence] = Z[1] = 1/(1 − z^{−1}) = z/(z − 1)
If b = ae^{jω0}, where a and ω0 are real numbers, then

Z[a^n e^{jnω0}] = Z[(ae^{jω0})^n] = 1/(1 − ae^{jω0}z^{−1}) = z/(z − ae^{jω0})

Using this, we can compute

Z[a^n sin ω0 n] = Z[(a^n e^{jω0 n} − a^n e^{−jω0 n})/2j] = (1/2j)[z/(z − ae^{jω0}) − z/(z − ae^{−jω0})]
  = (z/2j)·((z − ae^{−jω0}) − (z − ae^{jω0}))/(z² − a(e^{jω0} + e^{−jω0})z + a²)
  = a(sin ω0)z/(z² − 2a(cos ω0)z + a²)      (11.7)

and

Z[a^n cos ω0 n] = (z − a cos ω0)z/(z² − 2a(cos ω0)z + a²)      (11.8)

If a = 1, they reduce to

Z[sin ω0 n] = (sin ω0)z/(z² − 2(cos ω0)z + 1)

and

Z[cos ω0 n] = (z − cos ω0)z/(z² − 2(cos ω0)z + 1)
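Because |a/z| < 1 inside the region of convergence, the closed form (11.7) can be checked against a partial sum of its defining series. A small sketch in Python (not the book's MATLAB; the function names and the sample values a = 0.9, ω0 = 0.7, z = 2 are our choices):

```python
import math

def z_sin_series(a, w0, z, terms=200):
    # Partial sum of the defining series  sum_{n>=0} a^n sin(w0 n) z^{-n}
    return sum((a ** n) * math.sin(w0 * n) * z ** (-n) for n in range(terms))

def z_sin_closed(a, w0, z):
    # Closed form (11.7): a sin(w0) z / (z^2 - 2 a cos(w0) z + a^2)
    return a * math.sin(w0) * z / (z * z - 2.0 * a * math.cos(w0) * z + a * a)

# For |z| > a the partial sum converges geometrically, so 200 terms suffice.
```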
290 CHAPTER 11. DT LTI AND LUMPED SYSTEMS
Its right-hand-side, by definition, is the z-transform of nx[n]. This establishes the formula.
Using the formula and Z[bn ] = z/(z − b), we can establish
Z[nb^n] = −z (d/dz)[z/(z − b)] = −z·((z − b) − z)/(z − b)² = bz/(z − b)² = bz^{−1}/(1 − bz^{−1})²
Applying the formula once again, we can obtain
Z[n²b^n] = b(z + b)z/(z − b)³
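Both differentiation-rule results can be verified the same way, by summing the defining series at a point inside the region of convergence. A Python sketch (b = 0.8 and z = 2 are our choices for illustration):

```python
def zt_partial(coef, z, terms=300):
    # Partial sum of  sum_{n>=0} coef(n) z^{-n}; accurate when |b/z| < 1
    return sum(coef(n) * z ** (-n) for n in range(terms))

b, z = 0.8, 2.0
series_nb = zt_partial(lambda n: n * b ** n, z)        # Z[n b^n] by series
closed_nb = b * z / (z - b) ** 2                       # b z / (z - b)^2
series_n2b = zt_partial(lambda n: n * n * b ** n, z)   # Z[n^2 b^n] by series
closed_n2b = b * (z + b) * z / (z - b) ** 3            # b (z + b) z / (z - b)^3
```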
We list in Table 11.1 some z-transform pairs. Unlike the Laplace transform where we use
exclusively positive-power form, we may encounter both negative-power and positive-power
forms in the z-transform. Thus we list both forms in the table.
The z-transforms listed in Table 11.1 are all proper rational functions of z. Note that the
definitions in Subsection 8.7.2 are applicable to any rational function, be it a rational function
of s or z. If x[0] 6= 0, such as in δd [n], 1, bn , and bn cos ω0 n, then its z-transform is biproper.
If x[0] = 0, such as in nbn , n2 bn , and bn sin ω0 n, then its z-transform is strictly proper. The
Laplace transforms in Table 9.1 are all strictly proper except the one of δ(t).
The z-transforms of the DT signals in the preceding examples are all rational functions
of z. This is also the case if a DT signal is of finite length. For example, the z-transform of
x[n], for n = 0 : 5, and x[n] = 0, for all n > 5, is
X(z) = x[0] + x[1]z^{−1} + x[2]z^{−2} + x[3]z^{−3} + x[4]z^{−4} + x[5]z^{−5}
     = (x[0]z⁵ + x[1]z⁴ + x[2]z³ + x[3]z² + x[4]z + x[5])/z⁵
It is a proper rational function of z. However if a positive-time sequence is of infinite length,
then its z-transform is an infinite power series of z −1 and the following situations may occur:
11.3. DT LTI LUMPED SYSTEMS – PROPER RATIONAL FUNCTIONS 291
1. The infinite power series has no region of convergence. The sequences e^{n²} and e^{e^n}, for
n ≥ 0, are such examples. These two sequences are mathematically contrived and do
not arise in practice.
2. The infinite power series exists but cannot be expressed in closed form. Most, if not all,
randomly generated sequences of infinite length belong to this type.
3. The infinite series can be expressed in closed form but the form is not a rational function
of z. For example, if h[0] = 0 and h[n] = 1/n, for n ≥ 1, then it can be shown that its
z-transform is H(z) = − ln(1 − z^{−1}). It is an irrational function of z. Note that sin z,
√z, and ln z are all irrational functions of z.

4. The infinite power series can be expressed in closed form as a rational function of z. For
example, the z-transform of 1.3^n, for n ≥ 0, is z/(z − 1.3).

We study in this chapter only z-transforms that belong to the last class.
Consider a DT LTI system described by the convolution

y[n] = Σ_{k=0}^{∞} h[n − k]u[k]      (11.9)
where y[n] and u[n] are respectively the output and input of the system and n is the time
index. The sequence h[n] is the impulse response and is the output of the system excited
by u[n] = δd [n], an impulse sequence. The impulse response has the property h[n] = 0, for
all n < 0, if the system is causal. We also classify a DT system to be FIR (finite impulse
response or finite number of nonzero h[n]) or IIR (infinite impulse response or infinite number
of nonzero h[n]).
The application of the z-transform to (11.9) yields, as derived in (7.25),
Y(z) = H(z)U(z)      (11.10)
where
H(z) = Σ_{n=0}^{∞} h[n]z^{−n}      (11.11)
is the (DT) transfer function of the system. Recall from (7.29) that the transfer function can
also be defined as

H(z) := Z[output]/Z[input] = Y(z)/U(z), computed with the system initially relaxed
We now define a DT system to be a lumped system if its transfer function is a rational
function of z. It is distributed if its transfer function cannot be so expressed. According to
this definition, every DT LTI FIR system is lumped because its transfer function, as discussed
earlier, is a rational function of z. For a DT LTI IIR system, its transfer function (an infinite
power series), as listed in the preceding section, may not be defined or be expressed in closed
form. Even if it can be expressed in closed form, the form may not be a rational function.
Thus the set of DT LTI IIR lumped systems is, as shown in Figure 8.12, a small part of DT
LTI systems.
In the remainder of this chapter we study only DT LTI lumped systems or systems that
are describable by
N (z)
H(z) = (11.12)
D(z)
where N (z) and D(z) are polynomials of z with real coefficients. Note that all discussion
in Section 8.7.2 are directly applicable here. For example, H(z) is improper if deg N (z) >
deg D(z), proper if deg D(z) ≥ deg N (z). In the CT case, we study only proper rational
functions of s because, as discussed in Section 8.5.2, improper rational functions will amplify
high-frequency noise. In the DT case, we also study only proper rational functions of z. But
the reason is different. Consider the simplest improper rational function H(z) = z/1 = z or
Y(z) = H(z)U(z) = zU(z). As discussed in Subsection 7.7.1, z is a unit-sample advance
operator. Thus the inverse z-transform of Y (z) = zU (z) is
y[n] = u[n + 1]
The output at n depends on the future input at n + 1. Thus the system is not causal. In
general, a DT system with an improper rational transfer function is not causal and cannot be
implemented in real time (Subsection 2.6.1). In conclusion, we study proper rational transfer
functions for reasons of causality in the DT case and of noise amplification in the CT case.
To conclude this section, we mention that a rational function can be expressed in two
forms such as
H(z) = (8z³ − 24z − 16)/(2z⁵ + 20z⁴ + 98z³ + 268z² + 376z)      (11.13)
     = (8z^{−2} − 24z^{−4} − 16z^{−5})/(2 + 20z^{−1} + 98z^{−2} + 268z^{−3} + 376z^{−4})      (11.14)
The latter is obtained from the former by multiplying its numerator and denominator by z^{−5}.
The rational function in (11.13) is said to be in positive-power form and the one in (11.14)
in negative-power form. Either form can be easily obtained from the other. In the CT case,
we use exclusively positive-power form as in Chapters 8 and 9. If we use the positive-power
form in the DT case, then many results in the CT case can be directly applied. Thus with
the exception of the next subsection, we use only positive-power form in this chapter.1
If x[n] = 0 for n < 0, then the time-shifting property of the z-transform gives

Z[x[n − k]] = z^{−k}X(z)      (11.15)
for any positive integer k. Using (11.15), we can readily obtain a difference equation from a
rational transfer function and vise versa. For example, consider
H(z) = Y(z)/U(z) = (z² + 2z)/(2z³ + z² − 0.4z + 0.8) = (z^{−1} + 2z^{−2})/(2 + z^{−1} − 0.4z^{−2} + 0.8z^{−3})      (11.16)
The negative-power form is obtained from the positive-power form by multiplying its
numerator and denominator by z^{−3}. We write (11.16) as

(2 + z^{−1} − 0.4z^{−2} + 0.8z^{−3})Y(z) = (z^{−1} + 2z^{−2})U(z)

or

2Y(z) + z^{−1}Y(z) − 0.4z^{−2}Y(z) + 0.8z^{−3}Y(z) = z^{−1}U(z) + 2z^{−2}U(z)
Its inverse z-transform is, using (11.15),

2y[n] + y[n − 1] − 0.4y[n − 2] + 0.8y[n − 3] = u[n − 1] + 2u[n − 2]
This is a third-order linear difference equation with constant coefficients or a third-order LTI
difference equation. Conversely, given an LTI difference equation, we can readily obtain its
transfer function using (11.15).
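The correspondence between (11.16) and its difference equation can be exercised numerically: simulate the recursion with zero initial conditions and check that the response to an arbitrary input equals the convolution of that input with the simulated impulse response, i.e., that Y(z) = H(z)U(z). A sketch in Python rather than MATLAB; the helper names are ours:

```python
def simulate(u, N):
    # Recursion from (11.16) with zero initial conditions:
    # 2y[n] + y[n-1] - 0.4y[n-2] + 0.8y[n-3] = u[n-1] + 2u[n-2]
    def at(seq, i):
        return seq[i] if 0 <= i < len(seq) else 0.0
    y = []
    for n in range(N):
        rhs = at(u, n - 1) + 2.0 * at(u, n - 2)
        y.append((rhs - at(y, n - 1) + 0.4 * at(y, n - 2) - 0.8 * at(y, n - 3)) / 2.0)
    return y

h = simulate([1.0], 20)                       # impulse response
u = [1.0, -0.5, 2.0, 0.0, 1.0]                # an arbitrary finite input
y_direct = simulate(u, 20)
y_conv = [sum(h[k] * (u[n - k] if 0 <= n - k < len(u) else 0.0)
              for k in range(n + 1)) for n in range(20)]
```

Both computations agree. Note also that h[0] = 0 and h[1] = 1/2, consistent with the degree difference M = 1 between the denominator and numerator of (11.16).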
To conclude this subsection, we give the reason for using the negative-power form in (11.16).
As derived in (7.33) and (7.34), we have

Z[x[n + 1]] = z[X(z) − x[0]]

and

Z[x[n + 2]] = z²[X(z) − x[0] − x[1]z^{−1}]
where x[0] and x[1] are not necessarily zero. Thus the use of the positive-power form will be
more complicated. However the final result will be the same. See Reference [C7, C8].
N (z)
H(z) =
D(z)
where N (z) and D(z) are two polynomials with real coefficients. If they are coprime or have
no roots in common, then all roots of N (z) are the zeros of H(z) and all roots of D(z) are
the poles of H(z). For example, consider
H(z) = N(z)/D(z) = (8z³ − 24z − 16)/(2z⁵ + 20z⁴ + 98z³ + 268z² + 376z + 208)      (11.19)
It is a ratio of two polynomials. Its numerator’s and denominator’s coefficients can be repre-
sented in MATLAB as n=[8 0 -24 -16] and d=[2 20 98 268 376 208]. Typing in MAT-
LAB >> roots(n) will yield 2, −1, −1. Thus we can factor N (z) as
N(z) = 8(z − 2)(z + 1)²
Note that N (z), N (z)/8, and kN (z), for any nonzero constant k, have the same set of roots,
and the MATLAB function roots computes the roots of a polynomial with leading coefficient
1. Typing in MATLAB >> roots(d) will yield −2, −2, −2, −2 + j3, −2 − j3. Thus we can
factor D(z) as
D(z) = 2(z + 2)3 (z + 2 − 3j)(z + 2 + 3j)
Clearly, N (z) and D(z) have no common roots. Thus H(z) in (11.19) has zeros at 2, −1, −1
and poles at −2, −2, −2, −2 + 3j, −2 − 3j. Note that if H(z) has only real coefficients, then
complex-conjugate poles or zeros must appear in pairs.
A pole or zero is called simple if it appears only once, and repeated if it appears twice or
more. For example, the transfer function in (11.19) has a simple zero at 2 and two simple
poles at −2 ± 3j. It has a repeated zero at −1 with multiplicity 2 and a repeated pole at −2
with multiplicity 3.
We mention that the zeros and poles of a transfer function can also be obtained using the
MATLAB function tf2zp, an acronym for transfer function to zero/pole. For the transfer
function in (11.19), typing in the command window of MATLAB
>> n=[8 0 -24 -16];d=[2 20 98 268 376 208];
>> [z,p,k]=tf2zp(n,d)
will yield z=[-1 -1 2]; p=[-2 -2 -2 -2-3j -2+3j]; k=4. It means that H(z) in (11.19)
can also be expressed as
H(z) = 4(z − 2)(z + 1)²/((z + 2)³(z + 2 + 3j)(z + 2 − 3j))      (11.20)
It is called the zero/pole/gain form. Note that the gain k = 4 is the ratio of the leading
coefficients of N (z) and D(z).
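The zero/pole/gain factorization can be confirmed by multiplying the factors back out. A pure-Python sketch (the book uses MATLAB; `polymul` and `poly_from_roots` are our own helpers):

```python
def polymul(p, q):
    # Coefficient convolution: product of two polynomials in descending powers of z
    out = [0.0 + 0.0j] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_from_roots(roots):
    # Build a monic polynomial from its roots
    p = [1.0 + 0.0j]
    for r in roots:
        p = polymul(p, [1.0, -r])
    return p

# 8(z - 2)(z + 1)^2 should reproduce N(z) = 8z^3 - 24z - 16
num = [8.0 * c for c in poly_from_roots([2.0, -1.0, -1.0])]
# 2(z + 2)^3 (z + 2 + 3j)(z + 2 - 3j) should reproduce D(z) in (11.19)
den = [2.0 * c for c in poly_from_roots([-2.0, -2.0, -2.0, complex(-2, 3), complex(-2, -3)])]
```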
Example 11.4.1 Consider the system with transfer function

H(z) = (2z² + 10)/(z² − 1.2z − 1.6) = (2z² + 10)/((z − 2)(z + 0.8))      (11.21)
We compute its step response, that is, the output excited by a step input. If u[n] = 1, for
n ≥ 0, then its z-transform is U (z) = z/(z − 1). Thus the step response of (11.21) in the
transform domain is
Y(z) = H(z)U(z) = (2z² + 10)/((z − 2)(z + 0.8)) · z/(z − 1)      (11.22)
To find its response in the time domain, we must compute its inverse z-transform. The
procedure consists of two steps: (1) expanding Y (z) as a sum of terms whose inverse z-
transforms are available in a table such as Table 11.1, and then (2) using the table to find
the inverse z-transform. In order to use the procedure in Section 9.3.1, we expand Y (z)/z,
instead of Y (z), as3
Y(z)/z =: Ȳ(z) = (2z² + 10)/((z − 2)(z + 0.8)(z − 1))
       = k0 + k1/(z − 2) + k2/(z + 0.8) + ku/(z − 1)      (11.23)
Then we have k0 = Ȳ(∞) = 0,

k1 = Ȳ(z)(z − 2)|_{z=2} = (2z² + 10)/((z + 0.8)(z − 1))|_{z=2} = 18/2.8 = 6.43

k2 = Ȳ(z)(z + 0.8)|_{z=−0.8} = (2z² + 10)/((z − 2)(z − 1))|_{z=−0.8} = (2(−0.8)² + 10)/((−2.8)(−1.8)) = 2.24

and

ku = Ȳ(z)(z − 1)|_{z=1} = [Y(z)(z − 1)/z]|_{z=1} = H(z)|_{z=1}      (11.24)
   = H(1) = (2 + 10)/(1 − 1.2 − 1.6) = 12/(−1.8) = −6.67
Thus Y(z)/z can be expanded as

Y(z)/z = 0 + 6.43/(z − 2) + 2.24/(z + 0.8) − 6.67/(z − 1)
3 The reason for this expansion will be given in the next example.
11.4. INVERSE Z-TRANSFORM 295
Multiplying both sides by z yields Y(z) = 6.43z/(z − 2) + 2.24z/(z + 0.8) − 6.67z/(z − 1), whose inverse z-transform is, using Table 11.1,
y[n] = 6.43 · 2^n + 2.24 · (−0.8)^n − 6.67 · 1^n = 6.43 · 2^n + 2.24 · (−0.8)^n − 6.67      (11.25)

for n ≥ 0. This is the step response of the system described by (11.21). Note that the
sequences 2^n and (−0.8)^n are due to the poles 2 and −0.8 of the system and the sequence
−6.67(1)^n is due to the pole 1 of the input's z-transform.
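The closed form (11.25) can be checked against a direct simulation. Cross-multiplying (11.21) gives the recursion y[n] = 1.2y[n−1] + 1.6y[n−2] + 2u[n] + 10u[n−2]. A Python sketch (not MATLAB); the exact residues 18/2.8, 11.28/5.04, and −12/1.8 are used in place of the rounded values:

```python
def step_response(N):
    # y[n] = 1.2 y[n-1] + 1.6 y[n-2] + 2 u[n] + 10 u[n-2], with u[n] = 1 for n >= 0
    y = []
    for n in range(N):
        y_1 = y[n - 1] if n >= 1 else 0.0
        y_2 = y[n - 2] if n >= 2 else 0.0
        y.append(1.2 * y_1 + 1.6 * y_2 + 2.0 + (10.0 if n >= 2 else 0.0))
    return y

def closed_form(n):
    # (11.25) with exact coefficients: k1 = 18/2.8, k2 = 11.28/5.04, ku = -12/1.8
    return (18.0 / 2.8) * 2.0 ** n + (11.28 / 5.04) * (-0.8) ** n - 12.0 / 1.8
```

The two agree sample by sample; both grow unbounded because of the pole at z = 2.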
Example 11.4.2 The step response of the preceding example can also be obtained by
expanding Y(z) as

Y(z) = H(z)U(z) = (2z² + 10)/((z − 2)(z + 0.8)) · z/(z − 1)
     = k0 + k1 z/(z − 2) + k2 z/(z + 0.8) + k3 z/(z − 1)
Note that every term in the expansion is listed in Table 11.1. Thus once ki are computed,
then its inverse z-transform can be directly obtained. However if z = ∞, the equation implies
Y (∞) = k0 + k1 + k2 + k3 . Thus the procedure in Section 9.3.1 cannot be directly applied.
This is the reason for expanding Y(z)/z in the preceding example. Note that we can also
expand Y(z) as

X(z) = 0 + 0 · z^{−1} + 3z^{−2} − 2z^{−3} + 7z^{−4} + · · ·

Thus the inverse z-transform of X(z) is x[0] = 0, x[1] = 0, x[2] = 3, x[3] = −2, x[4] = 7, and so forth.
With this method it is difficult to express the inverse in closed form. However it can be used to
establish a general property. Consider a proper rational function X(z) = N (z)/D(z) with
M = Deg(D(z)) − Deg(N (z)) ≥ 0. The integer M is the degree difference between the
denominator and numerator of X(z). Then the inverse z-transform x[n] of X(z) has the
property
x[0] = 0, x[1] = 0, · · · , x[M − 1] = 0, x[M] = N0/D0 ≠ 0      (11.27)

where N0 and D0 are the leading coefficients of N(z) and D(z). For example, for the power
series X(z) expanded above, we have M = 2 and its time sequence has first non-zero value at time
index n = M = 2 with x[2] = 3/1 = 3. In particular, if X(z) is biproper, then M = 0 and its
time sequence is nonzero at n = 0.
They all have the same set of poles and the property Hi (1) = 1. Even though they have
different zeros other than those at z = 0,4 their step responses can all be expressed in the
z-transform domain as
Yi(z) = Hi(z)U(z) = Hi(z) · z/(z − 1)

or

Yi(z) = k0 + (b2z² + b1z + b0)/(z + 0.6)² + (b̄2z² + b̄1z + b̄0)/((z − 0.9e^{j0.5})(z − 0.9e^{−j0.5})) + ku z/(z − 1)      (11.34)
In this equation, we are interested in only the parameter ku associated with the pole of the
step input. If we multiply (11.34) by (z − 1)/z and then set z = 1, we obtain

ku = Yi(z)(z − 1)/z |_{z=1} = Hi(z)|_{z=1} = Hi(1)      (11.35)
This equation also follows from the procedure discussed in Example 11.4.1, as derived in
(11.24). Note that (11.35) is applicable for any Hi (z) which has no pole at z = 1.
The inverse z-transform of (11.34) or the step responses of (11.31) through (11.33) in the
time domain are all of the form, using (11.28) through (11.30),
y[n] = k̄0 δd[n] + k1(−0.6)^n + k2 n(−0.6)^n + k3(0.9)^n sin(0.5n + k4) + Hi(1)      (11.36)
for n ≥ 0. This form is determined solely by the poles of Hi (z) and U (z). The transfer function
Hi(z) has a repeated real pole at −0.6, which yields the response k1(−0.6)^n + k2 n(−0.6)^n, and
a pair of complex conjugate poles 0.9e^{±j0.5}, which yields k3(0.9)^n sin(0.5n + k4). The z-transform
of the step input is z/(z − 1). Its pole at z = 1 yields the response Hi(1) × 1^n = Hi(1). We plot
in Figure 11.2 their step responses followed by zero-order holds. See the discussion pertaining
to Figure 7.5. They can be obtained in MATLAB by typing in an edit window the following
The first two lines express the numerators and denominators of Hi (z) as row vectors. The
function pig1=tf(n1,d,1) uses the transfer function H1=n1/d to define the DT system and
names it as pig1. It is important to have the third argument T = 1 in tf(n1,d,T). Without
it, tf(n1,d) defines a CT system. The function step(pig1,60) computes its step response
for n = 0 : N = 60. We then type "hold on" in order to have the step responses of H2(z) and
H3 (z) plotted on the same plot. We name the program f112. It is called an m-file because
an extension .m is automatically attached to the file name. Typing in the command window
>> f112 will yield in a figure window the plot in Figure 11.2. We see that even though their
step responses are all of the form in (11.36), the responses right after the application of the
step input are all different. This is due to different sets of ki . In conclusion, poles dictate the
general form of responses; zeros affect only the coefficients ki . Thus we conclude that zeros
play a lesser role than poles in determining responses of systems.
To conclude this section, we mention that the response generated by step is not computed
using the transfer function ni/d for the same reasons discussed in Section 9.3.2. As in the
CT case, the program first transforms the transfer function into an ss equation, as we will
discuss in a later section, and then uses the latter to carry out the computation.
4 We introduce zeros at z = 0 to make all Hi(z) biproper so that their nonzero step responses all start to
appear at n = 0 as discussed in the preceding section. Without introducing the zeros at z = 0, the responses in
Figure 11.2 will be difficult to visualize and compare.
Figure 11.2: Step responses of the transfer functions in (11.31) through (11.33) followed by
zero-order holds.
z = e^s

We show that z = e^s maps the jω-axis on the s-plane into the unit circle on the z-plane and
the left-half s-plane into the interior of the unit circle. Let s = σ + jω. Then we have

z = e^{σ+jω} = e^σ e^{jω}

If σ = 0 (the imaginary axis of the s-plane), then e^σ = 1 and |z| = |e^{jω}| = 1, for all ω (the
unit circle of the z-plane). Thus the jω-axis is mapped into the unit circle. Note that the
mapping is not one-to-one. For example, all s = 0, ±j2π, ±j4π, . . . are mapped into z = 1. If
σ < 0 (left half s-plane), then

|z| = e^σ < 1

(interior of the unit circle on the z-plane). Thus the left half s-plane is mapped into the
interior of the unit circle on the z-plane as shown in Figure 11.3.
If s = σ, with −∞ ≤ σ ≤ ∞, then we have 0 ≤ z = eσ ≤ ∞. Thus the entire real
axis denoted by a solid line on the s-plane is mapped into the positive real axis denoted by
the solid line on the z-plane. If s = σ ± jπ, with −∞ ≤ σ ≤ ∞, then we have −∞ ≤ z =
eσ±jπ = −eσ ≤ 0. Thus the entire dashed lines passing through ±π shown in Figure 11.3(a)
are mapped into the negative real axis on the z-plane as shown in Figure 11.3(b). Thus the
meanings of the two real axes are different. In the CT case or on the s-plane, the real axis
Figure 11.4: (a) (0.7)^n sin(1.1n + 0.3). (b) sin(1.1n + 0.3). (c) (1.3)^n sin(1.1n + 0.3).
denotes zero frequency. In the DT case or on the z-plane, the positive real axis denotes zero
frequency; whereas the negative real axis denotes the highest frequency π.
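The mapping properties above are easy to spot-check numerically. A small Python sketch (sampling period T = 1 assumed, as in the text; the function name is ours):

```python
import cmath
import math

def to_z(s):
    # z = e^s maps a point s = sigma + j*omega on the s-plane to the z-plane (T = 1)
    return cmath.exp(s)

# jw-axis: magnitude 1 (unit circle)
on_circle = abs(to_z(0.5j))
# many-to-one: s = 0 and s = j*2*pi both map to z = 1
same_point = to_z(0), to_z(2j * math.pi)
# left half plane -> interior of the unit circle; right half plane -> exterior
interior = abs(to_z(complex(-0.3, 2.0)))
exterior = abs(to_z(complex(0.2, -1.0)))
# s = sigma + j*pi lands on the negative real axis of the z-plane
negative_axis = to_z(complex(-0.5, math.pi))
```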
In the CT case, the s-plane is divided into three parts: the left-half plane (LHP), right-half
plane (RHP), and the jω-axis. In view of the mapping in Figure 11.3, we will now divide the
z-plane into three parts: the unit circle, its interior and its exterior.
k1 r^n sin(θn + k2)

for some real constants k1 and k2. Note that the magnitude r governs the envelope of the
response because

|k1 r^n sin(θn + k2)| ≤ |k1| r^n

and the phase θ governs the frequency of oscillation as shown in Figure 11.4. If r < 1 or the
complex-conjugate poles lie inside the unit circle, then the response vanishes oscillatorily as
shown in Figure 11.4(a). If r = 1, or the poles lie on the unit circle, then the response is a
pure sinusoid which will not vanish nor grow unbounded as shown in Figure 11.4(b). If r > 1
or the poles lie outside the unit circle, then the response will grow to ±∞ as n → ∞ as shown
in Figure 11.4(c).
If θ = π, then the pole p becomes re^{jπ} = −r and is located on the negative real axis. Its
response is (−r)^n. We plot this response in Figures 11.5(a), (b), and (c) for r = 0.7, 1, 1.3, respectively.
Note that we have assumed that the sampling period T is 1 and its Nyquist frequency range
is (−π/T, π/T ] = (−π, π]. Thus θ = π is the highest possible frequency. In this case, its
response will change sign every sample (positive to negative or vice versa) as shown in Figure
11.5. Note that if θ = 0 or the frequency is zero, then the response will never change sign.
We next discuss repeated poles. For repeated poles, we have

Z^{−1}[(b2z² + b1z + b0)/(z − r)²] = k0 δd[n] + k1 r^n + k2 n r^n

and

Z^{−1}[(b4z⁴ + b3z³ + b2z² + b1z + b0)/((z − re^{jθ})²(z − re^{−jθ})²)] = k0 δd[n] + k1 r^n sin(θn + k2) + k3 n r^n sin(θn + k4)
Figure 11.5: (a) (−0.7)^n, for n = 0 : 20. (b) (−1)^n. (c) (−1.3)^n.
[Figure 11.6: (a) n²(0.7)^n. (b) the response of a repeated pair of complex-conjugate poles on the unit circle.]
If r > 1, then r^n and nr^n approach ∞ as n → ∞. Thus if a pole, simple or repeated, real or
complex, lies outside the unit circle, then its time response approaches infinity as n → ∞.
The response of a repeated pole contains nr^n. It is a product of ∞ and 0 as n → ∞ for
r < 1. However, because

lim_{n→∞} (n + 1)r^{n+1}/(nr^n) = r < 1

the response approaches zero as n → ∞ following Cauchy's ratio test. See Reference [P3].
Similarly, we can show that, for any r < 1 and any positive integer k, n^k r^n approaches 0
as n → ∞. Thus we conclude that the time response of a pole, simple or repeated, real or
complex, lying inside the unit circle, approaches 0 as n → ∞. This is indeed the case as shown
in Figure 11.6(a) for n²(0.7)^n.
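The decay of n^k r^n for r < 1 can be observed directly. A short Python sketch reproducing the behavior in Figure 11.6(a):

```python
# Samples of n^2 (0.7)^n: the sequence first rises, peaks near n = 2/(-ln 0.7) = 5.6,
# then decays to zero because the geometric factor eventually dominates.
seq = [(n ** 2) * (0.7 ** n) for n in range(61)]
peak_index = max(range(61), key=lambda n: seq[n])
tail = seq[60]                 # essentially zero by n = 60
ratio = seq[60] / seq[59]      # approaches r = 0.7 (Cauchy's ratio test)
```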
The situation for poles with r = 1, or on the unit circle, is more complex. The time
response of a simple pole at z = 1 is a constant for all n. The time response of a simple pair
of complex-conjugate poles on the unit circle is k1 sin(θn + k2 ) which is a sustained oscillation
for all n as shown in Figure 11.4(b). However the time response of a repeated pole at z = 1 is
n which approaches ∞ as n → ∞. The time response of a repeated pair of complex-conjugate
poles on the unit circle contains k3 n sin(θn + k4 ) which approaches ∞ or −∞ oscillatorily as
n → ∞ as shown in Figure 11.6(b). We summarize the preceding discussion in the following:
the time response of a pole approaches zero as n → ∞ if and only if the pole lies inside the
unit circle. This is similar to the CT case where the response of a pole approaches zero as
t → ∞ if and only if the pole lies inside the LHP.
11.5 Stability
This section introduces the concept of stability for DT systems. If a DT system is not stable,
its response excited by any input generally will grow unbounded or overflow. Thus every DT
system designed to process signals must be stable. Let us give a formal definition.
A signal is bounded if it does not grow to ∞ or −∞. In other words, a signal u[n] is
bounded if there exists a constant M1 such that |u[n]| ≤ M1 < ∞ for all n ≥ 0. As in the
CT case, Definition 11.1 cannot be used to conclude the stability of a system because there
are infinitely many bounded inputs to be checked. Fortunately, stability is a property of a
system and can be determined from its mathematical descriptions.
Theorem 11.1 A DT LTI system with impulse response h[n] is BIBO stable if and only if
h[n] is absolutely summable in [0, ∞), that is,

Σ_{n=0}^{∞} |h[n]| ≤ M < ∞

for some constant M.
Proof: The output of the system excited by any input u[n] is given by the convolution

y[n] = Σ_{k=0}^{∞} h[n − k]u[k]
where h[n] is the impulse response of the system and has the property h[n] = 0, for all n < 0
as we study only causal systems. We use the equation to prove the theorem.
We first show that the system is BIBO stable if h[n] is absolutely summable. Let u[k]
be any bounded input. Then there exists an M1 such that |u[n]| ≤ M1 for all n ≥ 0. We
compute

|y[n]| = |Σ_{k=0}^{∞} h[n − k]u[k]| ≤ Σ_{k=0}^{∞} |h[n − k]||u[k]| ≤ M1 Σ_{k=0}^{∞} |h[n − k]|
Note that h[n − k]u[k] can be positive or negative and their summation can cancel out. This
will not happen in the second summation because the terms |h[n − k]||u[k]| are all nonnegative.
Thus we have the first inequality. The second inequality follows from |u[k]| ≤ M1, for all k ≥ 0. Let
us introduce the new integer variable k̄ := n − k and use the causality condition h[k̄] = 0 for
all k̄ < 0. Then the preceding equation becomes
|y[n]| ≤ M1 Σ_{k̄=−∞}^{n} |h[k̄]| = M1 Σ_{k̄=0}^{n} |h[k̄]| ≤ M1 Σ_{k̄=0}^{∞} |h[k̄]| ≤ M1 M
for all n ≥ 0. This shows that if h[n] is absolutely summable, every bounded input will excite
a bounded output. Thus the system is BIBO stable.
Next we show that if h[n] is not absolutely summable, then there exists a bounded input
that will excite an output whose magnitude approaches ∞ as n → ∞. Because ∞ is not a
number, the way to show |y[n]| → ∞ is to show that no matter how large M2 is, there exists
an n1 such that |y[n1 ]| > M2 . If h[n] is not absolutely summable, then for any arbitrarily
large M2 , there exists an n1 such that
Σ_{k=0}^{n1} |h[k]| > M2
Let us choose the bounded input u[k] = 1 if h[n1 − k] ≥ 0 and u[k] = −1 if h[n1 − k] < 0, for
k = 0 : n1. Then the output at n = n1 is

y[n1] = Σ_{k=0}^{n1} h[n1 − k]u[k] = Σ_{k=0}^{n1} |h[n1 − k]| = Σ_{k̄=0}^{n1} |h[k̄]| > M2
where we have introduced k̄ := n1 − k and used the causality condition h[k̄] = 0 for all k̄ < 0.
This shows that if h[n] is not absolutely summable, then there exists a bounded input that
will excite an output with an arbitrarily large magnitude. Thus the system is not stable. This
establishes the theorem.2
A direct consequence of this theorem is that every DT FIR system is BIBO stable. Indeed,
a DT FIR system has only a finite number of nonzero entries in h[n]. Thus its impulse response
is absolutely summable. We next use the theorem to develop the next theorem.
Theorem 11.2 A DT LTI lumped system with proper rational transfer function H(z) is
stable if and only if every pole of H(z) has a magnitude less than 1 or, equivalently, all poles
of H(z) lie inside the unit circle on the z-plane. 2
Proof: If H(z) has one or more poles lying outside the unit circle, then its impulse response
grows unbounded and is not absolutely summable. If it has poles on the unit circle, then its
impulse response will not approach 0 and is not absolutely summable. In conclusion if H(z)
has one or more poles on or outside the unit circle, then the system is not stable.
Next we argue that if all poles lie inside the unit circle, then the system is stable. To
simplify the discussion, we assume that H(z) has only simple poles and can be expanded as
H(z) = Σ_i ki z/(z − pi) + k0

where the poles pi can be real or complex. Then its inverse z-transform or the impulse
response of the system is

h[n] = Σ_i ki pi^n + k0 δd[n]
for n ≥ 0. We compute
Σ_{n=0}^{∞} |h[n]| = Σ_{n=0}^{∞} |Σ_i ki pi^n + k0 δd[n]| ≤ Σ_i (Σ_{n=0}^{∞} |ki||pi|^n) + |k0|      (11.37)
Because |pi| < 1 for every i, each geometric series converges: Σ_{n=0}^{∞} |ki||pi|^n = |ki|/(1 − |pi|) < ∞.
Thus h[n] is absolutely summable and, by Theorem 11.1, the system is BIBO stable.

Example 11.5.1 Consider the system with transfer function
H(z) = (z + 2)(z − 10)/((z − 0.9)(z + 0.95)(z + 0.7 + j0.7)(z + 0.7 − j0.7))
Its two real poles 0.9 and −0.95 have magnitudes less than 1. We compute the magnitude of
the complex poles:
√((0.7)² + (0.7)²) = √(0.49 + 0.49) = √0.98 = 0.9899
It is less than 1. Thus the system is stable. Note that H(z) has two zeros outside the unit
circle on the z-plane. 2
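The pole-magnitude test of Theorem 11.2 is one line of code. A Python sketch using the poles of this example (the function name is ours):

```python
def is_bibo_stable(poles):
    # Theorem 11.2: stable iff every pole lies strictly inside the unit circle
    return max(abs(p) for p in poles) < 1.0

poles = [0.9, -0.95, complex(-0.7, 0.7), complex(-0.7, -0.7)]
largest = max(abs(p) for p in poles)    # sqrt(0.98) = 0.9899, the complex pair
```

The zeros at −2 and 10 lie outside the unit circle but, as noted in the example, play no role in stability.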
Example 11.5.2 Consider an FIR system with impulse response h[n], for n = 0 : N, with
h[N] ≠ 0 and h[n] = 0, for n > N. Its transfer function is
H(z) = Σ_{n=0}^{N} h[n]z^{−n} = (h[0]z^N + h[1]z^{N−1} + · · · + h[N − 1]z + h[N])/z^N
All its N poles are located at z = 0.⁵ They all lie inside the unit circle. Thus it is stable.
Corollary 11.2 A DT LTI lumped system with impulse response h[n] is BIBO stable if and
only if h[n] → 0 as n → ∞.2
11.5.1 What holds for lumped systems may not hold for distributed
systems
We mention that the results for LTI lumped systems may not be applicable to LTI distributed
systems. For example, consider a DT LTI system with impulse response
1
h[n] =
n
for n ≥ 1. It approaches 0 as n → ∞. Can we conclude that the system is stable? Or is
Corollary 11.2 applicable? To answer this, we must check whether the system is lumped or
not. The transfer function or the z-transform of h[n] = 1/n can be computed as − ln(1−z −1 ).
It is not a rational function of z. Thus the system is not lumped and Corollary 11.2 is not
applicable.
5 If h[N ] = 0, then H(z) has at most N − 1 number of poles at z = 0.
Let us compute

S := Σ_{n=0}^{∞} |h[n]| = Σ_{n=1}^{∞} 1/n
  = 1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + (1/9 + 1/10 + · · · + 1/15 + 1/16) + · · ·
The first pair of parentheses contains two terms, each term is 1/4 or larger, thus the sum is
larger than 2/4 = 1/2. The second pair of parentheses contains four terms, each term is 1/8
or larger. Thus the sum is larger than 4/8 = 1/2. The third pair of parentheses contains
eight terms, each term is 1/16 or larger. Thus the sum is larger than 8/16 = 1/2. Proceeding
forward, we have

S > 1 + 1/2 + 1/2 + 1/2 + · · · = ∞
and the sequence is not absolutely summable. Thus the DT system with impulse response
1/n, for n ≥ 1, is not stable according to Theorem 11.1. In conclusion, Theorem 11.1 is
applicable to lumped and distributed systems; whereas Theorem 11.2 and its corollary are
applicable only to lumped systems.
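The contrast between the two impulse responses can be seen from partial sums of the absolute sum in Theorem 11.1. A Python sketch: the geometric h[n] = 0.9^n (a stable lumped system) settles at 10, while the harmonic h[n] = 1/n keeps growing without bound:

```python
def partial_abs_sum(h, N):
    # N-term partial sum of the absolute sum in Theorem 11.1
    return sum(abs(h(n)) for n in range(N))

geometric = partial_abs_sum(lambda n: 0.9 ** n, 400)              # -> 1/(1 - 0.9) = 10
harmonic_256 = partial_abs_sum(lambda n: 1.0 / n if n else 0.0, 256)
harmonic_65536 = partial_abs_sum(lambda n: 1.0 / n if n else 0.0, 65536)
# Each doubling of the number of harmonic terms adds at least 1/2, so the
# partial sums grow roughly like ln(N) and never settle.
```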
Consider a transfer function whose denominator is D(z) = 2z³ − 0.2z² − 0.24z − 0.08. Typing

>> d=[2 -0.2 -0.24 -0.08];
>> roots(d)

in MATLAB will yield 0.5, −0.2 ± j0.2. It has one real root and one pair of complex conjugate
roots. They all have magnitudes less than 1. Thus any transfer function with D(z) as its
denominator is BIBO stable.
A polynomial is defined to be DT stable if all its roots have magnitudes less than 1. We
discuss a method of checking whether a polynomial is DT stable or not without computing
its roots.6 We use the following D(z) to illustrate the procedure:
D(z) = a0 z⁵ + a1 z⁴ + a2 z³ + a3 z² + a4 z + a5
We call a0 the leading coefficient. If a0 < 0, we apply the procedure to −D(z). Because D(z)
and −D(z) have the same set of roots, if −D(z) is DT stable, so is D(z). The polynomial
D(z) has degree 5 and six coefficients ai , i = 0 : 5. We form Table 11.2, called the Jury
table. The first row is simply the coefficients of D(z) arranged in the descending power of
z. The second row is the reversal of the first row. We compute k1 = a5 /a0 , the ratio of the
last entries of the first two rows. The first bi row is obtained by subtracting from the first ai
row the product of the second ai row and k1 . Note that the last entry of the first bi row is
automatically zero and is discarded in the subsequent discussion. We then reverse the order
of bi to form the second bi row and compute k2 = b4 /b0 . The first bi row subtracting the
product of the second bi row and k2 yields the first ci row. We repeat the process until the
table is completed as shown. We call b0 , c0 , d0 , e0 , and f0 the subsequent leading coefficients.
If D(z) has degree N , then the table has N subsequent leading coefficients.
6 The remainder of this subsection may be skipped without loss of continuity.
a0 a1 a2 a3 a4 a5
a5 a4 a3 a2 a1 a0 k1 = a5 /a0
b0 b1 b2 b3 b4 0 (1st ai row) −k1 (2nd ai row)
b4 b3 b2 b1 b0 k2 = b4 /b0
c0 c1 c2 c3 0 (1st bi row) −k2 (2nd bi row)
c3 c2 c1 c0 k3 = c3 /c0
d0 d1 d2 0 (1st ci row) −k3 (2nd ci row)
d2 d1 d0 k4 = d2 /d0
e0 e1 0 (1st di row) −k4 (2nd di row)
e1 e0 k5 = e1 /e0
f0 0 (1st ei row) −k5 (2nd ei row)
Theorem 11.3 A polynomial with a positive leading coefficient is DT stable if and only if
every subsequent leading coefficient is positive. If any subsequent leading coefficient is 0 or
negative, then the polynomial is not DT stable.2
The proof of this theorem can be found in Reference [2nd ed. of C6]. We discuss only its
employment.
Example 11.5.3 Consider
D(z) = z 3 − 2z 2 − 0.8
Note that the polynomial has a missing term. We form
1 −2 0 −0.8
−0.8 0 −2 1 k1 = −0.8/1 = −0.8
0.36 −2 −1.6 0
−1.6 −2 0.36 k2 = −1.6/0.36 = −4.44
−6.74
A negative subsequent leading coefficient appears. Thus the polynomial is not DT stable.2
Example 11.5.4 Consider

D(z) = 2z³ − 0.2z² − 0.24z − 0.08

We form
2 −0.2 −0.24 −0.08
−0.08 −0.24 −0.2 2 k1 = −0.08/2 = −0.04
1.9968 −0.2096 −0.248 0
−0.248 −0.2096 1.9968 k2 = −0.248/1.9968 = −0.124
1.966 −0.236 0
−0.236 1.966 k3 = −0.236/1.966 = −0.12
1.938 0
The three subsequent leading coefficients are all positive. Thus the polynomial is DT stable.2
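The Jury-table procedure is mechanical and is easy to code. A Python sketch (our own implementation of the table in Table 11.2, not a standard library routine); it reproduces the conclusions of Examples 11.5.3 and 11.5.4:

```python
def jury_stable(d):
    # Jury test (Theorem 11.3): with a positive leading coefficient, D(z) is
    # DT stable iff every subsequent leading coefficient is positive.
    row = list(d)
    if row[0] < 0:                     # use -D(z); it has the same roots
        row = [-c for c in row]
    while len(row) > 1:
        k = row[-1] / row[0]
        # subtract k times the reversed row; the last entry becomes 0 and is dropped
        row = [row[i] - k * row[len(row) - 1 - i] for i in range(len(row) - 1)]
        if row[0] <= 0:                # zero or negative subsequent leading coefficient
            return False
    return True
```

`jury_stable([1, -2, 0, -0.8])` returns False (Example 11.5.3) and `jury_stable([2, -0.2, -0.24, -0.08])` returns True (Example 11.5.4).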
We defined in Section 9.5.3 a polynomial to be CT stable if all its roots have negative real
parts. A CT stable polynomial cannot have missing terms or negative coefficients. This is
not so for DT stable polynomials. For example, the polynomial in the preceding Example has
negative coefficients and is still DT stable. Thus the conditions for a polynomial to be CT
stable or DT stable are different and are independent of each other.
the transient (tr) response of the system excited by the input u[n]. For example, consider the
system in Example 11.4.1 with transfer function
H(z) = (2z² + 10)/((z − 2)(z + 0.8))
Its step response was computed in (11.25) as
y[n] = 6.43 · 2^n + 2.24 · (−0.8)^n − 6.67 · 1^n = 6.43 · 2^n + 2.24 · (−0.8)^n − 6.67      (11.41)
Because the pole at 2 lies outside the unit circle, the term 6.43 · 2^n grows unbounded and the
system is not stable. Next consider the system with transfer function

H(z) = (z + 2)/((z − 0.5)(z − 0.9e^{j2.1})(z − 0.9e^{−j2.1}))      (11.42)
Next consider the transfer function

H(z) = (z + 2)/((z − 0.5)(z − 0.9e^{j2.1})(z − 0.9e^{−j2.1}))    (11.42)

It has a real pole at 0.5 and a pair of complex-conjugate poles at 0.9e^{±j2.1}. They all lie inside
the unit circle, thus the system is stable.
The step response of (11.42) in the transform domain is

Y(z) = H(z)U(z) = [(z + 2)/((z − 0.5)(z − 0.9e^{j2.1})(z − 0.9e^{−j2.1}))] · [z/(z − 1)]
and its time response is of the form, for n ≥ 0,

y[n] = H(1) · 1^n + k1 (0.5)^n + k2 (0.9)^n sin(2.1n + k3)    (11.43)

with, as in (11.35),

H(1) = (1 + 2)/(1 + 0.4 + 0.36 − 0.405) = 3/1.355 = 2.21

and some real constants ki.
Because the poles of H(z) all have magnitudes less than 1, their responses all approach
zero as n → ∞. Thus the steady-state response of (11.43) is

yss[n] = H(1) · 1^n = 2.21
Figure 11.7: (a) Step response of (11.42) followed by zero-order hold. (b) Its transient re-
sponse.
We see that the steady-state response is determined by the pole of the input alone and the
transient response is due to all poles of the transfer function in (11.42).
Systems are designed to process signals. The most common processing operations are amplification
and filtering. The processed signals or the outputs of the systems clearly should be dictated
by the signals to be processed. If a system is not stable, its output will generally grow
unbounded. Clearly such a system cannot be used. On the other hand, if a system is stable,
after its transient response dies out, the processed signal is dictated by the applied signal.
Thus the transient response is an important criterion in designing a system. The faster the
transient response vanishes, the faster the response reaches steady state.
For example, the time constant of the pole pair 0.9e^{±j2.1} is nc = −1/ln 0.9 = 9.49, and the response due
to the pair of poles vanishes in 5nc = 47.45 samples as shown in Figure 11.8(b).
With the preceding discussion, we may define the time constant of a stable H(z) as

nc = −1 / ln(largest magnitude of all poles)    (11.46)
(Footnote 7) If we plot the responses directly, they will be more difficult to visualize. In Figure 11.7(b), we actually
Figure 11.8: (a) The sequence 0.5n followed by zero-order hold. (b) The sequence
0.9n sin(2.1n) followed by zero-order hold.
If H(z) has many poles, the smaller the pole in magnitude, the faster its time response
approaches zero. Thus the pole with the largest magnitude dictates the time for the transient
response to approach zero. For example, the transfer function in (11.42) has three poles 0.5
and 0.9e±j2.1 . The largest magnitude of all poles is 0.9. Thus the time constant of (11.42)
is nc = −1/ ln 0.9 = 9.49. Indeed its transient response due to a step input vanishes in
5nc = 47.45 samples as shown in Figure 11.7(b) or, equivalently, its step response approaches
the steady state H(1) = 2.21 in 47.45 samples as shown in Figure 11.7(a).
The time constant of a stable transfer function is defined from its poles. Its zeros do not
play any role. For example, the transfer functions from (11.31) through (11.33) have the same
set of poles but different zeros. Their poles are −0.6, −0.6, 0.9e±j2.1 . The largest magnitude
is 0.9. Thus their time constants all equal nc = −1/ ln 0.9 = 9.49 and their step responses
all reach steady state in roughly 47 samples as shown in Figure 11.2. Indeed, the rule of five
time constants appears to be widely applicable. Even so, the rule should be used only as a
guide. As in the CT case, it is possible to construct examples whose transient responses do
not decrease to less than 1% of their peak values in five time constants.
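The arithmetic behind these numbers can be checked directly; a small Python sketch of ours:

```python
import numpy as np

# Time constant (11.46): nc = -1 / ln(largest magnitude of all poles).
poles = [0.5, 0.9 * np.exp(1j * 2.1), 0.9 * np.exp(-1j * 2.1)]  # poles of (11.42)
largest = max(abs(p) for p in poles)       # 0.9
nc = -1 / np.log(largest)
print(round(nc, 2))      # 9.49
print(round(5 * nc, 1))  # about 47.5 samples to reach steady state
```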
Figure 11.9: Magnitude response (solid line) and phase response (dotted line) of (11.48).
ω = π/2 = 1.57 :   H(e^{jπ/2}) = H(j) = (j + 1)/(j10 − 8) = 1.4e^{jπ/4} / 12.8e^{j2.245} = 0.11e^{−j1.46}

ω = π = 3.14 :   H(e^{jπ}) = H(−1) = (−1 + 1)/(−10 − 8) = 0
Other than these ω, its computation is complicated. Fortunately, the MATLAB function
freqz, where the last character z stands for the z-transform, carries out the computation. To
compute the frequency response of (11.48) from ω = −10 to 10 with increment 0.01, we type

>> n=[1 1]; d=[10 -8]; w=-10:0.01:10;
>> H=freqz(n,d,w); plot(w,abs(H),w,angle(H),':')

The result is shown in Figure 11.9. We see that the frequency response is periodic with period
2π, that is
H(e^{jω}) = H(e^{j(ω+2kπ)})

for all ω and all integers k. This is in fact a general property and follows directly from
e^{j(ω+2kπ)} = e^{jω}e^{j2kπ} = e^{jω}. Thus we need to plot H(e^{jω}) for ω only in a frequency interval of
2π. As in plotting frequency spectra of DT signals, we plot frequency responses of DT systems
only in the Nyquist frequency range (−π/T, π/T] with T = 1, or in [0, π] as we discuss next.
This is in contrast to plotting frequency responses of CT systems for ω in (−∞, ∞) or in
[0, ∞). See Sections 4.8 and 9.7.
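The periodicity property can be verified by evaluating (11.48) on the unit circle; a small Python check of ours:

```python
import numpy as np

def H(z):
    return (z + 1) / (10 * z - 8)     # the transfer function in (11.48)

w = 1.0
# e^{j(w + 2k*pi)} = e^{jw}, so the frequency response repeats with period 2*pi
print(np.isclose(H(np.exp(1j * w)), H(np.exp(1j * (w + 2 * np.pi)))))  # True
# Spot check from the text: |H(e^{j pi/2})| = 0.11
print(round(abs(H(np.exp(1j * np.pi / 2))), 2))                        # 0.11
```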
If all coefficients of H(z) are real, then we have [H(e^{jω})]* = H(e^{−jω}), which implies, writing H(e^{jω}) = A(ω)e^{jθ(ω)},

A(ω) = A(−ω)  (even)    (11.49)

and

θ(−ω) = −θ(ω)  (odd)    (11.50)
In other words, if H(z) has only real coefficients, then its magnitude response is even and its
phase response is odd. Thus we often plot frequency responses only in the positive frequency
Figure 11.10: (a) Magnitude responses of (11.48) (solid line) and (11.51) (dotted line). (b)
Phase responses of (11.48) (solid line) and (11.51) (dotted line).
range [0, π]. If we do not specify ω in using freqz(n,d), then freqz automatically selects
512 points in [0, π] (footnote 9). Thus typing

>> n=[1 1]; d=[10 -8]; freqz(n,d)

generates the magnitude and phase responses of (11.48) in Figures 11.10(a) and (b) with solid
lines. Note that we did not specify the frequencies in the preceding program.
Next we consider the DT transfer function
H1(z) = (z + 1)/(−8z + 10)    (11.51)
Its pole is 10/8 = 1.25 and is outside the unit circle. Thus the system is not stable. Re-
placing d=[10 -8] by d=[-8 10], the preceding program generates the magnitude and phase
responses of (11.51) in Figures 11.10(a) and (b) with dotted lines. We see that the magnitude
responses of (11.48) and (11.51) are identical, but their phase responses are different. See
Problem 11.19.
We next discuss the physical meaning of frequency responses under the stability condition.
Consider a DT system with transfer function H(z). Let us apply to it the input u[n] = ae^{jω0 n}.
Note that u[n] is not a real-valued sequence. However, its employment will simplify the
derivation. The z-transform of u[n] = ae^{jω0 n} is az/(z − e^{jω0}). Thus the output of H(z) is
Y(z) = H(z)U(z) = H(z) · az/(z − e^{jω0})
We expand Y(z)/z as

Y(z)/z = H(z) · a/(z − e^{jω0}) = k1/(z − e^{jω0}) + terms due to poles of H(z)
with

k1 = aH(z)|_{z=e^{jω0}} = aH(e^{jω0})
(Footnote 9) The function freqz is based on FFT; thus its number of frequencies is selected to be a power of 2, or
2^9 = 512. However, the function freqs, for computing frequency responses of CT systems, has nothing to do
with FFT, and its default uses 200 frequencies.
11.7. FREQUENCY RESPONSES 311
Thus we have

Y(z) = aH(e^{jω0}) · z/(z − e^{jω0}) + z × [terms due to poles of H(z)]

which implies

y[n] = aH(e^{jω0})e^{jω0 n} + responses due to poles of H(z)
If H(z) is stable, then all responses due to its poles approach 0 as n → ∞. Thus we have
Theorem 11.4 Consider a DT system with proper rational transfer function H(z). If the
system is BIBO stable, then the output excited by u[n] = ae^{jω0 n} approaches the steady-state response

yss[n] = aH(e^{jω0})e^{jω0 n}    (11.52)
       = aA(ω0)e^{j(ω0 n + θ(ω0))}    (11.53)

where we have written H(e^{jω0}) = A(ω0)e^{jθ(ω0)}.
The steady-state responses in (11.52) and (11.53) are excited by the input u[n] = ae^{jω0 n}.
If ω0 = 0, the input is a step sequence with amplitude a, and the output approaches a step
sequence with amplitude aH(1). If u[n] = a sin ω0 n = Im[ae^{jω0 n}], where Im stands for the
imaginary part, then the output approaches the imaginary part of (11.53) or aA(ω0) sin(ω0 n +
θ(ω0)). Using the real part of e^{jω0 n}, we will obtain aA(ω0) cos(ω0 n + θ(ω0)) for the input
a cos ω0 n. In conclusion, if we apply a sinusoidal input to a stable system, then the output
approaches a sinusoid with the same frequency, but its amplitude will be modified by
A(ω0) = |H(e^{jω0})| and its phase by ∡H(e^{jω0}) = θ(ω0). We give an example.
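The theorem can be illustrated by simulating the difference equation of the stable lowpass filter H(z) = (z + 1)/(10z − 8) of Figure 11.10; the recursion below follows from 10y[n] − 8y[n − 1] = u[n] + u[n − 1]. This is a sketch of ours, not a program from the text:

```python
import numpy as np

# For a stable H(z) driven by u[n] = sin(w0 n), the output approaches
# A(w0) sin(w0 n + theta(w0))  (Theorem 11.4).
w0 = 0.1
N = 600
u = np.sin(w0 * np.arange(N))
y = np.zeros(N)
for n in range(1, N):
    # y[n] = 0.8 y[n-1] + 0.1 u[n] + 0.1 u[n-1], from H(z) = (z+1)/(10z-8)
    y[n] = 0.8 * y[n - 1] + 0.1 * u[n] + 0.1 * u[n - 1]

Hw = (np.exp(1j * w0) + 1) / (10 * np.exp(1j * w0) - 8)
A, theta = abs(Hw), np.angle(Hw)   # roughly 0.91 and -0.42, as read from Figure 11.10
yss = A * np.sin(w0 * np.arange(N) + theta)
# After the transient dies out, the simulated output matches the predicted steady state
print(np.max(np.abs(y[300:] - yss[300:])) < 1e-3)   # True
```

The pole at 0.8 gives a time constant of about 4.5 samples, so by n = 300 the transient is long gone.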
Example 11.7.2 Consider a system with transfer function H(z) = (z + 1)/(10z − 8). The
system has a pole at 8/10 = 0.8, which has magnitude less than 1; thus it is stable. We
compute the steady-state response of the system excited by

u[n] = 2 + sin 6.38n + 0.2 cos 3n
Is there any problem with the expression of u[n]? Recall that we have assumed T = 1.
Thus the Nyquist frequency range is (−π, π] = (−3.14, 3.14] in rad/s. The frequency 6.38 in
sin 6.38n is outside the range and must be shifted inside the range by subtracting 2π = 6.28.
Thus the input in principal form is
u[n] = 2 + sin(6.38 − 6.28)n + 0.2 cos 3n = 2 + sin 0.1n + 0.2 cos 3n (11.54)
See Section 4.6.2. In the positive Nyquist frequency range [0, π] = [0, 3.14], 2 = 2 cos(0 · n)
and sin 0.1n are low-frequency components and cos 3n is a high-frequency component.
In order to apply Theorem 11.4, we read from Figures 11.10(a) and (b): H(1) = 1,
H(e^{j0.1}) = 0.9e^{−j0.4}, and H(e^{j3}) = 0.008e^{−j1.6}. Clearly, the reading cannot be very
accurate. Then Theorem 11.4 implies

yss[n] = 2 × 1 + 0.9 sin(0.1n − 0.4) + 0.2 × 0.008 cos(3n − 1.6)    (11.55)
Figure 11.11: (a) Ideal lowpass filter with cutoff frequency ωc. (b) Ideal bandpass filter with
upper and lower cutoff frequencies ωu and ωl. (c) Ideal highpass filter with cutoff frequency
ωc.
We see that cos 3n is greatly attenuated, whereas the dc component 2 goes through without
any attenuation and sin 0.1n is only slightly attenuated. Thus the system passes the low-frequency
signals and stops the high-frequency signal and is therefore called a lowpass filter.
We mention that the condition of stability is essential in Theorem 11.4. If a system is not
stable, then the theorem is not applicable as demonstrated in the next example.
Example 11.7.3 Consider a system with transfer function H1(z) = (z + 1)/(−8z + 10). It
has a pole at 1.25 and is not stable. Its magnitude and phase responses are shown in Figures
11.10(a) and (b) with dotted lines. We can also read out H1(e^{j0.1}) = 0.91e^{j0.42}. If we apply
the input u[n] = sin 0.1n, the output can be computed as

y[n] = −0.374(1.25)^n + 0.91 sin(0.1n + 0.42),  for n ≥ 0

(see Problem 11.8). The term −0.374(1.25)^n grows unbounded as n increases, so the output
does not approach a sinusoid; Theorem 11.4 does not apply.
In view of Theorem 11.4, if we can design a stable system with magnitude response as
shown in Figure 11.11(a) with a solid line and phase response with a dotted line, then the
system will pass sinusoids with frequency |ω| < ωc and stop sinusoids with frequency |ω| > ωc.
We require the phase response to be linear to avoid distortion, as we will discuss in a later
section. We call such a system an ideal lowpass filter with cutoff frequency ωc. The frequency
range [0, ωc] is called the passband, and [ωc, π] is called the stopband. Figures 11.11(b) and
(c) show the characteristics of ideal bandpass and highpass filters. The ideal bandpass filter
will pass sinusoids with frequencies lying inside the range [ωl, ωu], where ωl and ωu are the
lower and upper cutoff frequencies, respectively. The ideal highpass filter will pass sinusoids
with frequencies lying inside [ωc, π]. They are called frequency-selective filters. Note that
filters are special types of systems that are designed to pass some frequency bands of signals.
Note that Figure 11.11 is essentially the same as Figure 9.11. The only difference is that
the frequency range in Figure 9.11 is [0, ∞) and the frequency range in Figure 11.11 is [0, π]
because of the assumption T = 1.
The impulse response h[n] of the ideal lowpass filter with linear phase −ωn0, where n0 is
a positive integer, can be computed as

h[n] = sin[ωc(n − n0)] / [π(n − n0)]

for all n in (−∞, ∞). See Problem 11.20. It is not zero for all n < 0, thus the ideal
lowpass filter is not causal and cannot be built in the real world. In practice, we modify the
11.8. FREQUENCY RESPONSES AND FREQUENCY SPECTRA 313
Figure 11.12: Specifications of practical (a) lowpass filter, (b) bandpass filter, and (c) highpass
filter.
magnitude responses in Figure 11.11 to the ones shown in Figure 11.12. We insert a transition
band between the passband and stopband. Furthermore, we specify a passband tolerance and
stopband tolerance as shown with shaded areas and require the magnitude response to lie
inside the areas. The transition band is generally not specified and is the “don’t care” region.
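The noncausality claim for h[n] = sin[ωc(n − n0)]/[π(n − n0)] is easy to confirm numerically; in this sketch of ours, ωc = 1 and n0 = 2 are arbitrary choices:

```python
import numpy as np

def h_ideal(n, wc=1.0, n0=2):
    """Impulse response of the ideal lowpass filter with linear phase -w*n0."""
    m = n - n0
    if m == 0:
        return wc / np.pi          # limiting value of sin(wc*m)/(pi*m)
    return np.sin(wc * m) / (np.pi * m)

# h[n] is nonzero for some n < 0, so the ideal filter is not causal.
print(any(abs(h_ideal(n)) > 1e-6 for n in range(-20, 0)))   # True
```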
We also introduce the group delay defined as

Group delay = τ(ω) = −dθ(ω)/dω    (11.56)

For an ideal filter with linear phase such as θ(ω) = −n0ω, its group delay is n0, a constant.
Thus instead of specifying the phase response to be a linear function of ω, we specify the
group delay in the passband to be roughly constant, or n0 − ε < τ(ω) < n0 + ε, for some
constants n0 and ε. Even with these more relaxed specifications on the magnitude and phase
responses, if we specify both of them, it is still difficult to design causal filters to meet both
specifications. Thus in practice, we often specify only magnitude responses as shown in Figure
11.12. The design problem is then to find a proper stable rational function H(z) of a degree
as small as possible to have its magnitude response lying inside the specified region. See, for
example, Reference [C7].
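The group delay in (11.56) can be estimated from phase samples by numerical differentiation; in this sketch of ours, the linear-phase response e^{−j5ω} (so n0 = 5) is an assumed example:

```python
import numpy as np

# Group delay tau(w) = -d theta / d w, estimated numerically from the
# phase of the linear-phase response H(e^{jw}) = e^{-j n0 w}.
n0 = 5
w = np.linspace(0.1, 1.0, 200)
theta = np.unwrap(np.angle(np.exp(-1j * n0 * w)))  # unwrap removes 2*pi jumps
tau = -np.gradient(theta, w)
print(np.allclose(tau, n0))   # True: constant group delay of n0 samples
```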
We see that, replacing z by e^{jω}, the z-transform of x[n] becomes the frequency spectrum of
x[n], that is,

Xd(ω) = Z[x[n]]|_{z=e^{jω}} = X(e^{jω})    (11.57)
This is similar to the CT case where the frequency spectrum of a positive-time and absolutely
integrable x(t) equals its Laplace transform with s replaced by jω. It is important to mention
that (11.57) holds only if x[n] is positive time and absolutely summable. For example, consider
x[n] = 1.2^n, for n ≥ 0. Its z-transform is X(z) = z/(z − 1.2). Its frequency spectrum,
however, is not defined and cannot equal X(e^{jω}) = e^{jω}/(e^{jω} − 1.2).
The input and output of a DT system with transfer function H(z) are related by

Y(z) = H(z)U(z)    (11.58)

This equation is applicable whether the system is stable or not and whether the frequency
spectrum of the input signal is defined or not. For example, consider H(z) = z/(z − 2), which
is not stable, and u[n] = 1.2^n, which grows unbounded and whose frequency spectrum is not
defined. The output of the system excited by u[n] is

Y(z) = H(z)U(z) = [z/(z − 2)] · [z/(z − 1.2)] = 2.5z/(z − 2) − 1.5z/(z − 1.2)

which implies y[n] = 2.5 × 2^n − 1.5 × 1.2^n, for n ≥ 0. The output grows unbounded and its
frequency spectrum is not defined.
Before proceeding, we show that if H(z) is stable (its impulse response h[n] is absolutely
summable) and if the input u[n] is absolutely summable, then so is the output y[n]. Indeed,
we have
y[n] = Σ_{k=0}^{∞} h[n − k]u[k]

which implies

|y[n]| ≤ Σ_{k=0}^{∞} |h[n − k]| |u[k]|

Thus we have

Σ_{n=0}^{∞} |y[n]| ≤ Σ_{n=0}^{∞} Σ_{k=0}^{∞} |h[n − k]| |u[k]| = Σ_{k=0}^{∞} ( Σ_{n=0}^{∞} |h[n − k]| ) |u[k]|
                  = Σ_{k=0}^{∞} ( Σ_{n̄=−k}^{∞} |h[n̄]| ) |u[k]| = ( Σ_{n̄=0}^{∞} |h[n̄]| ) ( Σ_{k=0}^{∞} |u[k]| )

where n̄ := n − k and we have used h[n̄] = 0 for n̄ < 0. The right-hand side is finite; thus
y[n] is absolutely summable. Substituting z = e^{jω} into (11.58) yields

Y(e^{jω}) = H(e^{jω})U(e^{jω})    (11.59)
The equation is meaningless if the system is not stable or if the input frequency spectrum is
not defined. However, if the system is BIBO stable and if the input is absolutely summable,
then the output is absolutely summable and consequently its frequency spectrum is well
defined and equals the product of the system’s frequency response H(ejω ) and the input’s
frequency spectrum U (ejω ). Equation (11.59) is the basis of digital filter design and is the
DT counterpart of (9.80).
Let us consider the magnitude and phase responses of an ideal lowpass filter with cutoff
frequency ωc shown in Figure 11.11(a), or

H(e^{jω}) = { e^{−jωn0}   for |ω| ≤ ωc
            { 0           for ωc < |ω| ≤ π        (11.60)
where n0 is a positive integer. Now if u[n] = u1 [n] + u2 [n] and if the magnitude spectra of
u1 [n] and u2 [n] are as shown in Figure 11.13, then the output frequency spectrum is given by
11.9. REALIZATIONS – STATE-SPACE EQUATIONS 315
Y(e^{jω}) = H(e^{jω})U(e^{jω}) = H(e^{jω})[U1(e^{jω}) + U2(e^{jω})] = U1(e^{jω})e^{−jωn0}    (11.61)

where H(e^{jω})U2(e^{jω}) = 0 for all ω in (−π, π], because their nonzero parts do not overlap. If
the z-transform of u1[n] is U1(z), then the z-transform of u1[n − n0] is z^{−n0} U1(z), as derived
in (7.30). Thus the spectrum in (11.61) is the spectrum of u1[n − n0]. In other words, the
output of the ideal lowpass filter is

y[n] = u1[n − n0]    (11.62)
That is, the filter stops completely the signal u2 [n] and passes u1 [n] with only a delay of n0
samples. This is called a distortionless transmission of u1 [n].
We stress once again that the equation in (11.58) is more general than the equation in
(11.59). Equation (11.59) is applicable only if the system is stable and the input frequency
spectrum is defined.
Given a proper rational transfer function H(z), we can always find an ss equation
that has H(z) as its transfer function. We call the ss equation a realization of H(z). Once a
realization is available, we can use it, as discussed in Subsection 7.6.1, for computer computation
and real-time processing.
The realization procedure is almost identical to the one in Section 8.9 and its discussion
will be brief. Consider the DT proper rational transfer function

H(z) = (b̄1 z^4 + b̄2 z^3 + b̄3 z^2 + b̄4 z + b̄5)/(ā1 z^4 + ā2 z^3 + ā3 z^2 + ā4 z + ā5)    (11.65)

with ā1 ≠ 0. We call ā1 the leading coefficient. The rest of the coefficients can be zero or
nonzero. The transfer function is proper and describes a causal system. The first step in
realization is to write (11.65) as

H(z) = N(z)/D(z) + d = (b1 z^3 + b2 z^2 + b3 z + b4)/(z^4 + a2 z^3 + a3 z^2 + a4 z + a5) + d    (11.66)

using a direct division. The procedure is identical to the one in Example 8.9.1 and will not be
repeated.
Now we claim that the following ss equation realizes (11.66):

           [ −a2  −a3  −a4  −a5 ]        [ 1 ]
x[n + 1] = [  1    0    0    0  ] x[n] + [ 0 ] u[n]    (11.67)
           [  0    1    0    0  ]        [ 0 ]
           [  0    0    1    0  ]        [ 0 ]

y[n] = [b1  b2  b3  b4] x[n] + d u[n]
with x[n] = [x1 [n] x2 [n] x3 [n] x4 [n]]0 . This ss equation can be obtained directly from the
coefficients in (11.66). We place the denominator’s coefficients, except its leading coefficient
1, with sign reversed in the first row of A, and the numerator’s coefficients, without changing
sign, directly as c. The constant d in (11.66) is the direct transmission part in (11.67). The
rest of the ss equation has fixed patterns. The second row of A is [1 0 0 · · ·]. The third row
of A is [0 1 0 · · ·], and so forth. The column vector b is all zero except its first entry, which is
1.
To show that (11.67) is a realization of (11.66), we must compute its transfer function.
Following the procedure in Section 8.9, we write (11.67) explicitly as
Applying the z-transform, using (11.18), and assuming zero initial conditions yield
This shows that the transfer function of (11.67) equals (11.66). Thus (11.67) is a realization
of (11.66) or (11.65). The realization in (11.67) is said to be in the controllable form. We first
give an example.
Example 11.9.1 Consider the transfer function in (7.28) or

H(z) = (3z^3 − 2z^2 + 5)/z^3 = 3 + (−2z^2 + 0·z + 5)/(z^3 + 0·z^2 + 0·z + 0)    (11.72)

It is in the form of (11.66) and its realization can be obtained from (11.67) as

           [ 0  0  0 ]        [ 1 ]
x[n + 1] = [ 1  0  0 ] x[n] + [ 0 ] u[n]    (11.73)
           [ 0  1  0 ]        [ 0 ]

y[n] = [−2  0  5] x[n] + 3u[n]

where x[n] = [x1[n] x2[n] x3[n]]′. This is the ss equation in (7.18) and (7.19).
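The read-out pattern of (11.67) can be coded directly; the helper below is a Python sketch of ours (the name controllable_form is an assumption; the denominator is taken monic, as in (11.66)):

```python
import numpy as np

def controllable_form(num, den):
    """Controllable-form realization of a proper H(z) = num/den, with
    coefficients in descending powers of z and den monic (leading entry 1)."""
    num, den = np.array(num, float), np.array(den, float)
    n = len(den) - 1
    num = np.concatenate([np.zeros(n + 1 - len(num)), num])  # pad to degree n
    d = num[0]                        # direct-transmission term (from division)
    c = num[1:] - d * den[1:]         # numerator of the strictly proper part
    A = np.zeros((n, n))
    A[0, :] = -den[1:]                # first row: denominator coeffs, sign reversed
    A[1:, :-1] = np.eye(n - 1)        # shifted identity below the first row
    b = np.zeros(n); b[0] = 1.0
    return A, b, c, d

# Example 11.9.1: H(z) = (3z^3 - 2z^2 + 5)/z^3
A, b, c, d = controllable_form([3, -2, 0, 5], [1, 0, 0, 0])
print(float(d), c.tolist())   # 3.0 [-2.0, 0.0, 5.0]
```

The returned matrices match (11.73): the first row of A is all zero and the rest is a shifted identity.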
Example 11.9.2 Find a realization for the transfer function

H(z) = (3z^4 + 5z^3 + 24z^2 + 23z − 5)/(2z^4 + 6z^3 + 15z^2 + 12z + 5)    (11.74)

We see that the realization can be read out from the coefficients of the transfer function. Note
that the {A, b, c, d} of this realization is identical to its CT counterpart in Example 8.9.2.
The MATLAB function tf2ss, an acronym for transfer function to ss equation, carries
out realizations. For the transfer function in (11.74), typing in the command window
>> n=[3 5 24 23 -5];de=[2 6 15 12 5];
>> [a,b,c,d]=tf2ss(n,de)
will yield
This ss equation is the DT counterpart of the CT equation in (8.14) and (8.15). It has dimension three and needs
three unit-delay elements as shown in Figure 11.14. We assign the output of each unit-delay
element as a state variable xi [n]. Then its input is xi [n + 1]. To carry out simulation, we
write (11.77) explicitly as
We use the first equation to generate x1 [n + 1] in Figure 11.14 using dashed lines. We use
the second equation to generate x2 [n + 1] using dotted lines. Using the last equation and the
output equation in (11.78), we can readily complete the connections in Figure 11.14. This is
the DT counterpart of Figure 8.8. Replacing every integrator by a unit-delay element, Figure
8.8 becomes Figure 11.14.
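Carrying out the simulation amounts to repeating the two equations once per sampling instant, with each unit-delay element acting as a stored number. The following Python sketch of ours checks the recursion against the realization (11.73):

```python
import numpy as np

def ss_step_response(A, b, c, d, N=10):
    """Simulate x[n+1] = A x[n] + b u[n], y[n] = c x[n] + d u[n] for a step input."""
    A, b, c = np.atleast_2d(A), np.atleast_1d(b), np.atleast_1d(c)
    x = np.zeros(A.shape[0])
    y = []
    for n in range(N):
        u = 1.0                      # step input
        y.append(float(c @ x + d * u))
        x = A @ x + b * u            # store for the next sampling instant
    return y

# Realization (11.73) of H(z) = (3z^3 - 2z^2 + 5)/z^3
A = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
b = [1, 0, 0]; c = [-2, 0, 5]; d = 3
print(ss_step_response(A, b, c, d, 5))   # [3.0, 1.0, 1.0, 6.0, 6.0]
```

These values agree with summing the impulse response 3, −2, 0, 5, 0, … of H(z).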
Additions and multiplications are basic operations in every PC, DSP processor, or specialized
hardware. A unit-delay element is simply a memory location. We store a number in
the location and then fetch it in the next sampling instant. Thus a basic block diagram will
provide a schematic diagram for specialized hardware implementation of the ss equation. See,
11.10. DIGITAL PROCESSING OF CT SIGNALS 319
Figure 11.15: (a) Analog processing of a CT signal. (b) Digital processing of a CT signal.
for example, Reference [C7]. In conclusion, ss equations are used in computer computation,
real-time processing, and specialized hardware implementation. Even so, they are not used,
as discussed in Section 9.10, in design.
Consider the CT signal in (11.79):

u(t) = cos 0.2t + 0.2 sin 25t    (11.79)

It consists of two sinusoids. Suppose cos 0.2t is the desired signal and 0.2 sin 25t is noise. We
first use a CT or analog filter and then a DT or digital filter to pass the desired signal and to
stop the noise as shown in Figure 11.15.
Consider H(s) = 2/(s + 2). It is a lowpass filter with 3-dB bandwidth 2 rad/s. Because

H(j0.2) = 2/(2 + j0.2) ≈ 2/2 = 1

and

H(j25) = 2/(j25 + 2) ≈ 2/j25 = 0.08e^{−jπ/2}
the steady-state response of the filter excited by u(t) is

yss(t) ≈ cos 0.2t + 0.2 × 0.08 sin(25t − π/2) = cos 0.2t + 0.016 sin(25t − π/2)

Thus the filter passes the desired signal and attenuates the noise greatly. We plot in
Figures 11.16(a) and (b) the input and output of the analog filter. Indeed the noise 0.2 sin 25t
Figure 11.16: (a) The CT signal in (11.79). (b) The output y(t) of the analog filter 2/(s + 2)
in Figure 11.15(a). (c) The output ȳ(t) in Figure 11.15(b), that is, the output y[n] of the
digital filter (z + 1)/(10z − 8) passing through a zero-order hold. (aa) Small segment of (a).
(bb) Small segment of (b). (cc) Small segment of (c)
The first line is the number of samples to be used and the selected sampling period. The
second line is the sampled input or, equivalently, the output of the ADC in Figure 11.15(b).
The third line uses transfer function (tf) to define the DT system by including the sampling
period T . We mention that the T used in defining the DT filter must be the same as the
sampling period used in the input signal. The function lsim, an acronym for linear simulation,
computes the output of the digital filter (z + 1)/(10z − 8). It is the sequence of numbers y[n]
shown in Figure 11.15(b). The MATLAB function stairs carries out zero-order hold to yield
the output ȳ(t). It is the output of the DAC shown in Figure 11.15(b). We name and save the
program as f1116. Typing in the command window of MATLAB >> f1116 will yield Figure
11.16(c) in a figure window. The result is comparable to the one in Figure 11.16(b) obtained
using an analog filter. Note that even though the responses in Figures 11.16(b) and (c) are
indistinguishable, the former is a continuous function of t as shown in Figure 11.16(bb), the
latter is a staircase function as shown in Figure 11.16(cc).
In computer computation or specialized hardware implementation, the digital filter H(z) =
(z + 1)/(10z − 8) must be realized as an ss equation. We write H(z) as

H(z) = (z + 1)/(10z − 8) = (0.1z + 0.1)/(z − 0.8) = 0.1 + 0.18/(z − 0.8)
then its controllable-form realization is

x[n + 1] = 0.8x[n] + u[n]
y[n] = 0.18x[n] + 0.1u[n]

This is the equation in Figure 11.15(b) and is used in Program 11.2. Note that the realization
can be obtained in MATLAB by typing

>> [a,b,c,d]=tf2ss([1 1],[10 -8])
To conclude this example, we see from Figures 11.16(b) and (c) that it takes some finite
time for the outputs to reach steady state. The time constant of the analog filter 2/(s + 2) is
1/2 = 0.5 and the output in Figure 11.16(b) takes roughly 2.5 seconds to reach steady state.
The time constant of the digital filter (z + 1)/(10z − 8) is nc = −1/ ln 0.8 = 4.48 and the
response in Figure 11.16(c) takes roughly 5nc T = 5×4.48×0.1 = 2.24 seconds to reach steady
state. It is important to remember that no physical system can respond instantaneously; it
always takes some finite time to respond. However the response time is usually very small,
thus we often pay no attention to it.
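The settling times quoted above follow from the time-constant formulas; a quick check of ours:

```python
import numpy as np

# Analog filter 2/(s+2): pole at -2, time constant 1/2 = 0.5 s,
# so the response settles in about 5 * 0.5 = 2.5 s.
# Digital filter (z+1)/(10z-8): pole at 0.8, nc = -1/ln(0.8) samples,
# and with sampling period T = 0.1 s it settles in about 5*nc*T seconds.
nc = -1 / np.log(0.8)
print(round(nc, 2))            # 4.48
print(round(5 * nc * 0.1, 2))  # 2.24
```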
Figure 11.17: (a) Magnitude spectrum in [0, 11025] (Hz) for fs = 22050 and in normalized
positive Nyquist frequency range [0, π] (rad/s). (b) Magnitude spectrum in [0, 2756] (Hz) for
f¯s = fs /4 and in normalized positive Nyquist frequency range [0, π] (rad/s).
Figure 11.18: Magnitude response of the filter in (11.81).
The magnitude spectrum of the middle-C signal is practically zero for frequencies larger
than 2500 Hz as shown in Figure 11.17(a). Thus we may consider it to be bandlimited to
fmax = 2500 and the sampling frequency can be selected as f¯s > 2fmax = 5000. Clearly the
sampling frequency fs = 22050 used is unnecessarily large or its sampling period T = 1/fs is
unnecessarily small. As discussed in Subsection 5.5.2, in order to utilize the recorded data,
we select f¯s = fs /4 = 5512.5 or the new sampling period as T̄ = 4T . For this f¯s , the positive
Nyquist frequency range becomes [0, 2756] in Hz and the computed magnitude spectrum is
shown in Figure 5.12 and repeated in Figure 11.17(b). We see that the spikes are less clustered
and it will be simpler to design a digital filter to pass one spike and stop the rest.
In order to design a digital filter to pass the part of the waveform with frequency centered
around 522 Hz in the Nyquist frequency range [0, 2756] in Hz, we must normalize 522 to
522 × π/2756 = 0.595 in the normalized Nyquist frequency range [0, π] in rad/s. Consider the
digital filter
H(z) = (z + 1)(z − 1) / [(z − 0.98e^{j0.595})(z − 0.98e^{−j0.595})] = (z^2 − 1) / [z^2 − 0.98(2 cos 0.595)z + 0.98^2]    (11.81)
Recall that the function freqz automatically selects 512 frequencies in [0, π] and then computes
the frequency response. We plot only the magnitude response of (11.81) in Figure 11.18. We see that
the filter in (11.81) has a narrow spike around 0.595 rad/s in its magnitude response; thus it
will pass the part of the signal whose spectrum is centered around 0.595 rad/s and greatly attenuate
the rest. The filter has degree 2 and is called a resonator. See Reference [C7].
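The narrow peak of the resonator can be confirmed by evaluating (11.81) on the unit circle; a sketch of ours, with 0.3 rad/s as an arbitrary off-resonance frequency:

```python
import numpy as np

def H(w):
    """Frequency response of the resonator (11.81) at frequency w (rad/s)."""
    z = np.exp(1j * w)
    return (z**2 - 1) / (z**2 - 0.98 * 2 * np.cos(0.595) * z + 0.98**2)

peak = abs(H(0.595))
print(peak > 40)                 # True: peak magnitude is roughly 50
print(peak > 10 * abs(H(0.3)))   # True: sharp drop away from the resonance
```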
Although the filter in (11.81) is designed using T = 1, it is applicable for any T > 0.
Because the peak magnitude of H(z) is roughly 50 as shown in Figure 11.18, the transfer
function 0.02H(z) has its peak magnitude roughly 1 and will carry out only filtering without
amplification. We plot in Figure 11.19(a) the middle-C signal, for t from 0 to 0.12 second, ob-
tained using the sampling period T̄ = 4T = 4/22050 and using linear interpolation. Applying
this signal to the filter 0.02H(z) yields the output shown in Figure 11.19(b). It is obtained
from the program
where u is the signal in Figure 11.19(a). We see from Figure 11.19(b) that there is some
transient response. We plot in Figure 11.19(c) a small segment from 0.05 to 0.075 second
of Figure 11.19(b). It is roughly a sinusoid. It has 13 cycles in 0.025 second and thus has
frequency 13/0.025 = 520 Hz.
The number of samples in Figure 11.19 is 5500. The non-real-time processing y=lsim(dog,
u) takes 0.016 second to complete. This elapsed time is obtained using tic and toc.
Problems
11.1 The impulse response of the savings account studied in Example 7.3.2 was computed as
h[n] = (1.00015)^n, for n ≥ 0. What is its transfer function? Is the result the same as
the one obtained in Problem 7.22?
11.2 Use the z-transform to solve Problem 7.8. Are the results the same?
11.3 The 20-point moving average studied in Example 7.4.1 can be described by the convo-
lution or nonrecursive difference equation in (7.12) and the recursive difference equation
in (7.13). Compute the transfer functions from (7.12) and (7.13). Are they the same?
11.4 Consider a DT system with transfer function

H(z) = Y(z)/U(z) = (2z^2 + 5z + 3)/(4z^2 + 3z + 1)
11.5 Find the poles and zeros for each of the following transfer functions.

1. H1(z) = (3z + 6)/(2z^2 + 2z + 1)

2. H2(z) = (z^{−1} − z^{−2} − 6z^{−3})/(1 + 2z^{−1} + z^{−2})
11.6 Compute the poles of the transfer functions of (7.12) and (7.13) computed in Problem
11.3. Do they have the same set of poles?
11.7 Find the impulse and step responses of a DT system with transfer function

H(z) = 0.9z / [(z + 1)(z − 0.8)]
Figure 11.19: (a) Middle-C signal. (b) The output of 0.02H(z). (c) Its small segment.
11.8 Verify that the output of H(z) = (z + 1)/(−8z + 10) excited by u[n] = sin 0.1n is given
by

y[n] = −0.374(1.25)^n + 0.91 sin(0.1n + 0.42)

for n ≥ 0.
11.9 Consider (11.28) or

Z^{−1}[(b1 z + b0)/(z − p)] = k0 δd[n] + k1 p^n

for n ≥ 0. Verify k0 = −b0/p and k1 = (b1 p + b0)/p.
11.10 What is the general form of the response of

H(z) = (z^2 + 2z + 1) / [(z − 1)(z − 0.5 + j0.6)(z − 0.5 − j0.6)]

excited by a step sequence?
11.11 Consider the DT transfer function

H(z) = N(z) / [(z + 0.6)^3 (z − 0.5)(z^2 + z + 0.61)]
where N (z) is a polynomial of degree 4 with leading coefficient 2 and H(1) = 10. What
is the general form y[n] of its step response? What is y[n] as n → ∞? What are y[n],
for n = 0 : 2?
11.12 Find polynomials of degree 1 which are, respectively, CT stable and DT stable, CT
stable but not DT stable, DT stable but not CT stable, and neither CT stable nor DT
stable.
11.13 Determine the DT stability of the following systems:

1. (z + 1) / [(z − 0.6)^2 (z + 0.8 + j0.6)(z + 0.8 − j0.6)]

2. (3z − 6) / [(z − 2)(z + 0.2)(z − 0.6 + j0.7)(z − 0.6 − j0.7)]

3. (z − 10) / [z^2 (z + 0.95)]
Read the needed values from the plots in Problem 11.16. How many samples will it take
to reach steady state?
for all n in (−∞, ∞). Verify that for the ideal lowpass filter defined in (11.60), its inverse
DT Fourier transform is

h[n] = sin[ωc(n − n0)] / [π(n − n0)]

for all integers n in (−∞, ∞). Because h[n] ≠ 0 for all n < 0, the ideal filter is not
causal and cannot be built in the real world.
11.21 Find a state-space equation realization of the transfer function in Problem 11.4.
11.22 Find a realization for the transfer function in Problem 11.1. Is the result the same as
the one in Problem 7.19?
11.23 Find a one-dimensional realization of H(z) = z/(z − 0.8). Find a two-dimensional
realization of H(z) by introducing the factor z − 0.5 to its numerator and denominator.
11.24 Draw a basic block diagram for the system in Problem 11.4.
11.25 Let X(z) be the z-transform of x[n]. Show that if all poles of X(z), except possibly a simple
pole at z = 1, have magnitudes less than 1, then x[n] approaches a constant
(zero or nonzero) as n → ∞ and

lim_{n→∞} x[n] = lim_{z→1} (z − 1)X(z)

This is called the final-value theorem of the z-transform. It is the DT counterpart of
the one in Problem 9.27. Before using the equation, we must first check, essentially, the
stability of X(z). The theorem is not as useful as Theorem 11.4.
References
(Brackets with an asterisk denote books on Signals and Systems)
[A1] Anderson, B.D.O., and J.B. Moore, Optimal Control - Linear Quadratic Methods, Upper
Saddle River, NJ: Prentice Hall, 1990.
[B2] Burton, D., The History of Mathematics, 5/e, New York: McGraw-Hill, 2003.
[C1]* Cadzow, J.A. and H.F. Van Landingham, Signals and Systems, Prentice-Hall, 1985.
[C2]* Carlson, G.E., Signal and Linear System Analysis, 2/e, New York: Wiley, 1998.
[C3] Carson, J.R., Electric Circuit Theory and Operational Calculus, New York: Chelsea
Publication, 1926.
[C4]* Cha, P.D. and J.I. Molinder, Fundamentals of Signals and Systems: A Building Block
Approach, Cambridge University Press, 2006.
[C5] Chen, C.T., Analog and Digital Control System Design: Transfer-Function, State-Space,
and Algebraic Methods, New York: Oxford University Press, 1993.
[C6] Chen, C.T., Linear System Theory and Design, 3/e, New York: Oxford University Press,
1999.
[C7] Chen, C.T., Digital Signal Processing: Spectral Computation and Filter Design, New
York: Oxford University Press, 2001.
[C8]* Chen, C.T., Signals and Systems, 3/e, New York: Oxford University Press, 2004.
[C9] Chua, L.O., C.A. Desoer, and E.S. Kuh, Linear and Nonlinear Circuits, New York:
McGraw-Hill, 1987.
[C10] Couch, L.W. II, Digital and Analog Communication Systems, 7/e, Upper Saddle River,
NJ: Prentice Hall, 2007.
[C11] Crease, R.P., The Great Equations, New York: W.W. Norton & Co., 2008.
[D1] Dorf, R.C. and R.H. Bishop, Modern Control Systems, 11/e, Upper Saddle River, NJ:
Prentice Hall, 2008.
[D2] Dunsheath, P., A History of Electrical Power Engineering, Cambridge, Mass.: MIT
Press, 1962.
[E2] Edwards, C.H. and D.E. Penney, Differential Equations and Boundary Value Problems,
3/e, Upper Saddle River, NJ: Pearson Education Inc., 2004.
[E3]* ElAli, T.S. and M.A. Karim, Continuous Signals and Systems with MATLAB, 2/e,
Boca Raton, FL: CRC Press, 2008.
[E4] Epp, S.S., Discrete Mathematics with Applications, 2/e, Boston: PWS Publishing
Co., 1995.
[F1] Fraden, J., Handbook of Modern Sensors: Physics, Design, and Applications, Woodbury:
American Institute of Physics, 1997.
[G1]* Gabel, R.A. and R.A. Roberts, Signals and Linear Systems, 3/e, New York: John
Wiley, 1987.
[G2] Gardner, M.F. and J.L. Barnes, Transients in Linear Systems: Studied by the Laplace
Transformation, Volume 1, New York: John Wiley, 1942.
[G3] Giancoli, D.C., Physics: Principles with Applications, 5/e, Upper Saddle River, NJ:
Prentice Hall, 1998.
[G4]* Girod, B., R. Rabenstein, and A. Stenger, Signals and Systems, New York: Wiley,
2003.
[G5]* Gopalan, K., Introduction to Signal and System Analysis, Toronto: Cengage Learning,
2009.
[H1]* Haykin, S. and B. Van Veen, Signals and Systems, 2/e, New York: John Wiley, 2003.
[H2] Haykin, S., Communication Systems, 4/e, New York: John Wiley, 2001.
[H3] Hayt, W.H., J.E. Kemmerly, and S.M. Durbin, Engineering Circuit Analysis, 6/e,
New York: McGraw Hill, 2002.
[H4] Horenstein, M.N., Microelectronic Circuits and Devices, 2/e, Upper Saddle River, NJ:
Prentice Hall, 1996.
[H5]* Houts, R.C., Signal Analysis in Linear Systems, New York: Saunders, 1991.
[I1] Irwin, J.D. and R.M. Nelms, Basic Engineering Circuit Analysis, 8/e, New York:
John Wiley, 2005.
[I2] Isaacson, W., Einstein: His Life and Universe, New York: Simon & Schuster, 2007.
[J1]* Jackson, L.B., Signals, Systems, and Transforms, Boston: Addison Wesley, 1991.
[J2] Johnson, C.D., Process Control Instrumentation Technology, New York: John Wiley,
1977.
[K1] Kailath, T., Linear Systems, Upper Saddle River, NJ: Prentice Hall, 1980.
[K2]* Kamen, E.W. and B.S. Heck, Fundamentals of Signals and Systems, 3/e, Upper Saddle
River, NJ: Prentice Hall, 2007.
[K3] Kammler, D.W., A First Course in Fourier Analysis, Upper Saddle River, NJ: Prentice
Hall, 2000.
[K4] Kingsford, P.W., Electrical Engineering: A History of the Men and the Ideas, New
York: St. Martin’s Press, 1969.
[K5] Körner, T.W., Fourier Analysis, Cambridge, U.K.: Cambridge Univ. Press, 1988.
[K6]* Kudeki, E. and D.C. Munson Jr., Analog Signals and Systems, Upper Saddle River,
NJ: Prentice Hall, 2009.
[K7]* Kwakernaak, H. and R. Sivan, Modern Signals and Systems, Upper Saddle River, NJ:
Prentice Hall, 1991.
[L1]* Lathi, B. P. Linear Systems and Signals, New York: Oxford Univ. Press, 2002.
[L2] Lathi, B. P. Modern Digital and Analog Communication Systems, 3/e, New York: Oxford
Univ. Press, 1998.
[L3]* Lee, E.A., and P. Varaiya, Structure and Interpretation of Signals and Systems, Boston,
Addison Wesley, 2003.
[L4] Lindley, D., The end of Physics, New York: BasicBooks, 1993.
[L5]* Lindner, D.K., Introduction to Signals and Systems, New York: McGraw-Hill, 1999.
[L6]* Liu, C.L., and Jane W.S. Liu, Linear Systems Analysis, New York: McGraw-Hill, 1975.
[L7] Love, A. and J.S. Childers, Listen to Leaders in Engineering, New York: David McKay
Co., 1965.
[M1]* Mandal, M., and A. Asif, Continuous and Discrete Time Signals and Systems, New
York: Cambridge University Press, 2009.
[M2] Manley, J.M., “The concept of frequency in linear system analysis,” IEEE Communications
Magazine, vol. 20, issue 1, pp. 26-35, 1982.
[M3]* McGillem, C.D., and G.R. Cooper, Continuous and Discrete Signal and System Anal-
ysis, 3/e, New York: Oxford University Press, 1983.
[M4]* McClellan, J.H., R.W. Schafer, and M.A. Yoder, Signal Processing First, Upper Saddle
River, NJ: Prentice Hall, 2003.
[M6]* McMahon, D., Signals and Systems Demystified, New York: McGraw-Hill, 2007.
[O1]* Oppenheim, A.V., and A.S. Willsky, Signals and Systems, Upper Saddle River, NJ:
Prentice Hall, 1983. Its 2nd edition, with S.H. Nawab, 1997.
[P2]* Phillips, C.L., J.M. Parr, and E.A. Riskin, Signals, Systems, and Transforms, 3/e,
Upper Saddle River, NJ: Prentice Hall, 2003.
[P3] Pipes, L.A. and L.R. Harville, Applied Mathematics for Engineers and Physicists, 3/e,
New York: McGraw-Hill, 1970.
[P4] Pohlmann, K.C., Principles of Digital Audio, 4/e, New York: McGraw-Hill, 2000.
[P5]* Poularikas, A., and S. Seely, Signals and Systems, Boston: PWS-Kent, 1994.
[P6]* Poularikas, A., Signals and Systems Primer with MATLAB, Boca Raton, FL: CRC
Press, 2007.
[R1]* Roberts, M.J., Fundamentals of Signals and Systems, New York: McGraw-Hill, 2008.
[S1] Schroeder, M.R., Computer Speech, 2/e, New York: Springer, 2004.
[S2] Sedra, A.S., and K.C. Smith, Microelectronic Circuits, 5/e, New York: Oxford University
Press, 2003.
[S3]* Sherrick, J.D., Concepts in Systems and Signals, 2/e, Upper Saddle River, NJ: Prentice
Hall, 2004.
[S4] Singh, S., Fermat’s Enigma, New York: Anchor Books, 1997.
[S5] Singh, S., Big Bang, New York: Harper Collins Publishers, Inc., 2004.
[S6]* Siebert, W.M., Circuits, Signals, and Systems, New York: McGraw-Hill, 1986.
[S7] Slepian, D., “On Bandwidth,” Proceedings of the IEEE, vol. 64, no. 3, pp. 292-300,
1976.
[S8]* Soliman, S., and M. Srinath, Continuous and Discrete Signal and Systems, Upper
Saddle River, NJ: Prentice Hall, 1990.
[S9] Sony (UK) Ltd, Digital Audio and Compact Disc Technology, 1988.
[S10]* Stuller, J.A., An Introduction to Signals and Systems, Ontario, Canada: Thomson,
2008.
[S11]* Sundararajan, D., A Practical Approach to Signals and Systems, New York: John
Wiley, 2009.
[T1] Terman, F.E., “A brief history of electrical engineering education”, Proc. of the IEEE,
vol. 64, pp. 1399-1404, 1976. Reprinted in vol. 86, pp. 1792-1800, 1998.
[U1] Uyemura, J.P., A First Course in Digital Systems Design, New York: Brooks/Cole,
2000.
[W1] Wakerly, J.F., Digital Design: Principles & Practices, 3/e, Upper Saddle River, NJ:
Prentice Hall, 2000.
[W2] Wikipedia, web-based encyclopedia at wikipedia.org
[Z1] Zemanian, A.H., Distribution Theory and Transform Analysis, New York: McGraw-Hill,
1965.
[Z2]* Ziemer, R.E., W.H. Tranter, and D.R. Fannin, Signals and Systems, 4/e, Upper Saddle
River, NJ: Prentice Hall, 1998.
Index

  recursive, 146
Differential equation, 176, 185
Differentiator, 180, 183
  op-amp circuit, 179
Digital processing, 25, 319
  non-real-time, 26
  real-time, 27, 150
Digital signal, 20, 25
  quantization, 20
Digital-to-analog converter (DAC), 25
Dirac delta function, 22
Direct transmission part, 150, 170
Dirichlet conditions, 64
Discrete Fourier transform (DFT), 100
Discrete-time Fourier transform, 85
Discrete-time (DT) signals, 19
  positive-time, 29
  time index, 19
Discrete-time (DT) systems, 124
Discretization, 172
Distortionless transmission, 248, 314
Distributed systems, 187, 227, 291, 303
Divisor, 60
  greatest common (gcd), 60
Dominant pole, 260
Downsampling, 170
Energy storage elements, 169
Envelope, 80
  primary, 80
Euler’s formula, 36, 58, 61
Exponential functions, 28, 58
  complex, 58
  real, 28
Exponential sequences, real, 31
Fast Fourier transform (FFT), 100
Feedback connection, 158, 269
  advantage of, 273
  necessity of, 272
  unity, 275
Feedback model of op-amp circuit, 283
Filters,
  anti-aliasing, 25, 89, 97
  bandpass, 236
  frequency selective, 236, 312
  FIR, 145
  ideal, 216, 248, 312, 314
  IIR, 145
  lowpass, 236, 312
  moving-average, 144
  passband, 236, 312
  stopband, 236, 312
  tolerance, 236, 312
  transition band, 236, 312
Final-value theorem,
  Laplace transform, 257
  z-transform, 326
Finite impulse response (FIR), 145, 188, 291
First-order hold, 20
Forced response, 141, 164, 308
Fourier series,
  coefficients, 61
  complex, 61, 66
  real, 61
Fourier transform, 63, 243
  continuous-time, 63, 243
  discrete-time, 85
  in system analysis, 244
  inverse, 63
  of sinusoids, 75
  sufficient conditions, 64
Frequency, 58
  aliased, 81, 84
  carrier, 69
  cutoff, 236, 248, 312
  index, 61, 100
  negative, 58
  range, 58, 79
  resolution, 100
  sampling, 19, 58
Frequency aliasing, 84, 96
Frequency bandwidth, 75, 239
Frequency division multiplexing, 71
Frequency domain, 65, 240, 249
Frequency index, 61, 100
Frequency range, 58, 79
Frequency resolution, 100
Frequency response, 233, 237, 246, 247, 308, 313
Frequency selective filters, 236, 312
  See also filters
Frequency shifting, 68
Frequency spectrum, 65, 85, 247, 513
  discrete, 76
  of pure sinusoids, 75, 76
  magnitude, 65, 85
  phase, 65, 85
Friction, 167
  Coulomb, 167
  static, 168
  viscous, 168
Function, 17
  sinusoidal, 58
  real exponential, 28
  time constant, 28
  staircase, 18
  step, 37
Fundamental frequency, 59, 60
  harmonic, 60
Fundamental period,
  CT periodic signal, 59
  DT periodic sequence, 77
Group delay, 236, 312
Homogeneity, 126, 142, 165
Ideal interpolator, 96
Ideal op amp, 133
Identification, 242
  parametric, 243
Impedances, 184
Impulse, 22
  sifting property, 23
  weight, 22
Impulse response, 143, 166, 173, 291
  finite (FIR), 145, 188
  infinite (IIR), 145, 188
Impulse sequence, 29
Infinite impulse response (IIR), 145, 188, 291
Initial conditions, 141, 164, 194
Initial relaxedness, 125, 141
Integrable, 42
  absolutely, 42, 224
  squared absolutely, 43
Integrator, 178, 183
  op-amp circuit, 179
Internal description, 175, 196, 200
Interpolation, 20
  first-order hold, 20
  ideal, 96
  linear, 20
  zero-order hold, 20, 25
Inverse system, 278
  feedback implementation of, 278
Inverter, 134, 148
Inverting amplifier, 134
Isolating amplifier, 131
Jury table, 305
Jury test, 304
Kronecker delta sequence, 29
Laplace transform, 181, 243
  final-value theorem, 257
  inverse, 215
  region of convergence, 213
  table of, 215
  two-sided, 213
  variable, 181, 298
Leading coefficient, 194, 304
  subsequent, 304
Linear operators,
  Fourier transform, 63
  Laplace transform, 181
  z-transform, 153
Linearity, 126, 142, 165
Linear simulation (lsim), 172
Loading problem, 270
Loop, 161
Lumped systems, 126, 187, 291, 303
  modeling, 167
Magnitude, 27, 36
Magnitude response, 233, 308
Magnitude spectrum, 65, 105
Marginal stability, 223
Matrix, 38
  dimension, 38
  order, 38
  transpose, 39
Minimal realization, 180
Mod, 38, 80
Model reduction, 261
  operational frequency range, 261
Modulated signal, 69
Modulation, 69
  not an LTI process, 248
Modulo, 38
Moving average, 144
Multiplier, 148, 178
  op-amp circuit, 138
Natural response, 142, 164
Non-real-time processing, 27
Nyquist frequency range (NFR), 79, 88, 309
Nyquist rate, 84
Nyquist sampling theorem, 96
Op-amp circuit, 282
  feedback model, 282
Operational amplifier (op amp), 128
  modeled as memoryless, 129
  ideal, 133
  limitation, 133
  linear, 129
  nonlinear, 129
  LTI system with memory, 260
Operational frequency range, 261, 284
Output equation, 150, 170
Overshoot, 210
Parametric identification, 243
Parallel connection, 157, 268
Parseval’s formula, 68
Partial fraction expansion, 216
  direct term, 216
  residues, 216
Passband, 236
  edge frequency, 236
  tolerance, 236
Passive elements, 228
Periodic signals,
  CT, 59
  fundamental frequency, 59
  fundamental period, 59, 77
  DT, 77
Phase, 36
Phase response, 233, 308
Phase spectrum, 65, 97
Phasor analysis, 245
Piano’s middle C, 2, 4
  its spectrum, 109
  its processing, 323
PID controller, 187, 262
Pole, 210, 293
  repeated, 211, 293, 300
  responses of, 221, 299
  simple, 211, 293
Pole placement, 277
Pole-zero cancelation, 271
Polynomial, 194
  CT stable, 228, 255, 305
  degree of, 194
  DT stable, 304, 305
  leading coefficient of, 194, 228
  roots of, 195, 304
Principal form of DT sinusoids, 80
Pulse, 21
  triangular, 23
Pulse-amplitude modulation (PAM), 19, 23
Pulse-code modulation (PCM), 19
Quantization, 20
  error, 20
Rational function, 186, 291
  biproper, 186
  coprime, 195
  degree, 195
  improper, 186, 291
  proper, 186, 292
  strictly proper, 186
Real number, 15
  integer, 15
  irrational, 16
  rational, 15
Realization, 189, 315
  controllable form, 191, 316
  minimal, 196
Real-time processing, 27
Region of convergence, 213, 288
Resonance, 249
  frequency, 249
Response time, 210, 233
Responses, 124
  forced, 141, 164
  impulse, 143, 166, 173
  natural, 142, 164
  steady-state, 230, 305
  step, 173, 219, 296
  transient, 230, 306
  zero-input, 142, 164
  zero-state, 141, 164
Routh table, 229
Routh test, 229
Sample-and-hold circuit, 131
Sampling,
  frequency, 19, 79
  instant, 19
  period, 19, 126
  rate, 19, 80, 84
Sampling function, 62, 95
Sampling theorem, 84, 96
  its dual, 118
Savings account, 140, 144, 146, 160
  its convolution, 146
  its difference equation, 147
  its impulse response, 145
Scalar, 38
Seismometer, 263
  operational frequency range, 265
Sequences, 20
  impulse, 29
  Kronecker delta, 29
  sinusoidal, 77
  step, 31
Signals, 17
  analog, 25
  aperiodic, 59
  band-limited, 95
  baseband, 75
  CT, 17
  digital, 20, 25
  DT, 19
  energy of, 43
  finite duration, 44
  frequency bandwidth, 73
  frequency-domain description, 65
  modulated, 69
  periodic, 58
  time-domain description, 65
Sinusoid,
  CT, 57, 58
  DT, 77
  frequency, 80
  principal form, 80
Spectrogram, 117
Spectrum,
  See frequency spectrum
Spring constant, 167
Stability,
  asymptotic, 223
  BIBO, 223, 301
  marginal, 223
State, 150, 169, 170, 171
  initial, 150, 164, 171
  variable, 149, 150, 169
State equation, 150, 170
  discretization, 172
  step size, 172
State-space (ss) equation, 149, 170, 313
  computation of, 172
  RLC circuits, 169
Steady-state response,
  CT, 230, 235
  DT, 305, 311
Step response, 173, 219, 227, 294, 296
Step size, 172
Summable, 41
  absolutely, 41, 301, 314
  squared absolutely, 43
Superposition, 128
Sylvester resultant, 206
System, 1, 122
  causal, 125
  continuous-time (CT), 124
  discrete-time (DT), 124, 126
  distributed, 187, 227
  linear, 126, 142, 165
  lumped, 187, 291
  memoryless, 126
  MIMO, 124
  nonlinear, 127, 165
  SISO, 124
  time-invariant, 125, 126
  time-varying, 125
  See also filters
Time constant,
  of real exponential function, 28
  of real exponential sequence, 31
  of stable system, 232, 307
Time-domain, 249
  description, 182, 200
  specifications, 65
Time expansion, 71
Time-limited band-limited theorem, 74
Time shifting, 28, 125, 142, 165
Transducer, 1, 121
Transfer function,
  CT, 182
  DT, 154, 291
  negative-power, 292
  positive-power, 292
  See also rational functions
Transform impedance, 184
Transient response, 230, 306
Tuning fork, 1, 3
  its spectrum, 107
Unit-advance element, 156, 292
Unit-delay element, 148, 155
Vector, 36
  column, 38
  row, 38
Viscous friction coefficient, 168
Voltage follower, 131
Wien-bridge oscillator, 280
  feedback model, 283
Zero, 210, 293
Zero-input response, 142, 164
Zero-order hold, 20, 25
Zero/pole/gain form, 211, 294
Zero-state response, 141, 164
z-transform, 153, 287
  inverse, 294
  negative-power form, 290
  positive-power form, 290
  region of convergence, 288
  table of, 290
  variable, 153, 298