Feedback and Control System Chapter 1. Introduction To Control System
In modern usage the word system has many meanings. So let us begin by defining what we mean
when we use this word in this book, first abstractly, and then slightly more specifically in relation to the
scientific literature.
Definition 1.1a: A system is an arrangement, set, or collection of things connected or related in such a
manner as to form an entirety or whole.
The word control is usually taken to mean regulate, direct, or command. Combining the above
definitions, we have
Definition 1.2: A control system is an arrangement of physical components connected or related in such
a manner as to command, direct, or regulate itself or another system.
In engineering and science we usually restrict the meaning of control systems to apply to those
systems whose major function is to dynamically or actively command, direct, or regulate. The system
shown in Fig. 1-2, consisting of a mirror pivoted at one end and adjusted up and down with a screw at the
other end, is properly termed a control system. The angle of reflected light is regulated by means of the
screw.
It is important to note, however, that control systems of interest for analysis or design purposes
include not only those manufactured by humans, but those that normally exist in nature, and control
systems with both manufactured and natural components.
Control systems abound in our environment. But before exemplifying this, we define two terms:
input and output, which help in identifying, delineating, or defining a control system.
Definition 1.3: The input is the stimulus, excitation or command applied to a control system, typically
from an external energy source, usually in order to produce a specified response from the control system.
Definition 1.4: The output is the actual response obtained from a control system. It may or may not be
equal to the specified response implied by the input.
Inputs and outputs can have many different forms. Inputs, for example, may be physical
variables, or more abstract quantities such as reference, setpoint, or desired values for the output of the
control system.
The purpose of the control system usually identifies or defines the output and input. If the output
and input are given, it is possible to identify, delineate, or define the nature of the system components.
EXAMPLE 1.1. An electric switch is a manufactured control system, controlling the flow of electricity.
By definition, the apparatus or person flipping the switch is not a part of this control system.
Flipping the switch on or off may be considered as the input. That is, the input can be in one of
two states, on or off. The output is the flow or nonflow (two states) of electricity.
The electric switch is one of the most rudimentary control systems.
Control systems are classified into two general categories: open-loop and closed-loop systems.
The distinction is determined by the control action, that quantity responsible for activating the system to
produce the output.
The term control action is classical in the control systems literature, but the word action in this
expression does not always directly imply change, motion, or activity. For example, the control action in
a system designed to have an object hit a target is usually the distance between the object and the target.
Distance, as such, is not an action, but action (motion) is implied here, because the goal of such a control
system is to reduce this distance to zero.
Definition 1.5: An open-loop control system is one in which the control action is independent of the
output.
Definition 1.6: A closed-loop control system is one in which the control action is somehow dependent on
the output.
EXAMPLE 1.6. Most automatic toasters are open-loop systems because they are controlled by a timer.
The time required to make "good toast" must be estimated by the user, who is not part of the system.
Control over the quality of toast (the output) is removed once the time, which is both the input and the
control action, has been set. The time is typically set by means of a calibrated dial or switch.
EXAMPLE 1.7. An autopilot mechanism and the airplane it controls is a closed-loop (feedback)
control system. Its purpose is to maintain a specified airplane heading, despite atmospheric changes. It
performs this task by continuously measuring the actual airplane heading, and automatically adjusting the
airplane control surfaces (rudder, ailerons, etc.) so as to bring the actual airplane heading into
correspondence with the specified heading. The human pilot or operator who presets the autopilot is not
part of the control system.
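To make Definitions 1.5 and 1.6 concrete, the short Python sketch below drives the same hypothetical first-order heating element in both ways: open-loop, where the input is preset and ignores the output (like the toaster timer), and closed-loop, where the input is recomputed from the measured output. The plant model, gains, and time step are illustrative assumptions, not taken from the text.

```python
# Minimal sketch: one assumed plant driven open-loop and closed-loop.
# Assumed plant: first-order lag  dy/dt = (-y + K*u) / tau, with uncertain gain K.

def simulate(controller, steps=300, dt=0.1, tau=2.0, K=0.8, r=1.0):
    y = 0.0                                   # plant output, initial condition
    for _ in range(steps):
        u = controller(r, y)                  # control action
        y += dt * (-y + K * u) / tau          # integrate the plant one step (Euler)
    return y

# Open-loop: control action is independent of the output (designed assuming K = 1).
open_loop = lambda r, y: r

# Closed-loop: control action depends on the measured output.
closed_loop = lambda r, y: 10.0 * (r - y)     # proportional feedback, assumed gain

print("open-loop final output:  ", round(simulate(open_loop), 3))    # ~0.80, misses r = 1
print("closed-loop final output:", round(simulate(closed_loop), 3))  # ~0.89, much closer
```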
1.4 FEEDBACK
Feedback is that characteristic of closed-loop control systems which distinguishes them from
open-loop systems.
Definition 1.7: Feedback is that property of a closed-loop system which permits the output (or some
other controlled variable) to be compared with the input to the system (or an input to some other
internally situated component or subsystem) so that the appropriate control action may be formed as some
function of the output and input.
More generally, feedback is said to exist in a system when a closed sequence of cause-and-effect
relations exists between system variables.
EXAMPLE 1.8. The concept of feedback is clearly illustrated by the autopilot mechanism of Example
1.7. The input is the specified heading, which may be set on a dial or other instrument of the airplane
control panel, and the output is the actual heading, as determined by automatic navigation instruments. A
comparison device continuously monitors the input and output. When the two are in correspondence,
control action is not required. When a difference exists between the input and output, the comparison
device delivers a control action signal to the controller, the autopilot mechanism. The controller provides
the appropriate signals to the control surfaces of the airplane to reduce the input-output difference.
Feedback may be effected by mechanical or electrical connections from the navigation instruments,
measuring the heading, to the comparison device. In practice, the comparison device may be integrated
within the autopilot mechanism.
The presence of feedback typically imparts the following properties to a control system:
1. Increased accuracy, for example, the ability to faithfully reproduce the input. This property is
illustrated throughout the text.
2. Tendency toward oscillation or instability. This all-important characteristic is considered in detail in
Chapters 5 and 9 through 19.
3. Reduced sensitivity of the ratio of output to input to variations in system parameters and other
characteristics (Chapter 9).
4. Reduced effects of nonlinearities (Chapters 3 and 19).
5. Reduced effects of external disturbances or noise (Chapters 7, 9, and 10).
6. Increased bandwidth. The bandwidth of a system is a frequency response measure of how well the
system responds to (or filters) variations (or frequencies) in the input signal (Chapters 6, 10,
12, and 15 through 18).
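A short numeric illustration of property 3 above (reduced sensitivity), assuming a forward gain G with unity negative feedback so that the output/input ratio is G/(1 + G); the specific numbers are assumptions chosen only to show the effect.

```python
# Sketch: how a 10% drop in forward gain G affects the closed-loop ratio G / (1 + G*H).

def closed_loop_ratio(G, H=1.0):
    return G / (1.0 + G * H)

G_nominal, G_drifted = 100.0, 90.0   # assumed nominal gain and a 10% drift

open_change = (G_drifted - G_nominal) / G_nominal
closed_change = (closed_loop_ratio(G_drifted) - closed_loop_ratio(G_nominal)) / closed_loop_ratio(G_nominal)

print(f"open-loop gain change:    {open_change:+.1%}")    # -10.0%
print(f"closed-loop ratio change: {closed_change:+.2%}")  # about -0.11%
```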
The signals in a control system, for example, the input and the output waveforms, are typically
functions of some independent variable, usually time, denoted t.
Definition 1.8: A signal dependent on a continuum of values of the independent variable t is called a
continuous-time signal or, more generally, a continuous-data signal or (less frequently) an analog
signal.
Definition 1.9: A signal defined at, or of interest at, only discrete (distinct) instants of the independent
variable t (upon which it depends) is called a discrete-time, a discrete-data, a sampled-data, or a digital
signal.
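The distinction in Definitions 1.8 and 1.9 can be sketched in a few lines: a continuous-time signal is defined for every t, while a discrete-time signal keeps only its values at the instants t1, t2, . . . . The sine waveform and the sampling period below are assumptions for illustration.

```python
import math

# Continuous-time signal: defined for a continuum of values of t (here an assumed sine).
def u(t):
    return math.sin(2.0 * math.pi * t)

# Discrete-time (sampled-data) signal: values of u only at the instants t_k = k * T.
T = 0.125                                          # assumed sampling period
u_star = [(k * T, u(k * T)) for k in range(9)]

for t_k, u_k in u_star:
    print(f"t = {t_k:.3f}   u*(t) = {u_k:+.3f}")
```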
2.1 BLOCK DIAGRAMS: FUNDAMENTALS
A block diagram is a shorthand, pictorial representation of the cause-and-effect relationship between
the input and output of a physical system, in which each element is represented by a rectangular block.
The interior of the rectangle usually contains a description of or the name of the element, or the symbol
for the mathematical operation to be performed on the input to yield the output. The arrows represent
the direction of information or signal flow.
EXAMPLE 2.1
The operations of addition and subtraction have a special representation. The block becomes a small
circle, called a summing point, with the appropriate plus or minus sign associated with the arrows
entering the circle. The output is the algebraic sum of the inputs. Any number of inputs may enter a
summing point.
EXAMPLE 2.2
Some authors place a cross (×) inside the summing-point circle; that notation is avoided here because it is sometimes confused with the multiplication operation.
In order to have the same signal or variable be an input to more than one block or summing point,
a takeoff point is used. This permits the signal to proceed unaltered along several different paths to
several destinations.
EXAMPLE 2.3
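A minimal sketch, with assumed signal values, of the two operations described above: the summing point forms the algebraic sum of its inputs with their attached signs, and the takeoff point passes one signal unaltered along several paths.

```python
# Summing point: output is the algebraic sum of the entering signals with their signs.
def summing_point(*signed_inputs):
    return sum(signed_inputs)

r, b = 2.0, 0.5                   # assumed signals entering with + and - signs
e = summing_point(+r, -b)         # e = r - b

# Takeoff point: the same signal proceeds unaltered to several destinations.
c = 1.7
to_feedback_path, to_recorder = c, c

print(e, to_feedback_path, to_recorder)
```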
2.2 BLOCK DIAGRAMS OF CONTINUOUS (ANALOG) FEEDBACK CONTROL SYSTEMS
The blocks representing the various components of a control system are connected in a fashion
which characterizes their functional relationships within the system. The basic configuration of a simple
closed-loop (feedback) control system with a single input and a single output (abbreviated SISO) is
illustrated in Fig. 2-6 for a system with continuous signals only.
Fig. 2-6
We emphasize that the arrows of the closed loop, connecting one block with another, represent
the direction of flow of control energy or information, which is not usually the main source of energy for
the system. For example, the major source of energy for the thermostatically controlled furnace of
Example 1.2 is often chemical, from burning fuel oil, coal, or gas. But this energy source would not
appear in the closed control loop of the system.
It is important that the terms used in the closed-loop block diagram be clearly understood.
Lowercase letters are used to represent the input and output variables of each element as well as
the symbols for the blocks g1, g2, and h. These quantities represent functions of time, unless otherwise
specified.
EXAMPLE 2.4. r = r(t)
In subsequent chapters, we use capital letters to denote Laplace-transformed or z-transformed
quantities, as functions of the complex variable s or z, respectively, or Fourier-transformed quantities
(frequency functions), as functions of the pure imaginary variable jω. Functions of s or z are often
abbreviated to the capital letter appearing alone. Frequency functions are never abbreviated.
The letters r, c, e, etc., were chosen to preserve the generic nature of the block diagram. This
convention is now classical.
Definition 2.1: The plant (or process, or controlled system) g2 is the system, subsystem, process, or
object controlled by the feedback control system.
Definition 2.2: The controlled output c is the output variable of the plant, under the control of the
feedback control system.
Definition 2.3: The forward path is the transmission path from the summing point to the controlled
output c.
Definition 2.4: The feedforward (control) elements g1 are the components of the forward path that
generate the control signal u or m applied to the plant. Note: Feedforward elements typically include
controller(s), compensator(s) (or equalization elements), and/or amplifiers.
Definition 2.5: The control signal u (or manipulated variable m) is the output signal of the feedforward
elements g1, applied as input to the plant g2.
Definition 2.6: The feedback path is the transmission path from the controlled output c back to the
summing point.
Definition 2.7: The feedback elements h establish the functional relationship between the controlled
output c and the primary feedback signal b. Note: Feedback elements typically include sensors of the
controlled output c, compensators, and/or controller elements.
Definition 2.8: The reference input r is an external signal applied to the feedback control system,
usually at the first summing point, in order to command a specified action of the plant. It usually
represents ideal (or desired) plant output behavior.
Definition 2.9: The primary feedback signal b is a function of the controlled output c, algebraically
summed with the reference input r to obtain the actuating (error) signal e, that is, r ± b = e. Note: An
open-loop system has no primary feedback signal.
Definition 2.10: The actuating (or error) signal is the reference input signal r plus or minus the primary
feedback signal b. The control action is generated by the actuating (error) signal in a feedback control
system (see Definitions 1.5 and 1.6). Note: In an open-loop system, which has no feedback, the actuating
signal is equal to r.
Definition 2.11: Negative feedback means the summing point is a subtractor, that is, e = r - b.
Positive feedback means the summing point is an adder, that is, e = r + b.
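Using the nomenclature of Definitions 2.1 through 2.11, the sketch below steps around the loop of Fig. 2-6 with negative feedback (e = r - b). The feedforward and feedback elements are taken as simple gains and the plant as a first-order lag; all of these, and the numeric values, are assumptions meant only to show how r, e, u, c, and b are related.

```python
g1 = 4.0                     # feedforward (control) elements, assumed pure gain: u = g1 * e
h = 1.0                      # feedback elements, assumed pure gain: b = h * c
tau, dt = 1.0, 0.05          # assumed plant g2: first-order lag dc/dt = (-c + u) / tau

r = 1.0                      # reference input
c = 0.0                      # controlled output, initial value
for _ in range(400):
    b = h * c                # primary feedback signal
    e = r - b                # actuating (error) signal, negative feedback
    u = g1 * e               # control signal (manipulated variable)
    c += dt * (-c + u) / tau # plant response, one integration step

print("controlled output c:", round(c, 3), "  actuating signal e:", round(e, 3))
```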
Definition 2.12: A sampler is a device that converts a continuous-time signal, say u(t), into a discrete-
time signal, denoted u*(t), consisting of a sequence of values of the signal at the instants t1, t2, . . ., that is,
u(t1), u(t2), . . ., etc.
Definition 2.13: A hold, or data hold, device is one that converts the discrete-time output of a sampler
into a particular kind of continuous-time or analog signal.
Definition 2.14: An analog-to-digital (A/D) converter is a device that converts an analog or continuous
signal into a discrete or digital signal.
Definition 2.15: A digital-to-analog (D/A) converter is a device that converts a discrete or digital signal
into a continuous- time or analog signal.
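Definitions 2.12 and 2.13 can be sketched as two small functions: a sampler that keeps the values of u(t) at the instants t_k = kT, and a data hold that rebuilds a continuous-time signal from those samples. A zero-order hold (holding each sample constant until the next instant) is assumed here; other hold devices exist.

```python
import math

def u(t):
    return math.sin(t)                 # assumed continuous-time signal

def sampler(u, T, n):
    """Discrete-time signal u*(t): the values of u at the instants t_k = k*T."""
    return [u(k * T) for k in range(n)]

def zero_order_hold(samples, T, t):
    """Data hold: keeps the most recent sample constant until the next sampling instant."""
    k = min(int(t // T), len(samples) - 1)
    return samples[k]

T = 0.5                                # assumed sampling period
u_star = sampler(u, T, 10)
print(zero_order_hold(u_star, T, 1.3)) # returns the sample taken at t = 1.0
```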
EXAMPLE 2.12. Digital computers or microprocessors are often used to control continuous plants or
processes. A/D and D/A converters are typically required in such applications, to convert signals from the
plant to digital signals, and to convert the digital signal from the computer into a control signal for the
analog plant. The joint operation of these elements is usually synchronized by a clock and the resulting
controller is sometimes called a digital filter, as illustrated in Fig. 2-13.
Fig. 2-13
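A compact sketch of the arrangement in Example 2.12: the continuous plant output is sampled (A/D), a control law runs on the samples in the computer, and the result is held between samples (D/A with an assumed zero-order hold) as the plant input. The first-order plant, the proportional law, and all numeric values are illustrative assumptions.

```python
# Sampled-data loop sketch: A/D -> digital control law -> D/A (hold) -> continuous plant.
dt = 0.01                      # assumed integration step for the continuous plant
steps_per_sample = 10          # assumed: controller runs every 10 steps (T = 0.1)
tau, r = 1.0, 1.0              # assumed plant time constant and reference input

c, u_held = 0.0, 0.0           # continuous output; D/A output held between samples
for step in range(500):
    if step % steps_per_sample == 0:   # sampling instant: A/D conversion of c
        e = r - c                      # error formed in the computer
        u_held = 2.0 * e               # assumed digital control law (proportional)
    c += dt * (-c + u_held) / tau      # the held value drives the analog plant

print("output after 5 time units:", round(c, 3))
```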
Definition 2.16: A computer-controlled system includes a computer as the primary control element.
Definition 2.17: A transducer is a device that converts one energy form into another.
Definition 2.18: The command v is an input signal, usually equal to the reference input r. But when the
energy form of the command v is not the same as that of the primary feedback b, a transducer is required
between the command v and the reference input r, as shown in Fig. 2-16(a).
Fig. 2-16
Definition 2.19: When the feedback element consists of a transducer, and a transducer is required at the
input, that part of the control system illustrated in Fig. 2-16(b) is called the error detector.
Definition 2.20: A stimulus, or test input, is any externally (exogenously) introduced input signal
affecting the controlled output c. Note: The reference input r is an example of a stimulus, but it is not the
only kind of stimulus.
Definition 2.21: A disturbance n (or noise input) is an undesired stimulus or input signal affecting the
value of the controlled output c. It may enter the plant with u or m, as shown in the block diagram of Fig.
2-6, or at the first summing point, or via another intermediate point.
Definition 2.22: The time response of a system, subsystem, or element is the output as a function of time,
usually following application of a prescribed input under specified operating conditions.
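As a small illustration of Definition 2.22, the sketch below records the time response of an assumed first-order element to a unit step input applied at t = 0, using the closed-form solution; the time constant is an arbitrary illustrative choice.

```python
import math

tau = 2.0                                  # assumed time constant of the element
for t in [0.0, 1.0, 2.0, 4.0, 8.0]:
    y = 1.0 - math.exp(-t / tau)           # unit-step time response of a first-order lag
    print(f"t = {t:4.1f}   y(t) = {y:.3f}")
```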
Definition 2.23: A multivariable system is one with more than one input (multiinput, MI-), more than
one output (multioutput, -MO), or both (multiinput-multioutput, MIMO).
Definition 2.24: The term controller in a feedback control system is often associated with the elements
of the forward path, between the actuating (error) signal e and the control variable u. But it also
sometimes includes the summing point, the feedback elements, or both, and some authors use the terms
controller and compensator synonymously. The context should eliminate ambiguity.
The following five definitions are examples of control laws, or control algorithms.
Definition 2.25: An on-off controller (two-position, binary controller) has only two possible values at
its output u, depending on the input e to the controller.
EXAMPLE 2.13. A binary controller may have an output u = +1 when the error signal is positive, that is,
e > 0, and u = -1 when e ≤ 0.
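Example 2.13 translates directly into a one-line control law; the sketch below follows the example's convention that e = 0 maps to u = -1.

```python
def on_off_controller(e):
    """Two-position (binary) control law of Example 2.13: u = +1 if e > 0, else u = -1."""
    return 1.0 if e > 0 else -1.0

print(on_off_controller(0.3), on_off_controller(-0.3), on_off_controller(0.0))  # 1.0 -1.0 -1.0
```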
Definition 2.26: A proportional (P) controller has an output u proportional to its input e, that is, u =
Kp e, where Kp is a proportionality constant.
Definition 2.27: A derivative (D) controller has an output proportional to the derivative of its input e,
that is, u = KD de/dt, where KD is a proportionality constant.
Definition 2.28: An integral (I) controller has an output u proportional to the integral of its input e,
that is, u = KI ∫ e(t) dt, where KI is a proportionality constant.
Definition 2.29: PD, PI, DI, and PID controllers are combinations of proportional (P), derivative (D),
and integral (I) controllers.
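The control laws of Definitions 2.26 through 2.29 can be sketched together in discrete form, approximating the integral by a running sum and the derivative by a backward difference; this discretization, and the gain values shown, are assumptions not stated in the text.

```python
class PIDController:
    """Sketch of Definitions 2.26-2.29: u = Kp*e + KI*integral(e dt) + KD*de/dt.
    Setting KI = KD = 0 gives a P controller, KD = 0 a PI controller, and so on."""

    def __init__(self, Kp, KI, KD, dt):
        self.Kp, self.KI, self.KD, self.dt = Kp, KI, KD, dt
        self.integral = 0.0
        self.prev_e = 0.0

    def update(self, e):
        self.integral += e * self.dt                  # integral term as a running sum
        derivative = (e - self.prev_e) / self.dt      # derivative term as a backward difference
        self.prev_e = e
        return self.Kp * e + self.KI * self.integral + self.KD * derivative

# Usage with assumed gains and a short error sequence, to show how the terms combine:
pid = PIDController(Kp=2.0, KI=0.5, KD=0.1, dt=0.1)
for e in [1.0, 0.8, 0.5, 0.2]:
    print(round(pid.update(e), 3))
```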