Elements of Measurement


Measurement?
• A measurement is an experimental process to acquire
new knowledge about a product. A measurement is
the realization of planned actions for the quantitative
comparison of a measurand with a unit.
• A measurand is the physical quantity that is to be
measured. Generally, a measurand is the input of a
measurement system.
• Measuring is a process for ascertaining or determining
the quantity of a product by application of some object
of known size or capacity, or by comparison with some
fixed unit.
Elements of a measuring instrument.
Measurement chain
• Present-day applications of measuring
instruments can be classified into three major
areas.
– The first of these is their use in regulating trade,
applying instruments that measure physical quantities
such as length, volume and mass in terms of standard
units.
– The second application area of measuring instruments
is in monitoring functions.
– The third application area is the use of measuring
instruments as part of automatic feedback control
systems.
Elements of a simple closed-loop control system.
Elements of a measurement system
• A measuring system exists to provide information
about the physical value of some variable being
measured.
• In simple cases, the system can consist of only a
single unit that gives an output reading or signal
according to the magnitude of the unknown
variable applied to it.
• However, in more complex measurement
situations, a measuring system consists of several
separate elements.
• These components might be contained within
one or more boxes, and the boxes holding
individual measurement elements might be
either close together or physically separate.
The term measuring instrument is commonly
used to describe a measurement system,
whether it contains only one or many
elements.
• The first element in any measuring system is the
primary sensor: this gives an output that is a function
of the measurand (the input applied to it).
• For most but not all sensors, this function is at least
approximately linear. Some examples of primary
sensors are a liquid-in-glass thermometer, a
thermocouple and a strain gauge. In the case of the
mercury-in-glass thermometer, the output reading is
given in terms of the level of the mercury, and so this
particular primary sensor is also a complete
measurement system in itself.
• However, in general, the primary sensor is only part of
a measurement system.
• Variable conversion elements are needed where the
output variable of a primary transducer is in an
inconvenient form and has to be converted to a more
convenient form.
• For instance, the displacement-measuring strain gauge
has an output in the form of a varying resistance. The
resistance change cannot be easily measured and so it
is converted to a change in voltage by a bridge circuit,
which is a typical example of a variable conversion
element.
• In some cases, the primary sensor and variable
conversion element are combined, and the
combination is known as a transducer.
• Signal processing elements exist to improve the quality
of the output of a measurement system in some way.
• A very common type of signal processing element is
the electronic amplifier, which amplifies the output of
the primary transducer or variable conversion element,
thus improving the sensitivity and resolution of
measurement. This element of a measuring system is
particularly important where the primary transducer
has a low output.
• For example, thermocouples have a typical output of
only a few millivolts.
• Other types of signal processing element are
those that filter out induced noise and remove
mean levels etc. In some devices, signal
processing is incorporated into a transducer,
which is then known as a transmitter.
• In addition to these three components just
mentioned, some measurement systems have
one or two other components, firstly to
transmit the signal to some remote point and
secondly to display or record the signal if it is
not fed automatically into a feedback control
system.
• Signal transmission is needed when the
observation or application point of the output of
a measurement system is some distance away
from the site of the primary transducer.
• Sometimes, this separation is made solely for
purposes of convenience, but more often it
follows from the physical inaccessibility or
environmental unsuitability of the site of the
primary transducer for mounting the signal
presentation/recording unit.
• The final optional element in a measurement
system is the point where the measured signal is
utilized.
• In some cases, this element is omitted altogether
because the measurement is used as part of an
automatic control scheme, and the transmitted
signal is fed directly into the control system.
• In other cases, this element in the measurement
system takes the form either of a signal
presentation unit or of a signal-recording unit.
Choosing appropriate measuring
instruments
• The starting point in choosing the most
suitable instrument to use for measurement
of a particular quantity in a manufacturing
plant or other system is the specification of
the instrument characteristics required,
especially parameters like the desired
measurement accuracy, resolution, sensitivity
and dynamic performance.
• It is also essential to know the environmental
conditions that the instrument will be
subjected to, as some conditions will
immediately either eliminate the possibility of
using certain types of instrument or else will
create a requirement for expensive protection
of the instrument.
• Instrument choice is a compromise between
performance characteristics, ruggedness and
durability, maintenance requirements and
purchase cost.
Instrument types and
performance characteristics
• Active and passive instruments
– Instruments are divided into active or passive
ones according to whether the instrument output
is entirely produced by the quantity being
measured or whether the quantity being
measured simply modulates the magnitude of
some external power source.
Example of a passive instrument

The pressure of the fluid is translated into a movement
of a pointer against a scale. The energy expended in
moving the pointer is derived entirely from the change
in pressure measured: there are no other energy inputs
to the system.
Example of an active instrument
The change in petrol level moves a potentiometer arm,
and the output signal consists of a proportion of the
external voltage source applied across the two ends of
the potentiometer. The energy in the output signal
comes from the external power source: the primary
transducer float system is merely modulating the value
of the voltage from this external power source.
Null-type and deflection-type
instruments
• The pressure gauge just mentioned is a good
example of a deflection type of instrument,
where the value of the quantity being
measured is displayed in terms of the amount
of movement of a pointer.
An alternative type of pressure gauge is the deadweight
gauge, which is a null-type instrument. Here, weights
are put on top of the piston until the downward force
balances the fluid pressure. Weights are added until the
piston reaches a datum level, known as the null point.
Pressure measurement is made in terms of the value of
the weights needed to reach this null position.
Analogue and digital instruments
• An analogue instrument gives an output that
varies continuously as the quantity being
measured changes. The output can have an
infinite number of values within the range
that the instrument is designed to measure.
• A digital instrument has an output that varies
in discrete steps and so can only have a finite
number of values.
What type of instrument is this?
Indicating instruments and
instruments with a signal output
• The final way in which instruments can be
divided is between those that merely give an
audio or visual indication of the magnitude of
the physical quantity measured
• and those that give an output in the form of a
measurement signal whose magnitude is
proportional to the measured quantity.
• The class of indicating instruments normally includes
all null-type instruments and most passive ones.
• Indicators can also be further divided into those that
have an analogue output and those that have a digital
display.
• Instruments that have a signal-type output are
commonly used as part of automatic control systems.
• In other circumstances, they can also be found in
measurement systems where the output measurement
signal is recorded in some way for later use.
Smart and non-smart instruments
• The advent of the microprocessor has created
a new division in instruments between those
that do incorporate a microprocessor (smart)
and those that don’t.
Static characteristics of instruments
• Which requires more accuracy: a simple thermometer,
or an instrument measuring the temperature of a
chemical process that is sensitive to temperature?
• Accuracy of measurement is thus one
consideration in the choice of instrument for a
particular application. Other parameters such
as sensitivity, linearity and the reaction to
ambient temperature changes are further
considerations. These attributes are
collectively known as the static characteristics
of instruments
Accuracy and inaccuracy
(measurement uncertainty)
• The accuracy of an instrument is a measure of
how close the output reading of the instrument is
to the correct value. In practice, it is more usual
to quote the inaccuracy figure rather than the
accuracy figure for an instrument. Inaccuracy is
the extent to which a reading might be wrong,
and is often quoted as a percentage of the full-
scale (f.s.) reading of an instrument.
• The term measurement uncertainty is frequently
used in place of inaccuracy.
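As a minimal sketch of what a full-scale inaccuracy figure implies (the ±1% specification and 0–10 bar range below are assumed purely for illustration):

```python
# Worst-case error implied by an inaccuracy quoted as a percentage
# of full-scale (f.s.) reading. All values are illustrative assumptions.

def max_error_fs(full_scale, inaccuracy_pct):
    """Return the maximum expected error for a +/- inaccuracy_pct
    of full-scale specification."""
    return full_scale * inaccuracy_pct / 100.0

# Example: a 0-10 bar pressure gauge quoted as +/-1.0% f.s.
err = max_error_fs(full_scale=10.0, inaccuracy_pct=1.0)
print(f"Maximum expected error: +/-{err} bar")   # +/-0.1 bar
# Note: at a reading of 1 bar this is still +/-0.1 bar, i.e. 10% of
# the reading -- one reason instruments should be used near full scale.
```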
Precision/ repeatability/
reproducibility
• Precision is a term that describes an instrument’s
degree of freedom from random errors. If a large
number of readings are taken of the same quantity by
a high precision instrument, then the spread of
readings will be very small.
• Precision is often, though incorrectly, confused with
accuracy.
• High precision does not imply anything about
measurement accuracy. A high precision instrument
may have a low accuracy. Low accuracy measurements
from a high precision instrument are normally caused
by a bias in the measurements, which is removable by
recalibration.
• The terms repeatability and reproducibility mean
approximately the same but are applied in different
contexts as given below.
• Repeatability describes the closeness of output
readings when the same input is applied repetitively
over a short period of time, with the same
measurement conditions, same instrument and
observer, same location and same conditions of use
maintained throughout.
• Reproducibility describes the closeness of output
readings for the same input when there are changes in
the method of measurement, observer, measuring
instrument, location, conditions of use and time of
measurement.
• Both terms thus describe the spread of output
readings for the same input.
• This spread is referred to as repeatability if the
measurement conditions are constant and
• as reproducibility if the measurement
conditions vary.
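A hedged sketch of how these two spreads might be quantified from repeated readings; the sample values are invented, and expressing the spread as a standard deviation is one common convention:

```python
import statistics

# Repeated readings of the same input under constant conditions
# (repeatability) and under varied conditions (reproducibility).
# All numbers are invented for illustration.
same_conditions   = [20.02, 20.01, 20.03, 20.02, 20.01]
varied_conditions = [20.05, 19.96, 20.08, 19.93, 20.04]

# The spread of readings is commonly expressed as a standard deviation.
print("repeatability spread  :", statistics.stdev(same_conditions))
print("reproducibility spread:", statistics.stdev(varied_conditions))
```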
Tolerance
• Tolerance is a term that is closely related to
accuracy and defines the maximum error that is
to be expected in some value. Whilst it is not,
strictly speaking, a static characteristic of
measuring instruments, it is mentioned here
because the accuracy of some instruments is
sometimes quoted as a tolerance figure. When
used correctly, tolerance describes the maximum
deviation of a manufactured component from
some specified value.
Range or span
• The range or span of an instrument defines
the minimum and maximum values of a
quantity that the instrument is designed to
measure.
Linearity
• It is normally desirable that the output
reading of an instrument is linearly
proportional to the quantity being measured.
Sensitivity of measurement
• The sensitivity of measurement is a measure
of the change in instrument output that
occurs when the quantity being measured
changes by a given amount. Thus, sensitivity is
the ratio:
sensitivity = scale deflection / value of measurand producing deflection
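As a small illustrative sketch (the calibration pairs below are assumed), sensitivity can be estimated as the average slope of output deflection against measurand value:

```python
# Sensitivity = scale deflection / value of measurand producing it.
# Given several calibration points, estimate it as the mean slope.
# Data are invented: a thermometer giving mm of scale per deg C.
measurand  = [10.0, 20.0, 30.0, 40.0]   # deg C applied
deflection = [2.0, 4.1, 5.9, 8.0]       # mm observed

slopes = [d / m for d, m in zip(deflection, measurand)]
sensitivity = sum(slopes) / len(slopes)
print(f"sensitivity ~ {sensitivity:.3f} mm per deg C")
```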
Threshold
• If the input to an instrument is gradually
increased from zero, the input will have to
reach a certain minimum level before the
change in the instrument output reading is of
a large enough magnitude to be detectable.
This minimum level of input is known as the
threshold of the instrument.
Resolution
• When an instrument is showing a particular
output reading, there is a lower limit on the
magnitude of the change in the input measured
quantity that produces an observable change in
the instrument output. Like threshold, resolution
is sometimes specified as an absolute value and
sometimes as a percentage of f.s. deflection. One
of the major factors influencing the resolution of
an instrument is how finely its output scale is
divided into subdivisions.
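A minimal sketch of how a finitely divided output scale limits resolution (the 0.5-unit subdivision is an assumption):

```python
# A display divided into subdivisions of 0.5 units cannot show a
# change in the input until it shifts the reading to the next mark.
def displayed(value, subdivision=0.5):
    """Round a true value to the nearest scale subdivision,
    mimicking the resolution limit of a divided output scale."""
    return round(value / subdivision) * subdivision

print(displayed(20.18))  # 20.0
print(displayed(20.31))  # 20.5 -- only now is the change observable
```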
Sensitivity to disturbance
• All calibrations and specifications of an
instrument are only valid under controlled
conditions of temperature, pressure etc.
These standard ambient conditions are usually
defined in the instrument specification. As
variations occur in the ambient temperature
etc., certain static instrument characteristics
change, and the sensitivity to disturbance is a
measure of the magnitude of this change.
• Such environmental changes affect instruments in
two main ways, known as zero drift and
sensitivity drift. Zero drift is sometimes known by
the alternative term, bias.
• Zero drift or bias describes the effect where the
zero reading of an instrument is modified by a
change in ambient conditions. This causes a
constant error that exists over the full range of
measurement of the instrument.
• Sensitivity drift (also known as scale factor
drift) defines the amount by which an
instrument’s sensitivity of measurement
varies as ambient conditions change. It is
quantified by sensitivity drift coefficients that
define how much drift there is for a unit
change in each environmental parameter that
the instrument characteristics are sensitive to.
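A sketch, under assumed drift coefficients, of how the two effects modify a linear instrument characteristic (output = sensitivity × input + zero reading):

```python
# Combined effect of zero drift and sensitivity drift on a linear
# instrument. All coefficient values are invented for illustration.
def output(true_input, delta_temp,
           sensitivity=1.00, zero=0.0,
           zero_drift_coeff=0.02,        # output units per deg C
           sens_drift_coeff=0.001):      # (units/unit) per deg C
    sens = sensitivity + sens_drift_coeff * delta_temp
    bias = zero + zero_drift_coeff * delta_temp
    return sens * true_input + bias

# Same input read at calibration temperature and 10 deg C above it:
print(output(50.0, delta_temp=0))    # 50.0
print(output(50.0, delta_temp=10))   # 50.7 = 0.2 zero drift + 0.5 sensitivity drift
```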
Hysteresis effects
• If the input measured quantity to the
instrument is steadily increased from a
negative value, the output reading varies in
the manner shown in curve (a). If the input
variable is then steadily decreased, the output
varies in the manner shown in curve (b). The
non-coincidence between these loading and
unloading curves is known as hysteresis.
Dead space
• Dead space is defined as the range of different input
values over which there is no change in output value.
Any instrument that exhibits hysteresis also displays
dead space.
Dynamic characteristics of
instruments
• The static characteristics of measuring
instruments are concerned only with the steady
state reading that the instrument settles down to,
such as the accuracy of the reading etc.
• The dynamic characteristics of a measuring
instrument describe its behaviour between the
time a measured quantity changes value and the
time when the instrument output attains a steady
value in response.
• In any linear, time-invariant measuring system, the
following general relation can be written between input
and output for time t > 0:

an(d^n q0/dt^n) + ... + a1(dq0/dt) + a0q0 =
bm(d^m qi/dt^m) + ... + b1(dqi/dt) + b0qi

• where qi is the measured quantity, q0 is the output
reading and a0 . . . an, b0 . . . bm are constants.
• If we limit consideration to step changes in the
measured quantity only, then the equation reduces to:

an(d^n q0/dt^n) + ... + a1(dq0/dt) + a0q0 = b0qi    (Equation 1)
Zero order instrument
• If all the coefficients a1 . . . an other than a0 are
assumed zero in equation 1, then:

a0q0 = b0qi,   i.e.   q0 = (b0/a0)qi = Kqi

• where K is a constant known as the instrument
sensitivity as defined earlier.
• Any instrument that behaves according to this equation
is said to be of zero order type. Following a step change
in the measured quantity at time t, the instrument
output moves immediately to a new value at the same
time instant t.
First order instrument
• If all the coefficients a2 . . . an except for a0 and
a1 are assumed zero in equation 1, then:

a1(dq0/dt) + a0q0 = b0qi

• Any instrument that behaves according to this equation
is known as a first order instrument.
• If d/dt is replaced by the D operator in the above
equation, we get:

a1Dq0 + a0q0 = b0qi

• Defining K = b0/a0 as the static sensitivity and
τ = a1/a0 as the time constant of the system, the
equation becomes:

q0/qi = K/(1 + τD)
First order instrument characteristic.
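The first-order equation has the familiar step-response solution q0(t) = K·qi·(1 − e^(−t/τ)). A minimal sketch of this response, with K, τ and the step size all assumed for illustration:

```python
import math

# Step response of a first-order instrument:
#   q0(t) = K * qi * (1 - exp(-t / tau))
K, tau, qi = 1.0, 2.0, 10.0   # assumed sensitivity, time constant (s), step

for t in [0.0, tau, 2 * tau, 5 * tau]:
    q0 = K * qi * (1.0 - math.exp(-t / tau))
    print(f"t = {t:4.1f} s  ->  q0 = {q0:6.3f}")
# After one time constant the output reaches ~63% of its final value;
# after 5*tau it is within ~1% of K*qi.
```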
Second order instrument
• If all coefficients a3 . . . an other than a0, a1 and
a2 in equation 1 are assumed zero, then we get:

a2(d²q0/dt²) + a1(dq0/dt) + a0q0 = b0qi

• Applying the D operator again: a2D²q0 + a1Dq0 + a0q0 = b0qi,
and rearranging:

q0 = b0qi/(a0 + a1D + a2D²)

• It is convenient to re-express the variables a0, a1, a2 and
b0 in terms of three parameters K (static sensitivity),
ω (undamped natural frequency) and ξ (damping ratio), where:

K = b0/a0;   ω = √(a0/a2);   ξ = a1/(2√(a0a2))

• The equation then becomes:

q0/qi = K/(D²/ω² + 2ξD/ω + 1)

• This is the standard equation for a second order system and
any instrument whose response can be described by it is
known as a second order instrument.
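A hedged numerical sketch of the second-order step response, integrating the equation directly in its K, ω, ξ form with small time steps (all parameter values assumed):

```python
# Step response of a second-order instrument, expressed through
# K (static sensitivity), omega (undamped natural frequency) and
# xi (damping ratio). All values below are assumed for illustration.
K, omega, xi, qi = 1.0, 5.0, 0.3, 1.0

# Rearranged: q0'' = omega^2 * (K*qi - q0) - 2*xi*omega*q0'
dt, q0, v = 1e-4, 0.0, 0.0
for step in range(int(3.0 / dt)):           # simulate 3 seconds
    a = omega**2 * (K * qi - q0) - 2 * xi * omega * v
    v += a * dt
    q0 += v * dt
print(f"q0 after 3 s: {q0:.4f} (settles toward K*qi = {K*qi})")
# xi < 1 gives an oscillatory (underdamped) response; a value around
# 0.6-0.8 is usually chosen for measuring instruments.
```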
Calibration
• The foregoing discussion has described the static and
dynamic characteristics of measuring instruments in
some detail.
• However, an important qualification that has been
omitted from this discussion is that an instrument only
conforms to stated static and dynamic patterns of
behaviour after it has been calibrated.
• It can normally be assumed that a new instrument will
have been calibrated when it is obtained from an
instrument manufacturer, and will therefore initially
behave according to the characteristics stated in the
specifications.
• During use, however, its behaviour will gradually
diverge from the stated specification for a variety
of reasons. Such reasons include mechanical
wear, and the effects of dirt, dust, fumes and
chemicals in the operating environment.
• The rate of divergence from standard
specifications varies according to the type of
instrument, the frequency of usage and the
severity of the operating conditions.
Errors during the measurement
process
• Errors in measurement systems can be divided
into
– those that arise during the measurement process
and
– those that arise due to later corruption of the
measurement signal by induced noise during
transfer of the signal from the point of
measurement to some other point.
• Errors arising during the measurement process
can be divided into two groups, known as
systematic errors and random errors.
• Systematic errors describe errors in the output
readings of a measurement system that are
consistently on one side of the correct reading,
i.e. either all the errors are positive or they are all
negative. Two major sources of systematic errors
are system disturbance during measurement and
the effect of environmental changes (modifying
inputs).
• Random errors are perturbations of the
measurement either side of the true value
caused by random and unpredictable effects,
such that positive errors and negative errors
occur in approximately equal numbers for a
series of measurements made of the same
quantity. Such perturbations are mainly small,
but large perturbations occur from time to
time, again unpredictably.
• Random errors often arise when
measurements are taken by human
observation of an analogue meter, especially
where this involves interpolation between
scale points. Electrical noise can also be a
source of random errors. To a large extent,
random errors can be overcome by taking the
same measurement a number of times and
extracting a value by averaging or other
statistical techniques.
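A minimal sketch of how averaging repeated readings suppresses random error (the true value and noise level are assumed):

```python
import random
import statistics

random.seed(1)
TRUE_VALUE, NOISE_SD = 100.0, 0.5   # assumed for illustration

def reading():
    # One measurement corrupted by random, zero-mean noise.
    return TRUE_VALUE + random.gauss(0.0, NOISE_SD)

single = reading()
averaged = statistics.mean(reading() for _ in range(100))
print(f"single reading : {single:.3f}")
print(f"mean of 100    : {averaged:.3f}")
# The error of the mean shrinks roughly as 1/sqrt(N). Note that
# averaging does nothing for systematic (bias) errors.
```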
• Measurement error is defined as the
difference between the distorted information
and the undistorted information about a
measured product, expressed in its
measurands. In short, an error is defined as
real (untrue, wrong, false, no go) value at the
output of a measurement system minus ideal
(true, good, right, go) value at the input of a
measurement system according to (1):
Δx = xr – xi
• where Δx is the error of measurement, xr is the
real (untrue) measurement value, and xi is the
ideal (true) measurement value.
HOW ERRORS ARISE IN
MEASUREMENT SYSTEMS
• A measurement under ideal conditions has no
errors. Real measurement results, however, will
always contain measurement errors of varying
magnitudes.
• A systematic (clearly defined process) and
systemic (all-encompassing) approach is needed
to identify every source of error that can arise in
a given measuring system. It is then necessary to
determine their magnitude and impact under the
prevailing operational conditions.
• Measurement system errors can only be defined in relation
to the solution of a real, specific measurement task. If the
errors of a measurement system are specified in its technical
documentation, then one has to decide how that information
relates to the:
– measurand
– input
– elements of the measurement system
– auxiliary means
– measurement method
– output
– kind of reading
– environmental conditions.
• If the measurement system has the general structure
described earlier, the following errors may appear for a
general measurement task:
• input error
– sensor error
– signal transmission error 1
– transducer error
– signal transmission error 2
– converter error
– signal transmission error 3
– computer error
– signal transmission error 4
– indication error.
TYPES OF ERRORS IN DEFINED
CLASSES
• Systematic error (bias) is a permanent deflection
in the same direction from the true value. It can
be corrected. Bias and long-term variability are
controlled by monitoring measurements against a
check standard over time.
• Random error is a short-term scattering of values
around a mean value. It cannot be corrected on
an individual measurement basis. Random errors
are expressed by statistical methods.
LIST OF ERROR SOURCES IN
MEASUREMENTS
• To investigate sources of systematic errors, a
general checklist of error sources in
measurement should be used, which has been
collected by specialists working in the field
concerned. The main sources are
– Lack of gauge resolution
– Lack of linearity
– Drift
– Hysteresis
Lack of gauge resolution
• Resolution, better called discrimination (though it
rarely is), is the ability of the measurement system
to detect and faithfully indicate small changes in the
characteristic of the measurement result.
Lack of linearity
• A test of linearity starts by establishing a plot
of the measured values versus corresponding
values of the reference standards. This gives an
indication of whether or not the points fall
on a straight line with slope equal to 1, which
indicates linearity (proportional variation), as
sketched below.
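A sketch of such a linearity test using a least-squares fit; the gauge-versus-reference data below are invented:

```python
# Linearity check: fit measured values against reference standards
# and see how far the slope departs from 1. Data are invented.
reference = [0.0, 5.0, 10.0, 15.0, 20.0]
measured  = [0.1, 5.0, 10.2, 15.1, 20.4]

n = len(reference)
mx = sum(reference) / n
my = sum(measured) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(reference, measured))
         / sum((x - mx) ** 2 for x in reference))
intercept = my - slope * mx
print(f"slope = {slope:.4f}, intercept = {intercept:.4f}")
# A slope close to 1 (and a small intercept) indicates linearity.
```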
• Nonlinearities of gauges can be caused by the
following facts:
– gauge is not properly calibrated at the lower and
upper ends of the operating range,
– errors in the values at the maximum or minimum
range,
– worn gauge,
– internal design problems (in, say the electronic
units of the gauge).
Drift
• Drift is defined as a slow change in the
response of a gauge.
• Short-term drift is frequently caused by heat
buildup in the instrument during the time of
measurement. Long-term drift is usually not a
problem for measurements with short
calibration cycles
Hysteresis
• Hysteresis is a retardation of the effect when
the forces acting upon a body are changed (as
in viscosity or internal friction); for example, a
lagging in the values of resulting
magnetization in a magnetic material (as iron)
because of a changing magnetizing force.
Hysteresis represents the history dependence
of a physical system under real environmental
conditions
• Specific devices will possess their own set of
additional error sources. A checklist needs to
be developed and matured. The following is
an example of such a list.
• Hall Effect measurement error checklist
1. Are the probes or wires making good contact to the sample?
2. Are the contact I -V characteristics linear?
3. Is any contact much higher in resistance than the others?
4. Do the voltages reach equilibrium quickly after current
reversal?
5. Is there visible damage (cracks, especially around the
contacts)?
6. Is the sample being used in the dark?
7. Is the sample temperature uniform?
8. Are dissimilar wiring materials used resulting in large
temperature gradients across the wiring?
Calibration of measuring sensors and
instruments
• Calibration consists of comparing the output
of the instrument or sensor under test against
the output of an instrument of known
accuracy when the same input (the measured
quantity) is applied to both instruments. This
procedure is carried out for a range of inputs
covering the whole measurement range of the
instrument or sensor.
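A minimal sketch of this comparison across the measurement range; the readings, reference values and full-scale range are all assumed:

```python
# Calibration: compare the instrument under test against a reference
# of known accuracy at several points across the range. Values invented.
points = [  # (reference instrument, instrument under test)
    (0.0, 0.05), (25.0, 25.30), (50.0, 50.40),
    (75.0, 75.20), (100.0, 100.60),
]

full_scale = 100.0
for ref, test in points:
    err = test - ref
    print(f"ref {ref:6.1f}  test {test:7.2f}  "
          f"error {err:+.2f} ({100 * err / full_scale:+.2f}% f.s.)")
# The resulting error table can be used to correct readings or to
# decide whether the instrument still meets its specification.
```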
• Calibration ensures that the measuring accuracy
of all instruments and sensors used in a
measurement system is known over the whole
measurement range, provided that the calibrated
instruments and sensors are used in
environmental conditions that are the same as
those under which they were calibrated. For use
of instruments and sensors under different
environmental conditions, appropriate correction
has to be made for the ensuing modifying inputs.
