FMMS


Mechatronic and Measurement Systems
Errors in Measurements
• Error is the difference between the true value and the measured value of a quantity such as displacement,
pressure, or temperature.
• The aim of study of errors is to find out the ways to minimize them.
• Errors may be introduced from different sources.
• Errors are usually classified as follows:
1. Gross Errors
2. Systematic Errors
3. Random Errors
1. Gross Errors:
• These errors are largely due to human errors in reading of instruments, incorrect adjustment and improper
application of instruments, and computational mistakes.
• Complete elimination of such errors is probably impossible. The common gross error is the improper use of
an instrument for measurement.
• For example, a well-calibrated voltmeter can give an erroneous reading when connected across a high-
resistance circuit. The same voltmeter will give a more accurate reading when connected in a low-
resistance circuit. This means the voltmeter has a “loading effect” on the circuit: the measurement
process itself alters the characteristics of the circuit.
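The loading effect above can be sketched numerically. A minimal model, assuming a source with a Thevenin (output) resistance and a voltmeter with finite input resistance (all values here are illustrative):

```python
def voltmeter_reading(v_true, r_source, r_meter):
    """Voltage indicated by a voltmeter of input resistance r_meter
    connected across a source with Thevenin resistance r_source.
    The meter and source resistances form a voltage divider."""
    return v_true * r_meter / (r_meter + r_source)

# A 10 V source seen through a 10 MΩ voltmeter:
high_r = voltmeter_reading(10.0, 1e6, 10e6)   # high-resistance circuit
low_r = voltmeter_reading(10.0, 100.0, 10e6)  # low-resistance circuit

print(high_r)  # noticeably below 10 V: the meter loads the circuit
print(low_r)   # essentially 10 V: loading is negligible
```

The same instrument thus produces a gross error in one circuit and an accurate reading in the other, exactly as the example describes.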
2. Systematic Errors:
• These errors are shortcomings of instruments, such as defective or worn parts, and effects of the environment
on the equipment.
• This type of error is usually divided into two different categories:
(i) Instrumental Errors,
(ii) Environmental Errors.
(i) Instrumental Errors:
• These errors are defined as shortcomings of the instrument.
• Instrumental errors are inherent in measuring instruments due to their mechanical structure.
• For example, in the deflection type instrument friction in bearings of various moving components, irregular
spring tension, stretching of the spring or reduction in tension due to improper handling or overloading of
the instrument may cause incorrect readings, which will result in errors.
• Other instrumental errors may be due to faulty calibration, improper zero setting, variation in the air gap, etc.
• Instrumental errors may be avoided by following methods:
(a) Selecting a suitable instrument for the particular measurement;
(b) Applying correction factors after determining the amount of instrumental error;
(c) Calibrating the instrument against a standard instrument.
(ii) Environmental Errors:
• These errors are due to external conditions surrounding the instruments which affect the measurements.
• The surrounding conditions may be the changes in temperature, humidity, barometric pressure, or of
magnetic or electrostatic fields.
• The change in ambient temperature at which the instrument is used causes a change in the elastic properties
of the spring in a moving-coil mechanism and so causes an error in the reading of the instrument.
• To reduce the effects of external conditions surrounding the instruments the corrective measures are to be
taken as follows:
(a) To provide air-conditioning,
(b) Certain components in the instrument should be completely closed i.e., hermetically sealed, and
(c) To provide magnetic and electrostatic shields.
Systematic errors can also be subdivided into:
(a) Static errors, and
(b) Dynamic errors.
• Static errors are caused by limitations of the measuring device or the physical laws governing its
behaviour. A static error is introduced in a micrometer when excessive pressure is applied in twisting or
rotating the shaft.
• Dynamic errors are caused by the instrument not responding fast enough to follow changes in the measured
variable.
3. Random Errors:
• These errors are those errors which are due to unknown causes and they occur even when all systematic errors
have been taken care of.
• This error cannot be corrected by any method of calibration or other known methods of control.
• Few random errors usually occur in well-designed experiments, but they become important in high-accuracy
work.
• For example, suppose an accurately calibrated voltmeter is being used in ideal environmental conditions to read the
voltage of an electric circuit. It will be found that the readings vary slightly over the period of
observation. This variation cannot be corrected by any method of calibration or other known method of control
and it cannot be explained without minute investigation. The only way to offset these errors is by increasing the
number of readings and using statistical methods in order to obtain the best approximation of the true value of
the quantity under measurement.
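The statistical approach mentioned above can be sketched as follows. A minimal example, assuming a set of repeated voltmeter readings (the values are illustrative):

```python
import statistics

# Repeated readings of the same voltage under identical conditions
readings = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 9.97, 10.00]

mean = statistics.mean(readings)        # best estimate of the true value
s = statistics.stdev(readings)          # sample standard deviation
sem = s / (len(readings) ** 0.5)        # standard error of the mean

print(f"best estimate = {mean:.3f} V ± {sem:.3f} V")
```

Increasing the number of readings reduces the standard error of the mean, which is why averaging is the only practical way to offset random errors.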
CALIBRATION
• Calibration is the quantitative comparison between a known standard and the output of the measuring
system measuring the same quantity.
• If the output-input response of the system is linear, then a single-point calibration is sufficient, wherein only
a single known standard value of the input is employed.
• If the system response is non-linear, then a set of known standard inputs to the measuring system are
employed for calibrating the corresponding outputs of the system.
• Calibration procedure is the process of checking the inferior instrument against a superior instrument of
known traceability certified by a reputed standards organisation/national laboratory.
• The ‘traceability’ of a calibrating device refers to its certified accuracy when compared with a superior standard of
the highest possible accuracy.
• Calibration procedures can be classified as follows:
1. Primary calibration
• When a device/system is calibrated against primary standards, the procedure is termed primary calibration.
• After primary calibration, the device is employed as a secondary calibration device.
• Standard resistors or standard cells available commercially are examples of primary calibration devices.

2. Secondary calibration
• When a secondary calibration device is used for further calibrating another device of lesser accuracy, then the
procedure is termed secondary calibration.
• Secondary calibration devices are very widely used in general laboratory practice as well as in the industry
because they are practical calibration sources.
• For example, standard cell may be used for calibrating a voltmeter or an ammeter with suitable circuitry.

3. Direct calibration with known input source


• Direct calibration with a known input source is in general of the same order of accuracy as primary calibration.
• Devices that are calibrated directly are also used as secondary calibration devices.
• For example, a turbine flow meter may be directly calibrated by using the primary measurements such as
weighing a certain amount of water in a tank and recording the time taken for this quantity of water to flow
through the meter. Subsequently, this flow meter may be used for secondary calibration of other flow metering
devices such as an orificemeter or a venturimeter.
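The weighing-tank procedure above amounts to a simple calculation: the true flow rate is the collected mass divided by the water density and the collection time. A minimal sketch (the masses, times, and meter reading are illustrative):

```python
def flow_rate_from_weighing(mass_kg, time_s, density=998.0):
    """Volume flow rate inferred from weighing collected water.
    density: kg/m^3 (water near room temperature; illustrative value)."""
    return mass_kg / (density * time_s)  # m^3/s

# 49.9 kg of water collected in 100 s:
q_true = flow_rate_from_weighing(mass_kg=49.9, time_s=100.0)

meter_indicated = 5.1e-4          # m^3/s, hypothetical turbine meter reading
correction = q_true / meter_indicated  # multiplicative correction factor

print(q_true, correction)
```

The correction factor obtained this way is what lets the turbine meter serve later as a secondary calibration device.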

4. Indirect calibration
• Indirect calibration is based on the equivalence of two different devices that can be employed for measuring a
certain physical quantity. This can be illustrated by a suitable example, say a turbine flow meter. The requirement
of dynamic similarity between two geometrically similar flow meters is obtained through the maintenance of
equal Reynolds numbers, i.e.
ρ1·V1·D1/μ1 = ρ2·V2·D2/μ2

• where the subscripts 1 and 2 refer to the ‘standard’ meter and the meter to be calibrated, respectively. For such a
condition, the discharge coefficients of the two meters are directly comparable.
Advantage:
• It is possible to carry out indirect calibration, i.e. to predict the performance of one meter on the basis of an
experimental study of another.
• This way, a small meter may be employed to determine the discharge coefficient of large meters.
• Also, the discharge coefficient of a meter intended for gas may be determined by carrying out test runs on a
liquid, provided that similarity through Reynolds numbers is maintained.
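Matching Reynolds numbers between the two meters can be sketched numerically. A minimal example, assuming a water test on one meter transferred to air in another (all property values and dimensions are illustrative):

```python
def reynolds(rho, v, d, mu):
    """Reynolds number Re = rho*V*D/mu."""
    return rho * v * d / mu

def matching_velocity(re_target, rho, d, mu):
    """Velocity needed in the second meter so that its
    Reynolds number equals re_target."""
    return re_target * mu / (rho * d)

# Meter 1: water (rho ≈ 998 kg/m^3, mu ≈ 1.0e-3 Pa·s), 50 mm bore, 2 m/s
re1 = reynolds(rho=998.0, v=2.0, d=0.05, mu=1.0e-3)

# Meter 2: air (rho ≈ 1.2 kg/m^3, mu ≈ 1.8e-5 Pa·s), 100 mm bore
v2 = matching_velocity(re1, rho=1.2, d=0.10, mu=1.8e-5)

print(re1, v2)  # equal Re makes the discharge coefficients comparable
```

This is how a small liquid-calibrated meter can predict the discharge coefficient of a larger gas meter.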

5. Routine calibration
• Routine calibration is the procedure of periodically checking the accuracy and proper functioning of an
instrument with standards that are known to be accurately reproducible.
• The entire procedure for making various adjustments, checking the scale reading, etc. is normally laid down so
that it conforms to the accepted norms/standards.
Displacement measuring devices
Potentiometer transducer
• The potentiometer is a resistance type transducer for
measuring displacement. It converts mechanical
displacement into electrical output.
• It consists of a resistance element with a movable contact as
shown in Figure. A voltage Vs is applied across the two ends
A and B of the resistance element and an output voltage V0 is
measured between the point of contact C of the sliding
element and the end of the resistance element A.
• The body whose motion is being measured is connected to
the sliding element of the potentiometer, so that
translational motion of the body causes a motion of equal
magnitude of the slider along the resistance element and a
corresponding change in the output voltage V0.
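For an ideal (unloaded) potentiometer the output follows directly from the voltage-divider relation V0 = Vs·x/L. A minimal sketch, with the function name and values chosen for illustration:

```python
def pot_output(v_s, x, stroke):
    """Output of an ideal (unloaded) linear potentiometer:
    slider at displacement x along a resistance track of length `stroke`.
    Voltage-divider relation: V0 = Vs * x / L."""
    if not 0.0 <= x <= stroke:
        raise ValueError("slider position outside the track")
    return v_s * x / stroke

# 10 V supply, 100 mm stroke, slider at 25 mm (quarter travel):
print(pot_output(10.0, 0.025, 0.100))
```

In practice the relation is only approximately linear once a finite-resistance load is connected across V0, which is the same loading effect noted earlier for voltmeters.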
Capacitive transducer
• Capacitive transducer is a passive transducer that works on the
principle of variable capacitance.
• It contains two conductive plates separated by a dielectric
medium, with the capacitance varying due to changes in the plate
area, distance, or dielectric properties. This makes it highly
suitable for measuring displacements, pressures, forces, and fluid
levels.
• A capacitive transducer contains two parallel metal plates
separated by a dielectric medium, which can be air, a material,
gas, or liquid. Unlike a normal capacitor where the distance
between the plates is fixed, the distance in a capacitive transducer
varies.
• The capacitive transducer uses the principle of variable capacitance to convert mechanical movement into an electrical
signal. The input quantity causes a change in capacitance, which is directly measured by the transducer.
• Capacitive transducers can measure both static and dynamic changes. Displacement is measured directly by connecting
the measurable devices to the movable plate of the capacitor. They operate in both contacting and non-contacting
modes.
Working Principle of Capacitive Transducer
The working principle revolves around the basic formula for capacitance, C = εA/d. Any factor affecting these
parameters causes the capacitance to change.
The capacitance between the plates of a capacitor is given by:
C = εA/d = εrε0A/d
where:
• A is the overlapping area of the plates in m²
• d is the distance between the two plates in m
• ε is the permittivity of the medium in F/m
• εr is the relative permittivity
• ε0 is the permittivity of free space
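The displacement sensitivity of a variable-gap capacitive transducer can be sketched directly from C = εrε0A/d (the plate dimensions below are illustrative):

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(area, gap, eps_r=1.0):
    """Parallel-plate capacitance C = eps_r * eps0 * A / d."""
    return eps_r * EPS0 * area / gap

c1 = capacitance(area=1e-4, gap=1.0e-3)  # 1 cm^2 plates, 1 mm air gap
c2 = capacitance(area=1e-4, gap=0.5e-3)  # plate displaced 0.5 mm closer

print(c1, c2)  # halving the gap doubles the capacitance
```

Note the inverse relation with d: the sensitivity dC/dd is not constant, so variable-gap transducers are most linear over small displacements.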
Strain gauge
• A Strain gauge is a sensor whose resistance varies with applied force.
• It is one of the significant sensors used in the geotechnical field to measure the amount of strain on any structure
(Dams, Buildings, Nuclear Plants, Tunnels, etc.).
• The resistance of a strain gauge varies with applied force, and it converts parameters such as force, pressure,
tension, weight, etc. into a change in resistance that can be measured.
• Whenever an external force is applied to an object, it tends to change its shape and size thereby, altering its
resistance.
• The stress is the internal resisting capacity of an object while a strain is the amount of deformation experienced by it.
• Resistance is directly dependent on the length and the cross-sectional area of the conductor, given by:
R = ρL/A
where:
R = resistance
L = length
A = cross-sectional area
ρ = resistivity of the material
Gauge factor

• The gauge factor is defined as the ratio of the per-unit change in resistance to the per-unit change in length.
• It is denoted as Ks. It is also called the sensitivity of the strain gauge.
• Gauge factor Ks = (∆R/R) / (∆L/L)
where ∆R/R = per-unit change in resistance R,
∆L/L = change in length per unit length (strain).
The resistance of the strain gauge wire is given by R = ρL/A, from which the gauge factor can be expressed as:
GF = 1 + 2ν + (∆ρ/ρ)/ε
where ν is Poisson’s ratio and ε = ∆L/L is the strain.
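The gauge-factor definition rearranges to ∆R = Ks·ε·R, which gives the resistance change for a given strain. A minimal sketch (gauge resistance, gauge factor, and strain are typical illustrative values):

```python
def resistance_change(r, gauge_factor, strain):
    """Change in gauge resistance from Ks = (dR/R)/(dL/L):
    dR = Ks * strain * R."""
    return r * gauge_factor * strain

# 120 Ω gauge, Ks = 2.0, 500 microstrain:
dr = resistance_change(r=120.0, gauge_factor=2.0, strain=500e-6)
print(dr)  # a fraction of an ohm
```

The change is tiny compared with R itself, which is why strain gauges are read with a Wheatstone bridge rather than an ohmmeter.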
WHEATSTONE BRIDGE
A Wheatstone bridge is an electric circuit that is used for measuring small changes in resistance.
The bridge is balanced when
𝑅1 = 𝑅2 = 𝑅3 = 𝑅4, or more generally
𝑅1 × 𝑅3 = 𝑅2 × 𝑅4
When any voltage is applied to the input of a balanced bridge, the output of the system is zero.
When any resistance changes, the output becomes different from zero.
A strain gauge is connected in the circuit of Figure 7. When the strain gauge is loaded and its
resistance changes, a voltage is obtained at the output of the bridge.
Two strain gauges are connected in the circuit of Figure 8. When the strain gauges are loaded and
their resistances change, a voltage is obtained at the output of the bridge.
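The single-gauge (quarter-bridge) case can be sketched numerically; the formula below is the exact divider expression for a bridge with one active gauge R + ∆R in a lower arm and three fixed resistors R (supply voltage and values are illustrative):

```python
def quarter_bridge_output(v_s, r, delta_r):
    """Exact output of a Wheatstone bridge with one active gauge
    (R + dR) as a lower arm and three fixed resistors R:
    Vo = Vs * ((R + dR)/(2R + dR) - 1/2)."""
    return v_s * ((r + delta_r) / (2 * r + delta_r) - 0.5)

# 5 V supply, 120 Ω bridge, gauge change of 0.12 Ω (500 microstrain, Ks = 2):
v_o = quarter_bridge_output(v_s=5.0, r=120.0, delta_r=0.12)

# Common small-signal approximation: Vo ≈ Vs/4 * dR/R
approx = 5.0 / 4.0 * 0.12 / 120.0

print(v_o, approx)  # a few millivolts; the two agree closely
```

The millivolt-level output explains why bridge circuits, rather than direct resistance measurement, are standard for strain gauges.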
Bonded strain gauge
• These gauges are directly bonded (that is pasted) on the surface of the structure
under study.
• Along with the construction of transducers, a bonded metal wire strain gauge is
used for stress analysis.
• A resistance wire strain gauge has a wire of diameter 0.25mm or less.
• The grid of fine resistance wire is cemented to carrier. It can be a thin sheet of
paper, Bakelite or a sheet of Teflon. To prevent the wire from any mechanical
damage, it is covered on top with a thin sheet of material. The spreading of wire
allows us to have a uniform distribution of stress over the grid. The carrier is
bonded with an adhesive material. Due to this, a good transfer of strain from
carrier to a grid of wires is achieved.
• Typically, the resistance of strain gauges is 120 Ω, 350 Ω, or 1000 Ω. A high
resistance value results in lower sensitivity; thus, to obtain higher sensitivity,
higher bridge voltages have to be used.
Piezoelectric transducers
• A piezoelectric transducer is a type of transducer that converts mechanical energy into an electrical voltage
when pressed or strained. This voltage directly corresponds to the amount
of force/pressure applied.
• The electric voltage produced by a piezoelectric transducer can be easily
measured by the voltage measuring instruments.
• The device is manufactured from a crystal, which can be either a natural
material such as quartz or a synthetic material such as lithium sulphate.
• The crystal is mechanically stiff (i.e. a large force is required to compress it),
and consequently piezoelectric transducers can only be used to measure
the displacement of mechanical systems that are stiff enough themselves to
be unaffected by the stiffness of the crystal.
• When the crystal is compressed, a charge is generated on the surface that is
measured as the output voltage.
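The charge-to-voltage relation can be sketched simply: the generated charge Q = d·F appears across the crystal's own capacitance, giving V = Q/C. A minimal sketch, assuming a charge sensitivity typical of quartz and an illustrative crystal capacitance:

```python
def piezo_voltage(force, d_charge=2.3e-12, cap=1.0e-9):
    """Open-circuit voltage of a piezoelectric crystal.
    d_charge: charge sensitivity in C/N (value typical of quartz,
    used here only for illustration).
    cap: crystal capacitance in F (illustrative)."""
    q = d_charge * force        # generated surface charge, C
    return q / cap              # voltage across the crystal, V

print(piezo_voltage(100.0))  # voltage for a 100 N force
```

Because the charge leaks away through any measuring circuit, piezoelectric transducers are suited to dynamic rather than truly static force measurement.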
Fiber Optics Displacement Transducer
• It measures displacement based on intensity of light.
• It consists of a bundle of transmitting fibers coupled to the laser source
and a bundle of receiving fibers coupled to the detector.
• The axis of the transmitting fiber and the receiving fiber with respect to
the moving target can be adjusted to increase the sensitivity of the
sensor.
• Light from the source is transmitted through the transmitting fiber and
is made to fall on the moving target.
• The light reflected from the target is made to pass through the
receiving fiber and the same is detected by the detector.
• Based on the intensity of the light received, the displacement of the
target can be measured: if the received intensity increases, the target is
moving towards the sensor, and if the intensity decreases, the target is
moving away from the sensor.
Linear Variable Differential Transformer (LVDT)

• LVDT converts the Linear motion into an electrical signal using an inductive transducer.
• It consists of one primary winding(P) and two secondary windings (S1 & S2). The primary winding is at the
center and the secondary windings are present on both sides of the primary winding at an equal distance from the
center. Both the secondary windings have an equal no. of turns and they are linked with each other in series
opposition, i.e. they are wounded in opposite directions, but are connected in series with each other. The entire
coil assembly remains stationary during distance measurement. The moving part of the LVDT is an arm made of
magnetic material.
Working Principle of LVDT
• The working of the LVDT is based on Faraday’s law of electromagnetic induction, which states that the EMF
induced in a circuit is proportional to the rate of change of magnetic flux linking the circuit.
• As the primary winding of LVDT is connected to the AC power supply, The alternating magnetic field is produced
in the primary winding, which results in the induced EMF of secondary windings.
• The induced voltages in the secondary windings S1 & S2 are E1 & E2 respectively. The magnitude of the induced
EMFs E1 and E2 is directly proportional to the rate of change of magnetic flux, dΦ/dt. The total output voltage Eo
of the circuit is given by Eo = E1 − E2.
• Since depending on the position of the core the flux varies, the object whose translational displacement is to be
measured is physically attached to the central iron core of the transformer, so that all motions of the body are
transferred to the core.
• The output of a Linear Variable Differential Transformer (LVDT) is an AC voltage that is proportional to the
displacement or position of its core.
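Within the linear range, the differential connection Eo = E1 − E2 makes the output proportional to core displacement. A minimal idealized sketch (the null secondary voltage and sensitivity value are illustrative):

```python
def lvdt_output(x, sensitivity=2.4):
    """Idealized LVDT in its linear range: core displacement x (mm)
    raises the coupling to one secondary and lowers it to the other
    by the same amount. sensitivity in V/mm is illustrative."""
    e1 = 5.0 + sensitivity * x / 2   # EMF induced in secondary S1
    e2 = 5.0 - sensitivity * x / 2   # EMF induced in secondary S2
    return e1 - e2                   # Eo = E1 - E2, proportional to x

print(lvdt_output(0.0))   # core at the null position
print(lvdt_output(1.5))   # core displaced 1.5 mm
```

The sign of Eo (in practice, its phase relative to the excitation) indicates on which side of the null position the core sits.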
Force Measurement
Balance
• A simple lever system, shown in Fig. 9.1, called a balance, has long been used
as a force measuring device.
• To measure the unknown force F at a distance L from the pivot, a mass m at a
distance l from the pivot is used.
• The system is in equilibrium when FL = mgl.
• With the knowledge of the other parameters, viz. the lengths L and l, the mass m and
the gravitational acceleration g, the force F can be calculated.
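The equilibrium condition FL = mgl rearranges directly to F = mgl/L. A minimal sketch (mass and lever arms are illustrative):

```python
G = 9.81  # gravitational acceleration, m/s^2

def balance_force(m, l, L):
    """Unknown force from the moment balance F*L = m*g*l."""
    return m * G * l / L

# 2 kg counterweight at 0.1 m balancing a force applied at 0.4 m:
print(balance_force(m=2.0, l=0.1, L=0.4))  # force in newtons
```

The lever ratio l/L lets a small, accurately known mass balance a much larger unknown force.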

Hydraulic Load Cell


• In this type of device (Fig. 9.2), hydraulic pressure is used to indicate the
force F, applied to a diaphragm or some other type of force transmitting
element.
• When a force F is applied, pressure is developed in the fluid which is
normally an oil.
• This can be measured by a pressure indicating device like a Bourdon gauge.
• Such a device can be used up to very large forces, of the order of millions of
Newtons.
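The hydraulic load cell reduces to F = P·A if diaphragm stiffness and friction are neglected. A minimal sketch under that idealization (the pressure and diaphragm diameter are illustrative):

```python
import math

def indicated_force(pressure_pa, diaphragm_dia_m):
    """Force inferred from the gauge pressure acting on the
    diaphragm area (idealized: diaphragm stiffness and
    friction are neglected)."""
    area = math.pi * diaphragm_dia_m ** 2 / 4.0
    return pressure_pa * area

# 2 MPa indicated on a Bourdon gauge, 50 mm diameter diaphragm:
f = indicated_force(pressure_pa=2.0e6, diaphragm_dia_m=0.05)
print(f)  # force in newtons
```

Because oil pressures of tens of MPa are practical, even a modest diaphragm area supports the very large force ranges mentioned above.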
Pneumatic Load Cell
• In this type of load cell, shown in Fig. 9.3, air is supplied under pressure to a chamber having a diaphragm at one end and a
nozzle (Nozzle flapper) at the other.
• Application of force to the diaphragm deforms it and changes the gap between the extension of the diaphragm and the nozzle,
thus changing the pressure in the chamber.
• If the force F increases, the gap reduces and this increases pressure P2 in the chamber.
• This increase in pressure produces a force tending to return the diaphragm to its original position.
• For any force F, the system attains equilibrium and pressure P2 gives an indication of the force F.
• This type of load cell is used up to 20 kN.
Nozzle Flapper
• The nozzle flapper is a displacement transducer that translates displacements into
a pressure change.
• A secondary pressure-measuring device is therefore required within the
instrument. The general form of a nozzle flapper is shown schematically in Figure
19.7.
• Fluid at a known supply pressure, Ps, flows through a fixed restriction and then
through a variable restriction formed by the gap, x, between the end of the main
vessel and the flapper plate.
• The body whose displacement is being measured is connected physically to the
flapper plate. The output measurement of the instrument is the pressure Po in the
chamber shown in Figure 19.7, and this is almost proportional to x over a limited
range of movement of the flapper plate.
• Air is very commonly used as the working fluid and this gives the instrument a time constant of about 0.1 seconds.
• The instrument has extremely high sensitivity but its range of measurement is quite small.
• One very common application of nozzle flappers is measuring the displacements within a load cell, which are typically very small.