Ch.5 Errors During the Measurement Process



Introduction
Error is the difference between the measured value and the true value.
Absolute error:
|Error| = |measured value − true value| = |EA|
Percent error:
% Error = (|EA| / true value) × 100%
Problem: the true value is very seldom known.

Example
An object is known to weigh 25.0 grams. You weight the
object as 26.2 grams. What is the accuracy, inaccuracy,
error and percentage error of your measurement?

The instrument is 95.2 % accurate
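The arithmetic can be checked with a few lines of Python (an illustrative sketch; the variable names are mine, not part of the course material):

```python
# Minimal sketch: error, percent error and accuracy for the worked example above.
true_value = 25.0      # grams (known value)
measured = 26.2        # grams (your reading)

error = measured - true_value                    # +1.2 g
percent_error = abs(error) / true_value * 100    # 4.8 %
accuracy = 100 - percent_error                   # 95.2 %

print(f"error = {error:+.1f} g")
print(f"percent error = {percent_error:.1f} %")
print(f"accuracy = {accuracy:.1f} %")
```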

Introduction
Uncertainty
Any measured quantity is subject to uncertainty [it cannot be avoided and depends on the instrument resolution].
Uncertainty = an estimate of the probable error, giving an interval about the measured value in which we believe the true value must fall.
Uncertainty Analysis
The process of identifying and quantifying errors.

Introduction
Uncertainty
Confidence interval: the range of probable values of an experiment.
Error is primarily a theoretical concept, because its value is unknowable.
Uncertainty is a more practical concept: evaluating uncertainty allows you to place a bound on the likely size of the error.

Errors during the measurement process
ERRORS arise in two ways:
1- Errors arising during the measurement process itself.
2- Corruption during transfer of the signal from the point of measurement to some other point due to noise.
Only the first type will be discussed here, which is divided into:
Systematic errors: errors that are consistently on one side of the correct reading, i.e. either all the errors are positive or they are all negative.
Random errors: errors on either side of the true value caused by random and unpredictable effects.
Random errors often arise when measurements are taken by human observation of an analogue meter.

Sources of systematic error
1- Effect of environmental disturbances, wear, dust, and frequent use.
2- Disturbance of the measured system by the act of measurement.

System disturbance due to the measurement process
Example: measuring the temperature of hot water with a mercury-in-glass thermometer. The thermometer is a cold mass and so lowers the temperature of the water being measured.
Example: measuring the flow rate of a fluid in a pipe using an orifice plate. The measured flow rate is a function of the pressure drop across the orifice, and the orifice itself causes a pressure loss in the flowing fluid.
General rule: the process of measurement always disturbs the system being measured.
The magnitude of the disturbance varies from one measurement system to another and depends on the type of instrument used for measurement.
Ways of minimizing disturbance of the measured system

Measurements in electric circuits
Bridge circuits for measuring resistance values are a further example of the need for careful design of the measurement system. The impedance of the instrument measuring the bridge output voltage must be very large in comparison with the component resistances in the bridge circuit. Otherwise, the measuring instrument will load the circuit, draw current from it, and alter the measured output voltage.

Errors due to environmental inputs
The static and dynamic characteristics specified for measuring instruments are only valid for particular environmental conditions (e.g. of temperature and pressure). Away from the specified calibration conditions, the characteristics of measuring instruments vary to some extent and cause measurement errors. The magnitude of this environmental variation is quantified by the two constants known as sensitivity drift and zero drift.

Wear in instrument components
Systematic errors can frequently develop over a period of time because of wear in instrument components. Recalibration often provides a full solution to this problem.

Connecting leads
The resistance of connecting leads in electrical measurement systems (or of pipes in the case of pneumatically or hydraulically actuated measurement systems) is a common source of error. For instance, consider a resistance thermometer that is separated by 100 m from the other parts of the measurement system: the resistance of such a length of 20 gauge copper wire is about 7 Ω.

Random errors
Caused by unpredictable variations.
Occur on either side of the correct value, i.e. both positive and negative errors occur.
Can be largely eliminated by calculating the average of a number of repeated measurements.
The degree of confidence in the calculated mean/median values can be quantified by calculating the standard deviation or variance of the measurement data.

Uncertainty Analysis
WHY IS THERE UNCERTAINTY?
Measurements are performed with instruments, and no instrument can read to an infinite number of decimal places.
Uncertainty in measurement therefore depends on the scale (resolution) of the apparatus.

UNCERTAINTY IN MEASUREMENT
A reading uncertainty is how accurately an instrument's scale can be read.
Analogue Scales
Where the divisions are fairly large, the uncertainty is taken as: half the smallest scale division.
Where the divisions are small, the uncertainty is taken as: the smallest scale division.

Digital Scale
For a digital scale, the uncertainty is taken as: the smallest scale reading.
e.g. voltage = 29.7 mV ± 0.1 mV
This means the actual reading could be anywhere from 29.6 mV to 29.8 mV.

Example 1:
Measuring length with a metre rule: the smallest scale division is 0.1 cm (1 mm), so the uncertainty is 0.05 cm.
The length of a marker is 12.6 cm with uncertainty 0.05 cm, indicating the length of the marker could be anywhere from 12.55 cm to 12.65 cm.

Example 2:
Measuring the volume of liquid in a cylinder: the smallest scale division is 2 ml, so the uncertainty is 1 ml.
The measured volume is 34.4 ml with uncertainty 1 ml, so the value can be anywhere from 33.4 ml to 35.4 ml.

Example 3:
Measuring a mass with a digital weigh scale: the smallest reading is 0.1 g, so the uncertainty is 0.1 g.
A measurement of a sample is 23.3 g, written as 23.3 ± 0.1 g, so the value can be anywhere from 23.2 g to 23.4 g.

BASIC RULES FOR UNCERTAINTY CALCULATIONS
% uncertainty = (absolute uncertainty / reading) × 100

Example
m = (3.3 ± 0.2) kg = (3.3 kg ± 6.1%)
The Absolute Uncertainty is: Δm = 0.2 kg = (6.1/100) × 3.3 kg
The Relative Uncertainty is: Δm/m = 6.1% = (0.2/3.3) × 100%

Combining uncertainties
1) Addition and Subtraction: ADD the Absolute Uncertainties
Rule: (A ± ΔA) + (B ± ΔB) = (A + B) ± (ΔA + ΔB)
(A ± ΔA) − (B ± ΔB) = (A − B) ± (ΔA + ΔB)
Consider the numbers (6.5 ± 0.5) m and (3.3 ± 0.1) m:
Add: (6.5 ± 0.5) m + (3.3 ± 0.1) m = (9.8 ± 0.6) m
Subtract: (6.5 ± 0.5) m − (3.3 ± 0.1) m = (3.2 ± 0.6) m

Combining uncertainties
2) Multiplication and Division: ADD the Relative Uncertainties
Rule: (A ± ΔA%) × (B ± ΔB%) = (A × B) ± (ΔA% + ΔB%)
(A ± ΔA%) / (B ± ΔB%) = (A / B) ± (ΔA% + ΔB%)
Consider the numbers (5.0 m ± 4.0%) and (3.0 s ± 3.3%):
Multiply: (5.0 m ± 4.0%) × (3.0 s ± 3.3%) = (15.0 m·s ± 7.3%)

Combining uncertainties
3) For a number raised to a power, fractional or not, the rule is simply to MULTIPLY the Relative Uncertainty by the power.
Rule: (A ± ΔA%)^n = (A^n ± nΔA%)
Consider the number (2.0 m ± 1.0%):
Cube: (2.0 m ± 1.0%)^3 = (8.0 m³ ± 3.0%)
Square Root: (2.0 m ± 1.0%)^(1/2) = (1.4 m^(1/2) ± 0.5%)

Combining uncertainties
4) For multiplying a number by a constant, there are two different rules depending on which type of uncertainty you are working with at the time.
Rule - Absolute Uncertainty: c(A ± ΔA) = cA ± c(ΔA)
Consider: 1.5 × (2.0 ± 0.2) m = (3.0 ± 0.3) m
Note that the Absolute Uncertainty is multiplied by the constant.
Rule - Relative Uncertainty: c(A ± ΔA%) = cA ± ΔA% (the Relative Uncertainty is unchanged by the constant).
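The four rules above can be collected into small helper functions. This is a minimal sketch of the worst-case (add-the-uncertainties) rules used in this chapter, not the root-sum-square method; the function names are mine:

```python
# Worst-case uncertainty combination rules from this chapter.
# Each quantity is handled as a (value, absolute uncertainty) pair.

def add(a, da, b, db):
    """(A ± dA) + (B ± dB) = (A + B) ± (dA + dB)"""
    return a + b, da + db

def subtract(a, da, b, db):
    """(A ± dA) - (B ± dB) = (A - B) ± (dA + dB)"""
    return a - b, da + db

def multiply(a, da, b, db):
    """Relative uncertainties add: d(AB)/(AB) = dA/A + dB/B"""
    value = a * b
    return value, abs(value) * (da / abs(a) + db / abs(b))

def divide(a, da, b, db):
    """Relative uncertainties add: d(A/B)/(A/B) = dA/A + dB/B"""
    value = a / b
    return value, abs(value) * (da / abs(a) + db / abs(b))

def power(a, da, n):
    """Relative uncertainty is multiplied by the power n."""
    value = a ** n
    return value, abs(value) * abs(n) * (da / abs(a))

# Examples from the slides:
print(add(6.5, 0.5, 3.3, 0.1))                        # (9.8, 0.6)
print(multiply(5.0, 5.0 * 0.040, 3.0, 3.0 * 0.033))   # ~15.0 with ~7.3 % uncertainty
print(power(2.0, 2.0 * 0.010, 3))                     # (8.0, 0.24), i.e. 8.0 m^3 ± 3 %
```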

Example: Calculation of Speed
Use the following data to calculate the speed, and the uncertainty in speed, of a moving object (see the sketch below):
d = 16 cm ± 0.5 cm
t = 2 s ± 0.5 s
v = ?
v = d / t
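A minimal sketch of the solution, applying the division rule above (variable names are mine):

```python
# Speed and its uncertainty from d = 16 ± 0.5 cm and t = 2 ± 0.5 s.
d, dd = 16.0, 0.5     # cm
t, dt = 2.0, 0.5      # s

v = d / t                              # 8.0 cm/s
rel_unc = dd / d + dt / t              # 0.03125 + 0.25 = 0.28125 (28.1 %)
dv = v * rel_unc                       # 2.25 cm/s

print(f"v = {v} ± {dv} cm/s  (relative uncertainty {rel_unc * 100:.1f} %)")
# v = 8.0 ± 2.25 cm/s, i.e. about 28 % uncertainty
```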

Uncertainties for Calculations Involving Functions
FUNCTIONS OF ONE VARIABLE
If the calculated parameter R is a function of the measured value x, then R is said to be a function of x, and it is often written as R(x). When this is the case, the uncertainty associated with R is obtained from:
ΔR = |dR/dx| · Δx
where |dR/dx| is the absolute value of the derivative of R with respect to x, and Δx is the uncertainty in the measurement of x.

Uncertainties for Calculations Involving Functions
FUNCTIONS OF MORE THAN ONE VARIABLE
If R is a function of several independently measured values, R(x1, x2, …, xn), the combined uncertainty is commonly estimated by combining the individual contributions in quadrature:
ΔR = √[(∂R/∂x1 · Δx1)² + (∂R/∂x2 · Δx2)² + … + (∂R/∂xn · Δxn)²]
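As a sketch of how such a multi-variable propagation can be evaluated numerically, assuming the quadrature form given above (the finite-difference step h and the function names are my own choices):

```python
import math

def propagate(f, values, uncertainties, h=1e-6):
    """Estimate dR for R = f(x1, ..., xn) by numerically evaluating
    each partial derivative and combining the contributions in quadrature."""
    total = 0.0
    for i, (x, dx) in enumerate(zip(values, uncertainties)):
        shifted = list(values)
        shifted[i] = x + h
        dRdx = (f(*shifted) - f(*values)) / h   # forward-difference partial derivative
        total += (dRdx * dx) ** 2
    return math.sqrt(total)

# Example: R = x * y with x = 5.0 ± 0.2 and y = 3.0 ± 0.1
R = lambda x, y: x * y
print(propagate(R, [5.0, 3.0], [0.2, 0.1]))   # ~ sqrt((3*0.2)^2 + (5*0.1)^2) ≈ 0.78
```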

Statistical analysis of measurements subject to random errors
Mean and median values
The average value of a set of measurements of a constant quantity can be expressed as either the mean value or the median value.
As the number of measurements increases, the difference between the mean and median values becomes very small.
For any set of n measurements x1, x2, …, xn the mean is given by:
mean = (x1 + x2 + … + xn) / n
The median is the middle value when the measurements in the data set are written down in ascending order of magnitude; for an odd number of measurements n, the median value is x(n+1)/2.

Mean and median values
For a set of 9 measurements x1, x2, …, x9 written in ascending order, the median = x5.
For 10 measurements x1, …, x10, the median = (x5 + x6)/2.
Suppose that the length of a steel bar is measured by a number of different observers and the following set of 11 measurements is recorded (units mm). We will call this measurement set A.
(Measurement set A)
398 - 420 - 394 - 416 - 404 - 408 - 400 - 420 - 396 - 413 - 430
In descending order: 430 - 420 - 420 - 416 - 413 - 408 - 404 - 400 - 398 - 396 - 394
Sum = 4499, Mean = 4499/11 = 409.0, and median = 408.
Suppose now that the measurements are taken again using a better measuring rule, and with the observers taking more care, to produce the following measurement set B:
(Measurement set B)
409 - 406 - 402 - 407 - 405 - 404 - 407 - 404 - 407 - …

Mean and median values
Which of the two measurement sets, A and B, should we have most confidence in?
Set B is more reliable, since the measurements are much closer together. In set A, the spread between the smallest (394) and largest (430) value is 36, whilst in set B the spread is only 6.
Thus, the smaller the spread of the measurements, the more confidence we have in the mean or median value calculated.
Let us now see what happens if we increase the number of measurements by extending measurement set B to 23 measurements. We will call this measurement set C.
(Measurement set C)
409 - 406 - 402 - 407 - 405 - 404 - 407 - 404 - 407 - 407 - 408 - 406 - 410 - 406 - 405 - 408 - 406 - 409 - 406 - 405 - 409 - 406 - 407
Mean = 406.5 and median = 406.
This confirms our earlier statement that the difference between the mean and median values becomes very small as the number of measurements increases.
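The means and medians quoted for sets A and C can be reproduced with Python's statistics module (a minimal sketch using the data listed above):

```python
from statistics import mean, median

set_A = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]
set_C = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408,
         406, 410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

print(sum(set_A), mean(set_A), median(set_A))   # sum = 4499, mean = 409, median = 408
print(round(mean(set_C), 1), median(set_C))     # mean = 406.5, median = 406
```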

Standard deviation and variance
Expressing the spread of measurements as the range between the largest and smallest value is not a very good way of examining how the measurement values are distributed about the mean value. A much better way of expressing the distribution is to calculate the variance or standard deviation of the measurements.
The variance (V) is given by:
V = Σd² / (n − 1)
where d is the deviation of each measurement from the mean.
The standard deviation (σ) is simply the square root of the variance. Thus:
σ = √V

Example:
Calculate V and σ for measurement sets A, B and C.
Solution:
Set A (mean = 409)
Σ(deviations)² = 1370; n = number of measurements = 11.
Then, V = Σ(deviations)² / (n − 1) = 1370/10 = 137 and σ = √137 = 11.7

Set B (mean = 406)
From this data, using the same analysis, V = 4.2 and σ = 2.05.
Set C (mean = 406.5)
From this data, using the same analysis, V = 3.53 and σ = 1.88.

Summary
         V       σ
Set A    137     11.7
Set B    4.2     2.05
Set C    3.53    1.88

Note that the smaller values of V and σ for measurement set B compared with set A correspond with the respective size of the spread in the range between maximum and minimum values for the two sets.

Thus, as V and σ decrease for a measurement set, we are able to express greater confidence that the calculated mean or median value is close to the true value, i.e. that the averaging process has reduced the random error value close to zero.
Comparing V and σ for measurement sets B and C, V and σ get smaller as the number of measurements increases, confirming that confidence in the mean value increases as the number of measurements increases.
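The V and σ values above can be checked with a short sketch (sample variance with the n − 1 divisor, as used in this chapter; set B is omitted because its full data list is not reproduced here):

```python
from statistics import variance, stdev   # both use the (n - 1) divisor

set_A = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]
set_C = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408,
         406, 410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

for name, data in [("Set A", set_A), ("Set C", set_C)]:
    print(name, round(variance(data), 2), round(stdev(data), 2))
# Set A: V = 137, sigma = 11.7;  Set C: V ≈ 3.53, sigma ≈ 1.88
```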

Graphical data analysis techniques - frequency distributions
Graphical techniques are a very useful way of analyzing the way in which random measurement errors are distributed. The simplest way of doing this is to draw a histogram, in which the data are divided into bands of equal width and the number of measurements falling within each band is counted.
The table below shows the set C data, with the chosen bands 2 mm wide.
Band [Interval]          401.5-403.5   403.5-405.5   405.5-407.5   407.5-409.5   409.5-411.5
Number of measurements   1             5             11            5             1
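The band counts in the table can be generated from the set C data in a few lines (a sketch; the band edges follow the table above):

```python
set_C = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408,
         406, 410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

edges = [401.5, 403.5, 405.5, 407.5, 409.5, 411.5]   # 2 mm wide bands
counts = [sum(lo < x <= hi for x in set_C) for lo, hi in zip(edges, edges[1:])]
print(counts)   # [1, 5, 11, 5, 1]
```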

Graphical data analysis techniques - frequency distributions
Figure A: Histogram of measurements and deviations (set C data, 2 mm bands as in the table above).
Figure B: Frequency distribution curve of deviations.

Graphical data analysis techniques - frequency distributions
Figure A: Histogram of measurements and deviations.
As the number of measurements approaches infinity, the histogram becomes a smooth curve known as a frequency distribution curve.

Graphical data analysis techniques - frequency distributions
The ordinate of this curve is the frequency of occurrence of each deviation value, F(D), and the abscissa is the deviation, D.
If the frequency distribution curve is normalized such that the area under it is unity, then the curve is known as a probability curve, and F(D) at any given deviation D becomes the probability density function (p.d.f.).

Graphical data analysis techniques - frequency distributions
The condition that the area under the curve is unity can be expressed mathematically as:
∫ F(D) dD = 1, with the integral taken from −∞ to +∞
The probability that the error in any measurement lies between two levels D1 and D2 is the area under the curve contained between two vertical lines drawn through D1 and D2. This can be expressed mathematically as:
P(D1 ≤ D ≤ D2) = ∫ F(D) dD, with the integral taken from D1 to D2

Graphical data analysis techniques - frequency distributions
The cumulative distribution function (c.d.f.) is defined as the probability of observing a value less than or equal to D0.
The c.d.f. is the area under the curve to the left of a vertical line drawn through D0:
P(D ≤ D0) = ∫ F(D) dD, with the integral taken from −∞ to D0

Gaussian distribution
For measurement of random errors only
The frequency of small deviations from the
mean value is much greater than the frequency
of large deviations.
The number of measurements with a small error
is much larger than the number of measurements
with a large error.
Alternative names for the Gaussian distribution
are the Normal distribution or Bell-shaped
distribution.


Gaussian distribution
A Gaussian curve is defined as a normalized frequency distribution that is symmetrical about the line of zero error and in which the frequency and magnitude of quantities are related by the expression:
F(x) = [1 / (σ√(2π))] · exp[−(x − m)² / (2σ²)]
where m is the mean value of the data set x and σ is the standard deviation.

Gaussian distribution
If the deviations D = x − m are substituted into this equation, then:
F(D) = [1 / (σ√(2π))] · exp[−D² / (2σ²)]

Gaussian distribution
The curve of deviation frequency F(D) plotted against deviation magnitude D is a Gaussian curve known as the error frequency distribution curve.
If the standard deviation is used as a unit of error, the Gaussian curve can be used to determine the probability that the deviation in any particular measurement in a Gaussian data set is greater than a certain value. The probability that the error lies in a band between error levels D1 and D2 can be expressed as:
P(D1 ≤ D ≤ D2) = ∫ [1 / (σ√(2π))] · exp[−D² / (2σ²)] dD, with the integral taken from D1 to D2

Gaussian distribution
Solution of this expression is simplified by the substitution: z = D/σ = (x − m)/σ
The effect of this is to change the error distribution curve into a new Gaussian distribution that has a standard deviation of one (σ = 1) and a mean of zero (m = 0).
This new form, shown in Figure C, is known as a standard Gaussian curve, and the dependent variable is now z instead of D. The equation can now be re-expressed as:
P(D1 ≤ D ≤ D2) = P(z1 ≤ z ≤ z2) = ∫ [1 / √(2π)] · exp(−z²/2) dz, with the integral taken from z1 to z2

Gaussian distribution

Unfortunately, this equation can't be solved analytically using


tables of standard integrals, and numerical integration
provides the only method of solution.
However, standard Gaussian tables that tabulate F(z) for
various values of z can be used.

Gaussian distribution
Standard Gaussian tables
A standard Gaussian table tabulates F(z) for various values of z, where F(z) is given by:
F(z) = [1 / √(2π)] · ∫ exp(−u²/2) du, with the integral taken from −∞ to z
Thus, F(z) gives the proportion of data values that are less than or equal to z. This proportion is the area under the standard Gaussian curve that lies to the left of z.
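F(z) can also be computed directly rather than read from tables. A minimal sketch using the error function, which is standard mathematics rather than anything specific to these slides:

```python
import math

def F(z):
    """Standard Gaussian c.d.f.: proportion of values less than or equal to z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(F(0.0), 4))   # 0.5
print(round(F(1.0), 4))   # 0.8413
print(round(F(-1.0), 4))  # 0.1587
```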

Gaussian distribution
Standard Gaussian tables
The Gaussian table can be used to determine the probability that any measurement lies between D1 and D2 [equivalently, between z1 and z2]:
P(z1 < z < z2) = F(z2) − F(z1)

EXAMPLE:
Finding the area under the
standard normal curve to the
left of 1.23

EXAMPLE: Finding the area under the standard


normal curve to the right of 0.76

EXAMPLE:
FINDING THE AREA UNDER THE
STANDARD NORMAL CURVE THAT
LIES BETWEEN 0.68 AND 1.82
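The three areas in these examples can be evaluated with the same F(z) (a sketch; the commented values agree with standard normal tables to four figures):

```python
import math

def F(z):
    """Standard Gaussian c.d.f."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(F(1.23), 4))             # area to the left of 1.23   ≈ 0.8907
print(round(1 - F(0.76), 4))         # area to the right of 0.76  ≈ 0.2236
print(round(F(1.82) - F(0.68), 4))   # area between 0.68 and 1.82 ≈ 0.2139
```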

SUMMARY

Example:
How many measurements in a data set subject to random errors lie outside deviation boundaries of +σ and −σ?
Solution
For E = σ, z = 1.
The required number is represented by the sum of the two shaded areas in Figure D. This can be expressed mathematically as:

P(E < −σ or E > +σ) = P(z < −1) + P(z > +1)
Using the table:
P(z < −1) = 0.1587
P(z > +1) = 1 − 0.8413 = 0.1587

P(E < −σ or E > +σ) = 0.1587 + 0.1587 = 0.3174 ≈ 32%
i.e. 32% of the measurements lie outside the ±σ boundaries, so 68% of the measurements lie inside.

Similar analysis shows that boundaries of ±2σ contain 95.4% of data points, and extending the boundaries to ±3σ encompasses 99.7% of data points.

Standard error of the mean
The previous analysis shows how measurements with random errors are distributed about the mean value. However, some error remains between the mean value of a set of measurements and the true value, i.e. averaging a number of measurements will only yield the true value if the number of measurements is infinite.
The error between the mean of a finite data set and the true measurement value (the mean of the infinite data set) is defined as the standard error of the mean, α. This is calculated as:
α = σ / √n

Standard error of the mean
α tends towards zero as the number of measurements expands towards infinity. The measurement value obtained from a set of n measurements, x1, x2, …, xn, can then be expressed as: x = xmean ± α
For data set C, n = 23, σ = 1.88 and α = 0.39. The length can therefore be expressed as 406.5 ± 0.4 (68% confidence limit). However, it is more usual to express measurements with 95% confidence limits (±2σ boundaries). In this case, 2σ = 3.76, 2α = 0.78, and the length can be expressed as 406.5 ± 0.8 (95% confidence limits).
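The set C figures quoted above follow directly (a minimal sketch; alpha denotes the standard error of the mean, σ/√n):

```python
import math
from statistics import mean, stdev

set_C = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408,
         406, 410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

n = len(set_C)                 # 23
sigma = stdev(set_C)           # ~ 1.88
alpha = sigma / math.sqrt(n)   # standard error of the mean ~ 0.39

print(f"{mean(set_C):.1f} ± {alpha:.1f}      (68 % confidence)")
print(f"{mean(set_C):.1f} ± {2 * alpha:.1f}  (95 % confidence)")
# -> 406.5 ± 0.4 and 406.5 ± 0.8
```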

Estimation of random error in a single measurement:
Error = ±(1.96σ + α)
Example 3.4
Suppose that a standard mass is measured 30 times, and the calculated values of σ and α are σ = 0.43 and α = 0.08. If the instrument is then used to measure an unknown mass and the reading is 105.6 kg, how should the mass value be expressed?
Solution
Error = ±(1.96σ + α) = ±0.92. The mass value should therefore be expressed as:
105.6 ± 0.9 kg
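The same calculation in a couple of lines (a sketch; the 1.96 factor corresponds to the 95% confidence level used in the formula above):

```python
sigma, alpha = 0.43, 0.08
reading = 105.6                              # kg

error = 1.96 * sigma + alpha                 # 0.9228 ≈ 0.92
print(f"{reading} ± {round(error, 1)} kg")   # 105.6 ± 0.9 kg
```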

Example:
The following 10 measurements were made of the output voltage from a high-gain amplifier contaminated by noise fluctuations:
1.53, 1.57, 1.54, 1.54, 1.50, 1.51, 1.55, 1.54, 1.56, 1.53
Estimate the accuracy to which the mean value is determined from these 10 measurements.
If 1000 measurements were taken, instead of 10, but σ remained the same, by how much would the accuracy of the calculated mean value be improved?
What is the error in the 1.51 reading, and how should it be written?
Solution: see the sketch below.
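Since the worked solution is not reproduced here, the following sketch shows one way to carry it out; the variable names are mine, the "accuracy of the mean" is taken as the standard error α = σ/√n, and the single-reading error bound uses the ±(1.96σ + α) formula above:

```python
import math
from statistics import mean, stdev

v = [1.53, 1.57, 1.54, 1.54, 1.50, 1.51, 1.55, 1.54, 1.56, 1.53]

m = mean(v)                           # ~ 1.537 V
sigma = stdev(v)                      # ~ 0.021 V
alpha_10 = sigma / math.sqrt(10)      # ~ 0.0067 V (68 % confidence on the mean)
alpha_1000 = sigma / math.sqrt(1000)  # ~ 0.00067 V, i.e. 10x smaller (sqrt(1000/10) = 10)

single_error = 1.96 * sigma + alpha_10   # error bound on one reading, e.g. the 1.51 V value
print(round(m, 3), round(sigma, 3), round(alpha_10, 4), round(alpha_1000, 5))
print(f"single reading: 1.51 ± {single_error:.2f} V")
```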

Distribution of manufacturing tolerances
Manufacturing processes are subject to random variations that cause random errors in measurements. In most cases, these random variations in manufacturing, which are known as tolerances, fit a Gaussian distribution.
Example
An integrated circuit chip contains 10^5 transistors. The transistors have a mean current gain of 20 and a standard deviation of 2. Calculate the following:
(a) The number of transistors with a current gain between 19.8 and 20.2
(b) The number of transistors with a current gain greater than 17.

Solution (a):
P[19.8 < X < 20.2] = P[−0.1 < z < +0.1] = P[z < 0.1] − P[z < −0.1]
From tables, P[z < 0.1] = 0.5398 and P[z < −0.1] = 0.4602
Hence, P[z < 0.1] − P[z < −0.1] = 0.5398 − 0.4602 = 0.0796
Thus 0.0796 × 10^5 = 7960 transistors have a current gain in the range from 19.8 to 20.2.
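Part (a), and the unstated part (b), can be checked numerically. This is a sketch using the error function; part (b) is my own completion of the example:

```python
import math

def F(z):
    """Standard Gaussian c.d.f."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, mean_gain, sigma = 10**5, 20.0, 2.0

# (a) gain between 19.8 and 20.2 -> z between -0.1 and +0.1
p_a = F((20.2 - mean_gain) / sigma) - F((19.8 - mean_gain) / sigma)
print(round(p_a, 4), round(p_a * n))   # ≈ 0.0797 -> ≈ 7966 (4-figure tables give 0.0796 -> 7960)

# (b) gain greater than 17 -> z > -1.5
p_b = 1.0 - F((17.0 - mean_gain) / sigma)
print(round(p_b, 4), round(p_b * n))   # ≈ 0.9332 -> roughly 9.3 x 10^4 transistors
```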

Goodness of fit to a Gaussian distribution
All of the analysis of random deviations presented so far only applies when the data being analyzed belong to a Gaussian distribution. Hence, the degree to which a set of data fits a Gaussian distribution should be tested. This test can be carried out in one of three ways; the simplest is to plot a histogram and look for a Bell-shape. Deciding whether or not the histogram confirms a Gaussian distribution is a matter of judgment; however, some deviation from the perfect shape is to be expected even if the data really are Gaussian.

END OF CH5
