12-Phys261-AppendixA Errors - F2015
One of the main goals of the Physics Lab is that you learn about error analysis and the
role it plays in experimental science. This appendix briefly reviews the topics you will need to
know about error analysis.
Illegitimate errors involve making gross mistakes in the experimental setup, in taking or
recording data, or in calculating results. Examples of illegitimate errors include: measuring
time t when you were supposed to be measuring temperature T, misreading a measurement on
a scale so that you think it is 2.0 when it should be 12.0, typing 2.2 into your spreadsheet
when you meant to type 20.2, or using the formula "momentum = mv²" rather than
"momentum = mv".
Random errors involve errors in measurement due to random changes or fluctuations in the
process being measured or in the measuring instrument. Random measuring errors are very
common. For example, suppose you measure the length of an object using a ruler and cannot
decide whether the length is closer to 10 or 11 mm. If you simply cannot tell which it is
closer to, then you will tend to make a random error of about ±0.5 mm in your choice.
Another example of a random error is when you try to read a meter on which the reading is
fluctuating. We say that "noise" causes the reading to change with time, and this leads to a
random error in determining the true reading of the meter.
A sampling error is a special kind of random error that occurs when you make a finite number
of measurements of something which can take on a range of values. For example, the
students in the university have a range of ages. We could find the exact average age of a
student in the University by averaging together the ages of all of the students. Suppose
instead that we only took a sub-group, or random sample of students, and averaged together
their ages. This sample average would not in general be equal to the exact average age of all
of the students in the University. It would tend to be more or less close to the exact average
depending on how large or small the group was. The difference between the sample average
and the exact average is an example of a sampling error. In general, the larger the sample, the
closer the sample average will be to the true average.
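The labs use Excel, but the effect of sample size on a sampling error can be sketched in a few lines of Python; the population of "ages" below is made up purely for illustration:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# Hypothetical population: 10,000 "ages" centered on 20 years
population = [random.gauss(20, 3) for _ in range(10000)]
exact_avg = sum(population) / len(population)

# A random sample of 25 students gives a sample average that
# differs from the exact average by a sampling error
sample = random.sample(population, 25)
sample_avg = sum(sample) / len(sample)
```

Rerunning this with a larger sample (say 2500 instead of 25) makes the sample average land closer to the exact average, which is exactly the point made above.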
A systematic error is a repeated and consistent error that occurs in all of your measurements
due to the design of the apparatus. Examples of systematic errors include: measuring length
with a ruler which is too short, measuring time with a stopwatch which runs too fast, or
measuring voltage with a voltmeter which is not properly calibrated. Systematic errors can be
very difficult to detect because your results will tend to be consistent, repeatable and precise.
The best way to find a systematic error is to compare your results with results from a
completely different apparatus.
(2) What is error analysis good for?
If you're like most students who have worked on these labs, you may find yourself
wondering why you have to go through all of the trouble of using error analysis. In science and
engineering, if you don't understand why you are calculating something, then you really are
wasting your time. The most important thing to understand about error analysis is what it can do
for you in the lab. There are four main things that you should use error analysis for:
(i) Finding silly mistakes, such as typing a wrong number into your spreadsheet, using the
wrong units, entering a wrong formula, or reading a scale wrong. These illegitimate
errors must be corrected before your data can be meaningfully interpreted.
(ii) Finding a more accurate value for a quantity by making several measurements.
(iii) Determining the precision of your experimental results.
(iv) Finding whether your results agree with theory or other experimental results.
It is important to realize that all of the formulas which follow only work for random
errors! Despite this, performing error analysis on the random errors in an experiment can often
reveal the presence of illegitimate errors. In fact, in these labs you will probably find this to be
the most useful thing you can do with error analysis. How to use error analysis to find
illegitimate errors is discussed in section 8.
Ruler: One of the most common cases you will encounter involves using a ruler to
measure a length. A ruler has a series of marks that are separated by a fixed increment. Suppose
that you can very carefully align one end of the object with one of the marks; then the
uncertainty in measuring the length of the object comes from the difficulty of figuring out
exactly where the other end of the object falls on the ruler. If you can measure to the nearest
mark (i.e. decide which mark is closer to the length), then the worst mistake you will make is
one-half of the distance between the marks. For the rulers used in the lab, the smallest division is
usually 1 mm, so you should be able to measure to a precision of about ±0.5 mm. Since you will
not always make the worst mistake, this is actually an overestimate of the typical mistake you
can expect to make. A better estimate is about 0.3 mm or about 1/3 of the distance between the
marks. This is called the “1/3 rule”. If you have good eyesight, are careful reading the value, and
carefully align the ruler with the object, then you might be able to measure to ±0.25 mm. To
determine whether you should use one half, one-third or one quarter of a unit as the error, you
need to consider how carefully you read the scale when you took the data.
Pointer and scale: The next most common case you will encounter is taking a reading
off of a scale that has a pointer that indicates the value. Such scales are found on analog
voltmeters, ammeters, pressure gauges, and thermometers. The rule for estimating the error in
such a "pointer and scale" instrument is exactly the same as for using a ruler. Most people can
figure out which mark the pointer is closest to and this implies a worst case error of one-half the
smallest division marked on the scale. A better estimate for the uncertainty would be 1/3 of the
increment between divisions on the scale.
Digital readouts: The next most common case involves measurements made with a
digital readout. These readouts always have a finite number of digits. If the reading is stable,
and there are no other sources of error, then the estimated experimental error can be taken as
being equal to ±1/2 unit on the rightmost digit on the scale. If the system is designed such that it
displays the digit that is nearest to the true value, then the worst mistake the instrument will
make is ±1/2 unit on the rightmost digit on the scale and a better estimate for the uncertainty is
±1/2 unit. One needs to be careful however. Often when making a measurement, the rightmost
digits on a digital scale will fluctuate randomly due to noise. In this case, the estimated error
should be taken as about ±1 unit on the rightmost digit which does not fluctuate. Also, if you
time an event with a stopwatch, you will need to take into account the fact that starting and
stopping the timer is not very precise. To determine the error in this case, you will need to take
repeated measurements, as discussed below. Also, some instruments change by multiples of ±1
unit on the smallest scale. To be sure of what your instrument does, you need to check the
instrument’s operating manual or use the procedure in the next paragraph.
Determining the error from the data: The above techniques work fairly well when you
only have one measurement of a quantity. In some of the labs you make many measurements of
the same quantity. In this case, it is possible to directly determine the random experimental error
in each observation, rather than simply estimating it. Suppose you make N measurements of x,
let's call them x1, x2, x3, x4, ... xN. We will generally denote a given measurement, the i-th one, as
xi. If N is large enough, then the experimental error in one measurement can be taken as the
standard deviation (see the discussion below on the standard deviation)
Δx ≈ σx = [ (1/(N−1)) Σ(i=1..N) (xi − <x>)² ]^(1/2),   [A.1]
where <x> is the average value of x, and the symbol Σ is the Greek letter Sigma and the notation
means that the expression which follows the Σ should be summed up while letting i range from 1
to N. For example, suppose that you made 5 measurements (N=5) and x1=0, x2=2, x3=1.5,
x4=2.5, and x5=0.5. A simple calculation shows that the average is <x>=1.3. The estimated
error in each point is thus:
Δx ≈ σx = [ (1/(5−1)) Σ(i=1..5) (xi − <x>)² ]^(1/2)
   = [ (1/4) ( (0 − 1.3)² + (2 − 1.3)² + (1.5 − 1.3)² + (2.5 − 1.3)² + (0.5 − 1.3)² ) ]^(1/2)
   = [ (1/4) ( (1.3)² + (0.7)² + (0.2)² + (1.2)² + (0.8)² ) ]^(1/2)
   = 1.04.
Notice that Δx = ±1.04 is quite reasonable since this is about how far each measurement is from
the average; x1=0 is 1.3 below the average, x2=2 is 0.7 above the average, x3=1.5 is 0.2 above the
average, x4=2.5 is 1.2 above the average, and x5=0.5 is 0.8 below the average.
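If you want to check the arithmetic, Python's statistics module computes the same sample standard deviation as Equation A.1:

```python
import statistics

data = [0.0, 2.0, 1.5, 2.5, 0.5]   # the five measurements above
mean = statistics.mean(data)        # <x> = 1.3
sigma = statistics.stdev(data)      # Eq. A.1, with the N-1 denominator
# round(sigma, 2) gives 1.04, matching the hand calculation
```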
Radioactive decay and counting random events: Some experiments involve counting
how many random events happen in a certain period of time, for example, counting how many
atoms decay in a second in a radioactive material. How can we assign an error to such a
measurement? Assigning an error can seem puzzling because in each second there is a definite
integer-number of counts. The idea is that if we repeated the measurements many times, we
would tend to find a different number of counts in the same time interval, even if the sample and
detector were prepared in exactly the same way. By repeating the measurements many times, we
could find the average number of counts in each time interval. In general, the average number of
decays in a given time interval will be different from the number of decays found in an
individual measurement. The difference between the average number of decays in a given
interval and the number of decays found in one measurement can be thought of as the error in the
measurement. For random decays, the rule is very simple: if you measure N events, then the
error in the measurement is ±N^(1/2). Thus if you measure 100 counts, the error is ±100^(1/2) = ±10
counts.
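The square-root-of-N rule is a one-liner to apply; for example, in Python:

```python
import math

counts = 100                 # number of decays observed in the interval
err = math.sqrt(counts)      # the sqrt(N) rule gives an error of ±10 counts
```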
Δv = (∂v/∂x) Δx.   [A.4]
You should recognize this as just the ordinary result from calculus for finding the change in a
function v when its argument x changes by a small amount Δx.
Now suppose that there are random errors in both x and t, of magnitude Δx and Δt
respectively, and we are trying to find the error in v = x/t. A derivation of this result is beyond
the scope of this class, and we will simply quote the answer; viz.,
Δv = [ (∂v/∂x)² (Δx)² + (∂v/∂t)² (Δt)² ]^(1/2).   [A.5]
There are three things to notice about this expression. First, if Δt = 0 then it reduces to
Equation A.4. Second, this expression does not correspond to the usual rule from calculus for
finding the change in a function v when x and t change by small amounts. This is because we
are assuming that the errors in x and t are random and uncorrelated, so that they can work
together or oppose each other in producing changes in v. Finally notice that this expression can
be simplified by evaluating the derivatives. We can use A.3 to replace ∂v/∂x and also use:
∂v/∂t = −x/t².   [A.6]
We can then rewrite Equation A.5 as
Δv = [ ((1/t) Δx)² + ((−x/t²) Δt)² ]^(1/2) = (x/t) [ (Δx/x)² + (Δt/t)² ]^(1/2)
   = v [ (Δx/x)² + (Δt/t)² ]^(1/2).   [A.7]
The above ideas can be generalized to include functions with errors in an arbitrary
number of arguments. For example, if f is a function of x, y, z, t, r, and B, and these have
random errors Δx, Δy, Δz, Δt, Δr, and ΔB respectively, then the error in f is just
Δf = [ (∂f/∂x)²(Δx)² + (∂f/∂y)²(Δy)² + (∂f/∂z)²(Δz)² + (∂f/∂t)²(Δt)² + (∂f/∂r)²(Δr)² + (∂f/∂B)²(ΔB)² ]^(1/2).   [A.8]
The following examples illustrate some special cases that you will encounter in the labs.
(i) Suppose f = ax+b, where a and b are constants and x has an uncertainty Δx.
The uncertainty in f can be found from Equation A.8 by noting that x is the
only variable and ∂f/∂x = a. Thus,
Δf = [ (∂f/∂x)² (Δx)² ]^(1/2) = [ (a Δx)² ]^(1/2) = a Δx.
(ii) Suppose f = x+y, where x has error Δx and y has error Δy. Then ∂f/∂x = 1 and
∂f/∂y = 1 and
Δf = [ (∂f/∂x)² (Δx)² + (∂f/∂y)² (Δy)² ]^(1/2) = [ (Δx)² + (Δy)² ]^(1/2).
(iii) Suppose f = xnym, where n and m are constants. The derivatives are just
∂f/∂x= nxn-1ym = nf/x and ∂f/∂y = mxnym-1 = mf/y. Thus,
Δf = [ (∂f/∂x)² (Δx)² + (∂f/∂y)² (Δy)² ]^(1/2) = [ (nf/x)² (Δx)² + (mf/y)² (Δy)² ]^(1/2) = f [ (n Δx/x)² + (m Δy/y)² ]^(1/2).
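As a sanity check on these special cases, here is a sketch (not part of the lab procedure) that applies Equation A.8 numerically, estimating the partial derivatives by finite differences, and compares the result with the closed form of Equation A.7 for v = x/t. The function name and the numbers are made up for illustration:

```python
import math

def propagate(f, values, errors, h=1e-6):
    """Random-error propagation, Eq. A.8, with the partial
    derivatives estimated by central finite differences."""
    total = 0.0
    for i, (val, err) in enumerate(zip(values, errors)):
        up = list(values); up[i] = val + h
        dn = list(values); dn[i] = val - h
        deriv = (f(*up) - f(*dn)) / (2 * h)
        total += (deriv * err) ** 2
    return math.sqrt(total)

# v = x/t with x = 10 ± 0.1 and t = 2 ± 0.05 (made-up numbers)
dv = propagate(lambda x, t: x / t, [10.0, 2.0], [0.1, 0.05])

# closed form, Eq. A.7: Δv = v [ (Δx/x)² + (Δt/t)² ]^(1/2)
closed = (10.0 / 2.0) * math.sqrt((0.1 / 10.0)**2 + (0.05 / 2.0)**2)
# dv and closed agree to about six decimal places
```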
(5) The mean value <x> and the error in the mean Δ<x>
Suppose you make N measurements of the quantity x and denote the result of the first
measurement by x1, the second measurement by x2,... and the N-th measurement by xN. The
average or mean value of x is denoted by <x> and is just:
<x> = (1/N) Σ(i=1..N) xi.   [A.9]
Excel Tip: In Excel you can use the function "=AVERAGE(...)" to automatically
calculate the mean or average of a set of data. For example, suppose you wanted to find
the mean of some data that was in cells D13 to D27. You would enter the command
=average(D13:D27) in the cell where you want the average value to appear. This is nice
because Excel figures out how many points N there are and you don’t have to keep track.
Notice that the above definition of <x> is just our normal definition of the average of a
set of numbers. Why is the mean value important? It turns out that if all of the measurements
have the same experimental uncertainty, then the mean value is the best estimate of the true
value of x.
The error in the mean value can be found by propagating errors, as in section 4.
Assuming that each of the measurements xi are independent variables, and that the error in each
measurement is Δx, one finds
Δ<x> = Δx / N^(1/2).   [A.10]
This result says that the error in the mean value, Δ<x>, is smaller than the error in any one
measurement Δx, by a factor of N1/2. For example, suppose you make 100 measurements. The
error in the mean, Δ<x>, will be 10 times smaller than the error Δx in an individual
measurement. What this means is that you can obtain very precise measurements, even with
imprecise instruments, provided you take many data points.
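The gain from averaging is easy to see numerically. A short sketch, reusing the five measurements from the standard-deviation example earlier in this appendix:

```python
import math
import statistics

data = [0.0, 2.0, 1.5, 2.5, 0.5]        # five measurements of x
dx = statistics.stdev(data)              # error in one measurement, ≈ 1.04
mean_err = dx / math.sqrt(len(data))     # Eq. A.10: Δ<x> = Δx / N^(1/2)
# mean_err ≈ 0.46, smaller than dx by a factor of 5^(1/2)
```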
(6) The standard deviation σx
The standard deviation σx of a set of N measurements is defined by
σx = [ (1/(N−1)) Σ(i=1..N) (xi − <x>)² ]^(1/2).   [A.11]
Excel Tip: In Excel you can use the command "=STDEV(..)" to automatically
calculate the standard deviation of a set of data. For example, suppose you
wanted to find the standard deviation of some data which was in cells D13 to
D27. You would enter the command =STDEV(D13:D27) in the cell where you
want the standard deviation to appear.
The standard deviation tells you how far a typical data point is from the average. If your data has
a lot of scatter in it, then you will find a large standard deviation. If all of your measurements are
practically identical, then the standard deviation will be quite small. Typically you would expect
that a given measured value of x might be different from the true value by about Δx, the
uncertainty in the measurement. Thus if the average value is a good estimate of the true value,
you expect
xi - <x> ≈ Δx
Substituting this into Equation A.11, one finds that for large N
σx ≈ [ (1/(N−1)) Σ(i=1..N) (Δx)² ]^(1/2) = [ (N/(N−1)) (Δx)² ]^(1/2) = Δx [ N/(N−1) ]^(1/2) ≈ Δx.
(ii) Why do we square each of the terms? If we did not take the square, but
just added together all of the terms xi - <x>, we would get zero. To see this, just
look at the definition of <x>! The point is that data which falls below the average
is balanced by data which falls above that average. Roughly speaking, by taking
the square, we make all the terms positive and end up finding the (root mean
square) distance of a typical data point from the average, independent of whether
it is above or below the average.
(iii) Does σx get bigger if we measure more data points? No. The standard
deviation does not tend to get bigger (or smaller) as you take more data points.
Physically speaking, σx is just the typical distance a data point is from the
average. As you take more data, you tend to get a more accurate value for the
true value of σx, not a bigger value.
(7) The weighted mean <x> and the error in the weighted mean Δ<x>
In some of the labs, you will need to find the best estimate for a measured parameter by
combining together measurements which have different sizes of errors. For example, in 261 Lab
2, some of your measurements will be made with a ruler and some with vernier calipers. The
measurements made with the calipers will be much more accurate than those taken with the ruler.
If we want to combine together data from measurements with different errors, we need to use the
weighted average, which is defined by
<x> = [ Σ(i=1..N) xi/(Δxi)² ] / [ Σ(i=1..N) 1/(Δxi)² ].   [A.12]
Notice that in this expression, each measurement gets multiplied by 1/(Δxi)² before it is
added to the other measurements. The factor 1/(Δxi)² can be thought of as the importance or
"weight" of the measurement. Thus if Δx1 = 1 mm and Δx2 = 0.1 mm, then the second
measurement is 100 times more important than the first. From this you can see that it really pays
to make more accurate measurements; they carry a lot of weight! If we want to simplify
Equation A.12, we can define the weight of the i-th measurement as wi, where
wi = 1/(Δxi)²,   [A.13]
and rewrite Equation A.12 as
<x> = [ Σ(i=1..N) wi xi ] / [ Σ(i=1..N) wi ].   [A.14]
Notice that the denominator is just the sum of all of the weights, so that it acts to normalize out
the total weight of all the measurements.
The error in the weighted mean is given by
Δ<x> = 1 / [ Σ(i=1..N) 1/(Δxi)² ]^(1/2) = 1 / [ Σ(i=1..N) wi ]^(1/2).   [A.15]
Notice that the error in the weighted mean is just the square root of the same term which appears
in the denominator of the weighted average. If you want to remember this result, notice that it
just says that the error in the average is one over the square root of the total weight. Squaring and
rearranging would give you an equation which says that the total weight is one over the square of
the error in the mean. This is the same relationship as Equation A.13, except now it is for the
mean rather than an individual measurement.
Excel Tip: Excel does not have a command for calculating the weighted
mean or the error in the weighted mean. The easiest way to calculate the
weighted mean is to set up three columns, the first with the data in it, the
second with 1 over the square of the error in each measurement, and the third
with the product of the first and second columns. To get the weighted mean
you then sum the third column and divide by the sum of the second column.
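The same three-column recipe can be sketched in Python as well; the two measurements below are made-up values matching the Δx1 = 1 mm, Δx2 = 0.1 mm example:

```python
import math

x  = [10.0, 10.4]   # two measurements (made-up values, in mm)
dx = [1.0, 0.1]     # their errors; the second carries 100 times the weight

w = [1 / e**2 for e in dx]                             # weights, Eq. A.13
wmean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)  # weighted mean, Eq. A.14
werr = 1 / math.sqrt(sum(w))                           # its error, Eq. A.15
# wmean ≈ 10.40: the answer sits almost on top of the more
# accurate measurement, as the weights demand
```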
(8) The chi-squared test (χ2)
This section briefly discusses everything you need to know about χ2.
How to calculate χ2
In order to calculate χ2, you need three things: N measured data points (x1, x2, ... xN), a
theory which tells you how big each of the data points was supposed to be (we'll call this xi,theory
for the i-th data point), and estimated errors for each measurement (Δx1, Δx2, ...ΔxN). χ2 can
then be found from the formula
χ2 = Σ(i=1..N) [ (xi − xi,theory) / Δxi ]².
For example, suppose that you made 5 measurements (N=5, x1=1.1, x2=1.2, x3=1.5,
x4=1.5, and x5=1.3), that the theory says that x should have been 1.25, and that the estimated
error in each measurement was Δxi=0.15. Then
χ2 = Σ(i=1..5) [ (xi − 1.25) / 0.15 ]²
   = ((1.1 − 1.25)/0.15)² + ((1.2 − 1.25)/0.15)² + ((1.5 − 1.25)/0.15)² + ((1.5 − 1.25)/0.15)² + ((1.3 − 1.25)/0.15)²
   = 6.78.
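The same sum is a one-liner outside of Excel too; for example, in Python with the numbers above:

```python
data = [1.1, 1.2, 1.5, 1.5, 1.3]   # the five measurements
theory = 1.25                       # the theoretical value
err = 0.15                          # estimated error in each point

chi2 = sum(((x - theory) / err) ** 2 for x in data)
# round(chi2, 2) gives 6.78
```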
Excel Tip: Excel does not have a command for calculating χ2, but the easiest way
to calculate it is to set up five columns, the first with the data in it, the second with
the theory in it, the third with the difference between the theory and data, the
fourth with the error in it, and the fifth with the square of the third column divided
by the square of the error in the data. To get χ2, you then sum the last column.
The degrees of freedom ν
In order to use χ2 you also need to know ν, the "degrees of freedom" in your experiment.
In these labs there are only two cases of practical importance:
(i) You did not use any fitting parameters or averages to compute the theoretical
values used in χ2. If the theory is given and you did not use any of your data to
fit the theory to the data, then the degrees of freedom is equal to the number N
of data points, i.e. ν=N.
(ii) Fitting parameters or averages were used to find the theory. If you computed
your theoretical values (used in χ2 ) by averaging your N data points or by
using a fitting routine, then ν = N-α, where α is the number of fitting parameters
you used. For example, if xtheory = <x> then you computed one parameter, the
average, from your data, so that α = 1 and ν = N-1. If you used Excel to fit
your data to a straight line, and used the slope and intercept of the line to
compute theoretical values, then you used two fitting parameters (slope and
intercept), so that ν = N-2.
What does the value of χ2/ν tell you
(i) χ2/ν ≈ 1. This means that your results and theory are consistent to within your
experimental errors.
(ii) χ2/ν >> 1. This means that something is wrong! There are three possibilities:
(a) You have made an illegitimate error in measuring the data, calculating the
theory, recording your data or errors, or analyzing your data. To determine
if this is what happened, you need to go back and look at your data and
theory and see if the numbers are reasonable. Be especially careful of units.
(b) You have underestimated the size of the errors you are making in your
measurements, or, the quantity you are trying to measure has a distribution
of possible values. To determine if this is what happened, look at the scatter
in your data and see if it is much larger than your estimated error.
(c) If you can rule out (a) and (b) above, then having χ2/ν much bigger than 1
means that the theory is wrong.
(iii) χ2/ν << 1. This also means that something is wrong! There are only two
possibilities:
(a) You have overestimated the size of your errors, that is, you have actually
measured things much better than your claimed errors. You need to go back
and get a more accurate, and less conservative, estimate of your errors.
(b) You have made an illegitimate error in calculating χ2/ν. Most likely you
have neglected units or made a simple computational error.
What is P(χ2, ν) and how do you find it
You may be wondering how much bigger or smaller than 1 that χ2/ν has to be for
something to be clearly wrong, or how close to 1 does χ2/ν have to be for you to say that the
theory and experiment agree. This can only be answered by considering the quantity P(χ2, ν).
Suppose you have found a value for χ2 and that you have ν degrees of freedom. Then P(χ2,ν) is
the probability that random errors will cause χ2 to be larger than you found.
Excel Tip: The easiest way to get P(χ2, ν) is directly from Excel by typing in the
command =CHIDIST(χ2, ν). For example, if your value of χ2 is 3 and you have five
degrees of freedom, then you would type in =CHIDIST(3,5). See the help manual in
Excel for more information.
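If you are not using Excel, P(χ2, ν) can also be computed directly; for even ν the chi-squared tail probability has a simple closed form. This is a sketch under that restriction, not a general-purpose routine:

```python
import math

def chi2_tail(chi2, nu):
    """P(χ², ν) for even ν: the probability that random errors
    produce a χ² larger than the one found.  Uses the closed form
    P = exp(-χ²/2) · Σ (χ²/2)^j / j!, summed for j = 0 .. ν/2 - 1
    (valid only when ν is even)."""
    assert nu % 2 == 0, "this short form only handles even ν"
    half = chi2 / 2
    return math.exp(-half) * sum(half**j / math.factorial(j)
                                 for j in range(nu // 2))

p = chi2_tail(8.0, 8)   # ≈ 0.43 for χ² = 8 with ν = 8
```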
Values of P(χ2, ν) can also be obtained from the following table.
For example, suppose that you have N =9 measurements, you used one fitting parameter,
and you computed χ2 = 12. The degrees of freedom is thus ν = N − 1 = 9 − 1 = 8 and the reduced χ2
is χ2/ν = 12/8 = 1.5. Since ν=8, we first find the ν=8 row. Looking along this row, we see the
values 0.206, 0.342, 0.436, 0.691, 0.918, 1.191, 1.670, etc. Since our value of χ2/ν is 1.5, the
closest value listed in the row is 1.670. Looking at the top of this column we find the number
0.10. Thus P(χ2 =12, ν =8) is equal to about 0.1. This means that there is about a 10% chance
that random errors would produce a value of χ2 which is larger than 12.
What does P(χ2, ν) tell you about your experiment
As discussed above, the value of χ2/ν can tell you if something is wrong with your
experiment or if it agrees with theory. P(χ2, ν) does the same thing, except that it provides a
more precise statement of the agreement between theory and experiment. This section uses a
few examples to help you understand P(χ2, ν).
(i) P(χ2, ν)≈ 0.5 means everything is OK. Suppose you have 8 degrees of freedom (ν =8) and
have found that χ2=8 so that χ2/ν =1. From our discussion of χ2/ν above, we know that when
χ2/ν is close to 1, our results are consistent with the theory. We expect then that P(χ2, ν)
should tell us the same thing. If you use the table above, you will find that P(χ2 =8, ν =8) ≈ 0.40
(because χ2/ν =1 is about halfway between the P=0.3 and P=0.5 columns). What this means
is that if you repeated the experiment many times, you would find that 40% of the time your
random errors (with the size you used in calculating χ2) will produce a χ2 which is bigger
than 8. Logically this also means that 60% of the time random errors will produce a χ2 which
is smaller than 8. This means that random errors are just about as likely to produce a bigger
or a smaller value of χ2. If you think about this, you will realize that this is just what you
would expect if the theory is in agreement with the data.
(ii) If P(χ2, ν) is smaller than 0.05 then something may be wrong. Suppose you have 8 degrees
of freedom (ν =8) and have found that χ2=16 so that χ2/ν =2. From our discussion of χ2/ν
above, we know that when χ2/ν is much bigger than 1, our results are not consistent with the
theory. If you use the table above, you would find that P(χ2=16, ν =8) ≈ 0.05. What this
means is that if you repeated the experiment many times, you would find that about 5% of
the time your random errors (with the size you used in calculating χ2) will produce a χ2
which is bigger than 16. Logically, this also means that 95% of the time random errors
should produce a χ2 which is smaller than 16. This means that random errors are much more
likely to produce a value of χ2 which is smaller than you found. Another way to say this is
that 5% isn't very big, so that it is rather unlikely that random errors could have produced a
χ2 as large as you found. This is equivalent to saying that the theory is not in good
agreement with the data.
(iii) If P(χ2, ν) exceeds about 0.95 then something may be wrong. Suppose you have 8 degrees
of freedom (ν =8) and have found that χ2 = 2.7 so that χ2/ν = 0.33. From our discussion of
χ2/ν above, we know that when χ2/ν is much smaller than 1, we have probably made an
illegitimate error or used too large of an error estimate. From the table, you will find that
P(χ2=2.7, ν =8) ≈ 0.95. What this means is that if you repeated the experiment many times,
about 95% of the time your random errors (with the size you used in calculating χ2) will
produce a χ2 which is bigger than 2.7. Logically, this also means that only 5% of the time
random errors should produce a χ2 which is smaller than 2.7. This means that random errors
are much more likely to produce a value of χ2 which is larger than you found. Another way
to say this is that you should have expected random errors to produce a value of χ2 which
was larger than you found. This is equivalent to saying that the data has too little scatter in it
to be consistent with your estimated errors, and this suggests that something may be wrong
with your estimated errors or your analysis.
There are a few other things that may be useful to understand about P(χ2, ν). First, it
provides the same information as χ2/ν, but it does a better job. In particular, it tells you precisely
how much bigger or smaller χ2/ν has to be compared to 1 to decide something is wrong. Second,
P(χ2, ν) must be understood statistically. If you find P(χ2=8, ν =8)=0.5, this does not mean that
the theory is definitely correct. Rather, it means that the theory is statistically consistent with the
data. Similarly, P(χ2, ν)=0.01 does not mean that your theory is definitely wrong or that you
have necessarily done anything wrong; in fact, 1% of the time you should expect to find P(χ2,
ν)=0.01. It just says that you wouldn't expect this to happen more than 1% of the times you tried
the experiment, so you had better check things to see if something is wrong.
(9) Summary of Some Error Analysis Results
In the following formulas, xi denotes the i-th measurement of the quantity x, Δxi denotes the
estimated random experimental error in xi, and a total of N measurements were made.
Note: For most experiments, one expects that σx ≈ Δxi, i.e. the standard deviation is about
equal to the estimated experimental error in one measurement of x.
Degrees of freedom in χ2......... ν = N-α (where N is the number of measurements and α is the
number of fitting parameters used to find the theoretical value from the data)