Solutions Manual for Analytical Chemistry 2.1


Chapter 4
Most of the problems in this chapter require the calculation of a data set’s
basic statistical characteristics, such as its mean, median, range, standard
deviation, or variance. Although equations for these calculations are high-
lighted in the solution to the first problem, for the remaining problems,
both here and elsewhere in this text, such values simply are provided. Be
sure you have access to a scientific calculator, a spreadsheet program, such as
Excel, or a statistical software program, such as R, and that you know how
to use it to complete these most basic of statistical calculations.
1. The mean is obtained by adding together the mass of each quarter and dividing by the number of quarters; thus

$$\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n} = \frac{5.683 + 5.549 + \cdots + 5.554 + 5.632}{12} = 5.583 \text{ g}$$
To find the median, we first order the data from the smallest mass to the largest mass

5.536  5.539  5.548  5.549  5.551  5.552  5.552  5.554  5.620  5.632  5.683  5.684

and then, because there is an even number of samples, take the average of the n/2 and the n/2 + 1 values; thus

$$\bar{X} = \frac{X_6 + X_7}{2} = \frac{5.552 + 5.552}{2} = 5.552 \text{ g}$$

As a reminder, if we have an odd number of data points, then the median is the middle data point in the rank-ordered data set or, more generally, the value of the (n + 1)/2 data point in the rank-ordered data set, where n is the number of values in the data set.

The range is the difference between the largest mass and the smallest mass; thus

$$w = X_{\text{largest}} - X_{\text{smallest}} = 5.684 - 5.536 = 0.148 \text{ g}$$


The standard deviation for the data is

$$s = \sqrt{\frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n - 1}} = \sqrt{\frac{(5.683 - 5.583)^2 + \cdots + (5.632 - 5.583)^2}{12 - 1}} = 0.056 \text{ g}$$

The variance is the square of the standard deviation; thus

$$s^2 = (0.056)^2 = 3.1 \times 10^{-3}$$

The variance in this case has units of g², which is correct but not particularly informative in an intuitive sense; for this reason, we rarely attach a unit to the variance. See Rumsey, D. J. Journal of Statistics Education 2009, 17(3) for an interesting argument that the variance should be excluded from summary statistics.
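Because the text assumes access to a calculator, a spreadsheet, or a statistical program such as R, here is a minimal R sketch that reproduces these summary statistics from the twelve masses listed above; the vector name `quarters` is ours, not from the text.

```r
# masses (g) of the twelve quarters, in rank order
quarters <- c(5.536, 5.539, 5.548, 5.549, 5.551, 5.552,
              5.552, 5.554, 5.620, 5.632, 5.683, 5.684)

mean(quarters)          # 5.583 g
median(quarters)        # 5.552 g
diff(range(quarters))   # 0.148 g (range as max - min)
sd(quarters)            # 0.056 g
var(quarters)           # 3.1e-03
```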

2. (a) The values are as follows:


mean: 243.5 mg
median: 243.4 mg
range: 37.4 mg
standard deviation: 11.9 mg
variance: 141
(b) We are interested in the area under a normal distribution curve that lies to the right of 250 mg, as shown in Figure SM4.1. Because this limit is greater than the mean, we need only calculate the deviation, z, and look up the corresponding probability in Appendix 3; thus

$$z = \frac{X - \mu}{\sigma} = \frac{250 - 243.5}{11.9} = 0.546$$

[Figure SM4.1: Normal distribution curve for Problem 4.2 given a population with a mean of 243.5 mg and a standard deviation of 11.9 mg; the area in blue is the probability that a random sample has more than 250.0 mg of acetaminophen.]

From Appendix 3 we see that the probability is 0.2946 when z is 0.54 and 0.2912 when z is 0.55. Interpolating between these values gives the probability for a z of 0.546 as

$$0.2946 - 0.6(0.2946 - 0.2912) = 0.2926$$

Based on our experimental mean and standard deviation, we expect that 29.3% of the tablets will contain more than 250 mg of acetaminophen.
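As a quick check on the Appendix 3 lookup and interpolation, the same probability is available from R's `pnorm`; this one-liner is ours, not part of the original solution.

```r
# probability that a tablet exceeds 250 mg, given a mean of 243.5 mg and sd of 11.9 mg
pnorm(250, mean = 243.5, sd = 11.9, lower.tail = FALSE)
# 0.2925 -- about 29.3%, matching the interpolated table value
```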
3. (a) The means and the standard deviations for each of the nominal
dosages are as follows:
nominal dosage mean std. dev.
100-mg 95.56 2.16
60-mg 55.47 2.11
30-mg 26.85 1.64
10-mg 8.99 0.14
(b) We are interested in the area under a normal distribution curve that lies to the right of each tablet's nominal dosage, as shown in Figure SM4.2 for tablets with a nominal dosage of 100-mg. Because the nominal dosage is greater than the mean, we need only calculate the deviation, z, for each tablet and look up the corresponding probability in Appendix 3. Using the 100-mg tablet as an example, the deviation is

$$z = \frac{X - \mu}{\sigma} = \frac{100 - 95.56}{2.16} = 2.06$$

[Figure SM4.2: Normal distribution curve for Problem 4.3 given a population with a mean of 95.56 mg and a standard deviation of 2.16 mg; the area in blue is the probability that a random sample has more than 100.0 mg of morphine hydrochloride.]

for which the probability is 0.0197; thus, we expect that 1.97% of tablets drawn at random from this source will exceed the nominal dosage. The table below summarizes results for all four sources of tablets.

nominal dosage    z      % exceeding nominal dosage
100-mg            2.06   1.97
60-mg             2.15   1.58
30-mg             1.92   2.74
10-mg             7.21   —

For tablets with a 10-mg nominal dosage, the value of z is sufficiently large that effectively no tablet is expected to exceed the nominal dosage.
4. The mean and the standard deviation for the eight spike recoveries are 99.5% and 6.3%, respectively. As shown in Figure SM4.3, to find the expected percentage of spike recoveries in the range 85%–115%, we find the percentage of recoveries that exceed the upper limit by calculating z and using Appendix 3 to find the corresponding probability

$$z = \frac{X - \mu}{\sigma} = \frac{115 - 99.5}{6.3} = 2.46 \quad \text{or } 0.695\%$$

and the percentage of recoveries that fall below the lower limit

$$z = \frac{X - \mu}{\sigma} = \frac{85 - 99.5}{6.3} = -2.30 \quad \text{or } 1.07\%$$

Subtracting these two values from 100% gives the expected probability of spike recoveries between 85%–115% as

$$100\% - 0.695\% - 1.07\% = 98.2\%$$

[Figure SM4.3: Normal distribution curve for Problem 4.4 given a population with a mean of 99.5% and a standard deviation of 6.3%; the area in blue is the probability that a spike recovery is between 85% and 115%.]
5. (a) Substituting known values for the mass, the gas constant, the temperature, the pressure, and the volume gives the compound's formula weight as

$$FW = \frac{(0.118 \text{ g})(0.082056 \text{ L atm mol}^{-1}\text{K}^{-1})(298.2 \text{ K})}{(0.724 \text{ atm})(0.250 \text{ L})} = 16.0 \text{ g/mol}$$

To estimate the uncertainty in the formula weight, we use a propagation of uncertainty. The relative uncertainty in the formula weight is

$$\frac{u_{FW}}{FW} = \sqrt{\left(\frac{0.002}{0.118}\right)^2 + \left(\frac{0.000001}{0.082056}\right)^2 + \left(\frac{0.1}{298.2}\right)^2 + \left(\frac{0.005}{0.724}\right)^2 + \left(\frac{0.005}{0.250}\right)^2} = 0.0271$$

which makes the absolute uncertainty in the formula weight

$$u_{FW} = 0.0271 \times 16.0 \text{ g/mol} = 0.43 \text{ g/mol}$$

The formula weight, therefore, is 16.0±0.4 g/mol.
(b) To improve the uncertainty in the formula weight we need to
identify the variables that have the greatest individual uncertainty.
The relative uncertainties for the five measurements are

mass: 0.002/0.118 = 0.017


gas constant: 0.000001/0.082056 = 1.22×10⁻⁵
temperature: 0.1/298.2 = 3.4×10⁻⁴
pressure: 0.005/0.724 = 0.007
volume: 0.005/0.250 = 0.020
Of these variables, the two with the largest relative uncertainty are the
mass in grams and the volume in liters; these are the measurements
where an improvement in uncertainty has the greatest impact on the
formula weight’s uncertainty.
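A minimal R sketch of this propagation of uncertainty, using the values given above; the vector names `vals` and `u` are ours, not from the text.

```r
# propagation of uncertainty for the formula weight in Problem 5
vals <- c(m = 0.118, R = 0.082056, T = 298.2, P = 0.724, V = 0.250)
u    <- c(m = 0.002, R = 0.000001, T = 0.1,   P = 0.005, V = 0.005)

FW    <- with(as.list(vals), m * R * T / (P * V))   # 16.0 g/mol
rel_u <- u / vals                                   # individual relative uncertainties
u_FW  <- FW * sqrt(sum(rel_u^2))                    # 0.43 g/mol
round(rel_u, 4)   # the mass and the volume dominate, as noted in part (b)
```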
6. (a) The concentration of Mn2+ in the final solution is

$$\frac{0.250 \text{ g}}{0.1000 \text{ L}} \times \frac{1000 \text{ mg}}{\text{g}} \times \frac{10.00 \text{ mL}}{500.0 \text{ mL}} = 50.0 \text{ mg/L}$$

To estimate the uncertainty in concentration, we complete a propagation of uncertainty. The uncertainties in the volumes are taken from Table 4.2; to find the uncertainty in the mass, however, we must account for the need to tare the balance. Taking the uncertainty in any single determination of mass as ±1 mg, the absolute uncertainty in mass is

$$u_{\text{mass}} = \sqrt{(0.001)^2 + (0.001)^2} = 0.0014 \text{ g}$$

The relative uncertainty in the concentration of Mn2+, therefore, is

$$\frac{u_C}{C} = \sqrt{\left(\frac{0.0014}{0.250}\right)^2 + \left(\frac{0.00008}{0.1000}\right)^2 + \left(\frac{0.02}{10.00}\right)^2 + \left(\frac{0.20}{500.0}\right)^2} = 0.00601$$

which makes the absolute uncertainty in the concentration

$$u_C = 0.00601 \times (50.0 \text{ ppm}) = 0.3 \text{ ppm}$$

The concentration, therefore, is 50.0±0.3 ppm.
(b) No, we cannot improve the concentration's uncertainty by measuring the HNO3 with a pipet instead of a graduated cylinder. As we can see from part (a), the volume of HNO3 does not affect our calculation of either the concentration of Mn2+ or its uncertainty.

7. The weight of the sample taken is the difference between the container's original weight and its final weight; thus, the mass is

$$\text{mass} = 23.5811 \text{ g} - 22.1559 \text{ g} = 1.4252 \text{ g}$$

and its absolute uncertainty is

$$u_{\text{mass}} = \sqrt{(0.0001)^2 + (0.0001)^2} = 0.00014 \text{ g}$$

There is no particular need to tare the balance when we weigh by difference if the two measurements are made at approximately the same time; this is the usual situation when we acquire a sample by this method. If the two measurements are separated by a significant period of time, then we should tare the balance before each measurement and then include the uncertainty of both tares when we calculate the absolute uncertainty in mass.

The molarity of the solution is

$$\frac{1.4252 \text{ g}}{0.1000 \text{ L}} \times \frac{1 \text{ mol}}{121.34 \text{ g}} = 0.1175 \text{ M}$$

The relative uncertainty in this concentration is

$$\frac{u_C}{C} = \sqrt{\left(\frac{0.00014}{1.4252}\right)^2 + \left(\frac{0.01}{121.34}\right)^2 + \left(\frac{0.00008}{0.1000}\right)^2} = 0.00081$$

and the absolute uncertainty in the concentration is

$$u_C = 0.00081 \times (0.1175 \text{ M}) = 0.000095 \text{ M}$$

The concentration, therefore, is 0.1175±0.0001 M.
8. The mean value for n measurements is

$$\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n} = \frac{X_1 + X_2 + \cdots + X_{n-1} + X_n}{n} = \frac{1}{n}\left\{X_1 + X_2 + \cdots + X_{n-1} + X_n\right\}$$

If we let the absolute uncertainty in the measurement of Xi equal σ, then a propagation of uncertainty for the sum of n measurements is

$$\sigma_{\bar{X}} = \frac{1}{n}\sqrt{(\sigma)_1^2 + (\sigma)_2^2 + \cdots + (\sigma)_{n-1}^2 + (\sigma)_n^2} = \frac{1}{n}\sqrt{n(\sigma)^2} = \frac{\sqrt{n}\,\sigma}{n} = \frac{\sigma}{\sqrt{n}}$$
9. Because we are subtracting $\bar{X}_B$ from $\bar{X}_A$, a propagation of uncertainty of their respective uncertainties shows us that

$$u_{\bar{X}_A - \bar{X}_B} = \sqrt{\left(\frac{t_{\exp} s_A}{\sqrt{n_A}}\right)^2 + \left(\frac{t_{\exp} s_B}{\sqrt{n_B}}\right)^2} = \sqrt{\frac{t_{\exp}^2 s_A^2}{n_A} + \frac{t_{\exp}^2 s_B^2}{n_B}} = \sqrt{t_{\exp}^2\left(\frac{s_A^2}{n_A} + \frac{s_B^2}{n_B}\right)} = t_{\exp}\sqrt{\frac{s_A^2}{n_A} + \frac{s_B^2}{n_B}}$$

10. To have a relative uncertainty of less than 0.1% requires that we satisfy the following inequality

$$\frac{0.1 \text{ mg}}{x} \le 0.001$$

where x is the minimum mass we need to take. Solving for x shows that we need to weigh out a sample of at least 100 mg.
11. It is tempting to assume that using the 50-mL pipet is the best option because it requires only two transfers to dispense 100.0 mL, providing fewer opportunities for a determinate error; although this is true with respect to determinate errors, our concern here is with indeterminate errors. We can estimate the indeterminate error for each of the three methods using a propagation of uncertainty. When we use a pipet several times, the total volume dispensed is

$$V_{\text{total}} = \sum_{i=1}^{n} V_i$$

for which the uncertainty is

$$u_{V_{\text{total}}} = \sqrt{(u_{V_1})^2 + (u_{V_2})^2 + \cdots + (u_{V_{n-1}})^2 + (u_{V_n})^2} = \sqrt{n (u_{V_i})^2}$$

The uncertainties for dispensing 100.0 mL using each pipet are:

50-mL pipet: $u_{V_{\text{total}}} = \sqrt{2 (0.05)^2} = 0.071$ mL

25-mL pipet: $u_{V_{\text{total}}} = \sqrt{4 (0.03)^2} = 0.060$ mL

10-mL pipet: $u_{V_{\text{total}}} = \sqrt{10 (0.02)^2} = 0.063$ mL

where the uncertainty for each pipet is from Table 4.2. Based on these calculations, if we wish to minimize uncertainty in the form of indeterminate errors, then the best option is to use a 25-mL pipet four times.
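A minimal R sketch that reproduces the three combined uncertainties above; the data frame name `pipets` is ours, not from the text.

```r
# indeterminate error when dispensing 100.0 mL with each pipet (Problem 11)
# per-delivery uncertainties (mL) are those quoted from Table 4.2
pipets <- data.frame(size = c(50, 25, 10),
                     n    = c(2, 4, 10),         # transfers needed for 100.0 mL
                     u    = c(0.05, 0.03, 0.02))
pipets$u_total <- sqrt(pipets$n * pipets$u^2)
pipets   # the 25-mL pipet gives the smallest combined uncertainty (0.060 mL)
```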
12. There are many ways to use the available volumetric glassware to accomplish this dilution. Shown here are the optimum choices for a one-step, a two-step, and a three-step dilution using the uncertainties from Table 4.2. For a one-step dilution we use a 5-mL volumetric pipet and a 1000-mL volumetric flask; thus

$$\frac{u_C}{C} = \sqrt{\left(\frac{0.01}{5.00}\right)^2 + \left(\frac{0.30}{1000.0}\right)^2} = 0.0020$$

For a two-step dilution we use a 50-mL volumetric pipet and a 1000-mL volumetric flask followed by a 50-mL volumetric pipet and a 500-mL volumetric flask; thus

$$\frac{u_C}{C} = \sqrt{\left(\frac{0.05}{50.00}\right)^2 + \left(\frac{0.30}{1000.0}\right)^2 + \left(\frac{0.05}{50.00}\right)^2 + \left(\frac{0.20}{500.0}\right)^2} = 0.0015$$

Finally, for a three-step dilution we use a 50-mL volumetric pipet and a 100-mL volumetric flask, a 50-mL volumetric pipet and a 500-mL volumetric flask, and a 50-mL volumetric pipet and a 500-mL volumetric flask; thus

$$\frac{u_C}{C} = \sqrt{\left(\frac{0.05}{50.00}\right)^2 + \left(\frac{0.08}{100.0}\right)^2 + \left(\frac{0.05}{50.00}\right)^2 + \left(\frac{0.20}{500.0}\right)^2 + \left(\frac{0.05}{50.00}\right)^2 + \left(\frac{0.20}{500.0}\right)^2} = 0.0020$$

The smallest uncertainty is obtained with the two-step dilution.

13. The mean is the average value. If each measurement, Xi, is changed by the same amount, ΔX, then the total change for n measurements is nΔX and the average change is nΔX/n or ΔX. The mean, therefore, changes by ΔX. When we calculate the standard deviation

$$s = \sqrt{\frac{\sum (X_i - \bar{X})^2}{n - 1}}$$

the important term is the summation in the numerator, which consists of the difference between each measurement and the mean value

$$(X_i - \bar{X})^2$$

Because both Xi and X̄ change by ΔX, the value of Xi − X̄ becomes

$$(X_i + \Delta X) - (\bar{X} + \Delta X) = X_i - \bar{X}$$

which leaves unchanged the numerator of the equation for the standard deviation; thus, changing all measurements by ΔX has no effect on the standard deviation.
14. Answers to this question will vary with the object chosen. For a sim-
ple, regularly shaped object—a sphere or cube, for example—where
you can measure the linear dimensions with a caliper, Method A
should yield a smaller standard deviation and confidence interval
than Method B. When using a mm ruler to measure the linear di-
mensions of a regularly shaped object, the two methods should yield
similar results. For an object that is irregular in shape, Method B
should yield a smaller standard deviation and confidence interval.
15. The isotopic abundance for 13C is 1.11%; thus, for a molecule to average at least one atom of 13C, the total number of carbon atoms must be at least

$$N = \frac{n}{p} = \frac{1}{0.0111} = 90.1$$

which we round up to 91 atoms. The probability of finding no atoms of 13C in a molecule with 91 carbon atoms is given by the binomial distribution; thus

$$P(0, 91) = \frac{91!}{0!\,(91 - 0)!}(0.0111)^0 (1 - 0.0111)^{91-0} = 0.362$$

and 36.2% of such molecules will not contain an atom of 13C.
16. (a) The probability that a molecule of cholesterol has one atom of 13C is

$$P(1, 27) = \frac{27!}{1!\,(27 - 1)!}(0.0111)^1 (1 - 0.0111)^{27-1} = 0.224$$

or 22.4%. (b) From Example 4.10, we know that P(0,27) is 0.740. Because the total probability must equal one, we know that

$$P(\ge 2, 27) = 1.000 - P(0, 27) - P(1, 27) = 1.000 - 0.740 - 0.224 = 0.036$$

and 3.6% of cholesterol molecules will have two or more atoms of 13C.
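If a statistical program is handy, the binomial probabilities in Problems 15 and 16 can be checked directly; this minimal sketch uses base R's `dbinom` and `pbinom` and is ours, not part of the original solution.

```r
# binomial probabilities for 13C (p = 0.0111) in a 27-carbon molecule
dbinom(1, size = 27, prob = 0.0111)                       # P(1,27) = 0.224
pbinom(1, size = 27, prob = 0.0111, lower.tail = FALSE)   # P(>=2,27) = 0.036
```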
17. The mean and the standard deviation for the eight samples are, respectively, 16.883% w/w Cr and 0.0794% w/w Cr. The 95% confidence interval is

$$\mu = \bar{X} \pm \frac{ts}{\sqrt{n}} = 16.883 \pm \frac{(2.365)(0.0794)}{\sqrt{8}} = 16.883 \pm 0.066\% \text{ w/w Cr}$$

Based on this one set of experiments, and in the absence of any determinate errors, there is a 95% probability that the actual %w/w Cr in the reference material is in the range 16.817–16.949% w/w Cr.
18. (a) The mean and the standard deviation for the nine samples are 36.1 ppt and 4.15 ppt, respectively. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{X} = \mu \qquad H_A\!: \bar{X} \ne \mu$$

The test statistic is texp, for which

$$t_{\exp} = \frac{|\mu - \bar{X}|\sqrt{n}}{s} = \frac{|40.0 - 36.1|\sqrt{9}}{4.15} = 2.82$$

The critical value for t(0.05,8) is 2.306. Because texp is greater than t(0.05,8), we reject the null hypothesis and accept the alternative hypothesis, finding evidence, at α = 0.05, that the difference between X̄ and μ is too great to be explained by random errors in the measurements.

(b) Because concentration, C, and signal are proportional, we can use concentration in place of the signal when calculating detection limits. For the method blank we use a mean, C̄mb, of 0.16 ppt and a standard deviation, σmb, of 1.20 ppt, and for σA we use the standard deviation of 4.15 ppt from part (a); thus

$$C_{DL} = \bar{C}_{mb} + z\sigma_{mb} = 0.16 + (3.00)(1.20) = 3.76 \text{ ppt}$$

$$C_{LOI} = \bar{C}_{mb} + z\sigma_{mb} + z\sigma_A = 0.16 + (3.00)(1.20) + (3.00)(4.15) = 16.21 \text{ ppt}$$

$$C_{LOQ} = \bar{C}_{mb} + 10\sigma_{mb} = 0.16 + (10.00)(1.20) = 12.16 \text{ ppt}$$
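A minimal R sketch of the part (a) calculation, working from the summary statistics given above; the variable names are ours.

```r
# one-sample t-test from summary statistics (Problem 18a)
xbar <- 36.1; s <- 4.15; n <- 9; mu <- 40.0
t_exp  <- abs(mu - xbar) * sqrt(n) / s   # 2.82
t_crit <- qt(0.975, df = n - 1)          # 2.306 for a two-tailed test at alpha = 0.05
t_exp > t_crit                           # TRUE: reject the null hypothesis
```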
19. The mean and the standard deviation are, respectively, 0.639 and 0.00082. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{X} = \mu \qquad H_A\!: \bar{X} \ne \mu$$

The test statistic is texp, for which

$$t_{\exp} = \frac{|\mu - \bar{X}|\sqrt{n}}{s} = \frac{|0.640 - 0.639|\sqrt{7}}{0.00082} = 3.23$$

The critical value for t(0.01,6) is 3.707. Because texp is less than t(0.01,6), we retain the null hypothesis, finding no evidence, at α = 0.01, that there is a significant difference between X̄ and μ.
20. The mean and the standard deviation are 76.64 decays/min and 2.09 decays/min, respectively. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{X} = \mu \qquad H_A\!: \bar{X} \ne \mu$$

The test statistic is texp, for which

$$t_{\exp} = \frac{|\mu - \bar{X}|\sqrt{n}}{s} = \frac{|77.5 - 76.64|\sqrt{12}}{2.09} = 1.43$$

The critical value for t(0.05,11) is 2.2035. Because texp is less than t(0.05,11), we retain the null hypothesis, finding no evidence, at α = 0.05, that there is a significant difference between X̄ and μ.
21. The mean and the standard deviation are, respectively, 5730 ppm Fe and 91.3 ppm Fe. In this case we need to calculate μ, which is

$$\mu = \frac{(2.6540 \text{ g sample}) \times \dfrac{0.5351 \text{ g Fe}}{\text{g sample}} \times \dfrac{1\times10^6 \text{ µg}}{\text{g}}}{250.0 \text{ mL}} = 5681 \text{ ppm Fe}$$

The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{X} = \mu \qquad H_A\!: \bar{X} \ne \mu$$

The test statistic is texp, for which

$$t_{\exp} = \frac{|\mu - \bar{X}|\sqrt{n}}{s} = \frac{|5681 - 5730|\sqrt{4}}{91.3} = 1.07$$

The critical value for t(0.05,3) is 3.182. Because texp is less than t(0.05,3), we retain the null hypothesis, finding no evidence, at α = 0.05, that there is a significant difference between X̄ and μ.
22. This problem involves a comparison between two sets of unpaired data. For the digestion with HNO3, the mean and the standard deviation are, respectively, 163.8 ppb Hg and 3.11 ppb Hg, and for the digestion with the mixture of HNO3 and HCl, the mean and the standard deviation are, respectively, 148.3 ppb Hg and 7.53 ppb Hg. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{X}_{\text{HNO}_3} = \bar{X}_{\text{mix}} \qquad H_A\!: \bar{X}_{\text{HNO}_3} \ne \bar{X}_{\text{mix}}$$

Before we can test these hypotheses, however, we first must determine if we can pool the standard deviations. To do this we use the following null hypothesis and alternative hypothesis

$$H_0\!: s_{\text{HNO}_3} = s_{\text{mix}} \qquad H_A\!: s_{\text{HNO}_3} \ne s_{\text{mix}}$$

The test statistic is Fexp, for which

$$F_{\exp} = \frac{s_{\text{mix}}^2}{s_{\text{HNO}_3}^2} = \frac{(7.53)^2}{(3.11)^2} = 5.86$$

The critical value for F(0.05,5,4) is 9.364. Because Fexp is less than F(0.05,5,4), we retain the null hypothesis, finding no evidence, at α = 0.05, that there is a significant difference between the standard deviations. Pooling the standard deviations gives

$$s_{\text{pool}} = \sqrt{\frac{(4)(3.11)^2 + (5)(7.53)^2}{5 + 6 - 2}} = 5.98$$

The test statistic for the comparison of the means is texp, for which

$$t_{\exp} = \frac{|\bar{X}_{\text{HNO}_3} - \bar{X}_{\text{mix}}|}{s_{\text{pool}}} \times \sqrt{\frac{n_{\text{HNO}_3} \times n_{\text{mix}}}{n_{\text{HNO}_3} + n_{\text{mix}}}} = \frac{163.8 - 148.3}{5.98} \times \sqrt{\frac{5 \times 6}{5 + 6}} = 4.28$$

with nine degrees of freedom. The critical value for t(0.05,9) is 2.262. Because texp is greater than t(0.05,9), we reject the null hypothesis and accept the alternative hypothesis, finding evidence, at α = 0.05, that the difference between the means is significant.
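Because only the summary statistics are reproduced here, the following R sketch works from those values rather than from the raw data; the variable names are ours.

```r
# unpaired comparison from summary statistics (Problem 22)
x1 <- 163.8; s1 <- 3.11; n1 <- 5    # HNO3 digestion
x2 <- 148.3; s2 <- 7.53; n2 <- 6    # HNO3/HCl digestion

F_exp  <- s2^2 / s1^2                                     # 5.86; compare to qf(0.975, n2 - 1, n1 - 1)
s_pool <- sqrt(((n1 - 1)*s1^2 + (n2 - 1)*s2^2) / (n1 + n2 - 2))
t_exp  <- abs(x1 - x2) / s_pool * sqrt(n1*n2/(n1 + n2))   # 4.28; compare to qt(0.975, n1 + n2 - 2)
```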
23. This problem involves a comparison between two sets of unpaired data. For the samples of atmospheric origin, the mean and the standard deviation are, respectively, 2.31011 g and 0.000143 g, and for the samples of chemical origin, the mean and the standard deviation are, respectively, 2.29947 g and 0.00138 g. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{X}_{\text{atm}} = \bar{X}_{\text{chem}} \qquad H_A\!: \bar{X}_{\text{atm}} \ne \bar{X}_{\text{chem}}$$

Before we can test these hypotheses, however, we first must determine if we can pool the standard deviations. To do this we use the following null hypothesis and alternative hypothesis

$$H_0\!: s_{\text{atm}} = s_{\text{chem}} \qquad H_A\!: s_{\text{atm}} \ne s_{\text{chem}}$$

The test statistic is Fexp, for which

$$F_{\exp} = \frac{s_{\text{chem}}^2}{s_{\text{atm}}^2} = \frac{(0.00138)^2}{(0.000143)^2} = 97.2$$

The critical value for F(0.05,7,6) is 5.695. Because Fexp is greater than F(0.05,7,6), we reject the null hypothesis and accept the alternative hypothesis that the standard deviations are different at α = 0.05. Because we cannot pool the standard deviations, the test statistic, texp, for comparing the means is

$$t_{\exp} = \frac{|\bar{X}_{\text{atm}} - \bar{X}_{\text{chem}}|}{\sqrt{\dfrac{s_{\text{atm}}^2}{n_{\text{atm}}} + \dfrac{s_{\text{chem}}^2}{n_{\text{chem}}}}} = \frac{2.31011 - 2.29947}{\sqrt{\dfrac{(0.000143)^2}{7} + \dfrac{(0.00138)^2}{8}}} = 21.68$$

The number of degrees of freedom is

$$\nu = \frac{\left(\dfrac{(0.000143)^2}{7} + \dfrac{(0.00138)^2}{8}\right)^2}{\dfrac{\left((0.000143)^2/7\right)^2}{7 + 1} + \dfrac{\left((0.00138)^2/8\right)^2}{8 + 1}} - 2 = 7.21 \approx 7$$

The critical value for t(0.05,7) is 2.365. Because texp is greater than t(0.05,7), we reject the null hypothesis and accept the alternative hypothesis, finding evidence, at α = 0.05, that the difference between the means is significant. Rayleigh observed that the density of N2 isolated from the atmosphere was significantly larger than that for N2 derived from chemical sources, which led him to hypothesize the presence of an unaccounted for gas in the atmosphere.
24. This problem involves a comparison between two sets of unpaired data. For the standard method, the mean and the standard deviation are, respectively, 22.86 µL/m3 and 1.28 µL/m3, and for the new method, the mean and the standard deviation are, respectively, 22.51 µL/m3 and 1.92 µL/m3. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{X}_{\text{std}} = \bar{X}_{\text{new}} \qquad H_A\!: \bar{X}_{\text{std}} \ne \bar{X}_{\text{new}}$$

Before we can test these hypotheses, however, we first must determine if we can pool the standard deviations. To do this we use the following null hypothesis and alternative hypothesis

$$H_0\!: s_{\text{std}} = s_{\text{new}} \qquad H_A\!: s_{\text{std}} \ne s_{\text{new}}$$

The test statistic is Fexp, for which

$$F_{\exp} = \frac{s_{\text{new}}^2}{s_{\text{std}}^2} = \frac{(1.92)^2}{(1.28)^2} = 2.25$$

The critical value for F(0.05,6,6) is 5.820. Because Fexp is less than F(0.05,6,6), we retain the null hypothesis, finding no evidence, at α = 0.05, that there is a significant difference between the standard deviations. Pooling the standard deviations gives

$$s_{\text{pool}} = \sqrt{\frac{(6)(1.28)^2 + (6)(1.92)^2}{7 + 7 - 2}} = 1.63$$

The test statistic for the comparison of the means is texp, for which

$$t_{\exp} = \frac{|\bar{X}_{\text{std}} - \bar{X}_{\text{new}}|}{s_{\text{pool}}} \times \sqrt{\frac{n_{\text{std}} \times n_{\text{new}}}{n_{\text{std}} + n_{\text{new}}}} = \frac{22.86 - 22.51}{1.63} \times \sqrt{\frac{7 \times 7}{7 + 7}} = 0.40$$

with 12 degrees of freedom. The critical value for t(0.05,12) is 2.179. Because texp is less than t(0.05,12), we retain the null hypothesis, finding no evidence, at α = 0.05, that there is a significant difference between the new method and the standard method.
25. This problem is a comparison between two sets of paired data. The differences, which we define as (measured − accepted), are

0.0001  0.0013  –0.0003  0.0015  –0.0006

The mean and the standard deviation for the differences are 0.00040 and 0.00095, respectively. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{d} = 0 \qquad H_A\!: \bar{d} \ne 0$$

The test statistic is texp, for which

$$t_{\exp} = \frac{|\bar{d}|\sqrt{n}}{s_d} = \frac{(0.00040)\sqrt{5}}{0.00095} = 0.942$$

The critical value for t(0.05,4) is 2.776. Because texp is less than t(0.05,4), we retain the null hypothesis, finding no evidence, at α = 0.05, that the spectrometer is inaccurate.
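For paired data such as this, R's `t.test` reproduces the calculation directly; a minimal sketch using the five differences listed above (the vector name `d` is ours).

```r
# paired t-test on the differences from Problem 25
d <- c(0.0001, 0.0013, -0.0003, 0.0015, -0.0006)
t.test(d, mu = 0)   # t = 0.94, df = 4, p > 0.05: retain the null hypothesis
```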
26. This problem is a comparison between two sets of paired data. The differences, which we define as (ascorbic acid − sodium bisulfate), are

15  –31  1  20  4  –52  –22  –62  –50

The mean and the standard deviation for the differences are –19.7 and 30.9, respectively. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{d} = 0 \qquad H_A\!: \bar{d} \ne 0$$

The test statistic is texp, for which

$$t_{\exp} = \frac{|\bar{d}|\sqrt{n}}{s_d} = \frac{|{-19.7}|\sqrt{9}}{30.9} = 1.91$$

The critical value for t(0.10,8) is 1.860. Because texp is greater than t(0.10,8), we reject the null hypothesis and accept the alternative hypothesis, finding evidence, at α = 0.10, that the two preservatives do not have equivalent holding times.
27. This problem is a comparison between two sets of paired data. The differences, which we define as (actual − found), are

–1.8  –1.7  0.2  –0.5  –3.6  –1.7  1.1  –1.7  0.3

The mean and the standard deviation for the differences are –1.04 and 1.44, respectively. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{d} = 0 \qquad H_A\!: \bar{d} \ne 0$$

The test statistic is texp, for which

$$t_{\exp} = \frac{|\bar{d}|\sqrt{n}}{s_d} = \frac{|{-1.04}|\sqrt{9}}{1.44} = 2.17$$

The critical value for t(0.05,8) is 2.306. Because texp is less than t(0.05,8), we retain the null hypothesis, finding no evidence, at α = 0.05, that the analysis for kaolinite is inaccurate.
28. This problem is a comparison between two sets of paired data. The differences, which we define as (electrode − spectrophotometric), are

0.6  –5.8  0.2  0.1  –0.5  –0.6  0.1  –0.5  –0.7  –0.3  0.3  0.1

The mean and the standard deviation for the differences are –0.583 and 1.693, respectively. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{d} = 0 \qquad H_A\!: \bar{d} \ne 0$$

The test statistic is texp, for which

$$t_{\exp} = \frac{|\bar{d}|\sqrt{n}}{s_d} = \frac{|{-0.583}|\sqrt{12}}{1.693} = 1.19$$

The critical value for t(0.05,11) is 2.2035. Because texp is less than t(0.05,11), we retain the null hypothesis, finding no evidence, at α = 0.05, that the two methods yield different results.
29. This problem is a comparison between two sets of paired data. The differences, which we define as (proposed − standard), are

0.19  0.91  1.39  1.02  –2.38  –2.40  0.03  0.82

The mean and the standard deviation for the differences are –0.05 and 1.51, respectively. The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{d} = 0 \qquad H_A\!: \bar{d} \ne 0$$

The test statistic is texp, for which

$$t_{\exp} = \frac{|\bar{d}|\sqrt{n}}{s_d} = \frac{|{-0.05}|\sqrt{8}}{1.51} = 0.09$$

The critical value for t(0.05,7) is 2.365. Because texp is less than t(0.05,7), we retain the null hypothesis, finding no evidence, at α = 0.05, that the two methods yield different results. This is not a very satisfying result, however, because many of the individual differences are quite large. In this case, additional work might help better characterize the improved method relative to the standard method.
30. The simplest way to organize this data is to make a table, such as the one shown here

sample   smallest value   next-to-smallest value   next-to-largest value   largest value
1        21.3             21.5                     23.0                    23.1
2        12.9             13.5                     13.9                    14.2
3        15.9             16.0                     17.4                    17.5

The only likely candidate for an outlier is the smallest value of 12.9 for sample 2. Using Dixon's Q-test, the test statistic, Qexp, is

$$Q_{\exp} = \frac{|X_{\text{out}} - X_{\text{nearest}}|}{X_{\text{largest}} - X_{\text{smallest}}} = \frac{|13.5 - 12.9|}{14.2 - 12.9} = 0.462$$

which is smaller than the critical value for Q(0.05,10) of 0.466; thus, there is no evidence using Dixon's Q-test at α = 0.05 to suggest that 12.9 is an outlier.
To use Grubb's test we need the mean and the standard deviation for sample 2, which are 13.67 and 0.356, respectively. The test statistic, Gexp, is

$$G_{\exp} = \frac{|X_{\text{out}} - \bar{X}|}{s} = \frac{|12.9 - 13.67|}{0.356} = 2.16$$

which is smaller than the critical value for G(0.05,10) of 2.290; thus, there is no evidence using Grubb's test at α = 0.05 that 12.9 is an outlier.

To use Chauvenet's criterion we calculate the deviation, z, for the suspected outlier, assuming a normal distribution and using the sample's mean and standard deviation

$$z = \frac{|X_{\text{out}} - \bar{X}|}{s} = \frac{|12.9 - 13.67|}{0.356} = 2.16$$

which, from Appendix 3, corresponds to a probability of 0.0154. The critical value to which we compare this is (2n)⁻¹, or (2×10)⁻¹ = 0.05. Because the experimental probability of 0.0154 is smaller than the theoretical probability of 0.05 for 10 samples, we have evidence using Chauvenet's criterion that 12.9 is an outlier.
At this point, you may be asking yourself what to make of these seemingly contradictory results, in which two tests suggest that 12.9 is not an outlier and one test suggests that it is an outlier. Here it is helpful to keep in mind three things. First, Dixon's Q-test and Grubb's test require us to pick a particular confidence level, α, and make a decision based on that confidence level. When using Chauvenet's criterion, however, we do not assume a particular confidence level; instead, we simply evaluate the probability that the outlier belongs to a normal distribution described by the sample's mean and standard deviation relative to a predicted probability defined by the size of the sample. Second, although Qexp and Gexp are not large enough to identify 12.9 as an outlier at α = 0.05, their respective values are not far removed from their respective critical values (0.462 vs. 0.466 for Dixon's Q-test and 2.16 vs. 2.290 for Grubb's test). Both tests, for example, identify 12.9 as an outlier at α = 0.10. Third, and finally, for the reasons outlined in the text, you should be cautious when rejecting a possible outlier based on a statistical test only. All three of these tests, however, suggest that we should at least take a closer look at the measurement that yielded 12.9 as a result.
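A minimal R sketch of the three outlier checks for sample 2, using the summary values reported above; the variable names are ours, and the Chauvenet comparison uses `pnorm` in place of the Appendix 3 lookup.

```r
# outlier checks for sample 2 in Problem 30
x_out <- 12.9; x_near <- 13.5; x_max <- 14.2; x_min <- 12.9
xbar <- 13.67; s <- 0.356; n <- 10

Q_exp <- abs(x_out - x_near) / (x_max - x_min)   # 0.462 (Dixon's Q-test)
G_exp <- abs(x_out - xbar) / s                   # 2.16 (Grubb's test)
p_out <- pnorm(-G_exp)                           # 0.0154, one-tailed probability
p_out < 1 / (2 * n)                              # TRUE: flagged by Chauvenet's criterion
```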
31. (a) The mean is 1.940, the median is 1.942 (the average of the 31st and the 32nd rank-ordered values rounded to four significant figures), and the standard deviation is 0.047.

(b) Figure SM4.4 shows a histogram for the 62 results using bins of size 0.02. The resulting distribution is a reasonably good approximation to a normal distribution, although it appears to have a slight skew toward smaller Cu/S ratios.

[Figure SM4.4: Histogram for the data in Problem 31. Each bar has a width of 0.02; for example, the bar on the far left includes all Cu/S ratios from 1.76 to 1.78, which includes the single result of 1.764.]

(c) The range X̄ ± 1s extends from a Cu/S ratio of 1.893 to 1.987. Of the 62 experimental results, 44 or 71% fall within this range. This agreement with the expected value of 68.26% for a normal distribution is reasonably good.

(d) For a deviation of

$$z = \frac{2.000 - 1.940}{0.047} = 1.28$$

the probability from Appendix 3 that a Cu/S ratio is greater than 2 is 10.03%. Of the 62 experimental results, three or 4.8% fall within this range. This is a little lower than expected for a normal distribution, but consistent with the observation from part (b) that the data are skewed slightly toward smaller Cu/S ratios.

(e) The null hypothesis and the alternative hypothesis are

$$H_0\!: \bar{X} = 2.000 \qquad H_A\!: \bar{X} < 2.000$$

Note that the alternative hypothesis here is one-tailed as we are interested only in whether the mean Cu/S ratio is significantly less than 2. The test statistic, texp, is

$$t_{\exp} = \frac{|1.940 - 2.000|\sqrt{62}}{0.047} = 10.0$$

As texp is greater than the one-tailed critical value for t(0.05,61), which is between 1.65 and 1.75, we reject the null hypothesis and accept the alternative hypothesis, finding evidence that the Cu/S ratio is significantly less than its expected stoichiometric ratio of 2.
32. Although answers for this problem will vary, here are some details you
should address in your report. The descriptive statistics for all three
data sets are summarized in the following table.
statistic sample X sample Y sample Z
mean 24.56 27.76 23.75
median 24.55 28.00 23.52
range 1.26 4.39 5.99
std dev 0.339 1.19 1.32
variance 0.115 1.43 1.73
The most interesting observation from this summary is that the spread
of values for sample X—as given by the range, the standard deviation,
and the variance—is much smaller than that for sample Y and for
sample Z.
Outliers are one possible explanation for the difference in spread
among these three samples. Because the number of individual results
for each sample is greater than the largest value of n for the critical
values included in Appendix 6 for Dixon’s Q-test and in Appendix
7 for Grubb’s test, we will use Chauvenet’s criterion; the results are
summarized in the following table.
statistic sample X sample Y sample Z
possible outlier 23.92 24.41 28.79
z 1.89 2.63 3.83
probability 0.0294 0.0043 0.0000713
For 18 samples, the critical probability is (2×18)⁻¹ or 0.0277; thus,
we have evidence that there is an outlier in sample Y and in sample
Z, but not in sample X. Removing these outliers and recalculating the
descriptive statistics gives the results in the following table.
statistic sample X sample Y sample Z
mean 24.56 27.74 23.45
median 24.55 28.00 23.48
range 1.26 3.64 1.37
std dev 0.339 0.929 0.402
variance 0.115 0.863 0.161
The spread for sample Y still seems large relative to sample X, but the
spread for sample Z now seems similar to sample X. An F-test of the
variances using the following null hypothesis and alternative hypoth-
esis
$$H_0\!: s_1 = s_2 \qquad H_A\!: s_1 \ne s_2$$

gives an Fexp of 5.340 when comparing sample Y to sample Z, and of 1.406 when comparing sample Z to sample X. Comparing these values to the critical value for F(0.05,17,17), which is between 2.230 and 2.308, suggests that our general conclusions are reasonable.
The mean values for the three samples appear different from each other. A t-test using the following null hypothesis and alternative hypothesis

$$H_0\!: \bar{X}_1 = \bar{X}_2 \qquad H_A\!: \bar{X}_1 \ne \bar{X}_2$$

gives a texp of 13.30 when comparing sample Y to sample X, which is much greater than the critical value for t(0.05,20) of 2.086. The value of texp when comparing sample Z to sample X is 8.810, which is much greater than the critical value for t(0.05,33), which is between 2.042 and 2.086.

This process of completing multiple significance tests is not without problems, for reasons we will discuss in Chapter 14 when we consider analysis of variance.

Chapter 5
Many of the problems in this chapter require a regression analysis. Al-
though equations for these calculations are highlighted in the solution to
the first such problem, for the remaining problems, both here and elsewhere
in this text, the results of a regression analysis simply are provided. Be sure
you have access to a scientific calculator, a spreadsheet program, such as
Excel, or a statistical software program, such as R, and that you know how
to use it to complete a regression analysis.
1. For each step in a dilution, the concentration of the new solution, Cnew, is

$$C_{\text{new}} = \frac{C_{\text{orig}} V_{\text{orig}}}{V_{\text{new}}}$$

where Corig is the concentration of the original solution, Vorig is the volume of the original solution taken, and Vnew is the volume to which the original solution is diluted. A propagation of uncertainty for Cnew shows that its relative uncertainty is

$$\frac{u_{C_{\text{new}}}}{C_{\text{new}}} = \sqrt{\left(\frac{u_{C_{\text{orig}}}}{C_{\text{orig}}}\right)^2 + \left(\frac{u_{V_{\text{orig}}}}{V_{\text{orig}}}\right)^2 + \left(\frac{u_{V_{\text{new}}}}{V_{\text{new}}}\right)^2}$$

See Chapter 4C to review the propagation of uncertainty.

For example, if we dilute 10.00 mL of the 0.1000 M stock solution to 100.0 mL, Cnew is 1.000×10⁻² M and the relative uncertainty in Cnew is

$$\frac{u_{C_{\text{new}}}}{C_{\text{new}}} = \sqrt{\left(\frac{0.0002}{0.1000}\right)^2 + \left(\frac{0.02}{10.00}\right)^2 + \left(\frac{0.08}{100.0}\right)^2} = 2.94\times10^{-3}$$

The absolute uncertainty in Cnew, therefore, is

$$u_{C_{\text{new}}} = (1.000\times10^{-2} \text{ M}) \times (2.94\times10^{-3}) = 2.94\times10^{-5} \text{ M}$$

The relative and the absolute uncertainties for each solution's concentration are gathered together in the tables that follow (all concentrations are given in mol/L and all volumes are given in mL). The uncertainties in the volumetric glassware are from Table 4.2 and Table 4.3. For a Vorig of 0.100 mL and of 0.0100 mL, the uncertainties are those for a 10–100 µL digital pipet.
For a serial dilution, each step uses a 10.00 mL volumetric pipet and a 100.0 mL volumetric flask; thus

Cnew        Corig       Vorig   Vnew    uVorig   uVnew
1.000×10⁻²  0.1000      10.00   100.0   0.02     0.08
1.000×10⁻³  1.000×10⁻²  10.00   100.0   0.02     0.08
1.000×10⁻⁴  1.000×10⁻³  10.00   100.0   0.02     0.08
1.000×10⁻⁵  1.000×10⁻⁴  10.00   100.0   0.02     0.08

Cnew        Corig       uCnew/Cnew   uCnew
1.000×10⁻²  0.1000      2.94×10⁻³    2.94×10⁻⁵
1.000×10⁻³  1.000×10⁻²  3.64×10⁻³    3.64×10⁻⁶
1.000×10⁻⁴  1.000×10⁻³  4.23×10⁻³    4.23×10⁻⁷
1.000×10⁻⁵  1.000×10⁻⁴  4.75×10⁻³    4.75×10⁻⁸
For the set of one-step dilutions using the original stock solution, each solution requires a different volumetric pipet; thus

Cnew        Corig    Vorig    Vnew    uVorig      uVnew
1.000×10⁻²  0.1000   10.00    100.0   0.02        0.08
1.000×10⁻³  0.1000   1.000    100.0   0.006       0.08
1.000×10⁻⁴  0.1000   0.100    100.0   8.00×10⁻⁴   0.08
1.000×10⁻⁵  0.1000   0.0100   100.0   3.00×10⁻⁴   0.08

Cnew        Corig    uCnew/Cnew   uCnew
1.000×10⁻²  0.1000   2.94×10⁻³    2.94×10⁻⁵
1.000×10⁻³  0.1000   6.37×10⁻³    6.37×10⁻⁶
1.000×10⁻⁴  0.1000   8.28×10⁻³    8.28×10⁻⁷
1.000×10⁻⁵  0.1000   3.01×10⁻²    3.01×10⁻⁷
Note that for each Cnew, the absolute uncertainty when using a serial
dilution always is equal to or better than the absolute uncertainty
when using a single dilution of the original stock solution. More
specifically, for a Cnew of 1.000×10–3 M and of 1.000×10–4 M, the
improvement in the absolute uncertainty is approximately a factor
of 2, and for a Cnew of 1.000×10–5 M, the improvement in the ab-
solute uncertainty is approximately a factor of 6. This is a distinct
advantage of a serial dilution. On the other hand, for a serial dilution
a determinate error in the preparation of the 1.000×10–2 M solution
carries over as a determinate error in each successive solution, which
is a distinct disadvantage.
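A minimal R sketch of the propagation used above, written as a small helper so each dilution step can reuse the previous step's relative uncertainty; the function name `dilution_ru` is ours, not from the text.

```r
# relative uncertainty for a single dilution step (Chapter 5, Problem 1)
# ru_orig is the relative uncertainty carried in from the previous step
dilution_ru <- function(ru_orig, V_orig, u_V_orig, V_new, u_V_new) {
  sqrt(ru_orig^2 + (u_V_orig / V_orig)^2 + (u_V_new / V_new)^2)
}

# first step of the serial dilution: 10.00 mL pipet into a 100.0 mL flask
ru1 <- dilution_ru(0.0002 / 0.1000, 10.00, 0.02, 100.0, 0.08)   # 2.94e-3
# second step reuses ru1 as the carried-in relative uncertainty
ru2 <- dilution_ru(ru1, 10.00, 0.02, 100.0, 0.08)               # 3.64e-3
```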
2. We begin by determining the value for kA in the equation

$$S_{\text{total}} = k_A C_A + S_{\text{reag}}$$

where Stotal is the average of the three signals for the standard of concentration CA, and Sreag is the signal for the reagent blank. Making appropriate substitutions

$$0.1603 = k_A (10.0 \text{ ppm}) + 0.002$$

and solving for kA gives its value as 0.01583 ppm⁻¹. Substituting in the signal for the sample

$$0.118 = (0.01583 \text{ ppm}^{-1}) C_A + 0.002$$

and solving for CA gives the analyte's concentration as 7.33 ppm.
3. This standard addition follows the format of equation 5.9

$$\frac{S_{\text{samp}}}{C_A \dfrac{V_o}{V_f}} = \frac{S_{\text{spike}}}{C_A \dfrac{V_o}{V_f} + C_{\text{std}} \dfrac{V_{\text{std}}}{V_f}}$$

in which both the sample and the standard addition are diluted to the same final volume. Making appropriate substitutions

$$\frac{0.235}{C_A \times \dfrac{10.00 \text{ mL}}{25.00 \text{ mL}}} = \frac{0.502}{C_A \times \dfrac{10.00 \text{ mL}}{25.00 \text{ mL}} + (1.00 \text{ ppm}) \times \dfrac{10.00 \text{ mL}}{25.00 \text{ mL}}}$$

$$0.0940 C_A + 0.0940 \text{ ppm} = 0.2008 C_A$$

and solving gives the analyte's concentration, CA, as 0.880 ppm. Here we assume that a part per million is equivalent to mg/L. The concentration of analyte in the original solid sample is

$$\frac{(0.880 \text{ mg/L})(0.250 \text{ L}) \times \dfrac{1 \text{ g}}{1000 \text{ mg}}}{10.00 \text{ g sample}} \times 100 = 2.20\times10^{-3}\% \text{ w/w}$$
4. This standard addition follows the format of equation 5.11

$$\frac{S_{\text{samp}}}{C_A} = \frac{S_{\text{spike}}}{C_A \dfrac{V_o}{V_o + V_{\text{std}}} + C_{\text{std}} \dfrac{V_{\text{std}}}{V_o + V_{\text{std}}}}$$

in which the standard addition is made directly to the solution that contains the analyte. Making appropriate substitutions

$$\frac{11.5}{C_A} = \frac{23.1}{C_A \dfrac{50.00 \text{ mL}}{50.00 \text{ mL} + 1.00 \text{ mL}} + (10.0 \text{ ppm}) \dfrac{1.00 \text{ mL}}{50.00 \text{ mL} + 1.00 \text{ mL}}}$$

$$23.1 C_A = 11.27 C_A + 2.255 \text{ ppm}$$

and solving gives the analyte's concentration, CA, as 0.191 ppm.
5. To derive a standard additions calibration curve using equation 5.10

$$S_{\text{spike}} = k_A \left( C_A \frac{V_o}{V_o + V_{\text{std}}} + C_{\text{std}} \frac{V_{\text{std}}}{V_o + V_{\text{std}}} \right)$$

we multiply through both sides of the equation by Vo + Vstd

$$S_{\text{spike}} (V_o + V_{\text{std}}) = k_A C_A V_o + k_A C_{\text{std}} V_{\text{std}}$$

As shown in Figure SM5.1, the slope is equal to kA and the y-intercept is equal to kACAVo. The x-intercept occurs when Sspike(Vo + Vstd) equals zero; thus

$$0 = k_A C_A V_o + k_A C_{\text{std}} V_{\text{std}}$$

and the x-intercept is equal to –CAVo.

[Figure SM5.1: Standard additions calibration curve based on equation 5.10, plotting Sspike(Vo + Vstd) versus CstdVstd; the slope is kA, the y-intercept is kACAVo, and the x-intercept is –CAVo.]

We must plot the calibration curve this way because if we plot Sspike on the y-axis versus Cstd × Vstd/(Vo + Vstd) on the x-axis, then the term we identify as the y-intercept

$$\frac{k_A C_A V_o}{V_o + V_{\text{std}}}$$

is not a constant because it includes a variable, Vstd, whose value changes with each standard addition.
6. Because the concentration of the internal standard is maintained at a constant level for both the sample and the standard, we can fold the internal standard's concentration into the proportionality constant K in equation 5.12; thus, using SA, SIS, and CA for the standard

$$\frac{S_A}{S_{IS}} = \frac{0.155}{0.233} = \frac{k_A C_A}{k_{IS} C_{IS}} = K C_A = K (10.00 \text{ mg/L})$$

gives K as 0.06652 L/mg. Substituting in SA, SIS, and K for the sample

$$\frac{S_A}{S_{IS}} = (0.06652 \text{ L/mg}) \, C_A$$

gives the concentration of analyte in the sample as 20.8 mg/L.


7. For each pair of calibration curves, we seek to find the calibration
curve that yields the smallest uncertainty as expressed in the standard
deviation about the regression, sr, the standard deviation in the slope,
s b , or the standard deviation in the y-intercept, s b .
1 0

(a) The calibration curve on the right is the better choice because it
uses more standards. All else being equal, the larger the value of n, the
smaller the value for sr in equation 5.19, and for s b in equation 5.21.
0

(b) The calibration curve on the left is the better choice because the
standards are more evenly spaced, which minimizes the term / x 2i
in equation 5.21 for s b . 0

(c) The calibration curve on the left is the better choice because the
standards span a wider range of concentrations, which minimizes the
term / (x i - X ) 2 in equation 5.20 and in equation 5.21 for s b and1

s b , respectively.
0

As a reminder, for this problem we will work through the details of an unweighted linear regression calculation using the equations from the text. For the remaining problems, it is assumed you have access to a calculator, a spreadsheet, or a statistical program that can handle most or all of the relevant calculations for an unweighted linear regression.

8. To determine the slope and the y-intercept for the calibration curve at a pH of 4.6 we first need to calculate the summation terms that appear in equation 5.17 and in equation 5.18; these are:

$$\sum x_i = 308.4 \qquad \sum y_i = 131.0 \qquad \sum x_i y_i = 8397.5 \qquad \sum x_i^2 = 19339.6$$

Substituting these values into equation 5.17

$$b_1 = \frac{(6 \times 8397.5) - (308.4 \times 131.0)}{(6 \times 19339.6) - (308.4)^2} = 0.477$$

gives the slope as 0.477 nA/nM, and substituting into equation 5.18

$$b_0 = \frac{131.0 - (0.477 \times 308.4)}{6} = -2.69$$

gives the y-intercept as –2.69 nA. The equation for the calibration curve is

$$S_{\text{total}} = 0.477 \text{ nA/nM} \times C_{\text{Cd}} - 2.69 \text{ nA}$$

Figure SM5.2 shows the calibration data and the calibration curve.

[Figure SM5.2: Calibration curve at pH 4.6 for the data in Problem 5.8; Stotal (nA) versus [Cd2+] (nM).]

To find the confidence intervals for the slope and for the y-intercept, we use equation 5.19 to calculate the standard deviation about the regression, sr, and use equation 5.20 and equation 5.21 to calculate the standard deviation in the slope, $s_{b_1}$, and the standard deviation in the y-intercept, $s_{b_0}$, respectively. To calculate sr we first calculate the predicted values for the signal, $\hat{y}_i$, using the known concentrations of Cd2+ and the regression equation, and the squared residual errors, $(y_i - \hat{y}_i)^2$; the table below summarizes these results

$x_i$   $y_i$   $\hat{y}_i$   $(y_i - \hat{y}_i)^2$
15.4    4.8     4.66          0.0203
30.4    11.4    11.81         0.7115
44.9    18.2    18.73         0.2382
59.0    26.6    25.46         1.3012
72.7    32.3    32.00         0.0926
86.0    37.7    38.34         0.4110
Adding together the last column, which equals 2.2798, gives the numerator for equation 5.19; thus, the standard deviation about the regression is

$$s_r = \sqrt{\frac{2.2798}{6 - 2}} = 0.7550$$

To calculate the standard deviations in the slope and in the y-intercept, we use equation 5.20 and equation 5.21, respectively, using the standard deviation about the regression and the summation terms outlined earlier; thus

$$s_{b_1} = \sqrt{\frac{6 \times (0.7550)^2}{(6 \times 19339.6) - (308.4)^2}} = 0.01278$$

$$s_{b_0} = \sqrt{\frac{(0.7550)^2 \times 19339.6}{(6 \times 19339.6) - (308.4)^2}} = 0.7258$$

With four degrees of freedom, the confidence intervals for the slope and the y-intercept are

$$b_1 = b_1 \pm t s_{b_1} = 0.477 \pm (2.776)(0.0128) = 0.477 \pm 0.036 \text{ nA/nM}$$

$$b_0 = b_0 \pm t s_{b_0} = -2.69 \pm (2.776)(0.7258) = -2.69 \pm 2.01 \text{ nA}$$

(b) The table below shows the residual errors for each concentration of Cd2+. A plot of the residual errors (Figure SM5.3) shows no discernible trend that might cause us to question the validity of the calibration equation.

[Figure SM5.3: Plot of the residual errors for the calibration standards in Problem 5.8 at a pH of 4.6.]

$x_i$   $y_i$   $\hat{y}_i$   $y_i - \hat{y}_i$
15.4    4.8     4.66          0.14
30.4    11.4    11.81         –0.41
44.9    18.2    18.73         –0.53
59.0    26.6    25.46         1.14
72.7    32.3    32.00         0.30
86.0    37.7    38.34         –0.64
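As the reminder at the start of this problem suggests, a statistical program handles the same calculation directly; here is a minimal R sketch for this data set using `lm()` (the data frame name `cal` is ours).

```r
# unweighted linear regression for the pH 4.6 data in Problem 8,
# as a check on the hand calculation above
cal <- data.frame(C_Cd = c(15.4, 30.4, 44.9, 59.0, 72.7, 86.0),
                  S    = c(4.8, 11.4, 18.2, 26.6, 32.3, 37.7))
fit <- lm(S ~ C_Cd, data = cal)
summary(fit)$coefficients   # slope 0.477, intercept -2.69, with their standard errors
confint(fit)                # 95% confidence intervals for the intercept and the slope
plot(fit$residuals)         # residual plot, as in part (b)
```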
(c) A regression analysis for the data at a pH of 3.7 gives the calibration curve's equation as

$$S_{\text{total}} = 1.43 \text{ nA/nM} \times C_{\text{Cd}} - 5.02 \text{ nA}$$

The more sensitive the method, the steeper the slope of the calibration curve, which, as shown in Figure SM5.4, is the case for the calibration curve at pH 3.7. The relative sensitivity for the two pHs is the ratio of their respective slopes

$$\frac{k_{\text{pH 3.7}}}{k_{\text{pH 4.6}}} = \frac{1.43}{0.477} = 3.00$$

The method at a pH of 3.7, therefore, is three times more sensitive than the method at a pH of 4.6.

[Figure SM5.4: Calibration curves for the data in Problem 5.8 at a pH of 3.7 and at a pH of 4.6.]
(d) Using the calibration curve at a pH of 3.7, the concentration of Cd2+ in the sample is

$$[\text{Cd}^{2+}] = \frac{S_{\text{total}} - b_0}{b_1} = \frac{66.3 \text{ nA} - (-5.02 \text{ nA})}{1.43 \text{ nA/nM}} = 49.9 \text{ nM}$$

To calculate the 95% confidence interval, we first use equation 5.25

$$s_{C_{\text{Cd}}} = \frac{s_r}{b_1} \sqrt{\frac{1}{m} + \frac{1}{n} + \frac{(\bar{S}_{\text{samp}} - \bar{S}_{\text{std}})^2}{(b_1)^2 \sum_{i=1}^{n} (C_{\text{std}_i} - \bar{C}_{\text{std}})^2}}$$

to determine the standard deviation in the concentration where the number of samples, m, is one, the number of standards, n, is six, the standard deviation about the regression, sr, is 2.826, the slope, b1, is 1.43, the average signal for the one sample, $\bar{S}_{\text{samp}}$, is 66.3, and the average signal for the six standards, $\bar{S}_{\text{std}}$, is 68.7. At first glance, the term $\sum (C_{\text{std}_i} - \bar{C}_{\text{std}})^2$, where $C_{\text{std}_i}$ is the concentration of the ith standard and $\bar{C}_{\text{std}}$ is the average concentration for the n standards, seems cumbersome to calculate. We can simplify the calculation, however, by recognizing that $\sum (C_{\text{std}_i} - \bar{C}_{\text{std}})^2$ is the numerator in the equation that gives the standard deviation for the concentrations of the standards, sCd

$$s_{\text{Cd}} = \sqrt{\frac{\sum_{i=1}^{n} (C_{\text{std}_i} - \bar{C}_{\text{std}})^2}{n - 1}}$$

Because sCd is easy to determine using a calculator, a spreadsheet, or a statistical software program, it is easy to calculate $\sum (C_{\text{std}_i} - \bar{C}_{\text{std}})^2$; thus

$$\sum_{i=1}^{n} (C_{\text{std}_i} - \bar{C}_{\text{std}})^2 = (n - 1)(s_{\text{Cd}})^2 = (6 - 1)(26.41)^2 = 3487$$

Substituting all terms back into equation 5.25 gives the standard deviation in the concentration as

$$s_{C_{\text{Cd}}} = \frac{2.826}{1.43} \sqrt{\frac{1}{1} + \frac{1}{6} + \frac{(66.3 - 68.7)^2}{(1.43)^2 (3487)}} = 2.14$$

The 95% confidence interval for the sample's concentration, therefore, is

$$\mu_{\text{Cd}} = 49.9 \pm (2.776)(2.14) = 49.9 \pm 5.9 \text{ nM}$$
9. The standard addition for this problem follows equation 5.10, which, as we saw in Problem 5.5, is best treated by plotting Sspike(Vo + Vstd) on the y-axis vs. CstdVstd on the x-axis, the values for which are

Vstd (mL)   Sspike (arb. units)   Sspike(Vo + Vstd)   CstdVstd
0.00        0.119                 0.595               0.0
0.10        0.231                 1.178               60.0
0.20        0.339                 1.763               120.0
0.30        0.442                 2.343               180.0

Figure SM5.5 shows the resulting calibration curve for which the calibration equation is

$$S_{\text{spike}} (V_o + V_{\text{std}}) = 0.5955 + 0.009713 \times C_{\text{std}} V_{\text{std}}$$

[Figure SM5.5: Standard additions calibration curve for Problem 5.9.]

To find the analyte's concentration, CA, we use the absolute value of the x-intercept, –CAVo, which is equivalent to the y-intercept divided by the slope; thus

$$C_A V_o = C_A (5.00 \text{ mL}) = \frac{b_0}{k_A} = \frac{0.5955}{0.009713} = 61.31$$

which gives CA as 12.3 ppb.

To find the 95% confidence interval for CA, we use a modified form of equation 5.25 to calculate the standard deviation in the x-intercept

$$s_{C_A V_o} = \frac{s_r}{b_1} \sqrt{\frac{1}{n} + \frac{\left\{\overline{S_{\text{spike}}(V_o + V_{\text{std}})}\right\}^2}{(b_1)^2 \sum_{i=1}^{n} (C_{\text{std}_i} V_{\text{std}_i} - \overline{C_{\text{std}} V_{\text{std}}})^2}}$$

where the number of standards, n, is four, the standard deviation about the regression, sr, is 0.00155, the slope, b1, is 0.009713, the average signal for the four standards, $\overline{S_{\text{spike}}(V_o + V_{\text{std}})}$, is 1.47, and the term $\sum (C_{\text{std}_i} V_{\text{std}_i} - \overline{C_{\text{std}} V_{\text{std}}})^2$ is 1.80×10⁴. Substituting back into this equation gives the standard deviation of the x-intercept as

$$s_{C_A V_o} = \frac{0.00155}{0.009713} \sqrt{\frac{1}{4} + \frac{(1.47)^2}{(0.009713)^2 (1.8\times10^4)}} = 0.197$$

Dividing $s_{C_A V_o}$ by Vo gives the standard deviation in the concentration, $s_{C_A}$, as

$$s_{C_A} = \frac{s_{C_A V_o}}{V_o} = \frac{0.197}{5.00} = 0.0393$$

The 95% confidence interval for the sample's concentration, therefore, is

$$\mu = 12.3 \pm (4.303)(0.0393) = 12.3 \pm 0.2 \text{ ppb}$$
10. (a) For an internal standardization, the calibration curve places the signal ratio, SA/SIS, on the y-axis and the concentration ratio, CA/CIS, on the x-axis. Figure SM5.6 shows the resulting calibration curve, which is characterized by the following values

slope (b1): 0.5576
y-intercept (b0): 0.3037
standard deviation for slope ($s_{b_1}$): 0.0314
standard deviation for y-intercept ($s_{b_0}$): 0.0781

[Figure SM5.6: Internal standards calibration curve for the data in Problem 5.10; SA/SIS versus CA/CIS.]

Based on these values, the 95% confidence intervals for the slope and the y-intercept are, respectively

$$b_0 = b_0 \pm t s_{b_0} = 0.3037 \pm (3.182)(0.0781) = 0.3037 \pm 0.2484$$

$$b_1 = b_1 \pm t s_{b_1} = 0.5576 \pm (3.182)(0.0314) = 0.5576 \pm 0.1001$$

(b) The authors concluded that the calibration model is inappropriate because the 95% confidence interval for the y-intercept does not include the expected value of 0.00. A close observation of Figure SM5.6 shows that the calibration curve has a subtle, but distinct curvature, which suggests that a straight-line is not a suitable model for this data.

11. Figure SM5.7 shows a plot of the measured values on the y-axis and the expected values on the x-axis, along with the regression line, which is characterized by the following values:

slope (b1): 0.9996
y-intercept (b0): 0.000761
standard deviation for slope ($s_{b_1}$): 0.00116
standard deviation for y-intercept ($s_{b_0}$): 0.00112

[Figure SM5.7: Plot of the measured absorbance values for a series of spectrophotometric standards versus their expected absorbance values; the original data is from Problem 4.25.]

For the y-intercept, texp is

$$t_{\exp} = \frac{|\beta_0 - b_0|}{s_{b_0}} = \frac{|0.00 - 0.000761|}{0.00112} = 0.679$$

and texp for the slope is

$$t_{\exp} = \frac{|\beta_1 - b_1|}{s_{b_1}} = \frac{|1.00 - 0.9996|}{0.00116} = 0.345$$

For both the y-intercept and the slope, texp is less than the critical value of t(0.05,3), which is 3.182; thus, we retain the null hypothesis and have no evidence at α = 0.05 that the y-intercept or the slope differ significantly from their expected values of 0.00 and 1.00, respectively, and, therefore, no evidence at α = 0.05 that there is a difference between the measured absorbance values and the expected absorbance values.
12. (a) Knowing that all three data sets have identical regression statistics suggests that the three data sets are similar to each other. A close look at the values of y suggests that all three data sets show a general increase in the value of y as the value of x becomes larger, although the trend seems noisy.

(b) The results of a regression analysis are gathered here

parameter   Data Set 1   Data Set 2   Data Set 3
b0          3.0001       3.0010       3.0025
b1          0.5001       0.5000       0.4997
sb0         1.1247       1.1250       1.1245
sb1         0.1179       0.1180       0.1179
sr          1.237        1.237        1.236

and are in agreement with the values reported in part (a). Figure SM5.8 shows the residual plots for all three data sets. For the first data set, the residual errors are scattered at random around a residual error of zero and show no particular trend, suggesting that the regression model provides a reasonable explanation for the data. For data set 2 and for data set 3, the clear pattern to the residual errors indicates that neither regression model is appropriate.

[Figure SM5.8: Residual plots for (a) data set 1; (b) data set 2; and (c) data set 3. The dashed line in each plot shows the expected trend for the residual errors when the regression model provides a good fit to the data.]

(c) Figure SM5.9 shows each data set with its regression line. For data set 1, the regression line provides a good fit to what is rather noisy data. For the second data set, we see that the relationship between x and y is not a straight-line and that a quadratic model likely is more appropriate. With the exception of an apparent outlier, data set 3 is a straight-line; removing the outlier is likely to improve the regression analysis.

[Figure SM5.9: Regression plots for the data from (a) data set 1; (b) data set 2; and (c) data set 3.]

(d) The apparent outlier is the third point in the data set (x = 13.00, y = 12.74). Figure SM5.10 shows the resulting regression line, for which

slope (b1): 0.345

y-intercept (b0): 4.01
standard deviation for slope ($s_{b_1}$): 0.000321
standard deviation for y-intercept ($s_{b_0}$): 0.00292
standard deviation about the regression (sr): 0.00308

Note that sr, $s_{b_1}$, and $s_{b_0}$ are much smaller after we remove the apparent outlier, which is consistent with the better fit of the regression line to the data.

[Figure SM5.10: Regression plot for data set 3 after removing the apparent outlier.]

(e) The analysis of this data set drives home the importance of examining your data in a graphical form. As suggested earlier in the answer to part (a), it is difficult to see the underlying pattern in a data set when we look at numbers only.
13. To complete a weighted linear regression we first must determine the weighting factors for each concentration of thallium; thus

$x_i$   $y_i$ (avg)   $s_{y_i}$   $(s_{y_i})^{-2}$   $w_i$
0.000   2.626         0.1137      77.3533            3.3397
0.387   8.160         0.2969      11.3443            0.4898
1.851   29.114        0.5566      3.2279             0.1394
5.734   85.714        1.1768      0.7221             0.0312

where yi (avg) is the average of the seven replicate measurements for each of the i standard additions, and $s_{y_i}$ is the standard deviation for these replicate measurements; note that the increase in $s_{y_i}$ with larger values of xi indicates that the indeterminate errors affecting the signal are not independent of the concentration of thallium, which is why a weighted linear regression is used here. The weights in the last column are calculated using equation 5.28 and, as expected, the sum of the weights is equal to the number of standards.

To calculate the y-intercept and the slope, we use equation 5.26 and equation 5.27, respectively, using the table below to organize the various summations

$x_i$    $y_i$ (avg)   $w_i x_i$   $w_i y_i$   $w_i x_i^2$   $w_i x_i y_i$
0.000    2.626         0.0000      8.7701      0.0000        0.0000
0.387    8.160         0.1896      3.9968      0.0734        1.5467
1.851    29.114        0.2580      4.0585      0.4776        7.5123
5.734    85.714        0.1789      2.6743      1.0258        15.3343
totals                 0.6265      19.4997     1.5768        24.3933
$$b_1 = \frac{n \sum_{i=1}^{n} w_i x_i y_i - \sum_{i=1}^{n} w_i x_i \sum_{i=1}^{n} w_i y_i}{n \sum_{i=1}^{n} w_i x_i^2 - \left(\sum_{i=1}^{n} w_i x_i\right)^2} = \frac{(4)(24.3933) - (0.6265)(19.4997)}{(4)(1.5768) - (0.6265)^2} = 14.43$$

$$b_0 = \frac{\sum_{i=1}^{n} w_i y_i - b_1 \sum_{i=1}^{n} w_i x_i}{n} = \frac{19.4997 - (14.431)(0.6265)}{4} = 2.61$$

The calibration curve, therefore, is

$$S_{\text{total}} = 2.61 \text{ µA} + (14.43 \text{ µA/ppm}) \times C_{\text{Tl}}$$

Figure SM5.11 shows the calibration data and the weighted linear regression line.

[Figure SM5.11: Calibration data and calibration curve for the data in Problem 5.13. The individual points show the average signal for each standard and the calibration curve is from a weighted linear regression. The blue tick marks along the y-axis show the replicate signals for each standard; note that the spacing of these marks reflects the increased magnitude of the signal's indeterminate error for higher concentrations of thallium.]
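A minimal R sketch of this weighted regression, using `lm()` with the weights from equation 5.28; the variable names are ours, and because rescaling the weights does not change the fit, `lm()` reproduces the hand calculation.

```r
# weighted linear regression for the thallium data in Problem 13
x  <- c(0.000, 0.387, 1.851, 5.734)
y  <- c(2.626, 8.160, 29.114, 85.714)     # average signal for each standard
sy <- c(0.1137, 0.2969, 0.5566, 1.1768)   # standard deviation of the 7 replicates

w   <- length(x) * sy^-2 / sum(sy^-2)     # weights as in equation 5.28; sum(w) = 4
fit <- lm(y ~ x, weights = w)
coef(fit)                                 # intercept ~2.61, slope ~14.43
```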

Chapter 14
1. (a) The response when A = 0 and B = 0 is 1.68, which we represent as (A, B, response) or, in this case, (0, 0, 1.68). For the first cycle, we increase A in steps of one until the response begins to decrease or until we reach a boundary, obtaining the following additional results:

(1, 0, 1.88), (2, 0, 2.00), (3, 0, 2.04), (4, 0, 2.00)

At this point, our best response is 2.04 at A = 3 and at B = 0. For the second cycle, we return to (3, 0, 2.04) and increase B in steps of one, obtaining these results:

(3, 1, 2.56), (3, 2, 3.00), (3, 3, 3.36), (3, 4, 3.64), (3, 5, 3.84), (3, 6, 3.96), (3, 7, 4.00), (3, 8, 3.96)

At this point, our best response is 4.00 at A = 3 and at B = 7. For the third cycle, we return to (3, 7, 4.00) and increase A in steps of one, obtaining a result of (4, 7, 3.96). Because this response is smaller than our current best response of 4.00, we try decreasing A by a step of one, which gives (2, 7, 3.96). Having explored the response in all directions around (3, 7, 4.00), we know that the optimum response is 4.00 at A = 3 and B = 7.
Figure SM14.1a shows the progress of the optimization as a three-di-
mensional scatterplot with the figure’s floor showing a contour plot
of the response surface. Figure SM14.1b shows a three-dimensional
surface plot of the response surface.
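The search itself is easy to automate. The R sketch below climbs one factor at a time in unit steps until the response stops improving; because the response equation given in Problem 1a is not reproduced in this solution, the function f below is a hypothetical stand-in, so only the search logic, not the particular numbers, carries over.

```r
# hypothetical response surface used only to illustrate the search logic;
# replace f with the actual equation from Problem 1a
f <- function(a, b) 1.68 + 0.12*a + 0.48*b - 0.03*a^2 - 0.03*b^2

# walk one factor (k = 1 for A, k = 2 for B) in unit steps while the response improves
climb <- function(v, k, hi = 10) {
  repeat {
    trial <- v
    trial[k] <- v[k] + 1
    if (trial[k] > hi || f(trial[1], trial[2]) <= f(v[1], v[2])) break
    v <- trial
  }
  v
}

v <- c(A = 0, B = 0)
for (cycle in 1:10) {     # alternate cycles: first factor A, then factor B
  v <- climb(v, 1)
  v <- climb(v, 2)
}
c(v, response = f(v[[1]], v[[2]]))
```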
(b) The response when A = 0 and B = 0 is 4.00, which we represent as
(0, 0, 4.00). For the first cycle, we increase A in steps of one until the
response begins to decrease or until we reach a boundary, obtaining a result of (1, 0, 3.60); as this response is smaller than the initial response, this ends the first cycle.
Figure SM14.1 The progress of a one-factor-at-a-time optimization for the equation in Problem 1a is shown in (a) as a scatterplot in three dimensions with a contour plot of the response surface on the figure's floor. The full response surface is shown in (b). The legend shows the colors used for the individual contour lines; the response surface provides for a greater resolution in the response by using gradations between these colors.

Figure SM14.2 The progress of a one-factor-at-a-time optimization for the equation in Problem 1b is shown in (a) as a scatterplot in three dimensions with a contour plot of the response surface on the figure's floor. The full response surface is shown in (b). The legend shows the colors used for the individual contour lines; the response surface provides for a greater resolution in the response by using gradations between these colors.
We begin the second cycle by returning to (0, 0, 4.00) and increase
the value of B by one, obtaining a result of (0, 1, 4.00). Because the
response did not increase, we end the second cycle and, for the third
cycle, we increase the value of A, obtaining a result of (1, 1, 3.68).
Continuing in this fashion, the remainder of the steps are
(0, 1, 4.00), (0, 2, 4.00), (1, 2, 3.76), (0, 2, 4.00), (0, 3, 4.00), (1, 3, 3.84), (0, 3, 4.00), (0, 4, 4.00), (1, 4, 3.92), (0, 4, 4.00), (0, 5, 4.00), (1, 5, 4.00), (0, 5, 4.00), (0, 6, 4.00), (1, 6, 4.08), (2, 6, 4.16), (3, 6, 4.24), (4, 6, 4.32), (5, 6, 4.40), (6, 6, 4.48), (7, 6, 4.56), (8, 6, 4.64), (9, 6, 4.72), (10, 6, 4.80), (10, 7, 5.60), (10, 8, 6.40), (10, 9, 7.20), (10, 10, 8.00)
Note that until we reach A = 0 and B = 6, we keep probing toward larger values of A without increasing the response, and then probing toward larger values of B, also without increasing the response. Once we reach A = 0 and B = 6, however, we find that an increase in A finally increases the response. Once we reach the boundary for A, we continue to increase B until we reach the optimum response at A = 10 and B = 10.
The optimum response is 8.00 at A = 10 and B = 10.
Figure SM14.2a shows the progress of the optimization as a three-di-
mensional scatterplot with the figure’s floor showing a contour plot
for the response surface. Figure SM14.2b shows a three-dimensional
surface plot of the response surface.
(c) The response when A = 0 and B = 0 is 3.267, which we represent
as (0, 0, 3.267). For the first cycle, we increase A in steps of one until
the response begins to decrease or until we reach a boundary, obtain-
ing the following additional results:
(1, 0, 4.651), (2, 0, 5.736), (3, 0, 6.521), (4, 0, 7.004), (5, 0, 7.187), (6, 0, 7.068)
At this point, our best response is 7.187 at A = 5 and at B = 0.
For the second cycle, we return to (5, 0, 7.187) and increase B in steps
of one, obtaining these results:
Figure SM14.3 The progress of a one-factor-at-a-time optimization for the equation in Problem 1c is shown in (a) as a scatterplot in three dimensions with a contour plot of the response surface on the figure's floor. The full response surface is shown in (b). The legend shows the colors used for the individual contour lines; the response surface provides for a greater resolution in the response by using gradations between these colors.
(5, 1, 7.436), (5, 2, 7.631), (5, 3, 7.772), (5, 4, 7.858), (5, 5, 7.889), (5, 6, 7.865)
At this point, our best response is 7.889 at A = 5 and at B = 5.

For the next cycle, we return to (5, 5, 7.889) and increase A in steps of
one, obtaining a response for (6, 5, 7.481) that is smaller; probing in
the other direction gives (4, 5, 7.996) and then (3, 5, 7.801). Returning to (4, 5, 7.996), we find our optimum response at (4, 6, 8.003),
with movement in all other directions giving a smaller response. Note
that using a fixed step size of one prevents us from reaching the true
optimum at A = 3.91 and B = 6.22.
Figure SM14.3a shows the progress of the optimization as a three-di-
mensional scatterplot with the figure’s floor showing a contour plot
for the response surface. Figure SM14.3b shows a three-dimensional
surface plot of the response surface.
2. Given a step size of 1.0 in both directions and A = 0 and B = 0 as
the starting point for the first simplex, the other two vertices for the
first simplex are at A = 1 and at B = 0, and at A = 0.5 and at B = 0.87. The responses for the first three vertices are (0, 0, 3.264), (1.0, 0, 4.651), and (0.5, 0.87, 4.442), respectively. The vertex with the worst response is (0, 0, 3.264); thus, we reject this vertex and replace it with coordinates of

$A = 2\left(\frac{1 + 0.5}{2}\right) - 0 = 1.5 \qquad B = 2\left(\frac{0.87 + 0}{2}\right) - 0 = 0.87$

The following table summarizes all the steps in the simplex optimiza-
tion. The column labeled “vertex” shows the 25 unique experiments
along with their values for A, for B, and for the response. The column
labeled "simplex" shows the three vertices that make up each simplex. For each simplex, the vertex that we reject is shown in bold font; note that on two occasions, the rejected vertex, shown in bold-italic font, has the second-worst response (either because of a boundary condition or because the new vertex has the worst response)

vertex    A      B      response   simplex
1         0      0      3.264      —
2         1.0    0      4.651      —
3         0.5    0.87   4.442      1, 2, 3
4         1.5    0.87   5.627      2, 3, 4
5         2.0    0      5.736      2, 4, 5
6         2.5    0.87   6.512      4, 5, 6
7         3.0    0      6.521      5, 6, 7
8         3.5    0.87   7.096      6, 7, 8
9         4.0    0      7.004      7, 8, 9
10        4.5    0.87   7.378      8, 9, 10
11        4.0    1.74   7.504      8, 10, 11
12        5.0    1.74   7.586      10, 11, 12
13        4.5    2.61   7.745      11, 12, 13
14        5.5    2.61   7.626      12, 13, 14
15        5.0    3.48   7.820      13, 14, 15
16        4.0    3.48   7.839      13, 15, 16
17        4.5    4.35   7.947      15, 16, 17
18        3.5    4.35   7.866      16, 17, 18
19        4.0    5.22   8.008      17, 18, 19
20        5.0    5.22   7.888      17, 19, 20
21        4.5    6.09   7.983      19, 20, 21
22        3.5    6.09   8.002      19, 21, 22
23        3.0    5.22   7.826      19, 22, 23
24        3.5    4.35   7.866      19, 23, 24
25        4.5    4.35   7.947      19, 24, 25

Figure SM14.4 shows the progress of the simplex optimization in three dimensions and in two dimensions.

Figure SM14.4 Two views showing the progress of a simplex optimization of the equation in Problem 1c in (a) three dimensions and in (b) two dimensions. The legend shows the colors used for the individual contour lines. Figure SM14.3b shows the full response surface for this problem.
Figure SM14.5 Diagram showing vertices of original simplex and the reflection of the worst vertex across the midpoint (red circle) of the best and the next-best vertices to give the new vertex (green circle). See text for additional details.

3. To help us in the derivation, we will use the diagram shown in Figure SM14.5 where a and b are the coordinates of a vertex, and w, b, s, and n identify the vertex with, respectively, the worst response, the best response, the second-best response, and the new vertex. The red circle marks the midpoint between the best vertex and the second-best vertex; its coordinates are
$a_{mp} = \frac{a_b + a_s}{2} \qquad b_{mp} = \frac{b_b + b_s}{2}$

The distance along the a-axis between the worst vertex's coordinate of $a_w$ and the midpoint's coordinate of $a_{mp}$ is

$\frac{a_b + a_s}{2} - a_w$

The distance along the a-axis between the worst vertex's coordinate and the new vertex's coordinate is twice that to the midpoint, which means the a coordinate for the new vertex is

$a_n = 2\left(\frac{a_b + a_s}{2} - a_w\right) + a_w$

(the value for the coordinate $a_n$ is the value for the coordinate $a_w$ plus the distance along the a-axis between the new vertex and the worst vertex), which simplifies to equation 14.3

$a_n = 2\left(\frac{a_b + a_s}{2}\right) - a_w$

Using the same approach for coordinates relative to the b-axis yields equation 14.4

$b_n = 2\left(\frac{b_b + b_s}{2}\right) - b_w$
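Equations 14.3 and 14.4 are easy to apply in code. The R sketch below performs a single reflection of the worst vertex, using the first simplex from Problem 2 as its example; it illustrates only the reflection step, not the bookkeeping for a full simplex optimization.

```r
# one reflection step for a fixed-size simplex (equations 14.3 and 14.4):
# the worst vertex is reflected through the midpoint of the best and
# second-best vertices
reflect <- function(best, second, worst) {
  c(a = 2 * (best[["a"]] + second[["a"]]) / 2 - worst[["a"]],
    b = 2 * (best[["b"]] + second[["b"]]) / 2 - worst[["b"]])
}

# first simplex from Problem 2 (responses: 3.264, 4.651, 4.442)
worst  <- c(a = 0.0, b = 0.00)
best   <- c(a = 1.0, b = 0.00)
second <- c(a = 0.5, b = 0.87)
reflect(best, second, worst)   # new vertex at a = 1.5, b = 0.87
```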

4. In coded form, the values for b0, ba, bb, and bab are
$b_0 = \frac{1}{4}(5.92 + 2.08 + 4.48 + 3.52) = 4.00$

$b_a = \frac{1}{4}(5.92 + 2.08 - 4.48 - 3.52) = 0$

$b_b = \frac{1}{4}(5.92 - 2.08 + 4.48 - 3.52) = 1.20$

$b_{ab} = \frac{1}{4}(5.92 - 2.08 - 4.48 + 3.52) = 0.72$

which gives us the following equation for the response surface in coded form

$R = 4.00 + 1.20B^* + 0.72A^*B^*$

To convert this equation into its uncoded form, we first note the following relationships between coded and uncoded values for A and for B

$A = 5 + 3A^* \qquad B = 5 + 3B^*$

$A^* = \frac{A - 5}{3} \qquad B^* = \frac{B - 5}{3}$

Substituting these two equations back into the response surface's coded equation gives

$R = 4.00 + 1.20\left(\frac{B - 5}{3}\right) + 0.72\left(\frac{A - 5}{3}\right)\left(\frac{B - 5}{3}\right)$

$R = 4.00 + 0.40B - 2.00 + 0.08AB - 0.40A - 0.40B + 2.00$

$R = 4.00 - 0.40A + 0.08AB$

At first glance, the coded and the uncoded equations seem quite different, with the coded equation showing a first-order effect in B* and an interaction between A* and B*, and the uncoded equation showing a first-order effect in A and an interaction between A and B. As we see in Figure SM14.6, however, their respective response surfaces are identical.

When we examine carefully both equations, we see they convey the same information: that the system's response depends on the relative values of A and B (or A* and B*) and that the effect of A (or A*) depends on the value of B (or B*), with larger values of A (or more positive values of A*) decreasing the response for smaller values of B (or more negative values of B*). Although the mathematical form of the equation is important, it is more important that we interpret what it tells us about how each factor affects the response.

Figure SM14.6 Response surfaces based on the (a) coded and the (b) uncoded equations derived from the data in Problem 4. Note that the two response surfaces are identical even though their equations are very different.
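A quick way to convince yourself numerically that the two equations describe the same surface is to evaluate both over a grid of factor levels; the R sketch below does this for the coefficients derived above (the ±1 coded levels correspond to uncoded values of 2 and 8). This is simply a check, not part of the original solution.

```r
# coded equation from the 2^2 factorial design in Problem 4
R_coded   <- function(Astar, Bstar) 4.00 + 1.20*Bstar + 0.72*Astar*Bstar
# uncoded equation obtained by substituting A* = (A - 5)/3 and B* = (B - 5)/3
R_uncoded <- function(A, B) 4.00 - 0.40*A + 0.08*A*B

grid <- expand.grid(A = seq(2, 8, by = 0.5), B = seq(2, 8, by = 0.5))
coded   <- R_coded((grid$A - 5)/3, (grid$B - 5)/3)
uncoded <- R_uncoded(grid$A, grid$B)
max(abs(coded - uncoded))   # effectively zero: the surfaces are identical
```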
5. (a) Letting a represent Ca and letting b represent Al, the values for b0, ba, bb, and bab in coded form are

$b_0 = \frac{1}{4}(54.29 + 98.44 + 19.18 + 38.53) = 52.61$

$b_a = \frac{1}{4}(54.29 + 98.44 - 19.18 - 38.53) = 23.755$

$b_b = \frac{1}{4}(54.29 - 98.44 + 19.18 - 38.53) = -15.875$

$b_{ab} = \frac{1}{4}(54.29 - 98.44 - 19.18 + 38.53) = -6.20$
which gives us the following equation for the response surface in
coded form
$R = 52.610 + 23.755\,\mathrm{Ca}^* - 15.875\,\mathrm{Al}^* - 6.20\,\mathrm{Ca}^*\mathrm{Al}^*$
(b) The original data shows that a larger concentration of Al sup-
presses the signal for Ca; thus, we want to find the maximum con-
centration of Al that results in a decrease in the response of less than
5%. First, we determine the response for a solution that is 6.00 ppm
in Ca and that has no Al. The following equations relate the actual
concentrations of each species to its coded form
$\mathrm{Ca} = 7 + 3\,\mathrm{Ca}^* \qquad \mathrm{Al} = 80 + 80\,\mathrm{Al}^*$
Substituting in 6.00 ppm for Ca and 0.00 ppm for Al gives –1/3 for
Ca* and –1 for Al*. Substituting these values back into the response
surface’s coded equation
$R = 52.610 + 23.755\left(-\tfrac{1}{3}\right) - 15.875(-1) - 6.20\left(-\tfrac{1}{3}\right)(-1)$

gives the response as 58.50. Decreasing this response by 5% leaves us


with a response of 55.58. Substituting this response into the response
surface’s coded equation, along with the coded value of –1/3 for Ca*,
and solving for Al* gives
$55.58 = 52.610 + 23.755\left(-\tfrac{1}{3}\right) - 15.875\,\mathrm{Al}^* - 6.20\left(-\tfrac{1}{3}\right)\mathrm{Al}^*$

$10.88 = -13.81\,\mathrm{Al}^*$

$\mathrm{Al}^* = -0.789$
The maximum allowed concentration of aluminum, therefore, is
Al = 80 + 80 (–0.789) = 16.9 ppm Al
6. (a) The values for b0, bx, by, bz, bxy, bxz, byz, and bxyz in coded form are
$b_0 = \frac{1}{8}(28 + 17 + 41 + 34 + 56 + 51 + 42 + 36) = 38.125 \approx 38.1$

$b_x = \frac{1}{8}(-28 + 17 - 41 + 34 - 56 + 51 - 42 + 36) = -3.625 \approx -3.6$

$b_y = \frac{1}{8}(-28 - 17 + 41 + 34 - 56 - 51 + 42 + 36) = 0.125 \approx 0.1$

$b_z = \frac{1}{8}(-28 - 17 - 41 - 34 + 56 + 51 + 42 + 36) = 8.125 \approx 8.1$

$b_{xy} = \frac{1}{8}(28 - 17 - 41 + 34 + 56 - 51 - 42 + 36) = 0.375 \approx 0.4$

$b_{xz} = \frac{1}{8}(28 - 17 + 41 - 34 - 56 + 51 - 42 + 36) = 0.875 \approx 0.9$

$b_{yz} = \frac{1}{8}(28 + 17 - 41 - 34 - 56 - 51 + 42 + 36) = -7.375 \approx -7.4$

$b_{xyz} = \frac{1}{8}(-28 + 17 + 41 - 34 + 56 - 51 - 42 + 36) = -0.625 \approx -0.6$

The coded equation for the response surface, therefore, is


$R = 38.1 - 3.6X^* + 0.1Y^* + 8.1Z^* + 0.4X^*Y^* + 0.9X^*Z^* - 7.4Y^*Z^* - 0.6X^*Y^*Z^*$
(b) The important effects are the temperature (X*) and the reactant’s
concentration (Z*), and an interaction between the reactant’s concen-
tration and the type of catalyst (Y*Z*), which leave us with
$R = 38.1 - 3.6X^* + 8.1Z^* - 7.4Y^*Z^*$
(c) Because the catalyst is a categorical variable, not a numerical vari-
able, we cannot transform its coded value (Y*) into a number.
(d) The response surface’s simple coded equation shows us that the
effect of the catalyst depends on the reactant’s concentration as it ap-
pears only in the interaction term Y*Z*. For smaller concentrations
of reactant—when Z* is less than 0 or the reactant’s concentration
is less than 0.375 M—catalyst B is the best choice because the term
–7.4Y*Z* is positive; the opposite is true for larger concentrations of
reactant—when Z* is greater than 0 or the reactant’s concentration is
greater than 0.375 M—where catalyst A is the best choice.
(e) For the temperature and the concentration of reactant, the follow-
ing equations relate a coded value to its actual value
$X = 130 + 10X^* \qquad Z = 0.375 + 0.125Z^*$
Substituting in the desired temperature and concentration, and solv-
ing for X* and for Z* gives
$125 = 130 + 10X^* \qquad 0.45 = 0.375 + 0.125Z^*$

$-5 = 10X^* \qquad 0.075 = 0.125Z^*$

$X^* = -0.5 \qquad Z^* = 0.6$
Because Z* is greater than zero, we know that the best catalyst is type
A, for which Y* is –1. Substituting these values into the response
surface’s coded equation gives the percent yield as
R = 38.1 - 3.6 (–0.5) + 8.1 (0.6) - 7.4 (–1) (0.6) = 49.2%
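For a 2^3 factorial design these coefficients also fall out directly from the design matrix. The R sketch below reproduces the values in part (a) and the predicted yield from part (e); it assumes the eight responses are listed in standard order, with X* varying fastest, which matches the sign patterns in the sums above.

```r
# responses for the eight runs of the 2^3 factorial design in Problem 6,
# listed in standard order (X* varies fastest, then Y*, then Z*)
response <- c(28, 17, 41, 34, 56, 51, 42, 36)
X <- rep(c(-1, 1), times = 4)
Y <- rep(c(-1, -1, 1, 1), times = 2)
Z <- rep(c(-1, 1), each = 4)

b <- c(b0  = mean(response),
       bx  = mean(X*response),   by   = mean(Y*response),
       bz  = mean(Z*response),   bxy  = mean(X*Y*response),
       bxz = mean(X*Z*response), byz  = mean(Y*Z*response),
       bxyz = mean(X*Y*Z*response))
round(b, 3)

# predicted percent yield at 125 degrees C (X* = -0.5), catalyst A (Y* = -1),
# and 0.45 M reactant (Z* = 0.6), using only the important terms
38.1 - 3.6*(-0.5) + 8.1*(0.6) - 7.4*(-1)*(0.6)   # 49.2
```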
7. (a) The values for b0, bx, by, bz, bxy, bxz, byz, and bxyz in coded form are
$b_0 = \frac{1}{8}(1.55 + 5.40 + 3.50 + 6.75 + 2.45 + 3.60 + 3.05 + 7.10) = 4.175 \approx 4.18$

$b_x = \frac{1}{8}(-1.55 + 5.40 - 3.50 + 6.75 - 2.45 + 3.60 - 3.05 + 7.10) = 1.538 \approx 1.54$

$b_y = \frac{1}{8}(-1.55 - 5.40 + 3.50 + 6.75 - 2.45 - 3.60 + 3.05 + 7.10) = 0.925 \approx 0.92$

$b_z = \frac{1}{8}(-1.55 - 5.40 - 3.50 - 6.75 + 2.45 + 3.60 + 3.05 + 7.10) = -0.125 \approx -0.12$

$b_{xy} = \frac{1}{8}(1.55 - 5.40 - 3.50 + 6.75 + 2.45 - 3.60 - 3.05 + 7.10) = 0.288 \approx 0.29$

$b_{xz} = \frac{1}{8}(1.55 - 5.40 + 3.50 - 6.75 - 2.45 + 3.60 - 3.05 + 7.10) = -0.238 \approx -0.24$

$b_{yz} = \frac{1}{8}(1.55 + 5.40 - 3.50 - 6.75 - 2.45 - 3.60 + 3.05 + 7.10) = 0.100 \approx 0.10$

$b_{xyz} = \frac{1}{8}(-1.55 + 5.40 + 3.50 - 6.75 + 2.45 - 3.60 - 3.05 + 7.10) = 0.438 \approx 0.44$

The coded equation for the response surface, therefore, is

$R = 4.18 + 1.54X^* + 0.92Y^* - 0.12Z^* + 0.29X^*Y^* - 0.24X^*Z^* + 0.10Y^*Z^* + 0.44X^*Y^*Z^*$
(b) The important effects are the presence or absence of benzocaine
(X*) and the temperature (Y*), which leave us with
$R = 4.18 + 1.54X^* + 0.92Y^*$
8. (a) The values for b0, bx, by, bz, bxy, bxz, byz, and bxyz in coded form are
$b_0 = \frac{1}{8}(2 + 6 + 4 + 8 + 10 + 18 + 8 + 12) = 8.5$

$b_x = \frac{1}{8}(-2 + 6 - 4 + 8 - 10 + 18 - 8 + 12) = 2.5$

$b_y = \frac{1}{8}(-2 - 6 + 4 + 8 - 10 - 18 + 8 + 12) = -0.5$

$b_z = \frac{1}{8}(-2 - 6 - 4 - 8 + 10 + 18 + 8 + 12) = 3.5$

$b_{xy} = \frac{1}{8}(2 - 6 - 4 + 8 + 10 - 18 - 8 + 12) = -0.5$

$b_{xz} = \frac{1}{8}(2 - 6 + 4 - 8 - 10 + 18 - 8 + 12) = 0.5$

$b_{yz} = \frac{1}{8}(2 + 6 - 4 - 8 - 10 - 18 + 8 + 12) = -1.5$

$b_{xyz} = \frac{1}{8}(-2 + 6 + 4 - 8 + 10 - 18 - 8 + 12) = -0.5$

The coded equation for the response surface, therefore, is


$R = 8.5 + 2.5X^* - 0.5Y^* + 3.5Z^* - 0.5X^*Y^* + 0.5X^*Z^* - 1.5Y^*Z^* - 0.5X^*Y^*Z^*$
(b) The important effects are the temperature (X*), the pressure (Y*),
and the interaction between the pressure and the residence time
(Y*Z*), which leave us with
$R = 8.5 + 2.5X^* + 3.5Z^* - 1.5Y^*Z^*$


(c) The mean response is an 8.6% yield for the three trials at the center
of the experimental design, with a standard deviation of 0.529%. A
95% confidence interval for the mean response is
$\mu = \bar{X} \pm \frac{ts}{\sqrt{n}} = 8.60\% \pm \frac{(4.303)(0.529\%)}{\sqrt{3}} = 8.60\% \pm 1.31\%$
The average response for the eight trials in the experimental design
is given by b0 and is equal to 8.5; as this falls within the confidence
interval, there is no evidence, at a = 0.05, of curvature in the data
and a first-order model is a reasonable choice.
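The R sketch below reproduces this check on curvature, working from the center-point mean and standard deviation reported above rather than from the individual replicate values.

```r
# mean and standard deviation for the three trials at the center of the design
xbar <- 8.60
s    <- 0.529
n    <- 3

ci <- xbar + c(-1, 1) * qt(0.975, df = n - 1) * s / sqrt(n)
ci                           # roughly 7.3% to 9.9%

b0 <- 8.5                    # average response for the eight factorial runs
b0 >= ci[1] && b0 <= ci[2]   # TRUE: no evidence of curvature at alpha = 0.05
```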
9. (a) When considering the response in terms of ΔE, the values for b0,
bx, by, bz, bxy, bxz, byz, and bxyz in coded form are
$b_0 = \frac{1}{8}(37.45 + 31.70 + 32.10 + 27.20 + 39.85 + 32.85 + 35.00 + 32.15) = 33.54$

$b_x = \frac{1}{8}(-37.45 + 31.70 - 32.10 + 27.20 - 39.85 + 32.85 - 35.00 + 32.15) = -2.56$

$b_y = \frac{1}{8}(-37.45 - 31.70 + 32.10 + 27.20 - 39.85 - 32.85 + 35.00 + 32.15) = -1.92$

$b_z = \frac{1}{8}(-37.45 - 31.70 - 32.10 - 27.20 + 39.85 + 32.85 + 35.00 + 32.15) = 1.42$

$b_{xy} = \frac{1}{8}(37.45 - 31.70 - 32.10 + 27.20 + 39.85 - 32.85 - 35.00 + 32.15) = 0.62$

$b_{xz} = \frac{1}{8}(37.45 - 31.70 + 32.10 - 27.20 - 39.85 + 32.85 - 35.00 + 32.15) = 0.10$

$b_{yz} = \frac{1}{8}(37.45 + 31.70 - 32.10 - 27.20 - 39.85 - 32.85 + 35.00 + 32.15) = 0.54$

$b_{xyz} = \frac{1}{8}(-37.45 + 31.70 + 32.10 - 27.20 + 39.85 - 32.85 - 35.00 + 32.15) = 0.41$
The coded equation for the response surface, therefore, is
$R = 33.54 - 2.56X^* - 1.92Y^* + 1.42Z^* + 0.62X^*Y^* + 0.10X^*Z^* + 0.54Y^*Z^* + 0.41X^*Y^*Z^*$
(b) When considering the response in terms of samples per hour, the
values for b0, bx, by, bz, bxy, bxz, byz, and bxyz in coded form are
$b_0 = \frac{1}{8}(21.5 + 26.0 + 30.0 + 33.0 + 21.0 + 19.5 + 30.0 + 34.0) = 26.9$

$b_x = \frac{1}{8}(-21.5 + 26.0 - 30.0 + 33.0 - 21.0 + 19.5 - 30.0 + 34.0) = 1.2$

$b_y = \frac{1}{8}(-21.5 - 26.0 + 30.0 + 33.0 - 21.0 - 19.5 + 30.0 + 34.0) = 4.9$

$b_z = \frac{1}{8}(-21.5 - 26.0 - 30.0 - 33.0 + 21.0 + 19.5 + 30.0 + 34.0) = -0.8$

$b_{xy} = \frac{1}{8}(21.5 - 26.0 - 30.0 + 33.0 + 21.0 - 19.5 - 30.0 + 34.0) = 0.5$

$b_{xz} = \frac{1}{8}(21.5 - 26.0 + 30.0 - 33.0 - 21.0 + 19.5 - 30.0 + 34.0) = -0.6$

$b_{yz} = \frac{1}{8}(21.5 + 26.0 - 30.0 - 33.0 - 21.0 - 19.5 + 30.0 + 34.0) = 1.0$

$b_{xyz} = \frac{1}{8}(-21.5 + 26.0 + 30.0 - 33.0 + 21.0 - 19.5 - 30.0 + 34.0) = 0.9$
The coded equation for the response surface, therefore, is
$R = 26.9 + 1.2X^* + 4.9Y^* - 0.8Z^* + 0.5X^*Y^* - 0.6X^*Z^* + 1.0Y^*Z^* + 0.9X^*Y^*Z^*$
(c) To help us compare the response surfaces, let’s gather the values
for each term into a table; thus
parameter ΔE sample/h
b0 33.54 26.9
bx –2.56 1.2
by –1.92 4.9
bz 1.42 –0.8
bxy 0.62 0.5
bxz 0.10 –0.6
byz 0.54 1.0
bxyz 0.41 0.9
Looking at the main effects (bx, by, and bz), we see from the signs that
the parameters that favor a high sampling rate (a smaller volume of
sample, a shorter reactor length, and a faster carrier flow rate) result
in smaller values for ΔE; thus, the conditions that favor sensitivity do
not favor the sampling rate.
(d) One way to answer this question is to look at the original data
and see if for any individual experiment, the sensitivity and the sam-
pling rate both exceed their mean values as given by their respective
values for b0: 33.54 for ΔE and 26.9 sample/h for the sampling rate.
Of the original experiments, this is the case only for run 7; thus, a
reactor length of 1.5 cm (X* = –1), a carrier flow rate of 2.2 mL/min
(Y* = +1), and a sample volume of 150 µL provides the best compromise between sensitivity and sampling rate.
Another approach is to plot the sampling rate versus the sensitivity for each experimental run, as shown in Figure SM14.7 where the blue dots are the results for the eight experiments, the red square is the average sensitivity and the average rate, and the red line shows conditions that result in an equal percentage change in the sensitivity and the sampling rate relative to their mean values. The best experimental run is the one that lies closest to the red line and furthest to the upper-right corner. Again, the seventh experiment provides the best compromise between sampling rate and sensitivity.
Figure SM14.7 Plot of sampling rate vs. sensitivity for the data in Problem 9. The blue dots are the results for the experimental runs used to model the response surface, the red square shows the mean sensitivity and mean sampling rate for the experimental data, and the red line shows equal percentage changes in sensitivity and sampling rate relative to their respective mean values. See text for further details.

10. (a) There are a total of 32 terms to calculate: one average (b0), five main effects (ba, bb, bc, bd, and be), 10 binary interactions (bab, bac, bad, bae, bbc, bbd, bbe, bcd, bce, and bde), 10 ternary interactions (babc, babd, babe, bacd, bace, bade, bbcd, bbce, bbde, and bcde), five quaternary interactions (babcd, babce, babde, bacde, and bbcde), and one quinary interaction (babcde). We will not show here the equations for all 32 terms; instead, we provide the equation for one term in each set and summarize the results in a table.
$b_0 = \frac{1}{32}\sum_{i=1}^{32} R_i \qquad b_a = \frac{1}{32}\sum_{i=1}^{32} A_i^* R_i$

$b_{ab} = \frac{1}{32}\sum_{i=1}^{32} A_i^* B_i^* R_i \qquad b_{abc} = \frac{1}{32}\sum_{i=1}^{32} A_i^* B_i^* C_i^* R_i$

$b_{abcd} = \frac{1}{32}\sum_{i=1}^{32} A_i^* B_i^* C_i^* D_i^* R_i \qquad b_{abcde} = \frac{1}{32}\sum_{i=1}^{32} A_i^* B_i^* C_i^* D_i^* E_i^* R_i$
term value term value term value


b0 0.49 bbd –0.008 bbcd 0.001
ba 0.050 bbe 0.008 bbce 0
bb –0.071 bcd –0.021 bbde 0.006
bc 0.039 bce –0.12 bcde 0.025
bd 0.074 bde –0.007 babcd 0.006
be –0.15 babc 0.003 babce 0.007
bab 0.001 babd 0.005 babde 0.004
bac –0.007 babe –0.004 bacde 0.009
bad 0.013 bacd 0.003 bbcde 0.005
bae 0.009 bace 0.049 babcde –0.14
bbc 0.014 bade 0.019
If we ignore any term with an absolute value less than 0.03, then the
coded equation for the response surface is
$R = 0.49 + 0.050A^* - 0.071B^* + 0.039C^* + 0.074D^* - 0.15E^* - 0.12C^*E^* + 0.049A^*C^*E^*$
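With 32 terms it is far less tedious to let software form the products of the coded factor levels. The R sketch below shows one way to organize the calculation; because the full 32-run design matrix and responses are not reproduced in this solution, the vectors A through E and R are placeholders that you would replace with the actual coded levels (each −1 or +1) and measured responses, so only the looping logic is being illustrated.

```r
# placeholders: substitute the 32 coded factor levels and the 32 responses
A <- B <- C <- D <- E <- rep(c(-1, 1), length.out = 32)
R <- runif(32)

factors <- list(A = A, B = B, C = C, D = D, E = E)
terms   <- unlist(lapply(1:5, function(k)
             combn(names(factors), k, paste, collapse = "")), use.names = FALSE)

# each beta is the mean of the response multiplied by the product of the coded
# levels for the factors that appear in the term; b0 is simply mean(R)
beta <- sapply(terms, function(tm) {
  idx <- strsplit(tm, "")[[1]]
  mean(Reduce(`*`, factors[idx]) * R)
})
c(b0 = mean(R), round(beta, 3))
```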
(b) The coded equation suggests that the most desirable values for
A* and for D* are positive as they appear only in terms with positive
coefficients, and that the most desirable values for B* are negative as it
appears only in a term with a negative coefficient. Because E* is held
at its high, or +1 level, the most desirable value for C* is negative as
this will make –0.12C*E* more positive than the term 0.049A*C*E*
is negative. This is consistent with the results from the simplex op-
timization as the flow rate (A) of 2278 mL/min is greater than its
average factor level of 1421 mL/min (A*), the amount of SiH4 used
(B) of 9.90 ppm is less than its average factor level of 16.1 ppm (B*),
the O2 + N2 flow rate (C) of 260.6 mL/min is greater than its average factor level (C*) of 232.5 mL/min, and the O2/N2 ratio (D) of 1.71 is
greater than its average factor level (D*) of 1.275.
11. Substituting in values of X1 = 10 and X2 = 0 gives a response of
519.7, or an absorbance of 0.520. Repeating using values of X1 = 0
and X2 = 10 gives a response of 637.5, or an absorbance of 0.638.
Finally, letting X1 = 0 and X2 = 0 gives a response of 835.9, or an
absorbance of 0.836.
These values are not reasonable as both H2O2 and H2SO4 are re-
quired reagents if the reaction is to develop color. Although the empirical model works well within the limit $8 \le X_1 \le 22$ and the limit $8 \le X_2 \le 22$, we cannot extend the model outside this range without introducing error. This is, of course, the inherent danger of extrapolation.
12. The mean and the standard deviation for the 10 trials are 1.355 ppm
and 0.1183 ppm, respectively. The relative standard deviation of
$s_{rel} = \frac{0.1183\ \text{ppm}}{1.355\ \text{ppm}} \times 100 = 8.73\%$

and the bias of

$\frac{1.355\ \text{ppm} - 1.30\ \text{ppm}}{1.30\ \text{ppm}} \times 100 = 4.23\%$

are within the prescribed limits; thus, the single operator characteris-
tics are acceptable.
13. The following calculations show the effect of a change in each factor’s
level
$E_A = \frac{98.9 + 98.5 + 97.7 + 97.0}{4} - \frac{98.8 + 98.5 + 97.7 + 97.3}{4} = -0.05$

$E_B = \frac{98.9 + 98.5 + 98.8 + 98.5}{4} - \frac{97.7 + 97.0 + 97.7 + 97.3}{4} = 1.25$

$E_C = \frac{98.9 + 97.7 + 98.8 + 97.7}{4} - \frac{98.5 + 97.0 + 98.5 + 97.3}{4} = 0.45$

$E_D = \frac{98.9 + 98.5 + 97.7 + 97.3}{4} - \frac{97.7 + 97.0 + 98.8 + 98.5}{4} = 0.10$

$E_E = \frac{98.9 + 97.7 + 98.5 + 97.3}{4} - \frac{98.5 + 97.0 + 98.8 + 97.7}{4} = 0.10$

$E_F = \frac{98.9 + 97.0 + 98.8 + 97.3}{4} - \frac{98.5 + 97.7 + 98.5 + 97.7}{4} = -0.10$

$E_G = \frac{98.9 + 97.0 + 98.5 + 97.7}{4} - \frac{98.5 + 97.7 + 98.8 + 97.3}{4} = -0.05$
The only significant factors are pH (factor B) and the digestion time
(factor C). Both have a positive factor effect, which indicates that each
factor’s high level produces a more favorable recovery. The method’s
estimated standard deviation is
$s = \sqrt{\frac{2}{7}\left[(-0.05)^2 + (1.25)^2 + (0.45)^2 + (0.10)^2 + (0.10)^2 + (-0.10)^2 + (-0.05)^2\right]} = 0.72$
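If you prefer to let software handle the arithmetic, the short R sketch below reproduces the estimated standard deviation from the seven factor effects calculated above; it takes the effects as given rather than recomputing them from the eight recoveries.

```r
# the seven factor effects (A through G) calculated above
effects <- c(A = -0.05, B = 1.25, C = 0.45, D = 0.10,
             E = 0.10, F = -0.10, G = -0.05)

# estimated standard deviation for the method (seven two-level factors)
s <- sqrt((2/7) * sum(effects^2))
round(s, 2)   # 0.72
```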


14. (a) The most accurate analyst is the one whose results are closest to
the true mean values, which is indicated by the red star; thus, analyst
2 has the most accurate results.
(b) The most precise analyst is the one whose results are closest to the
diagonal line that represents no indeterminate error; thus, analyst 8
has the most precise results.
(c) The least accurate analyst is the one whose results are furthest from the true mean values, which is indicated by the red star; thus, analyst 8 has the least accurate results.
Note that the results for analyst 8 remind us that accuracy and precision are not related, and that it is possible for work to be very precise and yet wholly inaccurate (or very accurate and very imprecise).
(d) The least precise analyst is the one whose results are furthest from
the diagonal line that represents no indeterminate error; thus, ana-
lysts 1 and 10 have the least precise results.
15. Figure SM14.8 shows the two sample plot where the mean for the first sample is 1.38 and the mean for the second sample is 1.50. A casual examination of the plot shows that six of the eight points are in the (+,+) or the (–,–) quadrants and that the distribution of the points is more elliptical than spherical; both suggest that systematic errors are present.

Figure SM14.8 Two-sample plot for the data in Problem 15. The blue dots are the results for each analyst, the red square is the average results for the two samples, the dashed brown lines divide the plot into four quadrants where the results for both samples exceed the mean (+,+), where both samples are below the mean (–,–), and where one sample is above the mean and one below the mean, (+,–) and (–,+). The solid green line shows results with identical systematic errors.

To estimate values for σrand and for σsys, we first calculate the differences, Di, and the totals, Ti, for each analyst; thus

analyst    Di       Ti
1         –0.22     2.92
2          0.02     2.68
3         –0.13     2.81
4         –0.10     3.10
5         –0.10     3.14
6         –0.13     2.91
7         –0.06     2.66
8         –0.21     2.85

To calculate the experimental standard deviations for the differences and the totals, we use equation 14.18 and equation 14.20, respectively; these are easy to calculate if first we find the regular standard deviation and then divide it by $\sqrt{2}$; thus

$s_D = 0.0549 \qquad s_T = 0.1232$
To determine if the systematic errors are significant, we use the following null hypothesis and one-tailed alternative hypothesis

$H_0\!: s_T = s_D \qquad H_A\!: s_T > s_D$

Here we use a one-tailed alternative hypothesis because we are interested only in whether sT is significantly greater than sD.

Because the value of Fexp

$F_{exp} = \frac{(s_T)^2}{(s_D)^2} = \frac{(0.1232)^2}{(0.0549)^2} = 5.04$
exceeds the critical value of F(0.05, 7, 7), which is 3.787, we reject the null hypothesis and accept the alternative hypothesis, finding evidence at α = 0.05 that systematic errors are present in the data. The estimated precision for a single analyst is

$\sigma_{rand} = s_D = 0.055$

and the estimated standard deviation due to systematic differences between the analysts is

$\sigma_{sys} = \sqrt{\frac{\sigma_T^2 - \sigma_D^2}{2}} = \sqrt{\frac{(0.1232)^2 - (0.0549)^2}{2}} = 0.078$
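These calculations also are easy to script. The R sketch below takes the differences and the totals from the table above and reproduces sD, sT, Fexp, and the two estimated standard deviations.

```r
# differences (Di) and totals (Ti) for the eight analysts, from the table above
D   <- c(-0.22, 0.02, -0.13, -0.10, -0.10, -0.13, -0.06, -0.21)
Tot <- c( 2.92, 2.68,  2.81,  3.10,  3.14,  2.91,  2.66,  2.85)

sD <- sd(D)   / sqrt(2)              # 0.0549
sT <- sd(Tot) / sqrt(2)              # 0.1232

Fexp  <- sT^2 / sD^2                 # 5.04
Fcrit <- qf(0.95, df1 = 7, df2 = 7)  # 3.787

sigma_rand <- sD                     # estimated precision for a single analyst
sigma_sys  <- sqrt((sT^2 - sD^2)/2)  # 0.078
round(c(sD = sD, sT = sT, Fexp = Fexp, Fcrit = Fcrit,
        rand = sigma_rand, sys = sigma_sys), 3)
```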

16. (a) For an analysis of variance, we begin by calculating the global mean
and the global variance for all 35 measurements using equation 14.22
and equation 14.23, respectively, obtaining values of $\bar{X} = 3.542$ and $s^2 = 1.989$. Next, we calculate the mean value for each of the seven labs, obtaining results of

$\bar{X}_A = 2.40 \quad \bar{X}_B = 3.60 \quad \bar{X}_C = 2.00 \quad \bar{X}_D = 2.60 \quad \bar{X}_E = 4.80 \quad \bar{X}_F = 5.00 \quad \bar{X}_G = 4.40$
To calculate the variance within the labs and the variance between
the labs, we use the equations from Table 14.7; thus, the total sum-
of-squares is
$SS_t = s^2(N - 1) = (1.989)(35 - 1) = 67.626$

and the between lab sum-of-squares is

$SS_b = \sum_{i=1}^{h} n_i(\bar{X}_i - \bar{X})^2 = (5)(2.40 - 3.542)^2 + (5)(3.60 - 3.542)^2 + (5)(2.00 - 3.542)^2 + (5)(2.60 - 3.542)^2 + (5)(4.80 - 3.542)^2 + (5)(5.00 - 3.542)^2 + (5)(4.40 - 3.542)^2 = 45.086$

and the within lab sum-of-squares is

$SS_w = SS_t - SS_b = 67.626 - 45.086 = 22.540$

The between lab variance, $s_b^2$, and the within lab variance, $s_w^2$, are

$s_b^2 = \frac{SS_b}{h - 1} = \frac{45.086}{7 - 1} = 7.514$

$s_w^2 = \frac{SS_w}{N - h} = \frac{22.540}{35 - 7} = 0.805$
To determine if there is evidence that the differences between the labs are significant, we use an F-test of the following null hypothesis and one-tailed alternative hypothesis

$H_0\!: s_b^2 = s_w^2 \qquad H_A\!: s_b^2 > s_w^2$

Here we use a one-tailed alternative hypothesis because we are interested only in whether sb is significantly greater than sw.

Because the value of Fexp

$F_{exp} = \frac{s_b^2}{s_w^2} = \frac{7.514}{0.805} = 9.33$

exceeds the critical value for F(0.05, 6, 28), which is between 2.099 and 2.599, we reject the null hypothesis and accept the alternative hypothesis, finding evidence at α = 0.05 that there are systematic differences between the results of the seven labs.
To evaluate the source(s) of this systematic difference, we use equation 14.27 to calculate texp for the difference between mean values, comparing texp to a critical value of 1.705 for a one-tailed t-test with 28 degrees of freedom. Here we use a one-tailed alternative hypothesis because we are interested only in whether the result for one lab is greater than the result for another lab. For example, when comparing lab A to lab C, the two labs with the smallest mean values, we find

$t_{exp} = \frac{|\bar{X}_A - \bar{X}_C|}{\sqrt{s_w^2}} \times \sqrt{\frac{n_A n_C}{n_A + n_C}} = \frac{|2.40 - 2.00|}{\sqrt{0.805}} \times \sqrt{\frac{(5)(5)}{5 + 5}} = 0.705$

Because texp is less than the critical value, there is no evidence for a systematic difference at α = 0.05 between lab A and lab C. The table below summarizes results for all seven labs
no evidence for a systematic difference at a = 0.05 between lab A and
lab C. The table below summarizes results for all seven labs
Note that the labs are organized from the
lab C A D B G E F lab with the smallest mean value (lab C)
to the lab with the largest mean value (lab
X 2.00 2.40 2.60 3.60 4.40 4.80 5.00 F) and that we compare mean values for
texp 0.705 1.762 0.705 adjacent labs only.


where there is no evidence of a significant difference between the re-


sults for labs C, A, and D (as shown by the green bar), where there is
no evidence of a significant difference between the results for labs G,
E, and F (as shown by the blue bar), and where there is no significant
difference between the results for labs B and G (as shown by the red
bar).
(b) The estimated values for $\sigma_{rand}^2$ and for $\sigma_{sys}^2$ are

$\sigma_{rand}^2 \approx s_w^2 = 0.805$

$\sigma_{sys}^2 = \frac{s_b^2 - \sigma_{rand}^2}{n} = \frac{7.514 - 0.805}{5} = 1.34$
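The analysis of variance is straightforward to reproduce with a statistical software program; the R sketch below works from the summary values given above (the global variance and the seven lab means, each based on five replicates) rather than from the individual results, and returns the same sums-of-squares, variances, and Fexp.

```r
# global statistics and lab means from the solution above
N      <- 35
s2     <- 1.989                            # global variance
xbar   <- 3.542                            # global mean
labbar <- c(A = 2.40, B = 3.60, C = 2.00, D = 2.60,
            E = 4.80, F = 5.00, G = 4.40)  # lab means, 5 replicates each
ni <- 5
h  <- length(labbar)

SSt <- s2 * (N - 1)                        # 67.626
SSb <- sum(ni * (labbar - xbar)^2)         # 45.086
SSw <- SSt - SSb                           # 22.540

s2b   <- SSb / (h - 1)                     # between-lab variance, 7.514
s2w   <- SSw / (N - h)                     # within-lab variance, 0.805
Fexp  <- s2b / s2w
Fcrit <- qf(0.95, df1 = h - 1, df2 = N - h)
round(c(SSb = SSb, SSw = SSw, s2b = s2b, s2w = s2w,
        Fexp = Fexp, Fcrit = Fcrit), 3)
```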
17. First, let’s write out the three sum-of-squares terms that appear in
equation 14.23 (SSt), equation 14.24 (SSw), and equation 14.25 (SSb)
$SS_t = \sum_{i=1}^{h}\sum_{j=1}^{n_i} (X_{ij} - \bar{X})^2$

$SS_w = \sum_{i=1}^{h}\sum_{j=1}^{n_i} (X_{ij} - \bar{X}_i)^2$

$SS_b = \sum_{i=1}^{h} n_i(\bar{X}_i - \bar{X})^2$

so that we have them in front of us. Looking at the equation for SSt,
let’s pull out the term within the parentheses,
$X_{ij} - \bar{X}$

and then subtract and add the term $\bar{X}_i$ to it, grouping together parts of the equation using parentheses

$(X_{ij} - \bar{X}) = (X_{ij} - \bar{X}_i) + (\bar{X}_i - \bar{X})$

Next, let’s square both sides of the equation


$(X_{ij} - \bar{X})^2 = \left\{(X_{ij} - \bar{X}_i) + (\bar{X}_i - \bar{X})\right\}^2$

$(X_{ij} - \bar{X})^2 = (X_{ij} - \bar{X}_i)^2 + (\bar{X}_i - \bar{X})^2 + 2(X_{ij} - \bar{X}_i)(\bar{X}_i - \bar{X})$

and then substitute the right side of this equation back into the sum-
mation term for SSt
$SS_t = \sum_{i=1}^{h}\sum_{j=1}^{n_i}\left\{(X_{ij} - \bar{X}_i)^2 + (\bar{X}_i - \bar{X})^2 + 2(X_{ij} - \bar{X}_i)(\bar{X}_i - \bar{X})\right\}$

and expand the summation across the terms in the curly parentheses
$SS_t = \sum_{i=1}^{h}\sum_{j=1}^{n_i}(X_{ij} - \bar{X}_i)^2 + \sum_{i=1}^{h}\sum_{j=1}^{n_i}(\bar{X}_i - \bar{X})^2 + 2\sum_{i=1}^{h}\sum_{j=1}^{n_i}(X_{ij} - \bar{X}_i)(\bar{X}_i - \bar{X})$

The last of these terms is equal to zero because this always is the result when you sum up the difference between a mean and the values that give the mean; thus, we now have this simpler equation

Note that this is not the case for the first two terms in this expanded equation for SSt because these terms sum up the squares of the differences, which always are positive, not the differences themselves, which are both positive and negative.

$SS_t = \sum_{i=1}^{h}\sum_{j=1}^{n_i}(X_{ij} - \bar{X}_i)^2 + \sum_{i=1}^{h}\sum_{j=1}^{n_i}(\bar{X}_i - \bar{X})^2$

Finally, we note that


$\sum_{i=1}^{h}\sum_{j=1}^{n_i}(\bar{X}_i - \bar{X})^2 = \sum_{i=1}^{h} n_i(\bar{X}_i - \bar{X})^2$

because, for each of the h samples, the inner summation term simply adds together the term $(\bar{X}_i - \bar{X})^2$ a total of $n_i$ times. Substituting this back into our equation for SSt gives


$SS_t = \sum_{i=1}^{h}\sum_{j=1}^{n_i}(X_{ij} - \bar{X}_i)^2 + \sum_{i=1}^{h} n_i(\bar{X}_i - \bar{X})^2$

which is equivalent to $SS_t = SS_w + SS_b$.
18. (a) Using equation 14.28, our estimate for the relative standard devi-
ation is
$R = 2^{(1 - 0.5\log C)} = 2^{(1 - 0.5\log(0.0026))} = 4.9\%$
(b) The mean and the standard deviation for the data set are
0.257%w/w and 0.0164%w/w respectively. The experimental per-
cent relative standard deviation, therefore, is
$s_r = \frac{0.0164\ \%\text{w/w}}{0.257\ \%\text{w/w}} \times 100 = 6.4\%$
Because this value is within the range of 0.5× to 2.0× of R, the vari-
ability in the individual results is reasonable.
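Both percent relative standard deviations are one-line calculations; the R sketch below reproduces them and the ratio used to judge whether the variability is reasonable.

```r
C  <- 0.0026                        # analyte's concentration as a mass fraction
R  <- 2^(1 - 0.5 * log10(C))        # predicted %RSD from equation 14.28; 4.9
sr <- 0.0164 / 0.257 * 100          # experimental %RSD; 6.4
c(R = R, sr = sr, ratio = sr / R)   # the ratio falls between 0.5 and 2.0
```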

Chapter 15
1. Answers will vary depending on the labs you have done and the
guidelines provided by your instructor. Of the examples cited in the
text, those that likely are most relevant to your experience are prop-
erly recording data and maintaining records, specifying and purify-
ing chemical reagents, cleaning and calibrating glassware and other
equipment, and maintaining the laboratory facilities and general lab-
oratory equipment.
2. Although your answers may include additional details, here are some
specific issues you should include.
(a) If necessary, clean and rinse the buret with water. When clean,
rinse the buret with several portions of your reagent and then fill the
buret with reagent so that it is below the buret’s 0.00 mL mark. Be
sure that the buret’s tip is filled and that an air bubble is not present.
Read the buret’s initial volume. Dispense the reagent, being sure that
each drop falls into your sample’s flask. If splashing occurs, rinse the
walls of the sample’s flask to ensure that the reagent makes it into the
flask. If a drop of reagent remains suspended on the buret’s tip when
you are done adding reagent, rinse it into the sample’s flask. Record
the final volume of reagent in the buret.
(b) Calibrate the pH meter using two buffers, one near a pH of 7 and
one that is more acidic or more basic, depending on the samples you
will analyze. When transferring the pH electrode to a new solution,
rinse it with distilled water and carefully dry it with a tissue to remove
the rinse water. Place the pH electrode in the solution you are analyz-
ing and allow the electrode to equilibrate before recording the pH.
(c) Turn on the instrument and allow sufficient time for the light
source to warm up. Adjust the wavelength to the appropriate value.
Adjust the instrument’s 0%T (infinite absorbance) without a sample
in the cell and with the light source blocked from reaching the detec-
tor. Fill a suitable cuvette with an appropriate blank solution, clean
the cuvette’s exterior surface with a tissue, place the cuvette in the
sample holder, and adjust the instrument’s 100%T (zero absorbance).
Rinse the cuvette with several small portions of your sample and then
fill the cuvette with sample. Place the cuvette in the sample holder
and record the sample’s %T or absorbance.
3. Substituting each sample’s signal into the equation for the calibration
curve gives the concentration of lead in the samples as 1.59 ppm and
1.48 ppm. The absolute difference, d, and the relative difference, (d)r,
are
$d = 1.59\ \text{ppm} - 1.48\ \text{ppm} = 0.11\ \text{ppm}$

$(d)_r = \frac{0.11\ \text{ppm}}{0.5(1.59\ \text{ppm} + 1.48\ \text{ppm})} \times 100 = 7.2\%$
For a trace metal whose concentration is more than 20× the method’s
detection limit of 10.0 ppb, the relative difference should not exceed
10%; with a (d)r of 7.2%, the duplicate analysis is acceptable.
4. In order, the differences are 0.12, –0.08, 0.12, –0.05, –0.10, and
0.07 ppm. The standard deviation for the duplicates is
$s = \sqrt{\frac{\sum_i (d_i)^2}{2n}} = \sqrt{\frac{(0.12)^2 + (-0.08)^2 + (0.12)^2 + (-0.05)^2 + (-0.10)^2 + (0.07)^2}{2 \times 6}} = 0.066\ \text{ppm}$
The mean concentration of NO3– for all 12 samples is 5.005 ppm, which makes the relative standard deviation

$s_r = \frac{0.066\ \text{ppm}}{5.005\ \text{ppm}} \times 100 = 1.3\%$
a value that is less than the maximum limit of 1.5%.
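The R sketch below reproduces this calculation from the six differences, taking the mean concentration of 5.005 ppm for the 12 samples from the solution above.

```r
# differences between the six pairs of duplicate samples (ppm nitrate)
d <- c(0.12, -0.08, 0.12, -0.05, -0.10, 0.07)
n <- length(d)                    # number of duplicate pairs

s  <- sqrt(sum(d^2) / (2 * n))    # 0.066 ppm
sr <- s / 5.005 * 100             # 1.3% relative standard deviation
round(c(s = s, sr = sr), 3)
```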
5. For the first spike recovery, the result is
$R = \frac{0.342\ \text{mg/g} - 0.20\ \text{mg/g}}{0.135\ \text{mg/g}} \times 100 = 105.2\%$
The recoveries for the remaining four trials are 103.7%, 103.7%,
91.9%, and 90.4%. The mean recovery for all five trials is 99.0%.
6. (a) Using the equation for the calibration curve, the concentration
of analyte in the spiked field blank is 2.10 ppm. The recovery on the
spike, therefore, is
$R = \frac{2.10\ \text{ppm} - 0\ \text{ppm}}{2.00\ \text{ppm}} \times 100 = 105\%$

Because this recovery is within the limit of ±10%, the field blank’s
recovery is acceptable.
(b) Using the equation for the calibration curve, the concentration
of analyte in the spiked method blank is 1.70 ppm. The recovery on
the spike, therefore, is
$R = \frac{1.70\ \text{ppm} - 0\ \text{ppm}}{2.00\ \text{ppm}} \times 100 = 85\%$

Because this recovery exceeds the limit of ±10%, the method blank’s
recovery is not acceptable and there is a systematic error in the labo-
ratory.
(c) Using the equation for the calibration curve, the concentration of
analyte in the sample before the spike is 1.67 ppm and its concentra-
tion after the spike is 3.77 ppm. The recovery on the spike is
$R = \frac{3.77\ \text{ppm} - 1.67\ \text{ppm}}{2.00\ \text{ppm}} \times 100 = 105\%$

Because this recovery is within the limit of ±10%, the laboratory spike's recovery is acceptable, suggesting a time-dependent change in the analyte's concentration.
7. The mean and the standard deviation for the 25 samples are 34.01 ppm and 1.828 ppm, respectively, which gives us the following warning limits and control limits

UCL = 34.01 + (3)(1.828) = 39.5
UWL = 34.01 + (2)(1.828) = 37.7
LWL = 34.01 – (2)(1.828) = 30.4
LCL = 34.01 – (3)(1.828) = 28.5

Figure SM15.1 shows the property control chart. Note that the highlighted region contains 14 consecutive cycles (15 samples) in which the results oscillate up and down, indicating that the system is not in a state of statistical control.

Figure SM15.1 Property control chart for the data in Problem 7. The highlighted region shows 14 consecutive cycles in which the results oscillate up and down, a sign that the system is not in a state of statistical control.

8. The mean and the standard deviation for the 25 samples are 99.84% and 14.08%, respectively, which gives us the following warning limits and control limits

UCL = 99.84 + (3)(14.08) = 142.1
UWL = 99.84 + (2)(14.08) = 128.0
LWL = 99.84 – (2)(14.08) = 71.7
LCL = 99.84 – (3)(14.08) = 57.6

Figure SM15.2 shows the property control chart, which has no features to suggest that the system is not in a state of statistical control.

Figure SM15.2 Property control chart for the data in Problem 8.
9. The 25 range values are 4, 1, 3, 3, 2, 0, 2, 4, 3, 1, 4, 1, 2, 0, 2, 4, 3, 4, 1, 1, 2, 1, 2, 3, 3, with a mean of 2.24. The control and warning limits, therefore, are

UCL = (3.267)(2.24) = 7.3
UWL = (2.512)(2.24) = 5.6

Figure SM15.3 shows the precision control chart, which has no features to suggest that the system is not in a state of statistical control.

Figure SM15.3 Precision control chart for the data in Problem 9.
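The limits for both types of control charts are simple functions of the summary statistics. The R sketch below reproduces the warning and control limits for the property control chart in Problem 7 and for the precision control chart in Problem 9, using the means, the standard deviation, and the range-chart factors (2.512 and 3.267) given above.

```r
# property control chart (Problem 7): mean and standard deviation of 25 results
xbar <- 34.01
s    <- 1.828
property <- c(LCL = xbar - 3*s, LWL = xbar - 2*s, CL = xbar,
              UWL = xbar + 2*s, UCL = xbar + 3*s)

# precision (range) control chart (Problem 9): mean range for duplicate samples
rbar <- 2.24
precision <- c(CL = rbar, UWL = 2.512*rbar, UCL = 3.267*rbar)

round(property, 1)    # 28.5, 30.4, 34.0, 37.7, 39.5
round(precision, 1)   # 2.2, 5.6, 7.3
```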
