EURACHEM/CITAC Guide
Quantifying Uncertainty in Analytical Measurement
Third Edition
QUAM:2012.P1
Editors
S L R Ellison (LGC, UK)
A Williams (UK)
EURACHEM members
A Williams, Chairman, UK
S Ellison, Secretary, LGC, Teddington, UK
R Bettencourt da Silva, University of Lisbon, Portugal
W Bremser, BAM, Germany
A Brzyski, Eurachem Poland
P Fodor, Corvinus University of Budapest, Hungary
R Kaarls, Netherlands Measurement Institute, The Netherlands
R Kaus, Eurachem Germany
B Magnusson, SP, Sweden
E Amico di Meane, Italy
P Robouch, IRMM, EU
M Rösslein, EMPA St. Gallen, Switzerland
A van der Veen, Netherlands Measurement Institute, The Netherlands
M Walsh, Eurachem IRE
W Wegscheider, Montanuniversitaet, Leoben, Austria
R Wood, Food Standards Agency, UK
P Yolci Omeroglu, Istanbul Technical University, Turkey

CITAC Representatives
A Squirrell, ILAC
I Kuselman, National Physical Laboratory of Israel
A Fajgelj, IAEA Vienna

Acknowledgements
This document has been produced primarily by a joint EURACHEM/CITAC Working Group with the composition shown above. The editors are grateful to all these individuals and organisations and to others who have contributed comments, advice and assistance. Production of this Guide was in part supported by the UK National Measurement System.
CONTENTS
2. UNCERTAINTY 4
2.1. DEFINITION OF UNCERTAINTY 4
2.2. UNCERTAINTY SOURCES 4
2.3. UNCERTAINTY COMPONENTS 4
2.4. ERROR AND UNCERTAINTY 5
2.5. THE VIM 3 DEFINITION OF UNCERTAINTY 6
QUAM:2012.P1 Page i
9. REPORTING UNCERTAINTY 30
9.1. GENERAL 30
9.2. INFORMATION REQUIRED 30
9.3. REPORTING STANDARD UNCERTAINTY 30
9.4. REPORTING EXPANDED UNCERTAINTY 30
9.5. NUMERICAL EXPRESSION OF RESULTS 31
9.6. ASYMMETRIC INTERVALS 31
9.7. COMPLIANCE AGAINST LIMITS 31
APPENDIX A. EXAMPLES 33
EXAMPLE A1: PREPARATION OF A CALIBRATION STANDARD 35
EXAMPLE A2: STANDARDISING A SODIUM HYDROXIDE SOLUTION 41
EXAMPLE A3: AN ACID/BASE TITRATION 51
EXAMPLE A4: UNCERTAINTY ESTIMATION FROM IN-HOUSE VALIDATION STUDIES.
DETERMINATION OF ORGANOPHOSPHORUS PESTICIDES IN BREAD. 60
EXAMPLE A5: DETERMINATION OF CADMIUM RELEASE FROM CERAMIC WARE BY ATOMIC
ABSORPTION SPECTROMETRY 72
EXAMPLE A6: THE DETERMINATION OF CRUDE FIBRE IN ANIMAL FEEDING STUFFS 81
EXAMPLE A7: DETERMINATION OF THE AMOUNT OF LEAD IN WATER USING DOUBLE ISOTOPE
DILUTION AND INDUCTIVELY COUPLED PLASMA MASS SPECTROMETRY 89
APPENDIX B. DEFINITIONS 97
Foreword to the Third Edition
Many important decisions are based on the results of chemical quantitative analysis; the results are used, for
example, to estimate yields, to check materials against specifications or statutory limits, or to estimate
monetary value. Whenever decisions are based on analytical results, it is important to have some indication
of the quality of the results, that is, the extent to which they can be relied on for the purpose in hand. Users
of the results of chemical analysis, particularly in those areas concerned with international trade, are coming
under increasing pressure to eliminate the replication of effort frequently expended in obtaining them.
Confidence in data obtained outside the user’s own organisation is a prerequisite to meeting this objective.
In some sectors of analytical chemistry it is now a formal (frequently legislative) requirement for
laboratories to introduce quality assurance measures to ensure that they are capable of and are providing
data of the required quality. Such measures include: the use of validated methods of analysis; the use of
defined internal quality control (QC) procedures; participation in proficiency testing (PT) schemes;
accreditation based on ISO/IEC 17025 [H.1], and establishing traceability of the results of the
measurements.
In analytical chemistry, there has been great emphasis on the precision of results obtained using a specified
method, rather than on their traceability to a defined standard or SI unit. This has led to the use of “official
methods” to fulfil legislative and trading requirements. However, as there is a formal requirement to
establish the confidence of results, it is essential that a measurement result is traceable to defined references
such as SI units or reference materials even when using an operationally defined or empirical (sec. 5.4.)
method. The Eurachem/CITAC Guide “Traceability in Chemical Measurement” [H.9] explains how
metrological traceability is established in the case of operationally defined procedures.
As a consequence of these requirements, chemists are, for their part, coming under increasing pressure to
demonstrate the quality of their results, and in particular to demonstrate their fitness for purpose by giving a
measure of the confidence that can be placed on the result. This is expected to include the degree to which a
result would be expected to agree with other results, normally irrespective of the analytical methods used.
One useful measure of this is measurement uncertainty.
Although the concept of measurement uncertainty has been recognised by chemists for many years, it was
the publication in 1993 of the “Guide to the Expression of Uncertainty in Measurement” (the GUM) [H.2]
by ISO in collaboration with BIPM, IEC, IFCC, ILAC, IUPAC, IUPAP and OIML, which formally
established general rules for evaluating and expressing uncertainty in measurement across a broad spectrum
of measurements. This EURACHEM/CITAC document shows how the concepts in the ISO Guide may be
applied in chemical measurement. It first introduces the concept of uncertainty and the distinction between
uncertainty and error. This is followed by a description of the steps involved in the evaluation of
uncertainty with the processes illustrated by worked examples in Appendix A.
The evaluation of uncertainty requires the analyst to look closely at all the possible sources of uncertainty.
However, although a detailed study of this kind may require a considerable effort, it is essential that the
effort expended should not be disproportionate. In practice a preliminary study will quickly identify the
most significant sources of uncertainty and, as the examples show, the value obtained for the combined
uncertainty is almost entirely controlled by the major contributions. A good estimate of uncertainty can be
made by concentrating effort on the largest contributions. Further, once evaluated for a given method
applied in a particular laboratory (i.e. a particular measurement procedure), the uncertainty estimate
obtained may be reliably applied to subsequent results obtained by the method in the same laboratory,
provided that this is justified by the relevant quality control data. No further effort should be necessary
unless the procedure itself or the equipment used is changed, in which case the uncertainty estimate would
be reviewed as part of the normal re-validation.
Method development involves a similar process to the evaluation of uncertainty arising from each individual
source; potential sources of uncertainty are investigated and the method adjusted to reduce the uncertainty to an
acceptable level where possible. (Where specified as a numerical upper limit for uncertainty, the acceptable
level of measurement uncertainty is called the ‘target measurement uncertainty’ [H.7]). The performance of
the method is then quantified in terms of precision and trueness. Method validation is carried out to ensure
that the performance obtained during development can be achieved for a particular application and if
necessary the performance figures adjusted. In some cases the method is subjected to a collaborative study
and further performance data obtained. Participation in proficiency testing schemes and internal quality
control measurements primarily check that the performance of the method is maintained, but also provide
additional information. All of these activities provide information that is relevant to the evaluation of
uncertainty. This Guide presents a unified approach to the use of different kinds of information in
uncertainty evaluation.
The first edition of the EURACHEM Guide for “Quantifying Uncertainty in Analytical Measurement” [H.3]
was published in 1995 based on the ISO Guide. The second edition [H.4] was prepared in collaboration with
CITAC in 2000 in the light of practical experience of uncertainty estimation in chemistry laboratories and
the even greater awareness of the need to introduce formal quality assurance procedures by laboratories.
The second edition stressed that the procedures introduced by a laboratory to estimate its measurement
uncertainty should be integrated with existing quality assurance measures, since these measures frequently
provide much of the information required to evaluate the measurement uncertainty.
This third edition retains the features of the second edition and adds information based on developments in
uncertainty estimation and use since 2000. The additional material provides improved guidance on the
expression of uncertainty near zero, new guidance on the use of Monte Carlo methods for uncertainty
evaluation, improved guidance on the use of proficiency testing data and improved guidance on the
assessment of compliance of results with measurement uncertainty. The guide therefore provides explicitly
for the use of validation and related data in the construction of uncertainty estimates in full compliance with
the formal ISO Guide principles set out in the ISO Guide to the Expression of Uncertainty in Measurement
[H.2]. The approach is also consistent with the requirements of ISO/IEC 17025:2005 [H.1].
This third edition implements the 1995 edition of the ISO Guide to the Expression of Uncertainty in
Measurement as re-issued in 2008 [H.2]. Terminology therefore follows the GUM. Statistical terminology
follows ISO 3534 Part 2 [H.8]. Later terminology introduced in the International vocabulary of metrology -
Basic and general concepts and associated terms (VIM) [H.7] is used otherwise. Where GUM and VIM
terms differ significantly, the VIM terminology is additionally discussed in the text. Additional guidance on
the concepts and definitions used in the VIM is provided in the Eurachem Guide “Terminology in Analytical
Measurement - Introduction to VIM 3” [H.5]. Finally, it is so common to give values for mass fraction as a
percentage that a compact nomenclature is necessary; mass fraction quoted as a percentage is given the units
of g/100 g for the purposes of this Guide.
NOTE Worked examples are given in Appendix A. A numbered list of definitions is given at Appendix B. The
convention is adopted of printing defined terms in bold face upon their first occurrence in the text, with a
reference to Appendix B enclosed in square brackets. The definitions are, in the main, taken from the
International vocabulary of metrology - Basic and general concepts and associated terms (VIM) [H.7], the Guide [H.2]
and ISO 3534-2 (Statistics - Vocabulary and symbols - Part 2: Applied Statistics) [H.8]. Appendix C shows,
in general terms, the overall structure of a chemical analysis leading to a measurement result. Appendix D
describes a general procedure which can be used to identify uncertainty components and plan further
experiments as required; Appendix E describes some statistical operations used in uncertainty estimation in
analytical chemistry, including a numerical spreadsheet method and the use of Monte Carlo simulation.
Appendix F discusses measurement uncertainty near detection limits. Appendix G lists many common
uncertainty sources and methods of estimating the value of the uncertainties. A bibliography is provided at
Appendix H.
1. Scope and Field of Application
1.1. This Guide gives detailed guidance for the evaluation and expression of uncertainty in quantitative chemical analysis, based on the approach taken in the ISO “Guide to the Expression of Uncertainty in Measurement” [H.2]. It is applicable at all levels of accuracy and in all fields - from routine analysis to basic research and to empirical and rational methods (see section 5.5.). Some common areas in which chemical measurements are needed, and in which the principles of this Guide may be applied, are:

• Quality control and quality assurance in manufacturing industries.
• Testing for regulatory compliance.
• Testing utilising an agreed method.
• Calibration of standards and equipment.
• Measurements associated with the development and certification of reference materials.
• Research and development.

1.2. Note that additional guidance will be required in some cases. In particular, reference material value assignment using consensus methods (including multiple measurement methods) is not covered, and the use of uncertainty estimates in compliance statements and the expression and use of uncertainty at low levels may require additional guidance. Uncertainties associated with sampling operations are not explicitly treated, since they are treated in detail in the EURACHEM guide “Measurement uncertainty arising from sampling: A guide to methods and approaches” [H.6].

1.3. Since formal quality assurance measures have been introduced by laboratories in a number of sectors, this EURACHEM Guide illustrates how the following may be used for the estimation of measurement uncertainty:

• Evaluation of the effect of the identified sources of uncertainty on the analytical result for a single method implemented as a defined measurement procedure [B.6] in a single laboratory.
• Information from method development and validation.
• Results from defined internal quality control procedures in a single laboratory.
• Results from collaborative trials used to validate methods of analysis in a number of competent laboratories.
• Results from proficiency test schemes used to assess the analytical competency of laboratories.

1.4. It is assumed throughout this Guide that, whether carrying out measurements or assessing the performance of the measurement procedure, effective quality assurance and control measures are in place to ensure that the measurement process is stable and in control. Such measures normally include, for example, appropriately qualified staff, proper maintenance and calibration of equipment and reagents, use of appropriate reference standards, documented measurement procedures and use of appropriate check standards and control charts. Reference [H.10] provides further information on analytical QA procedures.

NOTE: This paragraph implies that all analytical methods are assumed in this guide to be implemented via fully documented procedures. Any general reference to analytical methods accordingly implies the presence of such a procedure. Strictly, measurement uncertainty can only be applied to the results of such a procedure and not to a more general method of measurement [B.7].
2. Uncertainty
2.3.2. For a measurement result y, the total uncertainty, termed combined standard uncertainty [B.11] and denoted by uc(y), is an estimated standard deviation equal to the positive square root of the total variance obtained by combining all the uncertainty components, however evaluated, using the law of propagation of uncertainty (see section 8.) or by alternative methods (Appendix E describes two useful numerical methods: the use of a spreadsheet and Monte Carlo simulation).

2.3.3. For most purposes in analytical chemistry, an expanded uncertainty [B.12], U, should be used. The expanded uncertainty provides an interval within which the value of the measurand is believed to lie with a higher level of confidence. U is obtained by multiplying uc(y), the combined standard uncertainty, by a coverage factor [B.13] k. The choice of the factor k is based on the level of confidence desired. For an approximate level of confidence of 95 %, k is usually set to 2.

NOTE The coverage factor k should always be stated so that the combined standard uncertainty of the measured quantity can be recovered for use in calculating the combined standard uncertainty of other measurement results that may depend on that quantity.

2.4. Error and uncertainty

2.4.1. It is important to distinguish between error and uncertainty. Error [B.16] is defined as the difference between an individual result and the true value [B.2] of the measurand. In practice, an observed measurement error is the difference between the observed value and a reference value. As such, error – whether theoretical or observed – is a single value. In principle, the value of a known error can be applied as a correction to the result.

NOTE Error is an idealised concept and errors cannot be known exactly.

2.4.2. Uncertainty, on the other hand, takes the form of a range or interval, and, if estimated for an analytical procedure and defined sample type, may apply to all determinations so described. In general, the value of the uncertainty cannot be used to correct a measurement result.

2.4.3. To illustrate further the difference, the result of an analysis after correction may by chance be very close to the value of the measurand, and hence have a negligible error. However, the uncertainty may still be very large, simply because the analyst is very unsure of how close that result is to the value of the measurand.

2.4.4. The uncertainty of the result of a measurement should never be interpreted as representing the error itself, nor the error remaining after correction.

2.4.5. An error is regarded as having two components, namely, a random component and a systematic component.

2.4.6. Random error [B.17] typically arises from unpredictable variations of influence quantities [B.3]. These random effects give rise to variations in repeated observations of the measurand. The random error of an analytical result cannot be compensated for, but it can usually be reduced by increasing the number of observations.

NOTE 1 The experimental standard deviation of the arithmetic mean [B.19] or average of a series of observations is not the random error of the mean, although it is so referred to in some publications on uncertainty. It is instead a measure of the uncertainty of the mean due to some random effects. The exact value of the random error in the mean arising from these effects cannot be known.

2.4.7. Systematic error [B.18] is defined as a component of error which, in the course of a number of analyses of the same measurand, remains constant or varies in a predictable way. It is independent of the number of measurements made and cannot therefore be reduced by increasing the number of analyses under constant measurement conditions.

2.4.8. Constant systematic errors, such as failing to make an allowance for a reagent blank in an assay, or inaccuracies in a multi-point instrument calibration, are constant for a given level of the measurement value but may vary with the level of the measurement value.

2.4.9. Effects which change systematically in magnitude during a series of analyses, caused, for example, by inadequate control of experimental conditions, give rise to systematic errors that are not constant.

EXAMPLES:
1. A gradual increase in the temperature of a set of samples during a chemical analysis can lead to progressive changes in the result.
2. Sensors and probes that exhibit ageing effects over the time-scale of an experiment can also introduce non-constant systematic errors.

2.4.10. The result of a measurement should be corrected for all recognised significant systematic effects.

NOTE Measuring instruments and systems are often adjusted or calibrated using measurement standards and reference materials to correct for systematic effects. The uncertainties associated with these standards and materials and the uncertainty in the correction must still be taken into account.

2.4.11. A further type of error is a spurious error, or blunder. Errors of this type invalidate a measurement and typically arise through human failure or instrument malfunction. Transposing digits in a number while recording data, an air bubble lodged in a spectrophotometer flow-through cell, or accidental cross-contamination of test items are common examples of this type of error.

2.4.12. Measurements for which errors such as these have been detected should be rejected and no attempt should be made to incorporate the errors into any statistical analysis. However, errors such as digit transposition can be corrected (exactly), particularly if they occur in the leading digits.

2.4.13. Spurious errors are not always obvious and, where a sufficient number of replicate measurements is available, it is usually appropriate to apply an outlier test to check for the presence of suspect members in the data set. Any positive result obtained from such a test should be considered with care and, where possible, referred back to the originator for confirmation. It is generally not wise to reject a value on purely statistical grounds.

2.4.14. Uncertainties estimated using this guide are not intended to allow for the possibility of spurious errors/blunders.

2.5. The VIM 3 definition of uncertainty

2.5.1. The revised VIM [H.7] introduces the following definition of measurement uncertainty:

measurement uncertainty
uncertainty of measurement
uncertainty

“non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used”

NOTE 1: Measurement uncertainty includes components arising from systematic effects, such as components associated with corrections and the assigned quantity values of measurement standards, as well as the definitional uncertainty. Sometimes estimated systematic effects are not corrected for but, instead, associated measurement uncertainty components are incorporated.

NOTE 2: The parameter may be, for example, a standard deviation called standard measurement uncertainty (or a specified multiple of it), or the half-width of an interval, having a stated coverage probability.

NOTE 3: Measurement uncertainty comprises, in general, many components. Some of these may be evaluated by Type A evaluation of measurement uncertainty from the statistical distribution of the quantity values from series of measurements and can be characterized by standard deviations. The other components, which may be evaluated by Type B evaluation of measurement uncertainty, can also be characterized by standard deviations, evaluated from probability density functions based on experience or other information.

NOTE 4: In general, for a given set of information, it is understood that the measurement uncertainty is associated with a stated quantity value attributed to the measurand. A modification of this value results in a modification of the associated uncertainty.

2.5.2. The changes to the definition do not materially affect the meaning for the purposes of analytical measurement. Note 1, however, adds the possibility that additional terms may be incorporated in the uncertainty budget to allow for uncorrected systematic effects. Chapter 7 provides further details of treatment of uncertainties associated with systematic effects.
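To make the relationship between combined and expanded uncertainty concrete, the following sketch applies the law of propagation of uncertainty (section 2.3.2) and a coverage factor k = 2 (section 2.3.3) to a simple two-parameter model, and cross-checks the result by the Monte Carlo simulation mentioned as an alternative method. The model c = m/V and all numerical values are illustrative assumptions, not an example from this Guide.

```python
import math
import random
import statistics

# Hypothetical model: concentration c = m / V (mass m in mg, volume V in mL)
m, u_m = 100.0, 0.05   # mass and its standard uncertainty (mg)
V, u_V = 50.0, 0.03    # volume and its standard uncertainty (mL)

c = m / V              # measurement result, mg/mL

# Law of propagation of uncertainty for independent inputs:
# uc(c)^2 = (dc/dm)^2 u(m)^2 + (dc/dV)^2 u(V)^2
dc_dm = 1.0 / V
dc_dV = -m / V**2
u_c = math.sqrt((dc_dm * u_m) ** 2 + (dc_dV * u_V) ** 2)

# Expanded uncertainty: U = k * uc, with k = 2 for ~95 % confidence
k = 2
U = k * u_c
print(f"c = {c:.4f} mg/mL, u_c = {u_c:.5f}, U (k=2) = {U:.5f}")

# Cross-check by Monte Carlo simulation (Appendix E): sample the inputs
# from normal distributions and take the standard deviation of the results.
random.seed(1)
sims = [random.gauss(m, u_m) / random.gauss(V, u_V) for _ in range(200_000)]
print(f"Monte Carlo estimate of u_c: {statistics.stdev(sims):.5f}")
```

For this nearly linear model the analytic and simulated values of uc agree to within the Monte Carlo sampling error; larger discrepancies would indicate that the law-of-propagation approximation is inadequate.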
3. Analytical Measurement and Uncertainty
Robustness or ruggedness. Many method development or validation protocols require that sensitivity to particular parameters be investigated directly. This is usually done by a preliminary ‘ruggedness test’, in which the effect of one or more parameter changes is observed. If significant (compared to the precision of the ruggedness test) a more detailed study is carried out to measure the size of the effect, and a permitted operating interval chosen accordingly. Ruggedness test data can therefore provide information on the effect of important parameters.

Selectivity. “Selectivity” relates to the degree to which a method responds uniquely to the required analyte. Typical selectivity studies investigate the effects of likely interferents, usually by adding the potential interferent to both blank and fortified samples and observing the response. The results are normally used to demonstrate that the practical effects are not significant. However, since the studies measure changes in response directly, it is possible to use the data to estimate the uncertainty associated with potential interferences, given knowledge of the range of interferent concentrations.

Note: The term “specificity” has historically been used for the same concept.

3.2. Conduct of experimental studies of method performance

3.2.1. The detailed design and execution of method validation and method performance studies is covered extensively elsewhere [H.11] and will not be repeated here. However, the main principles as they affect the relevance of a study applied to uncertainty estimation are pertinent and are considered below.

3.2.2. Representativeness is essential. That is, studies should, as far as possible, be conducted to provide a realistic survey of the number and range of effects operating during normal use of the method, as well as covering the concentration ranges and sample types within the scope of the method. Where a factor has been representatively varied during the course of a precision experiment, for example, the effects of that factor appear directly in the observed variance and need no additional study unless further method optimisation is desirable.

3.2.3. In this context, representative variation means that an influence parameter must take a distribution of values appropriate to the uncertainty in the parameter in question. For continuous parameters, this may be a permitted range or stated uncertainty; for discontinuous factors such as sample matrix, this range corresponds to the variety of types permitted or encountered in normal use of the method. Note that representativeness extends not only to the range of values, but to their distribution.

3.2.4. In selecting factors for variation, it is important to ensure that the larger effects are varied where possible. For example, where day to day variation (perhaps arising from recalibration effects) is substantial compared to repeatability, two determinations on each of five days will provide a better estimate of intermediate precision than five determinations on each of two days. Ten single determinations on separate days will be better still, subject to sufficient control, though this will provide no additional information on within-day repeatability.

3.2.5. It is generally simpler to treat data obtained from random selection than from systematic variation. For example, experiments performed at random times over a sufficient period will usually include representative ambient temperature effects, while experiments performed systematically at 24-hour intervals may be subject to bias due to regular ambient temperature variation during the working day. The former experiment need only evaluate the overall standard deviation; in the latter, systematic variation of ambient temperature is required, followed by adjustment to allow for the actual distribution of temperatures. Random variation is, however, less efficient. A small number of systematic studies can quickly establish the size of an effect, whereas it will typically take well over 30 determinations to establish an uncertainty contribution to better than about 20 % relative accuracy. Where possible, therefore, it is often preferable to investigate small numbers of major effects systematically.

3.2.6. Where factors are known or suspected to interact, it is important to ensure that the effect of interaction is accounted for. This may be achieved either by ensuring random selection from different levels of interacting parameters, or by careful systematic design to obtain both variance and covariance information.

3.2.7. In carrying out studies of overall bias, it is important that the reference materials and values are relevant to the materials under routine test.

3.2.8. Any study undertaken to investigate and test for the significance of an effect should have sufficient power to detect such effects before they become practically significant.

3.3. Traceability

3.3.1. It is important to be able to compare results from different laboratories, or from the same laboratory at different times, with confidence. This is achieved by ensuring that all laboratories are using the same measurement scale, or the same ‘reference points’. In many cases this is achieved by establishing a chain of calibrations leading to primary national or international standards, ideally (for long-term consistency) the Système International (SI) units of measurement. A familiar example is the case of analytical balances; each balance is calibrated using reference masses which are themselves checked (ultimately) against national standards and so on to the primary reference kilogram. This unbroken chain of comparisons leading to a known reference value provides ‘traceability’ to a common reference point, ensuring that different operators are using the same units of measurement. In routine measurement, the consistency of measurements between one laboratory (or time) and another is greatly aided by establishing traceability for all relevant intermediate measurements used to obtain or control a measurement result. Traceability is therefore an important concept in all branches of measurement.

3.3.2. Traceability is formally defined [H.7] as:

“metrological traceability
property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty.”

The reference to uncertainty arises because the agreement between laboratories is limited, in part, by uncertainties incurred in each laboratory’s traceability chain. Traceability is accordingly intimately linked to uncertainty. Traceability provides the means of placing all related measurements on a consistent measurement scale, while uncertainty characterises the ‘strength’ of the links in the chain and the agreement to be expected between laboratories making similar measurements.

3.3.3. In general, the uncertainty on a result which is traceable to a particular reference will be the uncertainty on that reference together with the uncertainty on making the measurement relative to that reference.

3.3.4. The Eurachem/CITAC Guide “Traceability in Chemical Measurement” [H.9] identifies the essential activities in establishing traceability as:

i) Specifying the measurand, scope of measurements and the required uncertainty
ii) Choosing a suitable method of estimating the value, that is, a measurement procedure with associated calculation - an equation - and measurement conditions
iii) Demonstrating, through validation, that the calculation and measurement conditions include all the “influence quantities” that significantly affect the result, or the value assigned to a standard
iv) Identifying the relative importance of each influence quantity
v) Choosing and applying appropriate reference standards
vi) Estimating the uncertainty

These activities are discussed in detail in the associated Guide [H.9] and will not be discussed further here. It is, however, noteworthy that most of these activities are also essential for the estimation of measurement uncertainty, which also requires an identified and properly validated procedure for measurement, a clearly stated measurand, and information on the calibration standards used (including the associated uncertainties).
QUAM:2012.P1 Page 9
Figure 1: The Uncertainty Estimation Process

START → Step 1: Specify the measurand.
Step 2: Identify uncertainty sources.
Step 3: Simplify by grouping sources covered by existing data; quantify grouped components; quantify remaining components; convert components to standard deviations.
Step 4: Calculate the combined standard uncertainty; review and, if necessary, re-evaluate large components; calculate the expanded uncertainty → END.
5. Step 1. Specification of the Measurand
5.1. In the context of uncertainty estimation, “specification of the measurand” requires both a clear and unambiguous statement of what is being measured, and a quantitative expression relating the value of the measurand to the parameters on which it depends. These parameters may be other measurands, quantities which are not directly measured, or constants. All of this information should be in the Standard Operating Procedure (SOP).

5.2. For most analytical measurements, a good definition of the measurand includes a statement of

a) the particular kind of quantity to be measured, usually the concentration or mass fraction of an analyte.

b) the item or material to be analysed and, if necessary, additional information on the location within the test item. For example, ‘lead in patient blood’ identifies a specific tissue within a test subject (the patient).

c) where necessary, the basis for calculation of the quantity being reported. For example, the quantity of interest may be the amount extracted under specified conditions, or a mass fraction may be reported on a dry weight basis or after removal of some specified parts of a test material (such as inedible parts of foods).

NOTE 1: The term ‘analyte’ refers to the chemical species to be measured; the measurand is usually the concentration or mass fraction of the analyte.

NOTE 2: The term ‘analyte level’ is used in this document to refer generally to the value of quantities such as analyte concentration, analyte mass fraction etc. ‘Level’ is also used similarly for ‘material’, ‘interferent’ etc.

NOTE 3: The term ‘measurand’ is discussed in more detail in reference [H.5].

5.3. It should also be made clear whether a sampling step is included within the procedure or not. For example, is the measurand related just to the test item transmitted to the laboratory, or to the bulk material from which the sample was taken? It is obvious that the uncertainty will be different in these two cases; where conclusions are to be drawn about the bulk material itself, primary sampling effects become important and are often much larger than the uncertainty associated with measurement of a laboratory test item. If sampling is part of the procedure used to obtain the measured result, estimation of the uncertainties associated with the sampling procedure needs to be considered. This is covered in considerable detail in reference [H.6].

5.4. In analytical measurement, it is particularly important to distinguish between measurements intended to produce results which are independent of the method used, and those which are not so intended. The latter are often referred to as empirical methods or operationally defined methods. The following examples may clarify the point further.

EXAMPLES:

1. Methods for the determination of the amount of nickel present in an alloy are normally expected to yield the same result, in the same units, usually expressed as a mass fraction or mole (amount) fraction. In principle, any systematic effect due to method bias or matrix would need to be corrected for, though it is more usual to ensure that any such effect is small. Results would not normally need to quote the particular method used, except for information. The method is not empirical.

2. Determinations of “extractable fat” may differ substantially, depending on the extraction conditions specified. Since “extractable fat” is entirely dependent on the choice of conditions, the method used is empirical. It is not meaningful to consider correction for bias intrinsic to the method, since the measurand is defined by the method used. Results are generally reported with reference to the method, uncorrected for any bias intrinsic to the method. The method is considered empirical.

3. In circumstances where variations in the substrate, or matrix, have large and unpredictable effects, a procedure is often developed with the sole aim of achieving comparability between laboratories measuring the same material. The procedure may then be adopted as a local, national or international standard method on which trading or other decisions are taken, with no intent to obtain an absolute measure of the true amount of analyte present. Corrections for method bias or matrix effect are ignored by convention (whether or not they have been minimised in method development). Results are normally reported uncorrected for matrix or method bias. The method is considered to be empirical.

5.5. The distinction between empirical and non-empirical (sometimes called rational) methods is important because it affects the estimation of uncertainty. In examples 2 and 3 above, because of the conventions employed, uncertainties associated with some quite large effects are not relevant in normal use. Due consideration should accordingly be given to whether the results are expected to be dependent upon, or independent of, the method in use, and only those effects relevant to the result as reported should be included in the uncertainty estimate.
6. Step 2. Identifying Uncertainty Sources
6.1. A comprehensive list of relevant sources of uncertainty should be assembled. At this stage, it is not necessary to be concerned about the quantification of individual components; the aim is to be completely clear about what should be considered. In Step 3, the best way of treating each source will be considered.

6.2. In forming the required list of uncertainty sources it is usually convenient to start with the basic expression used to calculate the measurand from intermediate values. All the parameters in this expression may have an uncertainty associated with their value and are therefore potential uncertainty sources. In addition, there may be other parameters that do not appear explicitly in the expression used to calculate the value of the measurand, but which nevertheless affect the measurement results, e.g. extraction time or temperature. These are also potential sources of uncertainty. All these different sources should be included. Additional information is given in Appendix C (Uncertainties in Analytical Processes).

6.3. The cause and effect diagram described in Appendix D is a very convenient way of listing the uncertainty sources, showing how they relate to each other and indicating their influence on the uncertainty of the result. It also helps to avoid double counting of sources. Although the list of uncertainty sources can be prepared in other ways, the cause and effect diagram is used in the following chapters and in all of the examples in Appendix A. Additional information is given in Appendix D (Analysing uncertainty sources).

6.4. Once the list of uncertainty sources is assembled, their effects on the result can, in principle, be represented by a formal measurement model, in which each effect is associated with a parameter or variable in an equation. The equation then forms a complete model of the measurement process in terms of all the individual factors affecting the result. This function may be very complicated and it may not be possible to write it down explicitly. Where possible, however, this should be done, as the form of the expression will generally determine the method of combining individual uncertainty contributions.

6.5. It may additionally be useful to consider a measurement procedure as a series of discrete operations (sometimes termed unit operations), each of which may be assessed separately to obtain estimates of uncertainty associated with them. This is a particularly useful approach where similar measurement procedures share common unit operations. The separate uncertainties for each operation then form contributions to the overall uncertainty.

6.6. In practice, it is more usual in analytical measurement to consider uncertainties associated with elements of overall method performance, such as observable precision and bias measured with respect to appropriate reference materials. These contributions generally form the dominant contributions to the uncertainty estimate, and are best modelled as separate effects on the result. It is then necessary to evaluate other possible contributions only to check their significance, quantifying only those that are significant. Further guidance on this approach, which applies particularly to the use of method validation data, is given in section 7.2.1.

6.7. Typical sources of uncertainty are

• Sampling

Where in-house or field sampling forms part of the specified procedure, effects such as random variations between different samples and any potential for bias in the sampling procedure form components of uncertainty affecting the final result.

• Storage Conditions

Where test items are stored for any period prior to analysis, the storage conditions may affect the results. The duration of storage as well as conditions during storage should therefore be considered as uncertainty sources.

• Instrument effects

Instrument effects may include, for example, the limits of accuracy on the calibration of an analytical balance; a temperature controller that may maintain a mean temperature which differs (within specification) from its
indicated set-point; an auto-analyser that could be subject to carry-over effects.

• Reagent purity

The concentration of a volumetric solution will not be known exactly even if the parent material has been assayed, since some uncertainty related to the assaying procedure remains. Many organic dyestuffs, for instance, are not 100 % pure and can contain isomers and inorganic salts. The purity of such substances is usually stated by manufacturers as being not less than a specified level. Any assumptions about the degree of purity will introduce an element of uncertainty.

• Assumed stoichiometry

Where an analytical process is assumed to follow a particular reaction stoichiometry, it may be necessary to allow for departures from the expected stoichiometry, or for incomplete reaction or side reactions.

• Measurement conditions

For example, volumetric glassware may be used at an ambient temperature different from that at which it was calibrated. Gross temperature effects should be corrected for, but any uncertainty in the temperature of liquid and glass should be considered. Similarly, humidity may be important where materials are sensitive to possible changes in humidity.

• Sample effects

The recovery of an analyte from a complex matrix, or an instrument response, may be affected by the composition of the matrix. Analyte speciation may further compound this effect.

The stability of a sample/analyte may change during analysis because of a changing thermal regime or photolytic effect.

When a ‘spike’ is used to estimate recovery, the recovery of the analyte from the sample may differ from the recovery of the spike, introducing an uncertainty which needs to be evaluated.

• Computational effects

Selection of the calibration model, e.g. using a straight line calibration on a curved response, leads to poorer fit and higher uncertainty.

Truncation and round off can lead to inaccuracies in the final result. Since these are rarely predictable, an uncertainty allowance may be necessary.

• Blank Correction

There will be an uncertainty on both the value and the appropriateness of the blank correction. This is particularly important in trace analysis.

• Operator effects

Possibility of reading a meter or scale consistently high or low.

Possibility of making a slightly different interpretation of the method.

• Random effects

Random effects contribute to the uncertainty in all determinations. This entry should be included in the list as a matter of course.

NOTE: These sources are not necessarily independent.
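As a simple illustration of 6.2 and 6.4, a measurement model can be written as an explicit function of its input quantities, each of which is then a potential uncertainty source. The sketch below is illustrative only; the model, parameter names and values are hypothetical and not taken from this Guide.

```python
# Hypothetical measurement model: concentration of a standard solution
# prepared from a solid material,
#   c = 1000 * m * P / (M * V)
# with m the mass taken (g), P the purity (mass fraction), M the molar
# mass (g/mol) and V the volume (mL), giving c in mol/L.
def concentration(m: float, P: float, M: float, V: float) -> float:
    return 1000.0 * m * P / (M * V)

# Every parameter appearing in the expression is a potential uncertainty
# source; factors absent from the equation (e.g. temperature, operator
# effects) may still affect the result and must be listed separately.
explicit_sources = ["m (weighing)", "P (purity)", "M (molar mass)", "V (volume)"]
implicit_sources = ["temperature", "repeatability"]

c = concentration(m=0.3888, P=0.9999, M=204.23, V=250.0)
```

Writing the model down explicitly in this way determines, as noted in 6.4, how the individual uncertainty contributions are later combined.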
7. Step 3. Quantifying Uncertainty
7.1. Introduction

7.1.1. Having identified the uncertainty sources as explained in Step 2 (Chapter 6), the next step is to quantify the uncertainty arising from these sources. This can be done by

• evaluating the uncertainty arising from each individual source and then combining them as described in Chapter 8. Examples A1 to A3 illustrate the use of this procedure.

or

• by determining directly the combined contribution to the uncertainty on the result from some or all of these sources using method performance data. Examples A4 to A6 represent applications of this procedure.

In practice, a combination of these is usually necessary and convenient.

7.1.2. Whichever of these approaches is used, most of the information needed to evaluate the uncertainty is likely to be already available from the results of validation studies, from QA/QC data and from other experimental work that has been carried out to check the performance of the method. However, data may not be available to evaluate the uncertainty from all of the sources and it may be necessary to carry out further work as described in sections 7.11. to 7.15.

7.2. Uncertainty evaluation procedure

7.2.1. The procedure used for estimating the overall uncertainty depends on the data available about the method performance. The stages involved in developing the procedure are

• Reconcile the information requirements with the available data

First, the list of uncertainty sources should be examined to see which sources of uncertainty are accounted for by the available data, whether by explicit study of the particular contribution or by implicit variation within the course of whole-method experiments. These sources should be checked against the list prepared in Step 2 and any remaining sources should be listed to provide an auditable record of which contributions to the uncertainty have been included.

• Plan to obtain the further data required

For sources of uncertainty not adequately covered by existing data, either seek additional information from the literature or standing data (certificates, equipment specifications etc.), or plan experiments to obtain the required additional data. Additional experiments may take the form of specific studies of a single contribution to uncertainty, or the usual method performance studies conducted to ensure representative variation of important factors.

7.2.2. It is important to recognise that not all of the components will make a significant contribution to the combined uncertainty; indeed, in practice it is likely that only a small number will. Unless there is a large number of them, components that are less than one third of the largest need not be evaluated in detail. A preliminary estimate of the contribution of each component or combination of components to the uncertainty should be made and those that are not significant eliminated.

7.2.3. The following sections provide guidance on the procedures to be adopted, depending on the data available and on the additional information required. Section 7.3. presents requirements for the use of prior experimental study data, including validation data. Section 7.4. briefly discusses evaluation of uncertainty solely from individual sources of uncertainty. This may be necessary for all, or for very few of the sources identified, depending on the data available, and is consequently also considered in later sections. Sections 7.5. to 7.10. describe the evaluation of uncertainty in a range of circumstances. Section 7.5. applies when using closely matched reference materials. Section 7.6. covers the use of collaborative study data and 7.7. the use of in-house validation data. 7.9. describes special considerations for empirical methods and 7.10. covers ad-hoc methods. Methods for quantifying individual components of uncertainty, including experimental studies, documentary and other data, modelling, and professional judgement are covered in more detail in sections 7.11. to 7.15. Section 7.16. covers the treatment of known bias in uncertainty estimation.
7.3. Relevance of prior studies

7.3.1. When uncertainty estimates are based at least partly on prior studies of method performance, it is necessary to demonstrate the validity of applying prior study results. Typically, this will consist of:

• Demonstration that a comparable precision to that obtained previously can be achieved.

• Demonstration that the use of the bias data obtained previously is justified, typically through determination of bias on relevant reference materials (see, for example, ISO Guide 33 [H.12]), by appropriate spiking studies, or by satisfactory performance on relevant proficiency schemes or other laboratory intercomparisons.

• Continued performance within statistical control as shown by regular QC sample results and the implementation of effective analytical quality assurance procedures.

7.3.2. Where the conditions above are met, and the method is operated within its scope and field of application, it is normally acceptable to apply the data from prior studies (including validation studies) directly to uncertainty estimates in the laboratory in question.

7.4. Evaluating uncertainty by quantification of individual components

7.4.1. In some cases, particularly when little or no method performance data is available, the most suitable procedure may be to evaluate each uncertainty component separately.

7.4.2. The general procedure used in combining individual components is to prepare a detailed quantitative model of the experimental procedure (cf. sections 5. and 6., especially 6.4.), assess the standard uncertainties associated with the individual input parameters, and combine them as described in Section 8.

7.4.3. In the interests of clarity, detailed guidance on the assessment of individual contributions by experimental and other means is deferred to sections 7.11. to 7.15. Examples A1 to A3 in Appendix A provide detailed illustrations of the procedure. Extensive guidance on the application of this procedure is also given in the ISO Guide [H.2].

7.5. Closely matched certified reference materials

7.5.1. Measurements on certified reference materials are normally carried out as part of method validation or re-validation, effectively constituting a calibration of the whole measurement procedure against a traceable reference. Because this procedure provides information on the combined effect of many of the potential sources of uncertainty, it provides very good data for the assessment of uncertainty. Further details are given in section 7.7.4.

NOTE: ISO Guide 33 [H.12] gives a useful account of the use of reference materials in checking method performance.

7.6. Uncertainty estimation using prior collaborative method development and validation study data

7.6.1. A collaborative study carried out to validate a published method, for example according to the AOAC/IUPAC protocol [H.13] or ISO 5725 standard [H.14], is a valuable source of data to support an uncertainty estimate. The data typically include estimates of reproducibility standard deviation, sR, for several levels of response, a linear estimate of the dependence of sR on level of response, and may include an estimate of bias based on CRM studies. How this data can be utilised depends on the factors taken into account when the study was carried out. During the ‘reconciliation’ stage indicated above (section 7.2.), it is necessary to identify any sources of uncertainty that are not covered by the collaborative study data. The sources which may need particular consideration are:

• Sampling. Collaborative studies rarely include a sampling step. If the method used in-house involves sub-sampling, or the measurand (see Specification) is estimating a bulk property from a small sample, then the effects of sampling should be investigated and their effects included.

• Pre-treatment. In most studies, samples are homogenised, and may additionally be stabilised, before distribution. It may be necessary to investigate and add the effects of the particular pre-treatment procedures applied in-house.

• Method bias. Method bias is often examined prior to or during interlaboratory study, where possible by comparison with reference
methods or materials. Where the bias itself, the uncertainty in the reference values used, and the precision associated with the bias check are all small compared to sR, no additional allowance need be made for bias uncertainty. Otherwise, it will be necessary to make additional allowances.

• Variation in conditions. Laboratories participating in a study may tend towards the means of allowed ranges of experimental conditions, resulting in an underestimate of the range of results possible within the method definition. Where such effects have been investigated and shown to be insignificant across their full permitted range, however, no further allowance is required.

• Changes in sample matrix. The uncertainty arising from matrix compositions or levels of interferents outside the range covered by the study will need to be considered.

7.6.2. Uncertainty estimation based on collaborative study data acquired in compliance with ISO 5725 is fully described in ISO 21748 “Guidance for the use of repeatability, reproducibility and trueness estimates in measurement uncertainty estimation” [H.15]. The general procedure recommended for evaluating measurement uncertainty using collaborative study data is as follows:

a) Obtain estimates of the repeatability, reproducibility and trueness of the method in use from published information about the method.

b) Establish whether the laboratory bias for the measurements is within that expected on the basis of the data obtained in a).

c) Establish whether the precision attained by current measurements is within that expected on the basis of the repeatability and reproducibility estimates obtained in a).

d) Identify any influences on the measurement that were not adequately covered in the studies referenced in a), and quantify the variance that could arise from these effects, taking into account the sensitivity coefficients and the uncertainties for each influence.

e) Where the bias and precision are under control, as demonstrated in steps b) and c), combine the reproducibility standard deviation estimate at a) with the uncertainty associated with trueness (steps a and b) and the effects of additional influences (step d) to form a combined uncertainty estimate.

This procedure is essentially identical to the general procedure set out in Section 7.2. Note, however, that it is important to check that the laboratory’s performance is consistent with that expected for the measurement method in use.

The use of collaborative study data is illustrated in example A6 (Appendix A).

7.6.3. For methods operating within their defined scope, when the reconciliation stage shows that all the identified sources have been included in the validation study, or when the contributions from any remaining sources such as those discussed in section 7.6.1. have been shown to be negligible, then the reproducibility standard deviation sR, adjusted for concentration if necessary, may be used as the combined standard uncertainty.

7.6.4. The repeatability standard deviation sr is not normally a suitable uncertainty estimate, since it excludes major uncertainty contributions.

7.7. Uncertainty estimation using in-house development and validation studies

7.7.1. In-house development and validation studies consist chiefly of the determination of the method performance parameters indicated in section 3.1.3. Uncertainty estimation from these parameters utilises:

• The best available estimate of overall precision.

• The best available estimate(s) of overall bias and its uncertainty.

• Quantification of any uncertainties associated with effects incompletely accounted for in the above overall performance studies.

Precision study

7.7.2. The precision should be estimated as far as possible over an extended time period, and chosen to allow natural variation of all factors affecting the result. This can be obtained from

• The standard deviation of results for a typical sample analysed several times over a period of time, using different analysts and equipment where possible (the results of measurements
on QC check samples can provide this information).

• The standard deviation obtained from replicate analyses performed on each of several samples.

NOTE: Replicates should be performed at materially different times to obtain estimates of intermediate precision; within-batch replication provides estimates of repeatability only.

• From formal multi-factor experimental designs, analysed by ANOVA to provide separate variance estimates for each factor.

7.7.3. Note that precision frequently varies significantly with the level of response. For example, the observed standard deviation often increases significantly and systematically with analyte concentration. In such cases, the uncertainty estimate should be adjusted to allow for the precision applicable to the particular result. Appendix E.5 gives additional guidance on handling level-dependent contributions to uncertainty.

Bias study

7.7.5. Bias for a method under study can also be determined by comparison of the results with those of a reference method. If the results show that the bias is not statistically significant, the standard uncertainty is that for the reference method (if applicable; see section 7.9.1.), combined with the standard uncertainty associated with the measured difference between methods. The latter contribution to uncertainty is given by the standard deviation term used in the significance test applied to decide whether the difference is statistically significant, as explained in the example below.

EXAMPLE

A method (method 1) for determining the concentration of selenium is compared with a reference method (method 2). The results (in mg kg⁻¹) from each method are as follows:

          mean   s      n
Method 1  5.40   1.47   5
Method 2  4.76   2.75   5

The standard deviations are pooled to give a pooled standard deviation sc
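The pooled standard deviation and the corresponding t statistic follow directly from the tabulated values; the sketch below mirrors the significance-test calculation described in 7.7.5.

```python
import math

# Selenium comparison data from the example (mg/kg): mean, standard
# deviation and number of results for each method.
mean1, s1, n1 = 5.40, 1.47, 5   # method 1 (method under study)
mean2, s2, n2 = 4.76, 2.75, 5   # method 2 (reference method)

# Pooled standard deviation from the two sets of replicates.
sc = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# Standard uncertainty of the difference between the method means; this is
# the standard deviation term used in the significance test.
u_diff = sc * math.sqrt(1 / n1 + 1 / n2)

# t statistic for the observed difference between the means.
t = (mean1 - mean2) / u_diff
```

With these data sc ≈ 2.20 mg kg⁻¹ and t ≈ 0.46, well below the two-tailed critical value of about 2.3 for 8 degrees of freedom, so the bias is not statistically significant; u_diff (≈ 1.39 mg kg⁻¹) then forms the uncertainty contribution associated with the difference between methods, as described in 7.7.5.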
• Comparison of result observed in a reference material with the recovery of added analyte in the same reference material.

• Judgement on the basis of specific materials with known extreme behaviour. For example, oyster tissue, a common marine tissue reference, is well known for a tendency to co-precipitate some elements with calcium salts on digestion, and may provide an estimate of ‘worst case’ recovery on which an uncertainty estimate can be based (e.g. by treating the worst case as an extreme of a rectangular or triangular distribution).

• Judgement on the basis of prior experience.

7.7.7. Bias may also be estimated by comparison of the particular method with a value determined by the method of standard additions, in which known quantities of the analyte are added to the test material, and the correct analyte concentration inferred by extrapolation. The uncertainty associated with the bias is then normally dominated by the uncertainties associated with the extrapolation, combined (where appropriate) with any significant contributions from the preparation and addition of stock solution.

NOTE: To be directly relevant, the additions should be made to the original sample, rather than a prepared extract.

7.7.8. It is a general requirement of the ISO Guide that corrections should be applied for all recognised and significant systematic effects. Where a correction is applied to allow for a significant overall bias, the uncertainty associated with the bias is estimated as described in paragraph 7.7.5. for the case of insignificant bias.

7.7.9. Where the bias is significant, but is nonetheless neglected for practical purposes, additional action is necessary (see section 7.16.).

Additional factors

7.7.10. The effects of any remaining factors should be estimated separately, either by experimental variation or by prediction from established theory. The uncertainty associated with such factors should be estimated, recorded and combined with other contributions in the normal way.

7.7.11. Where the effect of these remaining factors is demonstrated to be negligible compared to the precision of the study (i.e. statistically insignificant), it is recommended that an uncertainty contribution equal to the standard deviation associated with the relevant significance test be associated with that factor.

EXAMPLE

The effect of a permitted 1-hour extraction time variation is investigated by a t-test on five determinations each on the same sample, for the normal extraction time and a time reduced by 1 hour. The means and standard deviations (in mg L⁻¹) were: standard time: mean 1.8, standard deviation 0.21; alternate time: mean 1.7, standard deviation 0.17. A t-test uses the pooled variance

[(5 − 1) × 0.21² + (5 − 1) × 0.17²] / [(5 − 1) + (5 − 1)] = 0.037

to obtain

t = (1.8 − 1.7) / √(0.037 × (1/5 + 1/5)) = 0.82

This is not significant compared to tcrit = 2.3. But note that the difference (0.1) is compared with a calculated standard deviation term of √(0.037 × (1/5 + 1/5)) = 0.12. This value is the contribution to uncertainty associated with the effect of permitted variation in extraction time.

7.7.12. Where an effect is detected and is statistically significant, but remains sufficiently small to neglect in practice, the provisions of section 7.16. apply.

7.8. Using data from Proficiency Testing

7.8.1. Uses of PT data in uncertainty evaluation

Data from proficiency testing (PT) can also provide useful information for uncertainty evaluation. For methods already in use for a long time in the laboratory, data from proficiency testing (also called External Quality Assurance, EQA) can be used:

• for checking the estimated uncertainty with results from PT exercises for a single laboratory

• for estimating the laboratory’s measurement uncertainty.

7.8.2. Validity of PT data for uncertainty evaluation

The advantage of using PT data is that, while principally a test of laboratories’ performance, a single laboratory will, over time, test a range of well-characterised materials chosen for their relevance to the particular field of measurement.
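The extraction-time example can be reproduced numerically; the sketch below follows the same pooled-variance t-test and shows how the standard deviation term of the test doubles as the uncertainty contribution.

```python
import math

# Data from the extraction-time example (mg/L): five determinations at the
# standard time and five at a time reduced by one hour.
mean_std, s_std, n_std = 1.8, 0.21, 5
mean_alt, s_alt, n_alt = 1.7, 0.17, 5

# Pooled variance of the two groups of replicates.
pooled_var = ((n_std - 1) * s_std**2 + (n_alt - 1) * s_alt**2) / (
    (n_std - 1) + (n_alt - 1))

# Standard deviation term of the significance test; this is also the
# uncertainty contribution for the permitted variation in extraction time.
u_time = math.sqrt(pooled_var * (1 / n_std + 1 / n_alt))

# t statistic; compare with the critical value of about 2.3 for 8 degrees
# of freedom.
t = (mean_std - mean_alt) / u_time
```

Working with unrounded intermediate values gives pooled_var ≈ 0.0365, u_time ≈ 0.121 and t ≈ 0.83, consistent with the rounded figures (0.037, 0.12 and 0.82) quoted in the example.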
Further, PT test items may be more similar to a routine test item than a CRM since the demands on stability and homogeneity are frequently less stringent.

The relative disadvantage of PT samples is the lack of traceable reference values similar to those for certified reference materials. Consensus values in particular are prone to occasional error. This demands due care in their use for uncertainty estimation, as indeed is recommended by IUPAC for interpretation of PT results in general [H.16]. However, appreciable bias in consensus values is relatively infrequent as a proportion of all materials circulated, and substantial protection is provided by the extended timescale common in proficiency testing. PT assigned values, including those assigned by consensus of participants' results, may therefore be regarded as sufficiently reliable for most practical purposes.

The data obtained from a laboratory's participation in PT can be a good basis for uncertainty estimates provided the following conditions are fulfilled:

7.8.4. Use for evaluating uncertainty

Over several rounds, the deviations of laboratory results from the assigned values can provide a preliminary evaluation of the measurement uncertainty for that laboratory.

If the results for all the participants using the same method in the PT scheme are selected, the standard deviation obtained is equivalent to an estimate of interlaboratory reproducibility and can, in principle, be used in the same way as the reproducibility standard deviation obtained from collaborative study (section 7.6. above).

Eurolab Technical Reports 1/2002 "Measurement Uncertainty in Testing" [H.17], 1/2006 "Guide to the Evaluation of Measurement Uncertainty for Quantitative Test Results" [H.18] and "Measurement uncertainty revisited: Alternative approaches to uncertainty evaluation" [H.19] describe the use of PT data in more detail and provide worked examples, and a Nordtest guide [H.20] provides a general approach aimed at environmental laboratories.
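The pooling of PT deviations over several rounds can be sketched in code. This is an illustrative sketch only, not part of the Guide: the root-mean-square-of-relative-deviations approach is in the general spirit of the Nordtest guide cited above, and the function name and data are hypothetical.

```python
import math

def pt_uncertainty_estimate(results, assigned_values):
    """Preliminary relative standard uncertainty from PT participation:
    root-mean-square of the laboratory's relative deviations from the
    assigned values over several rounds (captures scatter and bias)."""
    if len(results) != len(assigned_values):
        raise ValueError("expected one result per PT round")
    rel_dev = [(x - ref) / ref for x, ref in zip(results, assigned_values)]
    return math.sqrt(sum(d * d for d in rel_dev) / len(rel_dev))

# Hypothetical results from six PT rounds (mg/kg) and the assigned values
lab_results = [10.4, 9.8, 10.9, 10.1, 9.6, 10.5]
assigned = [10.0, 10.0, 11.0, 10.0, 10.0, 10.0]
u_rel = pt_uncertainty_estimate(lab_results, assigned)
print(f"preliminary relative standard uncertainty: {u_rel:.3f}")
```

An estimate of this kind is only preliminary; outlying rounds and any consistent bias should still be examined separately.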
affecting the result; typically such factors as times, temperatures, masses, volumes etc. The uncertainty associated with these input factors must accordingly be assessed and either shown to be negligible or quantified (see example A6).

7.9.5. Empirical methods are normally subjected to collaborative studies and hence the uncertainty can be evaluated as described in section 7.6.

7.10. Evaluation of uncertainty for ad-hoc methods

7.10.1. Ad-hoc methods are methods established to carry out exploratory studies in the short term, or for a short run of test materials. Such methods are typically based on standard or well-established methods within the laboratory, but are adapted substantially (for example to study a different analyte) and will not generally justify formal validation studies for the particular material in question.

7.10.2. Since limited effort will be available to establish the relevant uncertainty contributions, it is necessary to rely largely on the known performance of related systems or blocks within these systems. Uncertainty estimation should accordingly be based on known performance on a related system or systems. This performance information should be supported by any study necessary to establish the relevance of the information. The following recommendations assume that such a related system is available and has been examined sufficiently to obtain a reliable uncertainty estimate, or that the method consists of blocks from other methods and that the uncertainty in these blocks has been established previously.

7.10.3. As a minimum, it is essential that an estimate of overall bias and an indication of precision be available for the method in question. Bias will ideally be measured against a reference material, but will in practice more commonly be assessed from spike recovery. The considerations of section 7.7.4. then apply, except that spike recoveries should be compared with those observed on the related system to establish the relevance of the prior studies to the ad-hoc method in question. The overall bias observed for the ad-hoc method, on the materials under test, should be comparable to that observed for the related system, within the requirements of the study.

7.10.4. A minimum precision experiment consists of a duplicate analysis. It is, however, recommended that as many replicates as practical are performed. The precision should be compared with that for the related system; the standard deviation for the ad-hoc method should be comparable.

NOTE: It is recommended that the comparison be based on inspection. Statistical significance tests (e.g. an F-test) will generally be unreliable with small numbers of replicates and will tend to lead to the conclusion that there is 'no significant difference' simply because of the low power of the test.

7.10.5. Where the above conditions are met unequivocally, the uncertainty estimate for the related system may be applied directly to results obtained by the ad-hoc method, making any adjustments appropriate for concentration dependence and other known factors.

7.11. Quantification of individual components

7.11.1. It is nearly always necessary to consider some sources of uncertainty individually. In some cases, this is only necessary for a small number of sources; in others, particularly when little or no method performance data is available, every source may need separate study (see examples 1, 2 and 3 in Appendix A for illustrations). There are several general methods for establishing individual uncertainty components:

• Experimental variation of input variables

• From standing data such as measurement and calibration certificates

• By modelling from theoretical principles

• Using judgement based on experience or informed by modelling of assumptions

These different methods are discussed briefly below.

7.12. Experimental estimation of individual uncertainty contributions

7.12.1. It is often possible and practical to obtain estimates of uncertainty contributions from experimental studies specific to individual parameters.

7.12.2. The standard uncertainty arising from random effects is often measured from repeatability experiments and is quantified in terms of the standard deviation of the measured values. In practice, no more than about fifteen
replicates need normally be considered, unless a high precision is required.

7.12.3. Other typical experiments include:

• Study of the effect of a variation of a single parameter on the result. This is particularly appropriate in the case of continuous, controllable parameters, independent of other effects, such as time or temperature. The rate of change of the result with the change in the parameter can be obtained from the experimental data. This is then combined directly with the uncertainty in the parameter to obtain the relevant uncertainty contribution.

NOTE: The change in parameter should be sufficient to change the result substantially compared to the precision available in the study (e.g. by five times the standard deviation of replicate measurements).

• Robustness studies, systematically examining the significance of moderate changes in parameters. This is particularly appropriate for rapid identification of significant effects, and commonly used for method optimisation. The method can be applied in the case of discrete effects, such as change of matrix, or small equipment configuration changes, which have unpredictable effects on the result. Where a factor is found to be significant, it is normally necessary to investigate further. Where insignificant, the associated uncertainty is (at least for initial estimation) that obtained from the robustness study.

• Systematic multifactor experimental designs intended to estimate factor effects and interactions. Such studies are particularly useful where a categorical variable is involved. A categorical variable is one in which the value of the variable is unrelated to the size of the effect; laboratory numbers in a study, analyst names, or sample types are examples of categorical variables. For example, the effect of changes in matrix type (within a stated method scope) could be estimated from recovery studies carried out in a replicated multiple-matrix study. An analysis of variance would then provide within- and between-matrix components of variance for observed analytical recovery. The between-matrix component of variance would provide a standard uncertainty associated with matrix variation.

7.13. Estimation based on other results or data

7.13.1. It is often possible to estimate some of the standard uncertainties using whatever relevant information is available about the uncertainty on the quantity concerned. The following paragraphs suggest some sources of information.

7.13.2. Quality Control (QC) data. As noted previously, it is necessary to ensure that the quality criteria set out in standard operating procedures are achieved, and that measurements on QC samples show that the criteria continue to be met. Where reference materials are used in QC checks, section 7.5. shows how the data can be used to evaluate uncertainty. Where any other stable material is used, the QC data provides an estimate of intermediate precision (section 7.7.2.). When stable QC samples are not available, quality control can use duplicate determinations or similar methods for monitoring repeatability; over the long term, the pooled repeatability data can be used to form an estimate of the repeatability standard deviation, which can form a part of the combined uncertainty.

7.13.3. QC data also provides a continuing check on the value quoted for the uncertainty. Clearly, the combined uncertainty arising from random effects cannot be less than the standard deviation of the QC measurements.

7.13.4. Further detail on the use of QC data in uncertainty evaluation can be found in recent NORDTEST and EUROLAB guides [H.19, H.20].

7.13.5. Suppliers' information. For many sources of uncertainty, calibration certificates or suppliers' catalogues provide information. For example, the tolerance of volumetric glassware may be obtained from the manufacturer's catalogue or a calibration certificate relating to a particular item in advance of its use.

7.14. Modelling from theoretical principles

7.14.1. In many cases, well-established physical theory provides good models for effects on the result. For example, temperature effects on volumes and densities are well understood. In such cases, uncertainties can be calculated or estimated from the form of the relationship using the uncertainty propagation methods described in section 8.
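The duplicate-based QC monitoring route described in 7.13.2 can be sketched in code. This is an illustrative sketch, not part of the Guide: the data are hypothetical, and the pooling uses the standard result that n duplicate pairs with within-pair differences d give s_r = sqrt(sum(d^2) / 2n).

```python
import math

def pooled_repeatability(duplicates):
    """Repeatability standard deviation pooled from duplicate
    determinations: s_r = sqrt(sum(d_i^2) / (2 n)), where d_i is the
    difference within pair i and n is the number of pairs."""
    diffs = [a - b for a, b in duplicates]
    n = len(diffs)
    return math.sqrt(sum(d * d for d in diffs) / (2 * n))

# Hypothetical duplicate QC determinations accumulated over time
pairs = [(5.2, 5.0), (4.8, 4.9), (5.1, 5.4), (5.0, 5.0), (4.7, 4.5)]
s_r = pooled_repeatability(pairs)
print(f"pooled repeatability standard deviation: {s_r:.3f}")
```

Each pair contributes one degree of freedom, so many pairs are needed before the pooled estimate is reliable.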
7.14.2. In other circumstances, it may be necessary to use approximate theoretical models combined with experimental data. For example, where an analytical measurement depends on a timed derivatisation reaction, it may be necessary to assess uncertainties associated with timing. This might be done by simple variation of elapsed time. However, it may be better to establish an approximate rate model from brief experimental studies of the derivatisation kinetics near the concentrations of interest, and assess the uncertainty from the predicted rate of change at a given time.

7.15. Estimation based on judgement

7.15.1. The evaluation of uncertainty is neither a routine task nor a purely mathematical one; it depends on detailed knowledge of the nature of the measurand and of the measurement method and procedure used. The quality and utility of the uncertainty quoted for the result of a measurement therefore ultimately depends on the understanding, critical analysis, and integrity of those who contribute to the assignment of its value.

7.15.2. Most distributions of data can be interpreted in the sense that it is less likely to observe data in the margins of the distribution than in the centre. The quantification of these distributions and their associated standard deviations is done through repeated measurements.

7.15.3. However, other assessments of intervals may be required in cases when repeated measurements cannot be performed or do not provide a meaningful measure of a particular uncertainty component.

7.15.4. There are numerous instances in analytical chemistry when the latter prevails, and judgement is required. For example:

• An assessment of recovery and its associated uncertainty cannot be made for every single sample. Instead, an assessment is made for classes of samples (e.g. grouped by type of matrix), and the results applied to all samples of similar type. The degree of similarity is itself an unknown, thus this inference (from type of matrix to a specific sample) is associated with an extra element of uncertainty that has no frequentist interpretation.

• The model of the measurement as defined by the specification of the analytical procedure is used for converting the measured quantity to the value of the measurand (analytical result). This model is - like all models in science - subject to uncertainty. It is only assumed that nature behaves according to the specific model, but this can never be known with ultimate certainty.

• The use of reference materials is highly encouraged, but there remains uncertainty regarding not only the true value, but also regarding the relevance of a particular reference material for the analysis of a specific sample. A judgement is required of the extent to which a proclaimed standard substance reasonably resembles the nature of the samples in a particular situation.

• Another source of uncertainty arises when the measurand is insufficiently defined by the procedure. Consider the determination of "permanganate oxidizable substances", which are undoubtedly different depending on whether one analyses ground water or municipal waste water. Not only factors such as oxidation temperature, but also chemical effects such as matrix composition or interference, may have an influence on this specification.

• A common practice in analytical chemistry calls for spiking with a single substance, such as a close structural analogue or isotopomer, from which either the recovery of the respective native substance or even that of a whole class of compounds is judged. Clearly, the associated uncertainty is experimentally assessable provided the analyst is prepared to study the recovery at all concentration levels and ratios of measurands to the spike, and all "relevant" matrices. But frequently this experimentation is avoided and substituted by judgements on:

• the concentration dependence of recoveries of measurand,

• the concentration dependence of recoveries of spike,

• the dependence of recoveries on (sub)type of matrix,

• the identity of binding modes of native and spiked substances.

7.15.5. Judgement of this type is not based on immediate experimental results, but rather on a subjective (personal) probability, an expression which here can be used synonymously with "degree of belief", "intuitive probability" and
"credibility" [H.21]. It is also assumed that a degree of belief is not based on a snap judgement, but on a well considered mature judgement of probability.

7.15.6. Although it is recognised that subjective probabilities vary from one person to another, and even from time to time for a single person, they are not arbitrary as they are influenced by common sense, expert knowledge, and by earlier experiments and observations.

7.15.7. This may appear to be a disadvantage, but need not lead in practice to worse estimates than those from repeated measurements. This applies particularly if the true, real-life, variability in experimental conditions cannot be simulated and the resulting variability in data thus does not give a realistic picture.

7.15.8. A typical problem of this nature arises if long-term variability needs to be assessed when no collaborative study data are available. A scientist who dismisses the option of substituting subjective probability for an actually measured one (when the latter is not available) is likely to ignore important contributions to combined uncertainty, thus being ultimately less objective than one who relies on subjective probabilities.

7.15.9. For the purpose of estimation of combined uncertainties, two features of degree of belief estimations are essential:

• degree of belief is regarded as interval valued, which is to say that a lower and an upper bound similar to a classical probability distribution is provided,

• the same computational rules apply in combining 'degree of belief' contributions of uncertainty to a combined uncertainty as for standard deviations derived by other methods.

7.16. Significance of bias

7.16.1. It is a general requirement of the ISO Guide that corrections should be applied for all recognised and significant systematic effects.

7.16.2. In deciding whether a known bias can reasonably be neglected, the following approach is recommended:

i) Estimate the combined uncertainty without considering the relevant bias.

ii) Compare the bias with the combined uncertainty.

iii) Where the bias is not significant compared to the combined uncertainty, the bias may be neglected.

iv) Where the bias is significant compared to the combined uncertainty, additional action is required. Appropriate actions might include:

• Eliminate or correct for the bias, making due allowance for the uncertainty of the correction.

• Report the observed bias and its uncertainty in addition to the result.

NOTE: Where a known bias is uncorrected by convention, the method should be considered empirical (see section 7.8).
Quantifying Uncertainty Step 4. Calculating the Combined Uncertainty
square of the associated uncertainty expressed as a standard deviation multiplied by the square of the relevant sensitivity coefficient. These sensitivity coefficients describe how the value of y varies with changes in the parameters x1, x2 etc.

NOTE: Sensitivity coefficients may also be evaluated directly by experiment; this is particularly valuable where no reliable mathematical description of the relationship exists.

8.2.3. Where variables are not independent, the relationship is more complex:

$u(y(x_{i,j\ldots})) = \sqrt{\sum_{i=1}^{n} c_i^2\,u(x_i)^2 + \sum_{\substack{i,k=1 \\ i \neq k}}^{n} c_i c_k\,u(x_i,x_k)}$

Appendix E also describes the use of Monte Carlo simulation, an alternative numerical approach. It is recommended that these, or other appropriate computer-based methods, be used for all but the simplest cases.

8.2.6. In some cases, the expressions for combining uncertainties reduce to much simpler forms. Two simple rules for combining standard uncertainties are given here.

Rule 1

For models involving only a sum or difference of quantities, e.g. y=(p+q+r+...), the combined standard uncertainty uc(y) is given by
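The expression in 8.2.3, with sensitivity coefficients evaluated numerically as suggested in the NOTE to 8.2.2, can be sketched as follows. This is an illustrative sketch, not part of the Guide: the function names, the finite-difference step and the example model are assumptions.

```python
import math

def sensitivity(f, x, i, h=1e-6):
    """Numerical sensitivity coefficient c_i = dy/dx_i, evaluated by
    central finite differences (cf. the NOTE on direct evaluation)."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def combined_u(f, x, u, cov=None):
    """u(y) per 8.2.3: sum of c_i^2 u(x_i)^2 terms plus, for correlated
    inputs, the cross terms c_i c_k u(x_i, x_k)."""
    n = len(x)
    c = [sensitivity(f, x, i) for i in range(n)]
    var = sum((c[i] * u[i]) ** 2 for i in range(n))
    for (i, k), u_ik in (cov or {}).items():
        if i != k:
            var += c[i] * c[k] * u_ik
    return math.sqrt(var)

# Assumed example model: a simple sum, so for independent inputs the
# result reduces to the quadrature combination of Rule 1
f = lambda x: x[0] + x[1]
print(round(combined_u(f, [1.0, 2.0], [0.3, 0.4]), 6))  # prints 0.5
```

For independent inputs the covariance terms vanish, and the function reduces to the simple quadrature form of Rule 1.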
u(y) = √(0.13² + 0.05² + 0.22²) = 0.26

EXAMPLE 2

y = (op/qr). The values are o=2.46, p=4.32, q=6.38 and r=2.99, with standard uncertainties of u(o)=0.02, u(p)=0.13, u(q)=0.11 and u(r)=0.07.

y = (2.46 × 4.32)/(6.38 × 2.99) = 0.56

u(y) = 0.56 × √[(0.02/2.46)² + (0.13/4.32)² + (0.11/6.38)² + (0.07/2.99)²]

⇒ u(y) = 0.56 × 0.043 = 0.024

8.2.9. There are many instances in which the magnitudes of components of uncertainty vary with the level of analyte. For example, uncertainties in recovery may be smaller for high levels of material, or spectroscopic signals may vary randomly on a scale approximately proportional to intensity (constant coefficient of variation). In such cases, it is important to take account of the changes in the combined standard uncertainty with level of analyte. Approaches include:

• Restricting the specified procedure or uncertainty estimate to a small range of analyte concentrations.

• Providing an uncertainty estimate in the form of a relative standard deviation.

• Explicitly calculating the dependence and recalculating the uncertainty for a given result.

Appendix E.5 gives additional information on these approaches.

8.3. Expanded uncertainty

8.3.1. The final stage is to multiply the combined standard uncertainty by the chosen coverage factor in order to obtain an expanded uncertainty. The expanded uncertainty is required to provide an interval which may be expected to encompass a large fraction of the distribution of values which could reasonably be attributed to the measurand.

8.3.2. In choosing a value for the coverage factor k, a number of issues should be considered. These include:

• The level of confidence required

• Any knowledge of the number of values used to estimate random effects (see 8.3.3 below).

8.3.3. For most purposes it is recommended that k is set to 2. However, this value of k may be insufficient where the combined uncertainty is based on statistical observations with relatively few degrees of freedom (less than about six). The choice of k then depends on the effective number of degrees of freedom.

8.3.4. Where the combined standard uncertainty is dominated by a single contribution with fewer than six degrees of freedom, it is recommended that k be set equal to the two-tailed value of Student's t for the number of degrees of freedom associated with that contribution, and for the level of confidence required (normally 95 %). Table 1 (page 29) gives a short list of values for t, including degrees of freedom above six for critical applications.

EXAMPLE:

A combined standard uncertainty for a weighing operation is formed from contributions ucal=0.01 mg arising from calibration uncertainty and sobs=0.08 mg based on the standard deviation of five repeated observations. The combined standard uncertainty uc is equal to √(0.01² + 0.08²) = 0.081 mg. This is clearly dominated by the repeatability contribution sobs, which is based on five observations, giving 5−1=4 degrees of freedom. k is accordingly based on Student's t. The two-tailed value of t for four degrees of freedom and 95 % confidence is, from tables, 2.8; k is accordingly set to 2.8 and the expanded uncertainty U=2.8×0.081=0.23 mg.

8.3.5. The Guide [H.2] gives additional guidance on choosing k where a small number of measurements is used to estimate large random effects, and should be referred to when estimating degrees of freedom where several contributions are significant.

8.3.6. Where the distributions concerned are normal, a coverage factor of 2 (or chosen according to paragraphs 8.3.3.-8.3.5. using a level of confidence of 95 %) gives an interval containing approximately 95 % of the distribution of values. It is not recommended that this interval is taken to imply a 95 % confidence interval without a knowledge of the distribution concerned.
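The worked examples above can be reproduced numerically. The sketch below is illustrative (the variable names are mine, not the Guide's); it simply repeats the arithmetic of Example 2 and of the weighing EXAMPLE.

```python
import math

# Example 2: y = op/(qr); for a purely multiplicative model the
# relative standard uncertainties combine in quadrature
o, p, q, r = 2.46, 4.32, 6.38, 2.99
u = {"o": 0.02, "p": 0.13, "q": 0.11, "r": 0.07}
y = (o * p) / (q * r)
rel = math.sqrt((u["o"] / o) ** 2 + (u["p"] / p) ** 2
                + (u["q"] / q) ** 2 + (u["r"] / r) ** 2)
u_y = y * rel
print(f"y = {y:.2f}, u(y) = {u_y:.3f}")       # y = 0.56, u(y) = 0.024

# Weighing EXAMPLE (8.3.4): u_c is dominated by s_obs with 4 degrees of
# freedom, so k is the two-tailed Student's t value (2.8 at 95 %)
u_cal, s_obs = 0.01, 0.08
u_c = math.sqrt(u_cal ** 2 + s_obs ** 2)
U = 2.8 * u_c
print(f"u_c = {u_c:.3f} mg, U = {U:.2f} mg")  # u_c = 0.081 mg, U = 0.23 mg
```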
Quantifying Uncertainty Reporting Uncertainty
9. Reporting Uncertainty
• give the value of each input quantity, its standard uncertainty and a description of how each was obtained

• give the relationship between the result and the input values and any partial derivatives, covariances or correlation coefficients used to account for correlation effects

*Standard uncertainty corresponds to one standard deviation.

9.4. Reporting expanded uncertainty

9.4.1. Unless otherwise required, the result x should be stated together with the expanded uncertainty U calculated using a coverage factor
k=2 (or as described in section 8.3.3.). The following form is recommended:

"(Result): (x ± U) (units)

[where] the reported uncertainty is [an expanded uncertainty as defined in the International Vocabulary of Basic and General Terms in Metrology, 2nd ed., ISO 1993,] calculated using a coverage factor of 2, [which gives a level of confidence of approximately 95 %]"

Terms in parentheses [] may be omitted or abbreviated as appropriate. The coverage factor should, of course, be adjusted to show the value actually used.

EXAMPLE:

Total nitrogen: (3.52 ± 0.14) g/100 g *

*The reported uncertainty is an expanded uncertainty calculated using a coverage factor of 2, which gives a level of confidence of approximately 95 %.

9.5. Numerical expression of results

9.5.1. The numerical values of the result and its uncertainty should not be given with an excessive number of digits. Whether expanded uncertainty U or a standard uncertainty u is given, it is seldom necessary to give more than two significant digits for the uncertainty. Results should be rounded to be consistent with the uncertainty given.

9.6. Asymmetric intervals

9.6.1. In some circumstances, particularly relating to uncertainties in results near zero (Appendix F) or following Monte Carlo estimation (Appendix E.3), the distribution associated with the result may be strongly asymmetric. It may then be inappropriate to quote a single value for the uncertainty. Instead, the limits of the estimated coverage interval should be given. If it is likely that the result and its uncertainty will be used in further calculations, the standard uncertainty should also be given.

EXAMPLE:

Purity (as a mass fraction) might be reported as:

Purity: 0.995 with approximate 95 % confidence interval 0.983 to 1.000, based on a standard uncertainty of 0.005 and 11 degrees of freedom.

9.7. Compliance against limits

9.7.1. Regulatory compliance often requires that a measurand, such as the concentration of a toxic substance, be shown to be within particular limits. Measurement uncertainty clearly has implications for interpretation of analytical results in this context. In particular:

• The uncertainty in the analytical result may need to be taken into account when assessing compliance.

• The limits may have been set with some allowance for measurement uncertainties.

Consideration should be given to both factors in any assessment.

9.7.2. Detailed guidance on how to take uncertainty into account when assessing compliance is given in the EURACHEM Guide "Use of uncertainty information in compliance assessment" [H.24]. The following paragraphs summarise the principles of reference [H.24].

9.7.3. The basic requirements for deciding whether or not to accept the test item are:

• A specification giving upper and/or lower permitted limits of the characteristics (measurands) being controlled.

• A decision rule that describes how the measurement uncertainty will be taken into account with regard to accepting or rejecting a product according to its specification and the result of a measurement.

• The limit(s) of the acceptance or rejection zone (i.e. the range of results), derived from the decision rule, which leads to acceptance or rejection when the measurement result is within the appropriate zone.

EXAMPLE:

A decision rule that is currently widely used is that a result implies non-compliance with an upper limit if the measured value exceeds the limit by the expanded uncertainty. With this decision rule, only case (i) in Figure 2 would imply non-compliance. Similarly, for a decision rule that a result implies compliance only if it is below the limit by the expanded uncertainty, only case (iv) would imply compliance.

9.7.4. In general the decision rules may be more complicated than these. Further discussion may be found in reference [H.24].
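The widely used decision rule quoted in the EXAMPLE can be made explicit in code. The sketch below is illustrative, not part of the Guide; the function name and the numerical values are hypothetical.

```python
def assess_upper_limit(result, U, limit):
    """Decision rule from the EXAMPLE in 9.7.3: non-compliant only if the
    result exceeds the limit by more than the expanded uncertainty U;
    compliant only if it is below the limit by more than U; otherwise
    the uncertainty interval straddles the limit and the rule alone
    cannot decide."""
    if result - U > limit:
        return "non-compliant"   # case (i)
    if result + U < limit:
        return "compliant"       # case (iv)
    return "inconclusive"        # cases (ii) and (iii)

# Hypothetical upper limit of 0.50 mg/kg with U = 0.05 mg/kg
print(assess_upper_limit(0.60, 0.05, 0.50))  # non-compliant
print(assess_upper_limit(0.52, 0.05, 0.50))  # inconclusive
print(assess_upper_limit(0.40, 0.05, 0.50))  # compliant
```

How the "inconclusive" zone is treated (accepted, rejected, or reported with a caveat) is exactly what the decision rule in 9.7.3 must specify in advance.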
Figure 2: Assessment against an upper control limit. Four cases are shown: (i) result plus uncertainty above limit; (ii) result above limit but limit within uncertainty; (iii) result below limit but limit within uncertainty; (iv) result minus uncertainty below limit.
Quantifying Uncertainty Appendix A. Examples
Appendix A. Examples
Introduction

General introduction

These examples illustrate how the techniques for evaluating uncertainty, described in sections 5-7, can be applied to some typical chemical analyses. They all follow the procedure shown in the flow diagram (Figure 1 on page 11). The uncertainty sources are identified and set out in a cause and effect diagram (see Appendix D). This helps to avoid double counting of sources and also assists in the grouping together of components whose combined effect can be evaluated. Examples 1-6 illustrate the use of the spreadsheet method of Appendix E.2 for calculating the combined uncertainties from the calculated contributions u(y,xi).*

Each of examples 1-6 has an introductory summary. This gives an outline of the analytical method, a table of the uncertainty sources and their respective contributions, a graphical comparison of the different contributions, and the combined uncertainty.

Examples 1-3 and 5 illustrate the evaluation of the uncertainty by the quantification of the uncertainty arising from each source separately. Each gives a detailed analysis of the uncertainty associated with the measurement of volumes using volumetric glassware and masses from difference weighings. The detail is for illustrative purposes, and should not be taken as a general recommendation as to the level of detail required or the approach taken. For many analyses, the uncertainty associated with these operations will not be significant and such a detailed evaluation will not be necessary. It would be sufficient to use typical values for these operations, with due allowance being made for the actual values of the masses and volumes involved.

Example A1

Example A1 deals with the very simple case of the preparation of a calibration standard of cadmium in HNO3 for atomic absorption spectrometry (AAS). Its purpose is to show how to evaluate the components of uncertainty arising from the basic operations of volume measurement and weighing, and how these components are combined to determine the overall uncertainty.

Example A2

This deals with the preparation of a standardised solution of sodium hydroxide (NaOH), which is standardised against the titrimetric standard potassium hydrogen phthalate (KHP). It includes the evaluation of uncertainty on simple volume measurements and weighings, as described in example A1, but also examines the uncertainty associated with the titrimetric determination.

Example A3

Example A3 expands on example A2 by including the titration of an HCl solution against the prepared NaOH solution.

Example A4

This illustrates the use of in-house validation data, as described in section 7.7., and shows how the data can be used to evaluate the uncertainty arising from the combined effect of a number of sources. It also shows how to evaluate the uncertainty associated with method bias.

Example A5

This shows how to evaluate the uncertainty on results obtained using a standard or "empirical" method to measure the amount of heavy metals leached from ceramic ware using a defined procedure, as described in sections 7.2.-7.9. Its purpose is to show how, in the absence of collaborative trial data or ruggedness testing results, it is necessary to consider the uncertainty arising from the range of the parameters (e.g. temperature, etching time and acid strength) allowed in the method definition. This process is considerably simplified when collaborative study data is available, as is shown in the next example.

* Section 8.2.2. explains the theory behind the calculated contributions u(y,xi).
Quantifying Uncertainty Example A1: Preparation of a Calibration Standard
Example A1: Preparation of a Calibration Standard
Summary
Goal

A calibration standard is prepared from a high purity metal (cadmium) with a concentration of ca. 1000 mg L-1.

Measurement procedure

The surface of the high purity metal is cleaned to remove any metal-oxide contamination. Afterwards the metal is weighed and then dissolved in nitric acid in a volumetric flask. The stages in the procedure are shown in the following flow chart.

Figure A1.1: Preparation of cadmium standard (flow chart: Clean metal surface → Weigh metal → Dissolve and dilute → RESULT)

Measurand

cCd = (1000 × m × P) / V  [mg L-1]

where

cCd   concentration of the calibration standard [mg L-1]
1000  conversion factor from [mL] to [L]
m     mass of the high purity metal [mg]
P     purity of the metal given as mass fraction
V     volume of the liquid of the calibration standard [mL]

Identification of the uncertainty sources

The relevant uncertainty sources are shown in the cause and effect diagram below. Its branches are: V (temperature, calibration, repeatability); Purity; and m(tare) and m(gross) (each with readability, repeatability, linearity, sensitivity and calibration), all feeding into c(Cd).

Quantification of the uncertainty components

The values and their uncertainties are shown in the Table below.

Combined Standard Uncertainty

The combined standard uncertainty for the preparation of a 1002.7 mg L-1 Cd calibration standard is 0.9 mg L-1.

The different contributions are shown diagrammatically in Figure A1.2.
Figure A1.2: Uncertainty contributions in cadmium standard preparation [bar chart of the contributions of Purity, m and V to the combined uncertainty of c(Cd)]
A1.3 Step 2: Identifying and analysing uncertainty sources

The aim of this second step is to list all the uncertainty sources for each of the parameters which affect the value of the measurand.
The different effects and their influences are shown as a cause and effect diagram in Figure A1.4 (see Appendix D for description).

The volume has three major influences: calibration, repeatability and temperature effects.

i) Calibration: The manufacturer quotes a volume for the flask of 100 mL ± 0.1 mL
ii) Repeatability: The uncertainty due to variations in filling can be estimated from a repeatability experiment on a typical example of the flask used. A series of ten fill and weigh experiments on a typical 100 mL flask gave a standard deviation of 0.02 mL. This can be used directly as a standard uncertainty.

iii) Temperature: According to the manufacturer the flask has been calibrated at a temperature of 20 °C, whereas the laboratory temperature varies between the limits of ±4 °C. The uncertainty from this effect can be calculated from the estimate of the temperature range and the coefficient of the volume expansion. The volume expansion of the liquid is considerably larger than that of the flask, so only the former needs to be considered. The coefficient of volume expansion for water is 2.1×10⁻⁴ °C⁻¹, which leads to a volume variation of

±(100 × 4 × 2.1×10⁻⁴) = ±0.084 mL

The standard uncertainty is calculated using the assumption of a rectangular distribution for the temperature variation, i.e.

0.084/√3 = 0.05 mL

Table A1.2: Values and Uncertainties

Description                  Value x   u(x)       u(x)/x
Purity of the metal P        0.9999    0.000058   0.000058
Mass of the metal m (mg)     100.28    0.05 mg    0.0005
Volume of the flask V (mL)   100.0     0.07 mL    0.0007

The intermediate values, their standard uncertainties and their relative standard uncertainties are summarised above (Table A1.2). Using those values, the concentration of the calibration standard is

cCd = (1000 × 100.28 × 0.9999) / 100.0 = 1002.7 mg L⁻¹

For this simple multiplicative expression, the uncertainties associated with each component are combined as follows:

uc(cCd)/cCd = √[(u(P)/P)² + (u(m)/m)² + (u(V)/V)²]
            = √(0.000058² + 0.0005² + 0.0007²)
            = 0.0009

uc(cCd) = cCd × 0.0009 = 1002.7 mg L⁻¹ × 0.0009 = 0.9 mg L⁻¹

It is preferable to derive the combined standard uncertainty (uc(cCd)) using the spreadsheet method given in Appendix E, since this can be utilised even for complex expressions. The completed spreadsheet is shown in Table A1.3. The values of the parameters are entered in the second row from C2 to E2. Their standard uncertainties are in the row below (C3-E3). The spreadsheet copies the values from C2-E2 into the second column from B5 to B7. The result (c(Cd)) using these values is given in B9. C5 shows the value of P from C2 plus its uncertainty given in C3. The result of the calculation using the values C5-C7 is given in C9. The columns D and E follow a similar procedure. The values shown in row 10 (C10-E10) are the differences of row 9 (C9-E9) minus the value given in B9. In row 11 (C11-E11) the values of row 10 (C10-E10) are squared and summed to give the value shown in B11. B13 gives the combined standard uncertainty, which is the square root of B11.

The contributions of the different parameters are shown in Figure A1.5. The contribution of the uncertainty on the volume of the flask is the largest and that from the weighing procedure is similar. The uncertainty on the purity of the cadmium has virtually no influence on the overall uncertainty.

The expanded uncertainty U(cCd) is obtained by multiplying the combined standard uncertainty with a coverage factor of 2, giving

U(cCd) = 2 × 0.9 mg L⁻¹ = 1.8 mg L⁻¹

Table A1.3: Spreadsheet calculation of uncertainty

     A                B            C           D           E
1                                  P           m           V
2    Value                         0.9999      100.28      100.00
3    Uncertainty                   0.000058    0.05        0.07
4
5    P                0.9999       0.999958    0.9999      0.9999
6    m                100.28       100.28      100.33      100.28
7    V                100.0        100.00      100.00      100.07
8
9    c(Cd)            1002.69972   1002.75788  1003.19966  1001.99832
10   u(y,xi)*                      0.05816     0.49995     -0.70140
11   u(y)², u(y,xi)²  0.74529      0.00338     0.24995     0.49196
12
13   u(c(Cd))         0.9

*The sign of the difference has been retained

Figure A1.5: Uncertainty contributions [bar chart of |u(y,xi)| for V, m, Purity and c(Cd)]
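The spreadsheet calculation of Table A1.3 can be checked with a few lines of code. The sketch below (function and variable names are ours, not part of the Guide) implements the same one-at-a-time perturbation:

```python
# Spreadsheet (Kragten) method of Appendix E applied to
# c(Cd) = 1000 * m * P / V, with the values of Table A1.2.

def c_cd(P, m, V):
    """Concentration of the Cd standard in mg/L (m in mg, V in mL)."""
    return 1000 * m * P / V

values = {"P": 0.9999, "m": 100.28, "V": 100.0}
uncerts = {"P": 0.000058, "m": 0.05, "V": 0.07}

base = c_cd(**values)  # cell B9: 1002.69972 mg/L

# Row 10: recompute with each parameter shifted by +u(x_i), sign retained.
diffs = {}
for name in values:
    shifted = dict(values, **{name: values[name] + uncerts[name]})
    diffs[name] = c_cd(**shifted) - base

# Cell B13: combined standard uncertainty.
u_c = sum(d * d for d in diffs.values()) ** 0.5
print(round(base, 1), round(u_c, 1))  # 1002.7 0.9
```

Because the model function is simply recomputed at shifted inputs, the same code works unchanged for more complex measurand equations.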
Example A2: Standardising a Sodium Hydroxide Solution
Summary

Goal
A solution of sodium hydroxide (NaOH) is standardised against the titrimetric standard potassium hydrogen phthalate (KHP).

Measurement procedure
The titrimetric standard (KHP) is dried and weighed. After the preparation of the NaOH solution the sample of the titrimetric standard (KHP) is dissolved and then titrated using the NaOH solution. The stages in the procedure are shown in the flow chart Figure A2.1 (Weighing KHP → Preparing NaOH → Titration → RESULT).

Measurand:

cNaOH = (1000 · mKHP · PKHP) / (MKHP · VT)  [mol L⁻¹]

where
cNaOH  concentration of the NaOH solution [mol L⁻¹]
1000   conversion factor [mL] to [L]
mKHP   mass of the titrimetric standard KHP [g]
PKHP   purity of the titrimetric standard given as mass fraction
MKHP   molar mass of KHP [g mol⁻¹]
VT     titration volume of NaOH solution [mL]

Identification of the uncertainty sources:
The relevant uncertainty sources are shown as a cause and effect diagram in Figure A2.2.

Quantification of the uncertainty components
The different uncertainty contributions are given in Table A2.1, and shown diagrammatically in Figure A2.3. The combined standard uncertainty for the 0.10214 mol L-1 NaOH solution is 0.00010 mol L-1.
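For illustration, the measurand equation can be evaluated directly. The input figures below are borrowed from the worked data of Examples A2 and A3 (mKHP = 0.3888 g, VT = 18.64 mL, MKHP = 204.2212 g mol⁻¹); together they reproduce the 0.10214 mol L⁻¹ quoted above:

```python
# c(NaOH) = 1000 * m_KHP * P_KHP / (M_KHP * V_T)
# Values borrowed from the worked data of Examples A2/A3.

m_khp = 0.3888      # g, mass of KHP
p_khp = 1.0000      # purity as mass fraction
m_molar = 204.2212  # g/mol, molar mass of KHP
v_t = 18.64         # mL, titration volume of NaOH

c_naoh = 1000 * m_khp * p_khp / (m_molar * v_t)  # mol/L
print(round(c_naoh, 5))  # 0.10214
```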
Figure A2.2: Cause and effect diagram for titration [branches PKHP, mKHP (balance calibration: sensitivity and linearity; m(gross), m(tare)) and VT (calibration, end-point, temperature), feeding cNaOH]

Figure A2.3: Uncertainty contributions [bar chart of |u(y,xi)| for V(T), M(KHP), P(KHP), m(KHP), Repeatability and c(NaOH)]
A2.3 Step 2: Identifying and analysing uncertainty sources

The aim of this step is to identify all major uncertainty sources and to understand their effect on the measurand and its uncertainty. This has been shown to be one of the most difficult steps in evaluating the uncertainty of analytical measurements, because there is a risk of neglecting uncertainty sources on the one hand and of double-counting them on the other. The use of a cause and effect diagram (Appendix D) is one possible way to help prevent this happening. The first step in preparing the diagram is to draw the four parameters of the equation of the measurand as the main branches.

Figure A2.5: First step in setting up a cause and effect diagram [main branches PKHP, mKHP, VT and MKHP, feeding cNaOH]

Afterwards, each step of the method is considered and any further influence quantity is added as a factor to the diagram working outwards from the main effect. This is carried out for each branch until effects become sufficiently remote, that is, until effects on the result are negligible.

Mass mKHP

Approximately 388 mg of KHP are weighed to standardise the NaOH solution. The weighing procedure is a weight by difference. This means that a branch for the determination of the tare (mtare) and another branch for the gross weight (mgross) have to be drawn in the cause and effect diagram. Each of the two weighings is subject to run to run variability and the uncertainty of the calibration of the balance. The calibration itself has two possible uncertainty sources: the sensitivity and the linearity of the calibration function. If the weighing is done on the same scale and over a small range of weight then the sensitivity contribution can be neglected.

All these uncertainty sources are added into the cause and effect diagram (see Figure A2.6).

Figure A2.6: Cause and effect diagram with added uncertainty sources for the weighing procedure

Purity PKHP

The purity of KHP is quoted in the supplier's catalogue to be within the limits of 99.95 % and 100.05 %. PKHP is therefore 1.0000 ±0.0005. There is no other uncertainty source if the drying procedure was performed according to the supplier's specification.

Molar mass MKHP

Potassium hydrogen phthalate (KHP) has the empirical formula C8H5O4K.
[Cause and effect diagram (repeated with added detail): PKHP, mKHP (balance calibration: sensitivity and linearity; m(gross), m(tare)) and VT (calibration, end-point, temperature), feeding cNaOH]
The uncertainty therefore arises solely from the balance linearity uncertainty.

Linearity: The calibration certificate of the balance quotes ±0.15 mg for the linearity. This value is the maximum difference between the actual mass on the pan and the reading of the scale. The balance manufacturer's own uncertainty evaluation recommends the use of a rectangular distribution to convert the linearity contribution to a standard uncertainty.

The balance linearity contribution is accordingly

0.15 mg / √3 = 0.09 mg

This contribution has to be counted twice, once for the tare and once for the gross weight, because each is an independent observation and the linearity effects are not correlated.

This gives for the standard uncertainty u(mKHP) of the mass mKHP a value of

u(mKHP) = √(2 × (0.09)²)  ⇒  u(mKHP) = 0.13 mg

NOTE 1: Buoyancy correction is not considered because all weighing results are quoted on the conventional basis for weighing in air [H.33]. The remaining uncertainties are too small to consider. Note 1 in Appendix G refers.

NOTE 2: There are other difficulties when weighing a titrimetric standard. A temperature difference of only 1 °C between the standard and the balance causes a drift in the same order of magnitude as the repeatability contribution. The titrimetric standard has been completely dried, but the weighing procedure is carried out at a humidity of around 50 % relative humidity, so adsorption of some moisture is expected.

Purity PKHP

PKHP is 1.0000 ±0.0005. The supplier gives no further information concerning the uncertainty in the catalogue. Therefore this uncertainty is taken as having a rectangular distribution, so the standard uncertainty u(PKHP) is 0.0005/√3 = 0.00029.

Molar mass MKHP

From the IUPAC table current at the time of measurement, the atomic weights and listed uncertainties for the constituent elements of KHP (C8H5O4K) were:

Element   Atomic weight   Quoted uncertainty   Standard uncertainty
C         12.0107         ±0.0008              0.00046
H         1.00794         ±0.00007             0.000040
O         15.9994         ±0.0003              0.00017
K         39.0983         ±0.0001              0.000058

For each element, the standard uncertainty is found by treating the IUPAC quoted uncertainty as forming the bounds of a rectangular distribution. The corresponding standard uncertainty is therefore obtained by dividing those values by √3.
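The two conversions above are easy to verify; this short sketch (variable names ours) reproduces u(mKHP) and u(PKHP):

```python
import math

# Balance linearity: +/-0.15 mg bound, rectangular distribution,
# counted once for the tare and once for the gross weight.
u_lin = 0.15 / math.sqrt(3)      # -> 0.09 mg per weighing
u_m_khp = math.sqrt(2) * 0.09    # -> 0.13 mg

# Purity: 1.0000 +/- 0.0005, rectangular distribution.
u_p_khp = 0.0005 / math.sqrt(3)  # -> 0.00029

print(round(u_lin, 2), round(u_m_khp, 2), round(u_p_khp, 5))
```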
The separate element contributions to the molar mass, together with the uncertainty contribution for each, are:

     Calculation   Result    Standard uncertainty
C8   8×12.0107     96.0856   0.0037
H5   5×1.00794     5.0397    0.00020
O4   4×15.9994     63.9976   0.00068
K    1×39.0983     39.0983   0.000058

The uncertainty in each of these values is calculated by multiplying the standard uncertainty in the previous table by the number of atoms.

This gives a molar mass for KHP of

MKHP = 96.0856 + 5.0397 + 63.9976 + 39.0983 = 204.2212 g mol⁻¹

As this expression is a sum of independent values, the standard uncertainty u(MKHP) is a simple square root of the sum of the squares of the contributions:

u(MKHP) = √(0.0037² + 0.0002² + 0.00068² + 0.000058²)
⇒ u(MKHP) = 0.0038 g mol⁻¹

NOTE: Since the element contributions to MKHP are simply the sum of the single atom contributions, it might be expected from the general rule for combining uncertainty contributions that the uncertainty for each element contribution would be calculated from the sum of squares of the single atom contributions, that is, for carbon, u(MC) = √(8 × 0.00037²) = 0.001 g mol⁻¹. Recall, however, that this rule applies only to independent contributions, that is, contributions from separate determinations of the value. In this case, the total is obtained by multiplying a single value by 8. Notice that the contributions from different elements are independent, and will therefore combine in the usual way.

Volume VT

1. Repeatability of the volume delivery: As before, the repeatability has already been taken into account via the combined repeatability term for the experiment.

2. Calibration: The limits of accuracy of the delivered volume are indicated by the manufacturer as a ± figure. For a 20 mL piston burette this number is typically ±0.03 mL. Assuming a triangular distribution gives a standard uncertainty of 0.03/√6 = 0.012 mL.

Note: The ISO Guide (F.2.3.3) recommends adoption of a triangular distribution if there are reasons to expect values in the centre of the range being more likely than those near the bounds. For the glassware in examples A1 and A2, a triangular distribution has been assumed (see the discussion under Volume uncertainties in example A1).

3. Temperature: The uncertainty due to the lack of temperature control is calculated in the same way as in the previous example, but this time taking a possible temperature variation of ±3 °C (with a 95 % confidence). Again using the coefficient of volume expansion for water as 2.1×10⁻⁴ °C⁻¹ gives a value of

(19 × 2.1×10⁻⁴ × 3) / 1.96 = 0.006 mL

Thus the standard uncertainty due to incomplete temperature control is 0.006 mL.

NOTE: When dealing with uncertainties arising from incomplete control of environmental factors such as temperature, it is essential to take account of any correlation in the effects on different intermediate values. In this example, the dominant effect on the solution temperature is taken as the differential heating effects of different solutes, that is, the solutions are not equilibrated to ambient temperature. Temperature effects on each solution concentration at STP are therefore uncorrelated in this example, and are consequently treated as independent uncertainty contributions.

4. Bias of the end-point detection: The titration is performed under a layer of Argon to exclude any bias due to the absorption of CO2 in the titration solution. This approach follows the principle that it is better to prevent any bias than to correct for it. There are no other indications that the end-point determined from the shape of the pH-curve does not correspond to the equivalence-point, because a strong acid is titrated with a strong base. Therefore it is
A2.6 Step 5: Re-evaluate the significant components

The contribution of V(T) is the largest one. The volume of NaOH for titration of KHP (V(T)) itself is affected by four influence quantities: the repeatability of the volume delivery, the calibration of the piston burette, the difference between the operation and calibration temperature of the burette and the repeatability of the end-point detection. Checking the size of each contribution, the calibration is by far the largest. Therefore this contribution needs to be investigated more thoroughly.

The standard uncertainty of the calibration of V(T) was calculated from the data given by the manufacturer assuming a triangular distribution. The influence of the choice of the shape of the distribution is shown in Table A2.4.

According to the ISO Guide 4.3.9 Note 1:

"For a normal distribution with expectation µ and standard deviation σ, the interval µ ±3σ encompasses approximately 99.73 percent of the distribution. Thus, if the upper and lower bounds a+ and a− define 99.73 percent limits rather than 100 percent limits, and Xi can be assumed to be approximately normally distributed rather than there being no specific knowledge about Xi [between the bounds], then u²(xi) = a²/9. By comparison, the variance of a symmetric rectangular distribution of the half-width a is a²/3 ... and that of a symmetric triangular distribution of the half-width a is a²/6 ... The magnitudes of the variances of the three distributions are surprisingly similar in view of the differences in the assumptions upon which they are based."

Thus the choice of the distribution function of this influence quantity has little effect on the value of the combined standard uncertainty (uc(cNaOH)) and it is adequate to assume that it is triangular.

The expanded uncertainty U(cNaOH) is obtained by multiplying the combined standard uncertainty by a coverage factor of 2.

U(cNaOH) = 0.00010 × 2 = 0.0002 mol L⁻¹

Thus the concentration of the NaOH solution is (0.1021 ±0.0002) mol L⁻¹.
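The comparison quoted from the ISO Guide can be verified numerically. For a half-width a the three assumptions give u = a/3 (normal with 99.73 % limits), a/√3 (rectangular) and a/√6 (triangular); the sketch below uses the burette bound a = 0.03 mL:

```python
import math

a = 0.03  # mL, half-width of the calibration bound

u_normal = a / 3           # u^2 = a^2/9
u_rect = a / math.sqrt(3)  # u^2 = a^2/3
u_tri = a / math.sqrt(6)   # u^2 = a^2/6

# The largest and smallest estimates differ by less than a factor of 2,
# which is why the choice of distribution has little effect on uc(c_NaOH).
print(round(u_normal, 4), round(u_rect, 4), round(u_tri, 4))
```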
Note 1: The factor of 9 arises from the factor of 3 in Note 1 of ISO Guide
4.3.9 (see page 49 for details).
[Bar chart of uncertainty contributions |u(y,xi)|: V(HCl), M(KHP), V(T1), V(T2), P(KHP), m(KHP), Repeatability and c(HCl)]
NOTE 2: Buoyancy correction is not considered because all weighing results are quoted on the conventional basis for weighing in air [H.33]. The remaining uncertainties are too small to consider. Note 1 in Appendix G refers.

P(KHP)

P(KHP) is given in the supplier's certificate as 100 % with uncertainty quoted as ±0.05 % (or ±0.0005). This is taken as a rectangular distribution, so the standard uncertainty u(PKHP) is

u(PKHP) = 0.0005/√3 = 0.00029

V(T2)

i) Calibration: Figure given by the manufacturer (±0.03 mL) and treated as a triangular distribution: u = 0.03/√6 = 0.012 mL.

ii) Temperature: The possible temperature variation is within the limits of ±4 °C and treated as a rectangular distribution: u = 15 × 2.1×10⁻⁴ × 4/√3 = 0.007 mL.

iii) Bias of the end-point detection: A bias between the determined end-point and the equivalence-point due to atmospheric CO2 can be prevented by performing the titration under argon. No uncertainty allowance is made.

VT2 is found to be 14.89 mL and combining the two contributions to the uncertainty u(VT2) of the volume VT2 gives a value of

u(VT2) = √(0.012² + 0.007²)  ⇒  u(VT2) = 0.014 mL

Volume VT1

All contributions except the one for the temperature are the same as for VT2:

i) Calibration: 0.03/√6 = 0.012 mL

ii) Temperature: The approximate volume for the titration of 0.3888 g KHP is 19 mL NaOH, therefore its uncertainty contribution is 19 × 2.1×10⁻⁴ × 4/√3 = 0.009 mL.

iii) Bias: Negligible

VT1 is found to be 18.64 mL with a standard uncertainty u(VT1) of

u(VT1) = √(0.012² + 0.009²)  ⇒  u(VT1) = 0.015 mL

Molar mass MKHP

Atomic weights and listed uncertainties (from current IUPAC tables) for the constituent elements of KHP (C8H5O4K) are:

Element   Atomic weight   Quoted uncertainty   Standard uncertainty
C         12.0107         ±0.0008              0.00046
H         1.00794         ±0.00007             0.000040
O         15.9994         ±0.0003              0.00017
K         39.0983         ±0.0001              0.000058

For each element, the standard uncertainty is found by treating the IUPAC quoted uncertainty as forming the bounds of a rectangular distribution. The corresponding standard uncertainty is therefore obtained by dividing those values by √3.

The molar mass MKHP for KHP and its uncertainty u(MKHP) are, respectively:

MKHP = 8 × 12.0107 + 5 × 1.00794 + 4 × 15.9994 + 39.0983 = 204.2212 g mol⁻¹

u(MKHP) = √((8 × 0.00046)² + (5 × 0.00004)² + (4 × 0.00017)² + 0.000058²)
⇒ u(MKHP) = 0.0038 g mol⁻¹

NOTE: The single atom contributions are not independent. The uncertainty for the atom contribution is therefore calculated by multiplying the standard uncertainty of the atomic weight by the number of atoms.

Volume VHCl

i) Calibration: Uncertainty stated by the manufacturer for a 15 mL pipette as ±0.02 mL and treated as a triangular distribution: 0.02/√6 = 0.008 mL.

ii) Temperature: The temperature of the laboratory is within the limits of ±4 °C. Use of a rectangular temperature distribution gives a standard uncertainty of 15 × 2.1×10⁻⁴ × 4/√3 = 0.007 mL.

Combining these contributions gives

u(VHCl) = √(0.0037² + 0.008² + 0.007²)  ⇒  u(VHCl) = 0.011 mL
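The molar mass and volume evaluations above can be reproduced in a short script. The sketch below (names ours) follows the same rules: fully correlated single-atom contributions within an element, quadrature between elements, and triangular/rectangular conversions for the volume terms:

```python
import math

# Molar mass of KHP (C8H5O4K): symbol -> (count, atomic weight, quoted bound)
atoms = {
    "C": (8, 12.0107, 0.0008),
    "H": (5, 1.00794, 0.00007),
    "O": (4, 15.9994, 0.0003),
    "K": (1, 39.0983, 0.0001),
}

m_khp = sum(n * w for n, w, _ in atoms.values())
# Within an element the atom contributions are fully correlated:
# u(element) = n * bound/sqrt(3); elements then combine in quadrature.
u_m_khp = math.sqrt(sum((n * b / math.sqrt(3)) ** 2
                        for n, _, b in atoms.values()))

# u(V_T2): triangular calibration bound plus rectangular temperature term.
u_vt2 = math.sqrt((0.03 / math.sqrt(6)) ** 2
                  + (15 * 2.1e-4 * 4 / math.sqrt(3)) ** 2)

print(round(m_khp, 4), round(u_m_khp, 4), round(u_vt2, 3))
# 204.2212 0.0038 0.014
```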
A3.5 Step 4: Calculating the combined standard uncertainty

cHCl is given by

cHCl = (1000 · mKHP · PKHP · VT2) / (VT1 · MKHP · VHCl)

NOTE: The repeatability estimate is, in this example, treated as a relative effect; the complete model equation is therefore

cHCl = (1000 · mKHP · PKHP · VT2) / (VT1 · MKHP · VHCl) × rep

All the intermediate values of the two step experiment and their standard uncertainties are collected in Table A3.2. Using these values, the uncertainties associated with each component are combined accordingly:

uc(cHCl)/cHCl = √[(u(mKHP)/mKHP)² + (u(PKHP)/PKHP)² + (u(VT2)/VT2)² + (u(VT1)/VT1)² + (u(MKHP)/MKHP)² + (u(VHCl)/VHCl)² + u(rep)²]
             = √(0.00031² + 0.00029² + 0.00094² + 0.00080² + 0.000019² + 0.00073² + 0.001²)
             = 0.0018

⇒ uc(cHCl) = cHCl × 0.0018 = 0.00018 mol L⁻¹

A spreadsheet method (see Appendix E) can be used to simplify the above calculation of the combined standard uncertainty. The spreadsheet filled in with the appropriate values is shown in Table A3.3, with an explanation.

The sizes of the different contributions can be compared using a histogram. Figure A3.6 shows the values of the contributions |u(y,xi)| from Table A3.3.

Figure A3.6: Uncertainties in acid-base titration [bar chart of |u(y,xi)| in mmol L⁻¹, scale 0 to 0.2, for V(HCl), M(KHP), V(T1), Repeatability and c(HCl), among others]

The expanded uncertainty U(cHCl) is calculated by multiplying the combined standard uncertainty by a coverage factor of 2:

U(cHCl) = 0.00018 × 2 = 0.0004 mol L⁻¹

The concentration of the HCl solution is (0.1014 ±0.0004) mol L⁻¹.
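The relative-uncertainty combination above maps directly onto code; this sketch simply re-does the quadrature with the relative standard uncertainties listed in the equation:

```python
import math

# Relative standard uncertainties u(x)/x for
# c(HCl) = 1000 * m_KHP * P_KHP * V_T2 / (V_T1 * M_KHP * V_HCl) * rep
rel_u = [0.00031,   # m_KHP
         0.00029,   # P_KHP
         0.00094,   # V_T2
         0.00080,   # V_T1
         0.000019,  # M_KHP
         0.00073,   # V_HCl
         0.001]     # rep

rel_c = math.sqrt(sum(u * u for u in rel_u))
u_c_hcl = 0.1014 * rel_c  # mol/L, using c(HCl) = 0.1014 mol/L from above

print(round(rel_c, 4), round(u_c_hcl, 5))  # 0.0018 0.00018
```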
Including the temperature correction terms gives:

cHCl = (1000 · mKHP · PKHP / MKHP) × V′T2 / (V′T1 · V′HCl)
     = (1000 · mKHP · PKHP / MKHP) × VT2[1 − α(T − 20)] / (VT1[1 − α(T − 20)] · VHCl[1 − α(T − 20)])

This expression can be simplified by assuming that the mean temperature T and the expansion coefficient of an aqueous solution α are the same for all three volumes:

cHCl = (1000 · mKHP · PKHP / MKHP) × VT2 / (VT1 · VHCl · [1 − α(T − 20)])

This gives a slightly different result for the HCl concentration at 20 °C:

where
VT1;Ind  volume from a visual end-point detection
VT1      volume at the equivalence-point
VExcess  excess volume needed to change the colour of phenolphthalein

The volume correction quoted above leads to the following changes in the equation of the measurand:

cHCl = 1000 · mKHP · PKHP · (VT2;Ind − VExcess) / (MKHP · (VT1;Ind − VExcess) · VHCl)

The standard uncertainties u(VT2) and u(VT1) have to be recalculated using the standard uncertainty of the visual end-point detection as the uncertainty component of the repeatability of the end-point detection.

u(VT1) = u(VT1;Ind − VExcess) = √(0.012² + 0.009² + 0.03²) = 0.034 mL
Volume VT2
calibration: 0.03/√6 = 0.012 mL
temperature: 15 × 2.1×10⁻⁴ × 4/√3 = 0.007 mL
⇒ u(VT2) = √(0.012² + 0.007²) = 0.014 mL

Volume VT1
calibration: 0.03/√6 = 0.012 mL
temperature: 19 × 2.1×10⁻⁴ × 4/√3 = 0.009 mL
⇒ u(VT1) = √(0.012² + 0.009²) = 0.015 mL

Repeatability

The quality log of the triple determination shows a mean long term standard deviation of the experiment of 0.001 (as RSD). It is not recommended to use the actual standard deviation obtained from the three determinations because this value has itself an uncertainty of 52 %. The standard deviation of 0.001 is divided by the square root of 3 to obtain the standard uncertainty of the triple determination (three independent measurements):

Rep = 0.001/√3 = 0.00058 (as RSD)

Volume VHCl
calibration: 0.02/√6 = 0.008 mL
temperature: 15 × 2.1×10⁻⁴ × 4/√3 = 0.007 mL
⇒ u(VHCl) = √(0.008² + 0.007²) = 0.01 mL

Molar mass MKHP (evaluated above)

All the values of the uncertainty components are summarised in Table A3.4. The combined standard uncertainty is 0.00016 mol L⁻¹, which is a very modest reduction due to the triple determination. The comparison of the uncertainty contributions in the histogram, shown in Figure A3.7, highlights some of the reasons for that result. Though the repeatability contribution is much reduced, the volumetric uncertainty contributions remain, limiting the improvement.

Figure A3.7: Replicated Acid-base Titration values and uncertainties [bar chart comparing single-run and replicated contributions for V(HCl), M(KHP), V(T1), P(KHP), m(KHP), Repeatability and c(HCl)]
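The preference for the long-term RSD over the three-run standard deviation can be illustrated numerically: the relative uncertainty of a standard deviation estimated from n results is approximately 1/√(2(n−1)), i.e. about 50 % for n = 3, in line with the 52 % quoted above (which follows from the exact distribution). A sketch:

```python
import math

def rel_u_of_sd(n):
    """Approximate relative uncertainty of an SD estimated from n results."""
    return 1 / math.sqrt(2 * (n - 1))

print(round(rel_u_of_sd(3), 2))  # 0.5 -> the SD itself is ~50 % uncertain

# Standard uncertainty of the mean of three independent determinations,
# using the long-term RSD of 0.001 instead of the observed SD:
rep = 0.001 / math.sqrt(3)
print(round(rep, 5))             # 0.00058
```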
[Cause and effect diagram: branches Fhom (homogeneity), Recovery (bias), Precision, Iref and msample (calibration: linearity for m(gross) and m(tare)), feeding P(op)]
A4.1 Introduction

This example illustrates the way in which in-house validation data can be used to quantify the measurement uncertainty. The aim of the measurement is to determine the amount of an organophosphorus pesticide residue in bread. The validation scheme and experiments establish performance by measurements on spiked samples. It is assumed the uncertainty due to any difference in response of the measurement to the spike and the analyte in the sample is small compared with the total uncertainty on the result.

A4.2 Step 1: Specification

The measurand is the mass fraction of pesticide in a bread sample. The detailed specification of the measurand for more extensive analytical methods is best done by a comprehensive description of the different stages of the analytical method and by providing the equation of the measurand.

… the hexane layer through sodium sulphate column.

vi) Concentration of the washed extract by gas blown-down of extract to near dryness.

vii) Dilution to standard volume Vop (approximately 2 mL) in a 10 mL graduated tube.

viii) Measurement: Injection and GC measurement of 5 µL of sample extract to give the peak intensity Iop.

ix) Preparation of an approximately 5 µg mL-1 standard (actual mass concentration cref).

x) GC calibration using the prepared standard and injection and GC measurement of 5 µL of the standard to give a reference peak intensity Iref.
Calculation

The mass concentration cop in the final sample extract is given by

cop = cref · (Iop / Iref)  [µg mL⁻¹]

and the estimate Pop of the level of pesticide in the bulk sample (in mg kg-1) is given by

Pop = (cop · Vop) / (Rec · msample)  [mg kg⁻¹]

or, substituting for cop,

Pop = (Iop · cref · Vop) / (Iref · Rec · msample)  [mg kg⁻¹]

where
Pop      Mass fraction of pesticide in the sample [mg kg⁻¹]
Iop      Peak intensity of the sample extract
cref     Mass concentration of the reference standard [µg mL⁻¹]
Vop      Final volume of the extract [mL]
Iref     Peak intensity of the reference standard
Rec      Recovery
msample  Mass of the investigated sub-sample [g]

Scope

The analytical method is applicable to a small range of chemically similar pesticides at levels between 0.01 and 2 mg kg-1 with different kinds of bread as matrix.

A4.3 Step 2: Identifying and analysing uncertainty sources

The identification of all relevant uncertainty sources for such a complex analytical procedure is best done by drafting a cause and effect diagram. The parameters in the equation of the measurand are represented by the main branches of the diagram. Further factors are added to the diagram, considering each step in the analytical procedure (A4.2), until the contributory factors become sufficiently remote.

The sample inhomogeneity is not a parameter in the original equation of the measurand, but it appears to be a significant effect in the analytical procedure. A new branch, F(hom), representing the sample inhomogeneity is accordingly added to the cause and effect diagram (Figure A4.5).

Finally, the uncertainty branch due to the inhomogeneity of the sample has to be included in the calculation of the measurand. To show the effect of uncertainties arising from that source clearly, it is useful to write
Figure A4.5: Cause and effect diagram with added main branch for sample inhomogeneity [main branches Iop, cref (Purity(ref), m(ref), V(ref), dilution), Vop, Iref, Rec, msample (m(gross), m(tare)) and Fhom, each with associated calibration, linearity, sensitivity, temperature and precision effects, feeding Pop]
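The measurand equation for Pop is straightforward to evaluate; all of the input numbers below are hypothetical placeholders (not data from the validation study), chosen only to show the arithmetic and the unit bookkeeping:

```python
# P_op = I_op * c_ref * V_op / (I_ref * Rec * m_sample)  [mg/kg]
# All values are hypothetical, for illustration only.

i_op = 0.82      # peak intensity, sample extract (arbitrary units)
i_ref = 1.00     # peak intensity, reference standard
c_ref = 5.0      # ug/mL, reference standard concentration
v_op = 2.0       # mL, final extract volume
rec = 0.90       # recovery (mean recovery for bread in the study)
m_sample = 24.0  # g, sub-sample mass

# ug/mL * mL / g = ug/g = mg/kg
p_op = i_op * c_ref * v_op / (i_ref * rec * m_sample)
print(round(p_op, 2))  # 0.38
```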
Figure A4.6: Cause and effect diagram after rearrangement to accommodate the data of the validation study [branches Fhom, Recovery, Iref and msample (calibration: linearity for m(gross) and m(tare)), feeding Pop]
variation under intermediate precision conditions. That is, the precision is treated as a multiplicative factor FI like the homogeneity. This form is chosen for convenience in calculation, as will be seen below.

The evaluation of the different effects is now considered.

1. Precision study

The overall run to run variation (precision) of the analytical procedure was performed with a number of duplicate tests (same homogenised sample, complete extraction/determination procedure repeated in two different runs) for typical organophosphorus pesticides found in different bread samples. The results are collected in Table A4.2.

The normalised difference data (the difference divided by the mean) provides a measure of the overall run to run variability (intermediate precision). To obtain the estimated relative standard uncertainty for single determinations, the standard deviation of the normalised differences is taken and divided by √2 to correct from a standard deviation for pairwise differences to the standard uncertainty for the single values. This gives a value for the standard uncertainty due to run to run variation of the overall analytical process, including run to run recovery variation but excluding homogeneity effects, of

0.382/√2 = 0.27

NOTE: At first sight, it may seem that duplicate tests provide insufficient degrees of freedom. But it is not the goal to obtain very accurate numbers for the precision of the analytical process for one specific pesticide in one special kind of bread. It is more important in this study to test a wide variety of different materials (different bread types in this case) and analyte levels, giving a representative selection of typical organophosphorus pesticides. This is done in the most efficient way by duplicate tests on many materials, providing (for the precision estimate) approximately one degree of freedom for each material studied in duplicate. This gives a total of 15 degrees of freedom.

2. Bias study

The bias of the analytical procedure was investigated during the in-house validation study using spiked samples (homogenised samples were
split and one portion spiked). Table A4.3 collects the results of a long term study of spiked samples of various types.

The relevant line (marked in grey) is the "bread" entry, which shows a mean recovery for forty-two samples of 90 %, with a standard deviation (s) of 28 %. The standard uncertainty was calculated as the standard deviation of the mean, u(Rec) = 0.28/√42 = 0.0432.

A Student's t test is used to determine whether the mean recovery is significantly different from 1.0. The test statistic t is calculated using the following equation:

t = (1 − Rec)/u(Rec) = (1 − 0.9)/0.0432 = 2.31

This value is compared with the 2-tailed critical value tcrit, for n−1 degrees of freedom at 95 % confidence (where n is the number of results used to estimate Rec). If t is greater than or equal to the critical value tcrit, then Rec is significantly different from 1:

t = 2.31 ≥ tcrit,41 ≅ 2.021

In this example a correction factor (1/Rec) is being applied, and therefore Rec is explicitly included in the calculation of the result.

3. Other sources of uncertainty

The cause and effect diagram in Figure A4.7 shows which other sources of uncertainty are (1) adequately covered by the precision data, (2) covered by the recovery data or (3) have to be further examined and, where necessary, included in the calculation of the measurement uncertainty.

All balances and the important volumetric measuring devices are under regular control. The precision and recovery studies take into account the influence of the calibration of the different volumetric measuring devices, because various volumetric flasks and pipettes were used during the investigation. The extensive variability studies, which lasted for more than half a year, also cover the influence of environmental temperature on the result. This leaves only the reference material purity, possible nonlinearity in GC response (represented by the 'calibration' terms for Iref and Iop in the diagram), and the sample homogeneity as additional components requiring study.

The purity of the reference standard is given by the manufacturer as 99.53 % ±0.06 %. The purity
[Figure A4.7: cause and effect diagram with each source annotated (1)-(3); labels: m(gross), m(tare), Linearity, Calibration (2)/(3), Fhom (3), Recovery (2), Iref, msample]

(1) Contribution (FI in equation A4.1) included in the relative standard deviation calculated from the intermediate precision study of the analytical procedure.
(2) Considered during the bias study of the analytical procedure.
(3) To be considered during the evaluation of the other sources of uncertainty.
is potentially an additional uncertainty source, with a standard uncertainty of 0.0006/√3 = 0.00035 (rectangular distribution). But the contribution is so small (compared, for example, to the precision estimate) that it is clearly safe to neglect it.

Linearity of response to the relevant organophosphorus pesticides within the given concentration range is established during validation studies. In addition, with multi-level studies of the kind indicated in Table A4.2 and Table A4.3, nonlinearity would contribute to the observed precision; the in-house validation study has shown that this is not the case. No additional allowance is required.

The homogeneity of the bread sub-sample is the last remaining uncertainty source. No literature data were available on the distribution of trace organic components in bread products, despite an extensive literature search (at first sight this is surprising, but most food analysts attempt homogenisation rather than evaluate inhomogeneity separately). Nor was it practical to measure homogeneity directly. The contribution has therefore been estimated on the basis of the sampling method used.

To aid the estimation, a number of feasible pesticide residue distribution scenarios were considered, and a simple binomial distribution used to calculate the standard uncertainty for the total included in the analysed sample (see section A4.6). The scenarios, and the calculated relative standard uncertainties in the amount of pesticide in the final sample, were:

Scenario (a) Residue distributed on the top surface only: 0.58.

Scenario (b) Residue distributed evenly over the surface only: 0.20.

Scenario (c) Residue distributed evenly through the sample, but reduced in concentration by evaporative loss or decomposition close to the surface: 0.05-0.10 (depending on the "surface layer" thickness).

Scenario (a) is specifically catered for by proportional sampling or complete homogenisation (see section A4.2, Procedure paragraph i). It would only arise in the case of decorative additions (whole grains) added to one
surface. Scenario (b) is therefore considered the likely worst case. Scenario (c) is considered the most probable, but cannot be readily distinguished from (b). On this basis, the value of 0.20 was chosen.

NOTE: For more details on modelling inhomogeneity see the last section of this example.

A4.5 Step 4: Calculating the combined standard uncertainty

During the in-house validation study of the analytical procedure the intermediate precision, the bias and all other feasible uncertainty sources were thoroughly investigated. Their values and uncertainties are collected in Table A4.4.

The relative values are combined because the model (equation A4.1) is entirely multiplicative:

uc(Pop)/Pop = √(0.27² + 0.048² + 0.2²) = 0.34

⇒ uc(Pop) = 0.34 × Pop

The spreadsheet for this case takes the form shown in Table A4.5. Note that the spreadsheet calculates an absolute uncertainty (0.377) for a nominal corrected result of 1.1111, giving a relative value of 0.377/1.1111 = 0.34.

The relative sizes of the three different contributions can be compared using a histogram. Figure A4.8 shows the values |u(y,xi)| taken from Table A4.5.

The precision is the largest contribution to the measurement uncertainty. Since this component is derived from the overall variability of the method, further experiments would be needed to show where improvements could be made. For example, the uncertainty could be reduced significantly by homogenising the whole loaf before taking a sample.

The expanded uncertainty U(Pop) is calculated by multiplying the combined standard uncertainty by a coverage factor of 2 to give:

U(Pop) = 2 × 0.34 × Pop = 0.68 × Pop
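The numeric chain of this example (significance test on the mean recovery, quadrature combination of the relative uncertainties for the multiplicative model, and expansion with k = 2) can be sketched in Python. The function names are illustrative, not from the guide; the figures are those quoted above.

```python
import math

def recovery_significance(rec_mean, s_rec, n, t_crit):
    """Test whether a mean recovery differs significantly from 1.0.
    u(Rec) = s / sqrt(n); t = |1 - Rec| / u(Rec) is compared with the
    two-tailed critical value for n - 1 degrees of freedom."""
    u_rec = s_rec / math.sqrt(n)
    t = abs(1.0 - rec_mean) / u_rec
    return t, u_rec, t >= t_crit

def combined_relative_uncertainty(rel_components):
    """Quadrature combination of relative standard uncertainties,
    valid for a purely multiplicative model such as equation A4.1."""
    return math.sqrt(sum(u ** 2 for u in rel_components))

# Figures from the bread example: mean recovery 90 % (s = 28 %, n = 42)
t, u_rec, significant = recovery_significance(0.90, 0.28, 42, t_crit=2.021)
# t is about 2.31 >= 2.021, so the correction factor 1/Rec is applied

# precision, bias (relative uncertainty of the 1/Rec correction), homogeneity
u_rel = combined_relative_uncertainty([0.27, 0.048, 0.20])  # about 0.34
U_rel = 2 * u_rel  # expanded uncertainty, coverage factor k = 2
```

The same pattern applies to any result whose model is a product of factors: work in relative terms, combine in quadrature, expand last.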
[Figure A4.8: histogram comparing the contributions |u(y,xi)| for Homogeneity, Bias, Precision and P(op)]

Assuming that all of the analyte in a sample can be extracted for analysis irrespective of its state, the worst case for inhomogeneity is the situation where some part or parts of a sample contain all
⇒ σ = √(2.08 l1²) = 1.44 l1 ⇒ RSD = σ/µ = 0.33

⇒ RSD = σ/µ = 0.58

NOTE: To calculate the level X in the entire sample, µ is multiplied back up by 432/15, giving the mean estimate of X.

However, if the loss extends to a depth less than the size of the portion removed, as would be expected, each portion contains some material; l1 and l2 would therefore both be non-zero. Taking the case where all outer portions contain 50 %
"centre" and 50 % "outer" parts of the sample:

l1 = 2 × l2 ⇒ l1 = X/296

µ = 15 × 0.37 × (l1 − l2) + 15 × l2 = 15 × 0.37 × l2 + 15 × l2 = 20.6 l2

σ² = 15 × 0.37 × (1 − 0.37) × (l1 − l2)² = 3.5 l2²

giving an RSD of 1.87/20.6 = 0.09.

In the current model, this corresponds to a depth of 1 cm through which material is lost. Examination of typical bread samples shows crust thickness typically of 1 cm or less, and taking this to be the depth to which the material of interest is lost (crust formation itself inhibits loss below this depth), it follows that realistic variants on scenario (c) will give values of σ/µ not above 0.09.

NOTE: In this case, the reduction in uncertainty arises because the inhomogeneity is on a smaller scale than the portion taken for homogenisation. In general, this will lead to a reduced contribution to uncertainty. It follows that no additional modelling need be done for cases where larger numbers of small inclusions (such as grains incorporated in the bulk of a loaf) contain disproportionate amounts of the material of interest. Provided that the probability of such an inclusion being incorporated into the portions taken for homogenisation is large enough, the contribution to uncertainty will not exceed any already calculated in the scenarios above.
Quantifying Uncertainty Example A5
[Figures: cause and effect diagram for the method (c0 with calibration curve; VL with Filling, Temperature, Calibration, Reading; facid; ftime; ftemp; aV with length 1, length 2, area; d; leading to Result r) and histogram of the contributions |u(y,xi)| (mg dm-2) × 1000 for f(temp), f(time), f(acid), a(V), V(L) and c(0), on a scale of 0 to 2]
appropriate wavelengths and, in this example, a least squares calibration curve.

vi) The result is calculated (see below) and reported as the amount of lead and/or cadmium in the total volume of the extracting solution, expressed in milligrams of lead or cadmium per square decimetre of surface area for category 1 articles, or in milligrams of lead or cadmium per litre of the volume for category 2 and 3 articles.

NOTE: Complete copies of BS 6748:1986 can be obtained by post from BSI Customer Services, 389 Chiswick High Road, London W4 4AL, England; +44 (0) 208 996 9001.

aV is the surface area of the liquid meniscus [dm2]

d is the factor by which the sample was diluted

The first part of the above equation for the measurand is used to draft the basic cause and effect diagram (Figure A5.5).

Figure A5.5: Initial cause and effect diagram
[Branches: c0 (calibration curve); VL (Filling, Temperature, Calibration); leading to the result]
Figure A5.6: Cause and effect diagram with added hidden assumptions (correction factors)
[Branches: c0 (calibration curve); VL (Filling, Temperature, Calibration, Reading); facid; ftime; ftemp; aV (length 1, length 2, area); d; leading to Result r]
A5.4 Step 3: Quantifying uncertainty sources

The aim of this step is to quantify the uncertainty arising from each of the previously identified sources. This can be done either by using experimental data or from well founded assumptions.

Dilution factor d

For the current example, no dilution of the leaching solution is necessary; therefore no uncertainty contribution has to be accounted for.

Volume VL

Filling: The empirical method requires the vessel to be filled 'to within 1 mm from the brim' or, for a shallow article with sloping rim, within 6 mm from the edge. For a typical, approximately cylindrical, drinking or kitchen utensil, 1 mm will represent about 1 % of the height of the vessel. The vessel will therefore be 99.5 ±0.5 % filled (i.e. VL will be approximately 0.995 ±0.005 of the vessel's volume).

Temperature: The temperature of the acetic acid has to be 22 ±2 °C. This temperature range leads to an uncertainty in the determined volume, due to the considerably larger volume expansion of the liquid compared with the vessel. The standard uncertainty of a volume of 332 mL, assuming a rectangular temperature distribution, is

(2.1 × 10-4 × 332 × 2)/√3 = 0.08 mL

Reading: The volume VL used is to be recorded to within 2 %; in practice, use of a measuring cylinder allows an inaccuracy of about 1 % (i.e. 0.01VL). The standard uncertainty is calculated assuming a triangular distribution.

Calibration: The volume is calibrated according to the manufacturer's specification within the range of ±2.5 mL for a 500 mL measuring cylinder. The standard uncertainty is obtained assuming a triangular distribution.

For this example a volume of 332 mL is found and the four uncertainty components are combined accordingly:

u(VL) = √[(0.005 × 332/√6)² + (0.08)² + (0.01 × 332/√6)² + (2.5/√6)²] = 1.83 mL

Cadmium concentration c0

The amount of leached cadmium is calculated using a manually prepared calibration curve. For this purpose five calibration standards, with concentrations of 0.1 mg L-1, 0.3 mg L-1, 0.5 mg L-1, 0.7 mg L-1 and 0.9 mg L-1, were prepared from a 500 ±0.5 mg L-1 cadmium reference standard. The linear least squares fitting procedure used assumes that the uncertainties of the values of the abscissa are considerably smaller than the uncertainty on the values of the ordinate.
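The four-component combination for u(VL) can be sketched as below. The distribution assignments (triangular, divisor √6, for filling, reading and calibration; rectangular, divisor √3, for the temperature term) follow the text; the helper name is illustrative.

```python
import math

def u_volume_ml(v_ml):
    """Standard uncertainty (mL) of the leachate volume from four
    components: filling (99.5 +/- 0.5 % filled), temperature (volume
    expansion over +/- 2 degC), reading (about 1 % of the volume) and
    calibration (+/- 2.5 mL). Triangular half-widths are divided by
    sqrt(6), the rectangular temperature half-width by sqrt(3)."""
    u_fill = 0.005 * v_ml / math.sqrt(6)
    u_temp = 2.1e-4 * v_ml * 2 / math.sqrt(3)
    u_read = 0.01 * v_ml / math.sqrt(6)
    u_cal = 2.5 / math.sqrt(6)
    return math.sqrt(u_fill ** 2 + u_temp ** 2 + u_read ** 2 + u_cal ** 2)

u_vl = u_volume_ml(332.0)  # about 1.83 mL for the 332 mL found here
```

Keeping each component as its own named term makes it easy to see that reading and calibration dominate, so tightening the filling tolerance would gain little.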
The results of the fit are:

     Value    Standard deviation
B1   0.2410   0.0050
B0   0.0087   0.0029

with a correlation coefficient r of 0.997. The fitted line is shown in Figure A5.7. The residual standard deviation S is 0.005486.

The standard uncertainty of the predicted concentration is:

u(c0) = (S/B1) × √(1/p + 1/n + (c0 − c̄)²/Sxx)
      = (0.005486/0.241) × √(1/2 + 1/15 + (0.26 − 0.5)²/1.2)

⇒ u(c0) = 0.018 mg L-1

(where p is the number of measurements made to determine c0 and n the number of calibration measurements)
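The calibration-curve uncertainty formula can be checked with a short sketch. The function name is illustrative; p is the number of replicate measurements of the unknown, n the number of calibration measurements, c_mean the mean calibration concentration and Sxx the sum of squared deviations of the calibration concentrations from that mean.

```python
import math

def u_from_calibration(S, B1, p, n, c0, c_mean, Sxx):
    """Standard uncertainty of a concentration predicted from a linear
    calibration line:
        u(c0) = (S / B1) * sqrt(1/p + 1/n + (c0 - c_mean)**2 / Sxx)
    where S is the residual standard deviation and B1 the slope."""
    return (S / B1) * math.sqrt(1.0 / p + 1.0 / n + (c0 - c_mean) ** 2 / Sxx)

# Values from this example: duplicate determinations (p = 2) against
# 15 calibration measurements
u_c0 = u_from_calibration(S=0.005486, B1=0.241, p=2, n=15,
                          c0=0.26, c_mean=0.5, Sxx=1.2)  # about 0.018 mg/L
```

The (c0 − c_mean)² term shows why the uncertainty grows towards the ends of the calibrated range: predictions furthest from the mean calibration concentration are the least well anchored by the fit.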
Figure A5.7: Linear least squares fit and uncertainty interval for duplicate determinations
[Plot of absorbance A (0.00 to 0.25) against concentration]
with the residual standard deviation S given by

S = √( ∑j=1..n [Aj − (B0 + B1 × cj)]² / (n − 2) )

a standard uncertainty in area of 5.73 × 0.05/1.96 = 0.146 dm2.
Table A5.3: Intermediate values and uncertainties for leachable cadmium analysis

       Description                                     Value        Standard uncertainty u(x)   Relative standard uncertainty u(x)/x
c0     Content of cadmium in the extraction solution   0.26 mg L-1  0.018 mg L-1                0.069
VL     Volume of the leachate                          0.332 L      0.0018 L                    0.0054
aV     Surface area of the liquid                      5.73 dm2     0.19 dm2                    0.033
facid  Influence of the acid concentration             1.0          0.0008                      0.0008
ftime  Influence of the duration                       1.0          0.001                       0.001
ftemp  Influence of temperature                        1.0          0.06                        0.06
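Assuming the model is purely multiplicative (as the correction-factor form of the f terms suggests), the relative standard uncertainties in Table A5.3 combine in quadrature; a minimal sketch:

```python
import math

# Relative standard uncertainties u(x)/x taken from Table A5.3
relative_uncertainties = {
    "c0": 0.069,       # cadmium content of the extraction solution
    "VL": 0.0054,      # volume of the leachate
    "aV": 0.033,       # surface area of the liquid
    "f_acid": 0.0008,  # influence of the acid concentration
    "f_time": 0.001,   # influence of the duration
    "f_temp": 0.06,    # influence of temperature
}

# Quadrature combination for a multiplicative model
u_rel = math.sqrt(sum(u ** 2 for u in relative_uncertainties.values()))
# about 0.097, dominated by the c0 and f_temp terms
```

Printing the squared terms alongside the total is a convenient way to reproduce the kind of contribution histogram shown for this example.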
[Figure: histogram of the contributions |u(y,xi)| (mg dm-2) × 1000 for f(temp), f(time), f(acid), a(V), V(L) and c(0), on a scale of 0 to 2]
Quantifying Uncertainty Example A6
Summary

Goal

The determination of crude fibre by a regulatory standard method.

Measurement procedure

The measurement procedure is a standardised procedure involving the general steps outlined in Figure A6.1. These are repeated for a blank sample to obtain a blank correction.

Figure A6.1: Fibre determination
[Flow chart: grind and weigh sample → acid digestion → alkaline digestion → dry and weigh residue → ash and weigh residue → RESULT]

Measurand

The fibre content as a percentage of the sample by weight, Cfibre, is given by:

Cfibre = (b − c) × 100 / a

where:

a is the mass (g) of the sample (approximately 1 g);

b is the loss of mass (g) after ashing during the determination;

c is the loss of mass (g) after ashing during the blank test.

Identification of uncertainty sources

A full cause and effect diagram is provided as Figure A6.9.

Quantification of uncertainty components

Laboratory experiments showed that the method was performing in house in a manner that fully justified adoption of collaborative study reproducibility data. No other contributions were significant in general. At low levels it was necessary to add an allowance for the specific drying procedure used. Typical resulting uncertainty estimates are tabulated below (as standard uncertainties) (Table A6.1).

Table A6.1
Fibre content (% m/m)   Standard uncertainty (% m/m)   Relative standard uncertainty
5                       0.4                            0.08
10                      0.6                            0.06
A6.1 Introduction

Crude fibre is defined in the method scope as the amount of fat-free organic substances which are insoluble in acid and alkaline media. The procedure is standardised and its results are used directly. Changes in the procedure change the measurand; this is accordingly an example of an operationally defined (empirical) method.

Collaborative trial data (repeatability and reproducibility) were available for this statutory method. The precision experiments described were planned as part of the in-house evaluation of the method performance. There is no suitable reference material (i.e. certified by the same method) available for this method.

A6.2 Step 1: Specification

The specification of the measurand for more extensive analytical methods is best done by a comprehensive description of the different stages of the analytical method and by providing the equation for the measurand.

Procedure

The procedure, a complex digestion, filtration, drying, ashing and weighing procedure, which is also repeated for a blank crucible, is summarised in Figure A6.2. The aim is to digest most components, leaving behind all the undigested material. The organic material is ashed, leaving an inorganic residue. The difference between the dry organic/inorganic residue weight and the ashed residue weight is the "fibre content". The main stages are:

i) Grind the sample to pass through a 1 mm sieve.

ii) Weigh 1 g of the sample into a weighed crucible.

iii) Add a set of acid digestion reagents at stated concentrations and volumes. Boil for a stated, standardised time, filter and wash the residue.

iv) Add standard alkali digestion reagents and boil for the required time, filter, wash and rinse with acetone.

v) Dry to constant weight at a standardised temperature ("constant weight" is not defined within the published method; nor are other drying conditions such as air circulation or dispersion of the residue).

vi) Record the dry residue weight.

vii) Ash at a stated temperature to "constant weight" (in practice realised by ashing for a set time decided after in house studies).

viii) Weigh the ashed residue and calculate the fibre content by difference, after subtracting the residue weight found for the blank crucible.

Measurand

The fibre content as a percentage of the sample by weight, Cfibre, is given by:

Cfibre = (b − c) × 100 / a

where:

a is the mass (g) of the sample. Approximately 1 g of sample is taken for analysis.

b is the loss of mass (g) after ashing during the determination.

c is the loss of mass (g) after ashing during the blank test.

A6.3 Step 2: Identifying and analysing uncertainty sources

A range of sources of uncertainty was identified. These are shown in the cause and effect diagram for the method (see Figure A6.9). This diagram was simplified to remove duplication following the procedures in Appendix D; this, together with removal of insignificant components (particularly the balance calibration and linearity), leads to the simplified cause and effect diagram in Figure A6.10.

Since prior collaborative and in-house study data were available for the method, the use of these data is closely related to the evaluation of the different contributions to uncertainty and is accordingly discussed further below.
Figure A6.2: Flow diagram illustrating the stages in the regulatory method for the determination of fibre in animal feeding stuffs

A6.4 Step 3: Quantifying uncertainty components

Collaborative trial results

The method has been the subject of a collaborative trial. Five different feeding stuffs representing typical fibre and fat concentrations were analysed in the trial. Participants in the trial carried out all stages of the method, including grinding of the samples. The repeatability and reproducibility estimates obtained from the trial are presented in Table A6.2.

As part of the in-house evaluation of the method, experiments were planned to evaluate the repeatability (within batch precision) for feeding stuffs with fibre concentrations similar to those of the samples analysed in the collaborative trial. The results are summarised in Table A6.2. Each estimate of in-house repeatability is based on 5 replicates.
Table A6.2: Summary of results from collaborative trial of the method and in-house repeatability check

The estimates of repeatability obtained in-house were comparable to those obtained from the collaborative trial. This indicates that the method precision in this particular laboratory is similar to that of the laboratories which took part in the collaborative trial. It is therefore acceptable to use the reproducibility standard deviation from the collaborative trial in the uncertainty budget for the method. To complete the uncertainty budget we need to consider whether there are any other effects not covered by the collaborative trial which need to be addressed. The collaborative trial covered different sample matrices and the pre-treatment of samples, as the participants were supplied with samples which required grinding prior to analysis. The uncertainties associated with matrix effects and sample pre-treatment do not therefore require any additional consideration. Other parameters which affect the result relate to the extraction and drying conditions used in the method. Although the reproducibility standard deviation will normally include the effect of variation in these parameters, they were investigated separately to ensure the laboratory bias was under control (i.e., small compared to the reproducibility standard deviation). The parameters considered are discussed below.

Loss of mass on ashing

As there is no appropriate reference material for this method, in-house bias has to be assessed by considering the uncertainties associated with individual stages of the method. Several factors will contribute to the uncertainty associated with the loss of mass after ashing:

acid concentration;
alkali concentration;
acid digestion time;
alkali digestion time;
drying temperature and time;
ashing temperature and time.

Reagent concentrations and digestion times

The effects of acid concentration, alkali concentration, acid digestion time and alkali digestion time have been studied in previously published papers. In these studies, the effect of changes in the parameter on the result of the analysis was evaluated. For each parameter the sensitivity coefficient (i.e., the rate of change in the final result with changes in the parameter) and the uncertainty in the parameter were calculated.

The uncertainties given in Table A6.3 are small compared to the reproducibility figures presented in Table A6.2. For example, the reproducibility standard deviation for a sample containing 2.3 % m/m fibre is 0.293 % m/m. The uncertainty associated with variations in the acid digestion time is estimated as 0.021 % m/m (i.e., 2.3 × 0.009). We can therefore safely neglect the uncertainties associated with variations in these method parameters.

Drying temperature and time

No prior data were available. The method states that the sample should be dried at 130 °C to
"constant weight". In this case the sample is dried for 3 hours at 130 °C and then weighed. It is then dried for a further hour and re-weighed. Constant weight is defined in this laboratory as a change of less than 2 mg between successive weighings. In an in-house study, replicate samples of four feeding stuffs were dried at 110, 130 and 150 °C and weighed after 3 and 4 hours drying time. In the majority of cases, the weight change between 3 and 4 hours at each drying temperature was less than 2 mg. This was therefore taken as the worst case estimate of the uncertainty in the weight change on drying. The range ±2 mg describes a rectangular distribution, which is converted to a standard uncertainty by dividing by √3. The uncertainty in the weight recorded after drying to constant weight is therefore 0.00115 g. The method specifies a sample weight of 1 g. For a 1 g sample, the uncertainty in drying to constant weight corresponds to a standard uncertainty of 0.115 % m/m in the fibre content. This source of uncertainty is independent of the fibre content of the sample. There will therefore be a fixed contribution of 0.115 % m/m to the uncertainty budget for each sample, regardless of the concentration of fibre in the sample. At all fibre concentrations, this uncertainty is smaller than the reproducibility standard deviation, and for all but the lowest fibre concentrations is less than 1/3 of the sR value. Again, this source of uncertainty can usually be neglected. However, for low fibre concentrations this uncertainty is more than 1/3 of the sR value, so an additional term should be included in the uncertainty budget (see Table A6.4).

Ashing temperature and time

The method requires the sample to be ashed at 475 to 500 °C for at least 30 minutes. A published study on the effect of ashing conditions involved determining fibre content at a number of different ashing temperature/time combinations, ranging from 450 °C for 30 minutes to 650 °C for 3 hours. No significant difference was observed between the fibre contents obtained under the different conditions. The effect on the final result of small variations in ashing temperature and time can therefore be assumed to be negligible.

Loss of mass after blank ashing

No experimental data were available for this parameter. However, the uncertainties arise primarily from weighing; the effects of variations in this parameter are therefore likely to be small and well represented in the collaborative study.

A6.5 Step 4: Calculating the combined standard uncertainty

This is an example of an empirical method for which collaborative trial data were available. The in-house repeatability was evaluated and found to be similar to that predicted by the collaborative trial.
Fibre content (% m/m)   Standard uncertainty (% m/m)   Relative standard uncertainty
5                       0.4                            0.08
10                      0.6                            0.06

Fibre content (% m/m)   Expanded uncertainty (% m/m)   Relative expanded uncertainty (%)
2.5                     0.62                           25
5                       0.8                            16
10                      1.2                            12
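The drying-to-constant-weight allowance used in this example is the standard conversion of a rectangular range (here ±2 mg) to a standard uncertainty by dividing the half-width by √3; a minimal sketch:

```python
import math

def u_standard_from_rectangular(half_width):
    """Convert the half-width of a rectangular distribution to a
    standard uncertainty (divide by sqrt(3))."""
    return half_width / math.sqrt(3)

# Worst-case +/- 2 mg (0.002 g) change on drying to 'constant weight'
u_dry_g = u_standard_from_rectangular(0.002)  # about 0.00115 g

# For the specified 1 g sample this is a fixed contribution of
# about 0.115 % m/m to the fibre content, independent of fibre level
u_dry_percent = 100 * u_dry_g / 1.0
```

The same one-line conversion (divisor √3 for rectangular, √6 for triangular) recurs throughout these examples whenever only a tolerance range is known.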
Figure A6.9: Cause and effect diagram for the determination of fibre in animal feeding stuffs

Figure A6.10: Simplified cause and effect diagram
[Branches: precision (extraction, ashing, weighing, sample weight); weight of crucible before ashing and weight of crucible after ashing (drying temp and time, ashing temp and time); acid digest*; alkali digest*; leading to Crude Fibre (%)]
Quantifying Uncertainty Example A7
A7.1 Introduction

This example illustrates how the uncertainty concept can be applied to a measurement of the amount content of lead in a water sample using Isotope Dilution Mass Spectrometry (IDMS) and Inductively Coupled Plasma Mass Spectrometry (ICP-MS).

General introduction to Double IDMS

IDMS is one of the techniques recognised by the Comité consultatif pour la quantité de matière (CCQM) as having the potential to be a primary method of measurement, and therefore a well defined expression describing how the measurand is calculated is available. In the simplest case of isotope dilution using a certified spike, which is an enriched isotopic reference material, isotope ratios in the spike, the sample and a blend b of known masses of sample and spike are measured. The element amount content cx in the sample is given by:

cx = cy × (my/mx) × (Ky1·Ry1 − Kb·Rb)/(Kb·Rb − Kx1·Rx1) × ∑i(Kxi·Rxi)/∑i(Kyi·Ryi)    (1)

where cx and cy are the element amount contents in the sample and the spike respectively (the symbol c is used here instead of k for amount content1 to avoid confusion with K-factors and coverage factors k). mx and my are the masses of sample and spike respectively. Rx, Ry and Rb are the isotope amount ratios. The indexes x, y and b represent the sample, the spike and the blend respectively. One isotope, usually the most abundant in the sample, is selected and all isotope amount ratios are expressed relative to it. A particular pair of isotopes, the reference isotope and preferably the most abundant isotope in the spike, is then selected as the monitor ratio, e.g. n(208Pb)/n(206Pb). Rxi and Ryi are all the possible isotope amount ratios in the sample and the spike respectively. For the reference isotope, this ratio is unity. Kxi, Kyi and Kb are the correction factors for mass discrimination, for a particular isotope amount ratio, in sample, spike and blend respectively. The K-factors are measured using a certified isotopic reference material according to equation (2):

K = K0 + Kbias, where K0 = Rcertified/Robserved    (2)

where K0 is the mass discrimination correction factor at time 0 and Kbias is a bias factor coming into effect as soon as the K-factor is applied to correct a ratio measured at a different time during the measurement. Kbias also includes other possible sources of bias such as multiplier dead time correction, matrix effects etc. Rcertified is the certified isotope amount ratio taken from the certificate of an isotopic reference material and Robserved is the observed value of this isotopic reference material. In IDMS experiments using Inductively Coupled Plasma Mass Spectrometry (ICP-MS), mass fractionation will vary with time, which requires that all isotope amount ratios in equation (1) be individually corrected for mass discrimination.

Certified material enriched in a specific isotope is often unavailable. To overcome this problem, 'double' IDMS is frequently used. The procedure uses a less well characterised, isotopically enriched spiking material in conjunction with a certified material (denoted z) of natural isotopic composition. The certified, natural composition material acts as the primary assay standard. Two blends are used; blend b is a blend between sample and enriched spike, as in equation (1). To perform double IDMS a second blend, b', is prepared from the primary assay standard, with amount content cz, and the enriched material y. This gives an expression similar to equation (1):

cz = cy × (m'y/mz) × (Ky1·Ry1 − K'b·R'b)/(K'b·R'b − Kz1·Rz1) × ∑i(Kzi·Rzi)/∑i(Kyi·Ryi)    (3)

where cz is the element amount content of the primary assay standard solution and mz the mass of the primary assay standard when preparing the new blend. m'y is the mass of the enriched spike solution; K'b, R'b, Kz1 and Rz1 are the K-factors and ratios for the new blend and the assay standard respectively. The index z represents the assay standard.
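Equation (1) can be sketched directly in code. The function name is illustrative, and the numeric values in the usage lines are purely illustrative consistency-check inputs (all K-factors set to 1), not the measured data of this example.

```python
def idms_amount_content(cy, my, mx, Ky1, Ry1, Kb, Rb, Kx1, Rx1,
                        Kx, Rx, Ky, Ry):
    """Single IDMS, equation (1):
        cx = cy * (my/mx)
                * (Ky1*Ry1 - Kb*Rb) / (Kb*Rb - Kx1*Rx1)
                * sum_i(Kxi*Rxi) / sum_i(Kyi*Ryi)
    Kx/Rx and Ky/Ry are sequences running over all isotope amount
    ratios of the element in the sample and the spike (unity for the
    reference isotope)."""
    ratio_term = (Ky1 * Ry1 - Kb * Rb) / (Kb * Rb - Kx1 * Rx1)
    sum_term = (sum(k * r for k, r in zip(Kx, Rx))
                / sum(k * r for k, r in zip(Ky, Ry)))
    return cy * (my / mx) * ratio_term * sum_term

# Illustrative two-isotope check with all mass-discrimination factors = 1
cx = idms_amount_content(cy=1.0, my=1.0, mx=1.0,
                         Ky1=1.0, Ry1=10.0, Kb=1.0, Rb=2.0,
                         Kx1=1.0, Rx1=0.1,
                         Kx=[1.0, 1.0], Rx=[1.0, 0.1],
                         Ky=[1.0, 1.0], Ry=[1.0, 10.0])
```

Writing the model as one function also makes later uncertainty propagation straightforward, since each input can be perturbed in turn to obtain numerical sensitivity coefficients.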
Table A7.2. General procedure

Step  Description
1     Preparing the primary assay standard
2     Preparation of blends: b' and b
3     Measurement of isotope ratios
4     Calculation of the amount content of Pb in the sample, cx
5     Estimating the uncertainty in cx

Calculation of the Molar Mass

Due to natural variations in the isotopic composition of certain elements, e.g. Pb, the molar mass, M, for the primary assay standard has to be determined, since this will affect the amount content cz. Note that this is not the case when cz is expressed in mol g-1. The molar mass, M(E), for an element E is numerically equal to the atomic weight of element E, Ar(E). The atomic weight can be calculated according to the general expression:

Ar(E) = Σ(i=1..p) Ri·M(iE) / Σ(i=1..p) Ri    (6)

where the values Ri are all true isotope amount ratios for the element E and M(iE) are the tabulated nuclide masses.

Note that the isotope amount ratios in equation (6) have to be absolute ratios, that is, they have to be corrected for mass discrimination. With the use of proper indexes, this gives equation (7). For the calculation, nuclide masses, M(iE), were taken from literature values, while ratios, Rzi, and K0-factors, K0(zi), were measured (see Table A7.8). These values give

M(Pb, Assay 1) = Σ(i=1..p) K0(zi)·Rzi·M(iE) / Σ(i=1..p) K0(zi)·Rzi = 207.21034 g mol-1    (7)

Measurement of K-factors and isotope amount ratios

To correct for mass discrimination, a correction factor, K, is used as specified in equation (2). The K0-factor can be calculated using a reference material certified for isotopic composition. In this case, the isotopically certified reference material NIST SRM 981 was used to monitor a possible change in the K0-factor. The K0-factor is measured before and after the ratio it will correct. A typical sample sequence is: 1. (blank), 2. (NIST SRM 981), 3. (blank), 4. (blend 1), 5. (blank), 6. (NIST SRM 981), 7. (blank), 8. (sample), etc.

The blank measurements are not only used for blank correction; they are also used for monitoring the number of counts for the blank. No new measurement run was started until the blank count rate was stable and back to a normal level. Note that sample, blends, spike and assay standard were diluted to an appropriate amount content prior to the measurements. The results of the ratio measurements, calculated K0-factors and Kbias are summarised in Table A7.8.

Preparing the primary assay standard and calculating the amount content, cz

Two primary assay standards were produced, each from a different piece of metallic lead with a chemical purity of w=99.999 %. The two pieces came from the same batch of high purity lead. The pieces were dissolved in about 10 mL of 1:3 m/m HNO3:water under gentle heating and then further diluted. Two blends were prepared from each of these two assay standards. The values from one of the assays are described hereafter. 0.36544 g of lead, m1, was dissolved and diluted in aqueous HNO3 (0.5 mol L-1) to a total of d1=196.14 g. This solution is named Assay 1. A more diluted solution was needed, and m2=1.0292 g of Assay 1 was diluted in aqueous HNO3 (0.5 mol L-1) to a total mass of d2=99.931 g. This solution is named Assay 2. The amount content of Pb in Assay 2, cz, is then calculated according to equation (8):

cz = (m2/d2) · (m1·w/d1) · 1/M(Pb, Assay 1) = 9.2605×10-8 mol g-1 = 0.092605 µmol g-1    (8)

Preparation of the blends

The mass fraction of the spike is known to be roughly 20 µg Pb per g solution and the mass fraction of Pb in the sample is also known to be in this range. Table A7.3 shows the weighing data for the two blends used in this example.
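The arithmetic of equation (8) can be checked directly; the following Python sketch uses only the values quoted in the text above and reproduces the quoted amount content:

```python
# Check of equation (8) using the values quoted in the text.
m1, w, d1 = 0.36544, 0.99999, 196.14   # mass of lead (g), purity, total mass of Assay 1 (g)
m2, d2 = 1.0292, 99.931                # mass of Assay 1 taken (g), total mass of Assay 2 (g)
M_Pb = 207.21034                       # molar mass of Pb in Assay 1 (g/mol), from equation (7)

c_z = (m2 / d2) * (m1 * w / d1) / M_Pb     # amount content of Pb in Assay 2 (mol/g)
print(f"c_z = {c_z * 1e6:.6f} umol/g")     # 0.092605 umol/g, as in equation (8)
```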
Table A7.3

Blend            b                  b'
Solutions used   Spike    Sample    Spike    Assay 2
Parameter        my       mx        m'y      mz
Mass (g)         1.1360   1.0440    1.0654   1.1029

Table A7.4

                                     cx (µmol g-1)
Replicate 1 (our example)            0.053738
Replicate 2                          0.053621
Replicate 3                          0.053610
Replicate 4                          0.053822
Average                              0.05370
Experimental standard deviation (s)  0.0001

Measurement of the procedure blank cBlank

In this case, the procedure blank was measured using external calibration. A more exhaustive procedure would be to add an enriched spike to a blank and process it in the same way as the samples. In this example, only high purity reagents were used, which would lead to extreme ratios in the blends and consequent poor reliability for the enriched spiking procedure. The externally calibrated procedure blank was measured four times, and cBlank was found to be 4.5×10-7 µmol g-1, with a standard uncertainty of 4.0×10-7 µmol g-1 evaluated as type A.

Calculation of the unknown amount content cx

Inserting the measured and calculated data (Table A7.8) into equation (5) gives cx=0.053738 µmol g-1. The results from all four replicates are given in Table A7.4.

A7.3 Steps 2 and 3: Identifying and quantifying uncertainty sources

Strategy for the uncertainty calculation

If equations (2), (7) and (8) were to be included in the final IDMS equation (5), the sheer number of parameters would make the equation almost impossible to handle. To keep it simpler, the K0-factors and the amount content of the assay standard solution and their associated uncertainties are treated separately and then introduced into the IDMS equation (5). In this case this will not affect the final combined uncertainty of cx, and it is advisable to simplify for practical reasons.

For calculating the combined standard uncertainty, uc(cx), the values from one of the measurements, as described in A7.2, will be used. The combined uncertainty of cx will be calculated using the spreadsheet method described in Appendix E.

Uncertainty on the K-factors

i) Uncertainty on K0

K0 is calculated according to equation (2); using the values of Kx1 as an example gives for K0:

K0(x1) = Rcertified / Robserved = 2.1681 / 2.1699 = 0.9992    (9)

To calculate the uncertainty on K0 we first look at the certificate, where the certified ratio, 2.1681, has a stated uncertainty of 0.0008 based on a 95 % confidence interval. To convert an uncertainty based on a 95 % confidence interval to a standard uncertainty, we divide by 2. This gives a standard uncertainty of u(Rcertified)=0.0004. The observed amount ratio, Robserved=n(208Pb)/n(206Pb), has a standard uncertainty of 0.0025 (as RSD). For the K-factor, the combined uncertainty can be calculated as:

uc(K0(x1)) / K0(x1) = sqrt( (0.0004/2.1681)^2 + (0.0025)^2 ) = 0.002507    (10)

This clearly points out that the uncertainty contributions from the certified ratios are negligible. Henceforth, the uncertainties on the measured ratios, Robserved, will be used for the uncertainties on K0.

ii) Uncertainty on Kbias

This bias factor is introduced to account for possible deviations in the value of the mass discrimination factor. As can be seen in equation (2) above, there is a bias associated with every K-factor. The values of these biases are in our case not known, and a value of 0 is applied. An uncertainty is, of course, associated with every bias.
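The arithmetic of equations (9) and (10) can be reproduced in a few lines of Python, using only the values quoted above:

```python
from math import sqrt

# Equation (9): K0 from the certified and observed ratios for NIST SRM 981
R_certified = 2.1681          # certified n(208Pb)/n(206Pb) ratio
u_cert_95 = 0.0008            # stated uncertainty (95 % confidence interval)
R_observed = 2.1699           # observed ratio
rsd_observed = 0.0025         # standard uncertainty of R_observed, as RSD

K0_x1 = R_certified / R_observed            # = 0.9992, equation (9)

# Equation (10): combined relative standard uncertainty of K0
u_R_certified = u_cert_95 / 2               # 95 % CI half-width -> standard uncertainty
rel_u_K0 = sqrt((u_R_certified / R_certified) ** 2 + rsd_observed ** 2)  # = 0.002507
```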
Table A7.8
Appendix B. Definitions
General
B.1 Precision

Closeness of agreement between independent test results obtained under stipulated conditions [H.8].

NOTE 1  Precision depends only on the distribution of random errors and does not relate to the true value or the specified value.

NOTE 2  The measure of precision is usually expressed in terms of imprecision and computed as a standard deviation of the test results. Less precision is reflected by a larger standard deviation.

NOTE 3  "Independent test results" means results obtained in a manner not influenced by any previous result on the same or similar test object. Quantitative measures of precision depend critically on the stipulated conditions. Repeatability conditions and reproducibility conditions are particular sets of extreme stipulated conditions.

B.2 True value

B.3 Influence quantity

quantity that, in a direct measurement, does not affect the quantity that is actually measured, but affects the relation between the indication and the measurement result [H.7].

EXAMPLES

1. Frequency in the direct measurement with an ammeter of the constant amplitude of an alternating current.
2. Amount-of-substance concentration of bilirubin in a direct measurement of haemoglobin amount-of-substance concentration in human blood plasma.
3. Temperature of a micrometer used for measuring the length of a rod, but not the temperature of the rod itself, which can enter into the definition of the measurand.
4. Background pressure in the ion source of a mass spectrometer during a measurement of amount-of-substance fraction.
outcome. For the present purpose, this outcome is a particular analytical result ('d(EtOH)' in Figure D1). The 'branches' leading to the outcome are the contributory effects. The main branches are the parameters in the equation.

[Figure D1: cause-and-effect diagram for d(EtOH), with branches Temperature, Precision, Calibration and Lin. Bias]

[Tables summarising the Rectangular, Triangular and Normal distributions, each with columns: Form, Use when, Uncertainty]
Figure E2.1
A B C D E
1 u(p) u(q) u(r) u(s)
2
3 p p p p p
4 q q q q q
5 r r r r r
6 s s s s s
7
8 y=f(p,q,..) y=f(p,q,..) y=f(p,q,..) y=f(p,q,..) y=f(p,q,..)
9
10
11
Figure E2.2
A B C D E
1 u(p) u(q) u(r) u(s)
2
3 p p+u(p) p p p
4 q q q+u(q) q q
5 r r r r+u(r) r
6 s s s s s+u(s)
7
8 y=f(p,q,..) y=f(p’,...) y=f(..q’,..) y=f(..r’,..) y=f(..s’,..)
9 u(y,p) u(y,q) u(y,r) u(y,s)
10
11
Figure E2.3
A B C D E
1 u(p) u(q) u(r) u(s)
2
3 p p+u(p) p p p
4 q q q+u(q) q q
5 r r r r+u(r) r
6 s s s s s+u(s)
7
8 y=f(p,q,..) y=f(p’,...) y=f(..q’,..) y=f(..r’,..) y=f(..s’,..)
9 u(y,p) u(y,q) u(y,r) u(y,s)
10 u(y) u(y,p)2 u(y,q)2 u(y,r)2 u(y,s)2
11
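The spreadsheet scheme of Figures E2.1 to E2.3 translates directly into code: shift each input in turn by its standard uncertainty, record the change in y, and combine the changes in quadrature. The following Python sketch implements those steps; the function and the numerical inputs are illustrative assumptions, not values from the Guide:

```python
from math import sqrt

def kragten(f, values, uncerts):
    """Kragten spreadsheet method (Figures E2.1-E2.3): the contribution
    u(y,xi) is approximated by the change in y when xi is increased by
    u(xi); u(y) is the quadrature sum of the contributions."""
    y0 = f(*values)                       # row 8, column A: y = f(p, q, ...)
    contributions = []
    for i, u in enumerate(uncerts):
        shifted = list(values)
        shifted[i] += u                   # diagonal entries xi + u(xi), as in Figure E2.2
        contributions.append(f(*shifted) - y0)   # u(y,xi), row 9
    u_y = sqrt(sum(c * c for c in contributions))  # row 10, Figure E2.3
    return y0, u_y, contributions

# Illustrative (hypothetical) model and inputs: y = p*q/(r - s)
y0, u_y, contribs = kragten(lambda p, q, r, s: p * q / (r - s),
                            [2.0, 3.0, 5.0, 1.0], [0.1, 0.1, 0.05, 0.05])
```

The list of contributions makes it easy to see which input dominates, mirroring row 9 of the spreadsheet.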
Figure E.3.1
The Figure compares (A) the law of propagation of uncertainty and (B) the propagation of
distributions for three independent input quantities. g(ξi) is the probability density function (PDF)
associated with xi and g(η) the density function for the result.
approach given in section 8 applies poorly. Non-linearity is addressed in the GUM by extending the calculation to include higher-order terms (reference H.2 gives further detail). If that is the case, the Kragten approach (Appendix E2) is likely to give a more realistic estimate of the uncertainty than the first-order equation in section 8.2.2, because the Kragten approach calculates the actual changes in the result when the input quantities change by the standard uncertainty. MCS (for large enough simulations) gives a still better approximation because it additionally explores the extremes of the input and output distributions. Where distributions are substantially non-normal, the Kragten and basic GUM approaches provide estimated standard uncertainty, whereas MCS can give an indication of distribution and accordingly provides a better indication of the real 'coverage interval' than the interval y±U.

The principal disadvantages of MCS are:

• greater computational complexity and computing time, especially if reliable intervals are to be obtained;
• calculated uncertainties vary from one run to the next because of the intentionally random nature of the simulation;
• it is difficult to identify the most important contributions to the combined uncertainty without repeating the simulation.

Using the basic GUM method, Kragten approach and MCS together, however, is nearly always useful in developing an appropriate strategy, because the three give insight into different parts of the problem. Substantial differences between basic GUM and Kragten approaches will often indicate appreciable non-linearity, while large differences between the Kragten or basic GUM approach and MCS may signal important departures from normality. When the different methods give significantly different results, the reason for the difference should therefore be investigated.

Table E3.1: Spreadsheet formulae for Monte Carlo Simulation

Distribution                      Formula for PDF (Note 1)
Normal                            NORMINV(RAND(),x,u)
Rectangular
  given half-width a:             x+2*a*(RAND()-0.5)
  given standard uncertainty u:   x+2*u*SQRT(3)*(RAND()-0.5)
Triangular
  given half-width a:             x+a*(RAND()-RAND())
  given standard uncertainty u:   x+u*SQRT(6)*(RAND()-RAND())
t (Note 2)                        x+u*TINV(RAND(),νeff)

Note 1. In these formulae, x should be replaced with the value of the input quantity xi, u with the corresponding standard uncertainty, a with the half-width of the rectangular or triangular distribution concerned, and ν with the relevant degrees of freedom.

Note 2. This formula is applicable when the standard uncertainty is given and known to be associated with a t distribution with ν degrees of freedom. This is typical of a reported standard uncertainty with reported effective degrees of freedom νeff.

E.3.4 Spreadsheet Implementation

MCS is best implemented in purpose-designed software. However, it is possible to use spreadsheet functions such as those listed in Table E3.1 to provide MCS estimates for modest simulation sizes. The procedure is illustrated using the following simple example, in which a value y is calculated from input values a, b and c according to

y = a / (b − c)

(This might, for example, be a mass fraction calculated from a measured analyte mass a and small gross and tare masses b and c respectively.) The values, standard uncertainties and assigned distributions for a to c are listed in rows 3 and 4 of Table E3.2. Table E3.2 also illustrates the procedure:

i) Input parameter values and their standard uncertainties (or, optionally for rectangular or triangular distributions, the half-interval width) are entered at rows 3 and 4 of the spreadsheet.
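Outside a spreadsheet, the formulae of Table E3.1 translate directly into code. The Python sketch below defines draws for the normal, rectangular and triangular cases and runs a small simulation of y = a/(b − c); the input values and uncertainties are illustrative assumptions (all three inputs taken as normal), not the Guide's Table E3.2 entries:

```python
import random
from math import sqrt
from statistics import NormalDist, stdev

def draw_normal(x, u):          # NORMINV(RAND(), x, u)
    return NormalDist(x, u).inv_cdf(random.random())

def draw_rectangular(x, u):     # x + 2*u*SQRT(3)*(RAND()-0.5)
    return x + 2 * u * sqrt(3) * (random.random() - 0.5)

def draw_triangular(x, u):      # x + u*SQRT(6)*(RAND()-RAND())
    return x + u * sqrt(6) * (random.random() - random.random())

random.seed(1)  # fixed seed so repeated runs agree

# Illustrative inputs: a = 1.0 (u 0.05), b = 3.0 (u 0.1), c = 1.0 (u 0.1)
ys = [draw_normal(1.0, 0.05) / (draw_normal(3.0, 0.1) - draw_normal(1.0, 0.1))
      for _ in range(10_000)]
y_mean = sum(ys) / len(ys)
u_y = stdev(ys)                 # MCS estimate of the standard uncertainty of y
```

Because b − c appears in the denominator, the simulated distribution of y is positively skewed and u_y comes out slightly larger than the first-order (basic GUM) estimate, which is the behaviour discussed in the surrounding text.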
[Histogram of the simulated results for y: frequency against simulated result, in bins from 0 to above 2.7]
Although the input quantities are normally distributed, the output shows appreciable positive skew, resulting in a higher standard uncertainty than expected. This arises from appreciable non-linearity; notice that the uncertainties in b and c are appreciable fractions of the denominator b−c, resulting in a proportion of very small values for the denominator and corresponding high estimates for y.

E.3.5 Practical considerations in using MCS for uncertainty evaluation

Number of MCS samples

MCS gives a good estimate of the standard uncertainty even for simulations with a few hundred trials; with as few as 200 trials, estimated standard uncertainties are expected to vary by about ±10 % from the best estimate, while for 1000 and 10 000 samples the expected ranges are about ±5 % and ±1.5 % (based on the 95 % interval for the chi-squared distribution). Bearing in mind that many input quantity uncertainties are derived from far fewer observations, comparatively small simulations of 500-5000 MCS samples are likely to be adequate at least for exploratory studies and often for reported standard uncertainties. For this purpose, spreadsheet MCS calculations are often sufficient.

Confidence intervals from MCS

It is also possible in principle to estimate confidence intervals from the MCS results without the use of effective degrees of freedom, for example by using the relevant quantiles. However, it is important not to be misled by the apparent detail in the PDF obtained for the result. The lack of detailed knowledge about the PDFs for the input quantities, because the information on which these PDFs are based is not always reliable, needs to be borne in mind. The tails of the PDFs are particularly sensitive to such information. Therefore, as is pointed out in GUM, section G 1.2, "it is normally unwise to try to distinguish between closely similar levels of confidence (say a 94 % and a 96 % level of confidence)". In addition, the GUM indicates that obtaining intervals with levels of confidence of 99 % or greater is especially difficult. Further, obtaining sufficient information about the tails of the PDF for the output quantity can require calculating the result for at least 10^6 trials. It then becomes important to ensure that the random number generator used by the software is capable of maintaining randomness for such large numbers of draws from the PDFs for the input quantities; this requires well characterised numerical software. GS1 recommends some reliable random number generators.

Bias due to asymmetry in the output distribution

When the measurement model is non-linear and the standard uncertainty associated with the estimate y is large compared to y (that is, u(y)/y is much greater than 10 %), the MCS PDF is likely to be asymmetric. In this case the mean value computed from the simulated results will be different from the value of the measurand calculated using the estimates of the input quantities (as in GUM). For most practical purposes in chemical measurement, the result calculated from the original input values should be reported; the MCS estimate can, however, be used to provide the associated standard uncertainty.

E.3.6 Example of MCS evaluation of uncertainty

The following example is based on Example A2, determination of the concentration of sodium hydroxide using a potassium hydrogen phthalate (KHP) reference material.

The measurement function for the concentration cNaOH of NaOH is

cNaOH = 1000 · mKHP · PKHP / (MKHP · V)   [mol L-1],

where

mKHP is the mass of the KHP,
PKHP is the purity of KHP,
MKHP is the molar mass of the KHP, and
V is the volume of NaOH for KHP titration.

Some of the quantities in this measurement function are themselves expressed in terms of further quantities. A representation of this function in terms of fundamental quantities is needed, because each of these quantities has to be described by a PDF as the basis of the Monte Carlo calculation.

mKHP is obtained by difference weighings: mKHP = mKHP,1 − mKHP,2.

MKHP, the molar mass of KHP, comprises four terms for the different elements in the molecular formula: MKHP = MC8 + MH5 + MO4 + MK.

V depends on temperature and the calibration of the measuring system:

V = VT [1 − α(T − T0)]

where α is the coefficient of volume expansion for water, T is the laboratory temperature and T0 the temperature at which the flask was calibrated. Further, a quantity R representing repeatability effects is included.

The resulting measurement function is

cNaOH = 1000 · (mKHP,1 − mKHP,2) · PKHP · R / [ (MC8 + MH5 + MO4 + MK) · VT (1 − α(T − T0)) ]   [mol L-1]

These input quantities are each characterized by an appropriate PDF, depending on the information available about these quantities. Table A2.4 lists these quantities and gives the characterizing PDFs.

Since the contribution from VT is dominant, two PDFs (Triangular, Normal) other than Rectangular are considered for this quantity to see the effect on the calculated results.

The standard uncertainties u(cNaOH) computed for the concentration cNaOH with the three different PDFs for the uncertainty on VT agree very well with those obtained conventionally using the GUM (Table E3.3) or Kragten approaches. Also the coverage factors k, obtained from the values of the results below and above which 2.5 % of the tails fell, correspond to those of a Normal distribution and support using k = 2 for the expanded uncertainty. However, the PDF for the concentration cNaOH is discernibly influenced by using the rectangular distribution for the uncertainty on VT. The calculations were carried out using numbers of Monte Carlo trials ranging from 10^4 to 10^6; however, a value of 10^4 gave sufficiently stable values for k and u(cNaOH). Larger numbers of trials give smoother approximations to the PDFs.
Table E3.3: Comparison of values for the uncertainty u(cNaOH) obtained using GUM and MCS with different PDFs for the uncertainty on VT

        VT Triangular PDF    VT Normal PDF       VT Rectangular PDF
GUM*    0.000099 mol L-1     0.000085 mol L-1    0.00011 mol L-1
MCS     0.000087 mol L-1     0.000087 mol L-1    0.00011 mol L-1

*The results from GUM and Kragten [E.2] approaches agree to at least two significant figures.
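A simulation of this kind is easy to set up in code. The Python sketch below follows the structure of the NaOH example with a triangular PDF for VT; the numerical values and uncertainties are illustrative assumptions of roughly the magnitudes used in Example A2, not the Guide's exact Table A2.4 entries:

```python
import random
from math import sqrt
from statistics import NormalDist, stdev

random.seed(7)  # fixed seed for reproducibility

def draw_normal(x, u):
    return NormalDist(x, u).inv_cdf(random.random())

def draw_triangular(x, u):      # Table E3.1: x + u*SQRT(6)*(RAND()-RAND())
    return x + u * sqrt(6) * (random.random() - random.random())

# Illustrative (assumed) values and standard uncertainties
m_KHP, u_m = 0.3888, 0.00013    # mass of KHP (g)
P_KHP, u_P = 1.0000, 0.0005     # purity (mass fraction)
M_KHP, u_M = 204.2212, 0.0038   # molar mass (g/mol)
V_T,   u_V = 18.64, 0.013       # titration volume (mL)

def c_NaOH(m, P, M, V):
    return 1000 * m * P / (M * V)          # mol/L

cs = [c_NaOH(draw_normal(m_KHP, u_m), draw_normal(P_KHP, u_P),
             draw_normal(M_KHP, u_M), draw_triangular(V_T, u_V))
      for _ in range(10_000)]
c_mean = sum(cs) / len(cs)
u_c = stdev(cs)                  # MCS estimate of u(cNaOH)
```

Swapping draw_triangular for draw_normal or a rectangular draw on V_T reproduces the kind of comparison shown in Table E3.3.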
[Histograms of the MCS frequency distributions for cNaOH; vertical axis: Frequency, up to 4×10^4]
xpred = (yobs – b0)/b1    Eq. E3.2

It is usual to determine the constants b1 and b0 by weighted or un-weighted least squares regression on a set of n pairs of values (xi, yi).

The above formula for var(xpred) can be written in terms of the set of n data points, (xi, yi), used to determine the calibration function:

var(xpred) = var(yobs)/b1^2 + …

s(yc) = S · sqrt( 1 + 1/n + (xpred − x̄)^2 / ( Σxi^2 − (Σxi)^2/n ) )

giving, on comparison with equation E3.5,

var(xpred) = [ s(yc) / b1 ]^2    Eq. E3.6

E.4.4 The reference values xi may each have uncertainties which propagate through to the final result. In practice, uncertainties in these values are usually small compared to uncertainties in the system responses yi, and may be ignored. An approximate estimate of the uncertainty u(xpred, xi) in a predicted value xpred due to uncertainty in a particular reference value xi is

u(xpred, xi) ≈ u(xi)/n    Eq. E3.7

where n is the number of xi values used in the calibration. This expression can be used to check the significance of u(xpred, xi).

E.4.5 The uncertainty arising from the assumption of a linear relationship between y and x is not normally large enough to require an additional estimate. Providing the residuals show that there is no significant systematic deviation from this assumed relationship, the uncertainty arising from this assumption (in addition to that covered by the resulting increase in y variance) can be taken to be negligible. If the residuals show a systematic trend then it may be necessary to include higher terms in the calibration function. Methods of calculating var(x) in these cases are given in standard texts. It is also possible to make a judgement based on the size of the systematic trend.

E.4.6 The values of x and y may be subject to a constant unknown offset (e.g. arising when the values of x are obtained from serial dilution of a stock solution which has an uncertainty on its certified value). If the standard uncertainties on y and x from these effects are u(y, const) and u(x, const), then the uncertainty on the interpolated value xpred is given by:

u(xpred)^2 = u(x, const)^2 + (u(y, const)/b1)^2 + var(x)    Eq. E3.8

E.4.7 The four uncertainty components described in E.4.2 can be calculated using equations Eq. E3.3 to Eq. E3.8. The overall uncertainty arising from calculation from a linear calibration can then be calculated by combining these four components in the normal way.

E.4.8 While the calculations above provide suitable approaches for the most common case of linear least squares regression, they do not apply to more general regression modelling methods that take account of uncertainties in x or correlations among x and/or y. A treatment of these more complex cases can be found in ISO TS 28037, Determination and use of straight-line calibration functions [H.28].
are documented in the form of values for a constant term represented by s0 and a variable term represented by s1. Estimates, when required, can then be produced on the basis of the reported result. This remains the recommended approach where practical.

NOTE: See the note to section E.5.2.

E.5.4. Special cases

[Figure: uncertainty u(x) as a function of the result x, showing three regions: A, where the uncertainty is approximately equal to s0; B, where it is significantly greater than either s0 or x·s1; and C, where it is approximately equal to x·s1]
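The behaviour shown in the figure is consistent with combining the constant and proportional terms in quadrature. A minimal sketch, assuming the model u(x) = sqrt(s0^2 + (x·s1)^2) (an assumption here, stated for illustration):

```python
from math import sqrt

def u_from_s0_s1(x, s0, s1):
    """Combine a constant term s0 and a proportional term x*s1 in
    quadrature (assumed model): u ~ s0 at low x, u ~ x*s1 at high x."""
    return sqrt(s0 ** 2 + (x * s1) ** 2)
```

At x = 0 this returns s0 exactly, and for large x it approaches x·s1, matching the two limiting regions in the figure.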
F.1. Introduction
F.1.1. At low concentrations, an increasing number of effects become important, including, for example:

• the presence of noise or unstable baseline,
• the contribution of interferences to the (gross) signal,
• the influence of any analytical blank used, and
• losses during extraction, isolation or clean-up.

Because of such effects, as the analyte concentrations drop, the relative uncertainty associated with the result tends to increase, first to a substantial fraction of the result and finally to the point where the (symmetric) uncertainty interval includes zero. This region is typically associated with the practical limit of detection for a given method.

F.1.2. The terminology and conventions associated with measuring and reporting low levels of analyte have been widely discussed elsewhere (see Bibliography [H.29-H.32] for examples and definitions). Here, the term 'limit of detection' follows the IUPAC recommendation of reference H.31, which defines the limit of detection as a true amount of analyte which leads with high probability to the conclusion that the analyte is present, given a particular decision criterion. The decision criterion ('critical value') is usually set to ensure a low probability of declaring the analyte present when it is in fact absent. Following this convention, an analyte is declared present when the observed response is above the critical value. The limit of detection is usually approximately twice the critical value expressed in terms of analyte concentration.

F.1.3. It is widely accepted that the most important use of the 'limit of detection' is to show where method performance becomes insufficient for acceptable quantitation, so that improvements can be made. Ideally, therefore, quantitative measurements should not be made in this region. Nonetheless, so many analytes are important at very low levels that it is inevitable that measurements must be made, and results reported, in this region.

F.1.4. The ISO Guide on Measurement Uncertainty [H.2] does not give explicit instructions for the estimation of uncertainty when the results are small and the uncertainties large compared to the results. Indeed, the basic form of the 'law of propagation of uncertainties', described in chapter 8 of this guide, may cease to apply accurately in this region; one assumption on which the calculation is based is that the uncertainty is small relative to the value of the measurand. An additional, if philosophical, difficulty follows from the definition of uncertainty given by the ISO Guide: though negative observations are quite possible, and even common in this region, an implied dispersion including values below zero cannot be "... reasonably ascribed to the value of the measurand" when the measurand is a concentration, because concentrations themselves cannot be negative.

F.1.5. These difficulties do not preclude the application of the methods outlined in this guide, but some caution is required in interpretation and reporting the results of measurement uncertainty estimation in this region. The purpose of the present Appendix is to provide guidance to supplement that already available from other sources.

NOTE: Similar considerations may apply to other regions; for example, mole or mass fractions close to 100 % may lead to similar difficulties.

F.2. Observations and estimates

F.2.1. A fundamental principle of measurement science is that results are estimates of true values. Analytical results, for example, are available initially in units of the observed signal, e.g. mV, absorbance units etc. For communication to a wider audience, particularly to the customers of a laboratory or to other authorities, the raw data need to be converted to a chemical quantity, such as concentration or amount of substance. This conversion typically requires a calibration procedure (which may include, for example, corrections for observed and well characterised losses). Whatever the conversion, however, the figure generated remains an observation, or signal. If the experiment is properly carried out, this observation remains the 'best estimate' of the value of the measurand.

F.2.2. Observations are not often constrained by the same fundamental limits that apply to real concentrations. For example, it is perfectly sensible to report an 'observed concentration', that is, an estimate, below zero. It is equally sensible to speak of a dispersion of possible observations which extends into the same region. For example, when performing an unbiased measurement on a sample with no analyte present, one should see about half of the observations falling below zero. In other words, reports like

observed concentration = 2.4±8 mg L-1
observed concentration = -4.2±8 mg L-1

are not only possible; they should be seen as valid statements about observations and their mean values.

F.2.3. When reporting observations and their associated uncertainties to an informed audience, there is no barrier to, or contradiction in, reporting the best estimate and its associated uncertainty even where the result implies an impossible physical situation. Indeed, in some circumstances (for example, when reporting a value for an analytical blank which will subsequently be used to correct other results) it is absolutely essential to report the observation and its uncertainty, however large.

F.2.4. This remains true wherever the end use of the result is in doubt. Since only the observation and its associated uncertainty can be used directly (for example, in further calculations, in trend analysis or for re-interpretation), the uncensored observation should always be available.

F.2.5. The ideal is accordingly to report valid observations and their associated uncertainty regardless of the values.

F.3. Interpreted results and compliance statements

F.3.1. Despite the foregoing, it must be accepted that many reports of analysis and statements of compliance include some interpretation for the end user's benefit. Typically, such an interpretation would include any relevant inference about the levels of analyte which could reasonably be present in a material. Such an interpretation is an inference about the real world, and consequently would be expected (by the end user) to conform to real limits. So, too, would any associated estimate of uncertainty in 'real' values. The following paragraphs summarise some accepted approaches. The first (use of 'less than' or 'greater than') is generally consistent with existing practice. Section F.5 describes an approach based on the properties of classical confidence intervals. This is very simple to use and will usually be adequate for most ordinary purposes. Where observations are particularly likely to fall below zero (or above 100 %), however, the classical approach may lead to unrealistically small intervals; for this situation, the Bayesian approach described in section F.6 is likely to be more appropriate.

F.4. Using 'less than' or 'greater than' in reporting

F.4.1. Where the end use of the reported results is well understood, and where the end user cannot realistically be informed of the nature of measurement observations, the use of 'less than', 'greater than' etc. should follow the general guidance provided elsewhere (for example in reference H.31) on the reporting of low level results.

F.4.2. One note of caution is pertinent. Much of the literature on capabilities of detection relies heavily on the statistics of repeated observations. It should be clear to readers of the current guide that observed variation is only rarely a good guide to the full uncertainty of results. Just as with results in any other region, careful consideration should accordingly be given to all the uncertainties affecting a given result before reporting the values.
F.5. Expanded uncertainty intervals near zero: Classical approach

F.5.1. The desired outcome is an expanded uncertainty interval which satisfies three requirements:

1. An interval that lies within the possible range (the 'possible range' is the concentration range from zero upwards).
2. A coverage close to the specified confidence level, so that an expanded uncertainty interval claimed to correspond to approximately 95 % confidence should be expected to contain the true value close to 95 % of the time.
3. Reported results that have minimal bias in the long term.

F.5.2. If the expanded uncertainty has been calculated using classical statistics, the interval – including any part lying below zero – will, by definition, have 95 % coverage. However, since the (true) value of the measurand cannot lie below zero, any part of the interval lying below zero can be truncated at that natural limit.

F.5.3. Where the mean observation is also outside the possible range, and the interval for the true concentration is required, the reported result should simply be shifted to zero. Shifting to this limit does, however, lead to a small long-term bias, which may well be unacceptable to customers (or PT providers) demanding raw data for their own statistical analysis. These customers will continue to require the raw observations regardless of natural limits. Nonetheless, simple truncation at zero can be shown to provide minimal bias among the range of options so far examined for this situation.

F.5.4. If this procedure is followed, the expanded uncertainty interval becomes progressively more asymmetric as the result approaches the limit. Figure 3 illustrates the situation near zero, where the measured mean is reported until the mean falls below zero, and the reported value is thereafter reported as zero.

F.5.5. Eventually, the classical interval falls entirely beyond the natural limit, implying an adjusted interval of [0, 0]. This may reasonably be taken as an indication that the results are inconsistent with any possible true concentration. The analyst should normally return to the original data and determine the cause, as for any other aberrant quality control observation.

F.5.6. If it is necessary to report the standard uncertainty as well as the (asymmetric) expanded uncertainty interval, it is recommended that the standard uncertainty used in constructing the confidence interval should be reported without change.

[Figure 3: reported value and uncertainty interval against measured value, over the range -0.04 to 0.04]

Figure 3. Truncating classical confidence intervals close to zero. The mean varies between -0.05 and 0.05, and the standard deviation is fixed at 0.01. The bold diagonal line shows how the reported value depends (before truncation) on the observed value; the diagonal dashed lines show the corresponding interval. The solid, partial bars show the reported uncertainty interval after truncation. Note that at observed mean values below zero, the simple truncated interval becomes unreasonably small; see paragraph F.5.8.

F.6. Expanded uncertainty intervals near zero: Bayesian approach

F.6.1. Bayesian methods allow the combination of information from measurements with prior information about the possible (or likely) distribution of values of the measurand. The approach combines a 'prior' distribution with a likelihood (the distribution inferred from the measurement data).
8
Max. dens.
Classical
6
R eported value
4
2
0
-2 0 2 4
Measured value ( x u)
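The truncation and shift rules of paragraphs F.5.2–F.5.4 can be sketched in a few lines of code. This is a minimal illustrative helper, not part of the Guide; the function name and the default coverage factor k = 1.96 are assumptions for illustration.

```python
def truncated_interval(x, u, k=1.96):
    """Classical expanded uncertainty interval, truncated at the
    natural limit of zero (cf. F.5.2-F.5.4).

    x -- observed mean; u -- standard uncertainty; k -- coverage factor.
    Returns (reported_value, lower, upper).
    """
    # F.5.2: any part of the interval below zero adds nothing to the
    # coverage, so both limits are truncated at zero.
    lower = max(x - k * u, 0.0)
    upper = max(x + k * u, 0.0)
    # F.5.3: a mean observation below zero is reported as zero.
    reported = max(x, 0.0)
    return reported, lower, upper
```

As the observed mean approaches zero the interval becomes asymmetric, and for means below zero it becomes unreasonably small, matching the note to Figure 3.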
Appendix G. Uncertainty Sources

The following tables summarise some typical examples of uncertainty components. The tables give:
• The particular measurand or experimental procedure (determining mass, volume, etc.);
• The main components and sources of uncertainty in each case;
• A suggested method of determining the uncertainty arising from each source;
• An example of a typical case.
The tables are intended only to indicate methods of estimating the value of some typical measurement uncertainty components in analytical measurement. They are not intended to be comprehensive, nor should the values given be used directly without independent justification. The values may, however, help in deciding whether a particular component is significant.
Determination: Mass
Uncertainty component: Calibration uncertainty
Method of determination: Stated on calibration certificate, converted to standard deviation.

Uncertainty component: Linearity
Method of determination: i) Experiment, with range of certified weights; ii) Manufacturer's specification.
Typical value: ca. 0.5 × last significant digit

Determination: Volume (liquid)
Uncertainty component: Calibration uncertainty
Cause: Limited accuracy in calibration.
Method of determination: Stated on manufacturer's specification, converted to standard deviation.
Typical value: 10 mL (Grade A): 0.02/√3 = 0.01 mL*

Determination: Analyte concentration from a reference material certificate
Uncertainty component: Purity
Cause: Impurities reduce the amount of reference material present. Reactive impurities may interfere.
Method of determination: Stated on manufacturer's certificate. Reference material certificates may give unqualified limits; these should accordingly be treated as a rectangular distribution.
Typical value: Reference potassium hydrogen phthalate: 0.1/√3 = 0.06 %

Determination: Absorbance
Uncertainty component: Instrument calibration. (Note: this component relates to absorbance reading versus reference absorbance, not to the calibration of concentration against absorbance reading.)
Cause: Limited accuracy in calibration.
Method of determination: Stated on calibration certificate as limits, converted to standard deviation.

Uncertainty component: Run-to-run variation
Cause: Various
Method of determination: Standard deviation of replicate determinations, or QA performance.
Typical value: Mean of 7 absorbance readings with s = 1.63: 1.63/√7 = 0.62

Determination: Sampling
Uncertainty component: Homogeneity
Cause: Sub-sampling from inhomogeneous material will not generally represent the bulk exactly. (Note: random sampling will generally result in zero bias. It may be necessary to check that sampling is actually random.)
Method of determination: i) Standard deviation of separate sub-sample results (if the inhomogeneity is large relative to analytical accuracy); ii) Standard deviation estimated from known or assumed population parameters.
Typical value: Sampling from bread of assumed two-valued inhomogeneity (see Example A4): for 15 portions from 72 contaminated and 360 uncontaminated bulk portions, RSD = 0.58

Determination: Extraction recovery
Uncertainty component: Mean recovery
Cause: Extraction is rarely complete and may add or include interferents.
Method of determination: Recovery calculated as percentage recovery from matched reference material or representative spiking. Uncertainty obtained from standard deviation of mean of recovery experiments. (Note: recovery may also be calculated directly from previously measured partition coefficients.)
Typical value: Recovery of pesticide from bread; 42 experiments, mean 90 %, s = 28 % (see Example A4): 28/√42 = 4.3 % (0.048 as RSD)

Uncertainty component: Run-to-run variation in recovery
Cause: Various
Method of determination: Standard deviation of replicate experiments.
Typical value: Recovery of pesticides from bread from paired replicate data (see Example A4): 0.31 as RSD
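The worked values in these tables rest on two routine conversions: specification or certificate limits ±a, treated as a rectangular distribution, give a standard uncertainty a/√3, and the standard deviation of a mean of n observations is s/√n. A short check of the table's values; this is an illustrative sketch, and the function names are not from the Guide.

```python
import math

def rect_limits_to_u(a):
    """Standard uncertainty for symmetric limits +/-a, assuming a
    rectangular (uniform) distribution: u = a / sqrt(3)."""
    return a / math.sqrt(3)

def u_of_mean(s, n):
    """Standard uncertainty of the mean of n observations with
    observed standard deviation s: u = s / sqrt(n)."""
    return s / math.sqrt(n)

print(round(rect_limits_to_u(0.02), 2))  # 10 mL Grade A flask: 0.01 mL
print(round(rect_limits_to_u(0.1), 2))   # KHP purity: 0.06 %
print(round(u_of_mean(1.63, 7), 2))      # absorbance readings: 0.62
print(round(u_of_mean(28, 42), 1))       # pesticide recovery: 4.3 %
```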
Appendix H. Bibliography
H.1. ISO/IEC 17025:2005. General requirements for the competence of testing and calibration laboratories. ISO, Geneva (2005).
H.2. Guide to the Expression of Uncertainty in Measurement. ISO, Geneva (1993). ISBN 92-67-10188-9. (Reprinted 1995; reissued as ISO Guide 98-3 (2008); also available from http://www.bipm.org as JCGM 100:2008.)
H.3. EURACHEM Guide, Quantifying Uncertainty in Analytical Measurement. Laboratory of the
Government Chemist, London (1995). ISBN 0-948926-08-2
H.4. EURACHEM/CITAC Guide, Quantifying Uncertainty in Analytical Measurement, Second
Edition. Laboratory of the Government Chemist, London (2000). ISBN 0-948926-15-5. Also
available from http://www.eurachem.org.
H.5. EURACHEM Guide, Terminology in Analytical Measurement - Introduction to VIM 3 (2011).
Available from http://www.eurachem.org.
H.6. EURACHEM/CITAC Guide, Measurement uncertainty arising from sampling: A guide to
methods and approaches. EURACHEM, (2007). Available from http://www.eurachem.org.
H.7. ISO/IEC Guide 99:2007, International vocabulary of metrology - Basic and general concepts and associated terms (VIM). ISO, Geneva (2007). (Also available from http://www.bipm.org as JCGM 200:2008.)
H.8. ISO 3534-2:2006. Statistics - Vocabulary and Symbols - Part 2: Applied statistics. ISO, Geneva,
Switzerland (2006).
H.9. EURACHEM/CITAC Guide: Traceability in Chemical Measurement (2003). Available from
http://www.eurachem.org and http://www.citac.cc.
H.10. Analytical Methods Committee, Analyst (London), 120, 29-34 (1995).
H.11. EURACHEM, The Fitness for Purpose of Analytical Methods. (1998) (ISBN 0-948926-12-0)
H.12. ISO/IEC Guide 33:1989, Uses of Certified Reference Materials. ISO, Geneva (1989).
H.13. International Union of Pure and Applied Chemistry. Pure Appl. Chem., 67, 331-343, (1995).
H.14. ISO 5725:1994 (Parts 1-4 and 6). Accuracy (trueness and precision) of measurement methods
and results. ISO, Geneva (1994). See also ISO 5725-5:1998 for alternative methods of estimating
precision.
H.15. ISO 21748:2010. Guide to the use of repeatability, reproducibility and trueness estimates in
measurement uncertainty estimation. ISO, Geneva (2010).
H.16. M Thompson, S L R Ellison, R Wood; The International Harmonized Protocol for the
proficiency testing of analytical chemistry laboratories (IUPAC Technical Report); Pure Appl.
Chem. 78(1) 145-196 (2006).
H.17. EUROLAB Technical Report 1/2002, Measurement uncertainty in testing, EUROLAB (2002).
Available from http://www.eurolab.org.
H.18. EUROLAB Technical Report 1/2006, Guide to the Evaluation of Measurement Uncertainty for
Quantitative Test Results, Eurolab (2006). Available from http://www.eurolab.org.