Expertise in Fingerprint Identification
doi: 10.1111/1556-4029.12203
ABSTRACT: Although fingerprint experts have presented evidence in criminal courts for more than a century, there have been few scientific
investigations of the human capacity to discriminate these patterns. A recent latent print matching experiment shows that qualified, court-practicing fingerprint experts are exceedingly accurate (and more conservative) compared with novices, but they do make errors. Here, a rationale
for the design of this experiment is provided. We argue that fidelity, generalizability, and control must be balanced to answer important
research questions; that the proficiency and competence of fingerprint examiners are best determined when experiments include highly similar
print pairs, in a signal detection paradigm, where the ground truth is known; and that inferring from this experiment the statement "The error rate of fingerprint identification is 0.68%" would be unjustified. In closing, the ramifications of these findings for the future psychological study
of forensic expertise and the implications for expert testimony and public policy are considered.
KEYWORDS: forensic science, fingerprints, decision making, expertise, testimony, judgment, law, policy
examiners, with an average of 17 years of experience, who consented to being tested at an unknown time over 12 months. The
five examiners were each given a print to identify by a colleague, who advised them that the fingerprints were from a
famous case of misidentification by the FBI for the 2004 Madrid
train bombings. One examiner reported that the prints matched,
three reported that the prints did not match, and one reported
inconclusive. Unbeknownst to the examiners, however, the prints
that they were asked to identify were taken from their own earlier casework, in which they had previously declared them a
match. With four of the five examiners subsequently changing
their previous judgment of the prints as matching, it seems that
top-down, contextual influences can affect expert judgments (see
also [16,19]). Dror and Rosenthal (20) also conducted a meta-analysis to determine the degree to which examiners would
make the same or conflicting decisions if extraneous information
about the case was added. Although good data were sparse, the
authors concluded that examiners are susceptible to bias.
It is clear that fingerprint experts have special abilities, but
their decisions can be influenced by extraneous contextual information (22,42), and researchers have suggested ways contextual
bias can be mitigated (43). Even with these contributions, relatively little research on human fingerprint identification has been conducted by academics or professionals. The U.S. NAS (2)
and others have called for the development of a research culture
within forensic science. Mnookin et al. (40) argue that there is a
legitimate role for experience-based claims of knowledge, but
also that pattern identification disciplines must develop a scientific foundation, through research, that is grounded in the values
of empiricism and skepticism. The experiment described below
is a step toward addressing the call from the NAS for the urgent
development of objective measures of accuracy and expertise in
fingerprint identification.
The Identifying Fingerprint Expertise Experiment
The Identifying Fingerprint Expertise experiment (1) was
designed to find out whether fingerprint experts were any more
accurate at matching prints than the average person and to get
an idea of how often they make errors of the sort that could
lead to a failure to identify a criminal compared with how often
they make errors of the sort that could lead to inaccurate evidence being presented in court. Thirty-seven qualified fingerprint experts and 37 undergraduate students were given pairs of
fingerprints to examine and decide whether a simulated crime
scene print matched a potential suspect or not. Some of the
print pairs matched, while others were highly similar but did
not match.
Thirty-six simulated latent crime scene prints were paired with
fully rolled exemplar prints. Across participants, each simulated
print was paired with a fully rolled print from the same individual (match), with a nonmatching but similar exemplar (similar
distractor), and with a random nonmatching exemplar (nonsimilar distractor). The simulated prints and their matches were from
our Forensic Informatics Biometric Repository, so, unlike genuine crime scene prints, they had a known true origin (13,44). All
fingerprints were authentic (i.e., real prints, not artificially generated), but the latents were "simulated" in the sense of representing those found at crime scenes during casework. (For human matching performance with genuine crime scene prints, see [45].) Similar
distractors were obtained by searching the Australian National
Automated Fingerprint Identification System (NAFIS). For each
simulated print, the most highly ranked nonmatching exemplar returned by the search was used as the similar distractor.
If the ground truth of a pair of prints is that they were left by different individuals, but the examiner incorrectly declares that the prints match,
then the examiner has made a false alarm type of error; if the
ground truth of a pair of prints is that they were left by the same
finger from the same individual, but the examiner concludes that
the prints do not match, then the examiner has made a miss
type of error. The same goes for the two ways the examiner can
reach the correct conclusion: that the prints actually match and
the examiner correctly declares them as such (a hit) or that the
prints do not actually match and the examiner correctly declares
them a nonmatch (a correct rejection).
Most tests of proficiency and studies of accuracy (with the
exception of [32]) used print pairs from casework where the
ground truth was uncertain (see [13-15,53]). In assessing validity (the degree to which a test measures what it is supposed to measure) in fingerprint identification decision making, experiments ought to use print pairs for which the ground truth is
known.
To make use of ground truth prints in our expertise study, fingerprint pairs were sourced from the Forensic Informatics Biometric Repository, an open biometric repository that we created
to increase the availability of high-quality forensic materials
where the ground truth is known. Details on the Forensic Informatics Biometric Repository are available at www.FIB-R.com.
The Forensic Informatics Biometric Repository contains a range of crime-related materials: fingerprints, palm prints, shoe prints,
face photographs, handwriting samples, voice samples, and iris
photographs. The materials are collected from participants using
a standardized methodology and vary systematically in quality.
The repository contains multiple types of materials converging
on a single source, and the ground truth of the source is built
into the system. Materials are also collected from participants
over two sessions to approximate the natural variation that is
commonly found in forensic evidence (e.g., changes in facial
hair, clothes, and shoe decay). Participants are first-year undergraduates who participate in 1 h of data collection for course
credit and who provide informed consent.
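To illustrate how ground truth can be "built into the system," here is a minimal sketch of what a repository record might look like. This is hypothetical code of our own, not the repository's actual schema; the names DonorRecord, donor_id, and is_match are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DonorRecord:
    """Hypothetical repository record: every item of material is keyed
    to a known donor, so the ground truth of its source is never in doubt."""
    donor_id: str                                      # the known source
    session: int                                       # 1 or 2, to capture natural variation
    fingerprints: list = field(default_factory=list)   # paths to 10-print images
    palm_prints: list = field(default_factory=list)
    latents: list = field(default_factory=list)        # simulated crime scene prints

def is_match(a: DonorRecord, b: DonorRecord) -> bool:
    """Two items truly 'match' if and only if they share a donor."""
    return a.donor_id == b.donor_id
```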
The fingerprint materials contained in the Forensic Informatics
Biometric Repository are 10-prints, palm prints, and latent prints.
Ink is used to capture each fingerprint onto standard 10-print
cards, rolled fully from nail-edge to nail-edge, as well as slap
impressions (pressing, not rolling, the fingers on the card) and
fully rolled palms. Latent prints are taken from common crime
scene surfaces (determined in consultation with fingerprint examiners), including gloss-painted timber, smooth metal, glass, and
smooth plastic. Participants are instructed to interact with the surfaces naturally, for example, by pushing on the gloss-painted timber as if opening a door, or by safely grabbing a knife by the blade. By having participants interact with objects in this way, the aim is to approximate the variation in materials commonly found at actual crime scenes.
In our experiment, the latent prints used were mated with their
matching fully rolled exemplar so that the ground truth of match
trials was known. The use of ground truth print pairs means that
we can compare participants' responses to reality.
Signal Detection
A signal detection framework was used to measure the matching performance of fingerprint examiners (see also [15,54]). Signal detection is a method of quantifying a person's ability to distinguish signal from noise. In fingerprint identification, for example, "signal" refers to print pairs that truly match and "noise" refers to print pairs that do not. Signal detection was initially applied to radar operators who were trying to discriminate friendly from enemy aircraft and has since been used to measure performance in many areas of human endeavor. Several factors may affect a person's ability to distinguish signal from noise, such as experience, expectations, context, and physiological and psychological state. To conduct a signal detection analysis of novice and expert fingerprint matching performance, the two ways of being right and the two ways of being wrong were separated; performance on matching and nonmatching prints was compared; and accuracy was separated from response bias.
Separate the Two Ways of Being Right and the Two Ways of
Being Wrong
When an examiner compares two fingerprints, there are two
ways for her to be right and two ways to be wrong, as shown in
Fig. 1. To get a comparison right, she can correctly say the
prints match when they actually do (a hit) or she can correctly
say they do not match when they do not (a correct rejection).
These decisions could result in providing correct evidence to the
court or help eliminate potential suspects. To get a comparison
wrong, she can incorrectly say that the prints match when they
do not (a false alarm), or she can incorrectly say that the prints
do not match when in fact they do (a miss). These decisions
could lead to providing incorrect evidence to the court or a failure to identify a criminal.
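To make this bookkeeping concrete, the following is a minimal sketch (in Python; the function names and example counts are our own illustration, not the analysis code used in the study) of how each decision is scored against ground truth and how the standard signal detection indices of discriminability (d′) and response bias (criterion c) follow from the resulting counts (55):

```python
from statistics import NormalDist

def classify(prints_match: bool, examiner_says_match: bool) -> str:
    """Score one comparison against ground truth."""
    if prints_match:
        return "hit" if examiner_says_match else "miss"
    return "false alarm" if examiner_says_match else "correct rejection"

def sdt_indices(hits: int, misses: int, false_alarms: int, correct_rejections: int):
    """Standard signal detection indices (Green & Swets, 1966).
    Note: rates of exactly 0 or 1 need a correction in practice,
    because z(0) and z(1) are undefined."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)             # discriminability (accuracy proper)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias: positive = conservative
    return d_prime, criterion

# Hypothetical participant with 12 match and 12 nonmatch trials:
print(sdt_indices(hits=10, misses=2, false_alarms=1, correct_rejections=11))
```

Separating d′ from the criterion is what allows accuracy and response bias to be reported independently, as the analysis below requires.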
Compare Performance on Matching and Nonmatching Prints
To properly measure performance, examiners must compare
both matching and nonmatching prints. As discussed in the section on Proficiency Tests and Accuracy above, most previous
studies have included few or no distractors, making it impossible
to measure the two ways of being right and the two ways of
being wrong, leading to artificially inflated accuracy rates.
In this experiment, to avoid this problem, each
latent print in the set formed part of a match, similar distractor,
and nonsimilar distractor trial. (The reasoning for providing similar distractors is described in the Similarity section below.) This
way, match trials can be directly compared with the same number of nonmatch trials in the other two conditions. For each participant, each latent print was randomly allocated to one of the
three trial types, with the constraint that there were 12 prints in
each condition. Thus, each latent print had an equal chance of acting in a match, similar distractor, or nonsimilar distractor trial, eliminating the possibility that a particularly easy or difficult print could disproportionately affect any one condition.
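A sketch of this counterbalancing scheme (hypothetical code, not the experiment's actual software): for each participant, the 36 latent prints are shuffled and divided evenly among the three trial types, so that across participants every latent serves in every role.

```python
import random

TRIAL_TYPES = ("match", "similar distractor", "nonsimilar distractor")

def allocate_trials(latent_ids, rng=random):
    """Randomly assign latents to trial types, with an equal number
    (here 36 / 3 = 12) of latents in each condition."""
    ids = list(latent_ids)
    rng.shuffle(ids)
    per_condition = len(ids) // len(TRIAL_TYPES)
    return {trial_type: ids[i * per_condition:(i + 1) * per_condition]
            for i, trial_type in enumerate(TRIAL_TYPES)}

# One participant's allocation of latents 0-35:
allocation = allocate_trials(range(36))
```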
FIG. 1: The two ways of being right and the two ways of being wrong in a fingerprint comparison.

Fingerprint status:         Match    Non-match
Examiner says "match":      Hit      False alarm
Examiner says "no match":   Miss     Correct rejection
Similarity
A pair of fingerprints will appear similar or dissimilar to each
other (or somewhere in-between), depending on the amount of
information in each and depending on the experience of the
examiner. There is no agreed-upon definition or measure of similarity for the comparison of prints, but there have been attempts
to create an objective measure of similarity. For example, Vokey
et al. (15) converted a set of fingerprint images into their raw
pixel values (i.e., the brightness values in each fingerprint image)
and projected each print into the multidimensional space of all
the prints in a set to return a vector, where the similarity of one
print to another is given by the cosine of the angle between their
vectors. A cosine value close to 1 indicates that the prints are virtually identical, whereas cosine values close to zero indicate that the
prints are highly dissimilar. This technique, therefore, provides an
objective measure of similarity because it uses only the raw pixel
values in the images and so requires no human input. We did not make use of this objective measure of similarity for this experiment but, instead, used a national fingerprint database search.
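As an illustration only (a simplified sketch of the pixel-based measure described by Vokey et al. (15), computed directly on raw pixel vectors, not their code), each image is flattened into a vector of brightness values and the similarity of two prints is the cosine of the angle between their vectors. Sorting a set of exemplars by this value also loosely mimics the ranked candidate list that a database search returns, as discussed next.

```python
import numpy as np

def cosine_similarity(print_a: np.ndarray, print_b: np.ndarray) -> float:
    """Cosine of the angle between two prints' raw pixel vectors:
    values near 1 mean virtually identical images, near 0 highly dissimilar."""
    a = print_a.ravel().astype(float)
    b = print_b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(latent: np.ndarray, exemplars: list) -> list:
    """Indices of exemplars ordered from most to least similar to the latent."""
    scores = [cosine_similarity(latent, ex) for ex in exemplars]
    return sorted(range(len(exemplars)), key=scores.__getitem__, reverse=True)
```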
For over 20 years, examiners have had the ability to search
large databases, with the aid of computer algorithms, for
potential matches to latent crime scene fingerprints. Although no
formal data exist, it is likely that the majority of fingerprint comparisons made today use database queries (suspect-absent cases)
rather than a closed set of known prints from a suspect
(suspect-present cases). A database query on a latent print will
return a list of candidates that are most similar (according to the
algorithm) to that latent. As Vokey et al. (15) and Dror and
Mnookin (26) discuss, a database query makes an examiner's
task much more difficult by returning a set of highly similar
distractor prints: prints that look very much alike (according to
the algorithm) but come from different people. These searches,
by their very nature, maximize the conditions conducive to false positives. What's more, Vokey et al. (15) found that
novices made more false alarms on comparisons that were
similar (as measured by the distance between vectors of pixel maps) than on those that were not. Given that distinguishing highly similar, but nonmatching, prints from genuine matches
is likely to be the most difficult and common task that examiners
face, similarity was included as a factor in our experiment.
Similar distractor prints were obtained by searching our simulated crime scene latents on the NAFIS. The latents were first
auto-coded and then hand-coded by a qualified expert. For each
simulated crime scene print, the most highly ranked nonmatching
exemplar from the search was used if it was available in
the Queensland Police 10-print hard-copy archives, which contain approximately one million 10-print cards (10 million individual prints) from approximately 300,000-400,000 people
(one person may have more than one 10-print card on record).
The corresponding 10-print card was retrieved from the archives,
scanned, and the individual print of interest was extracted. Due to
Documented cases of false identification (9), issues of plausibility reported by the NAS (2), and recent experiments (1,32-34) highlight the need for a contemporary model of forensic
testimony. Following developments in the United States and Canada, Edmond (73) has suggested that Australia adopt a reliability
standard, and the U.K. Law Commission (4) has announced similar recommendations for admissibility practice in England and
Wales. Indeed, science and legal commentators are beginning to
call for empirical demonstrations of accuracy and performance,
along with details about the relative performance of laypersons,
across forensic science. A failure to respond to criticism means
that judges are in danger of acting irrationally and being left
behind by practical and ongoing reforms in the forensic sciences.
While it is likely that courts will start to develop an admissibility
and testimony jurisprudence more directly concerned with reliability in the near future, there is an independent need for forensic
scientists and technicians to pay much closer attention to the evidence for ability and reliability (74). Edmond (73) suggests that
reliability standards will help to make criminal trials fairer and
ensure outcomes reflect the known value of expert evidence.
It is clear that an alternative to the current model of fingerprint testimony is required. But what should an acceptable, contemporary model look like? Several factors must
be considered; these include the role of scientific experiments on
the accuracy and reliability of forensic identification; whether it
is necessary to report department or individual scores on properly controlled proficiency tests; the state of the science in other
areas of pattern and impression identification; the impact that the
testimony has on jury decision making; finding the right balance
between accurate scientific reporting and the ability of judges
and juries to understand expert testimony; and decisions about
whether to report on the degree to which the specimen matches the source (e.g., "lends limited support"), the degree of confidence in a match (e.g., "highly confident that x matches y"), opinions about the evidence (e.g., "it is my opinion that..."), or statements about the particular hypotheses in question (e.g., "the evidence is more consistent with x than y").
There is much research and consideration needed to develop an
acceptable alternative model of fingerprint testimony. Several
debates on this topic are raging internationally between academics,
acknowledging errors (75). We (collectively, forensic professionals, researchers, legal scholars, and the courts) need to
define acceptable rates of error, foster a work environment
conducive to learning from error, and promote a blame-free
safety culture, as medicine is working toward (64,76).
This approach will allow police, intelligence systems, and
investigators to interpret evidence more effectively and efficiently, assist forensic examiners in the development of evidence-based training programs, discourage exaggerated interpretations
of forensic evidence, and help in the development of a model of
expert testimony that does not extend beyond the capabilities of
examiners or beyond the scope of experimental findings. Further
psychological research into forensic decision making will help to
ensure the integrity of forensics as an investigative tool available
to police, so the rule of law is justly applied.
Acknowledgments
We thank Professor Gary Edmond for his valuable input
toward this publication and Morgan Tear and Hayley Thomason
for assistance with data collection.
References
1. Tangen JM, Thompson MB, McCarthy DJ. Identifying fingerprint expertise. Psychol Sci 2011;22(8):995-7.
2. National Research Council, Committee on Identifying the Needs of the
Forensic Science Community. Strengthening forensic science in the United States: a path forward. Washington, DC: The National Academies
Press, 2009.
3. Edwards HT. Statement of The Honorable Harry T. Edwards: strengthening forensic science in the United States: a path forward. Washington,
DC: United States Senate Committee on the Judiciary, 2009.
4. Law Commission of England and Wales. Expert evidence in criminal
proceedings in England and Wales. London: Stationery Office, 2011.
5. Loftus EF, Cole SA. Contaminated evidence. Science 2004;304(5673):959.
6. Saks MJ, Koehler JJ. The coming paradigm shift in forensic identification science. Science 2005;309(5736):892-5.
7. Spinney L. The fine print. Nature 2010;464:344-6.
8. Federal Bureau of Investigation. The science of fingerprints: classification and uses. Washington, DC: U.S. Government Printing Office, 1984.
9. Cole SA. More than zero: accounting for error in latent fingerprint identification. J Crim Law Criminol 2005;95(3):985-1078.
10. Cole SA. Who speaks for science? A response to the National Academy of Sciences report on forensic science. Law Prob Ris 2010;9:25-46.
11. Garrett R. Memorandum from the President of the International Association for Identification, February 19, 2009, http://www.theiai.org/current_
affairs/nas_memo_20090219.pdf. (accessed December 1, 2011).
12. Scientific Working Group on Friction Ridge Analysis Study and Technology (SWGFAST). Standard for the definition and measurement of
rates of errors and non-consensus decisions in friction ridge examination
(latent/tenprint). Ver. 1.1, 09/16/11, SWGFAST, 2011.
13. Cole SA, Welling M, Dioso-Villa R, Carpenter R. Beyond the individuality of fingerprints: a measure of simulated computer latent print source attribution accuracy. Law Prob Ris 2008;7(3):165-89.
14. Haber L, Haber RN. Scientific validation of fingerprint evidence under Daubert. Law Prob Ris 2008;7(2):119-26.
15. Vokey JR, Tangen JM, Cole SA. On the preliminary psychophysics of fingerprint identification. Q J Exp Psychol (Hove) 2009;62(5):1023-40.
16. Dror I, Charlton D. Why experts make errors. J Forensic Identif 2006;56(4):600-16.
17. Dror I, Charlton D, Peron A. Contextual information renders experts vulnerable to making erroneous identifications. Forensic Sci Int 2006;156(1):74-8.
18. Dror I, Cole S. The vision in blind justice: expert perception, judgment, and visual cognition in forensic pattern recognition. Psychon B Rev 2010;17(2):161-7.
19. Dror I, Peron A, Hind S, Charlton D. When emotions get the better of us: the effect of contextual top-down processing on matching fingerprints. Appl Cognitive Psychol 2005;19(6):799-809.
20. Dror I, Rosenthal R. Meta-analytically quantifying the reliability and biasability of forensic experts. J Forensic Sci 2008;53(4):900-3.
21. Langenburg G, Champod C, Wertheim P. Testing for potential contextual bias effects during the verification stage of the ACE-V methodology when conducting fingerprint comparisons. J Forensic Sci 2009;54(3):571-82.
22. Busey T, Dror I. Special abilities and vulnerabilities in forensic expertise. In: Holder EH Jr, Robinson LO, Laub JH, editors. Fingerprint sourcebook. Washington, DC: National Institute of Justice Press, 2010;15-1–15-23.
23. Busey T, Parada F. The nature of expertise in fingerprint examiners. Psychon B Rev 2010;17(2):155-60.
24. Busey T, Vanderkolk J. Behavioral and electrophysiological evidence for configural processing in fingerprint experts. Vision Res 2005;45(4):431-48.
25. Busey T, Yu C, Wyatte D, Vanderkolk J, Parada F, Akavipat R. Consistency and variability among latent print examiners as revealed by eye tracking methodologies. J Forensic Identif 2011;60(1):61-91.
26. Dror IE, Mnookin JL. The use of technology in human expert domains: challenges and risks arising from the use of automated fingerprint identification systems in forensic science. Law Prob Ris 2010;9(1):47-67.
27. Dror IE, Wertheim K, Fraser-Mackenzie P, Walajtys J. The impact of human-technology cooperation and distributed cognition in forensic science: biasing effects of AFIS contextual information on human experts. J Forensic Sci 2012;57(2):343-52.
28. Champod C, Evett I. A probabilistic approach to fingerprint evidence. J Forensic Identif 2001;51(2):101-22.
29. Neumann C, Champod C, Puch-Solis R, Egli N, Anthonioz A, Bromage-Griffiths A. Computation of likelihood ratios in fingerprint identification for configurations of any number of minutiae. J Forensic Sci 2007;52:54-64.
30. Neumann C, Champod C, Puch-Solis R, Egli N, Anthonioz A, Meuwly D, et al. Computation of likelihood ratios in fingerprint identification for configurations of three minutiae. J Forensic Sci 2006;51(6):1255-66.
31. Neumann C. Fingerprints at the crime-scene: statistically certain, or probable? Significance 2012;9(1):21-5.
32. Ulery BT, Hicklin RA, Buscaglia J, Roberts MA. Accuracy and reliability of forensic latent fingerprint decisions. Proc Natl Acad Sci USA 2011;108(19):7733-8.
33. Ulery BT, Hicklin RA, Buscaglia J, Roberts MA. Repeatability and
reproducibility of decisions by latent fingerprint examiners. PLoS ONE
2012;7(3):e32800.
34. Dror IE, Champod C, Langenburg G, Charlton D, Hunt H, Rosenthal R. Cognitive issues in fingerprint analysis: inter- and intra-expert consistency and the effect of a target comparison. Forensic Sci Int 2011;208(1):10-17.
35. Langenburg G. Performance study of the ACE-V process: a pilot study to measure the accuracy, precision, reproducibility, repeatability, and biasability of conclusions resulting from the ACE-V process. J Forensic Identif 2009;59(2):219-57.
36. Wertheim K, Langenburg G, Moenssens A. A report of latent print examiner accuracy during comparison training exercises. J Forensic Identif 2006;56(4):55-93.
37. Haber L, Haber RN. Letter to the editor. Re: a report of latent print examiner accuracy during comparison training exercises. J Forensic Identif 2006;56(4):493-9.
38. Wertheim K, Langenburg G, Moenssens A. Authors' response to letter: letter to the editor. Re: a report of latent print examiner accuracy during training exercises. J Forensic Identif 2006;56(4):500-10.
39. FBI responds to the Office of Inspector General's report on the fingerprint misidentification of Brandon Mayfield. 2006, http://www.fbi.gov/news/pressrel/press-releases/fbi-respondsto-the-office-of-inspector-general2019sreporton-the-fingerprint-misidentification-of-brandon-mayfield (accessed November 3, 2011).
40. Mnookin JL, Cole SA, Dror IE, Fisher BAJ, Houck M, Inman K, et al. The need for a research culture in the forensic sciences. UCLA Law Rev 2011;58(3):725-79.
41. Spinney L. Forensic science braces for change. Nature 2010, doi:10.1038/news.2010.369, http://www.nature.com/news/2010/100722/full/news.2010.369.html (accessed December 1, 2011).
42. Dror IE. The paradox of human expertise: why experts get it wrong. In: Kapur N, editor. The paradoxical brain. Cambridge: Cambridge University Press, 2011;177-88.
43. Dror IE. Combating bias: the next step in fighting cognitive and psychological contamination. J Forensic Sci 2012;57:276-7.
44. Koehler JJ. Fingerprint error rates and proficiency tests: what they are and why they matter. Hastings Law J 2007;59:1077-1110.
45. Thompson MB, Tangen JM, McCarthy DJ. Human matching performance of genuine crime scene latent fingerprints. Law Hum Behav 2013; in press.
46. Mook DG. In defense of external invalidity. Am Psychol 1983;38(4):379-87.
47. Brinberg D, McGrath JE. Validity and the research process. Beverly Hills, CA: SAGE Publishing Co, 1985.
48. Woods DD. The observation problem in psychology (Westinghouse Technical Report). Pittsburgh, PA: Westinghouse Corporation, 1985.
49. Brunswik E. Perception and the representative design of psychological experiments, 2nd rev. & enl. ed. Berkeley, CA: University of California Press, 1956.
50. Rasmussen J, Pejtersen AM, Goodstein LP. Cognitive systems engineering. New York, NY: John Wiley & Sons, Inc, 1994.
51. Sanderson P. Capturing the essentials: simulator-based research in aviation and healthcare. Proceedings of the Eighth International Symposium of the Australian Aviation Psychology Association; 2008 Apr 8-11; Sydney, Australia. Sydney: Australian Aviation Psychology Association, 2008.
52. Sanderson P, Liu D, Jenkins S, Watson M, Russell WJ. Summative display evaluation with advanced patient simulation: fidelity, control, and generalizability. Brisbane, QLD: Cognitive Engineering Research Group, The University of Queensland, 2010. Report No.: CERG-2010-01.
53. Haber L, Haber RN. Error rates for human fingerprint examiners. In: Ratha NK, Bolle RM, editors. Automatic fingerprint recognition systems. New York, NY: Springer-Verlag, 2004;339-60.
54. Phillips VL, Saks MJ, Peterson JL. The application of signal detection theory to decision-making in forensic science. J Forensic Sci 2001;46:294-308.
55. Green DM, Swets JA. Signal detection theory and psychophysics. New
York, NY: Wiley, 1966.
56. Ericsson KA, Smith J. Toward a general theory of expertise: prospects
and limits. Cambridge: Cambridge University Press, 1991.
57. Galton F. Decipherment of blurred fingerprints: supplementary chapter to
finger prints. London: Macmillan, 1893.
58. Cho A. Fingerprinting doesn't hold up as a science in court. Science 2002;295(5554):418.
59. Cho A. Judge reverses decision on fingerprint evidence. Science 2002;295(5563):2195-7.
60. Johnston R, Edmonds A. Familiar and unfamiliar face recognition: a review. Memory 2009;17(5):577-96.
61. Megreya AM, Burton AM. Matching faces to photographs: poor performance in eyewitness memory (without the memory). J Exp Psychol-Appl 2008;14(4):364-72.
62. Kemp R, Towell N, Pike G. When seeing should not be believing: photographs, credit cards and fraud. Appl Cognitive Psychol 1997;11(3):211-22.
63. Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academy Press, 2000.
64. Newman-Toker DE, Pronovost PJ. Diagnostic errors: the next frontier for patient safety. JAMA 2009;301(10):1060-2.
65. Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors. JAMA 2001;286(4):415-20.
66. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med 2008;121(5):S2-23.
67. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ 2010;44(1):94-100.
68. Norman GR, Young M, Brooks L. Non-analytical models of clinical reasoning: the role of experience. Med Educ 2007;41(12):1140-5.
69. Scientific Working Group on Friction Ridge Analysis Study and Technology (SWGFAST). Standards for Conclusions. Ver. 1.0, 9/11/03,
SWGFAST, 2003.
70. International Association for Identification (IAI). IAI position concerning latent fingerprint identification, 2007, www.onin.com/fp/IAI_Position_Statement_11-29-07.pdf (accessed December 1, 2011).
71. Scientific Working Group on Friction Ridge Analysis Study and Technology (SWGFAST). Standard terminology of friction ridge examination. Ver. 3, 3/23/11, SWGFAST, 2011.
72. Cole SA. Forensics without uniqueness, conclusions without individualization: the new epistemology of forensic identification. Law Prob Ris 2009;8(3):233-55.
73. Edmond G. Specialised knowledge, the exclusionary discretions and reliability: reassessing incriminating expert opinion evidence. UNSW Law J 2008;31(1):1-55.