01 - Handbook of Fingerprint Recognition-Introduction - 2003
Chapter 1: Introduction
(Copyright 2003, Springer Verlag. All rights Reserved.)
1.1 Introduction
More than a century has passed since Alphonse Bertillon first conceived, and then industriously practiced, the idea of using body measurements for solving crimes (Rhodes, 1956). Just as his idea was gaining popularity, it was eclipsed by a far more significant and practical discovery: the distinctiveness of human fingerprints. In 1893, the Home Ministry Office, UK, accepted that no two individuals have the same fingerprints. Soon after this discovery, many major law enforcement departments embraced the idea of first booking the fingerprints of criminals, so that their records would be readily available, and later using leftover fingerprint smudges (latents) to determine the identity of criminals. These agencies sponsored a rigorous study of fingerprints, developed scientific methods for visual matching of fingerprints and strong programs/cultures for training fingerprint experts, and applied the art of fingerprint recognition to identify perpetrators (Scott, 1951; Lee and Gaensslen, 2001).
Despite the ingenious methods devised to increase the efficiency of the manual approach to fingerprint indexing and search, the ever-growing demands on manual fingerprint recognition quickly became overwhelming. The manual method of fingerprint indexing resulted in a highly skewed distribution of fingerprints into bins (types): most fingerprints fell into a few bins, which did not improve search efficiency. Fingerprint training procedures were time-intensive and slow. Furthermore, the painstaking attention needed to visually match fingerprints of varied quality, the tedium of the monotonous work, and increasing workloads due to a higher demand for fingerprint recognition services all prompted law enforcement agencies to initiate research into acquiring fingerprints through electronic media and automating fingerprint recognition based on the digital representation of fingerprints. These efforts led to the development of Automatic Fingerprint Identification Systems (AFIS) over the past few decades. Law enforcement agencies were the earliest adopters of fingerprint recognition technology; more recently, however, increasing
identity fraud has created a growing need for biometric technology for person recognition in a
number of non-forensic applications.
Biometric recognition refers to the use of distinctive physiological (e.g., fingerprints, face,
retina, iris) and behavioral (e.g., gait, signature) characteristics, called biometric identifiers (or
simply biometrics) for automatically recognizing individuals. Perhaps all biometric identifiers are a combination of physiological and behavioral characteristics and should not be exclusively classified as either. For example, fingerprints may be physiological in nature, but the usage of the input device (e.g., how a user presents a finger to the fingerprint scanner) depends on the person's behavior. Thus, the input to the recognition engine is a combination of physiological and behavioral characteristics. Similarly, speech is partly determined by the biological structure that produces speech in an individual and partly by the way a person speaks. Often, a similarity can be noticed among parents, children, and siblings in their voice, gait, and even signature. The same argument applies to the face: the faces of identical twins may be extremely similar at birth, but during development they change based on each person's behavior (e.g., lifestyle differences leading to a difference in body weight).
Is this person authorized to enter this facility? Is this individual entitled to access privileged information? Is the given service being administered exclusively to the enrolled users?
Answers to questions such as these are valuable to business and government organizations.
Because biometric identifiers cannot be easily misplaced, forged, or shared, they are considered more reliable for person recognition than traditional token- or knowledge-based methods.
The objectives of biometric recognition are user convenience (e.g., money withdrawal without
ATM card or PIN), better security (e.g., difficult to forge access), and higher efficiency (e.g.,
lower overhead for computer password maintenance). The tremendous success of fingerprint-based recognition technology in law enforcement applications, decreasing cost of fingerprint
sensing devices, increasing availability of inexpensive computing power, and growing identity
fraud/theft have all ushered in an era of fingerprint-based person recognition applications in
commercial, civilian, and financial domains.
There is a popular misconception in the pattern recognition and image processing academic community that automatic fingerprint recognition is a fully solved problem inasmuch as
it was one of the first applications of machine pattern recognition almost fifty years ago. On
the contrary, fingerprint recognition is still a challenging and important pattern recognition
problem.
With the increase in the number of commercial systems for fingerprint-based recognition,
proper evaluation protocols are needed. The first fingerprint verification competition
(FVC2000) was a good start in establishing such protocols (Maio et al., 2002a). As fingerprints (biometrics) get increasingly embedded into various systems (e.g., cellular phones), it
becomes increasingly important to analyze the impact of biometrics on the overall integrity of
the system and its social acceptability as well as the related security and privacy issues.
plates of all the users in the system database; the output is either the identity of an enrolled
user or an alert message such as user not identified. Because identification in large databases
is computationally expensive, classification and indexing techniques are often deployed to
limit the number of templates that have to be matched against the input.
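The distinction between verification (a 1:1 match against a claimed identity) and identification (a 1:N search over the whole database) can be sketched in a few lines of Python. This is an illustrative sketch, not code from the book: `match_score`, the list-based template format, and the threshold value are hypothetical placeholders for a real fingerprint matcher.

```python
# Illustrative sketch of verification (1:1) vs. identification (1:N).
# match_score, the list-based templates, and the threshold are
# hypothetical placeholders for a real fingerprint matcher.

def match_score(input_features, template):
    # Toy similarity: fraction of matching feature values. A real system
    # would compare minutiae sets or other fingerprint features.
    return sum(a == b for a, b in zip(input_features, template)) / len(template)

def verify(input_features, claimed_id, database, threshold=0.8):
    """1:1 match: accept or reject a claimed identity."""
    return match_score(input_features, database[claimed_id]) >= threshold

def identify(input_features, database, threshold=0.8):
    """1:N search: return the best-matching enrolled identity, if any."""
    best_id, best_score = None, 0.0
    for user_id, template in database.items():
        score = match_score(input_features, template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else "user not identified"
```

In a large-scale identification system, the classification and indexing techniques mentioned above would restrict the loop in `identify` to a short candidate list rather than the whole database.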
Figure 1.1. Block diagrams of the enrollment, verification, and identification tasks. During enrollment, the user interface captures the biometric under a name (PIN), a quality checker screens the acquisition, the feature extractor computes a template, and the template is stored with the name in the system database. During verification, the feature extractor processes the input and the matcher compares it against the one template retrieved for the claimed identity (1 match), outputting True/False. During identification, the matcher compares the input against all N templates in the system database (N matches), outputting the user's identity or "user not identified".
Depending on the application domain, a biometric system could operate either as an on-line system or an off-line system. An on-line system requires the recognition to be performed quickly and an immediate response is required (e.g., a computer network logon application).
On the other hand, an off-line system usually does not require the recognition to be performed
immediately and a relatively long response delay is allowed (e.g., an employee background
check application). Typically, on-line systems are fully automatic and require that the biometric characteristic be captured using a live-scan scanner, the enrollment process be unattended,
there be no (manual) quality control, and the matching and decision be fully automatic. Off-line systems, however, are typically semi-automatic, where the biometric acquisition could be
through an off-line scanner (e.g., scanning a fingerprint image from a latent or inked fingerprint card), the enrollment may be supervised (e.g., when a criminal is booked, a forensic
expert or a police officer may guide the fingerprint acquisition process), a manual quality
check may be performed to ensure good quality acquisition, and the matcher may return a list
of candidates which are then manually examined by a forensic expert to arrive at a final (human) decision.
An application could operate either in a positive or a negative recognition mode:
in a positive recognition application, the system establishes whether the person is
who he (implicitly or explicitly) claims to be. The purpose of a positive recognition is
to prevent multiple people from using the same identity. For example, if only Alice is
authorized to enter a certain secure area, then the system will grant access only to Alice. If the system fails to match the enrolled template of Alice with the input, a rejection results; otherwise, an acceptance results;
in a negative recognition application, the system establishes whether the person is
who he (implicitly or explicitly) denies being. The purpose of negative recognition is
to prevent a single person from using multiple identities. For example, if Alice has already received welfare benefits and now she claims that she is Becky and would like
to receive the welfare benefits of Becky (this is called double dipping), the system
will establish that Becky is not who she claims to be. If the system fails to match the
input biometric of Becky with a database of people who have already received benefits, an acceptance results; otherwise, a rejection results.
Note that although the traditional methods of user authentication such as passwords, PINs, keys, and tokens may work for positive recognition, negative recognition can only be established through biometrics. Furthermore, positive recognition applications can operate in either verification or identification mode, but negative recognition applications cannot work in verification mode: the system has to search the entire archive to prove that the given input is not already present.
A biometric system can be classified according to a number of other application-dependent
characteristics. Wayman (1999a) suggests that all the biometric applications may be classified
into categories based on their characteristics:
1. cooperative versus non-cooperative,
2. overt versus covert,
3. habituated versus non-habituated,
4. attended versus non-attended,
5. standard versus non-standard environment,
6. public versus private,
7. closed versus open.
Cooperative versus non-cooperative dichotomy refers to the behavior of the impostor in interacting with the system. For example, in a positive recognition system, it is in the best interest of an impostor to cooperate with the system to be accepted as a valid user. On the other
hand, in a negative recognition system, it is in the best interest of the impostor not to cooperate
with the system so that she does not get recognized. Electronic banking is an example of a
cooperative application whereas an airport application to identify terrorists who will try to
break the system is an example of a non-cooperative application.
If a user is aware that he is being subjected to a biometric recognition, the application is
categorized as overt. If the user is unaware, the application is covert. Facial recognition can be
used in a covert application while fingerprint recognition cannot be used in this mode (except
for criminal identification based on latent fingerprints). Most commercial uses of biometrics
are overt, whereas government, forensic, and surveillance applications are typically covert.
Also, most verification applications are overt whereas identification applications generally fall
in the covert category.
Habituated versus non-habituated use of a biometric system refers to how often the enrolled users are subjected to biometric recognition. For example, a computer network logon application typically has habituated users (after an initial habituation period) due to their use of the system on a regular basis. However, a driver's license application typically has non-habituated users, since a driver's license is renewed only once in several years. This is an important consideration when designing a biometric system because the familiarity of users with the system affects recognition accuracy.
Attended versus non-attended classification refers to whether the process of biometric data
acquisition in an application is observed, guided, or supervised by a human (e.g., a security
officer). Furthermore, an application may have an attended enrollment but non-attended recognition. For example, a banking application may have a supervised enrollment when an ATM
card is issued to a user but the subsequent uses of the biometric system for ATM transactions
will be non-attended. Non-cooperative applications generally require attended operation.
Standard versus non-standard environments refer to whether the system is being operated
in a controlled environment (such as temperature, pressure, moisture, lighting conditions, etc.).
Typically, indoor applications such as computer network logon operate in a controlled environment whereas outdoor applications such as keyless car entry or parking lot surveillance
operate in a non-standard environment. This classification is also important for the system
designer as a more rugged biometric sensor is needed for a non-standard environment. Similarly, infrared face recognition may be preferred over visible-band face recognition for outdoor
surveillance at night.
Public or private dichotomy refers to whether the users of the system are customers or employees of the organization deploying the biometric system. For example, a network logon
application is used by the employees and managed by the information technology manager of
the same company. Thus it is a private application. The use of biometric data in conjunction
with electronic identity cards is an example of a public application.
Closed versus open systems refers to whether a person's biometric template is used for a
single or multiple applications. For example, a user may use a fingerprint-based recognition
system to enter secure facilities, for computer network logon, electronic banking, and ATM.
Should all these applications use separate templates (databases) for each application, or should
they all access the same template (database)? A closed system may be based on a proprietary
template whereas an open system will need standard data formats and compression methods to
exchange and compare information between different systems (most likely developed by different commercial vendors).
Note that the most popular commercial applications have the following attributes: cooperative, overt, habituated, attended enrollment and non-attended recognition, standard environment, closed, and private.
A number of biometric identifiers are in use in various applications (Figure 1.2). Each
biometric has its strengths and weaknesses, and the choice typically depends on the application. No single biometric is expected to meet the requirements of all applications effectively. The match between a biometric and an application is determined by the characteristics of the application and the properties of the biometric.
Figure 1.2. Some of the biometrics are shown: a) ear, b) face, c) facial thermogram, d) hand
thermogram, e) hand vein, f) hand geometry, g) fingerprint, h) iris, i) retina, j) signature, and k)
voice.
When choosing a biometric for an application the following issues have to be addressed:
Does the application need verification or identification? If an application requires an
identification of a subject from a large database, it needs a scalable and relatively
more distinctive biometric (e.g., fingerprint, iris, or DNA).
What are the operational modes of the application? For example, whether the application is attended (semi-automatic) or unattended (fully automatic), whether the users
are habituated (or willing to be habituated) to the given biometrics, whether the application is covert or overt, whether subjects are cooperative or non-cooperative, and so
on.
What is the storage requirement of the application? For example, an application that
performs the recognition at a remote server may require a small template size.
How stringent are the performance requirements? For example, an application that
demands very high accuracy needs a more distinctive biometric.
What types of biometrics are acceptable to the users? Different biometrics are acceptable in applications deployed in different demographics depending on the cultural,
ethical, social, religious, and hygienic standards of that society. The acceptability of a
biometric in an application is often a compromise between the sensitivity of a community to various perceptions/taboos and the value/convenience offered by biometrics-based recognition.
A brief introduction to the most common biometrics is provided below.
DNA: DeoxyriboNucleic Acid (DNA) is the one-dimensional ultimate unique code
for one's individuality, except for the fact that identical twins have identical DNA
patterns. It is, however, currently used mostly in the context of forensic applications
for person recognition. Several issues limit the utility of this biometric for other applications: i) contamination and sensitivity: it is easy to steal a piece of DNA from an
unsuspecting subject that can be subsequently abused for an ulterior purpose; ii)
automatic real-time recognition issues: the present technology for DNA matching requires cumbersome chemical methods (wet processes) involving an expert's skills
and is not geared for on-line non-invasive recognition; iii) privacy issues: information
about susceptibilities of a person to certain diseases could be gained from the DNA
pattern and there is a concern that the unintended abuse of genetic code information
may result in discrimination, for example, in hiring practices.
Ear: It is known that the shape of the ear and the structure of the cartilaginous tissue
of the pinna are distinctive. The features of an ear are not expected to be unique to an
individual. The ear recognition approaches are based on matching the distance of salient points on the pinna from a landmark location on the ear.
Face: The face is one of the most acceptable biometrics because it is one of the most
common methods of recognition that humans use in their visual interactions. In addition, the method of acquiring face images is nonintrusive. Facial disguise is of concern in unattended recognition applications. It is very challenging to develop face
recognition techniques that can tolerate the effects of aging, facial expressions, slight
variations in the imaging environment, and variations in the pose of the face with respect to the camera (2D and 3D rotations).
Facial, hand, and hand vein infrared thermograms: The pattern of heat radiated by the human body is a characteristic of each individual body and can be captured by an infrared camera in an unobtrusive way, much like a regular (visible spectrum) photograph. The technology could be used for covert recognition and could distinguish between identical twins. A thermogram-based system is non-contact and non-invasive, but sensing is challenging in uncontrolled environments, where heat-emanating surfaces in the vicinity of the body, such as room heaters and vehicle exhaust pipes, may drastically affect image acquisition. A related technology using near-infrared imaging is used to scan the back of a clenched fist to determine hand vein structure. Infrared sensors are prohibitively expensive, which is a factor inhibiting widespread use of thermograms.
Gait: Gait is the peculiar way one walks and is a complex spatio-temporal biometric.
Gait is not supposed to be very distinctive, but is sufficiently characteristic to allow
verification in some low-security applications. Gait is a behavioral biometric and may
not stay invariant, especially over a large period of time, due to large fluctuations of
body weight, major shift in the body weight, major injuries involving joints or brain,
or due to inebriety. Acquisition of gait is similar to acquiring facial pictures and
hence it may be an acceptable biometric. Because gait-based systems use a video sequence of a walking person to measure several different movements of each articulated joint, they are compute- and input-intensive.
Hand and finger geometry: Some features related to a human hand (e.g., length of
fingers) are relatively invariant and peculiar (although not very distinctive) to an individual. The image acquisition system requires cooperation of the subject and captures frontal and side view images of the palm flatly placed on a panel with
outstretched fingers. The representational requirements of the hand are very small
(nine bytes in one of the commercially available products), which is an attractive feature for bandwidth- and memory-limited systems. Due to its limited distinctiveness,
hand geometry-based systems are typically used for verification and do not scale well
for identification applications. Finger geometry systems (which measure the geometry of only one or two fingers) may be preferred because of their compact size.
Iris: Visual texture of the human iris is determined by the chaotic morphogenetic
processes during embryonic development and is posited to be distinctive for each
person and each eye (Daugman, 1999a). An iris image is typically captured using a
non-contact imaging process. Capturing an iris image involves cooperation from the
user, both to register the image of iris in the central imaging area and to ensure that
the iris is at a predetermined distance from the focal plane of the camera. The iris
recognition technology is believed to be extremely accurate and fast.
Keystroke dynamics: It is hypothesized that each person types on a keyboard in a characteristic way. This behavioral biometric is not expected to be unique to each individual, but it offers sufficient discriminatory information to permit identity verification; for some individuals, however, one may expect to observe large variations from typical typing patterns. The keystrokes of a person using a system could be monitored unobtrusively as that person is keying in information.
Odor: It is known that each object exudes an odor that is characteristic of its chemical
composition and could be used for distinguishing various objects. A whiff of air surrounding an object is blown over an array of chemical sensors, each sensitive to a certain group of (aromatic) compounds. A component of the odor emitted by a human
(or any animal) body is distinctive to a particular individual. It is not clear if the invariance in the body odor could be detected despite deodorant smells and varying
chemical composition of the surrounding environment.
Retinal scan: The retinal vasculature is rich in structure and is supposed to be a characteristic of each individual and each eye. It is claimed to be the most secure biometric since it is not easy to change or replicate the retinal vasculature. The image
capture requires a person to peep into an eyepiece and focus on a specific spot in the
11
visual field so that a predetermined part of the retinal vasculature may be imaged.
The image acquisition involves cooperation of the subject, entails contact with the
eyepiece, and requires a conscious effort on the part of the user. All these factors adversely affect public acceptability of retinal biometrics. Retinal vasculature can reveal
some medical conditions (e.g., hypertension), which is another factor standing in the
way of public acceptance of retinal scan-based biometrics.
Signature: The way a person signs his name is known to be a characteristic of that individual. Although signatures require contact with the writing instrument and a conscious effort, they seem to be acceptable in many government, legal, and commercial transactions
as a method of verification. Signatures are a behavioral biometric that change over a
period of time and are influenced by physical and emotional conditions of the signatories. Signatures of some people vary a lot: even successive impressions of their signature are significantly different. Furthermore, professional forgers can reproduce
signatures to fool the unskilled eye.
Voice: Voice capture is unobtrusive and voice print is an acceptable biometric in almost all societies. Voice may be the only feasible biometric in applications requiring
person recognition over a telephone. Voice is not expected to be sufficiently distinctive to permit identification of an individual from a large database of identities.
Moreover, a voice signal available for recognition is typically degraded in quality by
the microphone, communication channel, and digitizer characteristics. Voice is also
affected by a person's health (e.g., a cold), stress, emotions, and so on. Besides, some
people seem to be extraordinarily skilled in mimicking others.
These various biometric identifiers described above are compared in Table 1.1. Note that
fingerprint recognition has a very good balance of all the desirable properties. Every human
being possesses fingerprints, with the exception of people with certain hand-related disabilities. Fingerprints
are very distinctive (see Chapter 8); fingerprint details are permanent, even if they may temporarily change slightly due to cuts and bruises on the skin or weather conditions. Live-scan fingerprint sensors can easily capture high-quality images and they do not suffer from the
problem of segmentation of the fingerprint from the background (e.g., unlike face recognition). However, they are not suitable for covert applications (e.g., surveillance) as live-scan
fingerprint scanners cannot capture a fingerprint image from a distance without the knowledge
of the person. The deployed fingerprint-based biometric systems offer good performance and
fingerprint sensors have become quite small and affordable (see Chapter 2). Because fingerprints have a long history of use in forensic divisions worldwide for criminal investigations,
they have a stigma of criminality associated with them. However, this is changing with the
high demand of automatic recognition to fight identity fraud in our electronically interconnected society. With a marriage of fingerprint recognition, cryptographic techniques, and vitality detection, fingerprint systems are becoming quite difficult to circumvent (see Chapter 9).
Fingerprint recognition is one of the most mature biometric technologies and is suitable for a
large number of recognition applications. This is also reflected in revenues generated by various biometric technologies in the year 2002 (see Figure 1.3).
Biometric identifier | Universality | Distinctiveness | Permanence | Collectability | Performance | Acceptability | Circumvention
DNA                  | H | H | H | L | H | L | L
Ear                  | M | M | H | M | M | H | M
Face                 | H | L | M | H | L | H | H
Facial thermogram    | H | H | L | H | M | H | L
Fingerprint          | M | H | H | M | H | M | M
Gait                 | M | L | L | H | L | H | M
Hand geometry        | M | M | M | H | M | M | M
Hand vein            | M | M | M | M | M | M | L
Iris                 | H | H | H | M | H | L | L
Keystroke            | L | L | L | M | L | M | M
Odor                 | H | H | H | L | L | M | L
Retina               | H | H | M | L | H | L | L
Signature            | L | L | L | H | L | H | H
Voice                | M | L | L | M | L | H | H
Table 1.1. Comparison of biometric technologies. The data are based on the perception of the
authors. High, Medium, and Low are denoted by H, M, and L, respectively.
[Pie chart: Fingerprint 52.1%, Middleware 13.2%, Face 12.4%, Hand 10.0%, Iris 5.8%, Voice 4.4%, Signature 2.1%]
Figure 1.3. Biometric Market Report (International Biometric Group) estimated the revenue of
various biometrics in the year 2002 and showed that fingerprint-based biometric systems continue to be the leading biometric technology in terms of market share, commanding more than
50% of non-AFIS biometric revenue. Face recognition was second with 12.4%. Note that AFIS
are used in forensic applications.
presence of noise. H0 is the hypothesis that the received signal is noise alone, and H1 is the
hypothesis that the received signal is message plus the noise. Such a hypothesis testing formulation inherently contains two types of errors:
Type I: false match (D1 is decided when H0 is true);
Type II: false non-match (D0 is decided when H1 is true).
False Match Rate (FMR) is the probability of type I error (also called significance level of the
hypothesis test) and False Non-Match Rate (FNMR) is the probability of type II error:
FMR = P(D1| H0 = true);
FNMR = P(D0| H1 = true).
Note that (1 − FNMR) is also called the power of the hypothesis test.
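The two error rates can be estimated empirically from sample matching scores, as the following sketch illustrates (the score lists are invented for illustration; a real evaluation would use scores from many fingerprint pairs):

```python
# A minimal sketch (not from the book) of estimating FMR and FNMR from
# sample matching scores at a threshold t; the score lists are invented.

def fmr(impostor_scores, t):
    """Fraction of impostor pairs with score >= t (false matches)."""
    return sum(s >= t for s in impostor_scores) / len(impostor_scores)

def fnmr(genuine_scores, t):
    """Fraction of genuine pairs with score < t (false non-matches)."""
    return sum(s < t for s in genuine_scores) / len(genuine_scores)

genuine = [0.9, 0.8, 0.75, 0.6, 0.95]   # same-finger pair scores
impostor = [0.1, 0.3, 0.55, 0.2, 0.05]  # different-finger pair scores
print(fmr(impostor, 0.5))   # 0.2: only the 0.55 impostor score reaches t
print(fnmr(genuine, 0.5))   # 0.0: no genuine score falls below t
```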
To evaluate the accuracy of a biometric system one must collect scores generated from a
number of fingerprint pairs from the same finger (the distribution p(s|H1 = true) of such scores
is traditionally called genuine distribution), and scores generated from a number of fingerprint
pairs from different fingers (the distribution p(s|H0 = true) of such scores is traditionally called
impostor distribution). Figure 1.4 graphically illustrates the computation of FMR and FNMR
over genuine and impostor distributions:
FNMR = ∫_0^t p(s | H1 = true) ds,
FMR = ∫_t^1 p(s | H0 = true) ds.
Figure 1.4. FMR and FNMR for a given threshold t are displayed over the genuine and impostor
score distributions. From the drawing, it is evident that FMR is the percentage of impostor pairs
whose matching score is greater than or equal to t, and FNMR is the percentage of genuine
pairs whose matching score is less than t.
There is a strict tradeoff between FMR and FNMR in every biometric system (Golfarelli, Maio, and Maltoni, 1997). In fact, both FMR and FNMR are functions of the system threshold t, and we should therefore refer to them as FMR(t) and FNMR(t), respectively. If t is decreased to make the system more tolerant to input variations and noise, then FMR(t) increases; vice versa, if t is raised to make the system more secure, then FNMR(t) increases accordingly. A system designer may not know in advance the particular application for which the system may be used (or a single system may be designed for a wide variety of applications), so it is advisable to report system performance at all operating points (thresholds t). This is done by plotting a Receiver Operating Characteristic (ROC) curve. A ROC curve is a plot of FMR against (1 − FNMR) for various decision thresholds (often FNMR is reported along the vertical axis instead of (1 − FNMR)). Figures 1.5.a through c show examples of score distributions, FMR(t) and FNMR(t) curves, and a ROC curve, respectively.
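An ROC curve can be traced by sweeping the threshold t and recording the resulting (FMR(t), FNMR(t)) pairs, as in this hedged sketch; the score samples and threshold grid are invented for illustration:

```python
# Hedged sketch of tracing ROC points by sweeping the threshold t and
# recording (FMR(t), FNMR(t)); the scores and thresholds are invented.

def roc_points(genuine, impostor, thresholds):
    points = []
    for t in thresholds:
        fmr = sum(s >= t for s in impostor) / len(impostor)
        fnmr = sum(s < t for s in genuine) / len(genuine)
        points.append((t, fmr, fnmr))
    return points

genuine = [0.9, 0.8, 0.7, 0.6]
impostor = [0.1, 0.2, 0.4, 0.65]
for t, fmr, fnmr in roc_points(genuine, impostor, [0.0, 0.5, 1.0]):
    # At t = 0 every impostor pair is falsely matched (FMR = 1); at t = 1
    # every genuine pair is falsely rejected (FNMR = 1).
    print(t, fmr, fnmr)
```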
Besides the above distributions and curves, some compact indices are also used to summarize the accuracy of a verification system.
Equal-Error Rate (EER) denotes the error rate at the threshold t for which false
match rate and false non-match rate are identical: FMR(t) = FNMR(t) (see Figure
1.6). In practice, because the matching score distributions are not continuous (due to
the finite number of matched pairs and the quantization of the output scores), an exact
EER point might not exist. In this case, instead of a single value, an interval should
be reported (Maio et al., 2000). Although EER is an important indicator, in practice a fingerprint-based biometric system is rarely operated at the threshold corresponding to EER; often a more stringent threshold is set to reduce FMR at the expense of a rise in FNMR.
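Locating the EER on discrete data can be sketched as follows: scan candidate thresholds and keep the one where FMR(t) and FNMR(t) are closest; the two rates at that threshold then bound the EER interval. This is a hypothetical illustration with invented scores, not the reporting procedure of Maio et al. (2000):

```python
# Illustrative sketch of bounding the EER on discrete score data: scan
# candidate thresholds and keep the one where FMR(t) and FNMR(t) are
# closest. With quantized scores an exact crossing may not exist, so the
# two rates at that threshold bound the EER interval. Scores are invented.

def eer_interval(genuine, impostor):
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        fmr = sum(s >= t for s in impostor) / len(impostor)
        fnmr = sum(s < t for s in genuine) / len(genuine)
        if best is None or abs(fmr - fnmr) < abs(best[1] - best[2]):
            best = (t, fmr, fnmr)
    _, fmr, fnmr = best
    return min(fmr, fnmr), max(fmr, fnmr)

genuine = [0.9, 0.8, 0.7, 0.4]
impostor = [0.1, 0.2, 0.5, 0.6]
print(eer_interval(genuine, impostor))   # (0.25, 0.25)
```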
ZeroFNMR is the lowest FMR at which no false non-matches occur (see Figure 1.6).
ZeroFMR is the lowest FNMR at which no false matches occur (see Figure 1.6).
Failure To Capture (FTC) rate is associated with the automatic capture function of a
biometric device and denotes the percentage of times the device fails to automatically
capture the biometric when it is presented to a sensor. A high failure to capture rate
makes the biometric device difficult to use.
Failure To Enroll (FTE) rate denotes the percentage of times users are not able to enroll in the recognition system. There is a tradeoff between the FTE rate and the accuracy (FMR and FNMR) of a system. FTE errors typically occur when the recognition
system performs a quality check to ensure that only good quality templates are stored
in the database and rejects poor quality templates. As a result, the database contains
only good quality templates and the system accuracy (FMR and FNMR) improves.
Failure To Match (FTM) rate is the percentage of times the input cannot be processed
or matched against a valid template because of insufficient quality. This is different
from a false non-match error; in fact, in a failure to match error, the system is not able
to make a decision, whereas in false non-match error, the system wrongly decides
that the two inputs do not come from the same finger.
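The EER interval described above can be estimated directly from empirical genuine and impostor score sets. The sketch below uses synthetic, normally distributed scores purely for illustration; the function names and the distributions are assumptions, not taken from the original text:

```python
import numpy as np

def fmr_fnmr(genuine, impostor, t):
    """Empirical error rates at threshold t (higher score = more similar):
    FMR is the fraction of impostor scores accepted (score >= t);
    FNMR is the fraction of genuine scores rejected (score < t)."""
    return float(np.mean(impostor >= t)), float(np.mean(genuine < t))

def eer_interval(genuine, impostor):
    """Return a (low, high) EER interval: with a finite number of matched
    pairs and quantized scores, an exact FMR(t) = FNMR(t) crossing may
    not exist, so an interval is reported instead of a single value."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    t_best = min(thresholds,
                 key=lambda t: abs(np.subtract(*fmr_fnmr(genuine, impostor, t))))
    fmr, fnmr = fmr_fnmr(genuine, impostor, t_best)
    return min(fmr, fnmr), max(fmr, fnmr)

# Illustrative synthetic score distributions (not real FVC2002 data).
rng = np.random.default_rng(0)
genuine = rng.normal(0.70, 0.10, 2800)   # 2800 genuine matching scores
impostor = rng.normal(0.30, 0.10, 4950)  # 4950 impostor matching scores
eer_low, eer_high = eer_interval(genuine, impostor)
```

Because the two synthetic distributions are about four standard deviations apart, the resulting EER lands near 2%; a real system would be characterized on measured scores.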
Figure 1.5. Evaluation of a fingerprint verification algorithm over FVC2002 (Maio et al., 2002b)
database DB1: a) genuine and impostor distributions were computed from 2800 genuine pairs
and 4950 impostor pairs, respectively; b) FMR(t) and FNMR(t) are derived from the score distributions in a); c) ROC curve is derived from the FMR(t) and FNMR(t) curves in b).
Figure 1.6. An example of FMR(t) and FNMR(t) curves, where the points corresponding to EER,
ZeroFNMR, and ZeroFMR are highlighted.
For more formal definitions of errors in a fingerprint-based verification system, and practical suggestions on how to compute and report them for a given dataset, the reader should refer
to the FVC2000 report (Maio et al. (2000); also included in the DVD accompanying this book)
and Biometric Testing Best Practices, Version 2.01 (UKBWG, 2002).
The practical performance requirements of a biometric system are very much application
related. From the viewpoint of system accuracy, an extremely low false non-match rate may
be the primary objective. For example, in some forensic applications such as criminal
identification, it is the false non-match rate that is a major concern and not the false match
rate: that is, we do not want to miss a criminal even at the risk of manually examining a large
number of potential matches identified by the biometric system. In forensic applications, it is
the human expert that will make the final decision anyway. At the other extreme, a very low
false match rate may be the most important factor in a highly secure access control
application, where the primary objective is not to let in any impostors although we are
concerned with the possible inconvenience to legitimate users due to a high false non-match
rate. In between these two extremes are several civilian applications, where both false match
rate and false non-match rate need to be considered. For example, in an application such as
ATM card verification, a false match means a loss of several hundred dollars, whereas a high
false non-match rate may irritate customers. Figure 1.7 graphically depicts the FMR and
FNMR tradeoff preferred by different types of applications.
Figure 1.7. Typical operating points of different applications displayed on an ROC curve.
Let us assume that no indexing/retrieval mechanism is available (i.e., the entire database
containing N templates has to be searched), and that a single template for each user is present
in the database. Let FNMR_N and FMR_N denote the identification false non-match rate and false
match rate, respectively; then:
FNMR_N = FNMR; in fact, the probability of falsely non-matching the input against
the user template is the same as in verification mode (except that this expression does
not take into account the probability that a false match may occur before the correct
template is visited; see Cappelli, Maio, and Maltoni (2000c));
FMR_N = 1 − (1 − FMR)^N; in fact, a false match occurs when the input falsely matches
one or more templates in the database. FMR_N is then computed as one minus the
probability that no false match is made with any of the database templates. In the
above expression, (1 − FMR) is the probability that the input does not falsely match a
single template, and (1 − FMR)^N is the probability that it does not falsely match any
of the database templates. If FMR is very small, then the above expression can be
approximated by FMR_N ≅ N · FMR, and therefore we can state that the probability of a
false match increases linearly with the size of the database.
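The two expressions above can be sketched in a few lines; the function name is illustrative, and the numbers simply reproduce the back-of-the-envelope reasoning rather than any measured system:

```python
def identification_rates(fmr, fnmr, n):
    """Verification error rates extended to identification against a
    database of n templates (no indexing, one template per user)."""
    fnmr_n = fnmr                    # FNMR_N = FNMR
    fmr_n = 1 - (1 - fmr) ** n       # FMR_N = 1 - (1 - FMR)^N
    return fmr_n, fnmr_n

# FMR = 10^-5 against a 10,000-user database: FMR_N is close to the
# linear approximation N * FMR = 0.1 (i.e., roughly 10%).
fmr_n, fnmr_n = identification_rates(fmr=1e-5, fnmr=0.01, n=10_000)
print(round(fmr_n, 4))   # ~0.0952, slightly below N * FMR
```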
This result has serious implications for the design of large-scale identification systems. Usually, computation speed is perceived as the biggest problem in scaling an identification application. Actually, accuracy scales even worse than speed: in fact, consider an identification
application with 10,000 users. We can certainly find a combination of a fast algorithm and a
fast architecture (possibly exploiting parallelism) capable of carrying out an identification in
a few seconds. On the other hand, suppose that, for an acceptable FNMR, the FMR of the chosen algorithm is 10^-5 (i.e., just one false match in 100,000 matches). Then the probability of
falsely accepting an individual during identification is FMR_N ≅ 10%, and everyone has a good
chance of gaining access to the system by trying all ten of their fingers. Multimodal biometric
systems (see Chapter 7) seem to be the only obvious solution to
accuracy scalability in large-scale automatic identification.
If the templates in the database have been classified/indexed (see Section 1.12 and Chapter
5), then only a portion of the database is searched during identification, and this results in a
different formulation of FNMR_N and FMR_N:
FNMR_N = RER + (1 − RER) · FNMR, where RER (Retrieval Error Rate) is the probability that the database template corresponding to the searched finger is wrongly discarded by the retrieval mechanism. The above expression is obtained using the
following argument: in case the template is not correctly retrieved (this happens with
probability RER), the system always generates a false non-match, whereas in the case
where the retrieval returns the right template (this happens with probability
(1 − RER)), the false non-match rate of the system is FNMR. Also, this expression
is only an approximation, as it does not consider the probability of falsely matching an
incorrect template before the right one is retrieved (Cappelli, Maio, and Maltoni,
2000c);
FMR_N = 1 − (1 − FMR)^(N·P), where P (also called the penetration rate) is the average percentage of the database searched during the identification of an input fingerprint.
The exact formulation of errors in an identification system is derived in Cappelli, Maio, and
Maltoni (2000c). The more complex case where the characteristics of the indexing/retrieval
mechanism are known is also discussed there.
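A minimal sketch of the indexed-case formulas, with illustrative parameter values (the function name and the numbers are assumptions, not measurements):

```python
def indexed_identification_rates(fmr, fnmr, n, rer, p):
    """Identification error rates with classification/indexing:
    rer is the retrieval error rate, p the penetration rate (average
    fraction of the n-template database actually searched)."""
    fnmr_n = rer + (1 - rer) * fnmr       # FNMR_N = RER + (1 - RER) * FNMR
    fmr_n = 1 - (1 - fmr) ** (n * p)      # FMR_N = 1 - (1 - FMR)^(N * P)
    return fmr_n, fnmr_n

# Searching only 20% of a 10,000-template database cuts the false match
# exposure roughly five-fold, at the cost of a slightly higher FNMR_N.
fmr_n, fnmr_n = indexed_identification_rates(fmr=1e-5, fnmr=0.01,
                                             n=10_000, rer=0.01, p=0.2)
```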
tested system has its own acquisition device. Data collection across all tested systems
has to be carried out in the same environment with the same population. Test results
are repeatable only to the extent that the modeled scenario can be carefully controlled
(UKBWG, 2002).
Operational evaluation: The goal of operational testing is to determine the performance of a complete biometric system in a specific application environment with a
specific target population. In general, operational test results are not repeatable because of unknown and undocumented differences between operational environments
(UKBWG, 2002).
In scenario and operational evaluations, the accuracy of a biometric system depends heavily on several variables: the composition of the population (e.g., occupation, age, demographics, race), the environment, the system operational mode, and other application-specific
constraints. In an ideal situation, one would like to characterize the application-independent
performance of a recognition system and be able to predict the real operational performance of
the system based on the application. Rigorous and realistic modeling techniques characterizing
data acquisition and matching processes are the only way to grasp and extrapolate the performance evaluation results. In the case of fingerprint recognition, the results of fingerprint
synthesis (see Chapter 6) exhibit many characteristics of finger appearance that can be exploited for simulations, but there do not exist any formal models for the data acquisition process under different conditions (e.g., different skin conditions, different distortions, different
types of cuts and their states of healing, subtle user mischief, and adversarial testing conditions, etc.).
Until many aspects of biometric recognition algorithms and application requirements are
clearly understood, the empirical, application-dependent evaluation techniques will be predominant and the evaluation results obtained using these techniques will be meaningful only
for a specific database in a specific test environment and specific application. The disadvantage of the empirical evaluation is that it is not only expensive to collect the data for each
evaluation, but it is also often difficult to objectively compare the evaluation results of two
different systems. Depending upon the data collection protocol, the performance results can
vary significantly. Biometric samples collected in a very controlled and non-realistic environment provide over-optimistic results that do not generalize well in practice.
For any performance metric to be able to generalize to the entire population of interest, the
test data should i) be representative of the population and ii) contain enough samples from
each category of the population (large sample size). Furthermore, the collection of two samples of the same biometric should be separated by a sufficient time period. Different applications, depending on whether the subjects are cooperative and habituated, or whether the target
population is benevolent or subversive, may require a completely different sample set (Wayman, 2001). Size of the sample set is a very important factor in obtaining a reliable estimate of
the error rates. The larger the representative test sample, the more reliable the test
results (i.e., the smaller the confidence interval). Data collection is expensive; therefore, it is desirable to
determine the smallest size of the database that will result in a given confidence interval. An
estimation of the smallest database size is typically governed either by heuristics (Doddington
et al., 1998) or by simplifying statistical assumptions (Wayman (2001) and UKBWG (2002)).
There are two methods of estimating confidence intervals: parametric and non-parametric.
To simplify the estimation, both approaches typically assume independent and identically distributed (i.i.d.) test samples. Furthermore, parametric methods make strong assumptions about
the form of the distribution. There is very little work on how to objectively test the validity of
these assumptions (Kittler, Messer, and Sadeghi, 2001). A typical parametric approach models
the test samples as independent Bernoulli trials and estimates the confidence intervals based
on the resulting binomial distribution, inasmuch as a collection of correlated Bernoulli trials is
also binomially distributed with a smaller variance (Viveros, Balasubramanian, and Mitas,
1984). Similarly, non-identically distributed test samples can be accommodated within the
parametric approach by making some assumptions about the data. Wayman (2001) applied
these methods to obtain estimates of accuracies as well as their confidence intervals. A non-parametric approach, such as the bootstrap, has been used by Bolle, Ratha, and Pankanti (1999) to
estimate the error rates as well as their confidence intervals. The non-parametric approaches
do not make any assumption about the form of the distributions. In addition, some nonparametric approaches such as bootstrapping techniques are known to be relatively immune to
violations of i.i.d. assumptions. Bolle, Ratha, and Pankanti (2001) further explicitly modeled
the weak dependence among typical fingerprint test sets by using a subset bootstrap technique.
This technique obtains a better estimate of the error rate confidence intervals than the techniques that do not take the dependency among the test data into account.
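The two families of interval estimates can be sketched as follows. This is a plain i.i.d. bootstrap for illustration, not the subset bootstrap of Bolle, Ratha, and Pankanti (2001), and the observed error counts are invented:

```python
import math
import random

def binomial_ci(errors, trials, z=1.96):
    """Parametric 95% CI (normal approximation to the binomial),
    treating each match attempt as an independent Bernoulli trial."""
    p = errors / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return max(0.0, p - half), min(1.0, p + half)

def bootstrap_ci(outcomes, n_boot=1000, alpha=0.05, seed=0):
    """Plain non-parametric bootstrap: resample the 0/1 error outcomes
    with replacement and take empirical percentiles of the resampled
    error rates. No distributional form is assumed."""
    rng = random.Random(seed)
    n = len(outcomes)
    rates = sorted(sum(rng.choices(outcomes, k=n)) / n for _ in range(n_boot))
    return (rates[int(alpha / 2 * n_boot)],
            rates[int((1 - alpha / 2) * n_boot) - 1])

# Hypothetical test: 28 false non-matches in 2800 genuine attempts (~1% FNMR).
outcomes = [1] * 28 + [0] * 2772
par_lo, par_hi = binomial_ci(28, 2800)
boot_lo, boot_hi = bootstrap_ci(outcomes)
```

With i.i.d. data the two intervals nearly coincide; correlated test samples (e.g., multiple impressions of the same finger) are precisely where the dependence-aware techniques discussed above diverge from this simple sketch.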
In summary, the performance evaluation of a biometric system is empirical and the resulting measures cannot be completely understood/compared without carefully considering the
methods that were used to acquire the underlying test data. Fortunately, the biometric community (e.g., United Kingdom Biometric Working Group) is making efforts towards establishing
best practices guidelines (UKBWG, 2002) for performance evaluation so that egregious mistakes in data collection can be avoided and the test results presented in a consistent and descriptive manner.
first scientific paper reporting his systematic study on the ridge, furrow, and pore structure in
fingerprints (Figure 1.9.a) (Lee and Gaensslen, 2001).
Figure 1.8. Examples of archaeological fingerprint carvings and historic fingerprint impressions:
a) Neolithic carvings (Gavrinis Island) (Moenssens, 1971); b) standing stone (Goat Island, 2000
B.C.) (Lee and Gaensslen, 2001); c) a Chinese clay seal (300 B.C.) (Lee and Gaensslen,
2001); d) an impression on a Palestinian lamp (400 A.D.) (Moenssens, 1971). Although impressions on the Neolithic carvings and the Goat Island standing stone might not have been used to indicate identity, there is sufficient evidence to suggest that the Chinese clay seal and impressions
on the Palestinian lamp were used to indicate the identity of the providers. Figures courtesy of
A. Moenssens, R. Gaensslen, and J. Berry.
Since then, a large number of researchers have invested huge amounts of effort in fingerprint studies. In 1788, a detailed description of the anatomical formations of fingerprints was
made by Mayer (Moenssens, 1971) in which a number of fingerprint ridge characteristics were
identified and characterized (Figure 1.9.b). Starting in 1809, Thomas Bewick began to use his
fingerprint as his trademark (Figure 1.9.c), which is believed to be one of the most important
milestones in the scientific study of fingerprint recognition (Moenssens, 1971). Purkinje, in
1823, proposed the first fingerprint classification scheme, which classified fingerprints into
nine categories according to the ridge configurations (Figure 1.9.d) (Moenssens, 1971). Henry
Faulds, in 1880, first scientifically suggested the individuality of fingerprints based on an empirical observation. At the same time, Herschel asserted that he had practiced fingerprint recognition for about 20 years (Lee and Gaensslen (2001) and Moenssens (1971)). These findings
established the foundation of modern fingerprint recognition. In the late nineteenth century,
Sir Francis Galton conducted an extensive study on fingerprints (Galton, 1892). He introduced
the minutiae features for fingerprint matching in 1888.
An important advance in fingerprint recognition was made in 1899 by Edward Henry, who
(actually his two assistants from India) established the well-known Henry system of fingerprint classification (Lee and Gaensslen, 2001). By the early twentieth century, the formations
of fingerprints were well understood. The biological principles of fingerprints (Moenssens,
1971) are summarized below:
1. individual epidermal ridges and furrows have different characteristics for different
fingerprints;
2. the configuration types are individually variable, but they vary within limits that allow for a systematic classification;
3. the configurations and minute details of individual ridges and furrows are permanent
and unchanging.
The first principle constitutes the foundation of fingerprint recognition and the second principle constitutes the foundation of fingerprint classification.
In the early twentieth century, fingerprint recognition was formally accepted as a valid personal identification method and became a standard routine in forensics (Lee and Gaensslen,
2001). Fingerprint identification agencies were set up worldwide and criminal fingerprint databases were established (Lee and Gaensslen, 2001). Various fingerprint recognition techniques, including latent fingerprint acquisition, fingerprint classification, and fingerprint
matching were developed. For example, the FBI fingerprint identification division was set up
in 1924 with a database of 810,000 fingerprint cards (see Federal Bureau of Investigation
(1984, 1991)).
With the rapid expansion of fingerprint recognition in forensics, operational fingerprint databases became so huge that manual fingerprint identification became infeasible. For example,
the total number of fingerprint cards (each card contains one impression each of the 10 fingers
of a person) in the FBI fingerprint database now stands well over 200 million from its original
number of 810,000 and is growing continuously. With thousands of requests being received
daily, even a team of more than 1300 fingerprint experts was not able to provide timely responses to these requests (Lee and Gaensslen, 2001). Starting in the early 1960s, the FBI, the
Home Office in the UK, and the Paris Police Department began to invest a large amount of effort
in developing automatic fingerprint identification systems (AFISs) (Lee and Gaensslen, 2001). Based
on the observations of how human fingerprint experts perform fingerprint recognition, three
major problems in designing AFISs were identified and investigated: digital fingerprint acquisition, local ridge characteristic extraction, and ridge characteristic pattern matching. Their
efforts were so successful that today almost every law enforcement agency worldwide uses an
AFIS. These systems have greatly improved the operational productivity of law enforcement
agencies and reduced the cost of hiring and training human fingerprint experts.
Automatic fingerprint recognition technology has now rapidly grown beyond forensic applications into civilian applications. In fact, fingerprint-based biometric systems are so popular
that they have almost become the synonym for biometric systems.
by the interaction of a specific genotype and a specific environment. Physical appearance and
fingerprints are, in general, a part of an individual's phenotype. Fingerprint formation is similar to the growth of capillaries and blood vessels in angiogenesis. The general characteristics
of the fingerprint emerge as the skin on the fingertip begins to differentiate. The differentiation
process is triggered by the growth in size of the volar pads on the palms, fingers, soles, and
toes. However, the flow of amniotic fluids around the fetus and its position in the uterus
change during the differentiation process. Thus the cells on the fingertip grow in a microenvironment that is slightly different from hand to hand and finger to finger. The finer details of
the fingerprints are determined by this changing microenvironment. A small difference in microenvironment is amplified by the differentiation process of the cells. There are so many
variations during the formation of fingerprints that it would be virtually impossible for two
fingerprints to be exactly alike. But, because the fingerprints are differentiated from the same
genes, they are not totally random patterns either.
The extent of variation in a physical trait due to a random development process differs
from trait to trait. By definition, identical twins cannot be distinguished based on DNA. Typically, most physical characteristics, such as body type, voice, and face, are very similar
for identical twins and automatic recognition based on face and hand geometry will most
likely fail to distinguish them. Although the minute details in the fingerprints of identical
twins are different (Jain, Prabhakar, and Pankanti, 2002), a number of studies have shown significant correlation in the fingerprint class (i.e., whorl, right loop, left loop, arch, tented arch)
of identical (monozygotic) twin fingers; correlation based on other generic attributes of the
fingerprint such as ridge count, ridge width, ridge separation, and ridge depth has also been
found to be significant in identical twins (Lin et al., 1982). In dermatoglyphics studies, the
maximum generic difference between fingerprints has been found among individuals of different races. Unrelated persons of the same race have very little generic similarity in their fingerprints, parent and child have some generic similarity as they share half the genes, siblings have
more similarity, and the maximum generic similarity is observed in monozygotic (identical)
twins, which is the closest genetic relationship (Cummins and Midlo, 1943).
touched by a finger. These latent prints can be lifted from the surface by employing certain
chemical techniques.
The main parameters characterizing a digital fingerprint image are: resolution, area, number of pixels, geometric accuracy, contrast, and geometric distortion. To maximize compatibility between digital fingerprint images and to ensure good quality of the acquired fingerprint
impressions, the US Criminal Justice Information Services (the largest division within the
FBI) released a set of specifications that regulate the quality and the format of both fingerprint
images and FBI-compliant off-line/live-scan scanners (Appendix F and Appendix G of CJIS
(1999)). Most of the commercial live-scan devices, designed for the non-AFIS market, do not
meet FBI specifications but, on the other hand, are usually more user-friendly, compact, and
significantly cheaper.
There are a number of live-scan sensing mechanisms (e.g., optical FTIR, capacitive, thermal, pressure-based, ultrasound, etc.) that can be used to detect the ridges and valleys present
in the fingertip. Figure 1.10 shows an off-line fingerprint image acquired with the ink technique, a latent fingerprint image, and some live-scan images acquired with different types of
commercial live-scan devices.
Although optical scanners have the longest history, the new solid-state sensors are gaining
great popularity because of their compact size and the ease of embedding them into laptop
computers, cellular phones, smart pens, and the like. Figure 1.11 shows some examples of
fingerprint sensors embedded in a variety of computer peripherals and other devices.
Chapter 2 of this book discusses fingerprint sensing technologies, provides some indications about the characteristics of commercially available scanners and shows images acquired
with a number of devices in different operating conditions (good quality fingers, poor quality
fingers, dry and wet fingers). One of the main causes of accuracy drop in fingerprint-based
systems is the small sensing area. To overcome (or at least to reduce) this problem, mosaicking techniques attempt to build a complete fingerprint representation from a set of smaller partially overlapping images.
Storing raw fingerprint images may be problematic for large AFISs. In 1995, the FBI fingerprint card archive contained over 200 million items, and its size was increasing at the rate of 30,000 to 50,000 new cards per day. Although the digitization of fingerprint cards seemed to be the most obvious choice, the resulting digital archive could become
extremely large. In fact, each fingerprint card, when digitized at 500 dpi, requires about 10
Mbytes of storage. A simple multiplication by 200 million yields the massive storage requirement of 2000 terabytes for the entire archive. The need for an effective compression technique
was then very urgent. Unfortunately, neither the well-known lossless methods nor the JPEG
methods were found to be satisfactory. A new compression technique (with small acceptable
loss), called Wavelet Scalar Quantization (WSQ), became the FBI standard for the compression of 500 dpi fingerprint images. Besides WSQ, a number of other compression techniques,
as surveyed in Chapter 2, have been proposed.
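The storage arithmetic above can be checked in a couple of lines. The 15:1 compression ratio is an often-quoted figure for WSQ, used here only as an illustrative assumption:

```python
# Back-of-the-envelope sizing from the text: 200 million ten-print
# cards, each about 10 MB when digitized at 500 dpi.
cards = 200_000_000
mb_per_card = 10
total_tb = cards * mb_per_card / 1_000_000   # MB -> TB
print(total_tb)   # 2000.0 terabytes, uncompressed

# With a lossy compression ratio of roughly 15:1 (assumed here for
# illustration), the archive shrinks to:
print(round(total_tb / 15, 1))   # ~133.3 terabytes
```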
Figure 1.10. Fingerprint images from: a) a live-scan FTIR-based optical scanner; b) a live-scan
capacitive scanner; c) a live-scan piezoelectric scanner; d) a live-scan thermal scanner; e) an
off-line inked impression; f) a latent fingerprint.
Figure 1.11. Fingerprint sensors can be embedded in a variety of devices for user recognition
purposes.
A good fingerprint representation should have the following two properties: saliency and
suitability. Saliency means that a representation should contain distinctive information about
the fingerprint. Suitability means that the representation can be easily extracted, stored in a
compact fashion, and be useful for matching. Saliency and suitability properties are not generally correlated. A salient representation is not necessarily a suitable representation. In addition,
in some biometric applications, storage space is at a premium. For example, in a smartcard
application, typically about 2 Kbytes of storage are available. In such situations, the representation also needs to be parsimonious.
Image-based representations, constituted by raw pixel intensity information, are prevalent
among the recognition systems using optical matching and correlation-based matching. However, the utility of the systems using such representation schemes may be limited due to factors such as brightness variations, image quality variations, scars, and large global distortions
present in the fingerprint image. Furthermore, an image-based representation requires a considerable amount of storage. On the other hand, an image-based representation preserves the
maximum amount of information, makes fewer assumptions about the application domain, and
therefore has the potential to be robust to wider varieties of fingerprint images. For instance, it
is extremely difficult to extract robust features from a (degenerate) finger devoid of any ridge
structure.
The fingerprint pattern, when analyzed at different scales, exhibits different types of features.
At the global level, the ridge line flow delineates a pattern similar to one of those
shown in Figure 1.12. Singular points, called loop and delta (denoted as squares and
triangles, respectively in Figure 1.12), are a sort of control points around which the
ridge lines are wrapped (Levi and Sirovich, 1972). Singular points and coarse ridge
line shape are very important for fingerprint classification and indexing (see Chapter
5), but their distinctiveness is not sufficient for accurate matching. External fingerprint shape, orientation image, and frequency image also belong to the set of features
that can be detected at the global level.
Figure 1.12. Fingerprint patterns as they appear at a coarse level: a) left loop; b) right loop; c)
whorl; d) arch; and e) tented arch; squares denote loop-type singular points, and triangles delta-type singular points.
At the local level, a total of 150 different local ridge characteristics, called minute details, have been identified (Moenssens, 1971). These local ridge characteristics are
not evenly distributed. Most of them depend heavily on the impression conditions and
quality of fingerprints and are rarely observed in fingerprints. The two most prominent ridge characteristics, called minutiae (see Figure 1.13), are: ridge termination
and ridge bifurcation. A ridge ending is defined as the ridge point where a ridge ends
abruptly. A ridge bifurcation is defined as the ridge point where a ridge forks or diverges into branch ridges. Minutiae in fingerprints are generally stable and robust to
fingerprint impression conditions. Although a minutiae-based representation is characterized by a high saliency, reliable automatic minutiae extraction can be problematic in low-quality fingerprints (hence the suitability of this kind of representation is
not optimal).
At the very-fine level, intra-ridge details can be detected. These are essentially the
finger sweat pores (see Figure 1.13) whose position and shape are considered highly
distinctive. However, extracting pores is feasible only in high-resolution fingerprint
images (e.g., 1000 dpi) of good quality and therefore this kind of representation is not
practical for most applications.
Figure 1.13. Minutiae (black-filled circles) in a portion of fingerprint image; sweat pores (empty
circles) on a single ridge line.
Chapter 3 describes fingerprint anatomy and introduces the techniques available for processing fingerprint images and extracting salient features. Specific sections are dedicated to the
definition and description of approaches for computing local ridge orientation, local ridge frequency, singular points, and minutiae. Particular emphasis is placed on fingerprint segmentation (i.e., isolation of fingerprint area from the background), fingerprint image enhancement,
and binarization, which are very important intermediate steps in the extraction of salient features.
Figure 1.14. Difficulty in fingerprint matching. Fingerprint images in a) and b) look different to an
untrained eye but they are impressions of the same finger. Fingerprint images in c) and d) look
similar to an untrained eye but they are from different fingers.
Human fingerprint examiners, in order to claim that two fingerprints are from the same
finger, evaluate several factors: i) global pattern configuration agreement, which means that
two fingerprints must be of the same type, ii) qualitative concordance, which requires that the
corresponding minute details must be identical, iii) quantitative factor, which specifies that at
least a certain number (a minimum of 12 according to the forensic guidelines in the United
States) of corresponding minute details must be found, and iv) corresponding minute details,
which must be identically inter-related. In practice, complex protocols have been defined for
fingerprint matching and a detailed flowchart is available to guide fingerprint examiners in
manually performing fingerprint matching.
Automatic fingerprint matching does not necessarily follow the same guidelines. In fact,
although automatic minutiae-based fingerprint matching is inspired by the manual procedure,
a large number of approaches have been designed over the last 40 years, and many of them
fingerprint cards). To reduce the search time and computational complexity, it is desirable to
classify these fingerprints in an accurate and consistent manner such that the input fingerprint
needs to be matched only with a subset of the fingerprints in the database. Fingerprint classification is a technique used to assign a fingerprint to one of the several pre-specified types already established in the literature (see Figure 1.12). Fingerprint classification can be viewed as
a coarse-level matching of the fingerprints. An input fingerprint is first matched to one of the
pre-specified types and then it is compared to a subset of the database corresponding to that
fingerprint type. For example, if the fingerprint database is binned into five classes, and a fingerprint classifier outputs two classes (primary and secondary) with extremely high accuracy,
then the identification system will only need to search two of the five bins, thus decreasing (in
principle) the search space 2.5-fold. Unfortunately, only a limited number of major fingerprint
categories have been identified (e.g., five), the distribution of fingerprints into these categories
is not uniform, and there are many ambiguous fingerprints (see Figure 1.15), whose exclusive membership cannot be reliably stated even by human experts. In fact, the definition of
each fingerprint category is both complex and vague. A human inspector needs a long period
of experience to reach a satisfactory level of performance in fingerprint classification. About
17% of the 4000 images in the NIST Special Database 4 (Watson and Wilson, 1992a) have
two different ground truth labels. This means that even human experts could not agree on the
true class of the fingerprint for about 17% of the fingerprint images in this database. Therefore, in practice, fingerprint classification is not immune to errors and does not offer much
selectivity for fingerprint searching in large databases.
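The selectivity argument can be made concrete with a small sketch. If the classifier is perfect and only the bin of the query's class is searched, the expected fraction of the database examined is the sum of squared class probabilities. The five-class proportions below are approximate natural frequencies, used here purely for illustration:

```python
# Sketch: expected search fraction when a five-class fingerprint
# classifier bins a database. The class proportions are approximate
# natural frequencies, assumed here for illustration only.
natural = {"left_loop": 0.338, "right_loop": 0.317,
           "whorl": 0.279, "arch": 0.037, "tented_arch": 0.029}

def expected_search_fraction(dist):
    """Average fraction of the database searched by a perfect
    classifier: a query of class c is matched against the whole
    bin c, so the expectation is sum_c p(c) * p(c)."""
    return sum(p * p for p in dist.values())

uniform = {c: 1.0 / len(natural) for c in natural}
print(expected_search_fraction(uniform))  # 0.20: the ideal 5-fold reduction
print(expected_search_fraction(natural))  # ~0.295: skew hurts selectivity
```

The uniform case gives the ideal five-fold reduction; with the skewed natural distribution almost 30% of the database must still be searched on average, which is why classification alone offers limited selectivity.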
Figure 1.15. Examples of fingerprints that are difficult to classify: a) tented arch; b) a loop; c) a whorl; it seems that all the fingerprints shown here should be in the loop category.
To overcome this problem, some authors have proposed methods based on continuous classification or on other indexing techniques. In continuous classification, fingerprints are not partitioned into non-overlapping classes; instead, each fingerprint is characterized by a numerical vector summarizing its main features. These continuous features are used to index fingerprints through spatial data structures and to retrieve fingerprints by means of spatial queries.
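A minimal sketch of the retrieval side of continuous classification, with purely illustrative three-dimensional feature vectors and a linear scan standing in for a real spatial data structure (e.g., a KD-tree):

```python
import math

# Sketch of continuous classification: each fingerprint is summarized
# by a numerical feature vector (3-D here, purely illustrative), and a
# spatial range query returns the database entries near the query
# vector. A real system would use a spatial index instead of this scan.
database = {
    "fp_01": (0.12, 0.80, 0.33),
    "fp_02": (0.90, 0.10, 0.45),
    "fp_03": (0.15, 0.75, 0.30),
}

def retrieve(query, db, radius):
    """Return ids of fingerprints whose vectors lie within `radius`."""
    return sorted(fid for fid, vec in db.items()
                  if math.dist(query, vec) <= radius)

print(retrieve((0.14, 0.78, 0.31), database, radius=0.1))
# -> ['fp_01', 'fp_03']: only this short candidate list is then passed
#    to the (expensive) minutiae matcher.
```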
Chapter 5 of this book covers fingerprint classification and indexing techniques. The fingerprint classification literature is surveyed in detail and the proposed methods are categorized
into one or more of the following families: rule-based approaches, syntactic approaches, structural approaches, statistical approaches, neural networks, and multiple classifiers. A separate
section introduces the standard notation used to compute classification performance and
compares existing methods on NIST Special Database 4 (Watson and Wilson, 1992a) and
NIST Special Database 14 (Watson, 1993a) which are the most commonly used benchmarks
for fingerprint classification studies. Fingerprint sub-classification (i.e., a multilevel partitioning strategy) and continuous classification are then discussed and the associated retrieval
strategies are introduced and compared.
Figure 1.16. Synthetic fingerprint images generated with the software tool SFINGE, whose
demo version accompanies this book.
The mathematical models on which the generator is based are introduced, together with
visual examples of the intermediate steps. Part of the chapter is dedicated to the validation of
the synthetic generator, that is, to the definition of measures and criteria that help to understand how realistic the generated synthetic fingerprints are, not simply on the basis of their
appearance, but also from the point of view of fingerprint matching algorithms. Finally, the
concluding section briefly describes a software tool that is provided with the DVD that accompanies this book. This tool implements synthetic generation and can be used to create a
synthetic fingerprint step by step, observing the effects of various parameter values on the
resulting fingerprint image.
introduce some of the problems inherent in the possession- and knowledge-based techniques for personal recognition, which is not desirable. This implies that, for the desired performance improvement, we may need to rely on integrating multiple biometrics.
Multiple biometrics can also alleviate several practical problems in biometrics-based personal recognition. For instance, although a biometric identifier is supposed to be universal
(each person in the target population should possess it), in practice, no biometric identifier is
truly universal. Similarly, the biometric identifiers are not always sensed (failure to capture
rate) or measured (failure to enroll rate, failure to match rate) by a practical biometric system.
That is, some small fraction of the target population may possess biometric identifiers that are
not easily quantifiable by the given biometric system. Consequently, the recognition system
cannot handle this fraction of the population based on that particular biometric identifier. Furthermore, different biometrics may not be acceptable to certain sections of the target population. In highly secure systems, reinforcement of evidence from multiple independent biometric
identifiers offers increasingly irrefutable proof of the identity of the authorized person. It is
also extremely difficult for an intruder to fake all the biometric traits of a genuine user in order
to circumvent the system. The assumptions of universality, collectability, acceptability, and
integrity are more realistically accommodated when person recognition is based on information from several biometric identifiers.
The output from multiple biometric sensors could be used to create a more reliable and/or
extensive (spatially, temporally, or both) input acquisition (Brooks and Iyengar, 1997). Multiple modalities of biometrics can be combined at the feature, matcher score, or matcher decision levels. The representations extracted from many biometric sensors could be collated and
the decisions could be made based on the augmented feature vector. The integration at sensor
or representation level assumes a strong interaction among the input measurements and such
integration schemes are referred to as tightly coupled integrations (Clark and Yuille, 1990).
The loosely coupled systems, on the other hand, assume very little or no interaction among the
inputs (e.g., face and finger) and integration occurs at the output of relatively autonomous
agents, each agent independently assessing the input from its own perspective.
The focus of most multimodal biometrics research has been on loosely coupled systems. Loosely coupled systems are not only simpler to implement, they are also more feasible in commonly confronted integration scenarios. A typical integration scenario involves two biometric systems (often proprietary) independently acquiring inputs and making autonomous assessments of the match based on their respective identifiers; although the decisions or scores of the individual biometric systems are available for integration, the features used by one biometric system are not accessible to the other. Decision-level and matcher score-level integration can provably deliver performance at least as good as that of any single constituent biometric (Hong, Jain, and Pankanti, 1999). With the advent of API standards (e.g., BioAPI; www.bioapi.org), we expect to see score- or decision-level integration of fingerprints with other biometric identifiers.
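A minimal sketch of score-level fusion under the sum rule, assuming two hypothetical matchers whose raw score ranges are known so the scores can first be normalized to [0, 1] (all names, ranges, and weights here are illustrative):

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score onto [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def sum_rule_fusion(scores, weights):
    """Weighted sum rule over normalized scores from several matchers."""
    return sum(w * s for s, w in zip(scores, weights))

# Hypothetical raw outputs: a fingerprint matcher scoring in [0, 500]
# and a face matcher scoring in [-1, 1]; weights are illustrative.
finger = min_max_normalize(420, 0, 500)   # 0.84
face = min_max_normalize(0.2, -1, 1)      # 0.60
fused = sum_rule_fusion([finger, face], weights=[0.6, 0.4])
print(round(fused, 3))  # 0.744: compared against a single decision threshold
```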
Tightly coupled multimodal biometric integration is much harder. The National Institute of Standards and Technology (NIST) and the American Association of Motor Vehicle Administrators (AAMVA) introduced initiatives on interoperability that have led to common formats
of fingerprint representations for easy exchange of data among vendors (Ratha et al., 1999). A
limitation of these approaches is that they force vendors to use the least common denominator of the representation (e.g., minutiae) as a basis for data sharing; consequently, there may be a degradation in performance when one vendor uses the features extracted by another vendor. It remains to be seen whether a feature language specification will be developed that facilitates describing black-box features for which black-box matchers can be built.
Chapter 7 introduces the reader to the various advantages of designing a multimodal biometric system and presents arguments from the multiclassifier literature that a (carefully designed) integrated system is expected to result in a significant improvement in recognition
accuracy. Various combination schemes are categorized on the basis of architecture (e.g., cascade, hierarchical, and parallel), level of fusion (e.g., feature, confidence, rank, and abstract),
fusion strategy (e.g., sum and product), and selection/training (e.g., stacking, bagging, and
boosting) approaches for individual modalities. The five most common fusion scenarios in a fingerprint recognition system are briefly discussed. The chapter concludes by reviewing the
most successful multimodal biometric systems designed to date.
1 Negative recognition applications cannot work in verification mode: the system has to search the entire archive to prove that the input is not already present. Sometimes, even in positive applications, the system must work in identification mode, due to the practical impossibility of using an input device to enter a PIN.
If the system designer/integrator is also the developer of the feature extraction and matching (and eventually classification/retrieval) algorithms, then she certainly has the necessary
knowledge to combine all the pieces and to select the optimal fingerprint scanner and computing platform. In the biometric field, developers/integrators of systems and applications are not
always the producers of hardware devices and core software, and therefore, care must be taken
when choosing basic hardware and software components. The system designer should take
into account several factors:
- Proven technology: have the hardware and software components been tested by third parties? Are the test results available? Is the vendor able to prove in some way that the claimed performance is true?
- System interoperability and standards: is the system compliant with emerging standards? Is the software compatible with all the platforms and operating systems of interest?
- Cost/performance tradeoff: the optimal point in the cost/performance tradeoff strongly depends on the application requirements. The cheapest solution is not necessarily the best choice; biometrics is not infallible, and the success or failure of an application often depends on how well customer expectations are met.
- Available documentation, examples, and support.
Vendors may supply an SDK (Software Development Kit) in the form of libraries for one
or more operating systems. These libraries typically include a series of primitives that allow
different tasks to be performed (e.g., acquisition, feature extraction, matching, template storage, etc.). The system/application designer is usually in charge of developing specific routines
for:
- implementing ad hoc enrollment stages;
- storing and retrieving template and user information in/from a centralized or distributed database;
- defining the user search order in an identification application (for example, the templates of the users who access the system most frequently may be matched before those of infrequent users);
- defining policies and administration modules that let the system administrator define and control system behavior, including setting the system security options (system threshold, number of trials, alarms, etc.) and logging information about access attempts.
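The search-order routine can be sketched as follows, assuming a hypothetical SDK matching primitive and purely illustrative user data:

```python
# Sketch of one designer routine from the list above: ordering the
# identification search so that frequently seen users are matched
# first. `match` stands in for a vendor SDK matching primitive;
# all names and numbers are illustrative.
access_counts = {"alice": 120, "bob": 3, "carol": 45}
templates = {"alice": "tmpl_a", "bob": "tmpl_b", "carol": "tmpl_c"}

def identify(probe, match, threshold=0.8):
    """Try enrolled templates in descending order of access frequency."""
    order = sorted(templates, key=lambda u: access_counts[u], reverse=True)
    for user in order:
        if match(probe, templates[user]) >= threshold:
            return user
    return None  # no match: reject or fall back to manual procedures

# Fake matcher for the sketch: scores 1.0 only for carol's template.
fake_match = lambda probe, tmpl: 1.0 if tmpl == "tmpl_c" else 0.1
print(identify("probe_img", fake_match))  # -> 'carol' (after trying alice)
```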
An important point when designing a fingerprint-based biometric system/application is to decide from the beginning how to deal with users whose fingerprint quality is very poor. Although the percentage of users with unusable fingerprints (often called "goats" by some authors (Doddington et al., 1998)) is very small, they cannot be ignored, especially in large-scale applications. There are several options to deal with goats:
- in the enrollment stage, choose the finger with the best quality and possibly enroll additional fingers or multiple instances of the same finger;
prints from surfaces touched by the individuals, fingerprint information tapped from a communication channel) into the system. The access protection may sometimes involve physical
protection of the data or detection of the fraudulent physical access to the data (e.g., tamperproof enclosures). When the system and/or its communication channels are vulnerable to open
physical access, cryptographic methods should be employed to protect the biometric information.
A typical approach to protecting biometric information closely follows the path of mainstream security research: encrypting fingerprint data using various standard cryptographic
mechanisms. Although many standard encryption technologies could be applied independent
of the biometrics, some fingerprint-specific techniques exist. For instance, Soutar and Tomko
(1996) envisage a practical system of private DES-like encryption based on Fourier transform
of fingerprints. In another approach, Ratha, Connell, and Bolle (1999) propose i) applying a secret transformation to the biometric measurements and ii) secretly manipulating the frequency-domain descriptors of the biometric measurements to render them useless to intruders.
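As an illustration of the general idea of a secret transformation (a toy sketch, not the actual scheme of Ratha, Connell, and Bolle), minutiae triples can be displaced by key-derived offsets before storage, so that a stolen template is useless without the key and a fresh template can be issued simply by changing the key:

```python
import hashlib

# Toy secret transformation of minutiae (x, y, angle) triples: each
# coordinate is shifted (modulo the image size) by an offset derived
# from a secret key. Illustrative only; the image dimensions and key
# derivation are assumptions, not any published scheme.
W, H = 512, 512  # assumed image dimensions

def key_offsets(key: bytes):
    h = hashlib.sha256(key).digest()
    return h[0] % W, h[1] % H, h[2] % 360

def transform(minutiae, key):
    dx, dy, dt = key_offsets(key)
    return [((x + dx) % W, (y + dy) % H, (t + dt) % 360)
            for x, y, t in minutiae]

def invert(transformed, key):
    dx, dy, dt = key_offsets(key)
    return [((x - dx) % W, (y - dy) % H, (t - dt) % 360)
            for x, y, t in transformed]

minutiae = [(100, 200, 45), (300, 150, 270)]
# The modular translation is exactly invertible given the key:
assert invert(transform(minutiae, b"secret"), b"secret") == minutiae
```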
Invisible watermarking of fingerprint images may assure database administrators that all the images in the database are authentic and have not been tampered with by an intruder (Yeung and Pankanti, 2000), in situations where the biometrics are needed for visual inspection without having to decrypt a coded message. A further requirement is that
such watermarked images do not interfere with feature extraction/matching. Such mechanisms
of protection reduce the risk of unauthorized insertion of spurious records into the database.
Typical approaches to resisting replay attacks also follow mainstream cryptographic strategies, which rely on introducing (encrypted) time- or session-sensitive challenge-response mechanisms to authenticate the source/destination of the encrypted transmission. Ratha, Connell, and
Bolle (1999) have proposed to exploit stochastic noise in the biometric measurements to make
sure the stale measurements are rejected by the system. Some biometric systems update their
template representations to adapt to the temporally varying nature of the biometric identifier
(e.g., a finger scratch or cut). Emerging standards like ANSI X9.84 ensure that incremental changes in a template are not substantial, so as to prevent inadvertent or intentional (insider) attacks that gradually change one identity into another.
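A toy sketch of such a nonce-based challenge-response exchange, using an HMAC over a shared session key to bind the transmitted (already encrypted) biometric payload to a fresh challenge; the names and message layout are illustrative, not taken from any standard:

```python
import hashlib
import hmac
import os

# Toy challenge-response exchange to resist replay: the server issues a
# fresh nonce per transaction, and the client must bind its payload to
# that nonce with an HMAC under a shared session key. Replaying an old
# (payload, tag) pair against a new nonce fails verification.
session_key = os.urandom(32)  # assumed to be established securely

def server_challenge():
    return os.urandom(16)  # fresh random nonce

def client_respond(payload: bytes, nonce: bytes):
    tag = hmac.new(session_key, nonce + payload, hashlib.sha256).digest()
    return payload, tag

def server_verify(payload: bytes, tag: bytes, nonce: bytes) -> bool:
    expected = hmac.new(session_key, nonce + payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

nonce = server_challenge()
payload, tag = client_respond(b"encrypted-template", nonce)
print(server_verify(payload, tag, nonce))              # True: fresh exchange
print(server_verify(payload, tag, server_challenge())) # False: stale replay
```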
How well one can guess a given fingerprint by brute force depends on the invariant information (see Chapters 8 and 9) in the fingerprints and the number of attempts typically offered
to the user. Inherently, fingerprints have a high information content, substantially larger than that of the weak passwords (e.g., date of birth, mother's maiden name) chosen by a
typical naive user. A leading biometric API, BioAPI, has explicit controls in its framework to
stymie guessing of biometric information (e.g., fingerprint minutiae) by hackers using the
feedback offered by the matcher based on optimization strategies such as hill climbing. On the
other hand, fingerprint data standardization (e.g., the Common Biometric Exchange File Format (CBEFF)), when used in isolation, may somewhat weaken the security of the system by publishing the basic format of the data, thus giving culprits the information they need to use stolen data in multiple systems (e.g., Hill (2001)). Similarly, by making the
functionality public, common API systems may become more prone to hacker attacks than the
systems designed with unknown software architectures. Security standards such as ANSI X9.84 ensure that the known software API structure is minimally exposed to hacking attacks (e.g., through encryption).
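The hill-climbing threat mentioned above can be illustrated in miniature. If a matcher leaks a fine-grained score, an attacker can mutate a guess and keep any change that raises it; quantizing the reported score to an accept/reject decision removes that gradient. The template encoding and matcher below are toy stand-ins:

```python
import random

random.seed(0)
SECRET = [7, 3, 9, 1, 5, 2, 8, 4]  # toy stand-in for stored template data

def raw_score(guess):
    """Fraction of positions matching the secret: fine-grained feedback."""
    return sum(g == s for g, s in zip(guess, SECRET)) / len(SECRET)

def hill_climb(score_fn, rounds=4000):
    """Mutate one position at a time; keep any change that raises the score."""
    guess = [0] * len(SECRET)
    best = score_fn(guess)
    for _ in range(rounds):
        cand = list(guess)
        cand[random.randrange(len(cand))] = random.randrange(10)
        s = score_fn(cand)
        if s > best:
            guess, best = cand, s
    return best

fine = hill_climb(raw_score)                        # fine-grained feedback
coarse = hill_climb(lambda g: round(raw_score(g)))  # accept/reject only
print(fine, coarse)  # with a leaked score the attacker converges on the
                     # template; with quantized feedback the climb stalls
```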
Denial-of-service attacks can be particularly problematic in any (biometric or non-biometric) system. Biometric systems often face additional vulnerabilities in this respect because of the nature of the sensors and because of the high bandwidth requirements of centralized recognition services. Biometric sensors are often sensitive and fragile. Many live-scan fingerprint scanners involve optical imaging and are prone to breakage under an intentional
show of force. Solid-state fingerprint sensors can often be damaged using malicious electric
discharge. All sensors requiring optical contact can be rendered practically useless by scratching the glass platens. Developments in contact-less and ultrasonic sensors promise design of
more robust fingerprint sensors. On the other hand, forgetting a password or misplacing a token is a well-known cause of denied access for otherwise legitimate users.
Of particular interest among the various biometric circumvention measures is checking whether the source of the input signal is a live, genuine biometric (finger), distinguishing it from a signal originating from a fake (e.g., a tight-fitting latex glove bearing an impression of the genuine finger). The premise of a liveness test is that if the finger (surface) is live, the impression made by it represents the person to whom the finger belongs. One approach to detecting vitality is to use one or more vital signs (e.g., pulse, temperature) that are not particular to the individual but are common to the entire target population. Live-scan optical scanners use a frustrated total internal reflection mechanism to differentially image the fingerprint ridges and valleys. Such mechanisms are somewhat inherently resistant to attacks from a two-dimensional replay of the fingerprint image. High-resolution fingerprint scanning can reveal the characteristic sweat pore structure of skin (Roddy
and Stosz, 1997) that is difficult to replicate in an artificial finger. The skin tone of a live finger turns white/yellow when pressed against a glass platen. This effect can be exploited for
detecting a live finger. The blood flow in a live finger and its pulsation can be detected by
careful measurement of light reflected/transmitted through the finger. Difference in action
potentials across two specific points on a live fingerprint muscle can also be used to distinguish it from a dead finger. The electrical properties of a live finger can be ascertained rather
effortlessly in some solid-state fingerprint scanners. Measuring complex impedance of the
finger can be a useful attribute to distinguish a live finger from its lifeless counterpart. Finally,
a live finger typically generates sweat; in a solid-state sensor, the characteristic temporal evolution of the signal can determine liveness of a finger (Derakhshani et al., 2002). Any combination of pulse rate, electrocardiographic signals, spectral characteristics of human tissue,
percentage of oxygenation of blood, blood flow, hematocrit, biochemical assays of tissue, electrical plethysmography, transpiration of gases, electrical properties of skin, blood pressure, and
differential blood volumes can be used to detect a live finger.
The most important objection to biometrics from a security analysis point of view is that of
key replacement (Schneier, 1999): what happens when a biometric (e.g., finger) measurement is compromised?
Forensic: Corpse Identification, Criminal Investigation, Terrorist Identification, Parenthood Determination, Missing Children, etc.
Government: National ID Card, Correctional Facility, Driver's License, Social Security, Welfare Disbursement, Border Control, Passport Control, etc.
Commercial:
Table 1.2. Most of the fingerprint recognition applications are divided here into three categories. Traditionally, forensic applications have used manual biometrics, government applications have used token-based systems, and commercial applications have used knowledge-based systems. Fingerprint recognition systems are now being increasingly used in all these sectors. Note that over one billion dollars in welfare benefits are annually claimed by double-dipping welfare recipients in the United States alone.
Figure 1.17. Various electronic access applications in widespread use that require automatic recognition: ATM, laptop data security, network logon, credit card, electronic access, cellular phone/PDA, airport check-in, Web access, and electronic banking.
These applications may be divided into the following groups: i) applications such as banking, electronic commerce, and access control, in which biometrics will replace or reinforce the current token- or knowledge-based techniques, and ii) applications such as welfare and immigration, in which neither token-based nor knowledge-based techniques are currently
being used.
Information system/computer network security, such as user authentication and access to
databases via remote login, is one of the most important application areas for fingerprint recognition. With the rapid expansion of the Internet, it is expected that more and more information systems and computer networks will be secured with fingerprints. Applications such as medical information systems, distance learning, and e-publishing are already benefiting from the deployment of such systems. Electronic commerce and electronic banking are also important and
emerging application areas of biometrics due to the rapid progress in electronic transactions.
These applications include electronic fund transfers, ATM security, check cashing, credit card
security, smartcard security, on-line transactions, and so on. Currently, there are several large
fingerprint security projects under development in these areas, including credit card security
(MasterCard) and smartcard security (IBM and American Express).
The physical access control market is currently dominated by token-based technology.
However, it is increasingly shifting to fingerprint-based biometric techniques. The introduction of fingerprint-based biometrics in government benefits distribution programs such as welfare disbursement has already resulted in substantial savings by deterring multiple claimants. In
addition, customs and immigration initiatives such as the INS Passenger Accelerated Service
System (INSPASS) which permits faster immigration procedures based on hand geometry will
greatly increase operational efficiency. Fingerprint-based national ID systems provide a
unique ID to the citizens and integrate different government services. Fingerprint-based voter
and driver registration provides registration facilities for voters and drivers. Fingerprint-based
time/attendance monitoring systems can be used to prevent abuses of the current token-based/manual systems. Fingerprint-based recognition systems will replace passwords and tokens in a large number of applications. Their use will increasingly reduce identity theft and
fraud and protect privacy.
As fingerprint technology matures, there will be increasing interaction among market,
technology, and applications. The emerging interaction is expected to be influenced by the
added value of the technology, the sensitivities of the user population, and the credibility of
the service provider. It is too early to predict where and how fingerprint technology will evolve and which applications it will be mated with, but it is certain that fingerprint-based recognition will have a profound influence on the way we conduct our daily business.
The person recognition value provided by a biometric identifier has almost no objectionable aspect to it from the perspective of the mainstream population and it is almost universally
agreed that it provides positive person recognition better than existing conventional technologies. The objections to biometric recognition are based on the following arguments. Some of
the privacy concerns surrounding biometrics may be related to personal sensitivities and connotations. Human recognition is traditionally conceived as a mutually reciprocal action between two individuals. Methods of automatic recognition of individuals, especially those
based on biometrics, may be perceived as undignifying to humans. Religious objections to the
use of biometrics interpret biometric recognition as "the mark of the beast" by citing somewhat
dubious biblical references.2 Some people have raised concerns about hygiene of biometric
scanners requiring contact. Given that we routinely touch many objects (e.g., money) touched
by strangers, this objection may be considered a frivolous excuse. There may be negative
connotations associated with some biometrics (fingerprints, faces, and DNA) due to their
prevalent use in criminal investigation. Despite the criminal stigma associated with fingerprints, a CNN poll on September 27, 2001 found that fingerprints rate high in social acceptability.
Although person recognition functionality of biometrics appears to be relatively nonintrusive, stronger criticisms are being leveled against the other capabilities of biometric identifiers.
Unintended Functional Scope: Because biometric identifiers are biological in origin,
some additional (possibly statistical) personal information may be gleaned from the
scanned biometric measurements. For instance, it has been long known that certain
malformed fingers may be statistically correlated with certain genetic disorders
(Babler (1991), Penrose (1965), and Mulvihill (1969)). With rapid advancements in human genome research, the ability to infer further information from biological measurements may be imminent. Such derived medical information could become a
basis for systematic discrimination against the perceived risky sections of population.
Unintended Application Scope: Strong biometric identifiers such as fingerprints allow the possibility of unintended person recognition. For instance, persons legally maintaining multiple identities (multinyms), say, for safety reasons, can be found out based on their fingerprints. The ability to link bits and pieces of behavioral information about individuals enrolled in
widely different applications based on biometric identifiers is often construed as an accumulation of power over individuals, leaving them with even fewer choices. One
of the specific concerns about biometrics is that they are not typically secret. By acquiring strong biometric identifiers (either covertly or overtly without permission),
one has the capacity to track an identified individual. It has been argued that automatically gathering individual information based on biometric identifiers accrues unfair advantage to people in power and reduces the sovereignty of private citizens. In the case of fingerprints, presently there is no technology to automatically/routinely capture fingerprints covertly to facilitate effortless tracking.3

2 "He also forced everyone, small and great, rich and poor, free and slave, to receive a mark on his right hand or on his forehead, so that no one could buy or sell unless he had the mark, which is the name of the beast or the number of his name." (Revelation 13:16-17)
Covert Recognition: Biometrics are not secrets. It is often possible to obtain biometric
samples (e.g., face) without the knowledge of the person. This permits covert recognition of previously enrolled people; consequently, the persons who desire to remain
anonymous in any particular situation may be denied their privacy due to biometric
recognition. Although currently there is no technology for snooping fingerprints of
sufficient quality to positively identify persons, we cannot exclude that in the future
such technology might become commonplace.
The possible abuse of biometric information (or their derivatives) and related accountability procedures can be addressed through legislation by government/public (e.g., EU legislation
against sharing of biometric identifiers and personal information (Woodward, 1999)), assurance of self-regulation (e.g., self-regulation policies of the International Biometrics Industry
Association (IBIA)) by the biometric industry, and autonomous enforcement by independent regulatory organizations (e.g., a Central Biometric Authority). Until such a consensus is
reached, there may be a reluctance to provide (either raw or processed) biometrics measurements to centralized applications and to untrustworthy applications with a potential to share
biometric data with other applications. As a result, applications delivering recognition capability in a highly decentralized fashion will be favored.
In biometric verification applications, one way to decentralize a biometric system is by
storing the biometric information in a decentralized (encrypted) database over which the individual has complete control. For instance, one could store the fingerprint template of a user in
a smartcard that is issued to the user. In addition, if the smartcard is integrated with a small
fingerprint sensor, the input fingerprint retrieved from the sensor can be directly compared
with the template on the smartcard and the decision delivered (possibly in encrypted form) to
the outside world. Such a smartcard-sensor integrated decentralized system permits all the
advantages of biometric-based recognition without many of the stipulated privacy problems
associated with biometrics. The available smartcard-based fingerprint recognition commercial
products offer fingerprint sensing and matching on the smartcard, but usually the feature extraction component is performed on the host PC. However, since the technology for delivering
compact feature extraction and matching exists today, we believe it is a matter of time before
completely decentralized fingerprint-based recognition is delivered from smartcard-sensor
integrated systems for pervasive person recognition.
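A minimal sketch of the match-on-card idea described above, in which the template never leaves the card and only a decision is returned; the card class, feature format, and threshold are all hypothetical:

```python
# Sketch of a match-on-card architecture: the enrolled template stays
# on the smartcard, and only an (ideally encrypted/signed) yes/no
# decision leaves it. The feature format and matching rule are toy
# placeholders, not any real on-card matcher.
class SmartCard:
    def __init__(self, template):
        self._template = set(template)  # private: never exported

    def verify(self, probe_features, threshold=0.7):
        """On-card matching: returns only a decision, never the template."""
        overlap = len(self._template & set(probe_features))
        score = overlap / max(len(self._template), 1)
        return score >= threshold

enrolled = [(10, 22, 90), (40, 55, 180), (70, 12, 45), (33, 81, 270)]
card = SmartCard(enrolled)

# Capture of the same finger with one minutia missed by the sensor:
print(card.verify(enrolled[:3]))            # True  (3/4 = 0.75 >= 0.7)
# An impostor's features share nothing with the template:
print(card.verify([(1, 2, 3), (4, 5, 6)]))  # False
```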
3 Although touchless (direct) fingerprint scanning technology is available, it is expected to work only with the close cooperation of the subject at a very short distance. There is presently no technology capable of video snooping of fingerprints.
1.19 Summary
Fingerprint recognition has come a long way since its inception more than 100 years ago. The
first primitive live-scan scanners designed by Cornell Aeronautical Lab/North American Aviation, Inc. were unwieldy beasts with many problems as compared to the sleek, inexpensive,
and relatively minuscule sensors available today. Over the past few decades, research and active use of fingerprint matching and indexing have also advanced our understanding of individuality information in fingerprints and efficient ways of processing this information.
Increasingly inexpensive computing power, cheap fingerprint sensors, and the demand for security, efficiency, and convenience have led to the viability of fingerprint matching for everyday positive person recognition in the last few years.
There is a popular misconception that automatic fingerprint matching is a fully solved
problem since it was one of the first applications of automatic pattern recognition. Despite
notions to the contrary, there are a number of challenges that remain to be overcome in designing a completely automatic and reliable fingerprint matcher, especially when fingerprint images are of poor quality. Although automatic systems are successful, the level of sophistication
of automatic systems in matching fingerprints today cannot rival that of a dedicated, well-trained fingerprint expert. Still, automatic fingerprint matching systems offer a reliable, rapid,
consistent, and cost-effective solution in a number of traditional and newly emerging applications. The performance of the various stages of a recognition system, including feature extraction, classification, and minutiae matching, does not degrade gracefully with deterioration in the quality of the fingerprints. As mentioned earlier, significant research appears to be necessary to
enable us to develop feature extraction systems that can reliably and consistently extract a diverse set of features that provide rich information comparable to those commonly used by the
fingerprint experts.
In most pattern recognition applications (e.g., optical character recognition), the best-performing commercial systems use a combination of matchers, matching strategies, and representations. There is limited work in combining multiple fingerprint matchers (Chapter 7);
more research/evaluation of such techniques is needed. The proprietary set of features used by
the system vendors and the lack of a meaningful information exchange standard make it difficult,
if not impossible, for the system designers to leverage the complementary strengths of different commercial systems.
It is not an exaggeration to state that research in automatic fingerprint recognition has been
mostly an exercise in imitating the performance of a human fingerprint expert without access
to the many underlying information-rich features an expert is able to glean by visual examination. The lack of such a rich set of informative features in automatic systems is mostly due to
the unavailability of complex modeling and image processing techniques that can reliably and
consistently extract detailed features in the presence of noise. Perhaps the human intuition-based manual fingerprint recognition approach is not the most appropriate basis for
the design of automatic fingerprint recognition systems; there may be a need for exploring
radically different features (see Chapter 4) rich in discriminatory information, robust methods
of fingerprint matching, and more ingenious methods for combining fingerprint matching and
classification that are amenable to automation.
Only a few years back, it seemed as though interest in fingerprint matching research was waning. However, as mentioned earlier, due to a continuing increase in identity fraud, there is a growing
need for positive person recognition. Lower fingerprint sensor prices, inexpensive computing
power, and our (relatively better) understanding of individuality information in fingerprints
(compared to other biometrics) have attracted a lot of commercial interest in fingerprint-based
person recognition. Consequently, dozens of fingerprint recognition vendors have mushroomed in the last few years, and the market revenue is expected to grow steadily over the next five years (see Figure 1.18.a). Reliable, pervasive, embedded applications of fingerprint-based recognition (e.g., in a smartcard or in a cell phone) may not be far behind. Scientific
research on fingerprint recognition is also receiving more attention; proof of this is the exponentially increasing number of scientific publications per year on this topic (see Figure
1.18.b). We strongly believe that higher visibility of (and liability from) performance limitations of commercial fingerprint recognition applications would fuel much stronger research
interest in some of the most difficult research problems in fingerprint-based recognition. Some
of these difficult problems will entail solving not only the core pattern recognition challenges
but also confronting very challenging system engineering issues related to security and privacy.
[Figure 1.18 appears here: a) a bar chart of projected fingerprint market revenue ($M US) for 2002-2007, with values 601, 928, 1,467, 2,199, 3,112, and 4,035; b) a bar chart of the number of scientific papers on fingerprint research published per year, 1971-2001.]
Figure 1.18. a) The Biometric Market Report (International Biometric Group) predicts that total fingerprint revenues, including law enforcement and large-scale public sector usage, will grow rapidly. The maximum growth is expected in the PC/Network Access and e-Commerce market. b) The number of scientific papers published on fingerprint research in the last 30 years shows growing interest in fingerprint recognition research.
On a more speculative note, we wonder whether the protection of personal privacy, as we formulate it today, is a fundamentally flawed concept (Brin, 1998); we need to refine our
concept of privacy to accommodate the new values created by the automated information
society (Nissenbaum, 2001). With the advent of rapidly developing technologies, if rich and resourceful people are able to covertly access information about all individuals in society (using biometrics or otherwise), will we all agree to suffer the indignity of universal transparency (perhaps through silicon chip transponders implanted in human bodies (McGinity, 2000)) to achieve universal accountability in society? Certainly, these chip
technologies are more attractive in terms of their recognition performance and mutability of
their signature. It then remains to be seen in the years to come, whether we would prefer to
engineer biometric systems that will recognize us in a natural way, to engineer our bodies so
that they are more easily identifiable by the machines, or to engineer a system in between the
two extremes. An answer to this question will be based on how biometric-based recognition
will be perceived by our society faced with the problem of effectively combating increasing
identity fraud.
Further introductory readings (books, special issues, and survey papers) on the different
aspects of fingerprint recognition (including forensic practices and AFIS) can be found in:
Scott (1951), Chapel (1971), Moenssens (1971), Eleccion (1973), Swonger (1973), Banner
and Stock (1975a, 1975b), Riganati (1977), Cowger (1983), Wilson and Woodard (1987),
Mehtre and Chatterjee (1991), Federal Bureau of Investigation (1984, 1991), Overton and
Richardson (1991), Hollingum (1992), Miller (1994), Shen and Khanna (1997), Ashbaugh
(1999), Jain et al. (1999, 2001), Maio and Maltoni (1999b), Jain and Pankanti (2000, 2001,
2001b), Jain, Hong, and Pankanti (2000), Pankanti, Bolle, and Jain (2000) and Lee and
Gaensslen (2001).
Image processing
Pattern recognition
R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification (2nd edition), Wiley-Interscience, New York, 2000.
P.A. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach, Prentice-Hall, Englewood Cliffs, NJ, 1982.
T. Pavlidis, Structural Pattern Recognition, Springer-Verlag, New York, 1977.
A.K. Jain and R.C. Dubes, Algorithms for Clustering Data, Prentice-Hall, Englewood
Cliffs, NJ, 1988.
A.K. Jain, R. Duin, and J. Mao, Statistical Pattern Recognition: A Review, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4-37, January 2000.
A.K. Jain, M.N. Murty, and P.J. Flynn, Data Clustering: A Review, ACM Computing Surveys, vol. 31, no. 3, pp. 264-323, September 1999.
C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press,
Oxford, 1995.
A.K. Jain, J. Mao, and K.M. Mohiuddin, Neural Networks: A Tutorial, IEEE Computer, vol. 29, no. 3, pp. 31-44, March 1996.
J. Kittler and F. Roli (eds.), Multiple Classifier Systems, First, Second and Third International Workshops (MCS 2000, 2001, and 2002), Proceedings, Lecture Notes in
Computer Science 1857, 2096, and 2364, Springer, New York, 2000, 2001, and 2002.