Pattern Recognition 37 (2004) 1987 – 1998
www.elsevier.com/locate/patcog
Palmprint classification using principal lines
Xiangqian Wu (a), David Zhang (b, *), Kuanquan Wang (a), Bo Huang (a)
(a) School of Computer Science and Technology, Harbin Institute of Technology (HIT), Harbin 150001, China
(b) Department of Computing, Biometric Research Centre, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Received 30 January 2004; accepted 12 February 2004
Abstract
This paper proposes a novel algorithm for the automatic classification of low-resolution palmprints. First, the principal lines of the palm are defined using their position and thickness. Then a set of directional line detectors is devised. After that, we use these directional line detectors to extract the principal lines in terms of their characteristics and their definitions in two steps: the potential beginnings ("line initials") of the principal lines are extracted and then, based on these line initials, a recursive process is applied to extract the principal lines in their entirety. Finally, palmprints are classified into six categories according to the number of the principal lines and the number of their intersections. The proportions of these six categories (1–6) in our database containing 13,800 samples are 0.36%, 1.23%, 2.83%, 11.81%, 78.12% and 5.65%, respectively. The proposed algorithm has been shown to classify palmprints with an accuracy of 96.03%.
© 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.

Keywords: Biometrics; Palmprint classification; Principal lines; Life line; Head line; Heart line

* Corresponding author. Tel.: +852-2766-7271; fax: +852-2774-0842. E-mail address: [email protected] (D. Zhang).
1. Introduction

Computer-aided personal recognition is becoming increasingly important in our information society, and in this field biometrics is one of the most important and reliable methods [1,2]. The most widely used biometric feature is the fingerprint [3,4] and the most reliable feature is the iris [1,5,6]. However, it is very difficult to extract small unique features (known as minutiae) from unclear fingerprints [3,4], and iris input devices are very expensive. Yet other biometric features, such as the face [7,8] and voice [9,10], are less accurate. A palm is the inner surface of the hand between the wrist and the fingers [1]. A palmprint is defined as the prints on a palm, which are mainly composed of the palm lines and ridges. A palmprint, as a relatively new biometric feature, has several advantages compared with other currently available features [11]: palmprints contain more information than fingerprints,
so they are more distinctive; palmprint capture devices are much cheaper than iris devices; palmprints contain additional distinctive features, such as principal lines and wrinkles, which can be extracted from low-resolution images; and last, by combining all of the features of a palm, such as palm geometry, ridge and valley features, and principal lines and wrinkles, it is possible to build a highly accurate biometric system. Given these advantages, palmprints have in recent years been investigated extensively for automated personal authentication. Duta et al. [12] extracted some points
(called "feature points") on palm-lines from offline palmprint images for verification. Zhang et al. [13] used 2-D Gabor filters to extract texture features from low-resolution palmprint images and employed these features to implement a highly accurate online palmprint recognition system. Han et al. [14] used Sobel and morphological operations to extract line-like features from palmprints. Kumar et al. [15] integrated line-like features and hand geometry features for personal verification.
All of these palmprint authentication methods require that the input palmprint be matched against a large number of palmprints in a database, which is very time-consuming.
0031-3203/$30.00 © 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
doi:10.1016/j.patcog.2004.02.015
Fig. 1. (a) The typical principal lines on a palm; (b) defined points and lines on a palm.
To reduce the search time and computational complexity, it is desirable to classify palmprints into several categories such that an input palmprint need be matched only against the palmprints in its corresponding category, a subset of the database. Like fingerprint classification [16,17], palmprint classification is a coarse-level matching of palmprints. Shu et al. [18] used the orientation property of the ridges on palms to classify offline high-resolution palmprints into six categories. Obviously, this classification method is unsuitable for low-resolution palmprints because it is impossible to obtain the orientation of the ridges from low-resolution images. As the first attempt at online low-resolution palmprint classification, we classify palmprints by taking into account their most visible and stable features, i.e. the principal lines. Most palmprints show three principal lines: the heart line, head line and life line (Fig. 1(a)). In this paper, we describe how these principal lines may be extracted according to their characteristics, which then allows us to classify palmprints into six categories by the number of principal lines and the number of their intersections.
The rest of this paper is organized as follows. Section 2 presents some definitions and notation. Section 3 develops a key point detection technique. Section 4 details how principal lines are extracted. Section 5 explains the criteria for palmprint classification. Section 6 reports experimental results, and Section 7 provides some conclusions.
2. Definitions and notation
Because there are many lines in palmprints, it is very difficult, without explicit definitions, to distinguish principal lines from mere wrinkles. When people discriminate between principal lines and wrinkles, the position and thickness of the lines play a key role. Likewise, we define the principal lines according to their positions and thickness.

To determine the positions of principal lines, we first define some points and straight lines in a palmprint, illustrated in Fig. 1(b). Points A, B, C, F and G are the root points of the thumb, forefinger and little finger. Points D and E are the midpoints of the root lines of the middle finger and the ring finger. Line BG is the line passing through Points B and G. Line AH is the straight line parallel with Line BG, intersecting with the palm boundary at Point H. Line AB is the line passing through Points A and B. Line CL is the straight line parallel with Line AB, intersecting with Lines BG and AH at Points M and L, respectively. Line GH passes through Points G and H. Lines FI and EJ are the straight lines parallel with Line GH, intersecting with Lines BG and AH at Points P, I, J and O, respectively. K is the midpoint of straight line segment AH. Line DR passes through Points D and K. Line EQ is the straight line parallel with Line DR and intersects with the palm boundary at Point Q. Using these points and lines, we define the principal lines as below (a test for the "running across" conditions is sketched after the definitions):
The heart line is a smooth curve that satisfies the following conditions (Fig. 2(a)):
(1) Originating from region GHIP;
(2) Running across line-segment OJ;
(3) Not running across line-segment AH.
The head line is a smooth curve that satisfies the following conditions (Fig. 2(b)):
(1) It is not the same curve as the extracted heart line;
(2) Originating from region ABML;
(3) Running across straight-line DR;
(4) The straight line which passes through the two endpoints of this curve runs across line-segment EQ;
(5) The straight line which passes through the two endpoints of this curve does not run across line-segment BG.
The life line is a smooth curve that satisfies the following conditions (Fig. 2(c)):
(1) Originating from region ABML;
(2) Running across line-segment AH;
(3) The straight line which passes through the two endpoints of this curve does not run across line-segment EQ.
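Conditions of the form "running across line-segment AH" reduce to a standard segment-intersection test between each edge of the extracted curve and the defining segment. A minimal sketch (orientation-based; collinear edge cases are ignored, and all names are our own):

    import numpy as np

    def _orient(p, q, r):
        # Sign of the cross product (q - p) x (r - p).
        return np.sign((q[0] - p[0]) * (r[1] - p[1])
                       - (q[1] - p[1]) * (r[0] - p[0]))

    def crosses_segment(curve, a, b):
        # True if any edge of the polyline crosses segment ab.
        for p, q in zip(curve[:-1], curve[1:]):
            if (_orient(p, q, a) != _orient(p, q, b)
                    and _orient(a, b, p) != _orient(a, b, q)):
                return True
        return False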
Fig. 2. Definitions of principal lines: (a) the heart line, (b) the head line and (c) the life line.
In our work, we describe principal lines according to the following three basic rules. First, the number of each type of principal line occurring in a palmprint is less than or equal to 1; if more than one line satisfies the same conditions, we keep the one with the greatest average magnitude. Second, we do not take broken principal lines into account; when a principal line is broken at some place, we regard the break point as its endpoint. Third, we regard each principal line as a curve without branches; thus, if there are branches, we keep the smoothest curve and discard the others.
3. Key point detection
Given the above definitions, we must first detect a set of points and lines before we can extract principal lines. Points A, B, C, D, E, F and G are the key points from which the other points and the defined lines can easily be obtained using their definitions (Fig. 1(b)). All images used in this paper were captured online using a CCD-camera-based device fitted with three pegs. One peg separates the first and middle fingers, another the middle and third fingers, and the final peg separates the third finger and little finger. These pegs are broad enough to stretch the fingers apart, thereby allowing us to detect the key points.
Detecting key points begins with extraction of the boundary of the palm: the original image (Fig. 3(a)) is first smoothed using a low-pass filter and converted into a binary image with a threshold (Fig. 3(b)), and the boundary of the palm is then traced (Fig. 3(c)). In our database, a small portion of some palm images below the little finger is not captured. In these cases, we use the corresponding boundary segment of the image to represent the missing palm boundary segment.
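This binarization-and-tracing step can be sketched as follows, assuming OpenCV is available; the filter size and threshold here are illustrative placeholders, not the settings used in our system:

    import cv2

    def extract_palm_boundary(gray, blur_ksize=5, thresh=50):
        # Low-pass filter the image, then binarize it (Fig. 3(a)->(b)).
        smoothed = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
        _, binary = cv2.threshold(smoothed, thresh, 255, cv2.THRESH_BINARY)
        # Trace the outer boundary of the largest region, i.e. the hand
        # (Fig. 3(b)->(c)).
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        return max(contours, key=cv2.contourArea).squeeze(1)  # (N, 2) points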
Detecting point A (Fig. 3(d)):
(1) Find a point (X) on the thumb boundary;
(2) Find the point (Y) on the palm boundary whose column index is l less than that of point X (here, l = 30);
(3) Among all points on the palm boundary segment between point X and point Y, take the point farthest from line XY as point A (see the sketch below).
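Step (3) is a farthest-point-from-line search; a minimal sketch, assuming the boundary segment is an ordered (N, 2) array of (column, row) coordinates:

    import numpy as np

    def farthest_point(segment, X, Y):
        # Perpendicular distance from each boundary point to line XY.
        (x1, y1), (x2, y2) = X, Y
        d = np.abs((x2 - x1) * (segment[:, 1] - y1)
                   - (y2 - y1) * (segment[:, 0] - x1))
        d = d / np.hypot(x2 - x1, y2 - y1)
        return segment[np.argmax(d)]  # point A

The same routine is reused below when locating the finger root points C, T1, ..., T4 and F.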
Detecting points C, D, E and F:
(1) Use straight lines $L_1, L_2, L_3, L_4, L_5$ and $L_6$ to fit the segments of the boundary of the forefinger, middle finger, third finger and little finger (Fig. 3(d)):

$L_i: y = k_i x + b_i$,   (1)

$k_i = \dfrac{\sum_{k=1}^{M_i} x_i^k \sum_{k=1}^{M_i} y_i^k - M_i \sum_{k=1}^{M_i} x_i^k y_i^k}{\left(\sum_{k=1}^{M_i} x_i^k\right)^2 - M_i \sum_{k=1}^{M_i} (x_i^k)^2}$,   (2)

$b_i = \dfrac{\sum_{k=1}^{M_i} y_i^k - k_i \sum_{k=1}^{M_i} x_i^k}{M_i}$,   (3)
Fig. 3. The process of key point detection.
where $\{(x_i^k, y_i^k)\}_{k=1}^{M_i}$ ($i = 1, \ldots, 6$) are the coordinates of the points on the segments of the boundary of the forefinger, middle finger, third finger and little finger, respectively, and $M_i$ is the total number of points on the corresponding segment.
(2) Compute the bisectors ($B_1$, $B_2$ and $B_3$) of the angles formed by $L_1$ and $L_2$, $L_3$ and $L_4$, and $L_5$ and $L_6$:

$B_i: y = K_i x + B_i$,   (4)

$K_i = \dfrac{k_{2i-1}\sqrt{1 + k_{2i}^2} + k_{2i}\sqrt{1 + k_{2i-1}^2}}{\sqrt{1 + k_{2i-1}^2} + \sqrt{1 + k_{2i}^2}}$,   (5)

$B_i = \dfrac{b_{2i-1}\sqrt{1 + k_{2i}^2} + b_{2i}\sqrt{1 + k_{2i-1}^2}}{\sqrt{1 + k_{2i-1}^2} + \sqrt{1 + k_{2i}^2}}$,   (6)

where $i = 1, 2, 3$ and $k_1 \sim k_6$, $b_1 \sim b_6$ are computed by Eqs. (2)–(3) (a code sketch of Eqs. (1)–(6) follows this procedure). The intersections of $B_1$, $B_2$ and $B_3$ with the palm boundary segments between the fingers are Points P1, P2 and P3 (Fig. 3(d));
(3) Find the points Q1 and Q2 on the boundaries of the forefinger and middle finger whose column indices are $l_1$ less than that of point P1 (here $l_1 = 30$), and link up P1Q1 and P1Q2 (Fig. 3(e));
(4) Among all points on the finger boundary segment between point P1 and point Q1, take the point farthest from line P1Q1 as a root point of the forefinger, C (Fig. 3(e));
(5) Among all points on the finger boundary segment between point P1 and point Q2, take the point farthest from line P1Q2 as a root point of the middle finger, T1 (Fig. 3(e));
(6) The root points of the middle finger, ring finger and little finger, T2, T3, T4 and F, are obtained using the same technique as described in Steps 3–5 (Fig. 3(e));
(7) Link up T1T2 and T3T4, and take their midpoints as points D and E (Fig. 3(f)).
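Eqs. (1)–(3) are an ordinary least-squares line fit and Eqs. (5)–(6) the standard angle-bisector formula for two fitted lines; a direct transcription (array and function names are our own):

    import numpy as np

    def fit_line(pts):
        # Least-squares fit y = k*x + b over boundary points (Eqs. (1)-(3)).
        x, y = pts[:, 0].astype(float), pts[:, 1].astype(float)
        M = len(pts)
        k = (x.sum() * y.sum() - M * (x * y).sum()) \
            / (x.sum() ** 2 - M * (x * x).sum())
        b = (y.sum() - k * x.sum()) / M
        return k, b

    def bisector(k1, b1, k2, b2):
        # Bisector y = K*x + B of the angle between two lines (Eqs. (5)-(6)).
        n1, n2 = np.sqrt(1 + k1 ** 2), np.sqrt(1 + k2 ** 2)
        K = (k1 * n2 + k2 * n1) / (n1 + n2)
        B = (b1 * n2 + b2 * n1) / (n1 + n2)
        return K, B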
Detecting Points B and G (Fig. 3(f)):
(1) Draw L7, the perpendicular to line L2 through point C; it intersects the palm boundary at point B;
(2) Draw L8, the perpendicular to line L5 through point F; it intersects the palm boundary at point G.
Fig. 3(g) shows the palmprint overlaid with the obtained key points and the defined lines.
4. Principal lines extraction

Palm-lines, including the principal lines and wrinkles, are a kind of roof edge. A roof edge is generally defined as a discontinuity in the first-order derivative of a gray-level profile [19]. In other words, the positions of roof edge points are the zero-cross points of their first-order derivatives. Moreover, the magnitude of an edge point's second-order derivative reflects the strength of that edge point [20]. We can use these properties to devise principal line detectors. Given that the directions of principal lines are not constant, we devise line detectors in different directions and then apply a suitable detector according to the local information of the principal lines.

4.1. Directional line detectors

Suppose that $I(x, y)$ denotes an image. We first devise the horizontal line detector (the directional line detector in the $0^\circ$ direction). To improve the connection and smoothness of the lines, the image is smoothed along the line direction (the horizontal direction) using a 1-D Gaussian function $G_s$ with variance $\sigma_s$:

$I_s = I * G_s$,   (7)

where $*$ denotes the convolution operation.

The first- and second-order derivatives in the vertical direction can be computed by convolving the smoothed image with the first-order ($G_d'$) and second-order ($G_d''$) derivatives of a 1-D Gaussian function $G_d$ with variance $\sigma_d$:

$I' = I_s * (G_d')^T = (I * G_s) * (G_d')^T = I * (G_s * (G_d')^T) = I * H_1^0$,   (8)

$I'' = I_s * (G_d'')^T = (I * G_s) * (G_d'')^T = I * (G_s * (G_d'')^T) = I * H_2^0$,   (9)

where $H_1^0 = G_s * (G_d')^T$, $H_2^0 = G_s * (G_d'')^T$ and $T$ is the transpose operation. $H_1^0$ and $H_2^0$ are called the horizontal line detectors (directional line detectors in the $0^\circ$ direction).

The horizontal lines can be obtained by looking for the zero-cross points of $I'$ in the vertical direction; their strengths are the values of the corresponding points in $I''$:

$L_1^0(x, y) = \begin{cases} I''(x, y) & \text{if } I'(x, y) = 0 \text{ or } I'(x, y) \times I'(x + 1, y) < 0, \\ 0 & \text{otherwise.} \end{cases}$   (10)

Furthermore, we can determine the type of a roof edge (line), i.e. valley or peak, from the sign of the values in $L_1^0(x, y)$: positive values represent valleys, while negative values represent peaks. Since all palm-lines are valleys, the negative values in $L_1^0(x, y)$ should be discarded:

$L_2^0(x, y) = \begin{cases} L_1^0(x, y) & \text{if } L_1^0(x, y) > 0, \\ 0 & \text{otherwise.} \end{cases}$   (11)

Palm-lines are much thicker than ridges. For that reason, one or more thresholds can be used to remove the ridges from $L_2^0$ and obtain a magnitude image $L^0$, which is called the directional line magnitude image in the $0^\circ$ direction.

The directional line detectors $H_1^\theta$, $H_2^\theta$ in direction $\theta$ can be obtained by rotating $H_1^0$, $H_2^0$ by angle $\theta$. The line points are then obtained by looking for the zero-cross points in the $(\theta + 90)^\circ$ direction. After discarding the peak roof edges and the ridges, we obtain the directional line magnitude image $L^\theta$ in direction $\theta$.

There are two parameters, $\sigma_s$ and $\sigma_d$, in these directional line detectors: $\sigma_s$ controls the connection and smoothness of
the lines, and $\sigma_d$ controls the width of the lines that can be detected. A small $\sigma_s$ results in poor connection and smoothness of the detected lines, while a large $\sigma_s$ results in the loss of some short lines and of line segments whose curvature is large. Thin roof edges cannot be extracted when $\sigma_d$ is large, so these parameters should be chosen ad hoc. In general, principal lines are long, narrow and somewhat straight, so for principal line extraction $\sigma_s$ should be large while $\sigma_d$ should be small. After tuning the values of $\sigma_s$ and $\sigma_d$ through many experiments, we obtained suitable values for principal line extraction: $\sigma_s = 1.8$ and $\sigma_d = 0.5$.
In this case,

$H_1^0 = \begin{bmatrix}
0.0009 & 0.0027 & 0.0058 & 0.0092 & 0.0107 & 0.0092 & 0.0058 & 0.0027 & 0.0009 \\
0.0065 & 0.0191 & 0.0412 & 0.0655 & 0.0764 & 0.0655 & 0.0412 & 0.0191 & 0.0065 \\
0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\
-0.0065 & -0.0191 & -0.0412 & -0.0655 & -0.0764 & -0.0655 & -0.0412 & -0.0191 & -0.0065 \\
-0.0009 & -0.0027 & -0.0058 & -0.0092 & -0.0107 & -0.0092 & -0.0058 & -0.0027 & -0.0009
\end{bmatrix}$,   (12)

$H_2^0 = \begin{bmatrix}
0.0156 & 0.0211 & 0.0309 & 0.0416 & 0.0464 & 0.0416 & 0.0309 & 0.0211 & 0.0156 \\
0.0257 & 0.0510 & 0.0954 & 0.1441 & 0.1660 & 0.1441 & 0.0954 & 0.0510 & 0.0257 \\
-0.0298 & -0.1125 & -0.2582 & -0.4178 & -0.4896 & -0.4178 & -0.2582 & -0.1125 & -0.0298 \\
0.0257 & 0.0510 & 0.0954 & 0.1441 & 0.1660 & 0.1441 & 0.0954 & 0.0510 & 0.0257 \\
0.0156 & 0.0211 & 0.0309 & 0.0416 & 0.0464 & 0.0416 & 0.0309 & 0.0211 & 0.0156
\end{bmatrix}$.   (13)
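The separable construction of Eqs. (7)–(9) can be reproduced by taking the outer product of a sampled 1-D Gaussian with the sampled derivatives of a narrower one. A sketch in that spirit follows; the sampling and normalization here are assumptions, so the resulting kernels approximate rather than reproduce the entries of Eqs. (12)–(13):

    import numpy as np
    from scipy.ndimage import convolve

    def gaussian_family(n, sigma):
        # A 1-D Gaussian and its analytic first and second derivatives.
        x = np.arange(n) - n // 2
        g = np.exp(-x ** 2 / (2 * sigma ** 2))
        g /= g.sum()
        g1 = (-x / sigma ** 2) * g                         # G'
        g2 = ((x ** 2 / sigma ** 2 - 1) / sigma ** 2) * g  # G''
        return g, g1, g2

    def horizontal_detectors(sigma_s=1.8, sigma_d=0.5):
        gs, _, _ = gaussian_family(9, sigma_s)   # smoothing along the line
        _, g1, g2 = gaussian_family(5, sigma_d)  # derivatives across it
        H1 = np.outer(g1, gs)                    # 5 x 9, cf. Eq. (12)
        H2 = np.outer(g2, gs)                    # 5 x 9, cf. Eq. (13)
        return H1, H2

    def directional_line_magnitude(I, H1, H2):
        I1 = convolve(I.astype(float), H1)       # Eq. (8)
        I2 = convolve(I.astype(float), H2)       # Eq. (9)
        # Vertical zero-crossings of I1 (Eq. (10)), valleys only (Eq. (11)).
        crossing = (I1 == 0) | (I1 * np.roll(I1, -1, axis=0) < 0)
        return np.where(crossing & (I2 > 0), I2, 0.0)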
The hysteretic threshold method [21] is used for principal line extraction: the high threshold is obtained automatically by applying Otsu's method [22] to the non-zero points of $L_2^\theta$, and the low threshold is chosen as the minimum value of the non-zero points of $L_2^\theta$.
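The hysteretic thresholding can be sketched with connected components: weak line points survive only if their component touches a strong point. Otsu's method is available as threshold_otsu in scikit-image; the function below is a generic rendering, not our exact implementation:

    import numpy as np
    from scipy.ndimage import label
    from skimage.filters import threshold_otsu

    def hysteresis_lines(L2):
        nz = L2[L2 > 0]
        high = threshold_otsu(nz)    # high threshold via Otsu's method [22]
        weak = L2 > 0                # low threshold: minimum non-zero value
        components, _ = label(weak)
        # Keep components containing at least one strong (>= high) point.
        strong = np.unique(components[L2 >= high])
        keep = np.isin(components, strong[strong > 0])
        return np.where(keep, L2, 0.0)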
4.2. Extracting potential line initials of principal lines

The heart line is defined as a curve originating from Region GHIP (Fig. 2(a)), and the life line and head line are defined as curves originating from Region ABML (Fig. 2(b) and (c)). We can therefore extract the beginnings ("line initials") of the principal lines from these regions and then use these initials as the basis for extracting the principal lines in their entirety. A careful examination of a palmprint reveals that each principal line initially runs almost perpendicular to its neighboring palm boundary segment, approximated here by line AB (life and head lines) or line GH (heart line) (Fig. 3(g)). If we denote the slope angle of the corresponding line (line AB or line GH) as $\alpha$, then the directions of the line initials of the principal lines are close to $\alpha + 90^\circ$, so we first extract all lines in this region using the $(\alpha + 90)^\circ$ line detectors $H_1^{\alpha+90^\circ}$, $H_2^{\alpha+90^\circ}$. Each of these extracted line segments is a potential line initial of the principal lines. Hence, we extract lines from each of these line initials and then keep the principal lines according to their definitions. Fig. 4 shows these extracted lines: (a) is the original palmprint and (b) is the palmprint overlaid with the extracted potential line initials of the principal lines.
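The rotated detectors used here (at $\alpha + 90^\circ$, and later at the local direction $\theta$) can be obtained by resampling the base kernels, e.g. with scipy.ndimage.rotate; the interpolation order is our assumption, as the paper does not state its rotation scheme:

    from scipy.ndimage import rotate

    def rotated_detectors(H1, H2, theta_deg):
        # Rotate the 0-degree detectors to direction theta.
        return (rotate(H1, theta_deg, reshape=True, order=1),
                rotate(H2, theta_deg, reshape=True, order=1))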
4.3. Extracting heart lines

Because principal lines do not curve greatly, it is a simple matter to use the currently extracted part of a line to predict the position and direction of its next short part. Based on the extracted potential line initials, we therefore devise a recursive process to extract the whole heart line.
Suppose that Curve ab in Fig. 5(a) is the extracted part of the heart line of the palmprint shown in Fig. 4(a). To extract the next part of the heart line, we trace back along the extracted heart line ab from point b and take the Kth point, c (here K = 20). Since the heart line does not curve greatly, the region of interest (ROI) in which the next segment of the heart line should lie can be defined as a rectangular region $L \times W$ whose center point is point b. Point c is the midpoint of one border, whose length is W. W is a predefined value (here W = 20), and L equals twice the distance between points b and c (Fig. 5(b)).
Joining points b and c gives the straight line cb, whose slope angle is $\theta$. Because principal lines curve so little, the direction of the next line segment should not vary much from $\theta$. We therefore employ the directional line detectors $H_1^\theta$, $H_2^\theta$ to extract the line segments in this ROI and then keep all of the branches connecting with ac (Fig. 5(c)). If only one branch connects with ac, this branch is regarded as the next line segment. Otherwise, we choose one branch as follows. In Fig. 5(c), two branches, cok and coh, are connected with ac in the ROI, where point o is the branch point. Fig. 5(d) shows an enlarged version of the ROI. We trace the lines oh, ok and oc from point o and take the Nth points f, g and e, respectively (here N = 10); we then link up of, og and oe and compute Angle foe and Angle goe.
Fig. 4. Potential line initials of the principal lines: (a) original palmprint; (b) palmprint overlaid with the extracted potential line initials of the principal lines.
The branch (oh) corresponding to the maximum angle is chosen as the next line segment.
After obtaining the next line segment, we must determine whether the heart line has reached its endpoint. We regard the heart line as having reached its endpoint if the curve ch in the ROI satisfies one of the following two conditions:
(1) If the minimum distance from endpoint h to the three sides of the ROI (not including the side passing through point c) exceeds a threshold Td (here Td = 5), point h is the endpoint.
(2) Join points c and h, let point m be the point on curve ch farthest from the straight line ch, and join cm and hm. If Angle cmh is less than a threshold Ta (here Ta = 135°), point m is the endpoint (Fig. 5(e)).
If the curve ch satisfies neither of these conditions, we take the longer curve ah as the currently extracted heart line and repeat this process recursively until the extracted curve reaches its endpoint (the stopping rules are sketched in code below).
Fig. 5(f) shows the whole extracted heart line and (g) is
the palmprint overlaid with the whole heart line and all of
the ROIs involved in this heart line extraction.
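The two stopping rules reduce to simple geometric tests; a minimal sketch, assuming the traced curve ch is an ordered (N, 2) array in an ROI frame rotated so that the side through point c lies at x = x_min (all names are hypothetical):

    import numpy as np

    def angle_deg(m, p, q):
        # Angle pmq from the dot product of the two edge vectors.
        u, v = np.asarray(p, float) - m, np.asarray(q, float) - m
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    def reached_endpoint(curve, roi_min, roi_max, Td=5, Ta=135):
        c, h = curve[0].astype(float), curve[-1].astype(float)
        # Condition 1: h is at least Td away from the three far ROI sides
        # (the side x = x_min, which passes through c, is excluded).
        margins = np.array([h[1] - roi_min[1],
                            roi_max[0] - h[0],
                            roi_max[1] - h[1]])
        if margins.min() > Td:
            return True
        # Condition 2: the curve bends sharply at m, its farthest point
        # from the chord ch (argmax is scale-invariant, so no normalizing).
        dx, dy = h - c
        d = np.abs(dx * (curve[:, 1] - c[1]) - dy * (curve[:, 0] - c[0]))
        m = curve[np.argmax(d)].astype(float)
        return angle_deg(m, c, h) < Ta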
4.4. Extracting life and head lines

The process of extracting the life line and the head line differs little from that of extracting the heart line. The difference lies in the rule that only one line may be extracted from each line initial. While this rule works well for heart line extraction, it is unsuitable for life line and head line extraction because the life line and the head line may share a line initial. We therefore apply our observation that the branch point of the life line and head line should not lie beyond line-segments NK and LK (Fig. 1(b)). With this in mind, when using the recursive heart line extraction process to extract the life line and head line, if the extracted curve does not run across line-segments NK and LK and there exists more than one branch, then instead of choosing just one of the branches, we extract and trace the curves from each branch. Once the extracted curve crosses line-segment NK or LK, the extraction process is the same as for heart line extraction.
Fig. 6 illustrates the process of life line and head line extraction. In this figure, (a) is the extracted line, which includes two branches. In the original heart line extraction process, only one branch (oe) would be chosen and the other (of) would be discarded. Obviously, it would be wrong to discard branch of because it is a part of the life line. Since the extracted line does not run across line-segments NK and LK, we split this branched curve into two curves, aoe and aof (Fig. 6(b) and (c)), and extract the lines from each of them. In this figure, the extracted curve related to aoe is the head line (Fig. 6(d)) and the one related to aof is the life line (Fig. 6(e)). Fig. 6(f) shows the palmprint overlaid with all of the extracted principal lines.
5. Palmprint classification

To classify a palmprint, we first extract its principal lines and then classify the palmprint by the number of principal lines and the number of their intersections. As the number of each type of principal line is less than or equal to 1, there are at most three principal lines. Two principal lines are said to intersect only if some of their points overlap or some points of one line are neighbors of points of the other line. If any two principal lines intersect, the number of intersections increases by 1. Therefore, the number of intersections of three principal lines is less than or equal to 3.
According to the number of principal lines and the number of intersections of these lines, palmprints can be classified into the following six categories (Table 1):
Category 1: Palmprints composed of no more than one
principal line (Fig. 7(a));
Category 2: Palmprints composed of two principal lines
and no intersection (Fig. 7(b));
Fig. 5. Heart line extraction.
Category 3: Palmprints composed of two principal lines
and one intersection (Fig. 7(c));
Category 4: Palmprints composed of three principal lines
and no intersection (Fig. 7(d));
Category 5: Palmprints composed of three principal lines
and one intersection (Fig. 7(e));
Category 6: Palmprints composed of three principal lines
and more than one intersection (Fig. 7(f)).
The complete classification process for an input palmprint is as follows: (1) binarize the palmprint and extract its boundary; (2) detect the key points; (3) extract the heart line; (4) extract the head line and life line; (5) calculate the number of principal lines and their intersections; and (6) classify the palmprint into one of the defined categories using the classification rules (a sketch of this final step follows Table 1).
Fig. 6. Life line and head line extraction.
Table 1
Palmprint classification rules

Number of          Number of intersections    Category no.
principal lines    of principal lines
≤1                 0                          1
2                  0                          2
2                  1                          3
3                  0                          4
3                  1                          5
3                  ≥2                         6
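Table 1 amounts to a small lookup on the two counts; a sketch of the decision step, using an 8-neighborhood as one reading of the intersection criterion of Section 5:

    import numpy as np
    from scipy.ndimage import binary_dilation

    def lines_intersect(mask_a, mask_b):
        # Overlapping or neighboring points (Section 5) => intersection.
        return bool((binary_dilation(mask_a, np.ones((3, 3))) & mask_b).any())

    def classify(lines):
        # lines: binary masks of the extracted principal lines (at most 3).
        n = len(lines)
        k = sum(lines_intersect(a, b)
                for i, a in enumerate(lines) for b in lines[i + 1:])
        if n <= 1:
            return 1                      # Category 1
        if n == 2:
            return 2 if k == 0 else 3     # Categories 2-3
        return {0: 4, 1: 5}.get(k, 6)     # Categories 4-6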
6. Experimental results
Our palmprint classification algorithm was tested on a database containing 13,800 palmprints captured from 1,380 different palms using a CCD-camera-based device, 10 images per palm. The images are 320 × 240 with eight bits per pixel, and the palmprints have been labeled manually. In this database, 0.36% of the samples belong to Category 1, 1.23% to Category 2, 2.83% to Category 3, 11.81% to Category 4, 78.12% to Category 5 and 5.65% to Category 6. The distribution of each category in our palmprint database is listed in Table 2.
Correct classification takes place when a palmprint is classified into the category whose label is the same as that palmprint's label; misclassification takes place when a palmprint is classified into a category whose label differs from its label.
Fig. 7. Examples of each palmprint category.
Table 2
Distribution of each category in our database

Category no.             1       2       3       4        5        6
Number of palmprints     50      170     390     1,630    10,780   780
Percent (%)              0.36    1.23    2.83    11.81    78.12    5.65
In all of the 13,800 palmprints in the database, 548 samples were misclassified: 7 in Category 1, 11 in Category 2, 25 in Category 3, 104 in Category 4, 349 in Category 5 and 47 in Category 6. The classification accuracy is about 96.03%. The confusion matrix is given in Table 3 and the classification accuracy in Table 4.
Table 3
Classification results of the proposed algorithm

Assigned          True category no.
category no.      1      2      3      4        5         6
1                 43     6      3      23       41        2
2                 3      159    1      41       132       7
3                 4      0      365    18       95        13
4                 0      3      2      1,526    12        0
5                 0      2      11     13       10,431    25
6                 0      0      8      9        69        733

Table 4
Classification accuracy of the proposed algorithm

Total samples    Correctly classified samples    Misclassified samples    Classification accuracy
13,800           13,252                          548                      96.03%
7. Conclusions

As the first attempt to classify low-resolution palmprints, this paper presents a novel algorithm for palmprint classification using principal lines. Principal lines are defined and characterized by their position and thickness. A set of directional line detectors is devised for principal line extraction. Using these detectors, the potential line initials of the principal lines are extracted; then, based on these extracted potential line initials, the principal lines are extracted in their entirety using a recursive process. The local information of the extracted part of a principal line is used to determine an ROI, and a suitable line detector is then chosen to extract the next part of the principal line in this ROI. After extracting the principal lines, we present some rules for palmprint classification. The palmprints are classified into six categories according to the number of the principal lines and the number of their intersections. From the statistical results on our database containing 13,800 palmprints, the distributions of Categories 1–6 are 0.36%, 1.23%, 2.83%, 11.81%, 78.12% and 5.65%, respectively. The proposed algorithm classified these palmprints with 96.03% accuracy.
References

[1] D. Zhang, Automated Biometrics: Technologies and Systems, Kluwer Academic Publishers, Dordrecht, 2000.
[2] A. Jain, R. Bolle, S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Publishers, Dordrecht, 1999.
[3] A. Jain, L. Hong, R. Bolle, On-line fingerprint verification, IEEE Trans. Pattern Anal. Mach. Intell. 19 (4) (1997) 302–313.
[4] L. Coetzee, E.C. Botha, Fingerprint recognition in low quality images, Pattern Recognition 26 (10) (1993) 1441–1460.
[5] R.P. Wildes, Iris recognition: an emerging biometric technology, Proc. IEEE 85 (9) (1997) 1348–1363.
[6] W.W. Boles, B. Boashash, A human identification technique using images of the iris and wavelet transform, IEEE Trans. Signal Process. 46 (4) (1998) 1185–1188.
[7] R. Brunelli, T. Poggio, Face recognition: features versus templates, IEEE Trans. Pattern Anal. Mach. Intell. 15 (10) (1993) 1042–1052.
[8] Y. Gao, M.K.H. Leung, Face recognition using line edge map, IEEE Trans. Pattern Anal. Mach. Intell. 24 (6) (2002) 764–779.
[9] J.P. Campbell Jr., Speaker recognition: a tutorial, Proc. IEEE 85 (9) (1997) 1437–1462.
[10] K. Chen, Towards better making a decision in speaker verification, Pattern Recognition 36 (2) (2003) 329–346.
[11] A.K. Jain, A. Ross, S. Prabhakar, An introduction to biometric recognition, IEEE Trans. Circuits Syst. Video Technol. 14 (1) (2004) 4–20.
[12] N. Duta, A.K. Jain, K.V. Mardia, Matching of palmprints, Pattern Recogn. Lett. 23 (4) (2001) 477–485.
[13] D. Zhang, W. Kong, J. You, M. Wong, Online palmprint identification, IEEE Trans. Pattern Anal. Mach. Intell. 25 (9) (2003) 1041–1050.
[14] C.C. Han, H.L. Chen, C.L. Lin, K.C. Fan, Personal authentication using palm-print features, Pattern Recognition 36 (2) (2003) 371–381.
[15] A. Kumar, D.C.M. Wong, H.C. Shen, A. Jain, Personal verification using palmprint and hand geometry biometric, Lecture Notes in Computer Science, Vol. 2688, Springer, Berlin, 2003, pp. 668–678.
[16] K. Karu, A.K. Jain, Fingerprint classification, Pattern Recognition 29 (3) (1996) 389–404.
[17] R. Cappelli, A. Lumini, D. Maio, D. Maltoni, Fingerprint classification by directional image partitioning, IEEE Trans. Pattern Anal. Mach. Intell. 21 (5) (1999) 402–421.
[18] W. Shu, G. Rong, Z. Bian, Automatic palmprint verification, Int. J. Image Graphics 1 (1) (2001) 135–151.
[19] R.M. Haralick, Ridges and valleys on digital images, Comput. Vision Graphics Image Process. 22 (1983) 28–38.
[20] K. Liang, T. Tjahjadi, Y. Yang, Roof edge detection using regularized cubic B-spline fitting, Pattern Recognition 30 (5) (1997) 719–728.
[21] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 8 (6) (1986) 679–698.
[22] J.R. Parker, Algorithms for Image Processing and Computer Vision, Wiley, New York, 1997.
About the Author—XIANGQIAN WU received his B.Sc. and M.Sc. degrees in Computer Science from Harbin Institute of Technology (HIT), China, in 1997 and 1999, respectively. He is currently a Ph.D. student in the School of Computer Science and Technology at HIT. His research interests include pattern recognition, image analysis and biometrics.
About the Author—DAVID ZHANG graduated in computer science from Peking University in 1974 and received his M.Sc. and Ph.D. degrees in computer science and engineering from Harbin Institute of Technology (HIT) in 1983 and 1985, respectively. From 1986 to 1988, he was a postdoctoral fellow at Tsinghua University and then became an associate professor at Academia Sinica, Beijing, China. He received his second Ph.D., in electrical and computer engineering, from the University of Waterloo, Ontario, Canada, in 1994. Currently, he is a professor at Hong Kong Polytechnic University. He is the Founder and Editor-in-Chief of the International Journal of Image and Graphics (IJIG); Book Editor of the Kluwer International Series on Biometrics (KISB); and Program Chair of the International Conference on Biometrics Authentication (ICBA). He is an Associate Editor of more than ten international journals, including IEEE Trans. on SMC-A/SMC-C and Pattern Recognition, and is the author of more than 120 journal papers, twenty book chapters and nine books. As a principal investigator, he has since 1980 brought many biometrics projects to fruition and won numerous prizes. He holds a number of patents in both the USA and China and is a current Croucher Senior Research Fellow.
About the Author—KUANQUAN WANG received his B.E. and M.E. degrees from Harbin Institute of Technology (HIT), Harbin, China, and his Ph.D. degree in computer science and technology from Chongqing University, Chongqing, China, in 1985, 1988 and 2001, respectively. From 2000 to 2001 he was a visiting scholar at Hong Kong Polytechnic University supported by Hong Kong Croucher Funding, and from 2003 to 2004 he was a research fellow at the same university. Currently, he is a professor and a supervisor of Ph.D. candidates in the Department of Computer Science and Engineering, and an associate director of the Biocomputing Research Centre at HIT. So far, he has published over 70 papers. He is also a member of the IEEE and an editorial board member of the International Journal of Image and Graphics. His research interests include biometrics, image processing and pattern recognition.