Pattern Recognition 41 (2008) 1316 – 1328
www.elsevier.com/locate/pr
Palmprint verification based on principal lines
De-Shuang Huang a,∗ , Wei Jia a,b , David Zhang c
a Intelligent Computation Laboratory, Hefei Institute of Intelligent Machines, Chinese Academy of Science, P.O. Box 1130, Hefei, Anhui 230031, China
b Department of Automation, University of Science and Technology of China, Hefei 230027, China
c Biometrics Research Centre, Department of Computing, The Hong Kong Polytechnic University, Hong Kong
Received 25 June 2006; received in revised form 22 August 2007; accepted 29 August 2007
Abstract
In this paper, we propose a novel palmprint verification approach based on principal lines. In the feature extraction stage, the modified finite Radon transform is proposed, which can extract principal lines effectively and efficiently even when the palmprint images contain many long and strong wrinkles. In the matching stage, a matching algorithm based on pixel-to-area comparison is devised to calculate the similarity between two palmprints, which shows good robustness to slight rotations and translations of palmprints. The experimental results for verification on the Hong Kong Polytechnic University Palmprint Database show that the discriminability of principal lines is also strong.
© 2007 Elsevier Ltd. All rights reserved.
Keywords: Palmprint; Biometrics; Principal lines; Line detection; Modified finite Radon transform
1. Introduction
In a networked society, automatic personal verification is a crucial problem that needs to be properly solved, and biometrics is one of the most important and effective solutions in this field. Recently, palmprint based verification systems (PVS) have been receiving more attention from researchers [1]. Compared with the widely used fingerprint or iris based personal verification systems [2,3], the PVS can also achieve satisfying performance; for example, it can provide a reliable recognition rate with fast processing speed [1]. In particular, the PVS has several special advantages, such as rich texture features, stable line features, low-resolution imaging, low-cost capturing devices, and easy self-positioning.
So far, there have been many approaches proposed for palmprint verification/identification, which can be mainly divided
into five categories: (1) texture based approaches [1,4]; (2) appearance based approaches [5–8]; (3) multiple features based
approaches [9]; (4) orientation based approaches [10,11]; and
∗ Corresponding author.
E-mail addresses:
[email protected] (D.-S. Huang),
[email protected] (W. Jia),
[email protected] (D. Zhang).
0031-3203/$30.00 © 2007 Elsevier Ltd. All rights reserved.
doi:10.1016/j.patcog.2007.08.016
(5) line based approaches [12–18]. The main texture based approaches extract texture features using 2-D Gabor filters, and have been shown to offer satisfying performance in terms of recognition rate and processing speed [1,4]. Appearance based approaches have also been reported to achieve exciting results in many papers, but they may be sensitive to illumination, contrast, and position changes in real applications. In addition, it was reported in Ref. [9] that multiple features based approaches using information fusion technology could provide more reliable results. Recently, orientation codes have been deemed the most promising methods, since the orientation feature has more discriminative power than other features and is more robust to changes in illumination.
Obviously, lines are the basic features of a palmprint. Thus, line based approaches also play an important role in the palmprint verification/identification field. Zhang et al. used overcomplete wavelet expansion and a directional context modeling technique to extract principal-line-like features [12]. Han et al. proposed using Sobel and morphological operations to extract line-like features from palmprint images obtained with a scanner [13]. Lin et al. applied a hierarchical decomposition mechanism, which includes directional and multi-resolution decompositions, to extract principal palmprint features from a region of interest (ROI) [14]. However, these methods cannot extract palm lines explicitly. Additionally, Wu et al. and Liu et al. proposed two different approaches based on palm lines, which will be discussed in a later section [15–18].
It is well known that palm lines consist of wrinkles and principal lines, and principal lines can be treated as a separate feature to characterize a palm. There are thus several reasons to carefully study principal lines based approaches. First, principal lines based approaches agree with human habit: when human beings compare two palmprints, they instinctively compare the principal lines. Second, principal lines are generally more stable than wrinkles, which are easily masked by bad illumination conditions, compression, and noise. Third, principal lines can act as an important component in multiple features based approaches. Fourth, in some special cases, for example, when the police are searching for palmprints with similar principal lines, other features cannot replace principal lines. Finally, principal lines can be used in palmprint classification or fast retrieval schemes. However, principal lines based approaches have not been adequately studied so far. The main reason is that it is very difficult to extract principal lines from complex palmprint images, which contain many strong and long wrinkles. At the same time, many researchers claimed that it was difficult to obtain a high recognition rate using only principal lines because of their similarity among different people [1]; in other words, they thought the discriminability of principal lines was limited. Nevertheless, they did not conduct experiments to verify this viewpoint.
In this paper, we propose a novel palmprint verification approach based on principal lines, and further discuss the discriminability of principal lines. Before presenting the proposed approach, we first give the definition of principal lines used throughout the paper. To illustrate this definition, three typical palmprint images are shown in Fig. 1. Generally speaking, most palmprints have three principal lines: the heart line, head line, and life line, which are the longest, strongest, and widest lines in a palmprint image, and have stable line initials and positions (see Fig. 1(a)) [15]. In addition, a palmprint may have more or fewer principal lines due to the diversity and complexity of palms (see Fig. 1(b)). In this paper, for a small number of palmprints, one or two of the longest and strongest wrinkles whose directions are similar to those of the three principal lines are also regarded as part of the principal lines (see Fig. 1(c)).
In the principal lines extraction stage, an important issue is the choice of criteria for distinguishing principal lines from wrinkles. Through careful observation and analysis, we adopt two main differences between principal lines and wrinkles as the corresponding criteria. The first is line energy: principal lines are stronger than wrinkles. The second is direction: the directions of most wrinkles obviously differ from those of the principal lines. In addition, since the Radon transform and its variants are powerful tools for detecting the directions and energies of lines in an image, they are used in our method. In the matching stage, we devise a matching algorithm based on pixel-to-area comparison to calculate the similarity between two palmprints, which shows good robustness to slight rotations and translations.
Additionally, we must stress that all palmprint images used in this paper are from the Hong Kong Polytechnic University Palmprint Database [19], and were captured by the CCD-based device described in Ref. [1].
This paper is organized as follows. Section 2 presents the method of principal lines extraction. Section 3 gives the palmprint matching method based on pixel-to-area comparison. Section 4 reports the experimental results, including principal lines extraction, verification, and computational time. Section 5 discusses the discriminability of principal lines. Section 6 concludes the paper.
2. Feature extraction based on the modified finite Radon transform
2.1. The Radon transform and the finite Radon transform
The Radon transform in Euclidean space was first established by Johann Radon in 1917 [20]. The Radon transform of a 2-D function f(x, y) is defined as

$$R(r, \theta)[f(x, y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\,\delta(r - x\cos\theta - y\sin\theta)\,\mathrm{d}x\,\mathrm{d}y, \qquad (1)$$

where $\delta(\cdot)$ is the Dirac delta function, r is the perpendicular distance of a line from the origin, and $\theta$ is the angle between the line and the y-axis. The Radon transform accentuates linear features by integrating image intensity along all possible lines in an image, thus it can be used to detect linear trends in the image.
However, there are some drawbacks in using the Radon transform for linear feature detection when the intensity integration is performed over the entire extent of the image; for example, it cannot effectively detect line segments that are significantly shorter than the image dimensions. In order to overcome these problems, Copeland et al. proposed a modified Radon transform scheme, i.e., the localized Radon transform, which is defined as

$$R(r, \theta)[f(x, y)] = \int_{x_{\min}}^{x_{\max}} \int_{y_{\min}}^{y_{\max}} f(x, y)\,\delta(r - x\cos\theta - y\sin\theta)\,\mathrm{d}x\,\mathrm{d}y, \qquad (2)$$

where the parameters $x_{\max}$, $x_{\min}$, $y_{\max}$, and $y_{\min}$ define a local area over which the Radon transform is performed [21].
Fig. 1. Three typical palmprint images.

The finite Radon transform (FRAT) is another way to perform the Radon transform, for finite-length signals [22]. The FRAT is generally defined as the summation of image pixels over a certain set of lines. Denoting $Z_p = \{0, 1, \ldots, p-1\}$, where p is a prime number, the FRAT of a real function f[x, y] on the finite grid $Z_p^2$ is defined as

$$r_k[l] = \mathrm{FRAT}_f(k, l) = \frac{1}{\sqrt{p}} \sum_{(i,j) \in L_{k,l}} f[i, j], \qquad (3)$$

where $L_{k,l}$ denotes the set of points that make up a line on the lattice $Z_p^2$:

$$L_{k,l} = \{(i, j) : j = ki + l \ (\mathrm{mod}\ p),\ i \in Z_p\}, \quad 0 \le k < p, \qquad L_{p,l} = \{(l, j) : j \in Z_p\}. \qquad (4)$$

In Eq. (4), k represents the corresponding slope of the line and l represents the intercept.
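To make the wrap-around effect mentioned below concrete, a minimal NumPy sketch of Eqs. (3) and (4) follows; the array-indexing convention is illustrative rather than prescribed by the definitions:

```python
import numpy as np

def frat(f, p):
    """Finite Radon transform of a p x p array (p prime), Eqs. (3)-(4).

    r[k, l] sums f over the line j = k*i + l (mod p); the modulo makes
    lines wrap around the grid, which is the effect the MFRAT removes.
    """
    assert f.shape == (p, p)
    r = np.zeros((p + 1, p))
    i = np.arange(p)
    for l in range(p):
        for k in range(p):
            j = (k * i + l) % p                      # wrap-around happens here
            r[k, l] = f[i, j].sum() / np.sqrt(p)     # Eq. (3)
        r[p, l] = f[l, :].sum() / np.sqrt(p)         # vertical lines L_{p,l}
    return r
```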
2.2. Extracting principal lines using the modified finite Radon transform

In a palmprint image, a palm line can be approximately regarded as a straight line within a small local area. Therefore, it can be detected by the FRAT. However, the FRAT treats the input image as a periodic image, so the detected lines exhibit a "wrap around" effect due to the modulo operations in the definition of the lines of the FRAT [23]. In order to eliminate this effect, we propose the modified finite Radon transform (MFRAT) to extract the line features of a palmprint, which is defined as follows.

Denoting $Z_p = \{0, 1, \ldots, p-1\}$, where p is a positive integer, the MFRAT of a real function f[x, y] on the finite grid $Z_p^2$ is defined as

$$r[L_k] = \mathrm{MFRAT}_f(k) = \frac{1}{C} \sum_{(i,j) \in L_k} f[i, j], \qquad (5)$$

where C is a scalar that controls the scale of $r[L_k]$, and $L_k$ denotes the set of points that make up a line on the lattice $Z_p^2$:

$$L_k = \{(i, j) : j = k(i - i_0) + j_0,\ i \in Z_p\}, \qquad (6)$$

where $(i_0, j_0)$ denotes the center point of the lattice $Z_p^2$, and k is the corresponding slope of $L_k$. In this paper, $L_k$ also has another notation, $L(\theta_k)$, where $\theta_k$ is the angle corresponding to k.

Compared with the FRAT, the MFRAT removes the intercept l in $L_{k,l}$. Consequently, for any given slope k, the summation is computed over only one line, which passes through the center point $(i_0, j_0)$ of $Z_p^2$. It should be pointed out that all lines at different directions have an identical number of pixels, and some pixels belonging to one line may overlap with other lines. Unlike the FRAT, the number of directions k is not restricted by p, but is determined by the practical situation. In addition, the MFRAT is not an invertible transform. Note that before taking the MFRAT given in Eq. (5), the mean should be subtracted from the input f′, so that

$$f = f' - \mathrm{mean}(f'), \qquad (7)$$

$$\sum_{(i,j) \in Z_p^2} f[i, j] = 0. \qquad (8)$$

In the MFRAT, the direction k and the energy e of the center point $f(i_0, j_0)$ of the lattice $Z_p^2$ are calculated by the following formulas:

$$k_{(i_0,j_0)} = \arg\min_k (r[L_k]), \quad k = 1, 2, \ldots, N, \qquad (9)$$

$$e_{(i_0,j_0)} = |\min_k (r[L_k])|, \quad k = 1, 2, \ldots, N, \qquad (10)$$

where $|\cdot|$ denotes the absolute value. Since palm lines are darker than their surroundings, the summation along a line is most negative when $L_k$ coincides with a palm line, so the minimizing direction yields both the direction and the energy of the center pixel.
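As an illustration, a simplified NumPy sketch of Eqs. (5)-(10) for a single p × p patch follows. The rasterization details are our assumptions, not prescribed by the definitions above: lines are rasterized by rounding, steep directions step along the other axis so that every line keeps p pixels, C = 1, and the returned direction index follows the k = 1, ..., N convention.

```python
import numpy as np

def mfrat_patch(patch, n_dirs=12, C=1.0):
    """Direction index and energy of the centre pixel of a p x p patch
    (p odd), following Eqs. (5)-(10)."""
    p = patch.shape[0]
    assert p % 2 == 1, "odd p gives a well-defined centre pixel"
    f = patch - patch.mean()                    # Eq. (7): zero-mean input
    c = p // 2
    t = np.arange(p) - c                        # offsets along the line
    r = np.empty(n_dirs)
    for k in range(n_dirs):
        theta = (k + 1) * np.pi / n_dirs        # theta_k = k * pi / N
        if abs(np.tan(theta)) <= 1.0:           # shallow line: step in x
            i = t + c
            j = np.round(t * np.tan(theta)).astype(int) + c
        else:                                   # steep line: step in y
            j = t + c
            i = np.round(t / np.tan(theta)).astype(int) + c
        r[k] = f[i, j].sum() / C                # Eq. (5): sum along L(theta_k)
    k_min = int(np.argmin(r))                   # Eq. (9): lines are dark, so
    return k_min + 1, abs(r[k_min])             # the minimum marks a line; Eq. (10)
```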
In this way, the directions and energies of all pixels are calculated as the center of the lattice $Z_p^2$ moves over the image pixel by pixel (or several pixels at a time). For an image I(x, y) of size m × n, if the values of all pixels are replaced by their directions and energies, two new images, Direction_image and Energy_image, are created:

$$\mathrm{Direction\_image} = \begin{pmatrix} k_{(1,1)} & k_{(1,2)} & \cdots & k_{(1,n)} \\ k_{(2,1)} & k_{(2,2)} & \cdots & k_{(2,n)} \\ \vdots & \vdots & \ddots & \vdots \\ k_{(m,1)} & k_{(m,2)} & \cdots & k_{(m,n)} \end{pmatrix},$$

$$\mathrm{Energy\_image} = \begin{pmatrix} e_{(1,1)} & e_{(1,2)} & \cdots & e_{(1,n)} \\ e_{(2,1)} & e_{(2,2)} & \cdots & e_{(2,n)} \\ \vdots & \vdots & \ddots & \vdots \\ e_{(m,1)} & e_{(m,2)} & \cdots & e_{(m,n)} \end{pmatrix}.$$
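A direct (unoptimized) sketch of this scan, reusing the mfrat_patch function from the previous sketch, might look as follows; leaving border pixels at zero is one of several possible border policies and is our assumption:

```python
import numpy as np

def direction_energy_images(img, p=9, n_dirs=12):
    """Slide the MFRAT centre over every pixel to build Direction_image
    and Energy_image; borders narrower than p // 2 are left as zero."""
    m, n = img.shape
    h = p // 2
    dir_img = np.zeros((m, n), dtype=int)
    eng_img = np.zeros((m, n))
    for x in range(h, m - h):
        for y in range(h, n - h):
            patch = img[x - h:x + h + 1, y - h:y + h + 1].astype(float)
            dir_img[x, y], eng_img[x, y] = mfrat_patch(patch, n_dirs)
    return dir_img, eng_img
```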
Fig. 2 shows two examples of the MFRAT, whose sizes are 7 × 7 and 14 × 14, respectively, and whose lines $L(\theta_k)$ are at directions $\pi/12, 2\pi/12, \ldots, 6\pi/12$. Due to space limitations, the remaining lines $L(\theta_k)$ at directions $7\pi/12, 8\pi/12, \ldots, 12\pi/12$ are not depicted here. If the line in the MFRAT is 1 pixel wide, p should be an odd number in order to clearly define the center of $Z_p^2$. In the 14 × 14 MFRAT, the center area of $Z_p^2$ contains four pixels, so the directions and energies of these four pixels can be calculated
Fig. 2. The 7 × 7 and the 14 × 14 MFRAT. (a) and (g) $\theta_1 = \pi/12$; (b) and (h) $\theta_2 = 2\pi/12$; (c) and (i) $\theta_3 = 3\pi/12$; (d) and (j) $\theta_4 = 4\pi/12$; (e) and (k) $\theta_5 = 5\pi/12$; (f) and (l) $\theta_6 = 6\pi/12$.
simultaneously. After principal lines extraction, these four pixels, which have the same directions and energies, are regarded as one pixel. Thus, the feature image is half the size of the original image, which can be regarded as a subsampling operation. In the following example, the 14 × 14 MFRAT is applied to a 128 × 128 palmprint image; however, in order to demonstrate the processing clearly, the 128 × 128 feature image is presented instead of the 64 × 64 one.
Fig. 3 shows an example of principal lines extraction, which
contains some images appearing in this stage. Fig. 3(b) is the
Energy_image obtained from the original image of Fig. 3(a).
It can be seen that the energies of all palm lines are extracted clearly and accurately. In Fig. 3(c), the important lines, including the principal lines and some strong wrinkles, are extracted according to a threshold T. The obtained binary image is called the Lines_image, defined as

$$\mathrm{Lines\_image}(x, y) = \begin{cases} 0 & \text{if } \mathrm{Energy\_image}(x, y) < T, \\ 1 & \text{if } \mathrm{Energy\_image}(x, y) \ge T. \end{cases} \qquad (11)$$
In this step, many wrinkles are removed under the energy
criterion. It should be pointed out that T is an important parameter here. We sort all pixel values of Energy_image in descending order, and select the Mth largest pixel value as its adaptive
threshold T. In the above example, M was set to 1000.
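This adaptive threshold is straightforward to express; a minimal NumPy sketch, operating on the Energy_image computed above:

```python
import numpy as np

def lines_image(energy_img, M=1000):
    """Binarize Energy_image as in Eq. (11): T is the M-th largest
    energy value, so roughly M line pixels survive the threshold."""
    T = np.sort(energy_img, axis=None)[-M]    # M-th largest pixel value
    return (energy_img >= T).astype(np.uint8)
```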
Obviously, the Lines_image still contains a number of strong wrinkles. We can further remove them according to the direction criterion. Generally speaking, the directions of most wrinkles differ markedly from those of the principal lines. For instance, if the directions of the principal lines lie approximately in $(0, \pi/2]$, the directions of most wrinkles will lie approximately in $[\pi/2, \pi)$, and vice versa. Based on this prior knowledge, the Lines_image is divided into LA_image (see Fig. 3(d)) and LB_image (see Fig. 3(e)) according to the direction $\theta_{(x,y)}$ of every pixel. Note that points whose direction is $\pi/2$ are assigned to both LA_image and LB_image, i.e.,

$$\mathrm{LA\_image}(x, y) = 1 \quad \text{if } \mathrm{Lines\_image}(x, y) = 1 \text{ and } 0 < \theta_{(x,y)} \le \pi/2,$$
$$\mathrm{LB\_image}(x, y) = 1 \quad \text{if } \mathrm{Lines\_image}(x, y) = 1 \text{ and } \pi/2 \le \theta_{(x,y)} < \pi. \qquad (12)$$
Which of LA_image and LB_image contains the principal lines? Using Eq. (1), the Radon transform is performed on LA_image from 0 to $\pi/2$, and on LB_image from $\pi/2$ to $\pi$. Thus two Radon energy maps are created, $R[\mathrm{LA\_image}(x, y)]$ (see Fig. 3(f)) and $R[\mathrm{LB\_image}(x, y)]$ (see Fig. 3(g)). The transform can be implemented very fast, since the images are binary. As we know, the principal lines are generally longer and straighter than the wrinkles, so the Radon transform energy of the principal lines will be greater than that of the wrinkles. For a binary image F(x, y), two criteria can be adopted to evaluate the Radon transform energy of $R[F(x, y)]$:

$$E_{\max}(F(x, y)) = \max(R[F(x, y)]), \qquad (13)$$

$$E_{\mathrm{total}}(F(x, y)) = \sum_{x=1}^{m} \sum_{y=1}^{n} R[F(x, y)], \qquad (14)$$

where $E_{\max}(F(x, y))$ is the maximum value of $R[F(x, y)]$, and $E_{\mathrm{total}}(F(x, y))$ is the sum of all the values of $R[F(x, y)]$. In this approach, $E_{\mathrm{total}}(F(x, y))$ is adopted, and when calculating it, some restrictions were applied to remove the energies of noisy lines. Next, by comparing $E_{\mathrm{total}}(\mathrm{LA\_image}(x, y))$ with $E_{\mathrm{total}}(\mathrm{LB\_image}(x, y))$,
the above question can easily be answered. The corresponding decision rule is:

IF $E_{\mathrm{total}}(\mathrm{LA\_image}(x, y)) > E_{\mathrm{total}}(\mathrm{LB\_image}(x, y))$
THEN LA_image contains the principal lines
ELSE LB_image contains the principal lines
END IF
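This decision can be sketched as follows, using the Radon implementation from scikit-image (whose projection angles are given in degrees). The mapping from direction index to angle assumes $\theta_k = k\pi/N$ as above, and the extra restrictions on $E_{\mathrm{total}}$ mentioned earlier are omitted for brevity:

```python
import numpy as np
from skimage.transform import radon   # projection angles in degrees

def pick_principal_group(lines_img, dir_img, n_dirs=12):
    """Split Lines_image by direction (Eq. (12)) and keep the group
    with the larger total Radon energy (Eqs. (13)-(14))."""
    theta_deg = dir_img * (180.0 / n_dirs)       # index k -> theta_k in degrees
    on = lines_img == 1
    LA = (on & (theta_deg <= 90)).astype(float)  # directions in (0, pi/2]
    LB = (on & (theta_deg >= 90)).astype(float)  # directions in [pi/2, pi)
    E_A = radon(LA, theta=np.arange(0, 91), circle=False).sum()
    E_B = radon(LB, theta=np.arange(90, 181), circle=False).sum()
    return LA if E_A > E_B else LB               # the IF/ELSE rule above
```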
1320
D.-S. Huang et al. / Pattern Recognition 41 (2008) 1316 – 1328
Fig. 3. Images appearing in feature extraction stage. (a) Original image; (b) Energy_image; (c) Lines_image; (d) LA_image; (e) LB_image; (f) R(LA_image);
(g) R(LB_image); (h) thinning lines of (d).
In the above example, the principal lines are located in LA_image. Finally, the thinned principal lines are depicted in Fig. 3(h).
3. Palmprint matching
The task of palmprint matching is to calculate the degree
of similarity between a test image and a training image. In
our method, the similarity measurement is determined by
a line matching technique. In the Palm-Code [1] and Fusion-Code [4] schemes, the normalized Hamming distance was used to calculate the degree of similarity between a test image and a training image, while the angular distance was adopted in the Competitive-Code scheme [11]. However, Hamming distance and angular distance are based on pixel-to-pixel comparison and are not suitable for line matching, since the points of the same line may not be superposed in palmprints captured from the same palm at different times. In this section, we devise an algorithm based on pixel-to-area comparison for robust line matching.
Suppose that A is a test image and B is a training image, both binary and of size m × n, in which the value of a principal line point is 1. The matching score from A to B is defined as

$$s(A, B) = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} A(i, j) \cap \bar{B}(i, j) \right) \bigg/ N_A, \qquad (15)$$

where "∩" is the logical AND operation, $N_A$ is the number of points on the detected principal lines in A, and $\bar{B}(i, j)$ is a small area around B(i, j). In our approach, $\bar{B}(i, j)$ consists of B(i + 1, j), B(i − 1, j), B(i, j), B(i, j + 1), and B(i, j − 1). Obviously, the value of $A(i, j) \cap \bar{B}(i, j)$ is 1 if A(i, j) and at least one point of $\bar{B}(i, j)$ are principal line points simultaneously.
Fig. 4 shows the difference between pixel-to-pixel comparison
(see Fig. 4(a)) and pixel-to-area comparison (see Fig. 4(b)).
The essence of s(A, B) is that A matches with the dilated B
(see Fig. 4(c)).
In the same way, the matching score from B to A is defined as

$$s(B, A) = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} B(i, j) \cap \bar{A}(i, j) \right) \bigg/ N_B. \qquad (16)$$

Finally, the matching score between A and B satisfies

$$S(A, B) = S(B, A) = \max(s(A, B), s(B, A)). \qquad (17)$$
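A compact sketch of Eqs. (15)-(17) using SciPy's binary dilation; the cross-shaped structuring element reproduces the five-point area $\bar{B}(i, j)$ defined above:

```python
import numpy as np
from scipy.ndimage import binary_dilation

# B(i, j) plus its four direct neighbours: the area of Eq. (15)
CROSS = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)

def s(A, B):
    """One-sided score of Eq. (15): the fraction of A's line points
    that land on the dilated lines of B."""
    hits = A.astype(bool) & binary_dilation(B.astype(bool), structure=CROSS)
    return hits.sum() / A.sum()                  # N_A = line points in A

def S(A, B):
    """Symmetric matching score of Eq. (17)."""
    return max(s(A, B), s(B, A))
```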
Theoretically, S(A, B) lies between 0 and 1, and the larger the matching score, the greater the similarity between A and B; the matching score of a perfect match is 1. From the definition of S(A, B), it can be seen that it is robust to slight translations and rotations between two images: the matching score changes little if the translation does not exceed one pixel and the rotation does not exceed 3°.
However, because of imperfect preprocessing, there may be larger translations in practical applications. To overcome this problem, we vertically and horizontally translate one of the feature images and match the images again, with both vertical and horizontal translations ranging from −2 to 2 pixels. The maximum value of S(A, B) obtained over these translated matchings is taken as the final matching score.
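The translated matching can be sketched as below. Note that np.roll wraps pixels around the border, which is a simplification of ours; a zero-padding shift would be closer to the text, though the difference is marginal for sparse line images:

```python
import numpy as np

def matching_score(A, B, max_shift=2):
    """Final score: re-match over all vertical/horizontal translations
    in [-max_shift, max_shift] and keep the largest S(A, B)."""
    best = 0.0
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            B_t = np.roll(B, (dx, dy), axis=(0, 1))   # translated training image
            best = max(best, S(A, B_t))               # S from the sketch above
    return best
```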
On the other hand, the proposed matching method is also robust to imperfect principal lines extraction. As shown in Fig. 5, two feature images were obtained from the same palm, with no translation or rotation between them, and one of the principal lines could not be extracted in Fig. 5(b). However, the matching score between them is still 1, i.e., a perfect match.
Fig. 4. Pixel-to-pixel comparison and pixel-to-area comparison.
Fig. 5. Principal lines extracted from the same palm, but different images.
4. Experimental results
The proposed approach in this paper was tested in the Hong
Kong Polytechnic University (PolyU) Palmprint Database,
which is available from website [19]. The PolyU Palmprint
Database contains 7752 grayscale images in BMP image format, corresponding to 386 different palms. In this database,
around 20 samples from each of these palms were collected in
two sessions, where around 10 samples were captured in the
first session and the second session, respectively. The average
interval between the first and the second collection was two
months. The resolution of all the original palmprint images is
384 × 284 pixels at 75 dpi. Although the resolution is low, the
principal lines and the wrinkles are still clear.
Usually, a square region is identified as the ROI before feature extraction, so that the relevant features are extracted and matched only within this square region. The benefit of this processing is that it defines a coordinate system that aligns different palmprint images captured from the same palm; otherwise, the matching results would be unreliable. In this paper, using a preprocessing approach similar to that described in Ref. [1], the palmprints were oriented and a 128 × 128 ROI was cropped.
4.1. Results for proposed feature extraction method
Fig. 6 shows two examples of proposed feature extraction
method using 14 × 14 MFRAT described in Section 2. Fig. 6(a)
shows a simple palmprint image, which contains clear principal
lines and few wrinkles. From Fig. 6(b), it can be seen that the
principal lines were perfectly extracted. Even in the case of a
complex palmprint images as shown in Fig. 6(c), our method
also obtained a satisfying result, which is illustrated in Fig. 6(d).
In the MFRAT, several parameters can be adjusted: the size of $Z_p^2$ determined by p, the number of line directions N, the width W of $L(\theta_k)$, and the threshold T determined by M. The results of feature extraction are generally influenced by these parameters. Here, we changed the values of p and M to analyze the corresponding influence. First, we adjusted M
to extract principal lines using the 14 × 14 MFRAT (p = 14,
N = 12, and W = 2). Fig. 7(a) shows the result extracted from
Fig. 3(a) while M was 500, and Fig. 7(b) shows the result while
M was 1800. From these two images, it can be concluded that if more lines need to be extracted, a larger M should be chosen; otherwise, a smaller M is preferable. Secondly, we conducted two experiments using different values of p. Fig. 7(c) illustrates the result of feature extraction using the 7 × 7 MFRAT described in Section 2 (p = 7, N = 12, W = 1, and M = 1000), while Fig. 7(d) depicts the result using the 22 × 22 MFRAT (p = 22, N = 12, W = 2, and M = 1000). Compared with Fig. 7(d), there are more short lines in Fig. 7(c). In this regard, we can conclude that if more short lines or curves with large curvatures need to be extracted, a smaller p should be adopted; otherwise, a larger p is the better choice.
4.2. Comparison with Gabor filter
Generally speaking, the Gabor filter is also a powerful tool for detecting the directional energies of lines; for example, it is often used for fingerprint image enhancement [24]. However, the Gabor filter is not well suited to detecting the directional energies of palm lines, mainly because its band limitation restricts its ability to detect lines of different widths. Yang et al. discussed this problem in their fingerprint image enhancement work [24]. They pointed out that ridge and valley widths often vary across fingerprint images and across regions of one image, so applying Gabor filters with a fixed bandwidth often results in artifacts [24]. The same situation can occur when detecting palm lines, since the widths of palm lines also differ considerably, especially between principal lines and wrinkles. In contrast, the MFRAT has no such drawback.
In order to compare their performance, the MFRAT and circular Gabor filters with 12 directions were applied to detect the directional energies of the lines in Figs. 3(a) and 6(c), respectively. Here, the circular Gabor filter has the following form:

$$G(x, y, \theta, u, \sigma) = \frac{1}{2\pi\sigma^2} \exp\left\{ -\frac{x^2 + y^2}{2\sigma^2} \right\} \exp\{2\pi i (ux\cos\theta + uy\sin\theta)\}, \qquad (18)$$

where $i = \sqrt{-1}$, u is the frequency of the sinusoidal wave, $\theta$ controls the orientation of the function, and $\sigma$ is the standard deviation of the Gaussian envelope.
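For reference, Eq. (18) translates directly into code; the sketch below samples the filter on a centred window, which is the usual convention and an assumption on our part:

```python
import numpy as np

def circular_gabor(size, u, sigma, theta):
    """Circular Gabor filter of Eq. (18), sampled on a size x size grid."""
    half = size // 2
    y, x = np.mgrid[-half:size - half, -half:size - half]
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    carrier = np.exp(2j * np.pi * u * (x * np.cos(theta) + y * np.sin(theta)))
    return envelope * carrier                    # complex-valued kernel
```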
Fig. 6. Extracting principal lines from two different palmprints.
Fig. 7. Extracting principal lines using different parameters. (a) M = 500, p = 14, N = 12, and W = 2; (b) M = 1800, p = 14, N = 12, and W = 2; (c) p = 7,
N = 12, W = 1, and M = 1000; (d) p = 22, N = 12, W = 2, and M = 1000.
Fig. 8. The directional energies of lines detected from Figs. 3(a) and 6(c) by Gabor filters with different parameters. (a) and (b) u = 0.2, σ = 3.5, and the filter size is 20 × 20; (c) and (d) u = 0.0916, σ = 5.6179, and the filter size is 35 × 35.
Fig. 9. The directional energies of lines detected from Figs. 3(a) and 6(c) by using 9 × 9 and 17 × 17 MFRAT. (a) and (b) p = 9, N = 12, and W = 1; (c)
and (d) p = 17, N = 12, and W = 1.
Fig. 10. Enhanced palmprint image and extracted palm lines using 9 × 9 MFRAT. (a) and (b) Enhanced palmprint image; (c) and (d) extracted palm lines
from (a) and (b).
Fig. 11. Palm lines detected from Figs. 3(a) and 6(c) using Liu's approach.

Figs. 8 and 9 depict the corresponding results. It can be seen that the performance of the Gabor filter is not satisfying (see Fig. 8): given a narrow bandwidth, the Gabor filter cannot correctly detect the directional energies of wide lines (see Figs. 8(a) and (b)), and vice versa (see Figs. 8(c) and (d)). In addition, there are some obvious artifacts in Figs. 8(a) and (b). In contrast, the energies of all lines were computed accurately by the MFRAT (see Fig. 9). Moreover, we can use the MFRAT to enhance the palmprint image. If we subtract the directional energies (Figs. 9(a) and (b)) from the original images (Figs. 3(a) and 6(c)), we obtain the enhanced images (Figs. 10(a) and (b)), in which all palm lines are clearer and more distinct. In particular, all palm lines can easily be extracted from these enhanced images. Suppose that A is an enhanced image and B is the image obtained by applying a mean filter of size about 10 × 10 to A. The palm lines image D is then extracted by

$$D = B - A - c \times I, \qquad (19)$$

where c is a positive integer to control noise, and I is an identity matrix. Finally, we obtain the palm lines image after binarizing D. Figs. 10(c) and (d) are the palm lines extracted from Figs. 10(a) and (b), respectively. From this experiment, it can be concluded that the MFRAT is an effective and powerful tool for extracting palm lines.
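A sketch of this extraction step follows. SciPy's uniform_filter plays the role of the 10 × 10 mean filter; we read the $c \times I$ term as subtracting the constant c from every pixel, and the value c = 5 and the zero binarization threshold are purely illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def extract_palm_lines(A, c=5):
    """Eq. (19): palm lines are darker than the local background, so
    D = B - A - c is positive on lines and negative elsewhere."""
    B = uniform_filter(A.astype(float), size=10)  # ~10 x 10 mean filter
    D = B - A - c                                 # Eq. (19); c suppresses noise
    return (D > 0).astype(np.uint8)               # binarized palm lines image
```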
Another advantage of the MFRAT is its computational efficiency: it runs very fast, since only additions are involved according to Eq. (5). In contrast, the convolution between an image and Gabor filters involves a large number of multiplications. An experiment was conducted on a PC (Pentium 4 CPU at 2.4 GHz, Matlab 7.0) to compare their speeds: the execution times of feature extraction using the MFRAT (p = 14, W = 2) and Gabor filters (of size 35 × 35) are about 410 and 930 ms, respectively.
4.3. Comparison with Liu’s and Wu’s approaches
Liu et al. and Wu et al. also proposed two different approaches for palmprint recognition, which exploited all palm
lines. In Refs. [17] and [18], Liu et al. presented a novel wide
line detector using an isotropic nonlinear filter. This detector can
group pixels whose brightness is similar to the mask center’s
one into a weighted mask having similar brightness (WMSB).
Fig. 12. Palm lines detected from Figs. 3(a) and 6(c) using Wu’s approach.
The smaller the WMSB is, the larger the line response will be.
In fact, Liu’s approach is similar to the SUSAN corner detector to some extent [25], and actually is a segmentation based
approach. However, this approach cannot enhance the original
image before segmentation. Therefore, it may fail when the
palm lines are not clear. And it often incorrectly extracts the
small dark patches as a part of palm lines. Moreover, it cannot
detect the lines’ direction, which is an important feature to distinguish principal lines from wrinkles. Figs. 11(a) and (b) show
the palm lines detected from Figs. 3(a) and 6(c) using Liu’s
approach. It can be seen that its results contain many noises.
In Figs. 10(c) and (d), the palm lines can be clearly detected
by using our approach since the image was enhanced along the
lines’ directions during feature extraction stage.
Wu et al. treated palm lines as a kind of roof edge, and extracted them according to the zero-crossing points of the image's first-order derivative and the magnitude of the second derivative at the edge points [15,16]. Two 1-D Gaussian functions, $G_{\sigma_s}$ with variance $\sigma_s$ and $G_{\sigma_d}$ with variance $\sigma_d$, were used to smooth the image and to calculate its first- and second-order derivatives. Here, $\sigma_s$ and $\sigma_d$ are two important parameters controlling the smoothness and the width of the lines to be detected, and they were set to 1.8 and 0.5, respectively. However, as the authors pointed out in Ref. [16], it is impossible to extract all of the lines from various palmprints using fixed $\sigma_s$ and $\sigma_d$. Thin lines can be detected well with the selected parameters; two examples are given in Fig. 12, which depicts the lines detected from Figs. 3(a) and 6(c). With the same parameters, however, their approach may fail to extract wide lines. Fig. 13(a) is a palmprint image containing wide principal lines, and Fig. 13(b) depicts the magnitude of the second derivative detected from Fig. 13(a) using Wu's approach; obviously, the principal lines are unclear and difficult to extract. The directional energies detected from Fig. 13(a) using the MFRAT with sizes 9 × 9 and 17 × 17 are shown in Figs. 13(c) and (d), respectively. It can be seen that the directional energies of all the lines, both principal lines and wrinkles, were correctly computed. In Ref. [15], the authors used the same method to extract the principal lines for classification. However, some key points, such as the potential beginnings of the principal lines, must first be detected as prior knowledge before line detection; in contrast, our approach is independent of the image content. In addition, in order to handle rotation in the palmprint matching stage, Wu's approach rotates the palm lines image by a few degrees several times and merges all the rotated images using the logical "OR" operation, so as to construct a dilated image as the training image. Our proposed matching method, by contrast, directly handles slight rotations through the pixel-to-area comparison.

Fig. 13. Comparing Wu's approach with the proposed approach. (a) Original palmprint image containing wide principal lines. (b) Magnitude of the second derivative detected from (a) using Wu's approach. (c) Directional energies detected from (a) using the 9 × 9 MFRAT (N = 12 and W = 1). (d) Directional energies detected from (a) using the 17 × 17 MFRAT (N = 12 and W = 1).

Fig. 14. Experimental results of verification. (a) and (b) Genuine and impostor distributions of matching scores for Databases I and II. (c) and (d) FAR and FRR curves for Databases I and II.
Table 1
Experimental results near the cross-over points (rows marked *) in Fig. 14

(a) Results for 100 palmprints (Database I)

Threshold (R)    FAR (%)      FRR (%)
0.571            2.624        0
0.600            1.7037       0.0998
0.650            0.7238       0.3992
0.667*           0.4929       0.499
0.690            0.3024       1.2974
0.750            0.0635       6.487
0.800            9 × 10⁻³     15.6687
0.856            1 × 10⁻³     37.125

(b) Results for 386 palmprints (Database II)

Threshold (R)    FAR (%)      FRR (%)
0.509            7.0789       0
0.550            4.1695       0.0256
0.600            1.9899       0.1027
0.667*           0.565        0.5652
0.700            0.2865       1.7928
0.750            0.0893       7.3998
0.850            4 × 10⁻³     39.748
0.923            6 × 10⁻⁵     78.699
Fig. 15. ROC curves of the proposed approach, Palmcode, Wu’s approach,
and Liu’s approach.
4.4. Verification
Verification is a one-to-one comparison against a single
stored template, which answers the question of “whether the
person is whom he claims to be”. In the verification experiments, we set up two databases with different sizes. Database
I contains images from first 100 palmprints, and Database II
contains all images from 386 palmprints. The aim of setting
up two databases is to analyze the discriminability of principal lines in the databases with different sizes. In these two
databases, three samples of each palm captured in first session
were selected to construct a training set (or a template). Around
10 samples of each palm captured in the second session were
taken as the test set.
In the experiments, the statistical pairs of FRR and FAR were adopted to evaluate the performance of our approach. To obtain them, each test image was matched against all of the templates. If the test palmprint image and the template are from the same palm, the matching between them is regarded as a correct matching; an incorrect matching is defined in a similar manner. Since each template contains three palmprint images, each test image generates three scores, and the maximum of them is taken as the correct matching score. Similarly, when a test image is matched against a template from a different palm, three incorrect scores are calculated, and their maximum is taken as the incorrect matching score. As a result, the numbers of correct and incorrect matchings are 1002 and 99,198 for Database I, and 3892 and 1,498,420 for Database II, respectively.
Fig. 16. The principal lines from two people with similar structure, but with
dissimilar position.
Fig. 17. The principal lines from two people with similar structure and
position.
Figs. 14(a) and (b) show the distributions of the genuine and impostor matching scores obtained on the two databases, respectively. There are two distinct peaks in the distributions of the matching scores: one peak (located around 0.9) corresponds to the genuine matching scores, while the other (located around 0.2) corresponds to the impostor matching scores. These two peaks are widely separated, and the distribution curve of the genuine matching scores intersects very little with that of the impostor matching scores. Therefore, it can be concluded that the proposed approach discriminates palmprints very effectively.
Figs. 14(c) and (d) depict the corresponding FAR and FRR curves for Databases I and II, respectively. Denoting the matching threshold by R, the experimental results near the cross-over point of the FAR and FRR curves are tabulated in Table 1.
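For completeness, FAR/FRR pairs of the kind listed in Table 1 can be computed from the genuine and impostor score sets as sketched below; the threshold grid and the score arrays in the usage comment are placeholders:

```python
import numpy as np

def far_frr(genuine, impostor, thresholds):
    """FAR/FRR (in %) over a grid of thresholds R, as in Table 1."""
    far = np.array([(impostor >= R).mean() * 100 for R in thresholds])
    frr = np.array([(genuine < R).mean() * 100 for R in thresholds])
    return far, frr

# the EER is read off where the two curves cross:
# R = np.linspace(0, 1, 1001)
# far, frr = far_frr(genuine_scores, impostor_scores, R)
# i = np.argmin(np.abs(far - frr)); print(R[i], far[i], frr[i])
```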
In Fig. 14, it can easily be seen that the matching score distributions and the FAR/FRR curves obtained on Databases I and
Fig. 18. Eight palmprint images from different people, which have similar principal lines.
Table 2
Matching scores among the different palmprint images in Fig. 18 (the two large values are marked *)

Figs   b        c        d        e        f        g        h
a      0.2143   0.2483   0.5310   0.4552   0.4414   0.5216   0.3310
b               0.6857*  0.3071   0.3000   0.0929   0.1500   0.2000
c                        0.1918   0.2767   0.1757   0.2573   0.2988
d                                 0.4052   0.7230*  0.5882   0.4641
e                                          0.3851   0.3711   0.4654
f                                                   0.3243   0.5405
g                                                            0.3720
II, are similar. This demonstrates that the performance of the proposed approach is stable across databases of different sizes. When R is set to 0.667, the equal error rates (EER), where FAR equals FRR, obtained on Databases I and II are about 0.49% and 0.565%, respectively. The EER on Database II is obviously larger than that on Database I. A reasonable explanation for this phenomenon is that in a larger database there is a higher probability that a palmprint encounters other palmprints with similar principal lines.
Further, the results of our approach were compared with those of other approaches: Palmcode [1], Wu's approach [16], and Liu's approach [17]. These approaches were all evaluated on Database II using the same training and test sets. Fig. 15 depicts the corresponding receiver operating characteristic (ROC) curves, i.e., plots of the false rejection rate against the false acceptance rate. In this figure, the EERs of Palmcode, Liu's approach, and Wu's approach are 0.59%, 0.4%, and 0.44%, respectively. It can be seen that the EER of our approach is a little better than that of Palmcode, a classical texture based approach. However, the wrinkles may also possess discriminative power for recognition; thus, the EERs of Liu's approach and Wu's approach are lower than that of the proposed approach.
4.5. Speed
The experiments for the proposed approach were conducted
on a personal computer with an Intel Pentium 4 processor
(2.4 GHz) and 256 MB RAM configured with Microsoft Windows XP and Matlab 7.0 with image processing toolbox. The
execution time for the preprocessing, feature extraction, and
matching are 285, 410, and 1.8 ms, respectively. The total execution time is about 0.7 s, which shows that this method is fast
enough for real-time verification. In fact, we have not completely optimized the program codes, so it is possible for us to
further reduce the computation time.
5. Discussions
In the verification experiments, the EER of our approach is even better than that of Palmcode, so we can conclude that the discriminability of principal lines is also strong. In the past, many researchers claimed that the discriminability of principal lines was limited due to their similarity among different people; obviously, this conclusion does not hold.
In fact, the similarities of the principal lines among different people include structure similarity and position similarity. For example, the principal lines of Figs. 16(a) and (b) have similar structures, but their positions are dissimilar, as shown in Fig. 16(c); as a result, the matching score between them is only 0.1635, which is a small value. In the past, researchers only paid attention to the structure similarity, but ignored the position dissimilarity of the principal lines among different people. Therefore, they drew an incorrect conclusion about the discriminability of principal lines.
On the other hand, there are certainly a few people whose principal lines are very similar; for instance, the matching score between two different people may exceed 0.9. Furthermore, in Table 1, even when the threshold is set to 0.923, the FAR is still not zero. The principal lines of Figs. 17(a) and (b) have similar structures and positions (see Fig. 17(c)), so the matching score between them is as high as 0.906.
Fig. 18 illustrates eight palmprint images from different people that have similar principal lines. Table 2 shows the matching scores among them. It can be seen that among all 28 values, only two (marked with an asterisk) are large.
6. Conclusions
In this paper, we have proposed a novel palmprint verification approach based on principal lines, and analyzed the discriminability of principal lines. The theoretical analyses and experimental results show that the proposed MFRAT can extract principal lines from complex palmprint images effectively and reliably, and that the pixel-to-area comparison is robust to slight rotations and translations. From the verification results, it can be concluded that the discriminability of principal lines is also strong. In the past, many researchers claimed that the discriminability of principal lines was limited; however, they only paid attention to the structure similarity and ignored the position dissimilarity of principal lines among different people.
Compared to other approaches, our proposed approach uses only principal lines and may therefore miss other useful features. In future work, we shall study how to combine principal lines, texture, and other features into a multiple features based verification scheme, from which a better PVS performance can be expected. We shall also investigate how to use principal lines to design palmprint classification systems or fast palmprint retrieval schemes.
Acknowledgments
The authors would like to express their sincere thanks to the Biometric Research Centre at the Hong Kong Polytechnic University for providing us with the PolyU Palmprint Database. They would also like to thank Dr. Zhenan Sun from the Institute of Automation, CAS, China, and Dr. Li Liu from the Hong Kong Polytechnic University for their kind help. The authors are also most grateful for the constructive advice and comments from the anonymous reviewers.
This work was supported by grants from the National Science Foundation of China (Nos. 60472111 and 60705007), a grant from the National Basic Research Program of China (973 Program, No. 2007CB311002), and grants from the National High Technology Research and Development Program of China (863 Program, No. 2007AA01Z167).
References
[1] D. Zhang, A. Kong, J. You, M. Wong, Online palmprint identification,
IEEE Trans. Pattern Anal. Mach. Intell. 25 (9) (2003) 1041–1050.
[2] A.K. Jain, L. Hong, R. Bolle, Online fingerprint verification, IEEE Trans.
Pattern Anal. Mach. Intell. 19 (4) (1997) 302–314.
[3] L. Ma, T.N. Tan, Y.H. Wang, D.X. Zhang, Personal identification based
on iris texture analysis, IEEE Trans. Pattern Anal. Mach. Intell. 25 (12)
(2003) 1519–1533.
[4] A. Kong, D. Zhang, M. Kamel, Palmprint identification using feature-level fusion, Pattern Recognition 39 (2006) 478–487.
[5] T. Connie, A.T.B. Jin, M.G.K. On, D.N.C. Ling, An automated palmprint
recognition system, Image Vision Comput. 23 (5) (2005) 501–515.
[6] S. Ribaric, I. Fratric, A biometric identification system based on
eigenpalm and eigenfinger features, IEEE Trans. Pattern Anal. Mach.
Intell. 27 (11) (2005) 1698–1709.
[7] J. Yang, D. Zhang, J.Y. Yang, B. Niu, Globally maximizing locally
minimizing: Unsupervised Discriminant Projection with applications to
face and palm Biometrics, IEEE Trans. Pattern Anal. Mach. Intell. 29
(4) (2007) 650–664.
[8] L. Shang, D.S. Huang, J.X. Du, C.H. Zheng, Palmprint recognition
using FastICA algorithm and radial basis probabilistic neural network,
Neurocomputing 69 (13–15) (2006) 1782–1786.
[9] A. Kumar, D. Zhang, Personal authentication using multiple palmprint
representation, Pattern Recognition 38 (10) (2005) 1695–1704.
[10] Z.N. Sun, T.N. Tan, Y.H. Wang, S.Z. Li, Ordinal palmprint representation
for personal identification, in: Proceedings of IEEE International
Conference on Computer Vision and Pattern Recognition, 2005,
pp. 279–284.
[11] A. Kong, D. Zhang, Competitive coding scheme for palmprint
verification, in: Proceedings of the 17th ICPR, vol. 1, 2004, pp. 520–523.
[12] L. Zhang, D. Zhang, Characterization of palmprints by wavelet signatures
via directional context modeling, IEEE Trans. Syst. Man Cybern. B. 34
(3) (2004) 1335–1347.
[13] C.C. Han, H.L. Cheng, C.L. Lin, K.C. Fan, Personal authentication using
palmprint features, Pattern Recognition 36 (2) (2003) 371–381.
[14] C.L. Lin, T.C. Chuang, K.C. Fan, Palmprint verification using
hierarchical decomposition, Pattern Recognition 38 (12) (2005)
2639–2652.
[15] X.Q. Wu, D. Zhang, K.Q. Wang, B. Huang, Palmprint classification
using principal lines, Pattern Recognition 37 (10) (2004) 1987–1998.
[16] X.Q. Wu, D. Zhang, K.Q. Wang, Palm line extraction and matching for
personal authentication, IEEE Trans. Syst. Man Cybern. A 36 (5) (2006)
978–987.
[17] L. Liu, D. Zhang, A novel palm-line detector, in: Proceedings of the 5th
AVBPA, 2005, pp. 563–571.
[18] L. Liu, D. Zhang, J. You, Detecting wide lines using isotropic nonlinear filtering, IEEE Trans. Image Process. 16 (6) (2007) 1584–1595.
[19] PolyU Palmprint Database, http://www4.comp.polyu.edu.hk/∼biometrics/ .
[20] J. Radon, Über die bestimmung von funktionen durch ihre integralwerte
längs gewisser mannigfaltigkeiten, Berichte Sächsische Akademie der
Wissenschafter, Leipzig, Math.-Phys. Kl (69) (1917) 262–267.
[21] A.C. Copeland, G. Ravichandran, M.M. Trivedi, Localized radon
transform-based detection of ship wakes in SAR images, IEEE Trans.
Geosci. Remote Sensing 33 (1) (1995) 35–45.
[22] F. Matus, J. Flusser, Image representations via a finite radon transform, IEEE Trans. Pattern Anal. Mach. Intell. 15 (10) (1993) 996–1006.
[23] M.N. Do, M. Vetterli, The finite ridgelet transform for image representation, IEEE Trans. Image Process. 12 (1) (2003) 16–28.
[24] J.W. Yang, L.F. Liu, T.Z. Jiang, A modified Gabor filter design method
for fingerprint image enhancement, Pattern Recognition Lett. 24 (12)
(2003) 1805–1817.
[25] S.M. Smith, J.M. Brady, SUSAN—A new approach to low level image
processing, Int. J. Comput. Vision 23 (1) (1997) 45–78.
About the Author—DE-SHUANG HUANG received the B.Sc. degree in electronic engineering from the Institute of Electronic Engineering, Hefei, China, in 1986, the M.Sc. degree in electronic engineering from the National Defense University of Science and Technology, Changsha, China, in 1989, and the Ph.D. degree in electronic engineering from Xidian University, Xi'an, China, in 1993. From 1993 to 1997, he was a Postdoctoral Fellow at the Beijing Institute of Technology, Beijing, China, and at the National Key Laboratory of Pattern Recognition, Chinese Academy of Sciences (CAS), Beijing. In 2000, he became a professor and joined the Institute of Intelligent Machines, CAS, as a member of the Hundred Talents Program of CAS. He has published over 190 papers and, in 1996, published a book entitled Systematic Theory of Neural Networks for Pattern Recognition. His research interests include pattern recognition, machine learning, bioinformatics, and image processing.
About the Author—WEI JIA received the B.Sc. degree in informatics from Central China Normal University, Wuhan, China, in 1998, and the M.Sc. degree in computer science from Hefei University of Technology, Hefei, China, in 2004. He is currently a Ph.D. student in the Department of Automation at the University of Science and Technology of China. His research interests include palmprint recognition, pattern recognition, and image processing.
About the Author—DAVID ZHANG graduated in computer science from Peking University. He received his M.Sc. in computer science in 1982 and his Ph.D. in 1985 from the Harbin Institute of Technology (HIT). From 1986 to 1988 he was a Postdoctoral Fellow at Tsinghua University and then an Associate Professor at the Academia Sinica, Beijing. In 1994 he received his second Ph.D., in electrical and computer engineering, from the University of Waterloo, Ontario, Canada. Currently, he is a Chair Professor at the Hong Kong Polytechnic University, where he is the Founding Director of the Biometrics Technology Centre (UGC/CRC) supported by the Hong Kong SAR Government. He also serves as Adjunct Professor at Tsinghua University, Shanghai Jiao Tong University, Beihang University, the Harbin Institute of Technology, and the University of Waterloo. He is the Founder and Editor-in-Chief of the International Journal of Image and Graphics (IJIG); Book Editor of the Springer International Series on Biometrics (KISB); Organizer of the International Conference on Biometrics Authentication (ICBA); Associate Editor of more than 10 international journals, including IEEE Trans. on SMC-A/SMC-C and Pattern Recognition; Technical Committee Chair of IEEE CIS; and the author of more than 10 books and 160 journal papers. Professor Zhang is a Croucher Senior Research Fellow, a Distinguished Speaker of the IEEE Computer Society, and a Fellow of the International Association for Pattern Recognition (IAPR).