Implementation of Reliable Open Source IRIS Recognition System

Dhananjay Ikhar^1, Vishwas Deshpande^2 & Sachin Untawale^3

^1,^3 Dept. of Mechanical Engineering, Datta Meghe Institute of Engineering, Technology & Research, Wardha
^2 Ramdeobaba College of Engineering & Management, Nagpur
E-mail : [email protected], [email protected], [email protected]


Abstract - RELIABLE automatic recognition of persons has long been an attractive goal. As in all pattern recognition problems, the key issue is the relation between inter-class and intra-class variability: objects can be reliably classified only if the variability among different instances of a given class is less than the variability between different classes. The objective of this paper is to implement an open-source iris recognition system in order to verify the claimed performance of the technology. The development tool used will be MATLAB, and the emphasis will be only on the software for performing recognition, not on the hardware for capturing an eye image. A reliable application development approach will be employed in order to produce results quickly; MATLAB provides an excellent environment for this, with its image processing toolbox. To test the system, a database of 756 grayscale eye images, courtesy of the Chinese Academy of Sciences - Institute of Automation (CASIA), is used. The system is composed of a number of sub-systems, which correspond to the stages of iris recognition: image acquisition, segmentation, normalization and feature encoding. The input to the system will be an eye image, and the output will be an iris template, which will provide a mathematical representation of the iris region. In conclusion, the objectives in designing the recognition system are: study of different biometrics and their features; study of different recognition systems and their steps; selection of a simple and efficient recognition algorithm for implementation; selection of a fast and efficient tool for processing; and application of the implemented algorithm to different databases to find the performance factors.

The iris is an externally visible yet well protected organ whose unique epigenetic pattern remains stable throughout adult life. These characteristics make it very attractive for use as a biometric for identifying individuals. Image processing techniques can be employed to extract the unique iris pattern from a digitized image of the eye and encode it into a biometric template, which can be stored in a database. This biometric template contains an objective mathematical representation of the unique information stored in the iris, and allows comparisons to be made between templates. When a subject wishes to be identified by an iris recognition system, their eye is first photographed, and a template is then created for their iris region. This template is compared with the other templates stored in the database until either a matching template is found and the subject is identified, or no match is found and the subject remains unidentified.

So the basis of every biometric trait is to take the input signal/image, apply some algorithm, and extract the prominent features for person identification/verification. In the identification case, the system is trained with the patterns of several persons; for each person, a template is calculated in the training stage. A pattern that is to be identified is matched against every known template. In the verification case, a person's identity is claimed a priori, and the pattern to be verified is compared only with that person's individual template. Most biometric systems allow two modes of operation: an enrolment mode for adding templates to the database, and an identification mode, in which a template is created for an individual and a match is then searched for in the database of pre-enrolled templates. The following figure differentiates the enrolment and identification processes clearly.
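The two operating modes described above can be sketched in code. The paper's pipeline runs in MATLAB; the Python outline below only mirrors the described flow, and the class, function names, and toy "template" are hypothetical stand-ins, not the paper's implementation.

```python
# Hypothetical sketch of the two modes: enrolment stores a template in the
# database; identification searches every stored template for the best match.

def make_template(eye_image):
    # Stand-in for segmentation + normalization + feature encoding.
    return tuple(eye_image)

def template_distance(t1, t2):
    # Fraction of positions where the two templates disagree.
    return sum(a != b for a, b in zip(t1, t2)) / len(t1)

class BiometricSystem:
    def __init__(self, threshold=0.35):
        self.database = {}          # person id -> enrolled template
        self.threshold = threshold  # accept a match below this distance

    def enrol(self, person_id, eye_image):
        self.database[person_id] = make_template(eye_image)

    def identify(self, eye_image):
        probe = make_template(eye_image)
        best_id, best_dist = None, 1.0
        for person_id, template in self.database.items():
            d = template_distance(probe, template)
            if d < best_dist:
                best_id, best_dist = person_id, d
        # The subject remains unidentified if even the best match is weak.
        return best_id if best_dist <= self.threshold else None
```

Enrolment adds a template to the database; identification either returns the matching identity or None when no stored template is close enough.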

Index Terms - Biometrics, Iris recognition, Iris image quality, Fingerprint, Normalisation

I. INTRODUCTION

THE ANTICIPATED large-scale applications of biometric technologies such as iris recognition are driving innovations at all levels, ranging from sensors to user interfaces to algorithms and decision theory. At the same time as these innovations, and possibly even outpacing them, the demands on the technology are growing.

ISSN : 2319 3182, Volume-2, Issue-4, 2013
International Journal on Theoretical and Applied Research in Mechanical Engineering (IJTARME)

II. IMPLEMENTATION

Implementation of a good image acquisition system is very difficult, because the system is expected to obtain a noise-free image; this depends on the surrounding light intensity, the lighting used, the camera resolution, and the distance between the camera and the user's eye. An IR LED is more commonly used as the light source than a visible LED, because visible light affects the human eye. In this work, CASIA database eye images are used, which include both noise-free and noisy images; they are taken from the Centre of Biometric and Security Research, CASIA Iris Image Database.

Fig. 2. (a) Selected iris circle (b) Sharpened iris (c) Iris part with eyelashes
A. Image Segmentation

In pupil detection, the iris image is converted into grayscale to remove the effect of illumination. As the pupil is the largest black area in the intensity image, its edges can be detected easily from the binarized image by applying a suitable threshold to the intensity image. Thus the first step to separate out the pupil is to compute the histogram of the input image, from which the threshold value for the pupil is obtained; then edge detection is applied. Once the edge of the pupil is found, the centre coordinates and radius can easily be found by the following algorithm: (A) find the largest and smallest values on both the x and y axes; (B) adding the two x-axis values and dividing by two gives the x centre point; (C) similarly, adding the two y-axis values and dividing by two gives the y centre point; (D) subtracting the minimum value from the maximum and dividing by two gives the radius of the pupil circle.
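Steps (A)-(D) above can be sketched as follows. The paper works in MATLAB; this is an illustrative Python version, and the input list of pupil-edge pixel coordinates is hypothetical data.

```python
# Sketch of steps (A)-(D): estimate the pupil circle's centre and radius
# from the extreme coordinates of its detected edge pixels.

def pupil_circle(edge_points):
    xs = [x for x, y in edge_points]   # step (A): extremes on both axes
    ys = [y for x, y in edge_points]
    x_center = (max(xs) + min(xs)) / 2  # step (B)
    y_center = (max(ys) + min(ys)) / 2  # step (C)
    radius = (max(xs) - min(xs)) / 2    # step (D)
    return x_center, y_center, radius

# Four edge points of a circle centred at (50, 40) with radius 10:
points = [(40, 40), (60, 40), (50, 30), (50, 50)]
print(pupil_circle(points))  # (50.0, 40.0, 10.0)
```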

B. Normalization

The Daugman rubber sheet model remaps each point within the iris region to a pair of polar coordinates (r, θ), where r lies on the interval [0, 1] and θ is the angle in [0, 2π].

Fig. 3. Daugman's rubber sheet model

The remapping of the iris region from (x, y) Cartesian coordinates to the normalized non-concentric polar representation is modeled as

I(x(r, θ), y(r, θ)) → I(r, θ)
with
x(r, θ) = (1 − r)·x_p(θ) + r·x_l(θ)
y(r, θ) = (1 − r)·y_p(θ) + r·y_l(θ)    ..(1)
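Equation (1) linearly interpolates each sample between the pupil boundary (x_p, y_p) and the iris boundary (x_l, y_l). The toy sketch below assumes, for simplicity only, that both boundaries are circles around a common centre; the rubber sheet model does not require this in general, and the function name is hypothetical.

```python
import math

# Sketch of equation (1): remap a point at normalized radius r in [0, 1]
# and angle theta by interpolating between pupil and iris boundaries.

def remap(r, theta, cx, cy, r_pupil, r_iris):
    x_p = cx + r_pupil * math.cos(theta)   # pupil boundary point
    y_p = cy + r_pupil * math.sin(theta)
    x_l = cx + r_iris * math.cos(theta)    # iris boundary point
    y_l = cy + r_iris * math.sin(theta)
    x = (1 - r) * x_p + r * x_l            # x(r, theta)
    y = (1 - r) * y_p + r * y_l            # y(r, theta)
    return x, y

# r = 0 lands on the pupil boundary, r = 1 on the iris boundary:
print(remap(0.0, 0.0, 100, 100, 30, 60))  # (130.0, 100.0)
print(remap(1.0, 0.0, 100, 100, 30, 60))  # (160.0, 100.0)
```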

where I(x, y) is the iris region image, (x, y) are the original Cartesian coordinates, (r, θ) are the corresponding normalized polar coordinates, and x_p(θ), y_p(θ) and x_l(θ), y_l(θ) are the coordinates of the pupil and iris boundaries along the θ direction.

Fig. 1. (a) Canny edge image (b) Only pupil (c) Pupil ring

Eyelashes and eyelids always affect the performance of the system. The eyelashes are treated as belonging to two types: separable eyelashes, which are isolated in the image, and multiple eyelashes, which are bunched together and overlap in the eye image. In this work the iris circle diameter is assumed to be two times the pupil diameter, and the noise (eyelashes and eyelids) is avoided by considering only the lower 180° portion of the iris circle. Hence after segmentation a complete iris part is separated out, as shown in Fig. 2.

The rubber sheet model takes into account pupil dilation and size inconsistencies in order to produce a normalized representation with constant dimensions. In this way the iris region is modeled as a flexible rubber sheet anchored at the iris boundary, with the pupil centre as the reference point. Even though the homogeneous rubber sheet model accounts for pupil dilation, imaging distance and non-concentric pupil displacement, it does not compensate for rotational inconsistencies. In the Daugman system, rotation is accounted for during matching by shifting the iris templates in the θ

direction until the two iris templates are aligned. For normalization of the iris regions, a technique based on Daugman's rubber sheet model is employed: the centre of the pupil is considered as the reference point, and radial vectors pass through the iris region, as shown in the figure.

Cartesian coordinates of data points from the radial and angular position in the normalized pattern. From the doughnut-shaped iris region, normalization produces a 2D array whose horizontal dimension is the angular resolution and whose vertical dimension is the radial resolution. Another 2D array is created for marking the reflections, eyelashes, and eyelids detected in the segmentation stage. In order to prevent non-iris data from corrupting the normalized representation, data points which occur along the pupil border or the iris border are discarded. As in Daugman's rubber sheet model, removal of rotational inconsistencies is performed at the matching stage, and is discussed in the matching section.

Fig.4. Normalization

Fig.5. Normalized iris part

A number of data points are selected along each radial line, and this number is defined as the radial resolution. The number of radial lines going around the iris region is defined as the angular resolution. Since the pupil can be non-concentric with the iris, a remapping formula is needed to rescale points depending on the angle around the circle. This is given by

C. Feature Encoding

1. Radial and Circular Feature Encoding


This approach is based on edge detection. Edges are detected in the input image using the Canny edge detector. After edge detection the image is converted to binary format, in which white pixels are present on edges and black pixels elsewhere. The number of white pixels in the radial direction and on circles of different radii gives important information about iris patterns. The normalized polar iris image will contain only white and black pixels, as it is obtained from the edge-detected input image. Features from normalized images are extracted in two ways: (a) radially and (b) circularly.

r' = √α·β ± √(α·β² − α + r_I²)
with
α = o_x² + o_y²
β = cos( π − arctan(o_y / o_x) − θ )    ..(2)

where the displacement of the centre of the pupil relative to the centre of the iris is given by o_x, o_y; r' is the distance between the edge of the pupil and the edge of the iris at the angle θ around the region; and r_I is the radius of the iris. The remapping formula first gives the radius of the iris region doughnut as a function of the angle θ. A constant number of points is chosen along each radial line, so that a constant number of radial data points is taken, irrespective of how narrow or wide the radius is at a particular angle. The normalized pattern was created by backtracking to find the

(a) Radial features

Fig. 6. Feature extraction in radial direction
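The quantity r' in equation (2) is the distance from the pupil centre to the iris boundary at angle θ when the iris circle of radius r_I is centred at an offset (o_x, o_y). The sketch below (hypothetical function name) computes this distance directly from the circle geometry rather than via the arctan form, so it can be checked against simple cases.

```python
import math

# Distance from the pupil centre to the displaced iris boundary at angle
# theta, the quantity r' in equation (2).

def iris_edge_distance(theta, ox, oy, r_iris):
    # Projection of the centre offset onto the ray direction.
    proj = ox * math.cos(theta) + oy * math.sin(theta)
    alpha = ox * ox + oy * oy   # squared offset, as in equation (2)
    return proj + math.sqrt(proj * proj - alpha + r_iris * r_iris)

# With no offset the distance is simply the iris radius at every angle:
print(iris_edge_distance(0.7, 0.0, 0.0, 60.0))  # 60.0
# Iris centre shifted 5 units along the theta = 0 ray:
print(iris_edge_distance(0.0, 5.0, 0.0, 60.0))  # 65.0
```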

In the iris image, the value of the radial feature at a particular angle θ is the number of white pixels along the radial direction. If we define

S(r, θ) = 1 if iris_polar_image[r][θ] = WHITE
S(r, θ) = 0 if iris_polar_image[r][θ] = BLACK

3. The total number of white pixels is stored.

4. Steps 1 to 3 are followed for both the query image and the database image.

5. For matching, the two images are compared by subtracting the number of white pixels in the database image from the number of white pixels in the query image.
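The comparison in the matching step above can be sketched as follows; the binary images are hypothetical toy data, represented as lists of rows of 0/1 pixels.

```python
# Compare two binary images by the absolute difference of their
# white-pixel counts, as in the matching step described above.

def white_pixel_count(binary_image):
    return sum(sum(row) for row in binary_image)

def match_score(query_image, database_image):
    # Smaller is more similar under this simple measure.
    return abs(white_pixel_count(query_image) - white_pixel_count(database_image))

query  = [[1, 0, 1], [0, 1, 0]]   # 3 white pixels
stored = [[1, 1, 1], [0, 1, 0]]   # 4 white pixels
print(match_score(query, stored))  # 1
```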

The feature at angle θ will be

F_θ = Σ_r S(r, θ)    ..(3)

where the sum runs over all radial positions r.

(b) Circular features

Fig. 7. Feature extraction in circular direction

In the iris image, the value of the circular feature at a particular radius is the sum of the white pixels along the circle of that radius. Keeping the meaning of S(r, θ) the same, the feature at a particular radius r is given as

F_r = Σ_{θ=0}^{2π} S(r, θ)    ..(4)

Fig. 9. Circular feature
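Equations (3) and (4) can be sketched directly: the radial feature counts white pixels down one column (fixed angle) of the binary polar image, and the circular feature counts white pixels along one row (fixed radius). The polar image below is hypothetical toy data, indexed as image[r][θ] with 0/1 pixels.

```python
# Sketch of equations (3) and (4) on a binary polar iris image.

def radial_feature(polar, theta):
    # eq. (3): white pixels along the radial direction at angle theta
    return sum(polar[r][theta] for r in range(len(polar)))

def circular_feature(polar, r):
    # eq. (4): white pixels along the circle of radius r
    return sum(polar[r][t] for t in range(len(polar[r])))

polar = [
    [1, 0, 1, 0],   # r = 0
    [0, 1, 1, 0],   # r = 1
    [1, 1, 0, 1],   # r = 2
]
print(radial_feature(polar, 2))    # 2
print(circular_feature(polar, 2))  # 3
```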

The iris code is considered as a sequence of radial and circular features. In this method the number of white pixels in the radial and circular directions is measured, which then constitutes the code for that particular eye image. It is obtained by the following steps.

1. The image in polar form is converted into binary form.

Fig. 10. Radial features

Fig. 8. Normalized image converted into binary

Fig. 11. (a) Query image (b) Retrieved image

2. The number of white pixels in the radial and circular directions is measured.
White pixels = 1059
Black pixels = 7041

Fig. 12. Normalized image


D. Matching

The most commonly used metric for matching the two bit strings generated from the query image and the template stored in the database is the Hamming distance. It is a simple XOR operation whose result is zero when both strings have the same bits. Although, in theory, two iris templates generated from the same iris will have a Hamming distance of 0.0, in practice this will not occur: normalization is not perfect, and some noise goes undetected, so some variation will be present when comparing two intra-class iris templates.
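The metric above can be sketched as a fractional Hamming distance via XOR; the bit strings below are hypothetical toy templates, not CASIA-derived codes.

```python
# Fractional Hamming distance between two equal-length bit strings.
# 0.0 means identical codes; intra-class comparisons in practice land
# near, but not at, zero.

def hamming_distance(bits_a, bits_b):
    assert len(bits_a) == len(bits_b)
    disagreeing = sum(a ^ b for a, b in zip(bits_a, bits_b))  # XOR per bit
    return disagreeing / len(bits_a)

template = [1, 0, 1, 1, 0, 0, 1, 0]
query    = [1, 0, 0, 1, 0, 1, 1, 0]
print(hamming_distance(template, template))  # 0.0
print(hamming_distance(template, query))     # 0.25
```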
III. PERFORMANCE EVALUATION

The performance of the iris recognition system as a whole is examined. Tests were carried out to find the best separation, so that the false match and false accept rates are minimized, and to confirm that iris recognition can perform accurately as a biometric for the recognition of individuals. As well as confirming that the system provides accurate recognition, experiments were also conducted to confirm the uniqueness of human iris patterns by deducing the number of degrees of freedom present in the iris template representation. The points which decide the performance of the system are: 1. False Acceptance Rate (FAR); 2. False Rejection Rate (FRR); 3. Equal Error Rate (EER); 4. Accuracy; 5. Decidability; 6. FAR and FRR according to Hamming distance.

1. False Acceptance Rate: the fraction of impostor patterns that are falsely accepted, out of the total number of impostor comparisons, is called the False Acceptance Rate (FAR).

FAR = (number of times a different person is matched / number of comparisons between different persons) × 100    ..(5)

2. False Rejection Rate: the fraction of client patterns that are falsely rejected, out of the total number of client patterns, is called the False Rejection Rate (FRR).

FRR = (number of times the same person is rejected / number of comparisons between the same person) × 100    ..(6)

Decidability

The key objective of a recognition system is to achieve a distinct separation of intra-class and inter-class Hamming distances. The separation between the inter-class and intra-class Hamming distance distributions can be measured by the decidability metric; the higher the value of decidability, the better the performance of the system. In effect it decides the separation of FAR and FRR. Note that if the score distributions overlap, the FAR and FRR curves intersect at a certain point. The value of the FAR and the FRR at this point, which is of course the same for both, is called the Equal Error Rate (EER); in the graphs it is obtained for image 3.bmp, where the HD is the same for FAR and FRR (0.32778).

Performance Evaluation for Circular and Radial Features

Fig. 13. Database images vs. HD

(Bar chart: database images 1.bmp-16.bmp vs. absolute difference.)
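Equations (5) and (6) can be estimated at a chosen Hamming-distance threshold from lists of comparison scores. The score lists below are hypothetical illustrative data, not the paper's measurements.

```python
# Sketch of equations (5) and (6): estimate FAR and FRR at a given
# Hamming-distance threshold. Genuine scores come from same-person
# comparisons, impostor scores from different-person comparisons.

def far(impostor_scores, threshold):
    accepted = sum(1 for s in impostor_scores if s <= threshold)
    return 100.0 * accepted / len(impostor_scores)   # eq. (5)

def frr(genuine_scores, threshold):
    rejected = sum(1 for s in genuine_scores if s > threshold)
    return 100.0 * rejected / len(genuine_scores)    # eq. (6)

genuine  = [0.10, 0.22, 0.31, 0.28]   # intra-class distances (toy data)
impostor = [0.45, 0.48, 0.30, 0.51]   # inter-class distances (toy data)

print(far(impostor, 0.33))  # 25.0  (one impostor score at 0.30 accepted)
print(frr(genuine, 0.33))   # 0.0   (no genuine score above 0.33)
```

Sweeping the threshold and finding where the two curves cross gives the EER described above.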

V. REFERENCES


[1] J. G. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 11, 1993, pp. 1148-1161.


[2] W. W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform," IEEE Trans. Signal Processing, vol. 46, no. 4, 1998, pp. 1185-1188.

Fig. 16. Graph of decidability


IV. CONCLUSION AND FUTURE SCOPE

[3] Li Ma and T. Tan, "Personal Identification Based on Iris Texture Analysis," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 12, 2003.

The iris recognition methods proposed in this work employ iris feature extraction using a cumulative-sum-based change analysis and a radial and circular method. In order to extract iris features using the cumulative-sum-based change analysis, a normalized iris image is divided into basic cells. Iris codes for these cells are generated by the proposed code generation algorithm, which uses the cumulative sums of each cell. The method is relatively simple and efficient compared to other existing methods. Experimental results show that the implemented method has good recognition performance and speed. In future, to make the system more robust and reliable, experiments on a larger iris database are needed. The performance of the second method is not encouraging, because the absolute difference between two templates is considered for matching. Also, this algorithm is based on the result of edge detection, and edge detection algorithms are not robust to poor illumination; some edges cannot be detected if the image is taken in low-illumination conditions.

[4] J. Matey, K. Hanna, R. Kolczynski, D. LoIacono, S. Mangru, O. Naroditsky, M. Tinker, T. Zappia, and W.-Y. Zhao, "Iris on the Move: Acquisition of Images for Iris Recognition in Less Constrained Environments," Proc. IEEE, vol. 94, no. 11, pp. 1936-1947, Nov. 2006.

[5] J. G. Daugman, "How Iris Recognition Works," IEEE Trans. Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, Jan. 2004.
