
AN EFFICIENT TECHNIQUE FOR IRIS RECOGNITION BASED ON ENI FEATURES


By K. LAVANYA*, K. SUBBA RAO**, C. NAGARAJU***

*,** Assistant Professor, Department of IT, L.B.R. College of Engineering, Mylavaram. *** Professor and Head, Department of IT, L.B.R. College of Engineering, Mylavaram.

ABSTRACT

Iris location estimation has been studied in numerous works in the literature, and previous research shows satisfactory results. However, in the presence of non-frontal faces, eye locators are not adequate to accurately locate the centers of the eyes. Iris location estimation techniques are able to deal with these conditions, hence they may be suited to enhancing accuracy. In this paper, a new method is proposed to obtain enhanced Iris location estimation. The method has three steps: (i) enhance the accuracy of the Iris location estimations, (ii) extend the operative range of the Iris locators with LBP, and (iii) improve the accuracy of the Iris location. These enhanced estimations are used to build a novel visual Iris estimation system.

Keywords: Iris Location, Morphological Transformations, LBP.

INTRODUCTION

Image-based stare estimation is important in many applications, spanning from Human Computer Interaction (HCI) to human behavior analysis. In applications where human activity is under observation from a static camera, the estimation of the visual stare provides important information about the interest of the subject; it is commonly used in control devices for disabled people [1], to analyze user attention while driving [2], and in other applications. The estimation of Iris location is often achieved using expensive, bulky or limiting hardware. Therefore, the problem is often simplified by considering the Iris center locations as the only feature for understanding the interest of a subject. There is an abundance of literature on this topic: recent surveys on Iris center location estimation can be found in [3].

The Iris location algorithms found in commercially available eye trackers share the problem of sensitivity to noise variations, and they require the user either to be equipped with a head-mounted device or to use a high-resolution camera combined with a chinrest to limit the allowed head movement. Furthermore, daylight applications are precluded by the use of active InfraRed (IR) illumination to obtain accurate Iris location through corneal reflection. Appearance-based methods that make use of standard low-resolution cameras are considered less invasive and are therefore more desirable in a large range of applications. The results reported for the appearance-based Iris location methods proposed in the literature [4-7] support the evidence that accurate appearance-based Iris center localization is becoming feasible and could serve as an enabling technology for a wide set of applications.

Iris location estimation often requires multiple cameras or complex face models that require accurate initialization. An online tracking algorithm employing adaptive view-based appearance models has been proposed; it provides drift-free tracking by maintaining a dynamic set of key frames with views of the head under various poses and registering the current frame to the previous frames and key frames. Although several Iris location methods have shown success in stare estimation, the underlying assumption of being able to estimate stare from the Iris location alone is valid in a limited number of scenarios [8, 9]. For instance, consider an environment composed of a target scene (a specific scene under analysis, such as a computer monitor, an advertising poster, a shelf, etc.) and a monitored area (the place from which the user looks at the target scene).

In such an environment, an Iris stare tracker alone would fail to determine which product on the shelf is being observed, while a head pose stare estimator alone would fail to finely control the cursor on a computer screen [10]. Although their compound tracking property promotes such combined methods over separate ones, their practical limitations and the need for improved accuracy make them less attractive in comparison to monocular low-resolution implementations. However, no study has been performed on the feasibility of an accurate appearance-only stare estimator that considers the Iris location factors. Therefore, our goal is to develop a system capable of analyzing the visual stare of a person starting from monocular images. This allows the movement of the user's Iris to be studied in a more natural manner than with traditional methods, as no additional equipment is needed to use the system. To this end, the authors propose a unified framework for visual stare estimation in which the Iris location information feeds a multimodal system that uses the Iris to adjust the estimated stare location.

1. Existing Method
In existing work, Iris detection is performed by finding rough Iris regions with feature-based methods and then using a template-based method to locate the centers of the Irises. At the same time, on the basis of these two regions, the sizes of the two Irises are evaluated and templates of the Irises are created according to the estimated sizes. Finally, the precise locations of the two iris centers are evaluated.

Algorithm for the existing method:
Step 1: Acquire the image.
Step 2: Calculate the gradient image. The Sobel operator is applied; its coefficients weight the image intensities to produce the gradient approximation.
Step 3: Apply horizontal and vertical projections to the gradient image. Let I(x, y) be an M x N grayscale image. The horizontal and vertical projections of the entire image, H(y) and V(x), are defined in equations (1) and (2):

H(y) = \sum_{x=1}^{M} I(x, y),  (1)
V(x) = \sum_{y=1}^{N} I(x, y).  (2)

The intersection between the vertical and horizontal projections is used to find the Iris region.
Step 4: Matching. For the Iris template, a matrix C representing the detected crossed areas is formed with 0s and 1s; there are M x N crossed areas detected in the Iris template, so the size of C is M x N. The autocorrelation matrix R is computed as shown in equation (3):

R(i, j) = \sum_{m=1}^{M} \sum_{n=1}^{N} C(m, n) C(m + i - M, n + j - N),  i = 1, ..., 2M - 1;  j = 1, ..., 2N - 1,  (3)

where C is taken as zero outside its support.
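As an illustration of the steps above, the following is a minimal sketch (not the authors' code) of the gradient projections of equations (1)-(2) and the autocorrelation of equation (3), using NumPy and SciPy; the function names and the peak-picking heuristic for the rough Iris region are assumptions made here for clarity.

import numpy as np
from scipy import ndimage, signal

def projection_profiles(image):
    """Sobel gradient magnitude and its projections, equations (1)-(2)."""
    img = np.asarray(image, dtype=float)
    gx = ndimage.sobel(img, axis=1)      # horizontal derivative
    gy = ndimage.sobel(img, axis=0)      # vertical derivative
    grad = np.hypot(gx, gy)              # gradient magnitude
    H = grad.sum(axis=1)                 # H(y): sum of each row over x
    V = grad.sum(axis=0)                 # V(x): sum of each column over y
    return grad, H, V

def rough_iris_region(image):
    """Rough Iris location as the intersection of the strongest
    horizontal and vertical projection peaks (an assumed heuristic)."""
    _, H, V = projection_profiles(image)
    return int(np.argmax(H)), int(np.argmax(V))   # (row y, column x)

def template_autocorrelation(C):
    """Autocorrelation matrix R of the crossed-area matrix C,
    equation (3); the result has size (2M-1) x (2N-1)."""
    return signal.correlate2d(C, C, mode='full')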

Figure 1. Architecture of the proposed method

2. Proposed Method
In this paper a novel Iris detection method is presented. It has the following steps: (i) image acquisition, (ii) normalization, (iii) reduction of image features using Principal Component Analysis, (iv) extraction of the Iris structure using ENI features, and (v) matching with an image template. The architecture of the proposed method is shown in Figure 1.

2.1 Image Acquisition
The first stage of any vision system is image acquisition. Its aim can be summarized as the transformation of optical data into an array of numerical data that can be manipulated by a computer, so that the overall aim of vision may be achieved. To achieve this, three major issues must be tackled: representation, transduction (or sensing), and digitization.

2.2 Normalization
The dynamic range of the gray-level values is normalized into 0-255 or 0-1. A constant-dimension image is extracted by the normalization method, and the normalized image is reconstructed by morphological transformation. Morphological operations reconstruct the original image by applying a structuring element that examines each pixel together with its neighbours. Morphological openings and closings are used to remove small face-image features of increasing size during Iris detection. Closing by reconstruction is expressed in terms of geodesic erosions. The geodesic erosion and, dually, the geodesic dilation applied to a marker image f under a mask g are

\varepsilon_g^{(1)}(f) = \varepsilon(f) \vee g,  \qquad  \delta_g^{(1)}(f) = \delta(f) \wedge g,

where \varepsilon and \delta are the erosion and the dilation by the neighbourhood of the origin. Both operators can be applied successively, as shown in equation (4):

\varepsilon_g^{(i)}(f) = \varepsilon_g^{(1)}\big(\varepsilon_g^{(i-1)}(f)\big),  \qquad  \delta_g^{(i)}(f) = \delta_g^{(1)}\big(\delta_g^{(i-1)}(f)\big).  (4)

Repeating the process until stability (i iterations), one realizes respectively a reconstruction by erosion, shown in equation (5), and a reconstruction by dilation, shown in equation (6):

R_g^{\varepsilon}(f) = \varepsilon_g^{(i)}(f),  (5)
R_g^{\delta}(f) = \delta_g^{(i)}(f),  (6)

which reach stability after a finite number of steps. Consequently, one can define the closing by reconstruction with a Structuring Element (SE), shown in equation (7):

\phi_R^{SE}(f) = R_f^{\varepsilon}\big[\delta_{SE}(f)\big].  (7)
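A minimal sketch of equations (5) and (7), assuming the grey-scale operators from scipy.ndimage and a flat 3 x 3 structuring element (the paper does not specify the SE):

import numpy as np
from scipy import ndimage

def reconstruction_by_erosion(marker, mask, size=3):
    """Iterate the geodesic erosion max(erosion(f), g) until
    stability, equation (5); marker >= mask is assumed."""
    f = marker.astype(float)
    g = mask.astype(float)
    while True:
        nxt = np.maximum(ndimage.grey_erosion(f, size=(size, size)), g)
        if np.array_equal(nxt, f):   # stability reached
            return nxt
        f = nxt

def closing_by_reconstruction(image, size=3):
    """Closing by reconstruction, equation (7): dilate by the SE,
    then reconstruct by erosion with the original image as mask."""
    img = image.astype(float)
    marker = ndimage.grey_dilation(img, size=(size, size))
    return reconstruction_by_erosion(marker, img, size)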

2.3 Feature Extraction
An image is a multi-dimensional feature vector. Principal Component Analysis (PCA) eliminates irrelevant features from the face image and extracts low-dimensional Iris information from the high-dimensional face image. The feature data set for a face image is an m x n matrix X = {x11, x12, ..., xmn}. PCA finds an optimal representation Y of the original data set X through a matrix B whose row vectors become the principal components of X. Geometrically, B is a linear transform that rotates and stretches X into Y, i.e. BX = Y. The following steps are used to determine B:
Step 1: Calculate the mean of each dimension using equation (8):

\mu_i = \frac{1}{n} \sum_{j=1}^{n} x_{ij}.  (8)

Step 2: Subtract the mean from each dimension to produce a zero-mean data set.
Step 3: Construct the matrix Y = \frac{1}{\sqrt{n-1}} X^{T} from the zero-mean data; the singular value decomposition of Y then yields the principal components, i.e. the rows of B.
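These three steps can be condensed as in the following sketch, which assumes the SVD route implied by Step 3, with the rows of X as dimensions and the columns as samples; the number of retained components is left to the caller and is not specified by the paper.

import numpy as np

def pca_basis(X, n_components=None):
    """Steps 1-3: mean-center each dimension (row), form
    Y = X^T / sqrt(n-1), and take the principal components
    (rows of B) from the SVD of Y."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)   # steps 1-2: zero-mean rows
    Y = Xc.T / np.sqrt(n - 1)                # step 3
    _, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return Vt if n_components is None else Vt[:n_components]

# Usage: project the centered data onto the principal axes (BX = Y):
#   B = pca_basis(X, 20)
#   Y_rep = B @ (X - X.mean(axis=1, keepdims=True))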

2.4 Structure Extraction by ENI Features
The ENI features (edge pixels, noisy pixels and interior pixels) are used to extract the Iris structure. ENI denotes the number of homogeneous pixels in a local neighbourhood, and it differs significantly for edge pixels, noisy pixels and interior pixels, as shown in Figure 2. The method redefines the controlling speed and the fidelity function to depend on ENI; according to this controlling function, the diffusion and fidelity processes at edge, noisy and interior pixels can be carried out selectively. Further, a class of second-order, improved, edge-preserving denoising is built on the controlling function in order to deal with random-valued impulse noise reliably. The performance is demonstrated on standard test images corrupted by random-valued impulse noise at various noise levels.

2.4.1 Algorithm
Step 1: Find the absolute difference between the central pixel p and each neighbourhood pixel q using equation (9):

B(p, q) = |A(p) - A(q)|.  (9)

Step 2: The gray-level values obtained in Step 1 are split into two groups by thresholding, I(q) = 1 if B(p, q) ≤ T and I(q) = 0 otherwise, where T is the gray value of the central pixel of the original window.
Step 3: Find the total number of object pixels, i.e. pixels where I(q) equals 1, using equation (10):

Y = \sum_{q} I(q).  (10)

Step 4: Calculate the rotation-invariant value through the controlling speed and fidelity function, shown in equation (11):

M = 1 - \frac{Y}{N},  (11)

where N is the number of pixels in A and Y is the number of object pixels.
Step 5: Calculate the edge-preserving pixel value R1.
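A sketch of Steps 1-4 for a single odd-sized window; the grouping rule of Step 2 is interpreted here as thresholding each difference against the central gray value T, which is an assumption. Step 5 corresponds to the R1 update sketched after equation (16).

import numpy as np

def eni_window(A):
    """Steps 1-4 of the ENI algorithm for one odd-sized window A."""
    A = np.asarray(A, dtype=float)
    c = A[A.shape[0] // 2, A.shape[1] // 2]   # central pixel p
    B = np.abs(A - c)                         # step 1: B(p,q) = |A(p)-A(q)|
    I = (B <= c).astype(int)                  # step 2: threshold T = central gray value (assumed)
    Y = int(I.sum())                          # step 3: number of object pixels
    M = 1.0 - Y / A.size                      # step 4: controlling function, equation (11)
    return Y, M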

Figure 2. a) A 5 x 5 window A with central pixel p = 22, b) B(p, q) = |A(p) - A(q)|, c) Z = 15

The edge-preserving pixel value R1 is computed as shown in equation (12):

R1 = A(i, j) + \frac{W_y^2 W_{xx} - 2 W_x W_y + W_x^2 W_{yy}}{W_x + W_y},  (12)

with

W_x = \frac{A(i, j) - A(i, j-1)}{2},  (13)
W_y = \frac{A(i+1, j) - A(i, j)}{2},  (14)
W_{xx} = A(i, j+1) + A(i, j-1) - 2A(i, j),  (15)
W_{yy} = A(i+1, j) + A(i-1, j) - 2A(i, j),  (16)

where Wx and Wy are the first-order gradients along the x and y directions, shown in equations (13) and (14), and Wxx and Wyy are the second-order gradients along the x and y directions, shown in equations (15) and (16). The R1 value of equation (12) preserves the edges in noisy images. In order to fill the Iris structure and remove irrelevant artifacts from the face image, a morphological reconstruction filling operation is applied.
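A direct transcription of equations (12)-(16) as a sketch over the interior pixels; the small epsilon guarding the printed Wx + Wy denominator against zero is an added assumption.

import numpy as np

def r1_update(A):
    """Edge-preserving value R1 per equations (12)-(16), computed
    at every interior pixel of the image A."""
    A = np.asarray(A, dtype=float)
    R1 = A.copy()
    eps = 1e-8                                             # assumed zero-division guard
    for i in range(1, A.shape[0] - 1):
        for j in range(1, A.shape[1] - 1):
            Wx = (A[i, j] - A[i, j - 1]) / 2.0             # (13)
            Wy = (A[i + 1, j] - A[i, j]) / 2.0             # (14)
            Wxx = A[i, j + 1] + A[i, j - 1] - 2 * A[i, j]  # (15)
            Wyy = A[i + 1, j] + A[i - 1, j] - 2 * A[i, j]  # (16)
            num = Wy**2 * Wxx - 2 * Wx * Wy + Wx**2 * Wyy  # numerator of (12)
            R1[i, j] = A[i, j] + num / (Wx + Wy + eps)     # (12)
    return R1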

2.5 Matching
After Iris detection is complete, matching is performed with the Jaccard coefficient to estimate accuracy. The Jaccard coefficient is commonly used for handling images with asymmetric attributes, and it was found to give the lowest error rate compared with seven well-known similarity measures. It is defined in equation (17):

Jaccard coefficient = \frac{f_{11}}{f_{11} + f_{10} + f_{01}},  (17)

where f11 is the number of features present in both A and B, f10 the number of features present in A but not in B, and f01 the number of features present in B but not in A.
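A small sketch of equation (17) for two binary feature maps; representing the detected templates as boolean arrays of equal shape is an assumption about the data layout.

import numpy as np

def jaccard_coefficient(A, B):
    """Equation (17) for two boolean feature maps of equal shape."""
    A = np.asarray(A, dtype=bool)
    B = np.asarray(B, dtype=bool)
    f11 = np.logical_and(A, B).sum()       # present in both
    f10 = np.logical_and(A, ~B).sum()      # present in A only
    f01 = np.logical_and(~A, B).sum()      # present in B only
    denom = f11 + f10 + f01
    return f11 / denom if denom else 1.0   # identical empty maps match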

Graph 1. Mapping between query1 and training set with the existing method

3. Experimental Results
The authors evaluate the proposed method through several experiments on different databases, including an outdoor database, the Yale face database, the PIE face database, and a database created by the authors for a DST project. In each experiment, two images per subject are chosen as query images and the remaining images serve as template images. The resulting distance values between template images and query images for both the existing and proposed methods are shown in Table 1. The mapping performance of the query images against the training set is illustrated in Graphs 1 and 2 for the existing method and in Graphs 3 and 4 for the proposed method. For example, in the Figure 1 database, the images Bin1 to Swet2 are the reference images, and Query1 and Query2 are the query images. As per Table 1 and Graph 3, with the proposed method Query1 is very close to the Swet1 and Swet2 images while the remaining images are more dissimilar, whereas Query2 is not close to any reference image, as shown in Graph 4. As per Table 1 and Graph 1, with the existing method Query1 is very close only to the Swet2 image while the remaining images are more dissimilar, and again Query2 is not close to any reference image, as shown in Graph 2. The proposed method gives more than 85% accuracy in recognizing iris images, compared with the existing method.
Graph 2. Mapping between query2 and training set with the existing method

Figure 3. Input images (1st row), existing method (2nd row), proposed method (3rd row)

Image    Existing method         Proposed method
         Query1     Query2       Query1     Query2
Bin1     0.013955   0.015652     0.001631   0.005839
Bin10    0.005327   0.003630     0.007887   0.00424
Bin11    0.002995   0.004691     0.003673   0.003797
Swet1    0.006482   0.003860     0.000828   0.006745
Swet2    0.00224    0.001910     0.000706   0.006668
Bs1      0.017962   0.019659     0.001338   0.006132
Bs2      0.021099   0.022796     0.003992   0.003477
Bs3      0.002782   0.001086     0.001631   0.005839

Table 1. Distance values between query images and training images for both the existing and proposed methods

Graph 3. Mapping between query1 and training set with the proposed method

Graph 4. Mapping between query2 and training set with the proposed method

Conclusion
In the existing method, the image is normalized into a predefined range and transformed into the gradient domain, and horizontal and vertical projections are applied to eliminate irrelevant features from the face; however, to produce accurate results its values have to be updated for each image. To overcome this drawback a new method is proposed. The proposed method allows automatic recognition of basic features and measures them precisely. By combining PCA with ENI features, the feature vectors are extracted automatically. The method is validated using computer-simulated database samples and real images. All iris features of the tested database samples are successfully recognized with the Jaccard coefficient, and the computed tables of values and graphs are consistent with manual perception and statistical analysis.

References
[1]. J. S. Agustin, J. P. Hansen, and J. Mateo (2008). Gaze beats mouse: hands-free selection by combining gaze and EMG. In CHI '08, ACM, pp. 3039-3044. DOI: 10.1145/1358628.1358804.
[2]. COGAIN (2006). Communication by gaze interaction: Gazing into the future. http://www.cogain.org.
[3]. E. Murphy-Chutorian and M. Trivedi (2008). Head pose estimation in computer vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1109/TPAMI.2008.106.
[4]. Kwan Ho An and M. J. Chung (2008). 3D head tracking and pose-robust 2D texture map-based face recognition using a simple ellipsoid model. In International Conference on Intelligent Robots and Systems, pp. 307-312.
[5]. S. Ba and J. Odobez (2007). From camera head pose to 3D global room head pose using multiple camera views. In International Workshop on Classification of Events, Activities and Relationships.
[6]. L. Bai, L. Shen, and Y. Wang (2006). A novel Iris location algorithm based on radial symmetry transform. In ICPR, Vol. 3, pp. 511-514.
[7]. B. Kroon, A. Hanjalic, and S. M. Maas (2008). Iris localization for face matching: is it always useful and under what conditions? In CIVR.
[8]. K. Smith, S. O. Ba, J.-M. Odobez, and D. Gatica-Perez (2008). Tracking the visual focus of attention for a varying number of wandering people. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 7, July 2008.
[9]. M. Voit and R. Stiefelhagen (2008). Deducing the visual focus of attention from head pose estimation in dynamic multi-view meeting scenarios. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 7, pp. 1212-1229.
[10]. J. Sung, T. Kanade, and D. Kim (2008). Pose robust face tracking by combining active appearance models and cylinder head models. International Journal of Computer Vision, Vol. 80, No. 2.


ABOUT THE AUTHORS
K. Lavanya received her B.Tech degree in Computer Science from J.N.T. University Hyderabad and her M.Tech degree in Computer Science from J.N.T. University Hyderabad. Currently, she is working as an Assistant Professor in the Department of IT at LakiReddy Bali Reddy College of Engineering, Vijayawada. She has 8 years of teaching experience, has published two research papers in international journals and one research paper in national conferences, and has attended five seminars and workshops. She is a member of the IEEE professional society.

K. Subba Rao received his B.Tech degree in Computer Science from Bapatla Engineering College, Chirala, and his M.Tech degree in Software Engineering from J.N.T. University Anantapur, and he is pursuing a PhD in Software Engineering from A N University. Currently, he is working as an Associate Professor in the Department of IT at LakiReddy Bali Reddy College of Engineering, Vijayawada. He has 11 years of teaching experience, has published one research paper in international journals and about two research papers in national and international conferences, and has attended ten seminars and workshops. He is a member of various professional societies, including IEEE and CSI.

Dr. C. NagaRaju received his B.Tech degree in Computer Science from J.N.T. University Anantapur, his M.Tech degree in Computer Science from J.N.T. University Hyderabad, and his PhD in Digital Image Processing from J.N.T. University Hyderabad. Currently, he is working as Professor and Head of IT at LakiReddy Bali Reddy College of Engineering, Vijayawada, and is the professor in charge of the systems department. He has 15 years of teaching experience, has published thirty research papers in national and international journals and about twenty-eight research papers in national and international conferences, and has attended twenty seminars and workshops. He is a member of various professional societies, including IEEE, ISTE and CSI.

