An Optical Flow-Based Approach To Robust Face Recognition Under Expression Variations




1. S. Ashok Kumar, Assistant Professor, Adhiparasakthi Engineering College, Melmaruvathur
2. M. Thembavani, UG Scholar, Adhiparasakthi Engineering College, Melmaruvathur
3. G. Nithya, UG Scholar, Adhiparasakthi Engineering College, Melmaruvathur

[email protected] [email protected] [email protected]

Abstract—Face recognition is one of the most intensively studied topics in computer vision and pattern recognition, but little work has focused on how to robustly recognize faces with expressions when the training database contains images of a subject under different expressions. A constrained optical flow algorithm, which combines the advantages of the unambiguous correspondence of feature-point labeling and the flexible representation of optical flow computation, has been developed for face recognition from expressional face images. In this paper, we propose a face recognition system that is robust against facial expressions, trained on the images of each subject stored in the database. Our experimental results show that the proposed system improves the accuracy of face recognition from expressional face images.

Index Terms—Constrained optical flow, face recognition

I. INTRODUCTION

Face recognition has been studied for several decades. Even though 2-D face recognition methods have been actively studied in the past, there are still some inherent problems to be resolved for practical applications. It was shown that the recognition rate can drop dramatically when the head pose and illumination variations are too large, or when the face images involve expression variations. Pose, illumination and expression variations are three essential issues to be dealt with in face recognition research. To date, there has not been much research effort on overcoming the expression variation problem in face recognition, though a number of algorithms have been proposed to overcome the pose and illumination variation problems. To improve face recognition accuracy, researchers have applied different dimension reduction techniques, including principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), discriminant common vector (DCV), kernel-PCA, kernel-LDA, kernel-DCV, etc. In addition, several learning techniques, such as SVM, have been used to train classifiers for face recognition. Although applying an appropriate dimension reduction algorithm or a robust classification technique may yield more accurate recognition results, these methods usually require multiple training images for each subject, which may not be available in practice.

In this paper, we focus on the problem of face recognition from a single 2-D face image with facial expression. It can be used in practical applications such as banking, ATMs and secure places in order to prevent the entry of unauthorized persons. With this system we can identify whether a person is allowed inside at the entry level itself.

II. CONSTRAINED OPTICAL FLOW COMPUTATION

The computational algorithms of traditional optical flow cannot guarantee that the computed optical flow corresponds to the exact pixels in different images, since the intensity variations due to expression may mislead the computation of optical flow. Moreover, the brightness constancy constraint is not valid in many circumstances. Therefore, with the generalized dynamic image model (GDIM) proposed by Negahdaripour and Yu, Teng et al. generalized the optical flow constraint to

I_x(r)u(r) + I_y(r)v(r) + I_t(r) + m(r)I(r) + c(r) = 0

where m(r) and c(r) denote the multiplier and offset factors of the scene brightness variation field, I is the image intensity function, the subscripts x, y and t denote the spatiotemporal partial derivatives, r
is a point in the spatiotemporal domain, and [u(r), v(r)]^T is the motion vector at the point r. They proposed to minimize the following discrete energy function, which is defined with an adaptive smoothness adjustment scheme to constrain both the flow components (u_i and v_i) and the brightness variation multiplier and offset factors (m_i and c_i):

f(u) = Σ_{i∈D} ω_i (I_{x,i} u_i + I_{y,i} v_i + I_{t,i} + m_i I_i + c_i)^2 / (I_{x,i}^2 + I_{y,i}^2 + I_i^2 + 1)
     + λ Σ_{i∈D} (α_{x,i} u_{x,i}^2 + α_{y,i} u_{y,i}^2 + β_{x,i} v_{x,i}^2 + β_{y,i} v_{y,i}^2)
     + μ Σ_{i∈D} (γ_{x,i} m_{x,i}^2 + γ_{y,i} m_{y,i}^2 + δ_{x,i} c_{x,i}^2 + δ_{y,i} c_{y,i}^2)

where the subscript i denotes the i-th location, u is the concatenation vector of all the flow components u_i and v_i and all the brightness variation multiplier and offset factors, λ and μ are the parameters controlling the degree of smoothness in the motion and brightness fields, D is the set of all the discretized locations in the image domain, ω_i is the weight for the i-th data constraint, and α_{x,i}, α_{y,i}, β_{x,i}, β_{y,i}, γ_{x,i}, γ_{y,i}, δ_{x,i} and δ_{y,i} are the weights for the corresponding components of the i-th smoothness constraint along the x- and y-directions. In our application, we regard a neutral face image and an expressional face image as two adjacent time instances in the above formulation.

The above quadratic energy function can be rewritten in matrix-vector form as f(u) = u^T K u − 2 u^T b + c. By setting the first-order derivative to zero, minimizing this quadratic and convex function is equivalent to solving a large linear system K u = b, which can be solved efficiently by the incomplete Cholesky preconditioned conjugate gradient (ICPCG) algorithm.

However, the motion vectors of the facial feature points, which were used only as references for interpolating computation in ICPCG, cannot be guaranteed to be consistent with the final converged optical flow. In order to guarantee that the computed optical flow is consistent with the motion vectors at these corresponding feature points, we modify the unconstrained optimization problem in the original formulation of the optical flow estimation to the following constrained optimization problem:

Minimize f(u) = u^T K u − 2 u^T b + c
Subject to u(x_i, y_i) = u_i and v(x_i, y_i) = v_i, ∀(x_i, y_i) ∈ S

where S is the set of feature points and (u_i, v_i) is the specified optical flow vector at the i-th feature point. A modified ICPCG procedure was applied to solve this constrained optimization problem; the details are referred to [ ]. By solving this optimization problem with the displacements of the feature points imposed as hard constraints, we can compute the motion vectors at all pixels in the entire image for different expressions or subjects.

III. FACE RECOGNITION WITH THE PROPOSED SYSTEM

In our proposed system, the training dataset contains 50 images of a person at different expressions. The images in the training database are trained by means of the optical flow algorithm.

Fig 1: Sample images of different expressions present in the training database (happy, disgust, anger, sad and neutral)

Fig 1 shows the different expression images present in the training database: happy, disgust, anger, sad and neutral. The number of images of each expression and its percentage in the training database are graphically represented in Fig 2.

Train Image
Expression:  1   2   3   4   5
Images:     13  11   9   8   7
Percentage: 26  22  18  16  14

Fig 2: Graphical representation of training database images
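The constrained flow computation of Section II can be sketched in code. The following is a much-simplified illustration, not the paper's implementation: it keeps only the flow components (dropping the GDIM brightness multiplier and offset fields m and c and the adaptive weights ω, α, β, γ, δ), and replaces the modified ICPCG solver with a plain projected conjugate gradient that holds the feature-point displacements fixed as hard constraints. All names and parameters here are our own assumptions.

```python
import numpy as np

def constrained_flow(I1, I2, feature_uv, lam=1.0, iters=300):
    """Much-simplified constrained optical flow (illustrative only).

    Minimizes  sum (Ix*u + Iy*v + It)^2 + lam * (|grad u|^2 + |grad v|^2)
    subject to (u, v) being fixed at the labeled feature points.
    feature_uv maps pixel (row, col) -> prescribed flow (u, v).
    """
    H, W = I1.shape
    Iy, Ix = np.gradient(I1)          # spatial derivatives
    It = I2 - I1                      # temporal derivative
    N = H * W
    ix, iy, it = Ix.ravel(), Iy.ravel(), It.ravel()

    # Unknown vector: u field followed by v field (length 2N),
    # with hard constraints at the feature points.
    fixed = np.zeros(2 * N, dtype=bool)
    x = np.zeros(2 * N)
    for (r0, c0), (u0, v0) in feature_uv.items():
        k = r0 * W + c0
        fixed[k] = fixed[N + k] = True
        x[k], x[N + k] = u0, v0

    def lap(f):                       # graph Laplacian of one field
        g = f.reshape(H, W)
        out = 4.0 * g
        out[1:, :] -= g[:-1, :]; out[:-1, :] -= g[1:, :]
        out[:, 1:] -= g[:, :-1]; out[:, :-1] -= g[:, 1:]
        return out.ravel()

    def K(z):                         # apply the system matrix K to z
        u, v = z[:N], z[N:]
        d = ix * u + iy * v           # data-term direction
        return np.concatenate([ix * d + lam * lap(u),
                               iy * d + lam * lap(v)])

    b = -np.concatenate([ix * it, iy * it])

    # Projected CG: residual and search direction are zeroed on the
    # constrained entries, so x never moves off the prescribed values.
    r = b - K(x); r[fixed] = 0.0
    p = r.copy()
    for _ in range(iters):
        rr = r @ r
        if rr < 1e-14:
            break
        Kp = K(p)
        a = rr / (p @ Kp)
        x += a * p
        r -= a * Kp; r[fixed] = 0.0
        p = r + (r @ r / rr) * p
    return x[:N].reshape(H, W), x[N:].reshape(H, W)
```

On a synthetic pair where the second image is the first shifted by half a pixel, the recovered flow field is roughly uniform and the constrained pixel keeps its prescribed displacement exactly.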

The image to be tested is compared with every image in the training database, and the image with the lowest pixel difference is determined. The training image that differs from the test image in the fewest pixels is the best matching image. The type of expression of the test image is then determined from the best matching image. We can also calculate the difference between the test image and the neutral image.

Fig 4: Accuracy of the proposed system

The accuracy of the proposed system is shown in Fig 4. Accuracy denotes the number of pixels matched between the test and training image.
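The matching step just described can be sketched as follows. This is a minimal illustration under our own assumptions: the paper does not give a per-pixel tolerance, so `tol` is an invented parameter, and accuracy is taken as the fraction of matched pixels.

```python
import numpy as np

def best_match(test, train_images, tol=10):
    """Return the index of the training image that differs from `test`
    in the fewest pixels, plus the matching accuracy.

    tol is an assumed per-pixel grayscale tolerance (not from the paper):
    pixels whose absolute difference exceeds it count as mismatched.
    """
    diffs = [int(np.count_nonzero(
                 np.abs(t.astype(np.int32) - test.astype(np.int32)) > tol))
             for t in train_images]
    idx = int(np.argmin(diffs))
    accuracy = 1.0 - diffs[idx] / test.size   # fraction of matched pixels
    return idx, accuracy
```

The expression label of the test image would then be read off from the best-matching training image.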
Here we tested with 10 images in the test database. The test database contains the images to be tested and includes different types of expressions. The main aim is to improve the accuracy of face recognition for expressional images, so we used the optical flow algorithm to train the images. As a result, the number of non-zero values in the difference is reduced, which in turn increases the accuracy of face recognition.
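The distance from the neutral image mentioned above can be computed as a simple pixel-difference sum. The paper does not specify the exact metric, so the sum of absolute differences used here is our assumption:

```python
import numpy as np

def distance_from_neutral(img, neutral):
    """Sum of absolute pixel differences between a test image and the
    subject's neutral image (assumed metric; a larger value suggests a
    stronger expression)."""
    return int(np.abs(img.astype(np.int32) - neutral.astype(np.int32)).sum())
```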

Fig 5: Graph showing distance of test image from neutral image

Fig 3: Tabulation of matched image, distance from the neutral image, and accuracy values

The above figure represents the distance from the neutral image, i.e., the difference between the test image and the neutral image.

IV. CONCLUSION

In this paper, we proposed a 2-D expression-invariant face recognition system based on the optical flow algorithm. The optical flow algorithm is used to train the training database. Thus the accuracy of face recognition is improved when the face undergoes different expressions. The extra advantages of our system are the determination of the distance from the neutral image and the recognition of the type of expression of the subject.

V. REFERENCES

[1] M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face recognition by independent component analysis," IEEE Trans. Neural Netw., vol. 13, no. 6, pp. 1450–1464, Nov. 2002.

[2] H. Cevikalp, M. Neamtu, M. Wilkes, and A. Barkana, "Discriminative common vectors for face recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 1, pp. 4–13, Jan. 2005.

[3] T. Cootes, C. Taylor, D. Cooper, and J. Graham, "Active shape models—Their training and application," Comput. Vis. Image Understand., vol. 61, pp. 18–23, 1995.

[4] T. Cootes, G. J. Edwards, and C. Taylor, "Active appearance models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, pp. 681–685, 2001.

[5] I. A. Essa and A. Pentland, "A vision system for observing and extracting facial action parameters," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1994, pp. 76–83.
