
FACE RECOGNITION SYSTEM

Varnika Garg, Chandigarh University, Gharuan, Mohali ([email protected])
Mitushi Chauhan, Chandigarh University, Gharuan, Mohali ([email protected])
Pallavi Passi, Chandigarh University, Gharuan, Mohali ([email protected])

Abstract—We present an approach to the detection and identification of human faces and describe a working, near real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals. Our approach treats face recognition as a two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and may therefore be described by a small set of 2-D characteristic views. Face images are projected onto a feature space ("face space") that encodes the variation among known face images. OpenCV is used to create the dataset and the trainer that trains on the images in the proper manner. In a surveillance mode, the system automatically detects a person's presence, positions the camera through analysis of the image, captures an image, and then processes the image to determine whether the person is enrolled in the enrollment database.

Keywords—Deep learning, Machine Trainer, Face Recognition, LBPH face recognizer.

I. INTRODUCTION

Face recognition is an important part of the capability of the human perception system and is a routine task for humans, while building a similar computational model of face recognition is not. The computational model contributes not only to theoretical insights but also to many practical applications, such as automated crowd surveillance, access control, design of human-computer interfaces (HCI), content-based image database management, and criminal identification. The earliest work on face recognition can be traced back at least to the 1950s in psychology and to the 1960s in engineering; the earliest studies include work on facial expression and emotions by Darwin. Research on automatic machine recognition of faces started in the 1970s, after the seminal work of Kanade. In 1995, a review paper gave a thorough survey of face recognition technology at that time; video-based face recognition was then still at a nascent stage. During the past decades, face recognition has received increased attention and has advanced technically. Many commercial systems for still-image face recognition are now available, and significant research efforts have recently been focused on video-based face modelling/tracking, recognition, and system integration. New databases have been created, and evaluations of recognition techniques using these databases have been carried out. Face recognition has now become one of the most active applications of pattern recognition, image analysis, and understanding, and one of the easiest ways to establish an individual's identity. Face recognition is a personal identification system that uses the personal characteristics of a person to determine that person's identity. The human face recognition procedure basically consists of two phases: face detection, a process that takes place very rapidly in humans except under conditions where the object is located at a short distance away, and then identification, which recognizes the face as a known individual.

Facial Recognition using Python Libraries

The most popular and probably the simplest way to detect faces using Python is the OpenCV package. Originally written in C/C++, OpenCV now provides bindings for Python. It uses machine learning algorithms to search for faces within a picture. Faces are very complicated, made of thousands of small patterns and features that must be matched. Face recognition algorithms break the task of identifying the face into thousands of smaller, bite-sized tasks, each of which is easy to solve; these are known as classifiers.

A face may have 5,000 or more classifiers, all of which must match for a face to be detected. Since there are at least 5,000 tests per block, you might have millions of calculations to do, which makes detection a difficult process. To solve this, OpenCV uses cascades. An OpenCV cascade breaks the problem of detecting faces into multiple stages and performs a quick test on each block at every stage. The algorithm may have 30 to 50 of these stages, or cascades, and it will only detect a face if all stages pass.

The cascades are a set of XML files that contain the OpenCV data used to detect objects. You initialize your code with the cascade you want, and then it does the work for you. Since face detection is such a common case, OpenCV comes with a number of built-in cascades for detecting everything from faces to eyes, hands, and legs.

You may also use alternatives to OpenCV, such as dlib, which comes with deep-learning-based detection and recognition models.

Face Recognition using Python Algorithm

Face recognition using Python and OpenCV follows a well-defined pattern. When you meet someone for the first time in your life, you look at his/her face, eyes, nose, mouth, color, and overall features. This is your mind learning, or training, for the face recognition of that person by gathering face data. Then the person tells you his/her name. At this point, your mind knows that the face data it just learned belongs to that person. Now your mind is trained and ready to do face recognition. The next time you see the person, or his/her face in a picture, you will immediately recognize him/her. This is how face recognition works. The more you meet the person, the more data your mind collects about the person, and the better you become at recognizing him/her.
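The stage-by-stage early-rejection idea behind OpenCV's cascades can be sketched in plain Python. This is a conceptual toy, not OpenCV's actual implementation: the stage tests below are hypothetical stand-ins for real Haar-feature tests.

```python
# Conceptual sketch of a detection cascade: a window is accepted as a
# face only if it passes every stage; most windows fail an early stage
# and are rejected cheaply, which is what makes the cascade fast.
# The stage tests here are illustrative stand-ins, not Haar features.

def make_stage(threshold):
    # Each stage scores a window and rejects it if the score is too low.
    def stage(window):
        return sum(window) / len(window) >= threshold
    return stage

def cascade_detect(window, stages):
    # Early exit: the first failed stage rejects the window outright.
    for stage in stages:
        if not stage(window):
            return False
    return True

stages = [make_stage(t) for t in (0.1, 0.2, 0.3)]
print(cascade_detect([0.5, 0.6, 0.4], stages))  # all stages pass -> True
print(cascade_detect([0.0, 0.1, 0.1], stages))  # fails an early stage -> False
```

With OpenCV installed, the real equivalent is loading one of the bundled XML cascades with cv2.CascadeClassifier and calling detectMultiScale on a grayscale image.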



The coding steps for face recognition are the same as in the real-life example discussed above:

Training Data Gathering: Gather face data (face images, in this case) of the persons you want to recognize.

Training of Recognizer: Feed that face data (and the respective name of each face) to the face recognizer so that it can learn.

Recognition: Feed new faces of the persons and see whether the face recognizer you just trained recognizes them.

OpenCV provides the following three face recognizers:

1. Eigenfaces recognizer
2. Fisherfaces recognizer
3. LBPH face recognizer

We have used the LBPH face recognizer because our project is based on this recognizer. Below you can read in detail how this method works.

Local Binary Patterns Histograms (LBPH) Face Recognizer: Both Eigenfaces and Fisherfaces are affected by light, and in real life perfect lighting conditions are not always available. The LBPH face recognizer is an improvement that overcomes this drawback. The LBPH algorithm tries to find the local structure of an image, and it does that by comparing each pixel with its neighboring pixels. With so much just on the horizon, it will be interesting to see where this rise in facial recognition technology takes us.

You also have the option of switching between face recognizers with a single line of code change. You can come up with detailed code through a simple approach and, what is more, a much better outcome.

Mastering Python for face recognition or otherwise will prepare you better for a rewarding career in Python. Tremendous growth, enormous learning, and a lucrative salary are some of the well-known perks of a promising career in Python. Add to that the magic touch of a Data Analytics course, and you are ready to rock! A Python career also offers diversity in terms of career choices. One can start off as a developer or programmer and later switch to the role of a data scientist. With a substantial amount of experience and a Python online course certification, one can also become a certified trainer in Python or an entrepreneur. But the bottom line remains the same.

LITERATURE SURVEY

Face detection is a computer technology that determines the location and size of a human face in an arbitrary (digital) image. The facial features are detected, and any other objects, such as trees, buildings, and bodies, are ignored in the digital image. It can be regarded as a specific case of object-class detection, where the task is finding the locations and sizes of all objects in an image that belong to a given class. Face detection can also be regarded as a more general case of face localization, in which the task is to find the locations and sizes of a known number of faces (usually one). Basically, there are two types of approaches to detecting the facial part in a given image: the feature-based and the image-based approach. The feature-based approach tries to extract features of the image and match them against knowledge of facial features, while the image-based approach tries to get the best match between training and testing images.
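The pixel-versus-neighbours comparison at the heart of LBPH, described in the section above, can be sketched with NumPy. This is a minimal sketch of the basic 3x3 LBP operator and its histogram descriptor, not OpenCV's LBPH recognizer (which additionally divides the image into a grid of cells and concatenates per-cell histograms); the tiny test image is a hypothetical stand-in for real face data.

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: for each interior pixel, each of
    the 8 neighbours contributes one bit, set when the neighbour is >=
    the centre pixel, giving an 8-bit code describing local structure."""
    center = img[1:-1, 1:-1]
    # Neighbour offsets in a fixed clockwise order around the centre.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    # The normalized histogram of LBP codes is the descriptor that an
    # LBPH-style recognizer compares between faces.
    hist, _ = np.histogram(lbp_codes(img), bins=bins, range=(0, bins))
    return hist / hist.sum()

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=np.uint8)
print(lbp_codes(img))  # single interior pixel -> [[120]]
```

In the 3x3 example the centre is 5; the neighbours 6, 9, 8, 7 are >= 5 and set bits 3 to 6, giving the code 8 + 16 + 32 + 64 = 120.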
a) FACE RECOGNITION

Face recognition is fast gaining importance in various fields. We have entered an age in which facial recognition technologies will soon be part of everyday life. China, for example, monitors people by CCTV or by police wearing special glasses, and then logs onto a database that checks the habitual behavior of the people, their social credit, and even their friends. Cameras and facial recognition are increasingly being used in public and private buildings. Some schools in the United States are now installing facial recognition systems to prevent gun attacks by students, given that most rampages are carried out by students whose faces will already be on a database and who have full access to the premises. This has led to increased demand for coders and developers with knowledge of face recognition algorithms; Python and OpenCV, in particular. Face recognition with Python takes just a few lines of code to produce a fully working face recognition application.

DIFFERENT APPROACHES OF FACE RECOGNITION:
There are two predominant approaches to the face recognition problem: geometric (feature-based) and photometric (view-based). As researcher interest in face recognition continued, many different algorithms were developed, three of which have been well studied in the face recognition literature. Recognition algorithms can be divided into two main approaches:
1. Geometric: Based on the geometrical relationships between facial landmarks, or in other words the spatial configuration of facial features. The main geometrical features of the face, such as the eyes, nose, and mouth, are first located, and faces are then classified on the basis of various geometrical distances and angles between features.
2. Photometric stereo: Used to recover the shape of an object from a number of images taken under different lighting conditions. The shape of the recovered object is defined by a gradient map, which is made up of an array of surface normals (Zhao and Chellappa, 2006).

Popular recognition algorithms include:
1. Principal Component Analysis using Eigenfaces (PCA),
2. Linear Discriminant Analysis,
3. Elastic Bunch Graph Matching using the Fisherface algorithm.

3) The sensors work by projecting structured light onto the face. Up to a dozen or more of these image sensors can be placed on the same CMOS chip; each sensor captures a different part of the spectrum.
4) Even a perfect 3D matching technique could be sensitive to expressions. To address this, a group at the Technion applied tools from metric geometry to treat expressions as isometries.
5) Skin texture analysis: Another emerging trend uses the visual details of the skin, as captured in standard digital or scanned images. This technique, called skin texture analysis, turns the unique lines, patterns, and spots apparent in a person's skin into a mathematical space.
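The eigenface (PCA) approach listed above can be sketched with NumPy. This is a minimal sketch, and the random matrix is a hypothetical stand-in for flattened grayscale face images; a real system would recognize a face by comparing its weight vector to those of enrolled faces.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for a training set: 10 flattened face images.
faces = rng.random((10, 64))

# PCA: centre the data, then take the top right-singular vectors of the
# centred matrix; these are the "eigenfaces" spanning the face space.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:4]                      # keep the 4 strongest components

# Project a face into face space: its coordinates in the eigenface basis.
weights = eigenfaces @ (faces[0] - mean_face)
# Reconstruction from those few weights approximates the original face;
# recognition compares weight vectors (e.g. by Euclidean distance).
reconstruction = mean_face + eigenfaces.T @ weights
print(weights.shape)  # -> (4,)
```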

b) FACE DETECTION:
Face detection involves separating image windows into two classes: one containing faces (targets), and one containing the background (clutter). It is difficult because, although commonalities exist between faces, they can vary considerably in terms of age, skin colour, and facial expression. The problem is further complicated by differing lighting conditions, image qualities, and geometries, as well as the possibility of partial occlusion and disguise. An ideal face detector would therefore be able to detect the presence of any face under any set of lighting conditions, upon any background. The face detection task can be broken down into two steps. The first step is a classification task that takes some arbitrary image as input and outputs a binary value of yes or no, indicating whether there are any faces present in the image. The second step is the face localization task, which aims to take an image as input and output the location of any face or faces within that image as some bounding box (x, y, width, height).

Facial recognition combining different techniques: As every method has its advantages and disadvantages, technology companies have amalgamated the traditional, 3D recognition, and skin texture analysis approaches to create recognition systems with higher rates of success. Combined techniques have an advantage over other systems: they are relatively insensitive to changes in expression, including blinking, frowning, or smiling, and can compensate for mustache or beard growth and the appearance of eyeglasses. Such a system is also uniform with respect to race and gender.

The face detection system can be divided into the following steps:

1. Pre-Processing: To reduce the variability in the faces, the images are processed before they are fed into the network. All positive examples, that is, the face images, are obtained by cropping images with frontal faces to include only the front view. All the cropped images are then corrected for lighting through standard algorithms.
2. Classification: Neural networks are implemented to classify the images as faces or non-faces by training on these examples. We use our implementation of the neural network toolbox for this task. Different network configurations are experimented with to optimize the results.
3. Localization: The trained neural network is then used to search for faces in an image and, if present, localize them in a bounding box. The features of the face on which the work has been done are position, scale, orientation, and illumination.
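The pre-processing step above (cropping to the frontal face, then correcting for lighting) can be sketched with NumPy. Histogram equalization is one standard lighting-correction algorithm of the kind the text alludes to; the toy image below is a hypothetical stand-in for a captured frame.

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization for an 8-bit grayscale image: map each grey
    level through the normalized cumulative histogram, spreading the
    intensities over the full 0-255 range (a standard lighting fix)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]        # first non-zero CDF value
    n = img.size
    lut = np.clip(np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

def crop_center(img, size):
    # Crop a centred square patch, mimicking "frontal face only" cropping.
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

img = np.full((8, 8), 100, dtype=np.uint8)
img[2:6, 2:6] = 120                          # brighter central region
img[3:5, 3:5] = 140                          # even brighter core
out = equalize_hist(crop_center(img, 4))
print(out.min(), out.max())  # -> 0 255
```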
FEATURE BASE APPROACH:
We have used different approaches to enhance the compatibility of the project, so that the user can use it in an easier manner, as explained below:
1) One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view.
2) Three-dimensional data points from a face vastly improve the precision of face recognition. 3D research is enhanced by the development of sophisticated sensors that do a better job of capturing 3D face imagery.

Thermal cameras: A different form of taking input data for face recognition is to use thermal cameras; with this procedure the camera detects only the shape of the head and ignores subject accessories such as glasses, hats, or makeup. Unlike conventional cameras, thermal cameras can capture facial imagery even in low-light and nighttime conditions without using a flash and exposing the position of the camera. However, a problem with using thermal pictures for face recognition is that the databases of thermal face images are limited. Diego Socolinsky and Andrea Selinger (2004) researched the use of thermal face recognition in real-life and operational scenarios, and at the same time built a new database of thermal face images. The research used low-sensitivity, low-resolution ferroelectric sensors capable of acquiring longwave thermal infrared (LWIR) imagery. The results show that a fusion of LWIR and regular visual cameras gives better results for outdoor probes. Indoor results show that visual imagery has 97.05% accuracy, LWIR 93.93%, and the fusion 98.40%; on outdoor probes, however, visual has 67.06%, LWIR 83.03%, and the fusion 89.02%. The study used 240 subjects over a period of 10 weeks to create a new database. The data was collected on sunny, rainy, and cloudy days.

In 2018, researchers from the U.S. Army Research Laboratory (ARL) developed a technique that would allow them to match facial imagery obtained using a thermal camera with imagery in databases that was captured using a conventional camera. This approach utilized artificial intelligence and machine learning to allow researchers to visibly compare conventional and thermal
imagery. Known as a cross-spectrum synthesis method, due to how it bridges facial recognition between two different imaging modalities, this method synthesizes a single image by analyzing multiple facial regions and details. It consists of a non-linear regression model that maps a specific thermal image into a corresponding visible facial image, and an optimization problem that projects the latent projection back into the image space. ARL scientists have noted that the approach works by combining global information (i.e., features across the entire face) with local information (i.e., features regarding the eyes, nose, and mouth). In addition to enhancing the discriminability of the synthesized image, the facial recognition system can be used to transform a thermal face signature into a refined visible image of a face. According to performance tests conducted at ARL, researchers found that the multi-region cross-spectrum synthesis model demonstrated a performance improvement of about 30% over baseline methods and about 5% over state-of-the-art methods. It has also been tested for landmark detection in thermal images.

Different modules used in the face recognition system:
a) Module 1 (installing the libraries OpenCV, Pillow, NumPy).
b) Module 2 (creating the dataset creator).
c) Module 3 (training the machine).
d) Module 4 (connecting with the SQLite database).
e) Module 5 (detector).

a) Module 1 (installing the libraries OpenCV, Pillow, NumPy): For the creation of this project we have installed several libraries, such as OpenCV, Pillow, and NumPy. These help in running the different modules and improve the compatibility between the camera and the system used for recognizing the images. First, we wrote the Python code importing these libraries, so that the machine is able to recognize the face and draw a square around it.

b) Module 2 (creating the dataset creator): For the recognition of the images we have to create the dataset, so that the machine is able to recognize the image and identify the person properly. When we run the code, the camera opens, starts capturing images of the person looking at it, and builds the dataset in the background. This step is required: without it, the camera cannot capture a proper dataset of the persons. The person has to look straight at the camera; if the person looks here and there, it is difficult for the computer to detect the person properly.

c) Module 3 (trainer): After doing so, we move on to the trainer, so that the machine saves the dataset at the backend and the coder can check whether it is working or not. It now saves the dataset that we created in Module 2 for storing the images. Ten images per person are used for training; by doing this we are able to train our machine properly. The data is then stored in the database so that it remains stored permanently even if the computer is powered off.

d) Module 4 (connecting with the SQLite database): Before moving on to the detector part, we first have to store the data, so that whenever we run our Python program it is able to recognize the image; this is possible because we have the database at the backend. This is the backend of the project, where we store our data, and it demonstrates the connectivity of Python with the database, i.e., SQLite. After this we move on to the result, i.e., the next module, to check whether it is working or not.

e) Module 5 (detector): Finally, we are left with Module 5, in which the face recognition is tested to see whether the project is working properly or not. The system now recognizes the image and is able to detect the face. It is clear that the recognizer detects the face properly and has stored the data as well. Hence we have successfully built the project, and everything works as intended.

Further Issues and Conclusion
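As a closing illustration of the Module 4 backend described above, the enrollment database can be sketched with Python's built-in sqlite3 module. This is a minimal sketch: the table and column names are assumptions, not the project's actual schema.

```python
import sqlite3

# Minimal sketch of an enrollment database: each enrolled person has an
# id and a name, and the detector looks the predicted label id up here.
# The in-memory database and the schema below are illustrative.
conn = sqlite3.connect(":memory:")       # a real project would use a file
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)")

def enroll(conn, person_id, name):
    # Insert or update the record created alongside a captured dataset.
    conn.execute("INSERT OR REPLACE INTO people (id, name) VALUES (?, ?)",
                 (person_id, name))
    conn.commit()

def lookup(conn, person_id):
    # Called once the recognizer predicts a label id for a detected face.
    row = conn.execute("SELECT name FROM people WHERE id = ?",
                       (person_id,)).fetchone()
    return row[0] if row else "Unknown"

enroll(conn, 1, "Varnika")
print(lookup(conn, 1))   # -> Varnika
print(lookup(conn, 99))  # -> Unknown
```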
We are currently extending the system to deal with a range of aspects (other than full frontal views) by defining a small number of face classes for each known person, corresponding to characteristic views. Because of the speed of the recognition, the system has many chances within a few seconds to attempt to recognize many slightly different views, at least one of which is likely to fall close to one of the characteristic views. An intelligent system should also have an ability to adapt over time. Reasoning about images in face space provides a means to learn and subsequently recognize new faces in an unsupervised manner. When an image is sufficiently close to face space (i.e., it is face-like) but is not classified as one of the familiar faces, it is initially labelled as "unknown". The computer stores the pattern vector and the corresponding unknown image. If a collection of "unknown" pattern vectors clusters in the pattern space, the presence of a new but unidentified face is postulated. A noisy image or partially occluded face should cause recognition performance to degrade gracefully, since the system essentially implements an autoassociative memory for the known faces. This is evidenced by the projection of the occluded face image.

References

Davies, Ellis, and Shepherd (eds.), Perceiving and Remembering Faces, Academic Press, London.
W. W. Bledsoe, "The model method in facial recognition," Panoramic Research Inc., Palo Alto, CA, Rep. PRI:15, Aug. 1966.
T. Kanade, "Picture processing system by computer complex and recognition of human faces," Dept. of Information Science, Kyoto University, Nov. 1973.
A. L. Yuille, D. S. Cohen, and P. W. Hallinan, "Feature extraction from faces using deformable templates," Proc. CVPR, San Diego, CA, June 1989.
S. Carey and R. Diamond, "From piecemeal to configurational representation of faces," Science, Vol. 195, Jan. 21, 1977, 312-313.
M. Fleming and G. Cottrell, "Categorization of faces using unsupervised feature extraction," Proc. IJCNN-90, Vol. 2.
T. Kohonen and P. Lehtio, "Storage and processing of information in distributed associative memory systems," in G. E. Hinton and J. A. Anderson (eds.), Parallel Models of Associative Memory, Hillsdale, NJ: Lawrence Erlbaum Associates, 1981, pp. 105-143.
T. J. Stonham, "Practical face recognition and verification with WISARD," in H. Ellis, M. Jeeves, F. Newcombe, and A. Young (eds.), Aspects of Face Processing, Martinus Nijhoff Publishers, Dordrecht, 1986.
P. Burt, "Smart sensing within a Pyramid Vision Machine," Proc. IEEE, Vol. 76, No. 8, Aug. 1988.
L. Sirovich and M. Kirby, "Low-dimensional procedure for the characterization of human faces," J. Opt. Soc. Am. A, Vol. 4, No. 3, March 1987, 519-524.
M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, Vol. 3, No. 1, 1991.