Title of the Project

Senior Project

Primary Advisor: <Advisor Name>


Secondary Advisor: <Advisor Name>

Presented by:
Student Reg# Student Name

Department of Computer Science


Forman Christian College (A Chartered University)
Topic of Project

By

NAME(S) OF PARTICIPANT(S)

Project submitted to

Department of Computer Science,

Forman Christian College (A Chartered University),

Lahore, Pakistan.

in partial fulfillment of the requirements for the degree of

BACHELOR OF SCIENCE
IN
COMPUTER SCIENCE (Honors)

Primary Project Advisor                          Secondary Project Advisor


Senior Project Management Committee Representative
Abstract
<Despite the fact that an abstract is quite brief, it must do almost as much work as the multi-page paper that follows it. This means that it should in most cases include the following sections. Each section is typically a single sentence, although there is room for creativity. In particular, the parts may be merged or spread among a set of sentences. Use the following as a checklist for your abstract, but do not write any headings in the abstract itself:

Motivation: Why do we care about the problem and the results? If the problem isn't obviously "interesting" it might be better to put motivation first; but if your work is incremental progress on a problem that is widely recognized as important, then it is probably better to put the problem statement first to indicate which piece of the larger problem you are breaking off to work on. This section should include the importance of your work, the difficulty of the area, and the impact it might have if successful.

Problem statement: What problem are you trying to solve? What is the scope of your work (a generalized approach, or for a specific situation)? Be careful not to use too much jargon. In some cases it is appropriate to put the problem statement before the motivation, but usually this only works if most readers already understand why the problem is important.

Approach: How did you go about solving or making progress on the problem? Did you use simulation, analytic models, prototype construction, or analysis of field data for an actual product? What was the extent of your work (did you look at one application program or a hundred programs in twenty different programming languages)? What important variables did you control, ignore, or measure?

Results: What's the answer? Specifically, most good computer papers conclude that something is so many percent faster, cheaper, smaller, or otherwise better than something else. Put the result there, in numbers. Avoid vague, hand-waving results such as "very", "small", or "significant". If you must be vague, you are only given license to do so when you can talk about orders-of-magnitude improvement. There is a tension here in that you should not provide numbers that can be easily misinterpreted, but on the other hand you don't have room for all the caveats.

Conclusions: What are the implications of your answer? Is it going to change the world (unlikely), be a significant "win", be a nice hack, or simply serve as a road sign indicating that this path is a waste of time (all of the previous results are useful). Are your results general, potentially generalizable, or specific to a particular case?>

Acknowledgement
<The acknowledgement comes here. Make sure it is not more than one page long.>

List of Figures
<Provide a list of all figures in the following format.

Figure 1   Figure 1 title   page number

Note: In the document, the title of each figure should be written under the figure, along with the figure number (Figure 1: Xyz)>

List of Tables
Provide a list of all tables in the following format.

Table 1   Table 1 title   page number

Note: Every table must bear a title and table number, which should be written at the top of the table (Table 1: Abc)

TABLE OF CONTENTS
ABSTRACT........................................................................................................................................................ I
ACKNOWLEDGEMENT................................................................................................................................. II
LIST OF FIGURES.......................................................................................................................................... III
LIST OF TABLES........................................................................................................................................... IV
CHAPTER 1. INTRODUCTION............................................................................................................................ 1
1.1 INTRODUCTION..................................................................................................................................................1
1.2 OBJECTIVES....................................................................................................................................................... 1
1.3 PROBLEM STATEMENT.........................................................................................................................................1
1.4 SCOPE..............................................................................................................................................................1
CHAPTER 2. REQUIREMENTS ANALYSIS............................................................................................................ 2
2.1 LITERATURE REVIEW...........................................................................................................................................2
2.2 USER CLASSES AND CHARACTERISTICS.....................................................................................................................2
2.3 DESIGN AND IMPLEMENTATION CONSTRAINTS..........................................................................................................2
2.4 ASSUMPTIONS AND DEPENDENCIES........................................................................................................................2
2.5 FUNCTIONAL REQUIREMENTS................................................................................................................................2
2.6 USE CASE DIAGRAM...........................................................................................................................................3
2.7 NONFUNCTIONAL REQUIREMENTS.........................................................................................................................4
2.8 OTHER REQUIREMENTS.......................................................................................................................................4
CHAPTER 3. SYSTEM DESIGN............................................................................................................................ 5
3.1 APPLICATION AND DATA ARCHITECTURE.................................................................................................................5
3.2 COMPONENT INTERACTIONS AND COLLABORATIONS..................................................................................................5
3.3 SYSTEM ARCHITECTURE.......................................................................................................................................5
3.4 ARCHITECTURE EVALUATION.................................................................................................................................5
3.5 COMPONENT-EXTERNAL ENTITIES INTERFACE...........................................................................................................5
3.6 SCREENSHOTS/PROTOTYPE...................................................................................................................................5
3.7 OTHER DESIGN DETAILS.......................................................................................................................................6
CHAPTER 4. TEST SPECIFICATION AND RESULTS............................................................................................... 7
4.1 TEST CASE SPECIFICATION....................................................................................................................................7
4.2 SUMMARY OF TEST RESULTS................................................................................................................................7
CHAPTER 5. CONCLUSION AND FUTURE WORK................................................................................................ 8
5.1 PROJECT SUMMARY............................................................................................................................................8
5.2 PROBLEMS FACED AND LESSONS LEARNED...............................................................................................................8
5.3 FUTURE WORK...................................................................................................................................................8
REFERENCES........................................................................................................................................................ 9
APPENDIX A GLOSSARY.............................................................................................................................. 10
APPENDIX B DEPLOYMENT/INSTALLATION GUIDE......................................................................................10
APPENDIX C USER MANUAL....................................................................................................................... 10
APPENDIX D STUDENT INFORMATION SHEET.............................................................................................. 10
APPENDIX E PLAGIARISM FREE CERTIFICATE............................................................................................... 11
APPENDIX F PLAGIARISM REPORT.............................................................................................................. 11

Revision History
Name Date Reason For Changes Version

Chapter 1. Introduction
1.1 Introduction
Due to the outbreak of the Coronavirus, the widespread use of face masks has become a serious problem for recognizing faces. Facial recognition algorithms find it arduous to identify people wearing masks, and face recognition techniques, which are among the most important means of identification, often fail, creating a huge predicament for authentication applications that rely on them. Face recognition systems are used for many different purposes, such as unlocking phones, entry and exit control, ticketing at railway stations, payments, disease diagnosis, and attendance, and are basically the need of the hour. A person who previously used facial recognition systems for these activities would hesitate to do so now, as no one wants to take off their mask in these circumstances. Formerly, the measurements between different facial features were extracted and compared with a previously stored image to identify a person; now, a mask worn over the nose, mouth, and cheeks presents a great challenge for facial recognition algorithms trying to identify a masked person. At airports and other public places where people tend to take off their masks, cameras can detect the user and notify them to put the mask back on. Security problems in Pakistan are increasing day by day, and with COVID-19 the face mask has become a medium of burglary and theft. Crime rates are always an issue in our country; thieves tend to cover their faces with masks, and detecting someone with 50% of the face occluded is quite a challenge, although with improving technology identifying such a person need no longer be an issue. If an intruder breaks into someone's house, security alerts are triggered, but it is still difficult to identify the person, especially without an updated facial recognition system. Because of these failures, it is hard to identify anyone who covers their face, which makes property security unsafe, unreliable, and very inconvenient in such situations.
Our study is based on two objectives:
a) Recognizing whether a person is wearing a mask
b) Recognizing the person while wearing a mask
This system identifies a person wearing a mask using techniques and libraries such as OpenCV, face recognition libraries, matplotlib, Pillow (Image), and MobileNetV2 (an already pre-trained CNN), with an input image shape of 24 x 24. A deep learning algorithm is used for face mask detection; we made it more robust and tested it in real time. In the future, we would like to build a complete system that also works on still images and to develop an application that syncs a person's identity with other biometric technologies, such as fingerprint analysis, for better security enforcement.
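A minimal sketch of the mask-detection step is shown below, assuming a MobileNetV2-based classifier fine-tuned on 224 x 224 face crops; the model file name, the Haar-cascade face detector, and the two-class output order are illustrative assumptions, not the project's exact artifacts.

import cv2
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.models import load_model

# Hypothetical artifacts: OpenCV's bundled frontal-face Haar cascade and an
# assumed fine-tuned MobileNetV2 saved as "mask_detector.h5".
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mask_model = load_model("mask_detector.h5")

def detect_masks(frame):
    """Return (x, y, w, h, label) for every face found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
        face = preprocess_input(face.astype("float32"))
        mask_p, no_mask_p = mask_model.predict(face[np.newaxis, ...])[0]  # assumed class order
        results.append((x, y, w, h, "mask" if mask_p > no_mask_p else "no mask"))
    return results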

1.2 Objectives
Our main objective is to develop a system that can
a) Recognize whether a user is wearing a mask
b) Recognize a user while they are wearing a mask

1.3 Problem Statement


Security concerns in Pakistan are very high, and the entry of intruders into public places or even houses can be very risky. After the COVID-19 outbreak, most offices, schools, and commercial areas switched to card scanning, because contact-based biometric devices such as fingerprint scanners were shut down: touching shared surfaces is discouraged to stay safe from contamination. At the same time, the necessary use of face masks made it hard for cameras to detect a user. To help reduce the level of crime, this system detects and recognizes users wearing masks. Having the user's information in the database helps determine who is entering the building, whether it is a person from the office or someone unknown.

1.4 Scope
We are designing this system by training it on a dataset. We performed different tests on an existing dataset of several famous celebrities. The system is built to detect and recognize users wearing masks, and we will present the results of our implementation and testing on this dataset to show how it works.

Chapter 2. Requirements Analysis
2.1 Literature Review
There has been a significant amount of work done on machine learning, facial recognition, and pattern recognition. In fact, after COVID-19 many AI specialists worked on systems that recognize users wearing masks. For our study we took two different approaches when reading the research papers:
a) Recognizing if the end-user is wearing a mask
b) Recognizing the end-user from the naked part of the face
We studied papers for both approaches and noted several gaps in, and techniques used by, the existing work. Some of the common recent works on FR and MFR are surveyed briefly here, starting with the papers read for part A.

A) Recognizing if the end-user is wearing a mask


The following papers are:
(Salama, December 3, 2020) [1] This paper discusses the development of facial recognition
technology, including a study of transfer learning in fog computing and cloud computing. It
uses a DCNN, which extracts different features so that faces can be compared efficiently. They
also used Decision Tree (DT), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM)
classifiers. The experimental results show that the proposed method is superior to the other
algorithms on all parameters, achieving higher accuracy (99.06%), precision (99.12%), recall
(99.07%), and specificity (99.10%) than the comparison algorithms.
(Ahmad, November 2012) [2] The goal of this paper is to provide a complete solution for
image-based face recognition with higher accuracy and response rate. It uses AdaBoost,
Haar-like features, LBP, SVM, and HOG for face detection, evaluated on five datasets. In the
current system, Haar-like features perform relatively well but produce more false detections
than LBP, so reducing these false detections in surveillance settings is left as future work;
for the recognition part, Gabor features perform well because their properties overcome the
complexity of the datasets.
(Nazeer, March 2007) [3] This paper provides an overview of the proposed FRS and explains
the methodology used. It applies photometric normalization techniques, histogram equalization,
and homomorphic filtering, and compares faces using Euclidean distance. The system has two
phases. In the first phase, the camera acquires the face, geometric and photometric
normalizations are applied, features are extracted, and the result is loaded into the database
on which the system is trained to detect the user. The second phase is verification: the
user's face is captured again and verified against the database. This phase requires several
modules. The face detection module extracts the features from the face, after which the image
is resized and prepared for recognition; it uses the AdaBoost system. The face recognition
module performs preprocessing and feature extraction on the images obtained from the camera
and the database. The first experiment evaluates the verification performance of the face
recognition system on the original face images and shows that, although the ED classifier has
the lowest HTER, the NN classifier gives the best result on average for both the PCA and LDA
feature extractors. In the second experiment, the combination of histogram equalization and
homomorphic filtering is applied, and the NC classifier has the lowest HTER for both feature
extractors. Applying homomorphic filtering and histogram equalization to the face images shows
that NN has the lowest HTER, and NN is considered the best of the three classifiers.
(teoh, 2021) [4] This paper describes how to design and develop a face recognition system
through deep learning using OpenCV in Python. A face recognition and identification system is
designed and developed using a deep learning approach, and the overall procedure, from training
the data with a CNN to performing recognition, is described. It is verified that a classifier
trained on a large number of face images can achieve an accuracy of 91.7% on still images and
86.7% on real-time video.
(Singh, 05 May 2019) [5] They used the KLT algorithm and the Viola-Jones algorithm for face
detection, which detects human faces using a Haar cascade classifier. The use of neural
networks for face recognition is shown, along with a semi-supervised learning method that uses
support vector machines for face recognition. The recognition system is simple and works
efficiently. The performance of this method is compared with other existing face recognition
methods, and better recognition accuracy is observed with the proposed method. Face recognition
using the KLT algorithm and a fusion with PCA plays a vital role in a wide range of
applications where high accuracy in identifying a person is desired, and all of the techniques
work well for face recognition.
(Han, 2021) [6] This paper focuses on research hotspots in face recognition based on deep
learning in the field of biometrics, combining the relevant theory and methods of deep learning
with face recognition technology. Deep learning has key advantages over other machine learning
approaches to face recognition: first, low-level features can be learned from raw data with
almost no preprocessing; second, complex interactions between those features can be detected.
Therefore, deep learning can not only extract more useful information from the data but also
build a more accurate model.
(Wang) [7] This paper applies eye detection techniques to the FRGC database, achieving an eye
detection rate of 94.5%. Eye detection has two purposes: one is to detect the existence of
eyes, and the other is to accurately locate eye positions. Experimental results on the FRGC 1.0
database validate the eye detector and show that face recognition based on automatic eye
localization has accuracy comparable to face recognition based on manually marked eye
positions. This demonstrates that the proposed eye localization method can be incorporated into
a fully automatic face recognition system.
(Hussain) [8] This paper discusses two major approaches: feature-based and statistical. The
former comprises scale-space filtering and Elastic Bunch Graph matching, while the latter
includes PCA and LDA techniques for recognition. The review provides a brief overview of both
approaches: linear characteristics are covered by the feature-based technique, and non-linear
characteristics by the statistical approach. Both methods perform well under different
circumstances and provide different bases for face recognition; PCA and LDA enhance recognition
power without losing informative data.

B) Recognizing the end-user from the naked part of the face


The following papers are:

(Mazli, 2021) [9] The system uses a deep learning model for face mask detection and Procrustes
analysis to measure face identity similarity. A combination of standard face recognition
performance measures was used to evaluate the effectiveness of the system. The results indicate
that the system recognizes the identity of a person wearing a face mask with up to 97.14%
accuracy. Despite its exploratory nature, this study offers some insight into masked face
identity recognition.
(Ullah, 7 December 2021) [10] In this paper, they propose a novel DeepMasknet framework capable
of both face mask detection and masked facial recognition. The work comprises two main phases:
the first covers data collection and dataset preparation, while the second presents the
DeepMasknet model for face mask detection and masked facial recognition. They created a
large-scale and diverse face image dataset to evaluate both tasks. Accuracies of 100% for face
mask detection and 93.33% for masked facial recognition confirm the superiority of the
DeepMasknet model over contemporary techniques. Furthermore, experimental results on three
standard Kaggle datasets and their MDMFR dataset verify the robustness of the model under
diverse conditions, i.e., variations in face angle, lighting conditions, gender, skin tone, age,
type of mask, and occlusions (glasses).
(Fitousi, 3 November 2021) [11] This study focuses on the impact of masks on the speed of
processing facial identity and other important social dimensions, providing a systematic
assessment of the impact of COVID-19 masks on the perception of identity, emotion, gender, and
age. The results reveal that COVID-19 masks pose a real challenge to everyday social
interaction: masks hinder major aspects of social perception, interfering with the normal speed
and accuracy of extracting identity, emotion, age, and gender.
(Zhang, 18 November 2021) [12] This paper proposes a mask recognition algorithm based on an
improved YOLO-V4 neural network that integrates SE-Net and DenseNet and introduces deformable
convolution. There are two schemes for mask detection technology: the mainstream scheme
analyzes the pictures from video surveillance with an artificial-intelligence target detection
model to determine whether a pedestrian is wearing a mask, while the second processes the
captured images with traditional image processing methods. Compared with other target detection
networks, the improved YOLO-V4 network used in this paper improves the accuracy of face
recognition and mask detection to a certain extent.
(Walid, July 2020) [13] In this paper, they propose a reliable method based on discarding
masked regions and using deep-learning-based features to address the masked face recognition
problem. The proposed method improves the generalization of the face recognition process in the
presence of a mask. To accomplish this task, they combine a deep-learning-based method with a
quantization-based technique to handle the recognition of masked faces.
(Vu, 14 August 2021) [14] In this paper, they propose a method that combines deep learning with
Local Binary Pattern (LBP) features to recognize masked faces, using RetinaFace, a joint
extra-supervised and self-supervised multi-task learning face detector that can deal with faces
at various scales, as a fast yet effective encoder. Evaluation results demonstrate that the
proposed method significantly outperforms several state-of-the-art face recognition methods,
including Dlib and InsightFace, on the published Essex dataset and on their self-collected
dataset COMASK20, with an 87% F1-score on COMASK20 and a 98% F1-score on Essex.
(Damer, 07 May 2022) [15] This work provides a joint evaluation and in-depth analysis of the
face verification performance of human experts in comparison with state-of-the-art automatic FR
solutions. They analyzed the correlations between the verification behaviors of human experts
and automatic FR solutions under different settings: unmasked pairs, masked probes with
unmasked references, and masked pairs, with both real and synthetic masks.
(Moungsouy, 21 April 2022) [16] They developed a system that recognizes human faces from
whatever facial components are available, which vary depending on whether a mask is worn. The
solution is built on the FaceNet framework, modifying the existing facial recognition model to
improve performance in both the mask-wearing and non-mask-wearing scenarios. The result shows
an outstanding accuracy of 99.2% on mask-wearing faces. The feature heatmaps also show that
non-occluded components, including the eyes and nose, become more significant for recognizing
human faces compared with the lower part of the face, which can be occluded by a mask.

2.2 User Classes and Characteristics
User: The user turns on the system, the camera takes their photos, and the photos are entered into the database; the pictures include both the masked and unmasked face. After that, the user can be recognized while wearing a mask.

2.3 Design and Implementation Constraints


 Hardware requirements: any sort of PC or laptop with a good built-in camera; an external camera can also be used.
 A powerful processor for training the dataset.
 Language requirement: Python.
 Software requirement: Anaconda (Spyder 4.1.5).
 Coding libraries: OpenCV, json5, shutil, Keras, TensorFlow, argparse.

2.4 Assumptions and Dependencies


The person using the system should have basic knowledge of how to use it and some minimal qualifications.
 The user to be captured should be in front of the camera.
 The mask should be properly visible.
 The lighting in the area should be good.
 The image should be crisp and sharp, not blurry.
 The hardware and software requirements should be met for the project to work.

2.5 Functional Requirements


The functional requirements describe the way and sequence in which the application must work, which is explained in the use cases below.

2.5.1 Face the Camera
Identifier UC1
Purpose Once the user faces the camera, he/she will be recognized
Priority High
Pre-conditions …
Postconditions

Typical Course of Action
S# Actor Action System Response
1 Come in front of camera Start recognizing
Alternate Course of Action
S# Actor Action System Response
1 Walk away from camera System goes to sleep state

Table 1: UC-1

2.5.2 Start recognizing


Identifier UC2
Purpose Recognize user
Priority High
Pre-conditions Face the camera
Postconditions

Typical Course of Action
S# Actor Action System Response
1 Wait in front of camera Activate Camera
2 Wait Camera Takes a Picture
Alternate Course of Action
S# Actor Action System Response
1 Walk away from camera System goes to sleep state
2 Wait Camera doesn’t switch on, then display “Error Message”
3 Walk away from camera after reading “Error Message” System goes to sleep state

Table 2: UC-2

2.5.3 Accept/Reject
Identifier UC3

Purpose Accept/Reject User
Priority High
Pre-conditions Camera Takes a Picture
Postconditions

Typical Course of Action
S# Actor Action System Response
1 Stay in front of camera and wait Compare captured picture with database
2 Stay in front of camera and wait If matched, display “Success” message
3

Alternate Course of Action
S# Actor Action System Response
1 Stay in front of camera and wait If it doesn’t match, display “Failure” message
2
3 Walk away System goes to sleep mode
Table 3: UC-3

2.6 Use Case Diagram

2.7 Nonfunctional Requirements
The non-functional requirements concern the camera's performance and the safety, security, and quality attributes mentioned below.

2.7.1 Performance Requirements

According to the literature we have read, it takes about 30 ms to check whether a person is wearing a mask, and recognizing that person may take 2-10 seconds in total, based on current work on this problem.

2.7.2 Safety Requirements

There will be no physical interaction between the user and the machine; that is precisely the motive behind a masked facial recognition system, in which physical interaction is not merely limited but does not exist at all. Software safety is also important: system safety is concerned primarily with the management of hazards, i.e., their identification, assessment, removal, and control through scrutiny, design, and management procedures.

2.7.3 Security Requirements

Since our program contains sensitive data, we have to keep it secure. First and foremost, access control must be enforced: no one should be able to violate personal privacy when sensitive information about a large number of people is at stake. The information must be accessible to authorized people only, so authentication, authorization, and security monitoring are important. There will be a constant threat of hacking, so the program must be secured against it, and we also have to back up our data to keep it safe.

2.7.4 Additional Software Quality Attributes

Reliability: The program will produce a low error rate over a given number of input trials and will be reliable in terms of its output.
Efficiency: The program will fulfill its purpose of recognizing masked faces and detecting face masks efficiently, using a minimum of resources.

2.8 Other Requirements


We will be using different datasets to test our algorithms and measure their accuracy. The datasets are of limited size, so we will test on the best available dataset and compile additional data if needed. Other than that, we do not have any other requirements.

Chapter 3. System Design

Tools and Techniques:

In this project a variety of hardware, software, and simulation tools are used. Details of these tools are given below.

3.1 Application and Data Architecture


Our project aims to meet the objectives through the following steps (a minimal sketch of this pipeline is given after the list):
1. Recognize whether the person/image is wearing a mask.
2. Take the masked image as input and generate an artificial unmasked image using an autoencoder.
3. Recognize the artificially generated face using a normal face recognition system.
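The sketch below shows the shape of this pipeline in Python; detect_mask, unmask_autoencoder, and recognise_face are hypothetical stand-ins for the project's actual components rather than their real names.

import numpy as np

def identify(face_img: np.ndarray, detect_mask, unmask_autoencoder, recognise_face):
    """Return the predicted identity for a (possibly masked) face image.

    detect_mask(img) -> bool            # step 1: is a mask being worn?
    unmask_autoencoder(img) -> ndarray  # step 2: artificially generated unmasked face
    recognise_face(img) -> identity     # step 3: normal face recognition
    """
    if detect_mask(face_img):
        face_img = unmask_autoencoder(face_img)
    return recognise_face(face_img)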

3.2 Component Interactions and Collaborations


Hardware Tools:

There are many tools available for convolutional neural networks. The hardware used for our project consists of an HP EliteBook and a Lenovo laptop, both of which give excellent results for our project.

3.1.1 Main System

The system used for training the convolutional neural network is made by Lenovo. Our main system has high speed and excellent processing time.

3.1.2 CPU

Intel(R) Core(TM) i5-4200U CPU @ 1.60 GHz, 2.30 GHz

3.1.3 Main Memory

4 GB RAM (installed memory), 64-bit operating system, x64-based processor

3.1.4 GPU

Machine learning runs faster on a GPU, and GPUs can be used for both convolutional neural network training and inference. Training a model in machine learning requires a huge dataset, and a GPU processes large datasets much faster than a CPU. Because GPUs work so quickly on large datasets and on graphical representations, they are very well suited to our project and give great response times.

3.3 System Architecture
Software and Simulation Tools

We built a database, created a neural network model, and established the architecture; a considerable amount of software and several databases are used in our project for learning and training to speed up the work. In our project the dataset is large, which improves the response, reduces the running time, and increases the detection capacity.

3.2.2 Anaconda

Anaconda is a free and open-source distribution of the Python programming language. Machine learning applications involve processing large amounts of data. Anaconda provides several platforms for Python coding, which we used to build our masked facial recognition application; among them, the Spyder platform is the best for reporting debugging information and errors.

3.2.3 Python
Python is a programming language with platforms available on almost all operating systems for building mobile, web-based, and embedded systems. Python 3.7 is installed on our operating system along with the interpreter in the shell. We write our Python code in the Spyder IDE.

3.2.4 NVidia CUDA

The NVIDIA CUDA toolkit is used for training neural networks on a CUDA-enabled GPU. We have a huge amount of data, which is why we used this toolkit for faster processing; without it, training would be very time-consuming. The version used is 9.2, with compute capability 6.0.
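As a quick sanity check (a minimal sketch, not part of the project's actual codebase), TensorFlow can be asked whether the CUDA-enabled GPU is visible before training starts:

import tensorflow as tf

# Lists the CUDA-enabled GPUs that TensorFlow can use for training.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)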

3.2.5 Tensor flow

TensorFlow is an end-to-end open-source platform for machine learning that helps develop and train ML systems and can even get started quickly by running directly in the browser. It has a complete, flexible ecosystem of tools, libraries, and community resources for easily building and deploying ML-powered applications.

3.2.6 Numpy

NumPy is a library for scientific computing in Python. It provides a high-performance multidimensional array object and tools for working with these arrays. It is used to create the N-dimensional objects obtained from the images, and NumPy's built-in mathematical functions can operate on arrays and matrices.

3.2.7 Pyplot

Pyplot is a module of the Matplotlib library for Python and is used to plot graphs. In our project it is used to plot the training and validation accuracy returned by the TensorFlow/Keras training component, as sketched below.
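A minimal sketch, assuming `history` is the object returned by an earlier Keras model.fit() call and that the model was compiled with metrics=["accuracy"]:

import matplotlib.pyplot as plt

def plot_accuracy(history):
    # history.history holds the per-epoch metrics recorded by model.fit().
    plt.plot(history.history["accuracy"], label="training accuracy")
    plt.plot(history.history["val_accuracy"], label="validation accuracy")
    plt.xlabel("epoch")
    plt.ylabel("accuracy")
    plt.legend()
    plt.show()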

3.2.8 Spyder

Spyder is an open-source platform used for Python programming, and we build our Python programs in it. Spyder is a powerful scientific environment written in Python, by and for engineers and analysts, offering a unique combination of editing, analysis, and debugging features.
3.4 Architecture Evaluation
The application will be tested in the Anaconda Prompt environment. Since our project is research based, we are not focusing on the development of a software application; Keras is used as a front end to validate the accuracy of the algorithms. The only requirement to run this program is a system with good specifications, such as 8 GB+ of RAM and a good graphics card if the dataset still needs to be trained.
Most of the work is done in Keras because its neural network processing is quicker than that of competing environments. Furthermore, it is easy to use, and its libraries are more applicable and suitable for our algorithms; in comparison with its competitors, Keras has the best libraries and neural networks.
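The sketch below illustrates how such a validation run could look in Keras; the model file name, test directory, and preprocessing choice are assumptions for illustration, not the project's exact setup.

from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical file/directory names; replace with the actual trained model and test set.
model = load_model("mask_detector.h5")
test_gen = ImageDataGenerator(preprocessing_function=preprocess_input).flow_from_directory(
    "dataset/test", target_size=(224, 224), batch_size=32)
# Assumes the model was compiled with metrics=["accuracy"].
loss, accuracy = model.evaluate(test_gen)
print(f"Validation accuracy: {accuracy:.3f}")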

3.5 Component-External Entities Interface

Fig 6: This is a sequence diagram depicting component-component interactions

3.6 Screenshots/Prototype
3.6.1 Workflow

Fig 7: This is a swimlane diagram depicting component-external interface of our system

3.6.2 Screens

3.7 Other Design Details

Fig 9: This swimlane diagram depicts the workflow of the whole system. We start by pre-processing the dataset, followed by training and modelling it. The main operation of the system starts when an image is input. If no mask is detected, the face is compared with the existing images in the dataset and the person is recognized. If a mask is worn, the face of the masked person is reconstructed by an autoencoder and the reconstructed face is compared with the existing images in the dataset. Once a positive result is obtained, an ID/both images are shown on the screen.

Chapter 4. Test Specification and Results
4.1 Test Case Specification
<Fill out the following template for each test case, and add any additional test cases that were not part of the Phase 3 or 4 document. Provide separate tables for input data with each test case if applicable. Research-based projects may need to replace this test specification with their own test mechanism.>

Identifier TC-1
Related requirement(s) <Include use-case identifier(s) for functional requirement(s) and document section/sub-section number(s) for other requirement(s).>
Short description …
Pre-condition(s) …
Input data …
Detailed steps …
Expected result(s) …
Post-condition(s) …
Actual result(s)
Test Case Result

Table 6.1: TC-1

4.2 Summary of Test Results


<Provide in tabular form the defects found in each of your software modules. For example see Table 6.2
below.>
Module Name | Test cases run | Number of defects found | Number of defects corrected so far | Number of defects still to be corrected
Module 1 (for example Bill Calculation Module, Speech Processing Unit) | TC1, TC2, … | | |
Module 2 | … | | |
Complete System | <Sum all of the above> | <Sum all of the above> | <Sum all of the above> | <Sum all of the above>

Table 6.2: Summary of All Test Results

Chapter 5. Conclusion and Future Work
5.1 Project summary
<Include a brief summary of how the proposed solution is going to address / has addressed the problem statement specified in the introduction section. Provide an overview of the evaluations undertaken to show that the solution really solves the problem, with evidence from the results and findings.>

5.2 Problems faced and lessons learned


<Provide details of the problems faced during the one-year tenure of the project. Problems can be technical, financial, or motivational. List all the lessons learned.>

5.3 Future work


<Provide an overview of the recommendations and include the future directions required as part of the future work.>

References
<List all books, conference papers, journal articles, websites, etc. used in preparing the content of this document. All references should be alphabetically ordered.

Journals
Author name(s) (surname, initial), year of publication in parentheses, title of the article, name of the journal, volume number, issue number (in parentheses) followed by a colon and page numbers.

Books
Author name(s) (surname, initial), year of publication in parentheses, title of the book, publisher's name, place of publication, page numbers.

Reference from the Internet

Name of the author(s) (if known), title of the topic, followed by the complete web address.>

Appendix A Glossary
<Define all the terms necessary to properly interpret the document, including acronyms and abbreviations.>

Appendix B Deployment/Installation Guide


<Provide a list of instructions such that users of your system can deploy and install your system on their own>

Appendix C User Manual


<Provide a manual such that users of your system can use it after installation. In business software applications, where groups of users have access to only a subset of the application's full functionality, a user guide may be prepared for each group. There should be step-by-step instructions for each user class.>

Appendix D Student Information Sheet


Roll No | Name | Email Address (FC College) | Frequently Checked Email Address | Personal Cell Phone Number

Appendix E Plagiarism Free Certificate

This is to certify that I, ________________________, S/D/o _______________________, am the group leader of the FYP under registration no. _________________ at the Computer Science Department, Forman Christian College (A Chartered University), Lahore. I declare that my final year project report has been checked by my supervisor and that the similarity index is ________%, which is less than 20%, the acceptable limit set by HEC. The report is attached herewith as Appendix F. To the best of my knowledge and belief, the report contains no material previously published or written by another person except where due reference is made in the report itself.

Date: ____________ Name of Group Leader: ________________________ Signature: _____________

Name of Supervisor: _____________________ Co-Supervisor (if any):____________________


Designation: _____________________ Designation: _____________________
Signature: _____________________ Signature: _____________________

Senior Project Management Committee Representative: _____________________


Signature: _____________________

Appendix F Plagiarism Report

