

SOCIAL DISTANCING DETECTION

SYSTEM
PROJECT REPORT
Submitted in partial fulfilment of the requirements for the award of the
degree of
BACHELOR OF TECHNOLOGY
In
Electronics and Communication Engineering

SUBMITTED BY
Name/Roll no.: Aditya Raj (1816643)
Ankit Kumar (1816652)
Jay Prateek (1816665)

BATCH: 2018-2022

CHANDIGARH ENGINEERING COLLEGE


JHANJERI, MOHALI

Affiliated to I.K. Gujral Punjab Technical University, Jalandhar

CERTIFICATE

I hereby certify that the work presented in the project report entitled
"SOCIAL DISTANCING DETECTION SYSTEM", submitted in partial fulfilment of the
requirements for the award of the degree of B.Tech. in Electronics & Communication
Engineering to the Department of Electronics and Communication Engineering at
CHANDIGARH ENGINEERING COLLEGE, JHANJERI, Mohali, under I.K. GUJRAL PUNJAB
TECHNICAL UNIVERSITY, JALANDHAR, is an authentic record of my own work carried out
under the supervision of Dr. Sajjan Singh (H.O.D.), E.C.E. Department.

Signature of the Student

Roll No ……………….

Date:

Signature of the SUPERVISOR

Signature of H.O.D.

ACKNOWLEDGEMENT

Working on this project has been a great experience, and I extend sincere thanks to my
faculty members. It was a great opportunity to work under the guidance of Ms. Ashmeet
Kaur; it would not have been possible to carry out the work with such ease without her
immense help and motivation. I consider it my privilege to express my gratitude, respect,
and thanks to all of those who guided me in choosing this project. I express sincere
gratitude to Dr. Sajjan Singh (HOD, ECE) for his everlasting support towards the students
and for providing us this opportunity.

Ankit Kumar
1816652
Aditya Raj
1816643
Jay Prateek
1816665

ABSTRACT

Face mask detection has seen significant progress in the domains of image processing
and computer vision since the rise of the Covid-19 pandemic. Many face detection
models have been created using several algorithms and techniques. The approach
proposed in this report uses deep learning, TensorFlow, Keras, and OpenCV to detect
face masks. This model can be used for safety purposes since it is very resource-efficient
to deploy. The SSDMNV2 approach uses the Single Shot Multibox Detector as a face
detector and the MobileNetV2 architecture as the backbone of the classifier, which is very
lightweight and can even be used on embedded devices (such as the NVIDIA Jetson Nano or
Raspberry Pi) to perform real-time mask detection. The technique deployed in this report
gives an accuracy score of 0.9264 and an F1 score of 0.93. The dataset used in this
report, collected from various sources, can be used by other researchers for further
advanced models such as those for face recognition, facial landmark detection, and facial
part detection.

TABLE OF CONTENTS

Contents

Title page
Certificate
Acknowledgement
Abstract
Table of Contents

CHAPTER 1: Introduction
1.1 How well do face masks protect against coronavirus?
1.2 Introduction to Python

CHAPTER 2: Literature Review
2.1 Computer Vision
2.2 Introduction to TensorFlow
2.3 Tkinter

CHAPTER 3: Methodologies
3.1 Project Brief
3.2 Working
3.3 GUI

CHAPTER 4: Detection System
4.1 App Output
4.2 Social Distancing
4.3 Mask Detection

CHAPTER 5: Conclusion

References

CHAPTER 1

INTRODUCTION

The practice of social distancing means staying home and away from others as much as
possible to help prevent the spread of COVID-19. It encourages the use of online video and
phone communication instead of in-person contact.
As communities reopen and people are more often in public, the term "physical distancing"
(instead of social distancing) is being used to reinforce the need to stay at least 6 feet from
others, as well as to wear face masks. Historically, social distancing was also used
interchangeably to indicate physical distancing, which is defined below. However, social
distancing is a strategy distinct from the physical distancing behaviour.
Since the end of 2019, when infectious coronavirus disease (COVID-19) was reported for
the first time in Wuhan, it has become a public health issue in China and worldwide. This
pandemic has had devastating effects on societies and economies around the world, causing
a global health crisis. It is an emerging respiratory infectious disease caused by Severe
Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). All over the world, especially
during the third wave, COVID-19 has been a significant healthcare challenge. Many
shutdowns in different industries have been caused by this pandemic. In addition, sectors
such as maintenance projects and infrastructure construction have not been suspended owing
to their significant effect on people's routine life.
The virus mainly spreads among people who are in close contact with each other (within
6 feet) for a long period. When an infected person sneezes, coughs, or talks, the droplets
from their nose or mouth disperse through the air and infect nearby people. The droplets can
also travel into the lungs through the respiratory tract, where the virus starts killing lung
cells. Recent studies show that individuals who are infected with the virus but have no
symptoms also play a part in its spread (W. C. D. C. Dashboard). Therefore, it is necessary
to maintain at least 6 feet of distance from others, even if people do not have any symptoms.

Multiple Mask Detection:

Single Mask Detection:
1.1 How well do face masks protect against coronavirus?

Face mask use by the general public for limiting the spread of the COVID-19 pandemic is
controversial, though increasingly recommended, and the potential of this intervention is
not well understood. We develop a compartmental model for assessing the community-
wide impact of mask use by the general, asymptomatic public, a portion of which may be
asymptomatically infectious. Model simulations, using data relevant to COVID-19
dynamics in the US states of New York and Washington, suggest that broad adoption of
even relatively ineffective face masks may meaningfully reduce community transmission
of COVID-19 and decrease peak hospitalizations and deaths. Moreover, mask use
decreases the effective transmission rate in nearly linear proportion to the product of
mask effectiveness (as a fraction of potentially infectious contacts blocked) and coverage
rate (as a fraction of the general population), while the impact on epidemiologic
outcomes (death, hospitalizations) is highly nonlinear, indicating masks
could synergize with other non-pharmaceutical measures. Notably, masks are found to be
useful with respect to both preventing illness in healthy persons and preventing
asymptomatic transmission. Hypothetical mask adoption scenarios, for Washington and
New York state, suggest that immediate near universal (80%) adoption of moderately
(50%) effective masks could prevent on the order of 17–45% of projected deaths over
two months in New York, while decreasing the peak daily death rate by 34–58%, absent
other changes in epidemic dynamics. Even very weak masks (20% effective) can still be
useful if the underlying transmission rate is relatively low or decreasing: In Washington,
where baseline transmission is much less intense, 80% adoption of such masks could
reduce mortality by 24–65% (and peak deaths 15–69%), compared to 2–9% mortality
reduction in New York (peak death reduction 9–18%). Our results suggest use of face
masks by the general public is potentially of high value in curtailing community
transmission and the burden of the pandemic. The community-wide benefits are likely to
be greatest when face masks are used in conjunction with other non-pharmaceutical
practices (such as social-distancing), and when adoption is nearly universal (nation-wide)
and compliance is high.

1.2 Introduction to Python

Python is a widely used high-level, general-purpose, interpreted, dynamic programming
language. Its design philosophy emphasizes code readability, and its syntax allows
programmers to express concepts in fewer lines of code than would be possible in languages
such as C++ or Java. The language provides constructs intended to enable clear programs
on both a small and a large scale. Python supports multiple programming paradigms,
including object-oriented, imperative, functional, and procedural styles. It features a
dynamic type system and automatic memory management and has a large and comprehensive
standard library. Python interpreters are available for installation on many operating
systems, allowing Python code execution on a wide variety of systems.

Scripting language

A scripting or script language is a programming language that supports scripts: programs
written for a special run-time environment that automate the execution of tasks that could
alternatively be executed one by one by a human operator.
Scripting languages are often interpreted rather than compiled. Primitives are usually the
elementary tasks or API calls, and the language allows them to be combined into more
complex programs. Environments that can be automated through scripting include software
applications, web pages within a web browser, the shells of operating systems (OS),
embedded systems, as well as numerous games.
A scripting language can be viewed as a domain-specific language for a particular
environment; in the case of scripting an application, this is also known as an extension
language. Scripting languages are also sometimes referred to as very high-level programming
languages, as they operate at a high level of abstraction, or as control languages.
Object-Oriented Programming Language

Object-oriented programming (OOP) is a programming paradigm based on the concept of
"objects", which may contain data, in the form of fields, often known as attributes; and
code, in the form of procedures, often known as methods. A distinguishing feature of objects
is that an object's procedures can access and often modify the data fields of the object with
which they are associated (objects have a notion of "this" or "self").
In OO programming, computer programs are designed by making them out of objects that
interact with one another. There is significant diversity in object-oriented programming,
but most popular languages are class-based, meaning that objects are instances of classes,
which typically also determine their type.

History
Python was conceived in the late 1980s, and its implementation was started in December
1989 by Guido van Rossum at CWI in the Netherlands as a successor to the ABC
language (itself inspired by SETL) capable of exception handling and interfacing with the
Amoeba operating system. Van Rossum is Python's principal author, and his continuing
central role in deciding the direction of Python is reflected in the title given to him by the
Python community, benevolent dictator for life (BDFL).

When we talk about the history of Python, we cannot miss the ABC programming language,
because it was ABC's influence that led to the design and development of the programming
language called Python.
In the early 1980s, Van Rossum worked at CWI (Centrum voor Wiskunde en Informatica) as
an implementer of the programming language ABC. Later at CWI, in the late 1980s, while
working on a new distributed operating system called Amoeba, Van Rossum started looking
for a scripting language with a syntax like ABC but with access to the Amoeba system
calls. So Van Rossum himself started designing a new, simple scripting language that could
overcome the flaws of ABC.
Van Rossum started developing the new language in the late 1980s and finally released the
first version of the programming language in 1991. This initial release included a module
system borrowed from Modula-3. Later on, this programming language was named 'Python'.

History of Python: Story behind the name


Often people assume that Python was named after a snake. Even the logo of the Python
programming language depicts two snakes, blue and yellow. But the story behind the naming
is somewhat different.
Back in the 1970s, there was a popular BBC comedy TV show called Monty Python's Flying
Circus, and Van Rossum happened to be a big fan of that show. So when the language was
developed, Rossum named the project 'Python'.

“Python is an experiment in how much freedom programmers need. Too much freedom
and nobody can read another's code; too little and expressiveness is endangered.”

7
Chapter 2

Literature Review

2.1 Computer Vision


Computer vision is the field concerned with how we can understand images and videos: how
they are stored, and how we can manipulate and retrieve data from them. Computer vision is
one of the foundations of artificial intelligence and plays a major role in self-driving
cars and robotics, as well as in photo correction apps.

OpenCV
OpenCV is a huge open-source library for computer vision, machine learning, and image
processing, and it plays a major role in the real-time operation that is so important in
today's systems. Using it, one can process images and videos to identify objects, faces, or
even human handwriting. When integrated with libraries such as NumPy, Python is capable of
processing the OpenCV array structure for analysis. To identify image patterns and their
various features, we use vector spaces and perform mathematical operations on these
features.
The first OpenCV version was 1.0. OpenCV is released under a BSD license and hence is
free for both academic and commercial use. It has C++, C, Python, and Java interfaces and
supports Windows, Linux, macOS, iOS, and Android. When OpenCV was designed, the main focus
was computational efficiency for real-time applications, so everything is written in
optimized C/C++ to take advantage of multi-core processing.

Applications of OpenCV: There are many applications which can be built using OpenCV;
some of them are listed below:

• Face recognition
• Automated inspection and surveillance
• People counting (foot traffic in a mall, etc.)
• Vehicle counting on highways along with their speeds
• Interactive art installations
• Anomaly (defect) detection in the manufacturing process (the odd defective products)
• Street-view image stitching
• Video/image search and retrieval
• Robot and driverless car navigation and control
• Object recognition
• Medical image analysis
• Movies: 3D structure from motion
• TV channel advertisement recognition

OpenCV Functionality

• Image/video I/O, processing, display (core, imgproc, highgui)
• Object/feature detection (objdetect, features2d, nonfree)
• Geometry-based monocular or stereo computer vision (calib3d, stitching, videostab)
• Computational photography (photo, video, superres)
• Machine learning & clustering (ml, flann)
• CUDA acceleration (gpu)

Image Processing
Image processing is a method of performing operations on an image in order to obtain an
enhanced image and/or to extract useful information from it. A basic definition of image
processing is: "Image processing is the analysis and manipulation of a digitized image,
especially in order to improve its quality."

Digital Image:
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial
(plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the
intensity or grey level of the image at that point.
In other words, an image is nothing more than a two-dimensional matrix (3-D in the case of
coloured images) defined by the mathematical function f(x, y); the value at any point gives
the pixel value at that point of the image, and the pixel value describes how bright that
pixel is and what colour it should be.
Image processing is basically signal processing in which the input is an image and the
output is either an image or characteristics associated with that image, according to the
requirement. Image processing basically includes the following three steps:

1. Importing the image
2. Analysing and manipulating the image
3. Output, in which the result can be an altered image or a report based on the image analysis

How Does a Computer Read an Image?

Consider the image below:

As humans, we can easily tell that it is the image of a person. But if we ask the computer
"is this a photo of a person?", the computer cannot answer, because it does not figure
this out on its own.

Reading an Image with OpenCV

The computer reads any image as a range of values between 0 and 255. For a colour image,
there are three primary channels: red, green, and blue.
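As a small illustrative sketch of this, OpenCV loads an image as a NumPy array of 8-bit
values; the file name below is a placeholder.

import cv2

img = cv2.imread("sample.jpg")   # BGR image as a NumPy array, or None if the file is missing
if img is None:
    raise FileNotFoundError("sample.jpg not found")

print(img.shape)    # (height, width, 3) for a colour image
print(img.dtype)    # uint8, so every channel value lies between 0 and 255
print(img[0, 0])    # blue, green and red values of the top-left pixel

cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()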

2.2 Introduction to TensorFlow
TensorFlow is an open-source software library. TensorFlow was originally developed by
researchers and engineers working on the Google Brain Team within Google’s Machine
Intelligence research organization for the purposes of conducting machine learning and
deep neural networks research, but the system is general enough to be applicable in a wide
variety of other domains as well!

Why is TensorFlow So Popular?

TensorFlow is well-documented and includes plenty of machine learning libraries. It offers
a few important functionalities and methods for the same.
TensorFlow is also called a “Google” product. It includes a variety of machine learning and
deep learning algorithms. TensorFlow can train and run deep neural networks for handwritten
digit classification, image recognition, word embedding and creation of various sequence
models.

Let us now consider the following important features of TensorFlow:

• It defines, optimizes, and calculates mathematical expressions easily with the help of
multi-dimensional arrays called tensors.
• It includes programming support for deep neural networks and machine learning techniques.
• It offers highly scalable computation across various data sets.
• TensorFlow uses GPU computing with automated management, and it also includes a unique
feature of optimizing the memory and the data used.

TensorFlow is basically a software library for numerical computation using data flow
graphs where:
• nodes in the graph represent mathematical operations.

• edges in the graph represent the multidimensional data arrays (called tensors)
communicated between them. (Please note that tensor is the central unit of data in
TensorFlow).

Consider the simple example sketched below: here, add is a node which represents an
addition operation, a and b are input tensors, and c is the resultant tensor.
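A minimal sketch of that graph in code (using eager TensorFlow 2.x; the constant values are
illustrative):

import tensorflow as tf

a = tf.constant(3.0)   # input tensor a
b = tf.constant(4.0)   # input tensor b
c = tf.add(a, b)       # the add node produces the resultant tensor c

print(c)               # tf.Tensor(7.0, shape=(), dtype=float32)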

This flexible architecture allows you to deploy computation to one or more CPUs or
GPUs in a desktop, server, or mobile device with a single API!

TensorFlow APIs
TensorFlow provides multiple APIs (Application Programming Interfaces). These can be
classified into 2 major categories:
1. Low-level API:
● Complete programming control
● Recommended for machine learning researchers
● Provides fine levels of control over the models
● TensorFlow Core is the low-level API of TensorFlow
2. High-level API:
● Built on top of TensorFlow Core
● Easier to learn and use than TensorFlow Core
● Makes repetitive tasks easier and more consistent between different users
● tf.contrib.learn is an example of a high-level API

TensorFlow Core
1. Installing TensorFlow
An easy-to-follow installation guide is available in the official TensorFlow documentation.
Once installed, you can verify a successful installation by running this command in the
Python interpreter:
import tensorflow as tf

2. The Computational Graph

Any TensorFlow Core program can be divided into two discrete sections:
● Building the computational graph. A computational graph is nothing but a series of
TensorFlow operations arranged into a graph of nodes.
● Running the computational graph. To actually evaluate the nodes, we must run the
computational graph within a session. A session encapsulates the control and state of the
TensorFlow runtime.

Now, let us write our very first TensorFlow program to understand the above concepts.
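A sketch of such a first program, written in the graph-and-session style described above.
This style belongs to TensorFlow 1.x; in TensorFlow 2.x it is reachable through the
compatibility module, which is what the sketch assumes.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# 1. Build the computational graph
node1 = tf.constant(3.0)
node2 = tf.constant(4.0)
node3 = tf.add(node1, node2)

# 2. Run the graph inside a session
with tf.compat.v1.Session() as sess:
    print(sess.run([node1, node2]))   # [3.0, 4.0]
    print(sess.run(node3))            # 7.0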

2.3 TKINTER
Tkinter is a Python binding to the Tk GUI toolkit. It is the standard Python interface to
the Tk GUI toolkit and is Python's de facto standard GUI. Tkinter is included with standard
GNU/Linux, Microsoft Windows, and macOS installs of Python.
The name Tkinter comes from "Tk interface". Tkinter was originally written by Steen Lumholt
and Guido van Rossum, and later revised by Fredrik Lundh. Tkinter is free software released
under a Python license.

Description
As with most other modern Tk bindings, Tkinter is implemented as a Python wrapper
around a complete Tcl interpreter embedded in the Python interpreter. Tkinter calls are
translated into Tcl commands, which are fed to this embedded interpreter, thus making it
possible to mix Python and Tcl in a single application.

There are several popular GUI library alternatives available, such as wxPython, PyQt,
PySide, Pygame, Pyglet, and PyGTK.

Some definitions
Window
This term has different meanings in different contexts, but in general it refers to a
rectangular area somewhere on the user's display screen.

Top-level window
A window which acts as a child of the primary window. It will be decorated with the
standard frame and controls for the desktop manager. It can be moved around the desktop
and can usually be resized.

Widget
The generic term for any of the building blocks that make up an application in a graphical
user interface.

Core widgets: the containers: frame, labelframe, toplevel, paned window. The buttons:
button, radiobutton, checkbutton (checkbox), and menubutton. The text widgets: label,
message, text. The entry widgets: scale, scrollbar, listbox, slider, spinbox, entry
(single-line), optionmenu, text (multi-line), and canvas (vector and pixel graphics).
Tkinter provides three modules that allow pop-up dialogs to be displayed: tk.messagebox
(confirmation, information, warning, and error dialogs), tk.filedialog (single file,
multiple file, and directory selection dialogs), and tk.colorchooser (colour picker).
Python 2.7 and Python 3.1 incorporate the "themed Tk" ("ttk") functionality of Tk. This
allows Tk widgets to be easily themed to look like the native desktop environment in which
the application is running, thereby addressing a long-standing criticism of Tk (and hence
of Tkinter). Some widgets are exclusive to ttk, such as the combobox, progressbar, and
treeview widgets.
Frame
In Tkinter, the Frame widget is the basic unit of organization for complex layouts. A frame
is a rectangular area that can contain other widgets.

Child and parent


When any widget is created, a parent–child relationship is created. For example, if you
place a text label inside a frame, the frame is the parent of the label.

A minimal application
Here is a minimal Python 3 Tkinter application with one widget:[7]
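A minimal sketch of such an application, with a single Label widget (the label text is
illustrative):

import tkinter as tk

root = tk.Tk()                                 # create the main (root) window
tk.Label(root, text="Hello, Tkinter").pack()   # add one widget
root.mainloop()                                # enter the event loop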

Process
There are four stages to creating a widget:[9]

Create
Create it within a frame.
Configure
Change the widget's attributes.
Pack
Pack it into position so that it becomes visible. Developers also have the option to use
.grid() (with row and column arguments, defaulting to 0, to position the widget in rows and
columns) and .place() (with relx/rely or x/y arguments to define coordinates in the frame or
window).
Bind
Bind it to a function or event.

These stages are often compressed, and the order can vary.
Create a UI Using Tkinter
Modern computer applications are user-friendly, and user interaction is not restricted to
console-based I/O. Applications have a more ergonomic graphical user interface (GUI) thanks
to high-speed processors and powerful graphics hardware. These applications can receive
input through mouse clicks and can let the user choose from alternatives with the help of
radio buttons, dropdown lists, and other GUI elements (widgets).
Such applications are developed using one of the various graphics libraries available. A
graphics library is a software toolkit with a collection of classes that define the
functionality of various GUI elements. These graphics libraries are generally written in
C/C++, and many of them have been ported to Python in the form of importable modules. Some
of them are listed below:
Tkinter is the Python interface to the Tcl/Tk GUI toolkit. This module is bundled with
standard distributions of Python for all platforms.
PyQt, the Python interface to Qt, is a very popular cross-platform GUI framework.
PyGTK is the module that ports Python to another popular GUI widget toolkit called GTK.
wxPython is a Python wrapper around wxWidgets, another cross-platform graphics library.
This chapter explains the use of Tkinter in developing GUI-based Python programs.

Basic GUI Application


GUI elements and their functionality are defined in the Tkinter module. The following code
demonstrates the steps in creating a UI.

First of all, import the tkinter module. After importing, set up the application object by
calling the Tk() function. This creates a top-level window (root) having a frame with a
title bar, a control box with the minimize and close buttons, and a client area to hold
other widgets. The geometry() method defines the width, height, and coordinates of the
top-left corner of the frame (all values are in pixels):

window.geometry("widthxheight+XPOS+YPOS")

The application object then enters an event-listening loop by calling the mainloop()
method, constantly waiting for any event generated on the elements in it. The event could be
text entered in a text field, a selection made from a dropdown or radio button, a single or
double click of the mouse, and so on. The application's functionality involves executing
appropriate callback functions in response to a particular type of event; event handling is
discussed later in this chapter. The event loop terminates when the close button on the
title bar is clicked.
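A small illustrative sketch of this window setup (the title, size, and offsets are
placeholder values):

import tkinter as tk

window = tk.Tk()
window.title("Hello Python")
window.geometry("300x200+10+20")   # width x height + x-offset + y-offset
window.mainloop()                  # start the event-listening loop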

All Tkinter widget classes are inherited from the Widget class. Let's add the most
commonly used widgets.

Button
A button can be created using the Button class. The Button class constructor requires a
reference to the main window and the options.
Signature: Button(window, attributes)
You can set the following important properties to customize a button (a short sketch
follows this list):

text: caption of the button
bg: background colour
fg: foreground colour
font: font name and size
image: image to be displayed instead of text
command: function to be called when the button is clicked
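As a small illustrative sketch of these options (caption, colours, font, and callback are
placeholder values):

import tkinter as tk

window = tk.Tk()
window.geometry("300x200+10+20")

btn = tk.Button(window, text="Click me", bg="blue", fg="white",
                font=("Helvetica", 12),
                command=lambda: print("Button clicked"))
btn.place(x=80, y=100)

window.mainloop()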

Label
A label can be created in the UI using the Label class. The Label constructor requires the
top-level window object and option parameters; the option parameters are similar to those of
the Button object.

The following adds a label to the window, with its caption displayed in red using a 16-point
Helvetica font (a label of this kind appears in the combined example at the end of this
section).

Entry
This widget renders a single-line text box for accepting user input. For multi-line text
input, use the Text widget. Apart from the properties already mentioned, the Entry class
constructor accepts the following:
bd: border size of the text box; the default is 2 pixels.
show: to turn the text box into a password field, set the show property to "*".
The following code adds the text field:
txtfld = Entry(window, bg='black', fg='white', bd=5)
The following example creates a window with a button, label, and entry field.
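A runnable sketch of such a window; the captions, colours, and placement coordinates are
illustrative values.

import tkinter as tk

window = tk.Tk()
window.title("Hello Python")
window.geometry("400x250+10+10")

btn = tk.Button(window, text="This is Button widget", fg="blue")
btn.place(x=80, y=100)

lbl = tk.Label(window, text="This is Label widget", fg="red",
               font=("Helvetica", 16))
lbl.place(x=60, y=50)

txtfld = tk.Entry(window, bg="black", fg="white", bd=5)
txtfld.place(x=80, y=150)

window.mainloop()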

The above example will create the following window.

CHAPTER 3

Methodologies

3.1 Project brief

FLOW CHART

Application Directory
An application directory is a grouping of software code, help files, and resources that
together comprise a complete software package but are presented to the user as a single
object. An application director oversees all activities related to application development:
they coordinate tasks and supervise work phases related to the implementation of computer
applications, and analysing the performance of applications is also their responsibility.

Requirements
● Notepad++
● Anaconda
● PyCharm

A) Notepad++: Notepad++ is a text and source code editor for use with Microsoft Windows.
It supports tabbed editing, which allows working with multiple open files in a single
window. The product's name comes from the C increment operator. Notepad++ is distributed as
free software.

B) Anaconda: Anaconda is a distribution of the Python and R programming languages for
scientific computing that aims to simplify package management and deployment. The
distribution includes data-science packages suitable for Windows, Linux, and macOS.

C) PyCharm: PyCharm is an integrated development environment used in computer programming,
specifically for the Python language. It is developed by the Czech company JetBrains.

Language used:
● Python

3.2 Working

MASK DETECTION

The COVID-19 mask detector we are building here could potentially be used to help ensure
your safety and the safety of others (but we leave that to the medical professionals to
decide on, implement, and distribute in the wild).

Given the trained COVID-19 face mask detector, we proceed to implement two additional
Python scripts used to:

• Detect COVID-19 face masks in images
• Detect face masks in real-time video streams

The main Python program is main.py; execution starts there and then passes through the
algorithms that enable the program to detect faces in video frames. In the first step, the
detected images are converted into arrays of coordinates, which are then compared against
an existing dataset of mask and no-mask images; we have not added those images manually.
Here OpenCV and TensorFlow come into play: these modules help obtain the raw image data
from huge open-source image datasets. In this way, by comparing each frame, the program
detects mask and no-mask inputs and shows the respective output on the screen. A minimal
sketch of this loop is given below.
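The following is a minimal sketch of the frame-by-frame detection loop described above, not
the project's full main.py. It assumes a Keras mask classifier saved as
"mask_detector.model" with a two-value (mask, no-mask) output, plus the Haar cascade file
used in main.py; the file names and the output order are illustrative assumptions.

import cv2
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.models import load_model

face_model = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
mask_model = load_model("mask_detector.model")   # hypothetical model file

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_model.detectMultiScale(gray, 1.1, 4):
        # Crop the face, convert to RGB and resize to the classifier's input size
        face = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        face = cv2.resize(face, (224, 224)).astype("float32")
        face = preprocess_input(np.expand_dims(face, axis=0))
        # Assumes the classifier outputs (mask, no_mask) probabilities
        mask_prob, no_mask_prob = mask_model.predict(face)[0]
        label = "Mask" if mask_prob > no_mask_prob else "No Mask"
        colour = (0, 255, 0) if label == "Mask" else (0, 0, 255)
        cv2.rectangle(frame, (x, y), (x + w, y + h), colour, 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, colour, 2)
    cv2.imshow("Mask Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()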


Why Social Distancing?
Social distancing is a method used to control the spread of contagious diseases. As the
name suggests, social distancing implies that people should physically distance themselves
from one another, reducing close contact and thereby reducing the spread of a contagious
disease (such as coronavirus).

When the novel coronavirus (Covid-19) pandemic emerged, the spread of the virus left the
public anxious, as no effective cure was available. The World Health Organization (WHO)
declared Covid-19 a pandemic due to the increase in the number of cases reported around the
world [1]. To contain the pandemic, many countries implemented a lockdown, where the
government required citizens to stay at home during this critical period. Public health
bodies such as the Centers for Disease Control and Prevention (CDC) had to make it clear
that the most effective way to slow down the spread of Covid-19 is to avoid close contact
with other people [2]. To flatten the curve of the Covid-19 pandemic, citizens around the
world are practicing physical distancing.

To implement social distancing, group activities and congregations such as travel,
meetings, gatherings, workshops, and prayers were banned during the quarantine period.
People are encouraged to use phone and email to manage and conduct events as much as
possible to minimize person-to-person contact. To further contain the spread of the virus,
people are also advised to follow hygiene measures such as frequently washing hands, wearing
masks, and avoiding close contact with people who are ill. However, there is a difference
between knowing what to do to reduce the transmission of the virus and putting it into
practice.
The world has not yet fully recovered from this pandemic, and a vaccine that can
effectively treat Covid-19 is yet to be discovered. However, to reduce the impact of the
pandemic on the country's economy, several governments have allowed a limited number of
economic activities to resume once the number of new Covid-19 cases has dropped below a
certain level. As these countries cautiously restart their economic activities, concerns
have emerged regarding workplace safety in the new post-Covid-19 environment. To reduce the
possibility of infection, it is advised that people should avoid any person-to-person
contact such as shaking hands and should maintain a distance of at least 1 metre from each
other.

In Malaysia, the Ministry of Health Malaysia (MOHM) has recommended several disease
prevention measures for workplaces, individuals, and families at home, schools, childcare
centres, and senior living facilities [3]. These measures include implementing social
distancing measures, increasing physical space between workers at the workplace,
staggering work schedules, decreasing social contacts in the workplace, limiting large
work-related gatherings, limiting non-essential work travel, performing regular health
checks of staff and visitors entering buildings, reducing physical activities especially for
organizations that have staff in the high-risk category, and conducting company events or
activities online.
Individuals, communities, businesses, and healthcare organizations are all part of a
community with a responsibility to mitigate the spread of the Covid-19 disease. In reducing
the impact of this coronavirus pandemic, practicing social distancing and self-isolation
have been deemed the most effective ways to break the chain of infections after restarting
economic activities. In fact, it has been observed that many people ignore public health
measures, especially with respect to social distancing. It is understandable that, given
people's excitement to start working again, they sometimes tend to forget or neglect social
distancing. Hence, this work aims to facilitate the enforcement of social distancing by
providing automated detection of social distance violations in workplaces and public areas
using a deep learning model. In the area of machine learning and computer vision, there are
different methods that can be used for object detection, and these methods can also be
applied to detect the social distance between people. The following points summarize the
main components of this approach:
1. Deep learning, which has gained attention in object detection, is used for human
detection.
2. A social distancing detection tool is developed that can measure the distance between
people to keep them safe.
3. The classification results are evaluated by analyzing real-time video streams from the
camera.

Related Work
This section highlights some of the related works on human detection using deep learning.
A number of recent works on object classification and detection that involve deep learning
are also discussed. The state-of-the-art review mainly focuses on current research on object
detection using machine learning. Human detection can be considered an object detection
problem in computer vision: the classification and localization of human shapes in video
imagery. Deep learning has become a research trend in multi-class object recognition and
detection in artificial intelligence and has achieved outstanding performance on challenging
datasets. Nguyen et al. presented a comprehensive analysis of the state of the art on recent
developments and challenges in human detection [4]. The survey mainly focuses on human
descriptors, machine learning algorithms, occlusion, and real-time detection. For visual
recognition, techniques using deep convolutional neural networks (CNNs) have been shown to
achieve superior performance on many image recognition benchmarks.
A deep CNN is a deep learning architecture based on multilayer perceptron neural networks,
containing several convolutional layers, sub-sampling layers, and fully connected layers.
The weights in all layers of the network are trained for each object classification task
based on its dataset. For object detection in images, the CNN model is one of the supervised
feature learning methods in deep learning that is robust at detecting objects in different
scenarios. CNNs have achieved great success in large-scale image classification tasks thanks
to recent high-performance computing systems and large datasets such as ImageNet. Different
CNN models for object detection and localization have been proposed in terms of network
architecture, algorithms, and new ideas. In recent years, CNN models such as AlexNet, VGG16,
InceptionV3, and ResNet-50 have been trained to achieve outstanding results in object
recognition. The success of deep learning in object recognition is due to its neural network
structure, which is capable of self-constructing the object descriptor and learning
high-level features that are not directly provided in the dataset.
Current state-of-the-art object detectors with deep learning have their pros and cons in
terms of accuracy and speed, and an object might appear at different spatial locations and
aspect ratios within the image. Hence, real-time object detection algorithms based on CNN
models, such as R-CNN and YOLO, have been developed to detect multiple classes in different
regions of an image. YOLO (You Only Look Once) is a prominent technique for deep CNN-based
object detection in terms of both speed and accuracy.
Adapting this idea, we present a computer vision technique for detecting people via a
camera installed at the roadside or in a workspace. The camera field of view covers the
people walking in a specified space. The number of people in an image or video can be
detected with bounding boxes via these existing deep CNN methods, where the YOLO method is
employed to process the video stream taken by the camera. By measuring the Euclidean
distance between people, the application highlights whether there is sufficient social
distance between people in the video.

FLOW CHART

We can use OpenCV, computer vision, and deep learning to implement social distancing
detectors.

The steps to build a social distancing detector include:

• Apply object detection to detect all people (and only people) in a video stream
(building an OpenCV people counter)
• Compute the pairwise distances between all detected people
• Based on these distances, check whether any two people are fewer than N pixels apart
• For the most accurate results, calibrate your camera through intrinsic/extrinsic
parameters so that you can map pixels to measurable units
• An easier (but less accurate) alternative is to apply triangle-similarity calibration
• Both of these methods can be used to map pixels to measurable units
• Finally, if you cannot or do not want to apply camera calibration, you can still build a
social distancing detector, but you will have to rely strictly on pixel distances, which
will not necessarily be as accurate

For the sake of simplicity, our OpenCV social distancing detector implementation relies on
pixel distances; extending the implementation with full calibration is left as an exercise
for the reader. A minimal sketch of the pixel-distance check follows.
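Below is a minimal sketch of the pixel-distance check, assuming that people is a list of
bounding boxes (x, y, w, h) returned by a person detector such as YOLO; the MIN_DISTANCE
threshold is an illustrative value.

from itertools import combinations
import math

MIN_DISTANCE = 50  # illustrative threshold, in pixels

def bottom_centre(box):
    # Use the bottom-centre of the bounding box as the person's ground position
    x, y, w, h = box
    return (x + w / 2.0, y + h)

def find_violations(people):
    # Return the indexes of people closer than MIN_DISTANCE pixels to someone else
    points = [bottom_centre(b) for b in people]
    violations = set()
    for (i, p1), (j, p2) in combinations(enumerate(points), 2):
        if math.dist(p1, p2) < MIN_DISTANCE:
            violations.update((i, j))
    return violations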

Formula Used

Distance Measurement
In this step of the pipeline, the location of the bounding box for each person (x, y, w, h)
in the perspective view is detected and transformed into a top-down view. For each
pedestrian, the position in the top-down view is estimated based on the bottom-centre point
of the bounding box. The distance between every pedestrian pair can then be computed in the
top-down view, and the distances are scaled by the scaling factor estimated from the camera
view calibration. Given the positions of two pedestrians in the top-down view as (x1, y1)
and (x2, y2) respectively, the distance between the two is the Euclidean distance
d = √((x2 - x1)² + (y2 - y1)²), multiplied by the calibration scale factor; a short sketch
of this mapping follows.
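A rough sketch of this mapping, assuming a 3x3 homography M (for example obtained from
cv2.getPerspectiveTransform) that maps the camera view to the top-down view, and a
calibration scale factor in metres per pixel; both are assumptions, not values taken from
this project.

import cv2
import numpy as np

def top_down_point(box, M):
    # Map the bottom-centre of a bounding box (x, y, w, h) into the top-down view
    x, y, w, h = box
    pt = np.array([[[x + w / 2.0, y + h]]], dtype=np.float32)
    return cv2.perspectiveTransform(pt, M)[0][0]

def real_distance(box1, box2, M, metres_per_pixel):
    # Euclidean distance in the top-down view, scaled to real-world units
    p1 = top_down_point(box1, M)
    p2 = top_down_point(box2, M)
    return float(np.linalg.norm(p1 - p2)) * metres_per_pixel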

Result and Discussion

The video shows pedestrians walking on a public street. In this work, the video frame is
fixed at a specified angle to the street. The perspective view of the video frame is
transformed into a top-down view for more accurate distance estimation. The social
distancing detection results are shown both in the video frame and in the top-down view,
with the sequences depicted from top to bottom. The points represent each pedestrian for
social distancing detection: red points represent pedestrians whose distance from another
pedestrian is below the acceptable threshold, and green points represent pedestrians who
keep a safe distance from other pedestrians. However, a number of detection errors are also
observed. These errors are possibly due to pedestrians walking so close to one another that
they overlap in the camera view. The precision of the distance measurement between
pedestrians is also affected by the pedestrian detection algorithm: the YOLO algorithm can
detect half of a pedestrian's body as an object and still show a bounding box, in which case
the pedestrian position estimated from the middle of the bottom line of that bounding box is
less precise. To reduce these detection errors, the proposed methodology was improved by
adding a quadrilateral box to observe only the appointed region in the image, so that only
pedestrians walking within the specified space are counted for people density measurement.

Source Code

main.py code

import tkinter as tk
import tkinter.font as font
import tkinter.ttk as ttk
from tkinter import *

import tensorflow as tensorflow
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
from imutils.video import VideoStream
import numpy as np
import argparse
import imutils
import time
import cv2
import os
from scipy.spatial import distance as dist

# Webcam capture and the Haar cascade face detector
cap = cv2.VideoCapture(0)
face_model = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Main application window
window = tk.Tk()
window.title("SOCIAL DISTANCING DETECTION SYSTEM")
window.geometry('1920x1080')
window.configure(background='#AF7AC5')
window.grid_rowconfigure(0, weight=1)
window.grid_columnconfigure(0, weight=1)

# Title banner shown at the top of the window
message = tk.Label(
    window, text="SOCIAL DISTANCING DETECTION SYSTEM",
    bg="#52BE80", fg="black", width=50,
    height=3, font=('times', 30, 'bold'))

message.place(x=100, y=20)

3.3 GUI

Python has many frameworks for developing GUIs, and some of the most popular Python GUI
frameworks are listed below:

• PyQt5. Developed by: Riverbank Computing.
• Tkinter. Developed by: Fredrik Lundh.
• Kivy. Developed by: the Kivy Organization.
• wxPython. Developed by: Robin Dunn.

We have used Tkinter for developing the GUI.

Modules

Video Stream Output

Face Detection using Haar cascades

Chapter 4
DETECTION SYSTEM

4.1 App Output

4.2 SOCIAL DISTANCING

4.3 Mask Detection

Chapter 5

CONCLUSION

5.1 Benefits of this project:

1. Reassuring employees: with 41% of employees not wanting to return to the office until
the workplace is made safe, having social distancing detection in the workplace is a great
way of reassuring staff that the workplace has been made safe for their benefit. The
solution is also safer than thermal cameras alone, since those without a fever can also be
contagious.
2. Utilising space: with the detection software you will have the ability to see which
areas gain the most traction and are the office's 'hotspots'. From this data you will then
be able to put the most relevant safety measures in place.
3. Monitoring and measuring: the technology is not just for the office. For example, at a
factory where employees work very close to each other, the software can be integrated into
the existing security camera system, allowing the working environment to be monitored and
highlighting people whose distancing is below the minimum acceptable distance.
4. Queue monitoring: for retail, healthcare, and other industries where queueing is
unavoidable, queue monitoring can be integrated into your cameras. The cameras will then
have the ability to monitor and detect whether people are abiding by the social distancing
guidelines. The solution can also be set up to work with automated barriers and digital
signage for real-time notifications and health and safety information.

5.2 Future Scope

This method was developed to efficiently detect people who are not wearing face masks or
not maintaining social distance, and to notify officials by email. As a future enhancement,
we can predict or detect the times at which a place gets crowded, and heat maps can be
plotted in an accurate way. In this project we have used recent techniques from the field of
computer vision together with OpenCV. A custom dataset can be created using the Google/Bing
Search API, Kaggle datasets, and the RMFD dataset. The proposed system correctly detects the
presence of a face mask and whether a person is at a safe distance. The system is accurate,
since we have used the OpenCV pipeline for detecting face masks and the Euclidean distance
formula for distance computation. This makes it easy to deploy our model to embedded systems
like the Raspberry Pi, Google Coral, etc. We believe that this approach will enhance the
safety of individuals during the pandemic.

REFERENCES

● Hands-On Machine Learning with Scikit-Learn and TensorFlow (2nd Edition) by Aurélien Géron
● The Hundred-Page Machine Learning Book by Andriy Burkov
● Building Machine Learning Powered Applications: Going from Idea to Product by
Emmanuel Ameisen
● Grokking Deep Learning by Andrew W. Trask
● Deep Learning with Python by Francois Chollet
● Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville
● Reinforcement Learning: An Introduction (2nd Edition) by Richard S. Sutton,
Andrew G. Barto
● Deep Reinforcement Learning Hands-On (2nd Edition) by Maxim Lapan
● TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power
Microcontrollers by Pete Warden & Daniel Situnayake
● Learning From Data by Yaser S. Abu-Mostafa, Malik Magdon-Ismail, Hsuan-Tien Lin.
● The Book of Why by Judea Pearl, Dana Mackenzie.
● Rebooting AI by Gary Marcus & Ernest Davis
● Machine Learning Yearning by Andrew Ng.
● An Introduction to Machine Learning Interpretability (2nd Edition) by Patrick
Hall & Navdeep Gill
● Interpretable Machine Learning by Christoph Molnar.
● Neural Networks and Deep Learning by Michael Nielsen.
● Generative Deep Learning by David Foster
● Deep Learning for Coders with fastai and PyTorch: AI Applications
Without a PhD by Jeremy Howard & Sylvain Gugger
● The Machine Learning Engineering Book by Andriy Burkov
● Machine Learning Interviews Book by Chip Huyen
● Human-in-the-Loop Machine Learning by Robert Munro
