Stress Detection in IT Professionals by Image Processing and Machine Learning


A

Minor Project Report

On

STRESS DETECTION IN IT PROFESSIONALS


BY IMAGE PROCESSING AND
MACHINE LEARNING
Submitted to JNTU HYDERABAD
In Partial Fulfillment of the requirements for the Award of Degree of

BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY

Submitted
By

M.DIVYA (188R1A1237)
D. ABHISHEK (188R1A1212)
K. NARESH (188R1A1230)
P. SPARSHA (188R1A1245)

Under the Esteemed guidance of


Mrs. K. PRANATHI
Assistant Professor, Department of IT

Department of Information Technology

CMR ENGINEERING COLLEGE


(Accredited by NAAC, Approved by AICTE, NEW DELHI, Affiliated to JNTU,
Hyderabad) Kandlakoya, Medchal Road, R.R. Dist. Hyderabad 501 401
2021-2022
CMR ENGINEERING COLLEGE
(Accredited by NAAC, Approved by AICTE, NEW DELHI, Affiliated to JNTU,
Hyderabad) (Kandlakoya, Medchal Road, R.R. Dist. Hyderabad-501 401)

Department of Information Technology

CERTIFICATE

This is to certify that the project entitled “STRESS DETECTION IN IT

PROFESSIONALS BY IMAGE PROCESSING AND MACHINE


LEARNING” is a bonafide work carried out by
M.DIVYA (188R1A1237)
D.ABHISHEK (188R1A1212)
K. NARESH (188R1A1230)
P. SPARSHA (188R1A1245)
in partial fulfillment of the requirement for the award of the degree of BACHELOR OF
TECHNOLOGY in INFORMATION TECHNOLOGY from CMR Engineering College,
affiliated to JNTU, Hyderabad, under our guidance and supervision.
The results presented in this project have been verified and are found to be satisfactory. The
results embodied in this project have not been submitted to any other university for the award
of any other degree or diploma.

Internal Guide Project Coordinator Head of the Department


Mrs. K. PRANATHI Mr. V. RAMESH Dr. MADHAVI PINGILI
Assistant Professor Assistant Professor HOD - IT Department
Department of IT Department of IT CMR Engineering College
CMREC, Hyderabad CMREC, Hyderabad Hyderabad
DECLARATION

This is to certify that the work reported in the present project entitled “STRESS
DETECTION IN IT PROFESSIONALS BY IMAGE PROCESSING AND MACHINE
LEARNING” is
a record of bonafide work done by us in the Department of Information Technology, CMR
Engineering College, JNTU Hyderabad. The reports are based on the project work done
entirely by us and not copied from any other source. We submit our project for further
development by any interested students who share similar interests, so that the project may be
improved in the future.

The results embodied in this project report have not been submitted to any other University or
Institute for the award of any degree or diploma to the best of our knowledge and belief.

M.DIVYA (188R1A1237)
D.ABHISHEK (188R1A1212)
K.NARESH (188R1A1230)
P.SPARSHA (188R1A1245)
ACKNOWLEDGMENT

We are extremely grateful to Dr. A. Srinivasula Reddy, Principal and Dr. Madhavi Pingili,
HOD, Department of IT, CMR Engineering College for their constant support.

We are extremely thankful to Mrs. K. Pranathi, Assistant Professor, Internal Guide,


Department of IT, for her constant guidance, encouragement and moral support throughout
the project.

We would be failing in our duty if we did not gratefully acknowledge the authors of the
references and other literature referred to in this project.

We express our thanks to all staff members and friends for all the help and co-ordination
extended in bringing out this project successfully in time.

Finally, we are very much thankful to our parents who guided us for every step.

M.DIVYA (188R1A1237)
D.ABHISHEK (188R1A1212)
K.NARESH (188R1A1230)
P.SPARSHA (188R1A1245)
CONTENTS

TOPIC PAGE NO

ABSTRACT I

LIST OF FIGURES II

1. INTRODUCTION 1
Introduction & Objectives 1
Different Types of Rice Plant Diseases 2
Project Objectives 3
Purpose of the Project 4
Existing System & Disadvantages 4
Proposed System with Features 5

2. LITERATURE SURVEY 6

3. SOFTWARE REQUIREMENT ANALYSIS 8

Problem Specification 8
Modules and their Functionalities 8
Functional Requirements 12
Non-Functional Requirements 12
Feasibility Study 12

4. SOFTWARE & HARDWARE REQUIREMENTS 14

Software Requirements 14
Hardware Requirements 14

5. SOFTWARE DESIGN 15
System Architecture 15
Data Flow Diagrams 17
E-R Diagrams 24
UML Diagrams 25

6. CODING AND IMPLEMENTATION 35
Source Code 35
Implementation 61
Project Structure 54
In Anaconda Prompt 56

7. SYSTEM TESTING 59
Types of System Testing 59
Testing Strategies 59

8. OUTPUT SCREENS 61

9. CONCLUSION 63

10. FUTURE ENHANCEMENTS 64

11. REFERENCES 65
ABSTRACT

The main objective of our project is to detect stress in IT professionals using various machine
learning and image processing techniques. Our system is an upgraded version of older stress
detection systems, which excluded live detection and personal counseling. This system
comprises live detection and periodic analysis of employees, detecting their physical as well as
mental stress levels, and provides proper remedies for managing stress through a survey form
administered periodically. Our system mainly focuses on managing stress and making the
working environment healthy and spontaneous for employees, to get the best out of them
during working hours.

I
LIST OF FIGURES

S.NO FIGURE NO DESCRIPTION PAGE NO

1 1.1.1.1 Brownspot 2

2 1.1.1.2 Leaf Blast 3

3 1.1.1.3 Hispa 3

6 5.1 System Architecture 15

7 5.2 Data flow diagram 18

8 5.3 E-R diagram 25

9 5.4.1 Sequence diagram 26

10 5.4.2 Usecase diagram 27

11 5.4.3 Activity diagram 28

12 6.2.1 Install tensorflow 53

13 6.2.2 Install keras 54

14 6.2.3 Install opencv 54

15 6.2.4 Install pillow 54

16 6.2.1.1 Annotations Structure 55

17 6.2.1.2 Tags 56

18 6.2.1.3 Draw the bounding boxes 56

19 6.2.2.1 Convert to yolo format 57

20 6.2.2.2 Download and convert YOLO pre-trained 57

21 6.2.2.3 Training yolov3 58

22 8.1 Testing the images 61

23 8.2 Display output 61

24 8.3 BrownSpot 62

25 8.4 Hispa 62

26 8.5 Leaf Blast 62

27 8.6 Healthy 62

III
1. INTRODUCTION

1.1 Introduction & Objectives

Stress management systems play a significant role in detecting the stress levels that
disrupt our socio-economic lifestyle. According to the World Health Organization
(WHO), stress is a mental health problem affecting the lives of one in four citizens.
Human stress leads to mental as well as socio-fiscal problems, lack of clarity in work,
poor working relationships, depression and, in severe cases, suicide. This demands that
counseling be provided to help stressed individuals cope with stress. Stress avoidance is
impossible, but preventive action helps to overcome it. Currently, only medical and
physiological experts can determine whether a person is in a depressed (stressed) state
or not. One traditional method of detecting stress is based on a questionnaire. This
method depends entirely on the answers given by individuals, and people are often
reluctant to say whether they are stressed or normal. Automatic detection of stress
minimizes the risk of health issues and improves the welfare of society. This paves the
way for a scientific tool that uses physiological signals to automate the detection of
stress levels in individuals. Stress detection is discussed in various literatures, as it is a
significant societal contribution that enhances the lifestyle of individuals. Ghaderi et al.
analysed stress using respiration, heart rate (HR), facial electromyography (EMG), and
galvanic skin response (GSR) foot and hand data, concluding that features pertaining to
the respiration process are substantial in stress detection. Maria Viqueira et al. describe
mental stress prediction using standalone stress-sensing hardware with GSR as the only
physiological sensor. David Liu et al. proposed research to predict stress levels solely
from the electrocardiogram (ECG). The efficacy of multimodal sensors in detecting the
stress of working people has also been discussed experimentally, employing sensor data
such as pressure distribution, HR, blood volume pulse (BVP) and electrodermal activity
(EDA). An eye-tracker sensor is also used, which systematically analyses eye
movements under stressors such as the Stroop word test and information-pickup tasks.
Other authors performed perceived stress detection with a set of non-invasive sensors
collecting physiological signals such as ECG, GSR, electroencephalography (EEG),
EMG, and saturation of peripheral oxygen (SpO2). Continuous stress levels are
estimated using physiological sensor
data such as GSR, EMG, HR and respiration. The stress
detection is carried out effectively using skin conductance level (SCL), HR and facial
EMG sensors by creating ICT-related stressors. Automated stress detection is made
possible by several pattern-recognition algorithms. Each sensor's data is compared with
a stress index, a threshold value used for detecting the stress level. One study collected
data from 16 individuals under four stressor conditions and tested the data with a
Bayesian network, the J48 algorithm and the Sequential Minimal Optimization (SMO)
algorithm for predicting stress. Statistical features of heart rate and GSR, frequency-
domain features of heart rate and its variability (HRV), and the power spectral
components of ECG were used to determine the stress levels. Various features are
extracted from commonly used physiological signals such as ECG, EMG, GSR and
BVP, measured using appropriate sensors, and selected features are grouped into
clusters for further detection of anxiety levels. It has been concluded that smaller
clusters result in better balance in stress detection using the selected General Regression
Neural Network (GRNN) model. This shows that different combinations of the features
extracted from the sensor signals provide better solutions for predicting the continuous
anxiety level. Frequency-domain features such as LF power (low-frequency power from
0.04 Hz to 0.15 Hz), HF power (high-frequency power from 0.15 Hz to 0.4 Hz) and
LF/HF (the ratio of LF to HF), together with time-domain features such as the mean,
median and standard deviation of the heart signal, are considered for continuous real-
time stress detection. Classification using a decision tree such as PLDA has been
performed with two stressors, namely a pickup task and a Stroop-based word test, where
the authors concluded that stressor-based classification proves unsatisfactory. In 2016,
Gjoreski et al. created laboratory-based stress detection classifiers from ECG signals and
HRV features. Features of ECG are analysed using a GRNN model to measure the stress
level. Heart rate variability (HRV) features and RR-interval features (the cycle-length
variability between two successive R peaks) are used to classify the stress level. It is
noticed that the Support Vector Machine (SVM) was used predominantly as the
classification algorithm, owing to its generalization ability and sound mathematical
background. Various kernels were used to develop SVM models, and it was concluded
that a linear SVM on both ECG frequency features and HRV features performed best,
outperforming the other model choices.
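The frequency- and time-domain features described above can be sketched in a few lines of Python. This is a hedged illustration only: the synthetic heart signal, its 4 Hz sampling rate and the sinusoidal components are assumptions made for demonstration, not data from any study cited here.

```python
import numpy as np

# Synthetic "heart signal": a baseline plus one LF-band (0.10 Hz) and one
# HF-band (0.25 Hz) oscillation, sampled at 4 Hz for 60 s (all assumed).
fs = 4.0
t = np.arange(0, 60, 1 / fs)
signal = 800 + 30 * np.sin(2 * np.pi * 0.10 * t) + 15 * np.sin(2 * np.pi * 0.25 * t)

# Time-domain features: mean, median and standard deviation of the signal.
time_features = (signal.mean(), np.median(signal), signal.std())

# Frequency-domain features: total spectral power in the LF band
# (0.04-0.15 Hz) and the HF band (0.15-0.4 Hz), and their ratio LF/HF.
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
lf = power[(freqs >= 0.04) & (freqs < 0.15)].sum()
hf = power[(freqs >= 0.15) & (freqs <= 0.4)].sum()
lf_hf_ratio = lf / hf
print("mean:", time_features[0], "LF/HF:", lf_hf_ratio)
```

Because the 0.10 Hz component has twice the amplitude of the 0.25 Hz one, the LF/HF ratio comes out near 4 here; on real RR-interval data the series would first be resampled onto an even time grid.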

Nowadays, IT industries are setting new peaks in the market by bringing new
technologies and products to it, and employees' stress levels are also observed to be
rising. Though many organizations provide mental-health-related schemes for their
employees, the issue is far from under control. In this paper we go to the depth of this
problem by trying to detect stress patterns in working employees in companies: we
apply image processing and machine learning techniques to analyze stress patterns and
to narrow down the factors that strongly determine the stress levels. Machine learning
algorithms such as KNN classifiers are applied to classify stress. Image processing is
used at the initial stage of detection: the employee's image is captured by the camera
and serves as input. Image processing is used to obtain an enhanced image, or to extract
some useful information from it, by converting the image into digital form and
performing operations on it. The input is an image taken from video frames, and the
output may be an image or characteristics associated with that image. Image processing
basically includes the following three steps:
□ Importing the image via image acquisition tools.
□ Analyzing and manipulating the image.
□ Producing the output, which is either an altered image or a report based on the image analysis.
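The three steps above can be sketched as follows. This is a minimal, hedged illustration using NumPy only: the tiny synthetic frame stands in for a camera capture, and the luminance weights are the standard Rec. 601 grayscale coefficients (both are assumptions, not code from this project).

```python
import numpy as np

# Step 1: image acquisition -- a synthetic 4x4 RGB frame stands in for a
# frame grabbed from the employee's camera.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[..., 0] = 200  # a pure-red frame, for illustration

# Step 2: analysis and manipulation -- convert to grayscale with standard
# luminance weights, a typical first enhancement operation.
weights = np.array([0.299, 0.587, 0.114])
gray = (frame @ weights).astype(np.uint8)

# Step 3: output -- either an altered image or a report based on the analysis.
report = {"shape": gray.shape, "mean_intensity": float(gray.mean())}
print(report)
```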

Machine learning, an application of artificial intelligence (AI), gives a system the
ability to learn and improve automatically from its own experience without being
explicitly programmed. With machine learning, computer programs are developed that
can access data and use it to learn for themselves. Rather than relying on explicit
programming to perform a task, machine learning builds a mathematical model based on
"training data" in order to make predictions or decisions. The extraction of hidden data,
associations in image data, and additional patterns not clearly visible in an image is done
using image mining, an interrelated field that involves image processing, data mining,
machine learning and datasets. According to conservative estimates in medical books,
50-80% of all physical diseases are caused by stress. Stress is believed to be the
principal cause of cardiovascular disease. Stress can place one at higher risk for
diabetes, ulcers, asthma, migraine headaches, skin disorders, epilepsy and sexual
dysfunction. Each of these diseases, and a host of others, is psychosomatic in nature
(i.e., either caused or exaggerated by mental conditions such as stress). Stress has a
three-pronged effect:
□ Subjective effects of stress include feelings of guilt, shame, anxiety, aggression or
frustration. Individuals also feel tired, tense, nervous, irritable, moody or lonely.
□ Behavioral effects of stress are visible changes in a person's behavior, such as
increased accidents, use of drugs or alcohol, laughter out of context, outlandish or
argumentative behavior, very excitable moods, and/or eating or drinking to excess.
□ Cognitive effects of stress include diminishing mental ability, impaired judgment,
rash decisions, forgetfulness and/or hypersensitivity to criticism.

Project Objectives
The objectives of this project are:
1. To predict stress in a person from the symptoms computed by monitoring.
2. To analyze the stress levels in the employee.
3. To provide solutions and remedies to help the person recover from stress.

Purpose of the Project


Nowadays, IT industries are setting new peaks in the market by bringing new
technologies and products to it, and employees' stress levels are also observed to be
rising. Stress can place one at higher risk for diabetes, ulcers, asthma, migraine
headaches, skin disorders, epilepsy and sexual dysfunction. In this project we go to the
depth of this problem by trying to detect stress patterns in working employees in
companies: we apply image processing and machine learning techniques to analyze
stress patterns and to narrow down the factors that strongly determine the stress levels.
Machine learning algorithms such as KNN classifiers are applied to classify stress.
Image processing is used at the initial stage of detection: the employee's image is
captured by the camera and serves as input. Image processing is used to obtain an
enhanced image, or to extract some useful information from it, by converting the image
into digital form and performing operations on it. The input is an image taken from
video frames, and the output may be an image or characteristics associated with that
image.
Existing System with Disadvantages

In the existing system, work on stress detection is based on digital signal processing,
taking into consideration galvanic skin response, blood volume, pupil dilation and skin
temperature. Other work on this issue is based on several physiological signals and
visual features (eye closure, head movement) to monitor stress in a person while he or
she is working. However, these measurements are intrusive and less comfortable in real
applications. Each sensor's data is compared with a stress index, a threshold value used
for detecting the stress level.

Disadvantages

1. The physiological signals used for analysis are often characterized by non-stationary
behaviour over time.
2. The extracted features only give the stress index of the physiological signals; the
ECG signal is assessed directly using the commonly used peak J48 algorithm.
3. Different people may behave or express themselves differently under stress, and it is
hard to find a universal pattern to define the stress emotion.

Proposed System with Features

In the proposed system, machine learning algorithms such as KNN classifiers are
applied to classify stress. Image processing is used at the initial stage of detection: the
employee's image, supplied through the browser, serves as input. Image processing is
used to obtain an enhanced image, or to extract some useful information from it, by
converting the image into digital form and performing operations on it. The input is an
image, and the output may be an image or characteristics associated with that image.
The detected emotions are displayed in a rounded box, and the stress level is indicated
by the emotions Angry, Disgusted, Fearful and Sad.

Advantages

1. The input is an image, and the output may be an image or characteristics associated
with that image.
2. Images of the employee are captured at regular intervals, and the traditional survey
forms are then given to the employees.
3. The detected emotions are displayed in a rounded box, and the stress level is
indicated by the emotions Angry, Disgusted, Fearful and Sad.

INPUT AND OUTPUT DESIGN


INPUT DESIGN
The input design is the link between the information system and the user. It
comprises developing the specifications and procedures for data preparation, and the steps
necessary to put transaction data into a usable form for processing. This can be achieved by
instructing the computer to read data from a written or printed document, or by having
people key the data directly into the system. The design of input focuses on controlling the
amount of input required, controlling errors, avoiding delay, avoiding extra steps and
keeping the process simple. The input is designed so that it provides security and ease of use
while retaining privacy. Input design considered the following things:
 What data should be given as input?
 How should the data be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations, and steps to follow when errors occur.

OBJECTIVES
1. Input design is the process of converting a user-oriented description of the input
into a computer-based system. This design is important to avoid errors in the
data-input process and to show the management the correct direction for getting
correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle
large volumes of data. The goal of designing input is to make data entry easier
and free from errors. The data-entry screen is designed so that all data
manipulations can be performed. It also provides record-viewing facilities.
3. When data is entered, it is checked for validity. Data can be entered with the
help of screens, and appropriate messages are provided when needed so that the
user is never left in a maze. Thus the objective of input design is to create an
input layout that is easy to follow.

OUTPUT DESIGN

A quality output is one which meets the requirements of the end user and presents
the information clearly. In any system, the results of processing are communicated to the users
and to other systems through outputs. In output design it is determined how the information is
to be displayed for immediate need, and also what the hard-copy output will be. It is the most
important and direct source of information for the user. Efficient and intelligent output design
improves the system's relationship with the user and helps decision-making.
1. Designing computer output should proceed in an organized, well-thought-out manner;
the right output must be developed while ensuring that each output element is designed
so that people will find the system easy and effective to use. When analysts design
computer output, they should identify the specific output needed to meet the
requirements.
2. Select methods for presenting information.
3. Create the document, report, or other format that contains the information produced by
the system.
The output form of an information system should accomplish one or more of the
following objectives:
 Convey information about past activities, current status or projections of the
future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.
2. LITERATURE SURVEY

1. Stress and anxiety detection using facial cues from videos

AUTHORS: G. Giannakakis, D. Manousos, F. Chiarugi

This study develops a framework for the detection and analysis of stress/anxiety emotional states
through video-recorded facial cues. A thorough experimental protocol was established to induce
systematic variability in affective states (neutral, relaxed and stressed/anxious) through a variety
of external and internal stressors. The analysis was focused mainly on non-voluntary and semi-
voluntary facial cues in order to estimate the emotion representation more objectively. Features
under investigation included eye-related events, mouth activity, head motion parameters and
heart rate estimated through camera-based photo plethysmography. A feature selection
procedure was employed to select the most robust features followed by classification schemes
discriminating between stress/anxiety and neutral states with reference to a relaxed state in each
experimental phase. In addition, a ranking transformation was proposed utilizing self reports in
order to investigate the correlation of facial parameters with a participant perceived amount of
stress/anxiety. The results indicated that, specific facial cues, derived from eye activity, mouth
activity, head movements and camera based heart activity achieve good accuracy and are
suitable as discriminative indicators of stress and anxiety.

2. Detection of Stress Using Image Processing and Machine Learning Techniques

AUTHORS: Nisha Raichur, Nidhi Lonakadi, Priyanka Mural

Stress is a part of life; it is an unpleasant state of emotional arousal that people experience in
situations like working for long hours in front of a computer. Computers have become a way of
life: much of life is spent on computers, and hence we are more affected by the ups and downs
they cause us. One cannot completely avoid work on computers, but one can at least control
one's usage when alarmed about being stressed at a certain point in time. Monitoring the
emotional status of a person who is working in front of a computer for a long duration is crucial
for the safety of that person. In this work, real-time non-intrusive videos are captured which
detect the emotional status of a person by analysing facial expressions. We detect an
individual's emotion in each video frame, and the decision on the stress level is made over
sequential hours of the captured video. We employ a technique that allows us to train a model
and analyze differences in predicting the features. Theano is a Python framework which aims at
improving both the execution time and development time of the linear regression model, which
is used here as a deep learning algorithm. The experimental results show that the developed
system performs well on data with the generic model of all ages.

3. Machine Learning Techniques for Stress Prediction in Working Employees

AUTHORS: U. S. Reddy, A. V. Thota and A. Dharun

Stress disorders are a common issue among working IT professionals in the industry today. With
changing lifestyle and work cultures, there is an increase in the risk of stress among the
employees. Though many industries and corporates provide mental health related schemes and
try to ease the workplace atmosphere, the issue is far from control. In this paper, we would like
to apply machine learning techniques to analyze stress patterns in working adults and to narrow
down the factors that strongly determine the stress levels. Towards this, data from the OSMI
mental health survey 2017 responses of working professionals within the tech-industry was
considered. Various Machine Learning techniques were applied to train our model after due data
cleaning and preprocessing. The accuracy of the above models was obtained and studied
comparatively. Boosting had the highest accuracy among the models implemented. By using
Decision Trees, prominent features that influence stress were identified as gender, family history
and availability of health benefits in the workplace. With these results, industries can now
narrow down their approach to reduce stress and create a much comfortable workplace for their
employees.

4. Classification of acute stress using linear and non-linear heart rate variability
analysis derived from sternal ECG

AUTHORS: Tanev, G., Saadi, D.B., Hoppe, K., Sorensen, H.B.

Chronic stress detection is an important factor in predicting and reducing the risk of
cardiovascular disease. This work is a pilot study with a focus on developing a method for
detecting short-term psychophysiological changes through heart rate variability (HRV) features.
The purpose of this pilot study is to establish and to gain insight on a set of features that could be
used to detect psychophysiological changes that occur during chronic stress. This study elicited
four different types of arousal by images, sounds, mental tasks and rest, and classified them
using linear and non-linear HRV features from electrocardiograms (ECG) acquired by the
wireless wearable ePatch® recorder. The highest recognition rates were acquired for the
neutral stage (90%), the
acute stress stage (80%) and the baseline stage (80%) by sample entropy, detrended fluctuation
analysis and normalized high frequency features. Standardizing non-linear HRV features for
each subject was found to be an important factor for the improvement of the classification
results.

5. HealthyOffice: Mood recognition at work using smartphones and wearable sensors


AUTHORS: Zenonos, A., Khan, A., Kalogridis, G., Vatsikas, S., Lewis, T., Sooriyabandara
Stress, anxiety and depression in the workplace are detrimental to human health and productivity
with significant financial implications. Recent research in this area has focused on the use of
sensor technologies, including smartphones and wearables embedded with physiological and
movement sensors. In this work, we explore the possibility of using such devices for mood
recognition, focusing on work environments. We propose a novel mood recognition framework
that is able to identify five intensity levels for eight different types of moods every two hours.
We further present a smartphone app ('HealthyOffice'), designed to facilitate self-reporting
in a structured manner and provide our model with the ground truth. We evaluate our system
in a small-scale user study where wearable sensing data is collected in an office environment.
Our experiments exhibit promising results allowing us to reliably recognize various classes of
perceived moods.
3. SOFTWARE REQUIREMENTS ANALYSIS

Problem Specification
According to conservative estimates in medical books, 50-80% of all physical diseases
are caused by stress. Stress is believed to be the principal cause of cardiovascular
disease, and can place one at higher risk for diabetes, ulcers, asthma, migraine
headaches, skin disorders, epilepsy and sexual dysfunction. So in this project we try to
detect the stress patterns in working employees in companies: we apply image
processing and machine learning techniques to analyze stress patterns and to narrow
down the factors that strongly determine the stress levels.

Modules and Their Functionalities

1. USER:

The user registers first. Registration requires a valid email address and mobile number
for further communication. Once the user registers, the admin can activate the account;
only then can the user log in to our system. First, the user gives an image to the system
as input. The Python library extracts the features and the appropriate emotion of the
image. If the given image contains more than one face, all faces can be detected. The
stress level is indicated by facial expressions such as sad, angry, etc. Once image
processing is complete, the live stream is started. In the live stream, too, facial
expressions can be obtained for more than one person, and the TensorFlow-based live
stream is faster and gives better results. Once that is done, the dataset is loaded to
compute the KNN classification accuracy and precision scores.

2. ADMIN:
The admin logs in with his credentials. Once logged in, he can activate users; only
activated users can log in to our application. The admin can set the training and testing
data for the project dynamically in the code, and can view all users' detected results in
his frame. By clicking on a hyperlink on the screen he can detect the emotions of the
images. The admin can also view the KNN classification results. The dataset is in
Excel format, and authorized persons can increase the dataset size as required.
3. DATA PREPROCESS:
The dataset contains a grid view of the already stored data, consisting of numerous
properties. Through property extraction, a newly designed dataset is produced that
contains only numerical input variables: Principal Component Analysis feature
selection transforms the data into six principal components, namely Condition (No
stress, Time pressure, Interruption), Stress, Physical Demand, Performance and
Frustration.
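A hedged sketch of this preprocessing step with scikit-learn is shown below. The random matrix stands in for the stored dataset's numeric properties (the data and the original column count are assumptions), while the reduction to 6 principal components follows the description above.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the stored dataset: 100 records with 12 numeric properties.
rng = np.random.default_rng(0)
raw = rng.normal(size=(100, 12))

# Principal Component Analysis reduces the properties to 6 components,
# yielding the newly designed dataset of numerical input variables.
pca = PCA(n_components=6)
components = pca.fit_transform(raw)
print(components.shape)
```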

4. MACHINE LEARNING:

K-Nearest Neighbors (KNN) is used for classification as well as regression analysis. It
is a supervised learning algorithm, used here to predict whether a person needs
treatment or not. KNN classifies the dependent variable based on how similar the
independent variables are to a similar instance in the already known data. The KNN
classification can be treated as a statistical model with a binary dependent variable:
mathematically, the model has a dependent variable with two possible values,
represented by an indicator variable whose values are labeled "0" and "1".
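The classification step can be sketched with scikit-learn's KNeighborsClassifier. Everything below is illustrative: the synthetic features, the binary treatment label and the choice of k = 5 are assumptions, not this project's actual dataset or tuning.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score

# Synthetic stand-in data: 200 records, 3 numeric features, and a binary
# label ("0" = no treatment needed, "1" = treatment needed).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# KNN labels each test record by majority vote among its 5 most similar
# known instances -- "similar" meaning nearest in feature space.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
pred = knn.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
```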

a. Functional Requirements

Functional requirements give the users a clear statement of the functions required for the system
to solve the project's information problem; they contain a complete set of requirements for the
application. A requirement is a condition that the application must meet for the customer to find
the application satisfactory. A requirement has the following characteristics:
1. It provides a benefit to the organization.
2. It describes the capabilities the application must provide in business terms.
3. It does not describe how the application provides those capabilities.
4. It is stated in unambiguous words; its meaning is clear and understandable.
5. It is verifiable.
b. Non-Functional Requirements

For non-functional requirements, the architects had to consider various worst-case scenarios to
determine safety features and their optimum implementation, i.e., size, scope, and number of
examples. Likewise, with today's IT projects, to determine non-functional requirements such as
availability, the approach requires that the designer first determine the scope: does the whole
solution, or only part of it, need to be architected to meet minimum levels?
1. Identify the critical areas of the solution.
2. Identify the critical components within each critical area.
3. Determine each component's availability and risk.
4. Model worst-case failure scenarios.

FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with
a very general plan for the project and some cost estimates. During system analysis, the
feasibility study of the proposed system is carried out. This is to ensure that the
proposed system is not a burden to the company. For feasibility analysis, some understanding
of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are,

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development
of the system is limited. The expenditures must be justified. Thus the developed system is well
within the budget, and this was achieved because most of the technologies used are freely
available. Only the customized products had to be purchased.

TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on the
available technical resources, as this would lead to high demands being placed on the client.
The developed system must have modest requirements; only minimal or no changes are
required for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study is to check the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system, but must instead accept it as a necessity. The level of acceptance by the
users solely depends on the methods that are employed to educate the user about the system and
to make him familiar with it. His level of confidence must be raised so that he is also able to
make some constructive criticism, which is welcomed, as he is the final user of the system.
4. SOFTWARE AND HARDWARE REQUIREMENTS

Hardware Requirements

System : Intel Core i5
Hard Disk : 1 TB
Monitor : 15'' LED
Input Devices : Keyboard, Mouse
RAM : 16 GB

Software Requirements

Operating system : Windows 10
Coding Language : Python
Tool : Visual Studio Code
Database : SQLite3
5. SOFTWARE DESIGN

System Architecture
The methodology consists of the following steps:
1. Data Collection
2. Image Acquisition
3. Image Pre-Processing
4. Segmentation
5. Feature Extraction
6. Classification
7. Testing
8. Stress prediction

Figure 5.1 System Architecture


1. Data Collection
The dataset is composed of images of faces along with their respective expressions.
The dataset covers the expressions fear, happy, sad, and neutral. Additionally, the situations
in the frames were more complex, such as a disordered distribution of facial expressions
and face images that are not clear.
2. Image Acquisition
Image acquisition is the first central step in image processing, which gives an
idea regarding the origin of digital images. In addition, this stage includes pre-
processing tasks, for example, image scaling. Acquisition of an image is a challenging
task; it requires a proper device such as a scanner or a good-resolution camera.
3. Image Pre-Processing
Image processing is a procedure that converts an input image into digital form and
performs a series of operations on it, with the goal of obtaining an enhanced image or
extracting some significant information from it. It is a type of signal processing in which
the input is an image, such as a video frame or photo, and the output may be an image or
characteristics related to that image.
4. Segmentation
Image segmentation is the process of partitioning an input image into its significant
sections or objects. It is one of the most difficult tasks in digital image processing.
It is used to locate the desired objects.
5. Feature Extraction
The feature extraction technique plays an important role in image classification. The
features are the main parameters involved in the classification of an image. Texture
Extraction: texture is defined as the pattern of information or arrangement of the
structure at random intervals. Accordingly, texture attributes describe the appearance
of an object, such as its size, shape, thickness, and arrangement, and the extent of its
basic properties.

Color Extraction

Color extraction is an important factor for distinguishing classes. Digital image processing
produces color measurements that are extremely useful when investigating a region of
interest for early diagnosis. An image pixel is typically represented in the RGB space,
in which the color at each pixel is expressed as a combination of R, G, and B values.
Shape Extraction
General descriptors, for example object count, region of the shape, image dimension,
and zone of the picture, are essential for describing the shape of an image. Blob analysis
is used in this work to compute statistics for labelled regions in a de-noised image,
for example the number of objects, their area, and their perimeter.
Edge Detection:
Edges in an image are the portions with strong boundaries, where a change of even one
pixel from one object to the next can make a real difference in image quality. Edge
detection is used for image segmentation and information extraction in areas such as
image processing, computer vision, and machine vision.
6. Classification
In a typical classification system, the image is captured by a camera and then processed.
In supervised classification, training first takes place through a known group
of pixels, and the number of classes is decided by the user. When labelled training
samples are available, supervised classification is applied, which here is KNN.
7. Testing
In the testing phase, the images are evaluated against the trained model.
8. Stress prediction
Finally, we obtain the facial expression of the image as output, from which the stress state is inferred.

Data Flow Diagrams


A Data Flow Diagram (DFD) is a traditional visual representation of the information
flows within a system. A neat and clear DFD can depict the right amount of the system
requirement graphically. It may be used as a communication tool between a system analyst
and any person who plays a part in the system, and it acts as a starting point for redesigning
a system. The DFD is also called a data flow graph or bubble chart.
The Basic Notation used to create a DFD’s are as follows:
1. Dataflow: Data move in a specific direction from an origin to a destination.

2. Process: People, procedures, or devices that use or produce (Transform) Data.


The physical component is not identified.

3. Source: External sources or destination of data, which may be People, programs.

4. Data Store: Here data are stored or reference by a process in the System.
Figure 5.2 Data flow diagram

What is UML diagram?


There are several types of UML diagrams and each one of them serves a different
purpose regardless of whether it is being designed before the implementation or after (as part
of documentation).
The two broadest categories that encompass all other types are:
1. Behavioral UML diagram.
2. Structural UML diagram.
What is a UML Use Case Diagram?
Use case diagrams model the functionality of a system using actors and use cases. Use
cases are services or functions provided by the system to its users.
Basic Use Case Diagram Symbols and Notations
System
Draw your system's boundaries using a rectangle that contains use cases. Place actors
outside the system's boundaries.

Use Case
Draw use cases using ovals. Label the ovals with verbs that represent the system's
functions.

Actors
Actors are the users of a system. When one system is the actor of another system,
label the actor system with the actor stereotype.

Relationships
Illustrate relationships between an actor and a use case with a simple line. For
relationships among use cases, use arrows labelled either "uses" or "extends." A "uses"
relationship indicates that one use case is needed by another in order to perform a task.
What is a UML activity diagram?
Activity Diagram
An activity diagram illustrates the dynamic nature of a system by modeling the flow
of control from activity to activity. An activity represents an operation on some class in the
system that results in a change in the state of the system. Typically, activity diagrams are used
to model workflow or business processes and internal operations. Because an activity diagram
is a special kind of statechart diagram, it uses some of the same modeling conventions.
Basic Activity Diagram Symbols and Notations
Action states
Action states represent the non-interruptible actions of objects. You can draw an
action state in SmartDraw using a rectangle with rounded corners.

Action Flow
Action flow arrows illustrate the relationships among action states.

Object Flow
Object flow refers to the creation and modification of objects by activities. An object
flow arrow from an action to an object means that the action creates or influences the object.
An object flow arrow from an object to an action indicates that the action state uses the
object.

Initial State
A filled circle followed by an arrow represents the initial action state.
Final State

An arrow pointing to a filled circle nested inside another circle represents the final
action state.

Branching
A diamond represents a decision with alternate paths. The outgoing alternatives should
be labelled with a condition or guard expression.

Swim lanes
Swim lanes group related activities into one column.

Synchronization
A synchronization bar helps illustrate parallel transitions. Synchronization is also
called forking and joining.

What is a UML Component Diagram?


A component diagram describes the organization of the physical components in a
system.
Component
A component is a physical building block of the system Learn how to resize grouped
objects like components.

Interface
An interface describes a group of operations used or created by components.

Dependencies
Draw dependencies among components using dashed arrows. Learn about line styles
in SmartDraw.

Node
A node is a physical resource that executes code components.
Learn how to resize grouped objects like nodes.

Association
Association refers to a physical connection between nodes, such as Ethernet. Learn
how to connect two nodes.

Components and Nodes


Place components inside the node that deploys them.

What is a UML sequence diagram?


Sequence Diagram
Sequence diagrams describe interactions among classes in terms of an exchange of
messages over time.

Basic Sequence Diagram Symbols and Notations Class roles


Class roles describe the way an object will behave in context. Use the UML object
symbol to illustrate class roles, but don’t list object attributes.

Activation
Activation boxes represent the time an object needs to complete a task.
Messages
Messages are arrows that represent communication between objects. Use half-arrowed
lines to represent asynchronous messages. Asynchronous messages are sent from an object
that will not wait for a response from the receiver before continuing its tasks.

Lifelines
Lifelines are vertical dashed lines that indicate the object's presence over time.

Destroying Objects
Objects can be terminated early using an arrow labeled "<< destroy >>" that points to
an X.

Loops
A repetition or loop within a sequence diagram is depicted as a rectangle. Place the
condition for exiting the loop at the bottom left corner in square brackets.
5.3. E-R Diagram
ER Diagram stands for Entity Relationship Diagram; also known as an ERD, it is a diagram
that displays the relationships of entity sets stored in a database. In other words, ER diagrams
help to explain the logical structure of databases. ER diagrams are created based on three
basic concepts: entities, attributes, and relationships.
ER Diagrams contain different symbols that use rectangles to represent entities, ovals to
define attributes and diamond shapes to represent relationships.

Figure 5.3 E-R diagram


UML Diagrams

UML is a standard language for specifying, visualizing, constructing, and


documenting the artifacts of software systems. UML was created by the Object Management
Group (OMG) and UML 1.0 specification draft was proposed to the OMG in January 1997.
There are several types of UML diagrams and each one of them serves a different
purpose regardless of whether it is being designed before the implementation or after (as part
of documentation). UML has a direct relation with object oriented analysis and design. After
some standardization, UML has become an OMG standard.
The two broadest categories that encompass all other types are:
1. Behavioral UML diagram
2. Structural UML diagram.
As the name suggests, some UML diagrams try to analyze and depict the structure of a
system or process, whereas others describe the behavior of the system, its actors, and its
building components.
The different types are broken down as follows:
1. Sequence diagram
2. Use case Diagram
3. Activity diagram
1. Sequence diagram
A sequence diagram simply depicts interaction between objects in sequential order, i.e., the
order in which these interactions take place. We can also use the terms event diagrams or
event scenarios to refer to a sequence diagram. Sequence diagrams describe how and in what
order the objects in a system function. These diagrams are widely used by business people and
software developers to document and understand requirements for new and existing systems.

The sequence diagram shows four participants, Users, Admin, PyImages, and MachineLearning,
exchanging the following messages: 1: Register(); 2: Activate(); 3: Upload Images();
4: Results Stored in DB(); 5: Response Sent to User(); 6: Start Live Stream();
7: Start Deep Learning Live Stream(); 8: View Detected Images(); 9: Load Dataset();
10: Apply KNN Algorithm(); 11: Results Sent to User().

Figure 5.4.1 Sequence diagram


List of actions
User:
The user needs to register.
Admin:
Activates the user's login credentials.
System:
The image is uploaded from the user's portal.
Results are stored in the database.
A response is sent to the user.
The live stream is started.
The KNN algorithm is applied.
The result is sent to the user.
Result:
Testing is performed.
The facial expression is detected.

2. Use case diagram


A use case diagram at its simplest is a representation of a user's interaction with the system that
shows the relationship between the user and the different use cases in which the user is involved.
A use case diagram is used to describe the behavioral structure of a model. The use cases are
represented by either circles or ellipses.

The diagram shows two actors, Admin and Users, connected to the use cases: Login,
Upload Image, Stress Emotions, Live Stream, Deep Learning Live Stream, KNN Results,
and Activate Users.
Figure 5.4.2 use case diagram
Actor:
User
Admin

Use case:
o The user needs to log in.
o The admin activates the user's login credentials.
o The user needs to upload his/her image.
o Stress emotions are displayed.
o Live stream can be enabled.
o Results are displayed using KNN.

3. Activity diagram
Activity diagram is another important diagram in UML to describe the dynamic
aspects of the system. An activity diagram is basically a flowchart representing the flow from one
activity to another activity. This flow can be sequential, branched, or concurrent. Activity
diagrams deal with all types of flow control by using different elements such as fork, join, etc.

The activity diagram shows swim lanes for Users and Admin, with the activities:
Upload Image, Activate Users, Image Results, Detected Images, Live Stream,
Deep Learning Live Stream, and KNN Results.
Figure 5.4.3 activity diagram


List of actions:
The image is uploaded from the user's portal.
Results are stored in the database.
A response is sent to the user.
The live stream is started.
The KNN algorithm is applied.
The result is sent to the user.
The data is trained.
6. CODING AND ITS IMPLEMENTATION

Source code

User Side views.py:

from django.shortcuts import render, HttpResponse
from .forms import UserRegistrationForm
from .models import UserRegistrationModel, UserImagePredictinModel
from django.contrib import messages
from django.core.files.storage import FileSystemStorage
from .utility.GetImageStressDetection import ImageExpressionDetect
from .utility.MyClassifier import KNNclassifier
from subprocess import Popen, PIPE
import subprocess

# Create your views here.
def UserRegisterActions(request):
    if request.method == 'POST':
        form = UserRegistrationForm(request.POST)
        if form.is_valid():
            print('Data is Valid')
            form.save()
            messages.success(request, 'You have been successfully registered')
            form = UserRegistrationForm()
            return render(request, 'UserRegistrations.html', {'form': form})
        else:
            messages.success(request, 'Email or Mobile Already Existed')
            print("Invalid form")
    else:
        form = UserRegistrationForm()
    return render(request, 'UserRegistrations.html', {'form': form})
def UserLoginCheck(request):
    if request.method == "POST":
        loginid = request.POST.get('loginname')
        pswd = request.POST.get('pswd')
        print("Login ID = ", loginid, ' Password = ', pswd)
        try:
            check = UserRegistrationModel.objects.get(loginid=loginid,
                                                      password=pswd)
            status = check.status
            print('Status is = ', status)
            if status == "activated":
                request.session['id'] = check.id
                request.session['loggeduser'] = check.name
                request.session['loginid'] = loginid
                request.session['email'] = check.email
                print("User id At", check.id, status)
                return render(request, 'users/UserHome.html', {})
            else:
                messages.success(request, 'Your Account is Not activated')
                return render(request, 'UserLogin.html')
        except Exception as e:
            print('Exception is ', str(e))
        messages.success(request, 'Invalid Login id and password')
    return render(request, 'UserLogin.html', {})

def UserHome(request):
    return render(request, 'users/UserHome.html', {})


def UploadImageForm(request):
    loginid = request.session['loginid']
    data = UserImagePredictinModel.objects.filter(loginid=loginid)
    return render(request, 'users/UserImageUploadForm.html', {'data': data})
def UploadImageAction(request):
    image_file = request.FILES['file']

    # Check that the uploaded file is a JPG image
    if not image_file.name.endswith('.jpg'):
        messages.error(request, 'THIS IS NOT A JPG FILE')

    fs = FileSystemStorage()
    filename = fs.save(image_file.name, image_file)
    uploaded_file_url = fs.url(filename)
    obj = ImageExpressionDetect()
    emotion = obj.getExpression(filename)
    username = request.session['loggeduser']
    loginid = request.session['loginid']
    email = request.session['email']
    UserImagePredictinModel.objects.create(username=username, email=email,
                                           loginid=loginid, filename=filename,
                                           emotions=emotion,
                                           file=uploaded_file_url)
    data = UserImagePredictinModel.objects.filter(loginid=loginid)
    return render(request, 'users/UserImageUploadForm.html', {'data': data})

def UserEmotionsDetect(request):
    if request.method == 'GET':
        imgname = request.GET.get('imgname')
        obj = ImageExpressionDetect()
        emotion = obj.getExpression(imgname)
    loginid = request.session['loginid']
    data = UserImagePredictinModel.objects.filter(loginid=loginid)
    return render(request, 'users/UserImageUploadForm.html', {'data': data})


def UserLiveCameDetect(request):
    obj = ImageExpressionDetect()
    obj.getLiveDetect()
    return render(request, 'users/UserLiveHome.html', {})


def UserKerasModel(request):
    subprocess.call("python kerasmodel.py --mode display")
    return render(request, 'users/UserLiveHome.html', {})


def UserKnnResults(request):
    obj = KNNclassifier()
    df, accuracy, classificationerror, sensitivity, Specificity, fsp, precision = obj.getKnnResults()
    df.rename(columns={'Target': 'Target', 'ECG(mV)': 'Time pressure',
                       'EMG(mV)': 'Interruption', 'Foot GSR(mV)': 'Stress',
                       'Hand GSR(mV)': 'Physical Demand', 'HR(bpm)': 'Performance',
                       'RESP(mV)': 'Frustration'}, inplace=True)
    data = df.to_html()
    return render(request, 'users/UserKnnResults.html',
                  {'data': data, 'accuracy': accuracy,
                   'classificationerror': classificationerror,
                   'sensitivity': sensitivity, 'Specificity': Specificity,
                   'fsp': fsp, 'precision': precision})

user side forms.py

from django import forms
from .models import UserRegistrationModel


class UserRegistrationForm(forms.ModelForm):
    name = forms.CharField(widget=forms.TextInput(attrs={'pattern': '[a-zA-Z]+'}),
                           required=True, max_length=100)
    loginid = forms.CharField(widget=forms.TextInput(attrs={'pattern': '[a-zA-Z]+'}),
                              required=True, max_length=100)
    password = forms.CharField(widget=forms.PasswordInput(attrs={
        'pattern': '(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{8,}',
        'title': 'Must contain at least one number and one uppercase and '
                 'lowercase letter, and at least 8 or more characters'}),
        required=True, max_length=100)
    mobile = forms.CharField(widget=forms.TextInput(attrs={'pattern': '[56789][0-9]{9}'}),
                             required=True, max_length=100)
    email = forms.CharField(widget=forms.TextInput(attrs={
        'pattern': '[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}$'}),
        required=True, max_length=100)
    locality = forms.CharField(widget=forms.TextInput(), required=True, max_length=100)
    address = forms.CharField(widget=forms.Textarea(attrs={'rows': 4, 'cols': 22}),
                              required=True, max_length=250)
    city = forms.CharField(widget=forms.TextInput(
        attrs={'autocomplete': 'off', 'pattern': '[A-Za-z ]+',
               'title': 'Enter Characters Only'}), required=True, max_length=100)
    state = forms.CharField(widget=forms.TextInput(
        attrs={'autocomplete': 'off', 'pattern': '[A-Za-z ]+',
               'title': 'Enter Characters Only'}), required=True, max_length=100)
    status = forms.CharField(widget=forms.HiddenInput(), initial='waiting',
                             max_length=100)

    class Meta:
        model = UserRegistrationModel
        fields = '__all__'

user side Models.py


from django.db import models

# Create your models here.


class UserRegistrationModel(models.Model):
    name = models.CharField(max_length=100)
    loginid = models.CharField(unique=True, max_length=100)
    password = models.CharField(max_length=100)
    mobile = models.CharField(unique=True, max_length=100)
    email = models.CharField(unique=True, max_length=100)
    locality = models.CharField(max_length=100)
    address = models.CharField(max_length=1000)
    city = models.CharField(max_length=100)
    state = models.CharField(max_length=100)
    status = models.CharField(max_length=100)

    def __str__(self):
        return self.loginid

    class Meta:
        db_table = 'UserRegistrations'


class UserImagePredictinModel(models.Model):
    username = models.CharField(max_length=100)
    email = models.CharField(max_length=100)
    loginid = models.CharField(max_length=100)
    filename = models.CharField(max_length=100)
    emotions = models.CharField(max_length=100000)
    file = models.FileField(upload_to='files/')
    cdate = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.loginid

    class Meta:
        db_table = "UserImageEmotions"
Image Classification:
from django.conf import settings
from PyEmotion import *
import cv2 as cv


class ImageExpressionDetect:
    def getExpression(self, imagepath):
        filepath = settings.MEDIA_ROOT + "\\" + imagepath
        PyEmotion()
        er = DetectFace(device='cpu', gpu_id=0)
        frame, emotion = er.predict_emotion(cv.imread(filepath))
        cv.imshow('Alex Corporation', frame)
        cv.waitKey(0)
        print("Hola Hi", filepath, "Emotion is ", emotion)
        return emotion

    def getLiveDetect(self):
        print("Streaming Started")
        PyEmotion()
        er = DetectFace(device='cpu', gpu_id=0)
        # Open the default camera
        cap = cv.VideoCapture(0)
        while True:
            ret, frame = cap.read()
            frame, emotion = er.predict_emotion(frame)
            cv.imshow('Press Q to Exit', frame)
            if cv.waitKey(1) & 0xFF == ord('q'):
                break
        cap.release()
        cv.destroyAllWindows()

Deeplearning Model:
import numpy as np
import argparse
import cv2
import os
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Flatten
from keras.layers.convolutional import Conv2D
from keras.optimizers import Adam
from keras.layers.pooling import MaxPooling2D
from keras.preprocessing.image import ImageDataGenerator

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import matplotlib as mpl
mpl.use('TkAgg')
import matplotlib.pyplot as plt

# command line argument
ap = argparse.ArgumentParser()
ap.add_argument("--mode", help="train/display")
a = ap.parse_args()
mode = a.mode

def plot_model_history(model_history):
    """
    Plot Accuracy and Loss curves given the model_history
    """
    fig, axs = plt.subplots(1, 2, figsize=(15, 5))
    # summarize history for accuracy
    axs[0].plot(range(1, len(model_history.history['acc']) + 1),
                model_history.history['acc'])
    axs[0].plot(range(1, len(model_history.history['val_acc']) + 1),
                model_history.history['val_acc'])
    axs[0].set_title('Model Accuracy')
    axs[0].set_ylabel('Accuracy')
    axs[0].set_xlabel('Epoch')
    axs[0].set_xticks(np.arange(1, len(model_history.history['acc']) + 1),
                      len(model_history.history['acc']) / 10)
    axs[0].legend(['train', 'val'], loc='best')
    # summarize history for loss
    axs[1].plot(range(1, len(model_history.history['loss']) + 1),
                model_history.history['loss'])
    axs[1].plot(range(1, len(model_history.history['val_loss']) + 1),
                model_history.history['val_loss'])
    axs[1].set_title('Model Loss')
    axs[1].set_ylabel('Loss')
    axs[1].set_xlabel('Epoch')
    axs[1].set_xticks(np.arange(1, len(model_history.history['loss']) + 1),
                      len(model_history.history['loss']) / 10)
    axs[1].legend(['train', 'val'], loc='best')
    fig.savefig('plot.png')
    plt.show()

# Define data generators
train_dir = 'data/train'
val_dir = 'data/test'

num_train = 28709
num_val = 7178
batch_size = 64
num_epoch = 50

train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(48,48),
batch_size=batch_size,
color_mode="grayscale",
class_mode='categorical')

validation_generator = val_datagen.flow_from_directory(
val_dir,
target_size=(48,48),
batch_size=batch_size,
color_mode="grayscale",
class_mode='categorical')
# Create the model
model = Sequential()

model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                 input_shape=(48, 48, 1)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))

# If you want to train the same model or try other models, go for this
if mode == "train":
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(lr=0.0001, decay=1e-6),
                  metrics=['accuracy'])
    model_info = model.fit_generator(
        train_generator,
        steps_per_epoch=num_train // batch_size,
        epochs=num_epoch,
        validation_data=validation_generator,
        validation_steps=num_val // batch_size)
    plot_model_history(model_info)
    model.save_weights('model.h5')
# emotions will be displayed on your face from the webcam feed
elif mode == "display":
    model.load_weights('model.h5')

    # prevents openCL usage and unnecessary logging messages
    cv2.ocl.setUseOpenCL(False)

    # dictionary which assigns each label an emotion (alphabetical order)
    emotion_dict = {0: "Angry", 1: "Disgusted", 2: "Fearful", 3: "Happy",
                    4: "Neutral", 5: "Sad", 6: "Surprised"}

    # start the webcam feed
    cap = cv2.VideoCapture(0)
    while True:
        # Find haar cascade to draw bounding box around face
        ret, frame = cap.read()
        facecasc = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = facecasc.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y - 50), (x + w, y + h + 10), (255, 0, 0), 2)
            roi_gray = gray[y:y + h, x:x + w]
            cropped_img = np.expand_dims(
                np.expand_dims(cv2.resize(roi_gray, (48, 48)), -1), 0)
            prediction = model.predict(cropped_img)
            maxindex = int(np.argmax(prediction))
            cv2.putText(frame, emotion_dict[maxindex], (x + 20, y - 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2,
                        cv2.LINE_AA)

        # show the output frame
        cv2.imshow("Alex Corporations Press Q to Exit", frame)
        key = cv2.waitKey(1) & 0xFF
        # if the `q` key was pressed, break from the loop
        if key == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

Admin side Views.py


from django.shortcuts import render
from django.contrib import messages
from users.models import UserRegistrationModel, UserImagePredictinModel
from .utility.AlgorithmExecutions import KNNclassifier

# Create your views here.


def AdminLoginCheck(request):
    if request.method == 'POST':
        usrid = request.POST.get('loginid')
        pswd = request.POST.get('pswd')
        print("User ID is = ", usrid)
        if usrid == 'admin' and pswd == 'admin':
            return render(request, 'admins/AdminHome.html')
        elif usrid == 'Admin' and pswd == 'Admin':
            return render(request, 'admins/AdminHome.html')
        else:
            messages.success(request, 'Please Check Your Login Details')
    return render(request, 'AdminLogin.html', {})


def AdminHome(request):
    return render(request, 'admins/AdminHome.html')


def ViewRegisteredUsers(request):
    data = UserRegistrationModel.objects.all()
    return render(request, 'admins/RegisteredUsers.html', {'data': data})


def AdminActivaUsers(request):
    if request.method == 'GET':
        id = request.GET.get('uid')
        status = 'activated'
        print("PID = ", id, status)
        UserRegistrationModel.objects.filter(id=id).update(status=status)
    data = UserRegistrationModel.objects.all()
    return render(request, 'admins/RegisteredUsers.html', {'data': data})


def AdminStressDetected(request):
    data = UserImagePredictinModel.objects.all()
    return render(request, 'admins/AllUsersStressView.html', {'data': data})


def AdminKNNResults(request):
    obj = KNNclassifier()
    df, accuracy, classificationerror, sensitivity, Specificity, fsp, precision = obj.getKnnResults()
    df.rename(columns={'Target': 'Target', 'ECG(mV)': 'Time pressure',
                       'EMG(mV)': 'Interruption', 'Foot GSR(mV)': 'Stress',
                       'Hand GSR(mV)': 'Physical Demand', 'HR(bpm)': 'Performance',
                       'RESP(mV)': 'Frustration'}, inplace=True)
    data = df.to_html()
    return render(request, 'admins/AdminKnnResults.html',
                  {'data': data, 'accuracy': accuracy,
                   'classificationerror': classificationerror,
                   'sensitivity': sensitivity, 'Specificity': Specificity,
                   'fsp': fsp, 'precision': precision})

All urls.py
"""StressDetection URL Configuration

The `urlpatterns` list routes URLs to views. For more


information please see:
https://docs.djangoproject.com/en/2.0/topics/http/urls
/ Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include,
pat
h
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))

"""
from django.contribimport admin
from django.urlsimport path
from StressDetectionimport views as mainView
from users import views as usr
from admins import views as admins
from django.contrib.staticfiles.urlsimport static
from django.contrib.staticfiles.urlsimport staticfiles_urlpatterns
from django.confimport settings

urlpatterns = [
path('admin/', admin.site.urls),
path("", mainView.index, name="index"),
path("index/", mainView.index, name="index"),
path("logout/", mainView.logout, name="logout"),
path("UserLogin/", mainView.UserLogin, name="UserLogin"),
path("AdminLogin/", mainView.AdminLogin, name="AdminLogin"),
path("UserRegister/", mainView.UserRegister, name="UserRegister"),
### User Side Views
path("UserRegisterActions/", usr.UserRegisterActions,
name="UserRegisterActions"),
path("UserLoginCheck/", usr.UserLoginCheck, name="UserLoginCheck"),
path("UserHome/", usr.UserHome, name="UserHome"),
path("UploadImageForm/", usr.UploadImageForm,
name="UploadImageForm"),
path("UploadImageAction/", usr.UploadImageAction,
name="UploadImageAction"),
path("UserEmotionsDetect/", usr.UserEmotionsDetect,
name="UserEmotionsDetect"),
path("UserLiveCameDetect/", usr.UserLiveCameDetect,
name="UserLiveCameDetect"),
path("UserKerasModel/", usr.UserKerasModel, name="UserKerasModel"),
path("UserKnnResults/", usr.UserKnnResults, name="UserKnnResults"),

### Admin Side Views


path("AdminLoginCheck/", admins.AdminLoginCheck,
name="AdminLoginCheck"),
path("AdminHome/", admins.AdminHome, name="AdminHome"),
path("ViewRegisteredUsers/", admins.ViewRegisteredUsers,
name="ViewRegisteredUsers"),
path("AdminActivaUsers/", admins.AdminActivaUsers,
name="AdminActivaUsers"),
path("AdminStressDetected/", admins.AdminStressDetected,
name="AdminStressDetected"),
path("AdminKNNResults/", admins.AdminKNNResults,
name="AdminKNNResults"),
]

urlpatterns += staticfiles_urlpatterns()
urlpatterns += static(settings.MEDIA_URL,
document_root=settings.MEDIA_ROOT)

Base.html
<!DOCTYPE html>
{%load static%}
<html lang="en">
<head>
<title>Stress Feelings</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="description" content="Unicat project">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="{% static 'styles/bootstrap4/bootstrap.min.css' %}">
<link href="{% static 'plugins/font-awesome-4.7.0/css/font-awesome.min.css' %}" rel="stylesheet" type="text/css">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/OwlCarousel2-2.2.1/owl.carousel.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/OwlCarousel2-2.2.1/owl.theme.default.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/OwlCarousel2-2.2.1/animate.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'styles/main_styles.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'styles/responsive.css' %}">
</head>
<body>

<div class="super_container">

<!-- Header -->

<header class="header">

<!-- Header Content -->


<div class="header_container">
<div class="container">
<div class="row">
<div class="col">
<div class="header_content d-flex flex-row align-items-center justify-content-start">
<div class="logo_container">
<a href="{% url 'index' %}">
<div class="logo_text">Stress Detection in IT<span> Professionals</span></div>
</a>
</div>
<nav class="main_nav_contaner ml-auto">
<ul class="main_nav">
<li><a href="{% url 'index' %}">Home</a></li>
<li><a href="{% url 'UserLogin' %}">Users</a></li>
<li><a href="{% url 'AdminLogin' %}">Admin</a></li>
<li><a href="{% url 'UserRegister' %}">Registrations</a></li>
</ul>
</nav>

</div>
</div>
</div>
</div>
</div>

<!-- Header Search Panel -->


<div class="header_search_container">
<div class="container">
<div class="row">
<div class="col">
<div class="header_search_content d-flex flex-row align-items-center justify-content-end">
<form action="#" class="header_search_form">
<input type="search" class="search_input" placeholder="Search" required="required">
<button class="header_search_button d-flex flex-column align-items-center justify-content-center">
<i class="fa fa-search" aria-hidden="true"></i>
</button>
</form>
</div>
</div>
</div>
</div>
</div>
</header>

{%block contents%}

{%endblock%}

<footer class="footer">
<div class="footer_background" style="background-image:url({% static 'images/footer_background.png' %})"></div>
<div class="container">
<div class="row copyright_row">
<div class="col">
<div class="copyright d-flex flex-lg-row flex-column align-items-center justify-content-start">
<div class="cr_text"><!-- Link back to Colorlib can't be removed. Template is licensed under CC BY 3.0. -->
Copyright &copy;<script>document.write(new Date().getFullYear());</script> All rights reserved | This template is made with <i class="fa fa-heart-o" aria-hidden="true"></i> by <a href="#" target="_blank">Alex Corporation</a>
<!-- Link back to Colorlib can't be removed. Template is licensed under CC BY 3.0. --></div>

</div>
</div>
</div>
</div>
</footer>
</div>

<script src="{%static 'js/jquery-3.2.1.min.js'%}"></script>


<script src="{%static 'styles/bootstrap4/popper.js'%}"></script>
<script src="{%static 'styles/bootstrap4/bootstrap.min.js'%}"></script>
<script src="{%static 'plugins/greensock/TweenMax.min.js'%}"></script>
<script src="{%static
'plugins/greensock/TimelineMax.min.js'%}"></script>
<script src="{%static
'plugins/scrollmagic/ScrollMagic.min.js'%}"></script>
<script src="{%static
'plugins/greensock/animation.gsap.min.js'%}"></script>
<script src="{%static
'plugins/greensock/ScrollToPlugin.min.js'%}"></script>
<script src="{%static 'plugins/OwlCarousel2-
2.2.1/owl.carousel.js'%}"></script>
<script src="{%static 'plugins/easing/easing.js'%}"></script>
<script src="{%static 'plugins/parallax-js-
master/parallax.min.js'%}"></script>
<script src="{%static 'js/custom.js'%}"></script>
</body>
</html>

Index.html
{%extends 'base.html'%}
{%load static%}

{%block contents%}

<div class="home">
<div class="home_slider_container">
<!-- Home Slider -->
<div class="owl-carousel owl-theme home_slider">

<!-- Home Slider Item -->


<div class="owl-item">
<div class="home_slider_background" style="background-image:url({% static 'images/home_slider_1.jpg' %})"></div>
<div class="home_slider_content">
<div class="container">
<div class="row">
<div class="col text-center">
<div class="home_slider_title">Stress Detection in IT Professionals
</div>
<div class="home_slider_subtitle">by Image Processing and Machine
Learning</div>
<div class="home_slider_form_container">
<p>
<font color="Black">The main motive of our project is to detect stress in IT professionals using vivid Machine Learning and Image Processing techniques. Our system is an upgraded version of older stress detection systems, which excluded live detection and personal counseling; this system comprises live detection and periodic analysis of employees, detecting both physical and mental stress levels and providing proper remedies for managing stress through a periodic survey form. Our system mainly focuses on managing stress and making the working environment healthy and spontaneous for employees, to get the best out of them during working hours.</font>
</p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
{%endblock%}

UserRegister.html
{%extends 'base.html'%}
{%load static%}

{%block contents%}

<div class="home">
<div class="home_slider_container">

<!-- Home Slider -->


<div class="owl-carousel owl-theme home_slider">

<!-- Home Slider Item -->


<div class="owl-item">
<div class="home_slider_background" style="background-image:url({% static 'images/home_slider_1.jpg' %})"></div>
<div class="home_slider_content">
<div class="container">
<div class="row">
<div class="col text-center">
<div class="home_slider_title">User Register Form </div>
<center>
<form action="{% url 'UserRegisterActions' %}" method="POST" class="text-primary comment_form" style="width:100%">

{% csrf_token %}
<table>
<tr>
<td class="text-primary">User Name</td>
<td>{{form.name}}</td>
</tr>
<tr>
<td>Login ID</td>
<td>{{form.loginid}}</td>
</tr>
<tr>
<td>Password</td>
<td>{{form.password}}</td>
</tr>
<tr>
<td>Mobile</td>
<td>{{form.mobile}}</td>
</tr>
<tr>
<td>email</td>
<td>{{form.email}}</td>
</tr>
<tr>
<td>Locality</td>
<td>{{form.locality}}</td>
</tr>
<tr>
<td>Address</td>
<td>{{form.address}}</td>
</tr>
<tr>
<td>City</td>
<td>{{form.city}}</td>
</tr>
<tr>
<td>State</td>
<td>{{form.state}}</td>
</tr>
<tr>
<td>Status</td>
<td>{{form.status}}</td>
</tr>
<tr><td></td>
<td><button type="submit" class="comment_button trans_200">Register</button></td>
</tr>

{% if messages %}
{% for message in messages %}
<font color='GREEN'> {{ message }}</font>
{% endfor %}
{% endif %}

</table>

</form>
</center>

</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
{%endblock%}

UserLogin.html


{%extends 'base.html'%}
{%load static%}

{%block contents%}

<div class="home">
<div class="home_slider_container">
<!-- Home Slider -->
<div class="owl-carousel owl-theme home_slider">

<!-- Home Slider Item -->


<div class="owl-item">
<div class="home_slider_background" style="background-image:url({% static 'images/home_slider_1.jpg' %})"></div>
<div class="home_slider_content">
<div class="container">
<div class="row">
<div class="col text-center">
<div class="home_slider_title">User Login Form </div>
<div class="home_slider_subtitle"></div>
<div class="home_slider_form_container">
<p>
<center>
<form action="{% url 'UserLoginCheck' %}" method="POST" class="text-primary" style="width:100%">
{% csrf_token %}
<table>
<div class="form-group row">
<div class="col-md-12">
<input type="text"class="form-control"name="loginname"required
placeholder="Enter Login Id">
</div>
</div>
<div class="form-group row">
<div class="col-md-12">
<input type="password"class="form-control"name="pswd"required
placeholder="Enter password">
</div>
</div>

<tr>
<td>
<button class="btn btn-block btn-primary text-white py-3 px-5" style="margin-left:20%;"
type="submit">
Login
</button>
</td>
</tr>

{% if messages %}
{% for message in messages %}
<font color='GREEN'> {{ message }}</font>
{% endfor %}
{% endif %}

</table>

</form>
</center>

</p>

</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
{%endblock%}

KNN Results.html
{%extends 'users/userbase.html'%}
{%load static%}
{%block contents%}
<div class="features">
<div class="container">
<div class="row">
<div class="col">
<div class="section_title_container text-center">
<h2 class="section_title">Knn Algorithm Results</h2>
<h3>Accuracy <font color="Green">{{accuracy}}</font></h3><br/>
<h3>Classification Error <font
color="Green">{{classificationerror}}</font></h3>
<h3>Sensitivity <font color="Green">{{sensitivity}}</font></h3>
<h3>Specificity <font color="Green">{{Specificity}}</font></h3>
<h3>False Positive Rate <font color="Green">{{fsp}}</font></h3>
<h3>Precision <font color="Green">{{precision}}</font></h3>

</div>
<center>
<h2>Results table</h2>
<font color="Black">
{{data | safe}}
</font>
</center>

</div>
</div>
<div class="row features_row">

</div>
</div>
</div>
{%endblock%}
6.2 Implementation

PYTHON

Python is a general-purpose interpreted, interactive, object-oriented, high-level programming language. As an interpreted language, Python has a design philosophy that emphasizes code readability (notably using whitespace indentation to delimit code blocks rather than curly brackets or keywords) and a syntax that allows programmers to express concepts in fewer lines of code than might be needed in languages such as C++ or Java. It provides constructs that enable clear programming on both small and large scales. Python interpreters are available for many operating systems. CPython, the reference implementation of Python, is open-source software with a community-based development model, as are nearly all of its variant implementations; it is managed by the non-profit Python Software Foundation. Python features a dynamic type system and automatic memory management. It supports multiple programming paradigms, including object-oriented, imperative, functional and procedural, and has a large and comprehensive standard library.
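As a small, self-contained illustration of the points above (indentation-delimited blocks and dynamic typing; the function here is purely illustrative, not part of the project):

```python
# Indentation, not braces, delimits the blocks below; the same function
# accepts values of different types because names are dynamically typed.
def describe(value):
    if isinstance(value, str):
        return value.upper()
    return value * 2

print(describe("stress"))  # STRESS
print(describe(21))        # 42
```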
DJANGO

Django is a high-level Python web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of web development, so you can focus on writing your app without needing to reinvent the wheel. It is free and open source.
Django's primary goal is to ease the creation of complex, database-driven websites. Django emphasizes reusability and "pluggability" of components, rapid development, and the principle of don't repeat yourself. Python is used throughout, even for settings files and data models.
Django also provides an optional administrative create, read, update and delete (CRUD) interface that is generated dynamically through introspection and configured via admin models.
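As a hedged illustration of those admin models, registering a model with the admin site is enough for Django to generate the CRUD screens by introspection. This is a configuration fragment only, not runnable outside a configured Django project, and the model and field names here are hypothetical, not taken from the project's code:

```python
# admin.py -- a Django configuration fragment (hypothetical model/field names)
from django.contrib import admin
from .models import UserRegistrationModel  # hypothetical model

@admin.register(UserRegistrationModel)
class UserRegistrationAdmin(admin.ModelAdmin):
    # Columns shown in the auto-generated change list
    list_display = ('name', 'loginid', 'email', 'status')
    # Sidebar filter and search box are configured, not hand-written
    list_filter = ('status',)
    search_fields = ('name', 'email')
```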

Figure 6.2.1: Flow Diagram


7. SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test; each test type addresses a specific testing requirement.

TYPES OF TESTS

Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
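A minimal sketch of such a unit test using Python's unittest module; the valid_upload_size helper is hypothetical (loosely modeled on the 640x480 image-resolution recommendation used elsewhere in this report), not a function from the project's codebase:

```python
import unittest

def valid_upload_size(width, height):
    """Hypothetical helper: accept images at or above 640x480 for detection."""
    return width >= 640 and height >= 480

class UploadValidationTest(unittest.TestCase):
    # Each test exercises one unique path through the unit under test
    def test_accepts_recommended_resolution(self):
        self.assertTrue(valid_upload_size(640, 480))

    def test_rejects_small_images(self):
        self.assertFalse(valid_upload_size(320, 240))
```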
Integration testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event-driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage pertaining to identify Business
process flows; data fields, predefined processes, and successive processes must be considered
for testing. Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.
System Test
System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is
the configuration oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing
White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.
Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black-box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase of
the software lifecycle, although it is not uncommon for coding and unit testing to be conducted
as two distinct phases.

Testing strategies
Field testing will be performed manually and functional tests will be written in detail.

Test objectives

 All field entries must work properly.


 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.

Features to be tested

 Verify that the entries are of the correct format


 No duplicate entries should be allowed
 All links should take the user to the correct page.

Integration Testing
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company level –
interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Sample Test Cases

| S.No | Test Case | Expected Result | Result | Remarks (If Fails) |
| 1 | User Register | User registers successfully. | Pass | If the email already exists, registration fails. |
| 2 | User Login | If the username and password are correct, the user gets the valid page. | Pass | Unregistered users will not be logged in. |
| 3 | Upload An Image | Image is uploaded to the server and the detection process starts. | Pass | Images of 640x480 resolution give better results. |
| 4 | Draw Squares in Images | Detected faces are boxed with squares and the stress emotion is written. | Pass | Images must be clear to detect facial expressions. |
| 5 | Start Live Stream | The PyImage library loads the process and starts the live stream. | Pass | If the library is not available, it fails. |
| 6 | Start Deep Learning Live Stream | Depends on the system configuration and the tensorflow library. | Pass | If tensorflow is not installed, it will fail. |
| 7 | KNN Results | Load the dataset and process the KNN algorithm. | Pass | The dataset must be in the media folder. |
| 8 | Predict Train and Test Data | Predicted and original salary will be displayed. | Pass | Train and test sizes must be specified, otherwise it fails. |
| 9 | Admin Login | Admin can log in with his login credentials; on success he gets his home page. | Pass | Invalid login details are not allowed. |
| 10 | Activate Registered Users | Admin can activate the registered user id. | Pass | If the user id is not found, the user cannot log in. |
8. OUTPUT SCREENS
Task 1: Open the command prompt to execute the code.

Figure 6.2.1: Implementation

Task 2: The user must register first. While registering, a valid email address and
mobile number are required for further communication.

Figure 6.2.2: User registration form


Task 3: Admin can log in with his credentials. Once logged in, he can activate the users.
Only activated users can log in to our application.

Figure 6.2.3 : Activation form

Task 4: Once the admin activates the user, the user can log in to our system.

Figure 6.2.4 : User login form


Task 5: First, the user gives an image as input to the system; the Python library
extracts the features and the appropriate emotion from the image. If the given image
contains more than one face, all faces can be detected.
Figure 6.2.5 : Uploading Image
Task 6: The stress level is indicated by facial expressions such as sad, angry, etc.
Once image processing is completed, the live stream is started. In the live stream,
the facial expressions of more than one person can also be captured.
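The emotion-to-stress mapping described in Task 6 can be sketched as a small lookup; the emotion label set here is illustrative (based on the sad/angry examples above), not the classifier's exact category list:

```python
# Emotions treated as stress indicators -- an assumed set for illustration
STRESS_EMOTIONS = {"sad", "angry", "fear", "disgust"}

def stress_label(emotion):
    """Map a detected facial emotion to the stress flag shown on screen."""
    return "stressed" if emotion.lower() in STRESS_EMOTIONS else "not stressed"
```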

Figure 6.2.6 : Output Image


9. CONCLUSION

Stress Detection System is designed to predict stress in employees by monitoring captured
images of authenticated users, which makes the system secure. Image capturing is done
automatically at set time intervals while the authenticated user is logged in. The captured
images are used to detect the user's stress based on standard conversion and image
processing mechanisms. The system then analyzes the stress levels using Machine
Learning algorithms, which generate more efficient results.
10. FUTURE ENHANCEMENTS

Biomedical wearable sensors embedded with IoT technology are a proven combination in the
health care sector. The benefits of using such devices have positively impacted patients
and doctors alike. Early diagnosis of medical conditions, faster medical assistance by means
of remote monitoring and telecommunication, and an emergency alert mechanism to notify the
caretaker and personal doctor are a few of its advantages. The proposed work on developing a
multimodal IoT system promises to be a better health assistant by constantly monitoring a
person and providing regular feedback on stress levels. For future work, it would be
interesting to extend this work into the development of a stress detection model with
additional physiological parameters, including an activity recognition system and the
application of machine learning techniques.
11. REFERENCES

1. G. Giannakakis, D. Manousos, F. Chiarugi, "Stress and anxiety detection using facial
cues from videos," Biomedical Signal Processing and Control, vol. 31, pp. 89-101,
January 2017.
2. T. Jick and R. Payne, “Stress at work,” Journal of Management Education, vol. 5, no. 3,
pp. 50-56, 1980.
3. Nisha Raichur, Nidhi Lonakadi, Priyanka Mural, “Detection of Stress Using Image
Processing and Machine Learning Techniques”, vol.9, no. 3S, July 2017.
4. Bhattacharyya, R., & Basu, S. (2018). Retrieved from ‘The Economic Times’.
5. OSMI Mental Health in Tech Survey Dataset, 2017
6. U. S. Reddy, A. V. Thota and A. Dharun, "Machine Learning Techniques for Stress
Prediction in Working Employees," 2018 IEEE International Conference on
Computational Intelligence and Computing Research (ICCIC), Madurai, India, 2018, pp.
1-4.
7. https://www.kaggle.com/qiriro/stress
8. World Health Organization. The World Health Report, 2001.
9. URL: http://www.who.int/whr/2001/media_centre/press_release/en/.
10. Bakker, J., Holenderski, L., Kocielnik, R., Pechenizkiy, M., Sidorova, N.. Stess@ work:
From measuring stress to its understanding, prediction and handling with personalized
coaching. In: Proceedings of the 2nd ACM SIGHIT International health informatics
symposium. ACM; 2012, p. 673–678.
11. Deng, Y., Wu, Z., Chu, C.H., Zhang, Q., Hsu, D.F.. Sensor feature selection and
combination for stress identification using combinatorial fusion. International Journal of
Advanced Robotic Systems 2013;10(8):306.
12. Ghaderi, A., Frounchi, J., Farnam, A.. Machine learning-based signal processing using
physiological signals for stress detection. In: 2015 22nd Iranian Conference on
Biomedical Engineering (ICBME). 2015, p. 93–98.
13. Villarejo, M.V., Zapirain, B.G., Zorrilla, A.M.. A stress sensor based on galvanic skin
response (gsr) controlled by zigbee. Sensors 2012; 12(5):6075–6101.
14. Liu, D., Ulrich, M.. Listen to your heart: Stress prediction using consumer heart rate
sensors 2015;.
15. Nakashima, Y., Kim, J., Flutura, S., Seiderer, A., André, E.. Stress recognition in daily
work. In: International Symposium on Pervasive Computing Paradigms for Mental
Health. Springer; 2015, p. 23–33.
16. Xu, Q., Nwe, T.L., Guan, C.. Cluster-based analysis for personalized stress evaluation
using physiological signals. IEEE journal of biomedical and health informatics
2015;19(1):275
17. Tanev, G., Saadi, D.B., Hoppe, K., Sorensen, H.B.. Classification of acute stress using
linear and non-linear heart rate variability analysis derived from sternal ecg. In:
Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International
Conference of the IEEE. IEEE; 2014, p. 3386–3389.
18. Gjoreski, M., Gjoreski, H., Lustrek, M., Gams, M.. Continuous stress detection using a
wrist device: in laboratory and real life. In: Proceedings of the 2016 ACM International
Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. ACM; 2016, p.
1185– 1193.
19. Palanisamy, K., Murugappan, M., Yaacob, S.. Multiple physiological signal-based
human stress identification using non-linear classifiers.Elektronika ir elektrotechnika
2013;19(7):80–85.
20. Widanti, N., Sumanto, B., Rosa, P., Miftahudin, M.F.. Stress level detection using heart
rate, blood pressure, and gsr and stress therapy by utilizing infrared. In: Industrial
Instrumentation and Control (ICIC), 2015 International Conference on. IEEE; 2015, p.
275–279.
21. Sioni, R., Chittaro, L.. Stress detection using physiological sensors. Computer
2015;48(10):26–33.
22. Zenonos, A., Khan, A., Kalogridis, G., Vatsikas, S., Lewis, T., Sooriyabandara, M..
Healthyoffice: Mood recognition at work using smartphones and wearable sensors. In:
Pervasive Computing and Communication Workshops (PerCom Workshops), 2016 IEEE
International Conference on. IEEE; 2016, p. 1–6.
23. Selvaraj, N.. Psychological acute stress measurement using a wireless adhesive
biosensor. In: 2015 37th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBC). 2015, p. 3137–3140.
24. Vadana, D.P., Kottayil, S.K.. Energy-aware intelligent controller for dynamic energy
management on smart microgrid. In: Power and Energy Systems Conference: Towards
Sustainable Energy, 2014. IEEE; 2014, p. 1–7.
25. Rajagopalan, S.S., Murthy, O.R., Goecke, R., Rozga, A.. Play with me - measuring a
child's engagement in a social interaction. In: Automatic Face and Gesture Recognition
(FG), 2015 11th IEEE International Conference and Workshops on; vol. 1. IEEE; 2015,
p. 1–8.
26. Koldijk, S., Neerincx, M.A., Kraaij, W.. Detecting work stress in offices by combining
unobtrusive sensors. IEEE Transactions on Affective Computing 2016;.
27. Koldijk, S., Sappelli, M., Verberne, S., Neerincx, M.A., Kraaij, W.. The SWELL knowledge
work dataset for stress and user modeling research. In: Proceedings of the 16th
International Conference on Multimodal Interaction; ICMI '14. New York, NY, USA:
ACM. ISBN 978-1-4503-2885-2; 2014, p. 291–298. URL:
http://doi.acm.org/10.1145/2663204.2663257. doi:10.1145/2663204.2663257.
28. Yoder, N.. Peak finder. Internet: http://www.mathworks.com/matlabcentral/fileexchange/25500, 2011.
