Stress Detection in IT Professionals by Image Processing and Machine Learning
BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY
Submitted By
M.DIVYA (188R1A1237)
D. ABHISHEK (188R1A1212)
K. NARESH (188R1A1230)
P. SPARSHA (188R1A1245)
CERTIFICATE
This is to certify that the work reported in the present project entitled “STRESS
DETECTION IN IT PROFESSIONALS BY IMAGE PROCESSING AND MACHINE
LEARNING” is a record of bonafide work done by us in the Department of Information
Technology, CMR Engineering College, JNTU Hyderabad. The report is based on project
work done entirely by us and not copied from any other source. We submit our project for
further development by any interested students who share similar interests and wish to
improve the project in the future.
The results embodied in this project report have not been submitted to any other University or
Institute for the award of any degree or diploma, to the best of our knowledge and belief.
M.DIVYA (188R1A1237)
D.ABHISHEK (188R1A1212)
K.NARESH (188R1A1230)
P.SPARSHA (188R1A1245)
ACKNOWLEDGMENT
We are extremely grateful to Dr. A. Srinivasula Reddy, Principal, and Dr. Madhavi Pingili,
HOD, Department of IT, CMR Engineering College, for their constant support.
We would be failing in our duty if we did not acknowledge with grateful thanks the authors of
the references and other literature referred to in this project.
We express our thanks to all staff members and friends for all the help and coordination
extended in bringing out this project successfully in time.
Finally, we are very thankful to our parents, who guided us at every step.
M.DIVYA (188R1A1237)
D.ABHISHEK (188R1A1212)
K.NARESH (188R1A1230)
P.SPARSHA (188R1A1245)
CONTENTS
TOPIC PAGE NO
ABSTRACT I
LIST OF FIGURES II
1. INTRODUCTION 1
Introduction & Objectives 1
Project Objectives 3
Purpose of the Project 4
Existing System & Disadvantages 4
Proposed System with Features 5
2. LITERATURE SURVEY 6
5. SOFTWARE DESIGN 15
System Architecture 15
Dataflow Diagrams 17
E-R Diagrams 24
UML Diagrams 25
6. CODING AND IMPLEMENTATION 35
Source Code 35
Implementation 61
Project Structure 54
In Anaconda Prompt 56
7. SYSTEM TESTING 59
Types of System Testing 59
Testing Strategies 59
8. OUTPUT SCREENS 61
9. CONCLUSION 63
10. FUTURE ENHANCEMENTS 64
11. REFERENCES 65
ABSTRACT
The main motive of our project is to detect stress in IT professionals using various machine
learning and image processing techniques. Our system is an upgraded version of older stress
detection systems, which excluded live detection and personal counseling. This system
comprises live detection and periodic analysis of employees, detecting physical as well as
mental stress levels and providing proper remedies for managing stress through a periodic
survey form. Our system mainly focuses on managing stress and making the working
environment healthy and productive for the employees, so as to get the best out of them
during working hours.
LIST OF FIGURES
1. INTRODUCTION
Stress management systems play a significant role in detecting the stress levels which
disrupt our socio-economic lifestyle. As the World Health Organization (WHO) notes, stress
is a mental health problem affecting the life of one in four citizens. Human stress leads to
mental as well as socio-fiscal problems, lack of clarity in work, poor working relationships,
depression and, in severe cases, suicide. This demands that counseling be provided to help
stressed individuals cope with stress. Stress avoidance is impossible, but preventive action
helps to overcome it. Currently, only medical and physiological experts can determine
whether one is in a depressed (stressed) state or not. One traditional method of detecting
stress is based on a questionnaire; this method depends completely on the answers given by
the individuals, and people are often reluctant to say whether they are stressed or normal.
Automatic detection of stress minimizes the risk of health issues and improves the welfare
of society. This paves the way for a scientific tool which uses physiological signals to
automate the detection of stress levels in individuals. Stress detection is discussed in
various works of literature, as it is a significant societal contribution that enhances the
lifestyle of individuals. Ghaderi et al. analysed stress using respiration, heart rate (HR),
facial electromyography (EMG), and galvanic skin response (GSR) foot and hand data,
concluding that features pertaining to the respiration process are substantial in stress
detection. Maria Viqueira et al. describe mental stress prediction using standalone
stress-sensing hardware with GSR as the only physiological sensor. David Liu et al.
proposed research to predict stress levels solely from the electrocardiogram (ECG). The
efficacy of multimodal sensors for detecting stress in working people has also been
discussed experimentally, employing data from sensors such as pressure distribution, HR,
blood volume pulse (BVP) and electrodermal activity (EDA); an eye-tracker sensor is also
used, which systematically analyses eye movements under stressors such as the Stroop word
test and information-pickup tasks. Perceived stress detection has been performed with a set
of non-invasive sensors collecting physiological signals such as ECG, GSR,
electroencephalography (EEG), EMG, and saturation of peripheral oxygen (SpO2).
Continuous stress levels have been estimated from physiological sensor data such as GSR,
EMG, HR and respiration. Stress detection has also been carried out effectively using skin
conductance level (SCL), HR and facial EMG sensors by creating ICT-related stressors.
Automated stress detection is made possible by several pattern recognition algorithms:
every sensor reading is compared with a stress index, a threshold value used for detecting
the stress level. One study collected data from 16 individuals under four stressor
conditions, tested with a Bayesian network, the J48 algorithm and the Sequential Minimal
Optimization (SMO) algorithm for predicting stress. Statistical features of heart rate and
GSR, frequency-domain features of heart rate and its variability (HRV), and the power
spectral components of the ECG were used to determine the stress levels. Various features
are extracted from commonly used physiological signals such as ECG, EMG, GSR and
BVP, measured with appropriate sensors, and the selected features are grouped into clusters
for further detection of anxiety levels.
It has further been concluded that smaller clusters result in better balance in stress detection
using the selected General Regression Neural Network (GRNN) model; different
combinations of the features extracted from the sensor signals thus provide better solutions
for predicting the continuous anxiety level. Frequency-domain features such as LF power
(low-frequency power from 0.04 Hz to 0.15 Hz), HF power (high-frequency power from
0.15 Hz to 0.4 Hz) and the LF/HF ratio, together with time-domain features such as the
mean, median and standard deviation of the heart signal, are considered for continuous
real-time stress detection. Decision-tree classification such as PLDA has been performed
using two stressors, a pickup task and a Stroop-based word test, with the authors concluding
that the stressor-based classification proves unsatisfactory. In 2016, Gjoreski et al. created
laboratory-based stress detection classifiers from the ECG signal and HRV features. ECG
features are analysed using a GRNN model to measure the stress level, while HRV features
and RR-interval features (the interval length between two successive R peaks) are used to
classify the stress level. It is noticed that the Support Vector Machine (SVM) was used
predominantly as the classification algorithm due to its generalization ability and sound
mathematical background. Various kernels were used to develop SVM models, and it was
concluded that a linear SVM on both ECG frequency features and HRV features performed
best, outperforming other model choices.
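The LF and HF power bands described above can be computed directly from a heart rate variability series. A minimal NumPy sketch follows; the band limits are taken from the text, while the sampling rate and test signal are illustrative:

```python
import numpy as np

def band_power(signal, fs, band):
    """Sum the FFT power of `signal` (sampled at fs Hz) over a frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum()

def lf_hf_ratio(hrv_signal, fs):
    """LF/HF ratio with LF = 0.04-0.15 Hz and HF = 0.15-0.4 Hz, as in the text."""
    lf = band_power(hrv_signal, fs, (0.04, 0.15))
    hf = band_power(hrv_signal, fs, (0.15, 0.4))
    return lf / hf

# Toy example: a series dominated by a 0.3 Hz (HF-band) oscillation,
# with a small 0.1 Hz (LF-band) component
fs = 4.0                        # 4 Hz resampled HRV series
t = np.arange(0, 120, 1 / fs)   # two minutes of data
sig = np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.sin(2 * np.pi * 0.1 * t)
ratio = lf_hf_ratio(sig, fs)    # HF power dominates, so the ratio is small
```

A low LF/HF ratio here simply reflects the constructed signal; in practice the ratio is computed on an evenly resampled RR-interval series.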
Nowadays, IT industries are setting new peaks in the market by bringing new technologies
and products. Alongside this growth, stress levels among employees are also observed to
rise. Though many organizations provide mental health related schemes for their employees,
the issue is far from control. In this project we go deeper into this problem by detecting
stress patterns in working employees in companies: we apply image processing and machine
learning techniques to analyze those patterns and to narrow down the factors that strongly
determine the stress levels. Machine learning algorithms such as KNN classifiers are applied
to classify stress. Image processing is used at the initial stage of detection: the employee's
image is captured by the camera and serves as input. To obtain an enhanced image or to
extract useful information from it, image processing converts the image into digital form
and performs operations on it. The input is an image taken from video frames, and the
output may be an image or characteristics associated with that image. Image processing
basically includes the following three steps:
• Importing the image via image acquisition tools.
• Analyzing and manipulating the image.
• Output, in which the result is an altered image or a report based on the image analysis.
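The three steps above can be sketched as follows. This is a NumPy-only illustration with a synthetic frame standing in for the camera image; in the implemented system the frame would come from the webcam (e.g. via OpenCV):

```python
import numpy as np

# Step 1 (acquisition): stand in for a camera frame with a synthetic RGB image.
frame = np.random.randint(0, 256, size=(48, 48, 3), dtype=np.uint8)

# Step 2 (analysis/manipulation): convert to grayscale with the standard
# luminance weights, then normalize to [0, 1] for a downstream classifier.
weights = np.array([0.299, 0.587, 0.114])
gray = frame @ weights            # (48, 48) float array
normalized = gray / 255.0

# Step 3 (output): the processed image, or characteristics derived from it,
# here a simple brightness statistic.
mean_brightness = normalized.mean()
```

The 48x48 size matches the input resolution commonly used by facial expression models, but is an assumption here.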
Machine learning, an application of artificial intelligence (AI), gives a system the ability to
learn and improve automatically from experience without being explicitly programmed.
Machine learning is used to develop computer programs that can access data and learn from
it themselves: instead of explicit programming for the task, a mathematical model is built
from "training data" and used to make predictions or decisions. Image mining extracts
hidden data, associations between image data, and additional patterns that are not clearly
visible in an image. It is an interrelated field that involves image processing, data mining,
machine learning and datasets. According to conservative estimates in medical books,
50-80% of all physical diseases are caused by stress. Stress is believed to be the principal
cause in cardiovascular diseases, and can place one at higher risk for diabetes, ulcers,
asthma, migraine headaches, skin disorders, epilepsy, and sexual dysfunction. Each of these
diseases, and a host of others, is psychosomatic in nature (i.e., either caused or exaggerated
by mental conditions such as stress). Stress has three-pronged effects:
• Subjective effects of stress include feelings of guilt, shame, anxiety, aggression or
frustration. Individuals also feel tired, tense, nervous, irritable, moody, or lonely.
• Behavioral effects of stress are visible changes in a person's behavior, such as increased
accidents, use of drugs or alcohol, laughter out of context, outlandish or argumentative
behavior, very excitable moods, and/or eating or drinking to excess.
• Cognitive effects of stress include diminished mental ability, impaired judgment, rash
decisions, forgetfulness and/or hypersensitivity to criticism.
Project Objectives
1. To predict stress in a person from the symptoms computed by monitoring.
2. To analyze the stress levels in the employee.
3. To provide solutions and remedies for the person to recover from his/her stress.
In the existing system, work on stress detection is based on digital signal processing, taking
into consideration galvanic skin response, blood volume, pupil dilation and skin
temperature. Other work on this issue is based on several physiological signals and visual
features (eye closure, head movement) to monitor the stress in a person while he is working.
However, these measurements are intrusive and less comfortable in real application. Every
sensor reading is compared with a stress index, a threshold value used for detecting the
stress level.
Disadvantages
The sensor-based measurements described above are intrusive and less comfortable in real
application.
Proposed System
Machine learning algorithms such as KNN classifiers are applied to classify stress. Image
processing is used at the initial stage of detection: the employee's image, supplied through
the browser, serves as input. To obtain an enhanced image or to extract useful information
from it, image processing converts the image into digital form and performs operations on
it. The input is an image, and the output may be an image or characteristics associated with
that image. The detected emotions are displayed in a bounding box, with the stress level
indicated by the emotions Angry, Disgusted, Fearful and Sad.
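The mapping from a detected emotion to a stress indication described above can be sketched as follows; the function names are illustrative, not the project's actual code:

```python
# Hypothetical mapping from a detected facial emotion label to a stress flag,
# following the text: Angry, Disgusted, Fearful and Sad indicate stress.
STRESS_EMOTIONS = {"angry", "disgusted", "fearful", "sad"}

def is_stressed(emotion: str) -> bool:
    """Return True when the detected emotion is one the system treats as stress."""
    return emotion.strip().lower() in STRESS_EMOTIONS

def stress_report(emotions):
    """Summarize a sequence of per-frame emotion labels into a stress ratio."""
    flags = [is_stressed(e) for e in emotions]
    return sum(flags) / len(flags) if flags else 0.0
```

Aggregating per-frame labels into a ratio, rather than flagging single frames, smooths out transient misclassifications during a live stream.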
Advantages
OBJECTIVES
1. Input design is the process of converting a user-oriented description of the input into
a computer-based system. This design is important to avoid errors in the data input
process and to show the management the correct direction for getting correct
information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle large
volumes of data. The goal of designing input is to make data entry easier and
error-free. The data entry screen is designed in such a way that all data
manipulations can be performed; it also provides record-viewing facilities.
3. When data is entered, it is checked for validity. Data can be entered with the help of
screens, and appropriate messages are provided as needed so that the user is never
left in a maze of instructions. Thus the objective of input design is to create an input
layout that is easy to follow.
OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users
and to other systems through outputs. In output design, it is determined how the information
is to be displayed for immediate need, and also the hard-copy output. It is the most
important and direct source of information for the user. Efficient and intelligent output
design improves the system's relationship with the user and helps decision-making.
1. Designing computer output should proceed in an organized, well-thought-out manner:
the right output must be developed while ensuring that each output element is designed
so that people will find the system easy and effective to use. When analysts design
computer output, they should identify the specific output that is needed to meet the
requirements.
2. Select methods for presenting information.
3. Create documents, reports, or other formats that contain information produced by the
system.
The output form of an information system should accomplish one or more of the following
objectives:
Convey information about past activities, current status or projections of the future.
Signal important events, opportunities, problems, or warnings.
Trigger an action.
Confirm an action.
2. LITERATURE SURVEY
This study develops a framework for the detection and analysis of stress/anxiety emotional
states through video-recorded facial cues. A thorough experimental protocol was established
to induce systematic variability in affective states (neutral, relaxed and stressed/anxious)
through a variety of external and internal stressors. The analysis focused mainly on
non-voluntary and semi-voluntary facial cues in order to estimate the emotion representation
more objectively. Features under investigation included eye-related events, mouth activity,
head motion parameters and heart rate estimated through camera-based
photoplethysmography. A feature selection procedure was employed to select the most
robust features, followed by classification schemes discriminating between stress/anxiety
and neutral states with reference to a relaxed state in each experimental phase. In addition, a
ranking transformation was proposed utilizing self-reports in order to investigate the
correlation of facial parameters with a participant's perceived amount of stress/anxiety. The
results indicated that specific facial cues derived from eye activity, mouth activity, head
movements and camera-based heart activity achieve good accuracy and are suitable as
discriminative indicators of stress and anxiety.
Stress is a part of life; it is an unpleasant state of emotional arousal that people experience
in situations like working for long hours in front of a computer. Computers have become a
way of life: much of life is spent on them, and hence we are more affected by the ups and
downs they cause us. One cannot completely avoid working on computers, but one can at
least control the usage when alerted about being stressed at a certain point of time.
Monitoring the emotional status of a person who works in front of a computer for long
durations is crucial for that person's safety. In this work, real-time non-intrusive videos are
captured, and the emotional status of a person is detected by analysing the facial
expression. An individual emotion is detected in each video frame, and the decision on the
stress level is made over sequential hours of the captured video. A technique is employed
that allows a model to be trained and differences in predicting the features to be analyzed.
Theano is a Python framework which aims at improving both the execution time and the
development time of the linear regression model, used here as the deep learning algorithm.
The experimental results show that the developed system performs well on data with a
generic model of all ages.
Stress disorders are a common issue among working IT professionals in the industry today. With
changing lifestyle and work cultures, there is an increase in the risk of stress among the
employees. Though many industries and corporates provide mental health related schemes and
try to ease the workplace atmosphere, the issue is far from control. In this paper, we would like
to apply machine learning techniques to analyze stress patterns in working adults and to narrow
down the factors that strongly determine the stress levels. Towards this, data from the OSMI
mental health survey 2017 responses of working professionals within the tech-industry was
considered. Various Machine Learning techniques were applied to train our model after due data
cleaning and preprocessing. The accuracy of the above models was obtained and studied
comparatively. Boosting had the highest accuracy among the models implemented. By using
Decision Trees, prominent features that influence stress were identified as gender, family history
and availability of health benefits in the workplace. With these results, industries can now
narrow down their approach to reduce stress and create a much more comfortable workplace
for their employees.
4. Classification of acute stress using linear and non-linear heart rate variability
analysis derived from sternal ECG
Chronic stress detection is an important factor in predicting and reducing the risk of
cardiovascular disease. This work is a pilot study with a focus on developing a method for
detecting short-term psychophysiological changes through heart rate variability (HRV) features.
The purpose of this pilot study is to establish and to gain insight on a set of features that could be
used to detect psychophysiological changes that occur during chronic stress. This study elicited
four different types of arousal by images, sounds, mental tasks and rest, and classified them
using linear and non-linear HRV features from electrocardiograms (ECG) acquired by the
wireless wearable ePatch® recorder. The highest recognition rates were acquired for the
neutral stage (90%), the
acute stress stage (80%) and the baseline stage (80%) by sample entropy, detrended fluctuation
analysis and normalized high frequency features. Standardizing non-linear HRV features for
each subject was found to be an important factor for the improvement of the classification
results.
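The time-domain HRV features discussed in this paper, computed from RR intervals, can be sketched as follows; SDNN and RMSSD are the standard short-term measures, and the sample values are illustrative:

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Standard time-domain HRV features from RR intervals in milliseconds:
    SDNN (standard deviation of the intervals) and RMSSD (root mean square
    of successive differences)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return {"sdnn": sdnn, "rmssd": rmssd, "mean_hr_bpm": 60000.0 / rr.mean()}

# Five illustrative RR intervals around 800 ms (~75 bpm)
features = hrv_time_features([800, 810, 790, 820, 805])
```

Non-linear features such as sample entropy and detrended fluctuation analysis, which the study found most discriminative, build on the same RR series but are omitted here for brevity.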
Problem Specification
According to conservative estimates in medical books, 50-80% of all physical diseases are
caused by stress. Stress is believed to be the principal cause in cardiovascular diseases, and
can place one at higher risk for diabetes, ulcers, asthma, migraine headaches, skin disorders,
epilepsy, and sexual dysfunction. So in this project we try to detect stress patterns in
working employees in companies by applying image processing and machine learning
techniques to analyze the stress patterns and to narrow down the factors that strongly
determine the stress levels.
1. USER:
The user must register first. During registration, a valid email and mobile number are
required for further communication. Once the user registers, the admin activates the
account; only then can the user log in to the system. First, the user gives an image as input
to the system. The Python library extracts the features and the appropriate emotion of the
image; if the given image contains more than one face, all faces can be detected. The stress
level is indicated by facial expressions such as sad, angry, etc. Once image processing is
complete, the live stream is started; in the live stream, the facial expressions of more than
one person can also be obtained. Compared with the plain live stream, the TensorFlow live
stream is faster and gives better results. Finally, the dataset is loaded to compute the KNN
classification accuracy and precision scores.
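The KNN classification step mentioned above can be sketched in plain NumPy; the project's own KNNclassifier helper is replaced here by an illustrative from-scratch version with toy data:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test row by majority vote among its k nearest
    (Euclidean) training neighbours."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(dists)[:k]
        votes = Counter(y_train[nearest])
        preds.append(votes.most_common(1)[0][0])
    return np.array(preds)

# Toy stressed (1) vs. not-stressed (0) feature vectors
X_train = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.1], [0.9, 1.0]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.05, 0.05], [1.0, 1.0]])

y_pred = knn_predict(X_train, y_train, X_test, k=3)
accuracy = (y_pred == np.array([0, 1])).mean()
```

Accuracy, precision and the other scores reported by the system are all derived from comparing `y_pred` with held-out labels in this way.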
2. ADMIN:
The admin can log in with his credentials. Once logged in, he can activate users; only
activated users can log in to the application. The admin can set the training and testing data
for the project dynamically in the code. The admin can view all users' detected results in his
frame; by clicking a hyperlink on the screen he can detect the emotions of the images. The
admin can also view the KNN classification results. The dataset is in Excel format, and
authorized persons can increase the dataset size according to the required values.
3. DATA PREPROCESSING:
The dataset is shown as a grid view of already stored data with numerous properties.
Through feature extraction, a newly designed dataset is produced that contains only
numerical input variables, the result of Principal Component Analysis feature selection
transforming the data into six principal components: Condition (No stress, Time pressure,
Interruption), Stress, Physical Demand, Performance and Frustration.
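The Principal Component Analysis step described above can be sketched via the singular value decomposition; the data shape here is illustrative, while the component count of six follows the text:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project data onto its top principal components via the SVD of the
    mean-centred matrix, which is what PCA feature reduction does."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))   # 50 samples, 10 raw sensor features (illustrative)
X6 = pca_reduce(X, 6)           # reduced to 6 principal components
```

The leading components capture the most variance, so each successive column of `X6` explains no more variance than the one before it.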
4. MACHINE LEARNING:
a. Functional Requirements
It provides the users a clear statement of the functions required for the system in order to solve the
information problem of the project. It contains a complete set of requirements for the application.
A requirement is a condition that the application must meet for the customer to find the application
satisfactory. A requirement has the following characteristics:
1. It provides a benefit to the organization.
2. It describes the capabilities the application must provide in business terms.
3. It does not describe how the application provides that capability.
4. It is stated in unambiguous words; its meaning is clear and understandable.
5. It is verifiable.
b. Non Functional Requirements
As with safety-critical non-functional requirements, architects have to consider various
worst-case scenarios to determine safety features and their optimum implementation (size,
scope and number of examples). Likewise, with today's IT projects, to determine
non-functional requirements such as availability, the approach requires that the designer
first determine the scope: does the whole solution, or only part of it, need to be architected
to meet minimum levels?
1. Identify the critical areas of the solution.
2. Identify the critical components within each critical area.
3. Determine each component's availability and risk.
4. Model worst-case failure scenarios.
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth
with a very general plan for the project and some cost estimates. During system analysis,
the feasibility study of the proposed system is carried out. This is to ensure that the
proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential.
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and
development of the system is limited, so the expenditures must be justified. The developed
system is well within the budget, and this was achieved because most of the technologies
used are freely available; only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements
of the system. Any system developed must not place a high demand on the available
technical resources, as this would lead to high demands being placed on the client. The
developed system must therefore have modest requirements, with only minimal or no
changes required for implementing it.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system, but must instead accept it as a necessity. The level of acceptance
by the users depends solely on the methods employed to educate the user about the system
and to make him familiar with it. His level of confidence must be raised so that he is also
able to make some constructive criticism, which is welcomed, as he is the final user of the
system.
4. SOFTWARE AND HARDWARE REQUIREMENTS
Software Requirements
Hardware Requirements
System Architecture
The methodology consists of the following steps:
1. Data Collection
2. Image Acquisition
3. Image Pre-Processing
4. Segmentation
5. Feature Extraction
6. Classification
7. Testing
8. Stress prediction
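The steps above can be sketched as a single pipeline of stub functions; every function body here is an illustrative stand-in for the real implementation:

```python
# Hypothetical end-to-end pipeline mirroring the methodology steps.
def acquire_image(frame):           # 2. Image acquisition
    return frame

def preprocess(image):              # 3. Pre-processing (clip pixel values)
    return [[min(max(px, 0), 255) for px in row] for row in image]

def segment(image):                 # 4. Segmentation: isolate the face region
    return image                    # identity stand-in

def extract_features(image):        # 5. Feature extraction
    flat = [px for row in image for px in row]
    return [sum(flat) / len(flat)]  # one toy feature: mean intensity

def classify(features):             # 6. Classification (KNN in this project)
    return "stressed" if features[0] > 128 else "not stressed"

def pipeline(frame):                # 7./8. Testing and stress prediction
    return classify(extract_features(segment(preprocess(acquire_image(frame)))))

label = pipeline([[200, 210], [190, 205]])
```

The threshold-on-mean-intensity classifier is purely a placeholder; the real system classifies extracted facial features with KNN.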
4. Data Store: Here data are stored or referenced by a process in the system.
Figure 5.2 Data flow diagram
Use Case
Draw use cases using ovals. Label the ovals with verbs that represent the system's
functions.
Actors
Actors are the users of a system. When one system is the actor of another system,
label the actor system with the actor stereotype.
Relationships
Illustrate relationships between an actor and a use case with a simple line. For
relationships among use cases, use arrows labelled either "uses" or "extends." A "uses"
relationship indicates that one use case is needed by another in order to perform a task.
Activity Diagram
An activity diagram illustrates the dynamic nature of a system by modeling the flow
of control from activity to activity. An activity represents an operation on some class in the
system that results in a change in the state of the system. Typically, activity diagrams are used
to model workflow or business processes and internal operation. Because an activity diagram
is a special kind of statechart diagram, it uses some of the same modeling conventions.
Basic Activity Diagram Symbols and Notations
Action states
Action states represent the non-interruptible actions of objects. An action state is drawn as
a rectangle with rounded corners.
Action Flow
Action flow arrows illustrate the relationships among action states.
Object Flow
Object flow refers to the creation and modification of objects by activities. An object
flow arrow from an action to an object means that the action creates or influences the object.
An object flow arrow from an object to an action indicates that the action state uses the
object.
Initial State
A filled circle followed by an arrow represents the initial action state.
Final State
An arrow pointing to a filled circle nested inside another circle represents the final
action state.
Branching
A diamond represents a decision with alternate paths. The outgoing alternates should be
labelled with a condition or guard expression.
Swim lanes
Swim lanes group related activities into one column.
Synchronization
A synchronization bar helps illustrate parallel transitions. Synchronization is also
called forking and joining.
Interface
An interface describes a group of operations used or created by components.
Dependencies
Draw dependencies among components using dashed arrows.
Node
A node is a physical resource that executes code components.
Association
Association refers to a physical connection between nodes, such as Ethernet.
Activation
Activation boxes represent the time an object needs to complete a task.
Messages
Messages are arrows that represent communication between objects. Use half-arrowed
lines to represent asynchronous messages. Asynchronous messages are sent from an object
that will not wait for a response from the receiver before continuing its tasks.
Lifelines
Lifelines are vertical dashed lines that indicate the object's presence over time.
Destroying Objects
Objects can be terminated early using an arrow labeled "<< destroy >>" that points to
an X.
Loops
A repetition or loop within a sequence diagram is depicted as a rectangle. Place the
condition for exiting the loop at the bottom left corner in square brackets.
5.3. E-R Diagram
ER Diagram stands for Entity Relationship Diagram; also known as an ERD, it is a diagram
that displays the relationships of the entity sets stored in a database. In other words, ER
diagrams help to explain the logical structure of databases. ER diagrams are created based
on three basic concepts: entities, attributes and relationships.
ER diagrams contain different symbols: rectangles to represent entities, ovals to define
attributes and diamond shapes to represent relationships.
[Sequence diagram messages: 1: Register(), 2: Activate(), 3: Upload Images(),
9: Load Dataset(), 10: Apply KNN Algorithm()]
Admin: activates the user's login credentials.
System: the image is uploaded from the user's portal; the results are stored in the database;
the response is sent to the user; the live stream is started; the KNN algorithm is applied; the
result is sent to the user.
Result: the test results and the facial expression are detected.
[Use case diagram elements: Login, Upload Image, Stress Emotions, Live Stream,
KNN Results, Activate Users; actors: Admin, Users]
Figure 5.4.2 Use case diagram
Actors:
User
Admin
Use cases:
o The user needs to log in.
o The user's login credentials are activated by the admin.
o The user needs to upload his/her image.
o Stress emotions are displayed.
o The live stream can be enabled.
o Results are displayed using KNN.
Activity Diagram
The activity diagram is another important diagram in UML for describing the dynamic
aspects of the system. It is basically a flowchart representing the flow from one activity to
another. This flow can be sequential, branched, or concurrent, and activity diagrams deal
with all types of flow control by using different elements such as fork, join, etc.
[Activity diagram: the user uploads an image and views the image results, detected images,
the deep learning live stream and the KNN results; the admin activates users and views the
KNN results.]
Source code
def UserHome(request):
    return render(request, 'users/UserHome.html', {})

def UploadImageForm(request):
    loginid = request.session['loginid']
    data = UserImagePredictinModel.objects.filter(loginid=loginid)
    return render(request, 'users/UserImageUploadForm.html', {'data': data})

def UploadImageAction(request):
    image_file = request.FILES['file']
    fs = FileSystemStorage()
    filename = fs.save(image_file.name, image_file)
    uploaded_file_url = fs.url(filename)
    obj = ImageExpressionDetect()
    emotion = obj.getExpression(filename)
    username = request.session['loggeduser']
    loginid = request.session['loginid']
    email = request.session['email']
    UserImagePredictinModel.objects.create(username=username, email=email, loginid=loginid,
                                           filename=filename, emotions=emotion, file=uploaded_file_url)
    data = UserImagePredictinModel.objects.filter(loginid=loginid)
    return render(request, 'users/UserImageUploadForm.html', {'data': data})
def UserEmotionsDetect(request):
    if request.method == 'GET':
        imgname = request.GET.get('imgname')
        obj = ImageExpressionDetect()
        emotion = obj.getExpression(imgname)
    loginid = request.session['loginid']
    data = UserImagePredictinModel.objects.filter(loginid=loginid)
    return render(request, 'users/UserImageUploadForm.html', {'data': data})

def UserLiveCameDetect(request):
    obj = ImageExpressionDetect()
    obj.getLiveDetect()
    return render(request, 'users/UserLiveHome.html', {})

def UserKerasModel(request):
    # p = Popen(["python", "kerasmodel.py --mode display"], cwd='StressDetection', stdout=PIPE, stderr=PIPE)
    # out, err = p.communicate()
    subprocess.call("python kerasmodel.py --mode display")
    return render(request, 'users/UserLiveHome.html', {})
def UserKnnResults(request):
    obj = KNNclassifier()
    df, accuracy, classificationerror, sensitivity, Specificity, fsp, precision = obj.getKnnResults()
    df.rename(columns={'Target': 'Target', 'ECG(mV)': 'Time pressure', 'EMG(mV)': 'Interruption',
                       'Foot GSR(mV)': 'Stress', 'Hand GSR(mV)': 'Physical Demand',
                       'HR(bpm)': 'Performance', 'RESP(mV)': 'Frustration'}, inplace=True)
    data = df.to_html()
    return render(request, 'users/UserKnnResults.html',
                  {'data': data, 'accuracy': accuracy, 'classificationerror': classificationerror,
                   'sensitivity': sensitivity, 'Specificity': Specificity, 'fsp': fsp,
                   'precision': precision})
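The getKnnResults call above returns accuracy, classification error, sensitivity, specificity, false positive rate and precision. As an illustration of how such metrics can be derived from a binary confusion matrix, here is a sketch using scikit-learn; the helper name and its implementation are assumptions for illustration, not the project's actual KNNclassifier:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

def knn_metrics(X_train, y_train, X_test, y_test, k=5):
    """Fit a k-nearest-neighbours classifier and compute the report's metrics."""
    model = KNeighborsClassifier(n_neighbors=k)
    model.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    accuracy = (tp + tn) / (tn + fp + fn + tp)
    return {
        'accuracy': accuracy,
        'classification_error': 1 - accuracy,
        'sensitivity': tp / (tp + fn),           # true positive rate
        'Specificity': tn / (tn + fp),           # true negative rate
        'false_positive_rate': fp / (fp + tn),
        'precision': tp / (tp + fp),
    }
```

The returned dictionary maps directly onto the template variables rendered in UserKnnResults.html.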
class UserRegistrationForm(forms.ModelForm):
    name = forms.CharField(widget=forms.TextInput(attrs={'pattern': '[a-zA-Z]+'}),
                           required=True, max_length=100)
    loginid = forms.CharField(widget=forms.TextInput(attrs={'pattern': '[a-zA-Z]+'}),
                              required=True, max_length=100)
    password = forms.CharField(widget=forms.PasswordInput(attrs={
        'pattern': '(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{8,}',
        'title': 'Must contain at least one number and one uppercase and lowercase letter, and at least 8 or more characters'}),
        required=True, max_length=100)
    mobile = forms.CharField(widget=forms.TextInput(attrs={'pattern': '[56789][0-9]{9}'}),
                             required=True, max_length=100)
    email = forms.CharField(widget=forms.TextInput(attrs={'pattern': '[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}$'}),
                            required=True, max_length=100)
    locality = forms.CharField(widget=forms.TextInput(), required=True, max_length=100)
    address = forms.CharField(widget=forms.Textarea(attrs={'rows': 4, 'cols': 22}),
                              required=True, max_length=250)
    city = forms.CharField(widget=forms.TextInput(
        attrs={'autocomplete': 'off', 'pattern': '[A-Za-z ]+', 'title': 'Enter Characters Only '}),
        required=True, max_length=100)
    state = forms.CharField(widget=forms.TextInput(
        attrs={'autocomplete': 'off', 'pattern': '[A-Za-z ]+', 'title': 'Enter Characters Only '}),
        required=True, max_length=100)
    status = forms.CharField(widget=forms.HiddenInput(), initial='waiting', max_length=100)

    class Meta:
        model = UserRegistrationModel
        fields = '__all__'

# Tail of the UserRegistrationModel definition (fields not shown in this listing):
    class Meta:
        db_table = 'UserRegistrations'
class UserImagePredictinModel(models.Model):
    username = models.CharField(max_length=100)
    email = models.CharField(max_length=100)
    loginid = models.CharField(max_length=100)
    filename = models.CharField(max_length=100)
    emotions = models.CharField(max_length=100000)
    file = models.FileField(upload_to='files/')
    cdate = models.DateTimeField(auto_now_add=True)

    class Meta:
        db_table = "UserImageEmotions"
Image Classification:
from django.conf import settings
from PyEmotion import *
import cv2 as cv

class ImageExpressionDetect:
    def getExpression(self, imagepath):
        filepath = settings.MEDIA_ROOT + "\\" + imagepath
        PyEmotion()
        er = DetectFace(device='cpu', gpu_id=0)
        # Alternatively, open your default camera:
        # cap = cv.VideoCapture(0)
        # ret, frame = cap.read()
        # img = cv.imread('test.jpg')
        frame, emotion = er.predict_emotion(cv.imread(filepath))
        cv.imshow('Alex Corporation', frame)
        cv.waitKey(0)
        print("Hola Hi", filepath, "Emotion is ", emotion)
        return emotion

    def getLiveDetect(self):
        print("Streaming Started")
        PyEmotion()
        er = DetectFace(device='cpu', gpu_id=0)
        # Open your default camera
        cap = cv.VideoCapture(0)
        while True:
            ret, frame = cap.read()
            frame, emotion = er.predict_emotion(frame)
            cv.imshow('Press Q to Exit', frame)
            if cv.waitKey(1) & 0xFF == ord('q'):
                break
        cap.release()
        cv.destroyAllWindows()
Deep Learning Model:
import numpy as np
import argparse
import cv2
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Flatten
from keras.layers.convolutional import Conv2D
from keras.optimizers import Adam
from keras.layers.pooling import MaxPooling2D
from keras.preprocessing.image import ImageDataGenerator
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import matplotlib as mpl
mpl.use('TkAgg')
import matplotlib.pyplot as plt

def plot_model_history(model_history):
    """Plot accuracy and loss curves given the model_history."""
    fig, axs = plt.subplots(1, 2, figsize=(15, 5))
    # summarize history for accuracy
    axs[0].plot(range(1, len(model_history.history['acc']) + 1), model_history.history['acc'])
    axs[0].plot(range(1, len(model_history.history['val_acc']) + 1), model_history.history['val_acc'])
    axs[0].set_title('Model Accuracy')
    axs[0].set_ylabel('Accuracy')
    axs[0].set_xlabel('Epoch')
    axs[0].set_xticks(np.arange(1, len(model_history.history['acc']) + 1),
                      len(model_history.history['acc']) / 10)
    axs[0].legend(['train', 'val'], loc='best')
    # summarize history for loss
    axs[1].plot(range(1, len(model_history.history['loss']) + 1), model_history.history['loss'])
    axs[1].plot(range(1, len(model_history.history['val_loss']) + 1), model_history.history['val_loss'])
    axs[1].set_title('Model Loss')
    axs[1].set_ylabel('Loss')
    axs[1].set_xlabel('Epoch')
    axs[1].set_xticks(np.arange(1, len(model_history.history['loss']) + 1),
                      len(model_history.history['loss']) / 10)
    axs[1].legend(['train', 'val'], loc='best')
    fig.savefig('plot.png')
    plt.show()

# Define data generators
train_dir = 'data/train'
val_dir = 'data/test'
num_train = 28709
num_val = 7178
batch_size = 64
num_epoch = 50

train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(48, 48),
    batch_size=batch_size,
    color_mode="grayscale",
    class_mode='categorical')
validation_generator = val_datagen.flow_from_directory(
    val_dir,
    target_size=(48, 48),
    batch_size=batch_size,
    color_mode="grayscale",
    class_mode='categorical')

# Create the model
model = Sequential()
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))
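Note that the Sequential model above goes straight from Flatten to Dense layers even though Conv2D and MaxPooling2D are imported, so the convolutional stack appears to have been lost in the report's formatting. A typical stack for 48x48 grayscale input is sketched below; it follows a common FER-2013 architecture and is not necessarily this project's exact layer configuration:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
# Convolutional feature extractor over 48x48 grayscale face crops
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(48, 48, 1)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# Classifier head matching the listing above: 7 emotion classes
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))
```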
# If you want to train the same model or try other models, go for this
if mode == "train":
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(lr=0.0001, decay=1e-6),
                  metrics=['accuracy'])
    model_info = model.fit_generator(
        train_generator,
        steps_per_epoch=num_train // batch_size,
        epochs=num_epoch,
        validation_data=validation_generator,
        validation_steps=num_val // batch_size)
    plot_model_history(model_info)
    model.save_weights('model.h5')
# emotions will be displayed on your face from the webcam feed
elif mode == "display":
    model.load_weights('model.h5')
    cap.release()
    cv2.destroyAllWindows()
def AdminLoginCheck(request):
    if request.method == 'POST':
        usrid = request.POST.get('loginid')
        pswd = request.POST.get('pswd')
        print("User ID is = ", usrid)
        if usrid == 'admin' and pswd == 'admin':
            return render(request, 'admins/AdminHome.html')
        elif usrid == 'Admin' and pswd == 'Admin':
            return render(request, 'admins/AdminHome.html')
        else:
            messages.success(request, 'Please Check Your Login Details')
    return render(request, 'AdminLogin.html', {})

def AdminHome(request):
    return render(request, 'admins/AdminHome.html')

def ViewRegisteredUsers(request):
    data = UserRegistrationModel.objects.all()
    return render(request, 'admins/RegisteredUsers.html', {'data': data})

def AdminActivaUsers(request):
    if request.method == 'GET':
        id = request.GET.get('uid')
        status = 'activated'
        print("PID = ", id, status)
        UserRegistrationModel.objects.filter(id=id).update(status=status)
    data = UserRegistrationModel.objects.all()
    return render(request, 'admins/RegisteredUsers.html', {'data': data})

def AdminStressDetected(request):
    data = UserImagePredictinModel.objects.all()
    return render(request, 'admins/AllUsersStressView.html', {'data': data})

def AdminKNNResults(request):
    obj = KNNclassifier()
    df, accuracy, classificationerror, sensitivity, Specificity, fsp, precision = obj.getKnnResults()
    df.rename(columns={'Target': 'Target', 'ECG(mV)': 'Time pressure', 'EMG(mV)': 'Interruption',
                       'Foot GSR(mV)': 'Stress', 'Hand GSR(mV)': 'Physical Demand',
                       'HR(bpm)': 'Performance', 'RESP(mV)': 'Frustration'}, inplace=True)
    data = df.to_html()
    return render(request, 'admins/AdminKnnResults.html',
                  {'data': data, 'accuracy': accuracy, 'classificationerror': classificationerror,
                   'sensitivity': sensitivity, 'Specificity': Specificity, 'fsp': fsp,
                   'precision': precision})
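The rename/to_html pattern used in both KNN views can be exercised on its own; here is a minimal sketch with toy data (the two-row frame below is illustrative, not the project's dataset):

```python
import pandas as pd

# Toy frame using two of the raw sensor column names from the project's dataset
df = pd.DataFrame({'ECG(mV)': [0.1, 0.2], 'HR(bpm)': [72, 95], 'Target': [0, 1]})

# Map raw physiological channels to the stress-factor labels shown in the UI
df.rename(columns={'ECG(mV)': 'Time pressure', 'HR(bpm)': 'Performance'}, inplace=True)

# to_html() yields a ready-made <table> string for embedding in a Django template
html = df.to_html()
```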
All urls.py
"""StressDetection URL Configuration"""
from django.contrib import admin
from django.urls import path
from StressDetection import views as mainView
from users import views as usr
from admins import views as admins
from django.conf.urls.static import static
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
from django.conf import settings

urlpatterns = [
    path('admin/', admin.site.urls),
    path("", mainView.index, name="index"),
    path("index/", mainView.index, name="index"),
    path("logout/", mainView.logout, name="logout"),
    path("UserLogin/", mainView.UserLogin, name="UserLogin"),
    path("AdminLogin/", mainView.AdminLogin, name="AdminLogin"),
    path("UserRegister/", mainView.UserRegister, name="UserRegister"),
    ### User Side Views
    path("UserRegisterActions/", usr.UserRegisterActions, name="UserRegisterActions"),
    path("UserLoginCheck/", usr.UserLoginCheck, name="UserLoginCheck"),
    path("UserHome/", usr.UserHome, name="UserHome"),
    path("UploadImageForm/", usr.UploadImageForm, name="UploadImageForm"),
    path("UploadImageAction/", usr.UploadImageAction, name="UploadImageAction"),
    path("UserEmotionsDetect/", usr.UserEmotionsDetect, name="UserEmotionsDetect"),
    path("UserLiveCameDetect/", usr.UserLiveCameDetect, name="UserLiveCameDetect"),
    path("UserKerasModel/", usr.UserKerasModel, name="UserKerasModel"),
    path("UserKnnResults/", usr.UserKnnResults, name="UserKnnResults"),
]
urlpatterns += staticfiles_urlpatterns()
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
Base.html
<!DOCTYPE html>
{%load static%}
<html lang="en">
<head>
<title>Stress Feelings</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="description" content="Unicat project">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="{%static 'styles/bootstrap4/bootstrap.min.css'%}">
<link href="{%static 'plugins/font-awesome-4.7.0/css/font-awesome.min.css'%}" rel="stylesheet" type="text/css">
<link rel="stylesheet" type="text/css" href="{%static 'plugins/OwlCarousel2-2.2.1/owl.carousel.css'%}">
<link rel="stylesheet" type="text/css" href="{%static 'plugins/OwlCarousel2-2.2.1/owl.theme.default.css'%}">
<link rel="stylesheet" type="text/css" href="{%static 'plugins/OwlCarousel2-2.2.1/animate.css'%}">
<link rel="stylesheet" type="text/css" href="{%static 'styles/main_styles.css'%}">
<link rel="stylesheet" type="text/css" href="{%static 'styles/responsive.css'%}">
</head>
<body>
<div class="super_container">
<header class="header">
</ul>
</nav>
</div>
</div>
</div>
</div>
</div>
{%block contents%}
{%endblock%}
<footer class="footer">
<div class="footer_background" style="background-image:url({%static 'images/footer_background.png'%})"></div>
<div class="container">
<div class="row copyright_row">
<div class="col">
<div class="copyright d-flex flex-lg-row flex-column align-items-center justify-content-start">
<div class="cr_text"><!-- Link back to Colorlib can't be removed. Template is licensed under CC BY 3.0. -->
Copyright ©<script>document.write(new Date().getFullYear());</script> All rights reserved | This template is made with <i class="fa fa-heart-o" aria-hidden="true"></i> by <a href="#" target="_blank">Alex Corporation</a>
<!-- Link back to Colorlib can't be removed. Template is licensed under CC BY 3.0. --></div>
</div>
</div>
</div>
</div>
</footer>
</div>
Index.html
{%extends 'base.html'%}
{%load static%}
{%block contents%}
<div class="home">
<div class="home_slider_container">
<!-- Home Slider -->
<div class="owl-carousel owl-theme home_slider">
{% csrf_token %}
<table>
<tr>
<td class="text-primary">User Name</td>
<td>{{form.name}}</td>
</tr>
<tr>
<td>Login ID</td>
<td>{{form.loginid}}</td>
</tr>
<tr>
<td>Password</td>
<td>{{form.password}}</td>
</tr>
<tr>
<td>Mobile</td>
<td>{{form.mobile}}</td>
</tr>
<tr>
<td>email</td>
<td>{{form.email}}</td>
</tr>
<tr>
<td>Locality</td>
<td>{{form.locality}}</td>
</tr>
<tr>
<td>Address</td>
<td>{{form.address}}</td>
</tr>
<tr>
<td>City</td>
<td>{{form.city}}</td>
</tr>
<tr>
<td>State</td>
<td>{{form.state}}</td>
</tr>
<tr>
<td>{{form.status}}</td>
</tr>
<tr><td></td>
<td><button type="submit" class="comment_button trans_200">Register</button></td>
</tr>
{% if messages %}
{% for message in messages %}
<font color='GREEN'> {{ message }}</font>
{% endfor %}
{% endif %}
</table>
</form>
</center>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
{%endblock%}
{%block contents%}
<div class="home">
<div class="home_slider_container">
<!-- Home Slider -->
<div class="owl-carousel owl-theme home_slider">
<tr>
<td>
<button class="btn btn-block btn-primary text-white py-3 px-5" style="margin-left:20%;"
type="submit">
Login
</button>
</td>
</tr>
{% if messages %}
{% for message in messages %}
<font color='GREEN'> {{ message }}</font>
{% endfor %}
{% endif %}
</table>
</form>
</center>
</p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
{%endblock%}
KNN Results.html
{%extends 'users/userbase.html'%}
{%load static%}
{%block contents%}
<div class="features">
<div class="container">
<div class="row">
<div class="col">
<div class="section_title_container text-center">
<h2 class="section_title">Knn Algorithm Results</h2>
<h3>Accuracy <font color="Green">{{accuracy}}</font></h3><br/>
<h3>Classification Error <font
color="Green">{{classificationerror}}</font></h3>
<h3>Sensitivity <font color="Green">{{sensitivity}}</font></h3>
<h3>Specificity <font color="Green">{{Specificity}}</font></h3>
<h3>False positive rate Error <font color="Green">{{fsp}}</font></h3>
<h3>Precision <font color="Green">{{precision}}</font></h3>
</div>
<center>
<h2>Results table</h2>
<font color="Black">
{{data | safe}}
</font>
</center>
</div>
</div>
<div class="row features_row">
</div>
</div>
</div>
{%endblock%}
6.2 Implementation
PYTHON
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests. Each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
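As a concrete illustration of the unit testing described above, a small test case for a helper that maps a detected emotion to a stress verdict could look like the sketch below; the function and the emotion mapping are assumptions for illustration, not the project's actual code:

```python
import unittest

# Hypothetical mapping: emotions treated as indicators of stress
STRESS_EMOTIONS = {'angry', 'sad', 'fearful', 'disgusted'}

def is_stressed(emotion):
    """Return True when the detected emotion counts as a stress indicator."""
    return emotion.lower() in STRESS_EMOTIONS

class IsStressedTest(unittest.TestCase):
    def test_stress_emotions_flagged(self):
        self.assertTrue(is_stressed('Angry'))
        self.assertTrue(is_stressed('sad'))

    def test_calm_emotions_not_flagged(self):
        self.assertFalse(is_stressed('happy'))
        self.assertFalse(is_stressed('neutral'))
```

Running the class through a unittest runner exercises each decision branch of the helper with defined inputs and expected results, as the paragraph above prescribes.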
Integration testing
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
System Test
System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is
the configuration oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing
White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
Black Box Testing
Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase of
the software lifecycle, although it is not uncommon for coding and unit testing to be conducted
as two distinct phases.
Testing strategies
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
Features to be tested
Integration Testing
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.
The task of the integration test is to check that components or software applications (e.g. components in a software system or, one step up, software applications at the company level) interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Sample Test Cases

S.No | Test Case                       | Expected Result                                                                | Result | Remarks (If Fails)
1    | User Register                   | User registration completes successfully.                                      | Pass   | Fails if the user email already exists.
2    | User Login                      | If the username and password are correct, the user gets the valid page.        | Pass   | Unregistered users will not be logged in.
3    | Upload An Image                 | Image is uploaded to the server and the detection process starts.              | Pass   | Images at 640x480 resolution give better results.
4    | Draw Squares in Images          | Squares are drawn on detected images with the stress emotions written.         | Pass   | Images must be clear to detect the facial expression.
5    | Start Live Stream               | The PyImage library loads and the live stream starts.                          | Pass   | Fails if the library is not available.
6    | Start Deep Learning Live Stream | Depends on the system configuration and the tensorflow library.                | Pass   | Fails if tensorflow is not installed.
7    | KNN Results                     | Loads the dataset and runs the KNN algorithm.                                  | Pass   | The dataset must be in the media folder.
8    | Predict Train and Test Data     | Predicted and original values are displayed.                                   | Pass   | Train and test sizes must be specified, otherwise it fails.
9    | Admin Login                     | Admin can log in with his login credentials; on success he gets his home page. | Pass   | Invalid login details are not allowed.
10   | Activate Registered Users       | Admin can activate the registered user ID.                                     | Pass   | If the user ID is not found, the user cannot log in.
8. OUTPUT SCREENS
Task 1: Open the command prompt to execute the code.
Task 2: The user can register first. While registering, a valid user email and
Task 4: Once the admin activates the user, the user can log in to the system.
The system will extract the features and the appropriate emotion from the image. If the given image contains more than one face, detection of all faces is also possible.
Figure 6.2.5: Uploading Image
Task 6: The stress level is indicated by facial expressions such as sad, angry, etc. Once the image processing is complete, the live stream is started. In the live stream, the facial expressions of more than one person can also be detected.
9. CONCLUSION
The Stress Detection System is designed to predict stress in employees by monitoring captured images of authenticated users, which makes the system secure. Images are captured automatically at set time intervals while the authenticated user is logged in. The captured images are used to detect the user's stress based on standard conversion and image processing mechanisms. The system then analyzes the stress levels using machine learning algorithms, which generate more reliable results.
10. FUTURE ENHANCEMENTS
Biomedical wearable sensors embedded with IoT technology are a proven combination in the healthcare sector. The benefits of using such devices have positively impacted patients and doctors alike. Early diagnosis of medical conditions, faster medical assistance by means of remote monitoring and telecommunication, and an emergency alert mechanism to notify the caretaker and personal doctor are a few of its advantages. The proposed work on developing a multimodal IoT system promises to be a better health assistant by constantly monitoring and providing regular feedback on stress levels. For future work, it would be interesting to extend this work into the development of a stress detection model with additional physiological parameters, including an activity recognition system and the application of machine learning techniques.
11. REFERENCES