On
DROWSINESS DETECTION SYSTEM
Submitted to
OSMANIA UNIVERSITY
In partial fulfillment of the requirements for the award of
BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE AND ENGINEERING (AI & ML)
BY
CERTIFICATE
This is to certify that the project report entitled “DROWSINESS DETECTION SYSTEM”, being submitted by SARTHAK JENA (245521748301) and CHERUKU SHASHANK (245521748012) under the guidance of Mr. P. NARESH KUMAR, in partial fulfillment of the requirements for the award of the Degree of Bachelor of Engineering in Computer Science and Engineering (AI & ML) to Osmania University, is a record of bonafide work carried out by them under my guidance and supervision. The results embodied in this project report have not been submitted to any other University or Institute for the award of any degree.
EXTERNAL EXAMINER
Vision of KMEC:
To be a leader in producing industry-ready and globally competent engineers who make India a world leader in software products and services.
Mission of KMEC:
1. To provide a conducive learning environment that includes problem solving skills, professional
and ethical standards, lifelong learning through multimodal platforms and prepare students to
become successful professionals.
2. To forge industry-institute partnerships to expose students to the technology trends, work culture
and ethics in the industry.
3. To provide quality training to the students in the state-of-art software technologies and tools.
4. To encourage research-based projects/activities in emerging areas of technology.
5. To nurture entrepreneurial spirit among the students and faculty.
6. To induce the spirit of patriotism among the students that enables them to understand India’s
challenges and strive to develop effective solutions.
Vision & Mission of CSE (AI & ML)
1 - LOW
2 - MEDIUM
3 - HIGH
     PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12
P1   1    3    1    1    2    1    1    2    1    2     1     1
P2   3    1    1    3    1    2    2    1    3    1     3     1
P3   1    2    3    1    2    1    3    1    1    3     3     2
P4   2    2    1    2    3    1    3    2    3    1     3     1
P5   3    1    3    3    2    3    1    1    3    3     1     2
PROJECT OUTCOMES MAPPING TO PROGRAM SPECIFIC OUTCOMES:
     PSO1  PSO2  PSO3
P2   2     1     3
P3   1     2     3
P4   1     2     3
P5   2     3     2
P2 2 3 1 2
P3 3 2 2 1
P4 3 1 3 1
P5 1 1 2 3
DECLARATION
This is to certify that the mini project titled “DROWSINESS DETECTION SYSTEM” is a
bonafide work done by us in fulfillment of the requirements for the award of the degree Bachelor of
Engineering in Department of Computer Science and Engineering (AI & ML), and submitted to the
Department of CSE (AI & ML), Keshav Memorial Engineering College, Hyderabad.
We also declare that this project is the result of our own effort and has not been copied or imitated from any source. Citations from any websites are mentioned in the bibliography. This work was not submitted earlier at any other university for the award of any degree.
ACKNOWLEDGEMENT
We place on record our appreciation and deep gratitude to the persons without whose support this project would never have been this successful.
We are grateful to Mr. Neil Gogte, Founder Director, for facilitating all the amenities required
for carrying out this project.
It is with immense pleasure that we express our deep gratitude to the respected Prof. P. V. N. Prasad, Principal, Keshav Memorial Engineering College, for providing great support and for giving us the opportunity of doing the project.
We express our sincere gratitude to Mrs. Deepa Ganu, Director Academics, for providing an
excellent environment in the college.
We would like to take this opportunity to specially thank Dr. Birru Devender, Professor & HoD,
Department of CSE (AI & ML), Keshav Memorial Engineering College, for inspiring us all the way and for
arranging all the facilities and resources needed for our project.
We would like to take this opportunity to thank our internal guide Mr. P. Naresh Kumar, Assistant Professor, Department of CSE (AI & ML), Keshav Memorial Engineering College, who has guided us and encouraged us at every step of the project work. His valuable moral support and guidance throughout the project helped us to a great extent.
We would like to take this opportunity to specially thank our Project Coordinator, Mrs. Gayathri Tippani, Assistant Professor, Department of CSE (AI & ML), Keshav Memorial Engineering College, who guided us in the successful completion of our project.
Finally, we express our sincere gratitude to all the members of the faculty of Department of
CSE (AI & ML), our friends and our families who contributed their valuable advice and helped us to
complete the project successfully.
CONTENTS
I. PROBLEM STATEMENT
II. ABSTRACT
1. INTRODUCTION
1.1 Introduction about the Concept
1.2 Existing System and Disadvantages
1.3 Literature Review
1.4 Proposed System and Advantages
2. SYSTEM ANALYSIS
2.1 Feasibility Study
2.2 System Requirements
2.2.1 Hardware Requirements
2.2.2 Software Requirements
2.2.3 Functional Requirements
2.2.4 Non-functional Requirements
3. SYSTEM DESIGN
3.1 Introduction
3.2 Modules and Description
3.3 Block Diagram
3.4 UML Diagrams
3.4.1 Class Diagram
3.4.2 Use Case Diagram
3.4.3 Data Flow Diagram
3.4.4 Sequence Diagram
3.4.5 Activity Diagram
3.4.6 State Chart Diagram
4. SYSTEM IMPLEMENTATION
4.1 Description of the Platform, Database, Technologies, Methods and Applications Used in Development
5. SYSTEM TESTING
5.1 Test Plan
5.2 Scenarios
5.3 Output Screens
6. CONCLUSION AND FUTURE SCOPE
6.1 Conclusion
6.2 Future Scope
7. REFERENCES
7.1 Bibliography
7.2 Web References
8. APPENDIX
Annexure-1: Sample Coding
Annexure-2: List of Figures
Annexure-3: List of Output Screens
PROBLEM STATEMENT
Drowsiness poses a significant risk in activities such as driving, where it can lead
to accidents due to impaired reaction times. Existing solutions for drowsiness
detection often require intrusive or costly hardware. The challenge is to develop a
non-intrusive, real-time drowsiness detection system using computer vision
techniques that can accurately identify signs of drowsiness solely from facial
features, particularly eye movements, thus enhancing safety and reducing
accidents.
1. INTRODUCTION
2. SYSTEM ANALYSIS
Technical Feasibility:
- The use of computer vision algorithms for facial landmark detection and eye
aspect ratio calculation demonstrates the technical viability of the approach.
Economic Feasibility:
- The system needs only a standard webcam and open-source libraries (OpenCV, dlib, SciPy, imutils), avoiding the intrusive or costly hardware that existing solutions often require.
Operational Feasibility:
- The real-time nature of the system ensures timely alerts to prevent accidents,
enhancing its operational effectiveness.
- User Interface: A simple and intuitive user interface displaying the video feed
with overlaid alerts enhances usability.
3. SYSTEM DESIGN
3.1 Introduction
Facial Landmark Detection:
- Utilizes the dlib library to detect facial landmarks, particularly those associated with the eyes.
- Calculates the eye aspect ratio (EAR) based on the detected landmarks to determine the level of eye openness.
Real-time Monitoring:
- Constantly analyzes video frames from a webcam feed to monitor changes in the EAR and detect signs of drowsiness.
Alert System:
- Triggers an alert when the calculated EAR falls below a predefined threshold for a specified number of consecutive frames.
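The alert logic above (EAR below a threshold for a run of consecutive frames) can be sketched in plain Python. This is a minimal sketch: `EAR_THRESH` and `FRAME_CHECK` mirror the `thresh = 0.25` and `frame_check = 20` values used in the implementation, and `ear_stream` is an illustrative stand-in for the per-frame EAR values.

```python
EAR_THRESH = 0.25   # EAR below this suggests the eyes are closing
FRAME_CHECK = 20    # consecutive low-EAR frames required before alerting

def drowsiness_alerts(ear_stream):
    """Yield True for every frame on which the alert should be shown."""
    low_frames = 0
    for ear in ear_stream:
        if ear < EAR_THRESH:
            low_frames += 1
        else:
            low_frames = 0  # a single open-eye frame resets the counter
        yield low_frames >= FRAME_CHECK

# 25 consecutive low-EAR frames: the alert fires from the 20th frame onward
alerts = list(drowsiness_alerts([0.1] * 25))
```

A normal blink lasts only a few frames and never reaches the 20-frame count, so only sustained eye closure raises the alert.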
4. SYSTEM IMPLEMENTATION
The eye aspect ratio is computed from six eye landmarks as

EAR = (A + B) / (2 × C)

where A and B are the distances between the two pairs of vertical eye landmarks and C is the distance between the horizontal eye landmarks.
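As a quick sanity check of the formula, a small sketch with made-up landmark distances shows how the ratio separates an open eye from a nearly closed one (the helper name and the distance values are illustrative only):

```python
def eye_aspect_ratio_from_distances(A, B, C):
    """EAR = (A + B) / (2 * C), using the two vertical eye-landmark
    distances A and B and the horizontal distance C."""
    return (A + B) / (2.0 * C)

# Open eye: vertical distances are a sizeable fraction of the eye width
open_ear = eye_aspect_ratio_from_distances(A=6.0, B=6.0, C=16.0)

# Nearly closed eye: vertical distances collapse, so EAR drops
closed_ear = eye_aspect_ratio_from_distances(A=2.0, B=2.0, C=16.0)
```

With these numbers the open eye scores 0.375 and the closing eye 0.125, straddling the 0.25 threshold used by the system.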
- `frame_check` defines the number of consecutive frames where the EAR falls
below the threshold before triggering an alert.
- The dlib library is utilized to detect faces in the video stream using the
`get_frontal_face_detector()` function.
- The indices for the left and right eye landmarks are obtained using
`face_utils.FACIAL_LANDMARKS_68_IDXS`.
- These indices are used to extract the coordinates of the left and right eye
regions from the detected facial landmarks.
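How those (start, end) index pairs slice the landmark array can be sketched with NumPy alone; the dummy array below stands in for the output of `face_utils.shape_to_np()`, and the `(42, 48)` / `(36, 42)` pairs are the standard 68-point eye ranges exposed by `FACIAL_LANDMARKS_68_IDXS`:

```python
import numpy as np

# In the 68-point model the right eye spans indices 36..41 and the
# left eye 42..47; face_utils exposes them as (start, end) pairs.
LEFT_EYE = (42, 48)
RIGHT_EYE = (36, 42)

# Dummy 68x2 landmark array standing in for face_utils.shape_to_np(shape)
shape = np.arange(68 * 2).reshape(68, 2)

left_eye = shape[LEFT_EYE[0]:LEFT_EYE[1]]
right_eye = shape[RIGHT_EYE[0]:RIGHT_EYE[1]]
```

Each slice yields the six (x, y) points of one eye, exactly the input expected by `eye_aspect_ratio()`.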
- The system continuously captures frames from the video feed using
`cv2.VideoCapture`.
- For each detected face, facial landmarks are predicted and converted into
NumPy arrays for ease of computation.
- Contours are drawn around the eyes to visualize the detected regions.
6. Alert Display:
- The system waits for a key press and checks if it corresponds to the "q" key to
exit the loop.
- Upon receiving the termination key, all OpenCV windows are closed, and the
video capture is released.
Libraries:
- dlib: Dlib is a C++ toolkit containing machine learning algorithms and tools
primarily used for computer vision tasks. In this code, it is utilized for face
detection and facial landmark detection.
- scipy: SciPy is a scientific computing library that provides various modules for
numerical integration, optimization, signal processing, and more. In this code, it is
used for computing the Euclidean distance.
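A one-line check of that distance computation (the classic 3-4-5 triangle; the standard-library `math.dist` gives the same result):

```python
from scipy.spatial import distance
import math

p, q = (0.0, 0.0), (3.0, 4.0)
d = distance.euclidean(p, q)  # straight-line distance between two landmarks
```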
Overall, the platform for running this code is flexible, as long as the required
software dependencies are met, and the hardware setup includes a webcam for
capturing video input.
DATASET
Dataset Description:
- Video Feeds: The video feeds are captured using a webcam or a similar camera
device. The camera is positioned to capture the facial area of individuals whose
drowsiness levels need to be monitored. These feeds are then processed in real-
time by the system.
Data Preprocessing:
- Frame Resizing: Each frame of the video feed is resized to a standard width of
450 pixels using the `imutils.resize()` function. Resizing the frames helps in faster
processing and reduces computational overhead.
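Resizing to a fixed width keeps the aspect ratio, so the new height follows directly from the original dimensions. The sketch below shows the computation such a width-only resize implies; `resized_dims` is an illustrative helper, not part of imutils:

```python
def resized_dims(orig_w, orig_h, target_w=450):
    """Height that keeps the aspect ratio when width is forced to target_w."""
    scale = target_w / orig_w
    return target_w, int(orig_h * scale)

# e.g. a 640x480 webcam frame becomes 450x337
w, h = resized_dims(640, 480)
```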
Data Storage:
- Video Feeds: The video feeds are captured and processed in real-time by the
system. They are not stored persistently unless explicitly saved by the user.
Dataset Usage:
- Video Feeds: The real-time video feeds are continuously processed by the
system to monitor drowsiness levels. The frames from these feeds are analyzed to
compute the eye aspect ratio (EAR) and detect signs of drowsiness.
- Facial Landmark Model: The pre-trained facial landmark model is loaded into
memory using the `dlib.shape_predictor()` function. This model is then used to
detect facial landmarks in each frame of the video feed, particularly the landmarks
corresponding to the eyes, which are crucial for drowsiness detection.
Data Dependencies:
- Facial Landmark Model: The system requires the facial landmark model file
"shape_predictor_68_face_landmarks.dat" to be present in the specified location.
This file is essential for performing facial landmark detection using the dlib
library.
- Video Feeds: The system relies on real-time video feeds captured by a webcam
or similar device. These feeds must be accessible to the system during runtime for
drowsiness detection to occur.
- Facial Landmark Model: The facial landmark model file should be stored
securely to prevent unauthorized access or tampering. Access to the model file
should be restricted to authorized personnel only. Additionally, any data generated
or derived from the facial landmark model should be handled in accordance with
applicable privacy regulations.
TECHNOLOGIES:
dlib: Used for face detection and facial landmark detection. Dlib is a popular
library for machine learning, computer vision, and image processing tasks.
OpenCV (cv2): Utilized for capturing video frames from the camera, image
processing, and displaying the output. OpenCV is a powerful library for computer
vision tasks.
scipy: Specifically, the distance function from the scipy.spatial module is used to
calculate the Euclidean distance between facial landmarks.
imutils: This library provides convenience functions for resizing, rotating, and
displaying images, making it easier to work with OpenCV.
METHODS:
Eye Aspect Ratio (EAR): The core method used for drowsiness detection. EAR is
calculated based on the distances between specific facial landmarks detected in
the eye region. This ratio is indicative of the level of eye openness and is used to
infer drowsiness.
Convex Hull: Utilized to draw contours around the eyes. The convex hull helps in
visualizing the eye region and enhancing the accuracy of feature extraction.
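The effect of the convex hull can be illustrated without OpenCV using SciPy's `ConvexHull` (the implementation itself uses `cv2.convexHull`; the points here are made up, with one interior point that the hull discards):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Six eye-like boundary landmarks plus one interior point (index 6)
points = np.array([[0, 1], [1, 2], [3, 2], [4, 1], [3, 0], [1, 0],
                   [2, 1]])
hull = ConvexHull(points)

# hull.vertices lists only the outer boundary points, in order;
# the interior point is dropped, giving a clean eye outline to draw
boundary = set(hull.vertices)
```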
5. SYSTEM TESTING
4. Stability Testing: Run the system continuously for an extended period to check
for memory leaks, performance degradation, or crashes.
5.2. SCENARIOS:
1. Normal Conditions:
2. Drowsiness Detected:
- Expected Outcome: The system detects drowsiness based on the calculated eye
aspect ratio (EAR) and triggers an alert.
4. Partial Occlusion:
5. Distance Variations:
- Scenario: The user moves closer or farther away from the camera.
- Output screens would display the real-time video feed captured by the camera,
with overlays indicating the detected facial landmarks, eye regions, and any
triggered alerts.
6. CONCLUSION AND FUTURE SCOPE
6.1 CONCLUSION:
Despite the effectiveness of the current system, there is still ample room for
further improvement and expansion. Some potential avenues for future research
and development include:
Overall, the drowsiness detection system presented here lays the groundwork for
further advancements in ensuring safety and reducing accidents caused by fatigue-
induced impairment, with promising opportunities for future research and
innovation.
7. REFERENCES
7.1. Bibliography:
- Deng, W., Hu, J., & Guo, J. (2019). Real-time drowsiness detection using facial
landmark localization and deep learning. IEEE Access, 7, 162894-162902.
- Zhu, Z., Ji, Q., & Avidan, S. (2014). Real-time eye blink detection using facial
landmarks. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition Workshops (pp. 83-90).
- OpenCV: https://opencv.org/
- dlib: http://dlib.net/
- imutils: https://github.com/jrosebr1/imutils
8. APPENDIX
Annexure-1: Sample Coding
from scipy.spatial import distance
from imutils import face_utils
import imutils
import dlib
import cv2

def eye_aspect_ratio(eye):
    # Vertical eye-landmark distances
    A = distance.euclidean(eye[1], eye[5])
    B = distance.euclidean(eye[2], eye[4])
    # Horizontal eye-landmark distance
    C = distance.euclidean(eye[0], eye[3])
    ear = (A + B) / (2.0 * C)
    return ear

thresh = 0.25
frame_check = 20
detect = dlib.get_frontal_face_detector()
# The .dat file is the crux of the code
predict = dlib.shape_predictor("models/shape_predictor_68_face_landmarks.dat")

(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["right_eye"]

cap = cv2.VideoCapture(0)
flag = 0  # counts consecutive frames with EAR below the threshold
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    subjects = detect(gray, 0)
    for subject in subjects:
        shape = predict(gray, subject)
        shape = face_utils.shape_to_np(shape)
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        ear = (leftEAR + rightEAR) / 2.0
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
        if ear < thresh:
            flag += 1
            if flag >= frame_check:
                cv2.putText(frame, "*****ALERT!*****", (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                cv2.putText(frame, "*****ALERT!*****", (10, 325),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        else:
            flag = 0
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
cv2.destroyAllWindows()
cap.release()