Project Documentation Sandeep
A Project Report
Submitted in partial fulfillment of the requirements for the award
of the degree of
BACHELOR OF SCIENCE
(COMPUTER SCIENCE)
By
Ms. Anisha Asirvatham
Assistant Professor
CERTIFICATE
This is to certify that the project titled "VIRTUAL MOUSE" is the bona fide work of Mr.
SANDEEP VIJAY GUPTA, bearing Roll No. 534, submitted in partial fulfillment of the
requirements for the award of the degree of BACHELOR OF SCIENCE in COMPUTER
SCIENCE from the University of Mumbai.
I hereby declare that the project entitled VIRTUAL MOUSE, done at Nagindas
Khandwala College, has not been duplicated or submitted to any other university for the
award of any degree. To the best of my knowledge, no one else has submitted it to any other
university. The project is done in partial fulfillment of the requirements for the award of the
degree of BACHELOR OF SCIENCE (COMPUTER SCIENCE) and is submitted as the
final-semester project as part of our curriculum.
Signature
With the development of technologies in the area of augmented reality, the devices we
use in daily life are becoming more compact, taking the form of Bluetooth or wireless
peripherals. This report proposes an AI virtual mouse system that uses hand gestures and
hand-tip detection, via computer vision, to perform mouse functions on the computer. The
main objective of the proposed system is to perform mouse cursor and scroll functions using
a web camera or the computer's built-in camera instead of a traditional mouse device. Hand
gesture and hand-tip detection using computer vision serves as the human-computer interface
(HCI) [1] with the computer. With the AI virtual mouse system, we can track the fingertip of
the hand gesture using a built-in camera or web camera, move the cursor with it, and perform
the mouse cursor operations and the scrolling function.
A wireless or Bluetooth mouse requires additional hardware: the mouse itself, a dongle
to connect it to the PC, and a battery to power it. In the proposed system, the user instead
relies on a built-in camera or webcam and uses his/her hand gestures to control the computer
mouse operations. The web camera captures frames, processes them, recognizes the various
hand gestures and hand-tip gestures, and then performs the corresponding mouse function.
The AI virtual mouse system is developed in the Python programming language using
OpenCV, a computer-vision library. The model uses the MediaPipe package to track the
hand and its fingertips, and the PyAutoGUI package to move the cursor around the screen
and perform functions such as left click, right click, and scrolling. The proposed model
showed a very high accuracy level and works well in real-world applications using only a
CPU, without requiring a GPU.
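The cursor movement described above amounts to scaling MediaPipe's normalized landmark coordinates (in the range 0 to 1) up to the screen resolution reported by PyAutoGUI. A minimal sketch of that mapping, with an illustrative function name and added clamping as an assumption, since landmarks can fall slightly outside the frame:

```python
def fingertip_to_screen(norm_x, norm_y, screen_w, screen_h):
    """Map a MediaPipe-style normalized landmark (0..1) to screen pixels.

    Coordinates are clamped to the frame so the cursor never leaves the screen.
    """
    x = min(max(norm_x, 0.0), 1.0)
    y = min(max(norm_y, 0.0), 1.0)
    return int(x * screen_w), int(y * screen_h)

print(fingertip_to_screen(0.5, 0.25, 1920, 1080))  # (960, 270)
```

The resulting pixel pair is what would be handed to `pyautogui.moveTo` in the implementation shown in Chapter 5.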
Page 1 of 13
1.2 Project Scope
For most laptops, the touchpad is not the most comfortable or convenient pointing device.
The virtual mouse is also known as a Virtual Multitask Mouse.
It is a real-time application.
It is a user-friendly application.
This project removes the requirement of physical contact with the touchpad.
1.3 Objectives
The goal is to control computers and other devices with gestures rather than by pointing and
clicking a mouse or touching a display directly.
This approach can not only make many existing chores easier but also enable trickier tasks,
such as creating 3-D models or browsing medical imagery during surgery without touching
anything.
1.4 Applicability
The AI virtual mouse system is useful for many applications: it reduces the space needed for
a physical mouse, and it can be used in situations where a physical mouse cannot be used.
The system eliminates the need for extra devices and improves human-computer
interaction.
Major applications:
1. The proposed model has an accuracy of 99%, far greater than that of other proposed
virtual mouse models, and it has many applications.
2. Amidst the COVID-19 situation, it is not safe to operate devices by touching them,
since touch may spread the virus; the proposed AI virtual mouse can be used to control
the PC mouse functions without using a physical mouse.
3. The system can be used to control robots and automation systems without additional
devices.
4. 2D and 3D images can be drawn with the AI virtual mouse system using hand gestures.
5. The AI virtual mouse can be used to play virtual reality- and augmented reality-based
games without wireless or wired mouse devices.
6. Persons with problems in their hands can use this system to control the mouse functions
on the computer.
7. In the field of robotics, the proposed system can serve as an HCI for controlling robots.
8. In design and architecture, the proposed system can be used for virtual design and
prototyping.
Chapter 2: Gap Analysis
To develop the Virtual Mouse project, a careful gap analysis revealed certain areas where
we can enhance the project's capabilities to better serve our users.
Gap Analysis:
The proposed AI virtual mouse system can overcome real-world problems such as situations
where there is no space to use a physical mouse, and it helps persons who have problems
with their hands and cannot control a physical mouse. Also, amidst the COVID-19 situation,
it is not safe to operate devices by touching them, since touch may spread the virus; the
proposed AI virtual mouse overcomes these problems because hand gesture and hand-tip
detection, via a webcam or a built-in camera, is used to control the PC mouse functions.
Chapter 3: Requirements and Analysis
The system can be used to control robots and automation systems without additional devices.
This approach leverages software and communication protocols to give users precise control
over robotic operations and automation processes. By eliminating the dependency on
external gadgets, the system streamlines operations, reduces costs, and enhances flexibility,
making it well suited to various industrial applications.
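One plausible way to wire recognized gestures to such a controller is to frame each gesture name as a simple text command and send it over a socket. The sketch below is purely illustrative: the function names, the command framing, and the controller endpoint are all assumptions, not part of this project's implementation.

```python
import socket

def encode_command(gesture):
    """Frame a gesture name as a newline-terminated, upper-cased UTF-8 command."""
    return (gesture.strip().upper() + "\n").encode("utf-8")

def send_gesture_command(gesture, host="127.0.0.1", port=9000):
    """Send one recognized gesture to a (hypothetical) robot controller over TCP."""
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(encode_command(gesture))
```

A real deployment would replace the host/port with the controller's actual address and agree on a command vocabulary (e.g. CLICK, SCROLL_UP) on both ends.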
Hardware Requirements:
Processor: Minimum 1 GHz; Recommended 2 GHz or more
Hard Drive: Minimum 32 GB; Recommended 64 GB or more
Memory (RAM): Minimum 1 GB; Recommended 4 GB or more
Web camera
Software Requirements:
Python, together with the OpenCV computer-vision library and the MediaPipe and
PyAutoGUI packages, is used to build and run the system.
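The implementation in this report relies on OpenCV, MediaPipe, and PyAutoGUI. A typical installation of these dependencies, assuming the standard PyPI package names and leaving versions unpinned, might be:

```shell
# Install the three third-party libraries the virtual mouse code imports
pip install opencv-python mediapipe pyautogui
```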
Chapter 4: System Design
Block Diagram
Key points
Neural Network
Flow Chart
GANTT CHART
4.3 User interface design
Home page
Image processing
Output / Left Click
Chapter 5: Implementation and Testing
Code:-
import cv2
import mediapipe as mp
import pyautogui

# Capture from the default camera and set up MediaPipe hand tracking.
cap = cv2.VideoCapture(0)
hand_detector = mp.solutions.hands.Hands()
drawing_utils = mp.solutions.drawing_utils
screen_width, screen_height = pyautogui.size()
index_y = 0

while True:
    _, frame = cap.read()
    frame = cv2.flip(frame, 1)  # mirror the image so movement feels natural
    frame_height, frame_width, _ = frame.shape
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB
    output = hand_detector.process(rgb_frame)
    hands = output.multi_hand_landmarks
    if hands:
        for hand in hands:
            drawing_utils.draw_landmarks(frame, hand)
            landmarks = hand.landmark
            for id, landmark in enumerate(landmarks):
                x = int(landmark.x * frame_width)
                y = int(landmark.y * frame_height)
                if id == 8:  # index fingertip: move the cursor
                    cv2.circle(img=frame, center=(x, y), radius=10, color=(0, 255, 255))
                    index_x = screen_width / frame_width * x
                    index_y = screen_height / frame_height * y
                    pyautogui.moveTo(index_x, index_y)
                if id == 4:  # thumb tip: pinch toward the index fingertip to click
                    cv2.circle(img=frame, center=(x, y), radius=10, color=(0, 255, 255))
                    thumb_x = screen_width / frame_width * x
                    thumb_y = screen_height / frame_height * y
                    if abs(index_y - thumb_y) < 30:
                        pyautogui.click()
                        pyautogui.sleep(1)  # debounce so one pinch gives one click
    cv2.imshow('Virtual Mouse', frame)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
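The click decision in the code above compares the vertical distance between the index fingertip and the thumb tip against a fixed pixel threshold. That predicate can be pulled out into a small, unit-testable sketch (the function name and default threshold here are illustrative, matching the 30-pixel value used above):

```python
def is_pinch_click(index_y, thumb_y, threshold=30):
    """Return True when the index fingertip and thumb tip are vertically
    close enough (in screen pixels) to count as a click gesture."""
    return abs(index_y - thumb_y) < threshold

print(is_pinch_click(500, 520))  # True: 20 px apart, within the threshold
print(is_pinch_click(500, 560))  # False: 60 px apart, fingers too far apart
```

Isolating the predicate like this makes the gesture threshold easy to tune and test without a camera attached.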
REFERENCES:
[1] http://globalaccessibilitynews.com/2011/06/06/uae-students-invents-virtual-mouse-to-help-disabled-people/
[2] http://www.inclusive.co.uk/articles/adapting-the-computer-for-those-with-physical-disabilities--a250#
[3] https://ieeexplore.ieee.org/document/8934612
[4] http://www.onlinejournal.in/UIRV314/130.pdf
[5] https://interestingengineering.com/how-gesture-recognition-will-change-our-relationship-with-tech-devices
[6] https://www.washington.edu/doit/working-together-people-disabilities-and-computer-technology