Smart Hand Glove for Hearing and Speech Impaired
Basil Jose1, Yogini Lamgaonkar2, Raheel Shaikh3, Joshua D’souza4, Amey Pednekar5, Riyazul Ansari6, Varun Prabhu7
1, 2 Assistant Professor, Department of Computer Engineering, Agnel Institute of Technology and Design, Goa, India
3, 4, 5, 6, 7 Department of Computer Engineering, Agnel Institute of Technology and Design, Goa, India
Abstract: With the advancement of technology, we can implement a variety of ideas to serve mankind in numerous ways. Inspired by this, we have developed a smart hand glove system that can help people with hearing and speech disabilities. For those who live without sound, sign language is a powerful tool to make their voices heard. American Sign Language (ASL) is the most widely used sign language in the world, with some variations from nation to nation. In this project we created a wearable wireless gesture-decoder module that can translate a basic set of ASL motions into alphabets and sentences. The glove houses a series of flex sensors on the metacarpal and interphalangeal joints of the fingers to detect the bending of the fingers through the piezoresistive effect (a change in electrical resistance when a semiconductor or metal is subjected to mechanical strain). The glove is also fitted with an accelerometer, which helps to detect hand movements. Simple classification algorithms from machine learning are then applied to translate the gestures into alphabets or words.
Keywords: Arduino; MPU6050; Flex sensor; Machine learning; SVM classifier
I. INTRODUCTION
India has 2.4 million people with hearing and speech impairment, accounting for 20 percent of the world's hearing- and speech-impaired population [1]. These people lack resources that other individuals take for granted, the main reason being the communication gap: deaf people are unable to hear and mute people are unable to speak.
It is the natural ability of human beings to see, listen, and interact with their external environment. Unfortunately, there are some people who are specially challenged and lack the ability to use their senses to the fullest extent. Hearing and speech impairment results from a physical ailment of hearing for deaf people, and of speaking for mute people. When a speech-impaired person tries to communicate with someone who does not understand sign language, the hearing person finds it challenging to comprehend what is being conveyed, even after asking for the gestures to be repeated. Thus, these people have a language of their own with which to communicate with us; the only problem is understanding this language.
Fig. 1 American Sign Language
The primary goal of the proposed project is to develop a low-cost gadget, the Smart Hand Glove, that allows deaf and hard-of-hearing people to communicate with others. Using the Smart Hand Glove, a deaf person can communicate productively with a hearing person, which in turn bridges the gap between them. Challenges faced by deaf people with respect to employment can also be eased by this method. In the proposed project, an intelligent microcontroller-based system using flex sensors will be developed that can recognize hand gestures using a classification algorithm.
II. LITERATURE REVIEW
A. Survey
According to the analysis done, one out of every five deaf or mute people on this planet is an Indian. In India, more than 1.5 million deaf people use sign language as a method of communication [2]. Hearing parents of deaf children, and deaf parents of hearing children, commonly rely on gesture-based communication.
Because of these communication barriers, an automated sign-to-speech/text interpretation system could help the hearing-challenged access more information. It may also be used as a teaching tool for familiarizing oneself with any form of gesture communication.
Plato's Cratylus [2], written in the 5th century BC, contains one of the oldest references to gesture-based communication. In 1620, Juan Pablo Bonet published Reduction of Letters and Art for Teaching Mute People to Speak, regarded as the first modern study of communication by manual gestures, laying out a plan for voice training for hard-of-hearing people and a standard manual alphabet.
B. Hand Gesture Recognition Techniques
Hand gesture recognition can be accomplished using two main types of sensors namely contact sensors and non-contact sensors [3].
1) Non-contact approach: In the image-processing approach, a camera is used to capture images/videos, which are then processed, and image recognition is performed using algorithms that generate words on the display. Camera-tracking methods are widely used in these vision-based techniques. The user generally wears a glove with specific colours or markings to indicate certain areas of the hands, particularly the fingers. As the user performs a sign, the cameras record continuously changing images of the hand, which are subsequently processed to recover the hand shape, position, and orientation. The need for sophisticated data-processing algorithms is a disadvantage of vision-based methods. Other challenges in image and video recognition include varying lighting conditions, backgrounds, field-of-view constraints, and occlusion (which often occurs when multiple objects come very close together and seemingly merge with each other).
2) Contact approach: To overcome the drawbacks of the non-contact approach and accurately recognize gestures, we can use sensors based on strain gauges to sense deformations. These bends are converted to electrical signals. By measuring the electrical signals, the sensor array can estimate the degree of deformation, along with the compression and tension caused by the bend angle of the fingers. A bend sensor comprises three components: a flexible tube, an infrared-sensitive photodiode, and an infrared diode. Seven bend sensors are used to map the bending of the fingers, of which four are placed on the proximal interphalangeal joints (PIJ) and three are placed between the metacarpophalangeal joints (MCP). The sensors bend with the curve of the joints on which they are placed. The intensity of light falling on the photodiode, and hence the current through it, decreases as the tube bends with the fingers while making a gesture [4].
The Hall-effect sensor (MH183) is easily available. When the south pole of a magnet faces the Hall sensor, it generates a 0.1–0.4 V output. Hall-effect sensors are mounted on the fingertips and the magnet is mounted on the palm in such a way that its south pole faces upward. When the fingers bend to touch the palm, the Hall-effect sensors come into close range of the magnet and the inbuilt Schmitt trigger outputs a low signal. The output is high when the fingers are not in close proximity to the palm (magnet).
3) Contribution of Machine Learning Toward the Disabled Community: We live in a world where not everyone is blessed with the inherent abilities that most of the human race possesses; observed carefully, one can notice that there are many who suffer from some kind of disability. People with such disabilities are usually termed 'special' and are often grouped separately in our society because of difficulties functioning in it. This creates a sense of divide and a sort of discrimination among people that should not exist in the first place, but the advent of modern science has bridged the gap between an average person and a person with a disability. One such technology that can contribute to closing this gap is Artificial Intelligence (AI), whose branches, such as Machine Learning and Deep Learning, can help people with sight, hearing, and speech disabilities [5].
III. BLOCK DIAGRAM
Fig. 2 Block diagram of Proposed System
As seen in Fig. 2, the hand gesture is recorded using the flex sensors and the accelerometer. The sensor data is transmitted to the Arduino microcontroller, which converts the analog signals to digital signals. This data is then passed to the data-processing unit via the serial interface. The data-processing unit normalizes and standardizes the data, which is then given to the SVM classifier. The SVM classifier makes the prediction, and the output is displayed on the computer screen.
A. Arduino Nano Microcontroller
The Arduino Nano is a breadboard-friendly microcontroller board that is smaller than other Arduino boards. It is based on the ATmega328. The functionality of the Arduino Nano is similar to that of the Arduino Duemilanove, but the package is different. Nano boards do not include a DC power jack; a Mini-B USB cable is used to power and program the microcontroller. It has 8 analog input pins and 14 digital pins. The analog pins support 10 bits of resolution, which provides 1024 different values, and each of the digital pins can be used as an input or output. The upper end of the analog range, 5 volts by default, can be changed using the analogReference() function. Serial pins RX and TX, on ports 0 and 1, are used to receive and transmit serial data over a serial connection. Of the 8 analog pins, A6 and A7 cannot be used as digital pins. In addition, some pins (I2C: A4 (SDA) and A5 (SCL)) have specialized functionality; the board supports I2C (TWI) communication using the Wire library. It can be easily programmed with the Arduino IDE, whose language is derived from Processing and shares similarities with C.
Fig. 3 Arduino nano pin layout
B. Interfacing of Flex Sensors with Arduino Nano
The Arduino's analog pins are wired to an analog-to-digital converter (ADC or A/D), which converts analog voltage values to digital values for easier use and application. On the Arduino it is a 10-bit converter, which means the conversion value lies between 0 and 1023: the Arduino reads a value of 1023 at the 5 V maximum and a value of 0 at the 0 V minimum.
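As a quick illustration of this mapping (an illustrative Python helper, not part of the authors' code), a raw 10-bit ADC count can be converted back into a voltage as follows:

# Illustrative helper (not from the paper): map a 10-bit ADC count to volts.
ADC_MAX = 1023      # 10-bit converter: values 0..1023
V_REF = 5.0         # default 5 V analog reference on the Arduino Nano

def adc_to_voltage(count: int) -> float:
    """Convert a raw ADC reading (0..1023) to a voltage (0..5 V)."""
    if not 0 <= count <= ADC_MAX:
        raise ValueError("ADC count out of range")
    return count * V_REF / ADC_MAX

print(adc_to_voltage(1023))  # 5.0 (the 5 V maximum)
print(adc_to_voltage(0))     # 0.0 (the 0 V minimum)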
Fig. 4 Connection of flex sensors with Arduino nano
The output is taken from the voltage divider formed by the flex sensor and a fixed resistor R. With R connected to VCC and the flex sensor connected to ground, the voltage you measure is the drop across the flex sensor, described by the equation:

Vo = VCC × (Rflex / (Rflex + R))    (Eq. 1) [6]

The output voltage varies with the bend radius. For example, with a 5 V supply, a 47 kΩ resistor, and a 0° bend angle, the flex-sensor resistance is comparatively low (around 25 kΩ), giving the output voltage:

Vo = 5 V × (25 kΩ / (25 kΩ + 47 kΩ)) = 1.74 V

When flexed to an angle of 90°, the resistance rises to about 100 kΩ. This results in the output voltage:

Vo = 5 V × (100 kΩ / (100 kΩ + 47 kΩ)) = 3.4 V
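To make the arithmetic above concrete, the following short Python sketch (illustrative only; the component values are taken from the example in [6]) evaluates Eq. 1 and reproduces both worked cases:

# Illustrative sketch of Eq. 1: flex-sensor voltage divider output.
VCC = 5.0        # supply voltage (V)
R_FIXED = 47e3   # fixed divider resistor (ohms), 47 kΩ as in the example

def divider_output(r_flex: float) -> float:
    """Output voltage across the flex sensor for a given flex resistance."""
    return VCC * r_flex / (r_flex + R_FIXED)

print(round(divider_output(25e3), 2))   # 1.74 V at a 0° bend (~25 kΩ)
print(round(divider_output(100e3), 2))  # 3.4 V at a 90° bend (~100 kΩ)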
C. Interfacing of MPU6050
The MPU6050 sensor module is a complete 6-axis motion-tracking device. It combines a 3-axis gyroscope, a 3-axis accelerometer, and a Digital Motion Processor, and communicates with the microcontroller via the I2C bus interface. The MPU6050 detects the rotational velocity of the object along the X, Y, and Z axes; the angular velocity along each axis is measured in degrees per second. The sensor data of the MPU6050 module consists of 16-bit raw values represented in 2's-complement form. The VCC pin of the sensor module is connected to the 5 V supply and its GND pin to the GND pin on the Arduino board. To establish I2C serial bus communication, the SDA and SCL pins of the sensor module are connected to the A4 and A5 pins of the microcontroller, respectively.
Fig. 5 Connections between MPU6050 and Arduino
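Because the raw sensor registers hold 16-bit 2's-complement values delivered as two bytes, the following Python sketch (illustrative; the example bytes are arbitrary, and the 131 LSB per °/s scale factor assumes the MPU6050's default ±250 °/s gyroscope range) shows how one such reading is decoded:

# Illustrative decoding of a 16-bit 2's-complement MPU6050 reading.
def to_signed_16bit(high: int, low: int) -> int:
    """Combine two raw bytes into a signed 16-bit integer."""
    value = (high << 8) | low
    return value - 65536 if value & 0x8000 else value

GYRO_SCALE = 131.0  # LSB per °/s at the default ±250 °/s full-scale range

raw = to_signed_16bit(0xFF, 0x38)  # example bytes -> -200
print(raw / GYRO_SCALE)            # ≈ -1.53 °/s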
IV. IMPLEMENTATION
Fig. 6 Glove prototype
The glove prototype is shown in Fig. 6. A flex sensor is attached to each finger along with a 10 kΩ resistor, creating a voltage divider circuit. The MPU6050 is mounted on the back of the palm of the glove along with the Arduino Nano, making it a breadboard-independent setup.
The glove operates in the following manner:
1) Capturing bend angles & spatial orientation
2) Processing Data
3) Executing machine learning
4) Displaying on GUI
A. Capturing bend angles & spatial orientation
The flex sensors, each with a 10 kΩ resistor, capture the bend angle of each finger from the resistance produced in the flex sensor; the 10 kΩ resistor forming the voltage divider avoids producing garbage values. The MPU6050 mounted on the palm provides the X, Y, Z orientation in the spatial domain. The data from the glove is converted from analog to digital by the Arduino Nano microcontroller, and this captured data is sent to the computer for further processing via the serial port over a USB data cable.
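As a sketch of this capture step (the serial port name, baud rate, and one-line-per-sample comma-separated format are assumptions, since the paper does not specify them), the host computer could read and log samples like this; the same script also serves the dataset recording described in the training phase below:

# Illustrative host-side capture over the serial port (pyserial).
# Assumed line format from the Arduino: "f1,f2,f3,f4,f5,ax,ay,az"
import csv
import serial

PORT, BAUD = "/dev/ttyUSB0", 9600   # assumed values; adjust for your setup

with serial.Serial(PORT, BAUD, timeout=1) as link, \
        open("gesture_A.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["flex1", "flex2", "flex3", "flex4", "flex5", "ax", "ay", "az"])
    for _ in range(100):  # ~100 records per gesture, as described below
        line = link.readline().decode("ascii", errors="ignore").strip()
        if line:
            writer.writerow(line.split(","))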
B. Processing Data
The data received from the microcontroller is converted to a form suitable for the machine-learning task: we normalize the data in order to standardize the flex sensor and MPU6050 readings. The normalized data is then sampled and resampled using linear interpolation, making it discrete and suitable for the machine-learning algorithm.
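A minimal sketch of this preprocessing (illustrative; the exact normalization scheme and sampling grid are not specified in the paper) might look like:

# Illustrative preprocessing: min-max normalization plus linear resampling.
import numpy as np

def preprocess(samples: np.ndarray, n_points: int = 50) -> np.ndarray:
    """Normalize each sensor channel to [0, 1] and resample to a fixed length."""
    lo, hi = samples.min(axis=0), samples.max(axis=0)
    normalized = (samples - lo) / np.where(hi > lo, hi - lo, 1.0)
    # Linear interpolation of each channel onto a uniform grid of n_points.
    old_t = np.linspace(0.0, 1.0, len(samples))
    new_t = np.linspace(0.0, 1.0, n_points)
    return np.column_stack([np.interp(new_t, old_t, normalized[:, c])
                            for c in range(normalized.shape[1])])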
C. Executing machine learning
The machine learning process consists of three phases:
1) Training phase: In this phase a data set is created by recording the continuous data for each gesture, which can easily be done with a simple Python script (similar to the capture sketch in Section A above). The continuous data is stored as a .CSV (comma-separated values) file; for each gesture a sample size of 100 records was recorded, though the sample size can vary depending on one's application. The resulting file is then used for further processing.
2) Learning phase: The .CSV file from the previous phase is converted to a data frame used by the SVM algorithm, so that we do not work directly on the original data set. To perform classification, the data set is split into training and testing sets in an 8:2 ratio. The SVM classifier uses the RBF kernel with gamma = 0.01 and C searched over the range 1×10⁻¹¹ to 1×10⁸. The confusion matrix is shown in Fig. 7, and a sketch of this pipeline follows the list below.
Fig. 7 Confusion matrix
3) Testing phase: The SVM model is tested on the test data set, and an accuracy of 91% is achieved with the parameters C = 7 and gamma = 0.01. Gestures are classified in real time with a delay of 3 seconds between consecutive gestures.
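The sketch below outlines the learning and testing phases with scikit-learn (the file name and column layout are assumptions; the RBF kernel, 8:2 split, and the reported parameters gamma = 0.01 and C = 7 come from the text above):

# Illustrative scikit-learn pipeline for the learning and testing phases.
import pandas as pd
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

df = pd.read_csv("gestures.csv")             # assumed file: sensor columns + "label"
X, y = df.drop(columns=["label"]), df["label"]

# 8:2 split into training and testing sets, as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# RBF-kernel SVM with the reported parameters (C = 7 chosen from a wide grid search).
clf = SVC(kernel="rbf", C=7, gamma=0.01)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))  # ~0.91 reported in the paper
print(confusion_matrix(y_test, pred))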
D. Displaying on GUI
Fig. 8 Testing of prototype
The resulting classified ASL gesture is displayed on the GUI shown in Fig. 8 and Fig. 9.
Fig. 9 Graphical user interface
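The paper does not name the GUI toolkit used; as one possible minimal sketch, a Tkinter label could display each classified gesture as it arrives:

# Illustrative minimal display window (Tkinter is an assumption, not the authors' toolkit).
import tkinter as tk

root = tk.Tk()
root.title("Smart Hand Glove")
label = tk.Label(root, text="...", font=("Helvetica", 48))
label.pack(padx=40, pady=40)

def show_gesture(letter: str) -> None:
    """Update the display with the latest classified gesture."""
    label.config(text=letter)

show_gesture("A")  # in the real system, called with each classifier prediction
root.mainloop()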
V. CONCLUSION
In this paper, a smart hand glove is designed using flex sensors, an MPU6050, and an Arduino Nano microcontroller, with which hand gestures are classified correctly by an SVM classifier. The data set was split in an 8:2 ratio, that is, 80% training data and 20% testing data. The SVM classifier used the RBF kernel with parameters C = 7 and gamma = 0.01 and produced an accuracy of 91%. We further widened the scope of the project by adding hand gestures for basic commands, for example "I am good" and "I need help", represented by thumbs-up and thumbs-down gestures respectively.
This prototype performs only single-handed gesture recognition, whereas ASL and some other sign languages may require both hands for communication. The glove can be replicated for the second hand to solve this problem, and in the future both gloves can be designed with better accuracy using higher-quality sensors. This prototype is considerably less expensive than prototypes that use vision-based gesture recognition.
VI. ACKNOWLEDGMENT
We are greatly indebted to our guide, Prof. Basil Jose, Faculty, Department of Computer Engineering, Agnel Institute of Technology and Design, Goa, for his valuable guidance throughout the dissertation, without which the study undertaken would not have been accomplished at all. Our sincere thanks to our co-guide, Yogini Lamgaonkar, Faculty, Department of Computer Engineering, Agnel Institute of Technology and Design, Goa, for the constant support and encouragement rendered throughout.
Our sincere thanks also to our Head of Department, Prof. Snehal Bhogan, for the constant support and encouragement rendered throughout the project.
REFERENCES
[1] Varshney, Saurabh. "Deafness in India." Indian Journal of Otology 22.2 (2016): 73.
[2] "Smart Glove for Hearing-Impaired." International Journal of Innovative Technology and Exploring Engineering, vol. 8, no. 6S4, 2019, pp. 1188–1192. doi:10.35940/ijitee.f1245.0486s419.
[3] Byun, Sung-Woo, and Seok-Pil Lee. "Implementation of Hand Gesture Recognition Device Applicable to Smart Watch Based on Flexible Epidermal Tactile Sensor Array." Micromachines 10.10 (2019): 692.
[4] Chouhan, Tushar, et al. "Smart Glove with Gesture Recognition Ability for the Hearing and Speech Impaired." 2014 IEEE Global Humanitarian Technology Conference - South Asia Satellite (GHTC-SAS). IEEE, 2014.
[5] Chaudhary, Suyash. "How Machine Learning Can Contribute to the Disabled Community." Medium, 30 July 2019, medium.com/@suyash15122/how-machine-learning-can-contribute-to-the-disabled-community-5b83e7847183.
[6] "How to Use an Arduino Gyroscope Sensor." OZEKI, www.ozeki.hu/p_2987-how-to-use-a-gyroscope-sensor-in-arduino.html.