Singaram 2021 J. Phys. Conf. Ser. 1916 012201
Abstract. The most essential capability of human beings is to interact with each other, i.e., to express their emotions, opinions and feelings. The importance of communication cannot be overstated. It is especially difficult for people who cannot speak to convey their meaning to ordinary people: there are more than 300 sign languages, and it is hard for an average person to learn and use them. As a result, speech-impaired people face numerous irritations and frustrations that limit their ability to carry out everyday tasks. Various therapeutic services are available to help the mute population regain their speech, but these treatments are so costly that few can afford them. Here we propose a smart speech device that allows voiceless people to communicate their message to ordinary people using hand movements and gestures. The system reads hand motion with the aid of flex sensors connected to an Arduino microcontroller unit.
Keywords: Sign language, Smart glove, Flex sensors.
1. Introduction
This project is designed with one sole purpose: speechless people can interact easily with one another because they have their own sign language, but average people cannot understand it. Our project allows them to communicate easily with others [1]. The system uses a finger-motion reading arrangement fitted with flex sensors and a Bluetooth module that relays data to a mobile device, where the message is spoken aloud and displayed simultaneously with the aid of an application [2]. An Arduino microcontroller integrates the hardware and the software. The device stores around 8 to 10 messages, such as "Support," "Danger," "Hungry" and so on, through which speech-impaired people convey their basic needs. The system reads the orientation of the fingers for the various hand movements. The Arduino continuously collects the sensor values and processes them, then searches for the message whose stored range of sensor values matches the observed pattern. Once this message is located in memory, it is retrieved and voiced through a smartphone application. In the application, the sentences to be spoken or displayed on the mobile phone can be customized, so that people of various age groups can profit from this one project [3].
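The matching step described above can be sketched as a simple range lookup. This is a minimal illustration, not the paper's implementation: the ADC thresholds and the message table below are assumed values chosen only to show the idea.

```python
# Hypothetical sketch of the Arduino's matching step: each stored message
# is paired with a range of flex-sensor readings (10-bit ADC, 0..1023),
# and the latest reading is matched against those ranges.
# Thresholds and messages are illustrative, not the paper's values.

GESTURE_TABLE = [
    ((0, 340), "Support"),
    ((341, 680), "Danger"),
    ((681, 1023), "Hungry"),
]

def match_gesture(reading):
    """Return the stored message whose ADC range contains `reading`."""
    for (low, high), message in GESTURE_TABLE:
        if low <= reading <= high:
            return message
    return "UNKNOWN"

print(match_gesture(512))   # → Danger
print(match_gesture(900))   # → Hungry
```

In practice, each of the 8 to 10 messages would be keyed to a combination of several sensor readings rather than a single value, but the lookup principle is the same.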
2. Literature Review:
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution
of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Published under licence by IOP Publishing Ltd 1
ICCCEBS 2021 IOP Publishing
Journal of Physics: Conference Series 1916 (2021) 012201 doi:10.1088/1742-6596/1916/1/012201
In [4], the authors use a PCA algorithm in an image-recognition-based sign-to-speech translator for mute people. Their report describes the device's limitations, interfaces and communication with other external applications, and clarifies the purpose and overall scope for evaluating the framework. The aim of that study is to solve the challenge of identifying skin colour for natural interaction between the user and the system. The initiative is intended for people with disabilities and can help them communicate with others. In that device, a webcam is mounted in front of the speech-impaired user, who positions a finger with a specific action in front of it. When a gesture is made, the camera tracks the precise location of the fingers and analyses the image using a principal component analysis algorithm. The extracted coordinates are mapped to the previously saved ones, and the exact angle of the action is computed from the database. That project is therefore based on the image-processing domain and uses a PCA algorithm, which is slow and not suitable for regular use. We take a step forward by decoding communications from analogue signals instead, which dramatically enhances speed and accuracy.
The work in [5] falls under Augmentative and Alternative Communication (AAC). It is a wired project that connects to an external computer for processing and displaying information, and it consists of five flex sensors. Flex sensors are resistive sensors whose resistance changes with the bend or curvature applied to them, which is read out as an analogue voltage. Since these sensors can only track hand gestures, some letters are not distinguishable because their movements are similar to those of other signs. To improve accuracy, an accelerometer that measures the orientation of the hand has therefore been integrated into the hardware. The accelerometer determines the orientation of the glove with respect to the ground; it is mounted at the middle of the glove to calculate its inclination. A single structure senses the x, y and z axes, and the output voltage of the accelerometer varies with the inclination relative to the Earth. An Arduino NANO is the controller used in that build [6]; its built-in ADC translates the analogue inputs to digital output. The software then comes into action: the algorithm compares the input data against built-in reference data, and if a match is achieved, the particular message is displayed or spoken.
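The orientation measurement described above can be illustrated with a standard tilt calculation from the three accelerometer axes. This is a hedged sketch of the general technique, not the cited work's code; readings are assumed to be in units of g, and the example values are illustrative.

```python
import math

# Illustrative tilt calculation: the accelerometer's x, y, z readings
# (in units of g) give the glove's inclination from the horizontal plane.
# atan2 of the in-plane magnitude against the vertical axis yields the
# tilt angle with respect to the ground.

def inclination_deg(ax, ay, az):
    """Tilt of the glove from the horizontal plane, in degrees."""
    return math.degrees(math.atan2(math.sqrt(ax**2 + ay**2), az))

print(round(inclination_deg(0.0, 0.0, 1.0)))   # glove flat → 0
print(round(inclination_deg(1.0, 0.0, 0.0)))   # glove on its side → 90
```

Combining this angle with the flex-sensor readings is what lets similar finger shapes at different hand orientations be told apart.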
3. Problem Identification:
Sign language is the most interactive means for speech-impaired people to communicate with one another. Ordinary people, on the other hand, do not understand sign language, which creates a significant gap between speech-impaired people and the rest of the population. Moreover, owing to natural variations in signing, sign language is difficult to comprehend.
4. Proposed System:
This project requires flex sensors, gloves, an Arduino microcontroller, an accelerometer, a Bluetooth transmitter and a power supply. The flex sensor is essentially a variable resistor whose terminal resistance rises as the sensor is bent: the further the sensor deviates from a flat, linear position, the higher its resistance. The flex sensors are interfaced with the Arduino microcontroller and attached directly to the analogue ports, since they produce analogue output. The Arduino NANO has an integrated ADC (Analogue-to-Digital Converter) and is powered from the supply. The software then comes into operation: it compares the flex-sensor readings against stored patterns, which are fully adjustable for simple daily use, and transmits the result through the Bluetooth transmitter. Finally, the corresponding text is displayed and the matching sound is played on the mobile phone. Figure 1 shows the proposed system.
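The analogue chain just described (bend → resistance → voltage → ADC count) can be modelled numerically. The sketch below assumes the flex sensor forms a voltage divider with a fixed resistor before the Arduino's 10-bit ADC; the supply voltage and component values are assumptions chosen only to illustrate that more bend means higher resistance and hence a lower reading.

```python
# Hedged model of the flex-sensor readout: the sensor and a fixed
# resistor form a voltage divider, and the Arduino's 10-bit ADC
# (0..1023) digitizes the divider output. R_FIXED and VCC are
# assumed values, not from the paper.

VCC = 5.0          # supply voltage (V), assumed
R_FIXED = 10_000   # fixed divider resistor (ohms), assumed

def adc_reading(r_flex):
    """10-bit ADC count for a flex sensor of resistance r_flex (ohms),
    with the fixed resistor on the output side of the divider."""
    v_out = VCC * R_FIXED / (R_FIXED + r_flex)
    return round(v_out / VCC * 1023)

print(adc_reading(10_000))  # sensor held straight (~10 kOhm)
print(adc_reading(20_000))  # sensor bent (~20 kOhm): lower count
```

With one such reading per finger, each gesture maps to a small tuple of ADC counts that the stored-pattern comparison can match.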
5. Advantages:
• Low cost.
• Compact design.
• Flexible for the user.
• Low power consumption.
6. Result & Discussion
The aim of this project is to convey the appropriate meaning according to the glove gesture. We have observed that when the subject signs while wearing the smart glove, the corresponding words are displayed on the Android device. In addition, the Android app translates the captured text into speech signals, making the messages readable and understandable to the general public.
7. Conclusion
Sign language is one of the recommended resources for facilitating communication between the deaf and mute community and mainstream society. Expecting every ordinary person to learn a sign language is not feasible, so our challenge is to eliminate such barriers. The glove has proved helpful in converting sign-language gestures to speech using an Android smartphone. In contrast to other approaches, the smart glove focuses on decoding alphabet gestures, evaluating real-time sensor data to extract hand positions.
References
[1] Jack Hourcade, Tami Everhart Pilotte, Elizabeth West, and Phil Parette, "A History of Augmentative and Alternative Communication for Individuals with Severe and Profound Disabilities", Focus on Autism and Other Developmental Disabilities, 19(4), winter 2004.
[2] M. Mohandes, S. I. Quadri, and M. D. King, "Arabic Sign Language Recognition an Image Based Approach", in 21st International Conference on Advanced Information Networking and Applications Workshops (AINAW'07), pp. 272-276, 2007.