Sign Language To Text Conversion
CHIKODI-591201, KARNATAKA
Department of Computer Science and Engineering
✩ INTRODUCTION
✩ LITERATURE SURVEY
✩ GAP ANALYSIS
✩ PROBLEM STATEMENT
✩ OBJECTIVE
✩ OVERVIEW OF PROJECT
✩ HARDWARE & SOFTWARE REQUIREMENTS
Cont...
✩ What is communication?
"Exchanging or imparting of information from one person to another."
Information can be exchanged by speaking, by writing, or through another medium.
Communication between a deaf-mute person and a hearing person has always been a challenging
task. Different methods have been adopted to reduce this barrier, such as developing
assistive devices for deaf-mute persons.
Communicating with a hearing person poses no issue, but problems arise when communicating
with a person who has a speech or hearing disability.
To overcome this, many techniques have already been developed, and this project builds on
one of them: sign language.
Sign Language: Sign language is an important part of life for deaf and mute people, who
rely on it for everyday communication with their peers. A sign language consists of a well-
structured code of signs and gestures, each of which has a particular meaning assigned to it.
Sign languages have their own grammar and lexicon, combining hand positions, hand shapes,
and hand movements.
Cont..
A sign can also be viewed as a compressed form of the information being transmitted, which
is subsequently reconstructed by the receiver. Signs are broadly divided into two classes:
static signs and dynamic signs. Dynamic signs involve movement of body parts and may also
convey emotion, depending on the meaning the gesture carries. Depending on the context,
gestures may be broadly classified as arm gestures, facial/head gestures, and body gestures.
Static signs include only poses and hand configurations.
This model acquires image data using the computer's webcam; the images are then pre-processed
using a combinational algorithm, and recognition is done using template matching. An edge
detection algorithm is used to remove background distractions such as noise. To develop this
approach, a dataset is created with all Swaragalu (vowels), Vyanjanagalu (consonants), and
numbers in the Kannada language.
Fig. (a): Kannada signs
Fig. (b): 0-10 numerical signs
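As a rough illustration of the capture and pre-processing pipeline described above, the sketch below grabs one webcam frame with OpenCV and reduces it to an edge map. The blur kernel size, Canny thresholds, and output file name are assumed values for illustration, not the project's actual settings.

```python
# Minimal sketch: capture a frame, suppress background noise via edge detection.
import cv2

def preprocess(frame):
    """Grayscale, blur, then Canny edges to suppress background distractions."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # smooth before edge detection
    return cv2.Canny(blurred, 50, 150)            # assumed Canny thresholds

cap = cv2.VideoCapture(0)                 # the computer's webcam
ret, frame = cap.read()
if ret:
    edges = preprocess(frame)
    cv2.imwrite('edge_map.png', edges)    # hypothetical output file for inspection
cap.release()
```

The resulting edge map would then be compared against the stored dataset templates in the recognition step.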
LITERATURE SURVEY
In the literature survey section, the history of earlier work done in this area and its
issues are discussed. It contains a record of the research going on in this area, along with
a detailed study of the earlier work on sign language recognition.
[1] Kohsheen Tiku, Jayshree Maloo, Aishwarya Ramesh, Indra R, "Real-time Conversion of
Sign Language to Text and Speech", International Conference on Inventive Research in
Computing Applications (ICIRCA-2020), IEEE Xplore, 2020.
Methodology: Histogram of Oriented Gradients (HOG) descriptors with a Support Vector
Machine (SVM); the machine learning algorithm uses the HOG descriptors as the features of the image.
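A minimal sketch of this HOG-plus-SVM idea is given below using scikit-image and scikit-learn. The random arrays stand in for real sign images, and the HOG parameters are illustrative assumptions rather than the paper's settings.

```python
# Sketch: HOG descriptors fed to an SVM classifier.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(image):
    """HOG descriptor for one grayscale sign image."""
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

# Placeholder data: twenty random 64x64 "images" with two dummy labels.
rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))
labels = np.array([0, 1] * 10)

X = np.array([hog_features(img) for img in images])
clf = SVC(kernel='linear').fit(X, labels)   # SVM trained on HOG descriptors
print(clf.predict(X[:2]))                   # labels predicted for two samples
```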
[2] K. Manikandan, Ayush Patidar, Pallav Walia, Aneek Barman Roy, "Hand Gesture
Detection and Conversion to Speech and Text", International Conference on Inventive
Research in Computing Applications, IEEE Xplore, 2019.
Methodology: The strategy proposed in this paper makes use of a webcam through which the
hand gestures given by the user are captured and identified accordingly.
Cont..
Conclusion: People can easily communicate with each other. The user-friendly nature
of the system ensures that people can use it without any difficulty or complexity.
[3] Ramesh M. Kagalkar and Shyamrao V. Gumaste, "Curvilinear tracing approach for
recognition of Kannada sign language", Int. J. Computer Applications in Technology, Vol. 59,
No. 1, 2019.
Methodology: A curvilinear tracing approach for shape representation and recognition of
Kannada sign language, generating the corresponding characters in the Kannada language. To
develop this approach, a dataset is created with all Swaragalu, Vyanjanagalu, and numbers in
the Kannada language.
Conclusion: The presented approach gives higher retrieval performance than the conventional
model of edge-shape feature representation. The accuracy of true sign detection and
transformation is observed to be higher with the proposed curvilinear coding approach.
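The paper's curvilinear coding is not reproduced here, but the sketch below illustrates a related contour-based shape signature with OpenCV: trace the boundary of a binary sign image and record each boundary point's distance from the centroid. The synthetic blob is a placeholder for a real dataset image.

```python
# Sketch: boundary tracing and a simple curve-based shape descriptor.
import cv2
import numpy as np

# Synthetic blob standing in for a segmented Kannada sign image.
image = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(image, (64, 64), 40, 255, -1)

# Trace the outer boundary as a dense point sequence (OpenCV 4.x return values).
contours, _ = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
boundary = max(contours, key=cv2.contourArea).squeeze()

# Distance-from-centroid signature over the traced curve.
centroid = boundary.mean(axis=0)
signature = np.linalg.norm(boundary - centroid, axis=1)
print(signature[:10])
```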
Cont..
[4] Zain Murtaza, Hadia Akmal, Wardah Afzal, Hasan Erteza Gelani, Zain ul Abdin,
Muhammad Hamza Gulzar, "Human Computer Interaction based on Gestural Sign Language
to Text Conversion", was designed to demonstrate a gesture recognition system for Human
Computer Interaction (HCI) using the Leap Motion Sensor (LMS), a device capable of
tracking hand motions and gestures.
[5] Victoria A. Adewale, Adejoke O. Olamiti, "Conversion of sign language to text and
speech using machine learning techniques", was designed to show that sign language can be
converted into text using machine learning approaches such as unsupervised learning.
[6] Omkar Vedak, Prasad Zavre, Abhijeet Todkar, Manoj Patil, "Sign Language Interpreter
using Image Processing and Machine Learning", International Research Journal of
Engineering and Technology (IRJET), e-ISSN: 2395-0056, Vol. 06, Issue 04, Apr 2019.
Cont..
Methodology: The aim of this paper is to develop an application that will translate sign
language. The application acquires image data using the computer's webcam; the images are
then pre-processed using a combinational algorithm, and recognition is done using template
matching.
Conclusion: The results show the overall accuracy of the system to be 88%.
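The template-matching step described in [6] can be sketched roughly as below with OpenCV's cv2.matchTemplate. The label dictionary and synthetic images are placeholders for the paper's actual sign templates, not its code.

```python
# Sketch: pick the stored template that best matches the query image.
import cv2
import numpy as np

def recognize(query, templates):
    """Return the label whose stored template best matches the query image."""
    best_label, best_score = None, -1.0
    for label, template in templates.items():
        score = cv2.matchTemplate(query, template, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Synthetic stand-ins: two random 64x64 binary "templates" and a query
# that is simply a copy of the first one.
rng = np.random.default_rng(1)
templates = {'A': rng.integers(0, 2, (64, 64), dtype=np.uint8) * 255,
             'B': rng.integers(0, 2, (64, 64), dtype=np.uint8) * 255}
query = templates['A'].copy()
print(recognize(query, templates))   # expected to report the label 'A'
```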
GAP ANALYSIS
[5] Conversion of sign language to text and speech using machine learning techniques.
Tools used: image and speech processing; text-to-speech (TTS); unsupervised learning; FAST
and SURF algorithms (a short FAST example is sketched after this section).
Issue: It was designed for both conversions, i.e. sign language to text as well as sign
language to speech, with data captured using a kinetic sensor.
[6] Sign Language Interpreter using Image Processing and Machine Learning.
Tools used: pre-processing, feature extraction, edge detection, classification.
Issue: It was designed to create an application that converts sign language to text and
audio; the system shows 88% accuracy.
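For reference, the FAST detector mentioned in [5] is available directly in OpenCV; the sketch below runs it on a synthetic frame that stands in for a captured sign image. SURF is omitted because it is patented and absent from default OpenCV builds; the threshold value is an assumption.

```python
# Sketch: FAST corner detection on a placeholder frame.
import cv2
import numpy as np

# Synthetic frame: a bright square whose corners give FAST keypoints.
frame = np.zeros((128, 128), dtype=np.uint8)
cv2.rectangle(frame, (32, 32), (96, 96), 255, -1)

fast = cv2.FastFeatureDetector_create(threshold=25)
keypoints = fast.detect(frame, None)
print(f"FAST found {len(keypoints)} keypoints")   # interest points for later matching
```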
PROBLEM STATEMENT
✩ The aim of this project is to develop a model for translating hand gestures into text in
the Kannada language. It maps letters, words, and complete sentences of a given language
from a set of hand gestures, enabling an individual to communicate using hand gestures
rather than speech. The system must be capable of recognizing sign language symbols so that
it can serve as a means of communication for people with disabilities such as deafness and
muteness.
✩ Input: Different signs shown in front of the camera.
✩ Output: The corresponding letter, word, or sentence displayed as text.
OBJECTIVES
☆ This project aims to bridge the gap between speech- and hearing-impaired people and
hearing people.
☆ The project uses an image processing system to identify sign language, especially the
Kannada alphabetic signs used by deaf people to communicate, and converts them into text so
that hearing people can understand, and vice versa.
☆ The main objective is to translate sign language to text. This model provides a helping
hand for the speech-impaired to communicate with the rest of the world using sign language,
eliminating the middle person who generally acts as a medium of translation. We are
developing an Indian sign language translator model for ease of communication: the device
converts the gestures performed by the user into text while also displaying what the other
person says.
OVERVIEW OF THE PROJECT
This system is divided into two phases. First, the feature extraction phase uses the
histogram technique, the Hough transform, and segmentation to extract the hand from the
static sign.
Second, the classification phase uses a neural network trained on the samples. Extreme
points are extracted from the segmented hand using star skeletonization, and recognition is
performed using a distance signature (a simplified sketch of this phase follows below).
The proposed method was tested on a dataset captured in a closed environment, with the
assumption that the user is within the camera's field of view.
The developed system focuses on the objective of reducing the communication gap between
hearing people and the vocally disabled.
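A much-simplified sketch of the feature-extraction phase is given below: HSV skin thresholding stands in for the segmentation step, and contour extreme points with a centroid distance signature stand in for star skeletonization. The skin-colour thresholds and the synthetic frame are assumptions, not the project's calibration.

```python
# Sketch: segment a hand-like region and compute a small distance signature.
import cv2
import numpy as np

def hand_signature(frame_bgr):
    """Segment skin pixels, take contour extreme points, return centroid distances."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))      # rough skin range (assumed)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea).squeeze()
    centroid = hand.mean(axis=0)
    extremes = np.array([hand[hand[:, 0].argmin()],            # leftmost
                         hand[hand[:, 0].argmax()],            # rightmost
                         hand[hand[:, 1].argmin()],            # topmost
                         hand[hand[:, 1].argmax()]])           # bottommost
    return np.linalg.norm(extremes - centroid, axis=1)         # distance signature

# Synthetic skin-coloured patch standing in for a captured webcam frame.
frame = np.zeros((120, 120, 3), dtype=np.uint8)
frame[30:90, 40:80] = (90, 120, 180)      # BGR value inside the assumed skin range
print(hand_signature(frame))
```

In the real system, this signature (or richer features) would be fed to the neural network classifier described above.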
SYSTEM DESIGN & METHODOLOGY
Cont..
✩ The system is designed to visually recognize all static signs of Kannada
Sign Language (KSL), including all alphabet signs, performed with bare hands.
✩ The system compares three feature extraction methods for KSL recognition that are
translation, rotation, and scale invariant (see the Hu-moment sketch after this list).
Combining the chosen feature extraction method with image processing and neural
network capabilities has led to the successful development of the KSL recognition system.
✩ The system has two phases: the feature extraction phase and the
classification phase. Images were prepared in portable document format, so the system
deals with images that have a uniform background.
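As one concrete example of translation-, rotation-, and scale-invariant features, the sketch below computes Hu moments of a binary sign image. This only illustrates the invariance property discussed above; it is not necessarily the project's exact feature set, and the ellipse images are synthetic placeholders.

```python
# Sketch: Hu moments as translation/rotation/scale-invariant shape features.
import cv2
import numpy as np

def hu_features(binary_image):
    """Log-scaled Hu moments of a binary sign image."""
    hu = cv2.HuMoments(cv2.moments(binary_image)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress dynamic range

# An ellipse and a smaller, shifted copy give nearly identical Hu features,
# illustrating translation and scale invariance.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.ellipse(img, (100, 100), (60, 30), 0, 0, 360, 255, -1)

shifted = np.zeros_like(img)
cv2.ellipse(shifted, (130, 80), (40, 20), 0, 0, 360, 255, -1)

print(hu_features(img))
print(hu_features(shifted))
```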
Cont..
Software Requirements
☆ Python IDLE
☆ OpenCV
☆ Machine Learning
☆ Python Libraries
Cont...
Hardware Requirements
☆ Camera
☆ A system with the following specifications:
☆ 4 GB RAM
☆ Core i3/i5 processor
☆ At least 3 GB of free disk space
☆ Windows/Ubuntu operating system
☆ 1280 x 800 minimum screen resolution
Applications
1. Speech Impaired:
People who cannot speak can communicate with the outside world with the help of sign
language. Using hand gestures, a person can convey a message, and the desired message is
converted to text.
2. Hearing Impaired:
Deaf people can make use of this system to communicate. The person simply uses hand
movements, which are converted into a text message that is then displayed on the screen.
CONCLUSION
The proposed system can effectively convert sign language to text, which helps disabled
people understand the language spoken by a hearing person and helps bridge the
communication gap between hearing people and speech- and hearing-impaired people in
Kannada. It can efficiently recognize alphabets from gestures as well as words from images.
In the end, we can conclude that the model we have designed satisfies our expectations and
provides the expected output.
REFERENCES
5. Victoria A. Adewale, Adejoke O. Olamiti, "Conversion of sign language to text and
speech using machine learning techniques", Journal of Research and Review in Science,
pp. 58-65, December 2018.
6. Omkar Vedak, Prasad Zavre, Abhijeet Todkar, Manoj Patil, "Sign Language Interpreter
using Image Processing and Machine Learning", International Research Journal of
Engineering and Technology (IRJET), e-ISSN: 2395-0056, Vol. 06, Issue 04, Apr 2019.
Thank you!