Google Glass Used as Assistive Technology Its Utilization for Blind and
Visually Impaired People
1 Introduction
This chapter presents the basic definitions of Google Glass, as well as its development history, the technology used and its advantages. Project Google Glass represents a futuristic gadget that can be personalized through various Smartphone options and an Internet connection. Glass is a new first-party hardware product designed by Google. Google announced Project Glass in April 2012 as a wearable technology with an optical head-mounted display. Google started selling Glass in the USA on 15 April 2013 for a limited period [2]. The family of Google Glass technology consists of four Glass generations:
of laser light were also verified. Another author [6] summarizes seven basic advantages provided by Google Glass:
1. It is easy to wear and easy to handle.
2. It is a useful technology for all kinds of people.
3. Access to documents, pictures, videos or maps is very quick.
4. It is mainly used for navigation, communication and social networking tools or applications.
5. It uses natural voice commands for communication.
6. It can be used with an Android phone through Wi-Fi.
7. It is an innovative, futuristic technology for technology lovers.
Bluetooth.
Bluetooth is another type of short-range wireless communication. The technology is based on IEEE 802.15.1, the standard for Wireless Personal Area Networks. Like 802.11, it works by means of radio signals in the 2.4 GHz frequency band, but it differs in that it was meant to replace wires between electronic devices. Depending on the class of the device, the technology can provide ranges of up to 100 meters (class 1) [8]. The main advantage of Bluetooth is that the technology is part of every Smartphone, laptop or tablet.
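The paper does not include implementation details of the Glass-to-Smartphone link; the following is only a minimal sketch, assuming a standard Android companion app, of how nearby Bluetooth devices such as a Glass unit could be discovered. The class name and the "Glass" name check are illustrative assumptions, not the authors' code.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

// Minimal sketch: discover nearby Bluetooth devices from an Android companion app.
// Assumes the Bluetooth (and, on newer Android versions, location) permissions are granted.
public class GlassDiscovery {

    public void startDiscovery(Context context) {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        if (adapter == null || !adapter.isEnabled()) {
            return; // No Bluetooth radio, or it is switched off.
        }

        // React to every device found during the discovery scan.
        BroadcastReceiver receiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context ctx, Intent intent) {
                if (BluetoothDevice.ACTION_FOUND.equals(intent.getAction())) {
                    BluetoothDevice device =
                            intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
                    // Illustrative check only: a Glass unit typically advertises "Glass" in its name.
                    if (device != null && device.getName() != null
                            && device.getName().contains("Glass")) {
                        adapter.cancelDiscovery();
                        // ... open an RFCOMM socket to the device here ...
                    }
                }
            }
        };
        context.registerReceiver(receiver, new IntentFilter(BluetoothDevice.ACTION_FOUND));
        adapter.startDiscovery();
    }
}
```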
EyeTap Technology.
EyeTap is a device which allows, in a sense, the eye itself to function as both a display and a camera. EyeTap is at once the eyepiece that displays computer information to the user and a device which allows the computer to process and possibly alter what the user sees. What the user looks at is processed by the EyeTap. This allows the EyeTap, under computer control, to augment, diminish or otherwise alter the user's visual perception of the environment, which creates a Computer-Mediated Reality [9].
Wearable Computer.
A wearable computer is a digital device that is either strapped to or carried on a user's body. It is used most often in research that focuses on behavioral modeling, health monitoring systems, and IT and media development, where the person wearing the computer actually moves or is otherwise engaged with his or her surroundings. Wearable computers provide constant computer-user interaction. In extreme cases, they serve much like a prosthetic, in that using the device does not require users to cease other activities. Wearable computers are particularly helpful for applications that need more advanced processing support than simple hard-coded logic [11].
Wi-Fi Technology.
Wi-Fi is an abbreviation of the term Wireless Fidelity. It is the popular name for the IEEE 802.11 protocol for wireless local area networks, which provides data speeds of up to 11 megabits per second (Mb/sec). In comparison, standard Ethernet provides a maximum data speed of 10 Mb/sec via cables. Wi-Fi operates in the 2.4 Gigahertz (GHz) radio band, the same frequency used by most Smartphones and microwave ovens, over 11 channels [12].
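The paper does not specify how connectivity is handled; purely as an illustration (not the authors' implementation), an Android app might check that a Wi-Fi connection is available before uploading camera frames for cloud processing. The helper class name is a hypothetical example.

```java
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;

// Illustrative helper: check whether the phone currently has a Wi-Fi connection
// before attempting to upload images for cloud-based recognition.
public final class WifiCheck {

    public static boolean isWifiConnected(Context context) {
        ConnectivityManager cm =
                (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        if (cm == null) {
            return false;
        }
        NetworkInfo wifi = cm.getNetworkInfo(ConnectivityManager.TYPE_WIFI);
        return wifi != null && wifi.isConnected();
    }
}
```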
Vision API.
Google Cloud Vision API is a part of Google Cloud Services. Google Cloud Vision API enables developers to understand the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API. It quickly classifies images into thousands of categories (e.g., "sailboat", "lion", "Eiffel Tower"), detects individual objects and faces within images, and finds and reads printed words contained within images. Developers can build metadata for an image catalog, moderate offensive content, or enable new marketing scenarios through image sentiment analysis. Images can be analyzed as uploads in the request or by integrating with image storage on Google Cloud Storage [14]. The scheme of Google Glass and Cloud Platform functionality is shown in Figure 1.
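The paper does not show the request format used by the app; the sketch below assumes a direct call to the public REST endpoint of the Cloud Vision images:annotate method with LABEL_DETECTION. The API key and the image file name are placeholders.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.Scanner;

// Minimal sketch of a Cloud Vision LABEL_DETECTION request over REST.
// API_KEY and the image path are placeholders for illustration only.
public class VisionLabelRequest {

    private static final String API_KEY = "YOUR_API_KEY";

    public static void main(String[] args) throws Exception {
        // The images:annotate method expects the image content as a base64 string.
        byte[] image = Files.readAllBytes(Paths.get("frame.jpg"));
        String content = Base64.getEncoder().encodeToString(image);

        String body = "{\"requests\":[{"
                + "\"image\":{\"content\":\"" + content + "\"},"
                + "\"features\":[{\"type\":\"LABEL_DETECTION\",\"maxResults\":5}]}]}";

        URL url = new URL("https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The JSON response lists label descriptions with confidence scores,
        // which an assistive app could read aloud to the user.
        try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8")) {
            scanner.useDelimiter("\\A");
            System.out.println(scanner.hasNext() ? scanner.next() : "");
        }
    }
}
```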
Assistive technology is one of the core strategies schools and other organizations use to help with learning and attention issues. Some adaptive tools are low-tech and some are quite high-tech. Common examples of its utilization are:
Calculators,
Writing Supports,
Graphic Organizers.
Assistive technologies typically address areas such as:
Vision,
Learning,
Dexterity and Mobility,
Language and Communication [15].
The global trend, based on a calculated prediction of the number of blind people in 2020, was published by the World Health Organization (WHO) and is shown in Figure 2.
Fig. 2. Global Prediction - the Total Number of Blind People in 2020 [19].
A blind person or a person with low vision deals with navigation, orientation and other recognition issues on a daily basis. These problems can be easily solved by learning or by using special equipment (e.g., a cane), especially in a limited space such as a flat or a house. But when a blind person wants to visit a doctor or a friend, look for a job or travel a few blocks, navigation and orientation in a noisy environment crowded with people and cars, sometimes at night or in windy or rainy weather, is very demanding and sometimes impossible without the help of a social assistant. The authors of the paper have developed an assistive application which uses Google Glass connected to a Smartphone in order to help these people with their navigation and recognition issues outdoors as well as indoors. Outdoor navigation is surely the most important problem that visually impaired/blind people deal with every day.
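The paper does not describe how the navigation part is implemented; the sketch below is a hypothetical illustration (not the authors' code) of how a Smartphone app could hand a destination over to turn-by-turn walking navigation using Android's public google.navigation intent scheme. The class and method names are assumptions.

```java
import android.content.Context;
import android.content.Intent;
import android.net.Uri;

// Hypothetical sketch: launch turn-by-turn walking navigation to a given address.
public final class NavigationHelper {

    public static void navigateTo(Context context, String destinationAddress) {
        // mode=w requests walking directions, which suits a visually impaired pedestrian.
        Uri uri = Uri.parse("google.navigation:q=" + Uri.encode(destinationAddress) + "&mode=w");
        Intent intent = new Intent(Intent.ACTION_VIEW, uri);
        intent.setPackage("com.google.android.apps.maps"); // route the intent to Google Maps
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);     // allow starting from a non-Activity context
        if (intent.resolveActivity(context.getPackageManager()) != null) {
            context.startActivity(intent);
        }
    }
}
```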
3 Methodology
The primary purpose of the paper is to present and test the developed application for basic navigation issues, which is usable for the daily support of blind or visually impaired people. The second part of the research uses the Google Glass camera as a basic recognition tool for blind or visually impaired individuals.
The methods and methodology used in the presented research are based on a case study approach. Here the focus of attention is on a particular community (e.g., blind people, visually disabled people), organization or set of documents. The attraction of this kind of research is that it stems from empirical curiosity but is at the same time practical. The whole research may be interested in a wider question, but a case study enables researchers to focus on a specific example. A major challenge in a case study is connecting the authors' own primary research or re-analysis with the broader theoretical themes and empirical concerns of the existing literature.
The addressed participants tested the functionality of the developed application using a pair of Google Glass Generation 4 and their own Smartphone with Android OS. The authors aim to provide GG as assistive technology for visually disabled people. If the results of the presented study show that this device and app can be successfully established as helpful assistive technology for the participants (i.e., they can easily substitute for a social assistant and restore the participants' self-esteem and self-sufficiency), it can provide a significant improvement and change in their daily living.
Firstly, during the process of testing the app, all blind or visually impaired users dealt with basic navigation issues in an open environment (e.g., town, park, hospital). Each participant received an unknown address in the town and, using only a pair of Google Glass and the developed app on a Smartphone, tried to find the given address as quickly as possible. After this series of navigation experiments, the acquired results (the first experiment measured the time to the destination point alone, the second with the help of a social assistant and the third using GG and the developed app), as well as the obstacles and comments from the respondents' in-depth interviews, were analyzed. The second part of the experiment focused on recognition issues indoors as well as outdoors. In this experiment, participants approached 30 different kinds of objects, animals and obstacles in order to use the app and GG instead of their eyes for their basic and successful recognition.
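The paper does not detail how recognition results are conveyed to the participant; as a hypothetical sketch (not the authors' published implementation), the label returned by the recognition service could be announced through Android's TextToSpeech engine:

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;

import java.util.Locale;

// Hypothetical sketch: speak a recognized object label aloud so a blind user
// can "hear" what the camera sees.
public class LabelAnnouncer {

    private TextToSpeech tts;

    public LabelAnnouncer(Context context) {
        tts = new TextToSpeech(context, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.ENGLISH);
            }
        });
    }

    // Announce the label returned by the recognition service, e.g. "pylon" or "fence".
    public void announce(String label) {
        tts.speak("Ahead of you: " + label, TextToSpeech.QUEUE_FLUSH, null, "labelUtterance");
    }
}
```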
3.2 Participants
There were 15 participants in the presented study. All of them deal with visual disabilities of various kinds and 5 of them are completely blind. There are 8 women (3 of them blind) and 7 men (2 of them blind). The average age is 45 years.
This part describes the two conducted experiments and the measured data. The first experiment deals with the navigation issue in three different situations. The first situation measured the total time of participants from start to final destination on their own (without any help from an assistant or a Smartphone). The second situation measured the total time from start to final destination with the assistance of a social assistant. These data are shown in the table (see Table 1). The second situation mostly produced the shortest times in the first experiment, because blind/visually impaired people can walk fluently and without any troubles or obstacles on their route. They are guided by a specialized assistant, who helps them with navigation and orientation in an unknown environment. There is only one disadvantage: blind/visually impaired people try to be as independent as possible. The third situation used Google Glass and a connected Smartphone with the app for better navigation. This situation tests the assistive technology (GG) in order to sustain people's independence without any additional support.
marked the majority of visually impaired/blind people. Future research could improve the recognition process of these obstacles with a larger participant sample, focusing on the mentioned obstacles and their various forms.
Boards 60%, Fences 50%, Pylons 99%.
5 Conclusion
Google Glass represents a great opportunity not only for business enterprises, but also for medical organizations, educational institutions and social services. The primary goal of this paper is to present and, above all, test the developed application for basic navigation issues, which is usable for the daily support of blind or visually impaired people. Secondly, the authors turn their attention to the utilization of the app as a basic recognition tool for visually impaired people. From this point of view, Google Glass and the developed Smartphone app can be called assistive technology, because they help participants to get to any place faster, more easily and without assistance. The developed app with utilization of Google Glass is very promising, and the authors are working on further testing and experiments in order to get more data for upgrading this application. Conducted experiment no. 1 shows that the total time to reach the destination using GG and SP is almost the same as with the help of a social assistant. The authors' data show that the tested application focuses on two main functionalities: the first one is navigation and the second deals with the recognition of 30 different objects, which are mainly identified as obstacles for visually impaired people. A localization tool could also be an additional functionality of the developed application; it will be published in following research. The tested Android application with GG successfully recognized more than three fourths of the tested obstacles (75%). The worst obstacles are pylons (99%) and trees/bushes (95%), which were marked by the majority of visually impaired/blind people. The next research is aimed at improving the recognition process of these obstacles in a larger participant sample, with a focus on the mentioned obstacles and their various forms.
5.1 Acknowledgement
This work and the contribution were also supported by Students Grant Agency — FIM,
University of Hradec Kralove, Czech Republic (under ID: UHK-FIM-SP-2017-2108).
References
1. FREEMAN, Jay. Exploiting a Bug in Google's Glass. URL: http://www.saurik.com/id/16
2. DESHPANDE, Miss Shimpali; UPLENCHWAR, G.; CHAUDHARI, D. N. Google Glass.
International Journal of Scientific & Engineering Research, 2013, 4: 12.
3. SOLOMON, Kate. Google Glass 3.0 could be the Oculus Rift you can wear to work. URL: http://www.techradar.com/news/portable-devices/other-devices/google-glass-3-0-could-be-the-oculus-rift-you-can-wear-to-work-1246018
4. LYONS, Kenton Michael. Improving support of conversations by enhancing mobile computer input. 2005. PhD Thesis. Georgia Institute of Technology.
5. DESHPANDE, Miss Shimpali; UPLENCHWAR, G.; CHAUDHARI, D. N. Google Glass.
International Journal of Scientific & Engineering Research, 2013, 4: 12.
6. PATHKAR, Namrata S.; JOSHI, Neha S. Google Glass: Project Glass. International Journal of Application or Innovation in Engineering & Management (IJAIEM), 2014, 3.10: 031-035.
7. NIMODIA, C.; DESHMUKH, H. R. Android Operating System. Software Engineering, ISSN: 2229-4007 & ISSN: 2229-4015, Volume 3, Issue 1, pp. 10-13.
8. SOSA, Abimael. Personnel tracking system using a Bluetooth-based epidemic protocol. The
University of Texas at El Paso, 2007
9. MANN, Steve, “Google Eye”, Supplemental material for “Through the Glass, Lightly”,
IEEE Technology and Society, Vol. 31, No. 3, Fall 2012, pp. 10-14.
10. Techopedia Inc. What is a Smart Grid? Definition from Techopedia. Where IT and Business
Meet URL: https://www.techopedia.com/definition/692/smart-grid
11. Techopedia Inc. What is a Wearable Computer? Definition from Techopedia. Where IT and
Business Meet. URL: https://www.techopedia.com/definition/16339/wearable-computer
12. CORRAL, Luis, et al. An Android Kernel Extension to Save Energy Resources Without
Impacting User Experience. In: International Conference on Mobile Web and Information
Systems. Springer International Publishing, 2016. p. 3-17.
13. Google Cloud Computing, Hosting Services & APIs. Google Cloud Platform. Google. URL:
https://cloud.google.com/
14. Google Developers. Google Cloud Vision API Documentation. URL:
https://cloud.google.com/vision/docs/
15. MORIN, Amanda. 8 Examples of Assistive Technology and Adaptive Tools. URL: https://www.understood.org/en/school-learning/assistive-technology/assistive-technologies-basics/8-examples-of-assistive-technology-and-adaptive-tools
16. Microsoft. Assistive technology providers. URL: https://www.microsoft.com/en-us/accessibility/assistive-technology-partners
17. JENSEN, Eric. Brain-based learning: The new paradigm of teaching. Corwin Press, 2008.
18. FIXOT, R. S. American Journal of Ophthalmology, 1957.
19. Blindness. Vision 2020 - The Global Initiative for the Elimination of Avoidable Blindness.
URL: http://www.who.int/mediacentre/factsheets/fs213/en/
20. IHMC. Institute of Human and Machine Cognition Story. URL:
https://www.ihmc.us/about/aboutihmc/
21. BERGER, Ales; MALY, Filip. Smart Solution in Social Relationships Graphs. In: International Conference on Mobile Web and Information Systems. Springer International Publishing, 2016. p. 393-405.