
WEARABLE COMPUTERS

Submitted by
M.Sri Mounica,

07541A0538.
ABSTRACT:
With computing devices becoming smaller and smaller, it is now possible for
an individual to wear such a device like a hat or a jacket. It is clear that this
technology will enable us to extend desktop resources (including memory,
computation and communication) wherever we travel. Moreover, this constant
access, augmented by a battery of body-mounted sensors, will enable a
computer to be sensitive to the activities in which we are engaged and thus
allow the computer to participate actively as we perform our tasks. This
area draws on computer science, computer engineering and psychology.

INTRODUCTION:
Beyond being a portable computer, a wearable computer must be an
adaptive system with an independent processor. That is, the system must
adapt to the whims and fancies of the user instead of the user having to
adapt his lifestyle to the system. The system must provide seamless
information transfer whenever the user requires it.

HISTORY:
The concept of wearable computing was first brought forward by Steve
Mann, whose invention of the 'WearComp' in 1979 was a pioneering effort
in wearable computing. Although the effort was groundbreaking, a major
disadvantage was that the device was nothing more than a miniature PC.
The absence of lightweight, rugged and fast processors and display devices
was another drawback.
The 1980s saw the development of the consumer camcorder and miniature
CRTs, which in turn enabled the multimedia computer. With the advent of
the Internet and wireless networking technologies, wearable devices have
developed a great deal.
Since its invention, wearables have gone through 18 generations of
development, with research going on at prestigious institutions like MIT,
Georgia Tech and Carnegie Mellon University.
The devices to be introduced represent the new frontiers in the
development of wearable technology. They are:
1. Nomad - Wearable Audio Computing
2. DyPERS - Dynamic Personal Enhanced Reality System
3. Wearable Cinema
3. NOMAD - WEARABLE AUDIO COMPUTING
Nomadic Radio provides an audio-only wearable interface and acts as a
unified messaging system. Remote information such as email, voicemail,
hourly news broadcasts, reminders and traffic reports is automatically
downloaded and presented to the user in a seamless manner. The
presentation is such that it produces minimum disturbance to the user.

Objective:
In the present day, when unlimited information is made available to the user
through various media, the user increasingly suffers from information
overload. That is, unwanted information is delivered to the user, drawing
attention away from the information that is actually required, e.g. spam in
our inboxes. Moreover, the user is not able to access the information at all
times.
Pagers and cellular phones provide mobility to a large extent, but the
information that can be transmitted through a pager is very limited, and
cellular phone services are expensive as all the data processing is done by
the telephony servers rather than by the phone itself.
The Nomad filters information and provides adaptive notification, messaging
and communication services on a wearable device. The system determines
how to present each piece of information based on the time of day, physical
position, scheduled tasks, message content, acceptable level of interruption
and the acoustics of the environment. The user's long-term listening
patterns are also taken into consideration.
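As a rough illustration of this kind of contextual adaptation, the sketch below scores an incoming message against a few context signals and maps the score to a presentation mode; the fields, weights and thresholds are invented for illustration and are not the actual Nomad algorithm.

```python
# Illustrative contextual notification scaling; all weights are assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    message_priority: float   # 0.0 (junk) .. 1.0 (urgent), from content filtering
    ambient_noise: float      # 0.0 (quiet) .. 1.0 (loud), from the microphone
    in_conversation: bool     # inferred from the acoustics of the environment
    usage_level: float        # long-term listening patterns, 0.0 .. 1.0

def presentation_mode(ctx: Context) -> str:
    """Map the current context to one of three presentation levels."""
    score = 0.6 * ctx.message_priority + 0.4 * ctx.usage_level
    if ctx.in_conversation:
        score -= 0.5                  # avoid interrupting a conversation
    score -= 0.2 * ctx.ambient_noise  # speech is less intelligible in noise
    if score > 0.7:
        return "foreground"           # spoken immediately at full volume
    if score > 0.3:
        return "ambient"              # quiet background cue
    return "silent"                   # stored for later browsing

print(presentation_mode(Context(0.9, 0.2, False, 0.8)))  # -> foreground
```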
Nomadic Radio offers:
* Wearable audio messaging
* Voice recognition for navigation and control
* Synthetic speech for feedback

Nomadic Radio is developed as a unified messaging system which utilizes
speech synthesis and recognition on a wearable audio platform. The system
mainly works on a client-server model. A combination of speech and button
inputs allows the user unrestricted access to the information he wants. Text
messages such as email and reminders are converted to voice using a
synthesizer. Users can select from the various categories of information
available, browse the messages, and save or delete them from the server. As
the system gains location awareness, a scenario is envisaged where
information is presented depending on the location of the user.
Design of the Wearable Platform
Audio output must be provided such that it causes minimum hindrance and
maximum privacy to the user. Headphones cannot be used, as they would
isolate the user from the surrounding environment. Thus speakers worn on
the body were developed.
The Soundbeam Neckset, worn around the neck, consists of two directional
speakers resting on the user's shoulders and a directional microphone
placed on the user's chest. A button is provided to activate speech
recognition. Spatialized audio is provided through the Neckset.
Network Architecture

Nomadic Radio follows a client-server model and works over a wireless
LAN. The Neckset is connected to a Pentium-based portable processor worn
at the waist. The web servers download information such as emails and
voicemails from the user's mailbox, reminders, hourly news broadcasts, and
weather and traffic reports. The web server filters the information and
removes unwanted items. The user, when notified, can download the
information from the web server to the radio and listen to it in the
required format. The network also consists of a position server whereby
the position of the user can be determined.
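A minimal sketch of this client-server split, with the message format and the spam flag invented purely for illustration, might look as follows:

```python
# Illustrative sketch of the server-side filter and the client fetch;
# the message dictionaries and the "spam" flag are assumptions.
SERVER_INBOX = [
    {"category": "email", "subject": "Meeting moved to 3pm", "spam": False},
    {"category": "email", "subject": "WIN A PRIZE!!!", "spam": True},
    {"category": "news",  "subject": "Hourly headlines", "spam": False},
]

def server_filter(inbox):
    """Server side: remove unwanted items before anything reaches the radio."""
    return [m for m in inbox if not m["spam"]]

def client_fetch(category):
    """Client side: the wearable downloads one category of messages on demand."""
    return [m for m in server_filter(SERVER_INBOX) if m["category"] == category]

for msg in client_fetch("email"):
    print(msg["subject"])  # only "Meeting moved to 3pm" survives the filter
```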

Working with the Device:


The information must be provided in such a manner that it causes minimum
disturbance to the user. One of the methods used by the Nomad is to
broadcast the news, reports etc. in the background. The audio streamer
device uses Head-Related Transfer Functions (HRTF) to check whether the
user is straining his head to listen to the news. If so, the volume of the
broadcast is increased. Spatialized listening is provided for the voicemails
and emails, which arrive at different times of the day.
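The attention check and volume boost could be sketched roughly as below; reducing head orientation to a single bearing angle and the 20-degree threshold are simplifying assumptions, not details from the Nomad design.

```python
# Sketch of attention-driven volume control. A head turned towards the
# spatialized source is taken as a sign the user is straining to listen.
def adjust_volume(base_volume: float, head_bearing_deg: float,
                  source_bearing_deg: float, threshold_deg: float = 20.0) -> float:
    """Double the volume when the head points close to the audio source."""
    if abs(head_bearing_deg - source_bearing_deg) < threshold_deg:
        return min(1.0, base_volume * 2.0)   # user attends: raise the volume
    return base_volume                        # user ignores: keep it quiet

print(adjust_volume(0.3, head_bearing_deg=85.0, source_bearing_deg=90.0))  # 0.6
```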
The device mainly works in three modes of operation (see the sketch after
this list):
1) Broadcasting
In this mode, messages are broadcast to the user at a low volume, in the
background. If the user pays attention to a message (signaled by a button
press or HRTF), the message is brought to the foreground; otherwise it
fades away.
2) Browsing
In this mode, the user selects a category and plays back the messages
sequentially. When the required message is reached, the user can stop the
device and listen to the message in the foreground.
3) Scanning
In this mode, portions of the messages are played sequentially, each
message coming to the foreground for some time and then fading out as the
next message enters the foreground. The user selects a message as it
comes to the foreground.
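A toy sketch of the three modes, with message handling reduced to printing and the user's attention and selection passed in as callback functions (this is not the actual Nomad code):

```python
# Toy sketch of the three presentation modes described above.

def broadcast(messages, attended):
    """Play each message quietly; foreground it only if the user attends."""
    for msg in messages:
        if attended(msg):                 # button press or HRTF cue
            print("FOREGROUND:", msg)
        else:
            print("(fades out):", msg)

def browse(messages, wanted):
    """Play messages in a chosen category until the user stops at one."""
    for msg in messages:
        if wanted(msg):
            print("FOREGROUND:", msg)
            break
        print("(skipped)  :", msg)

def scan(messages, preview_words=4):
    """Play a short portion of each message as it passes the foreground."""
    for msg in messages:
        print("(preview)  :", " ".join(msg.split()[:preview_words]), "...")

msgs = ["traffic is heavy on route 9", "voicemail from Alice about dinner"]
scan(msgs)
browse(msgs, wanted=lambda m: "Alice" in m)
```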
AWARENESS & COMMUNICATION
The Nomad allows the user to be aware of other users and to determine
their location using the position server. The user can also chat with other
users from a remote location over the Nomad network.
4. DyPERS - DYNAMIC PERSONAL ENHANCED REALITY SYSTEM
Introduction
As computation becomes faster and easier, routine human tasks such as
planning and scheduling can be handled by personal digital assistants
(PDAs). But transferring this information from the real world to the PDA
requires tremendous effort from the user. This transfer of information
must therefore happen in a natural, seamless manner. For this we use
DyPERS - the Dynamic Personal Enhanced Reality System.
The device acts as an audio-visual memory assistant which reminds the user
at appropriate times using perceptual cues. DyPERS records relevant
information about what the user sees using a portable camera. This audio-
visual clip is stored along with the required index in the memory of the
system. Whenever the device encounters the indexed object again in its
field of vision, the system plays back the clip.
Audio-Visual Associative Memory System
The main principle of operation of DyPERS is called Record & Associate. In
this system, the user records relevant video clips using a camera mounted
in the user's line of sight. After recording, he associates the recorded
clip with an object which acts as the index to the clip. The device then scans
for the indexed image, and if it 'sees' a similar object, the image is sent to
the processor, which compares it with the original index and returns a
'confidence level'. If the confidence level is above a certain threshold, the
video clip is played back by the system.
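In code terms, playback reduces to a lookup gated by the confidence threshold. In the sketch below the vision module is replaced by hand-fed labels and confidences, and the 0.8 threshold is an arbitrary illustration:

```python
# Sketch of Record & Associate; match results are fed in by hand here,
# standing in for the real vision module, and 0.8 is an assumed threshold.
clips = {}  # object label -> associated A/V clip (a plain string here)

def associate(label, clip):
    """Index a recorded clip by the object that should trigger it."""
    clips[label] = clip

def on_frame(label, confidence, threshold=0.8):
    """Play the stored clip only when recognition is confident enough."""
    if confidence >= threshold and label in clips:
        print("playing:", clips[label])

associate("business card", "conversation with Alice, 2 Jan")
on_frame("business card", confidence=0.91)  # plays the associated clip
on_frame("business card", confidence=0.42)  # below threshold: nothing plays
```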

Working:
The audio-visual recording module accumulates buffers containing audio-
visual data. These circular buffers contain the past 2 seconds of compressed
audio and video. Whenever the user decides to record the current
interaction, the system stores the data until the user signals the recording
to stop. The user moves his head-mounted video camera and microphone to
specifically target and shoot the footage required. Thus, an audio-video clip
is formed. After recording such a clip, the user selects the object that
should trigger the clip's playback. This is done by directing the camera
towards an object of interest and triggering the unit (i.e. pressing a button).
The system then instructs the vision module to add the captured image to
its database of objects and associate the object's label with the most
recently recorded A/V clip. The user can select from a record button, an
associate button and a garbage button. The record button stores the A/V
sequence. The associate button merely makes a connection between the
currently viewed visual object and the previously recorded sequence. The
garbage button associates the current visual object with a NULL sequence,
indicating that it should not trigger any playback. This helps resolve errors
or ambiguities in the vision system.
Whenever the user is not recording, the system continuously scans its field
of view to check whether any of the objects in its database are present. If
so, the corresponding video clip is played back as instructed. Recording,
association and retrieval are performed in a continuous manner.
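The 2-second pre-record buffer maps naturally onto a fixed-length deque, as in this sketch; the frame rate and the use of strings as stand-ins for A/V frames are simplifying assumptions.

```python
# Sketch of the circular pre-record buffer described above.
from collections import deque

FPS = 10                              # assumed capture rate (frames/second)
ring = deque(maxlen=2 * FPS)          # always holds the most recent 2 seconds
recording = False
clip = []

def capture(frame):
    """Called for every incoming frame, recording or not."""
    ring.append(frame)
    if recording:
        clip.append(frame)

def start_recording():
    """Begin a clip, seeded with the 2 seconds buffered before the press."""
    global recording, clip
    recording, clip = True, list(ring)

def stop_recording():
    global recording
    recording = False
    return clip

for t in range(30):                   # 3 seconds pass before the user reacts
    capture(f"frame{t}")
start_recording()
for t in range(30, 35):               # half a second of deliberate recording
    capture(f"frame{t}")
print(len(stop_recording()))          # 20 buffered + 5 live = 25 frames
```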
Object Recognition System
In order to recognize an object, multidimensional histograms of the object's
image are taken and compared with the histograms of the images in the
system's database. A sufficiently similar histogram is treated as a positive
recognition. In order to test whether such a system would work, an
experiment was conducted in which 103 similar objects were scanned at
different image-plane rotations and viewpoints.
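As a one-dimensional illustration of the idea, the sketch below builds normalized intensity histograms and compares them with histogram intersection; the bin count and the 0.8 acceptance threshold are assumptions, and the real DyPERS matcher worked on multidimensional histograms.

```python
# Sketch of histogram matching in its simplest (grayscale) form.
def histogram(pixels, bins=8):
    """Normalized histogram of 8-bit intensity values (0..255)."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    return [c / len(pixels) for c in counts]

def intersection(h1, h2):
    """Histogram intersection: 1.0 = identical, 0.0 = disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

stored = histogram([10, 12, 200, 210, 215, 90])   # indexed object
seen   = histogram([11, 14, 205, 208, 220, 95])   # current camera view
print(intersection(stored, seen) >= 0.8)           # True: positive recognition
```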
Hardware
At present, data transmission is via wireless radio communication, which
limits the mobility of the user. In the future, better data transmission
methods could be developed.
The main components of the DyPERS system are described below.

The HUD is a Sony Glasstron with a semi-transparent display and
headphones. A video camera with a wide-angle lens is used to increase the
field of vision and is mounted near the user's forehead to remain in the line
of sight. The A/V data captured by the camera is transmitted via a wireless
radio transmitter to a workstation. There the captured video is split into
image frames and compared with the images in the database. The required
data is then transmitted back to the user, and the clips are displayed on the
Glasstron HUD. Two A/V channels are used at all times to transfer data
bidirectionally.
Applications
The applications of such a device are tremendous. Some of them are:
* Daily scheduling can be stored easily and associated with a personal trigger
object.
* An important conversation can be recorded and associated with the
person's visiting card.
* Online instructions could be provided for an assembly task.
* The device could be used for crime prevention, recognizing criminals by
comparing them with earlier records.

5. WEARABLE CINEMA:
Introduction
Application in a Museum Environment:
Over many years, the concept of interactive cinema has been experimented
with, without much success. With the advent of wearable computing, this
concept might become a reality. Researchers at the MIT Media Lab have
developed a new way whereby interactive cinema can be displayed to the
wearer, using visual cues from the environment.
The experiment was performed in a museum environment. Interactive
documentaries and explanations of each exhibit had to be shown to the
visitor to give him an enhanced experience, yet the presentation must not
divert the viewer's attention away from the exhibit. Wearable cinema
fuses together the documentary and the visitor's path through the exhibit
using a wearable computer.
A variety of historical footage is collected and authored into an interactive
presentation for a wearable computer, using a 'Wearable City' 3D graphics
presentation to situate the user in the space. The audiovisual presentation
of the footage and its description are authored using Macromedia's Flash
authoring environment. A perceptive media model of the content unfolds
the wearable cinema as the visitor walks around the space and the camera
attached to the wearable recognizes its presence at specific locations or
near relevant objects.
The Wearable Cinema system allows recording small chunks of video and
associating them with triggering objects. When an object is seen again at
a later moment, the video is played back. Wearable Cinema is not a
simulation running on a desktop computer connected to a head-mounted
display; it actually runs on a wearable that was specially designed for it,
and the computer vision runs in real time on the wearable CPU.

The main distinctive characteristic of this setup is that it uses real-time
computer vision as input for easier and faster location finding. The system
uses DyPERS technology to recognize objects in its field of vision. A quick
training pass on the locations or objects to recognize is the only setup
needed for the computer vision system at the start. The wearable is built
from two sandwiched CPUs: one is dedicated to processing the input and the
other to producing the output shown on the wearable display. These two very
thin and lightweight computers are housed inside a stylized backpack. The
wearable is connected to a small wide-angle camera worn on the user's
shoulder, and to a high-resolution SVGA display.
Working
Once the training is over, the system is ready to be used. The first CPU
and the camera are used to recognize the objects. As the viewer approaches
an exhibit, the image of the exhibit is captured by the camera and its
histogram is compared with the indexes in the database. Once a match has
been found, the first CPU contacts the second system, which stores all the
documentaries. The required documentary is selected and played back on the
augmented reality display to enhance the viewer's experience.
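The division of labor between the two CPUs can be pictured as a two-stage pipeline, roughly as below; the exhibit labels, file names and the recognition stub are purely illustrative placeholders.

```python
# Two-stage sketch: CPU 1 recognizes the exhibit, CPU 2 plays the clip.
DOCUMENTARIES = {                        # held by the second (output) CPU
    "steam engine": "steam_engine_history.mpg",
    "power loom":   "industrial_looms.mpg",
}

def recognize(camera_frame):
    """CPU 1: match the frame against trained exhibits (stubbed here;
    the real system uses the DyPERS histogram matcher)."""
    return camera_frame.get("best_match")    # exhibit label or None

def play(exhibit_label):
    """CPU 2: look up and play the documentary for a recognized exhibit."""
    clip = DOCUMENTARIES.get(exhibit_label)
    if clip:
        print("now showing:", clip)

frame = {"best_match": "steam engine"}       # simulated camera input
play(recognize(frame))                       # -> now showing: steam_engine_history.mpg
```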

CONCLUSION:
Wearable computers have come a long way from the days of the WearComp.
Extensive research and development work at various centers has ensured
that these wonderful devices will change our lives dramatically in the near
future. Several commercial vendors have started manufacturing and
marketing these devices.
The earlier devices were quite obtrusive and often made the wearer ill at
ease, but recently such devices have been gaining social acceptance. This is
attributed partly to miniaturization and partly to dramatic changes in
people's attitudes to personal electronics. Any remaining awkwardness will
disappear as the apparatus vanishes into ordinary clothing and eyeglasses.
Clothing-based computing with personal imaging will blur all boundaries
between seeing and viewing and between remembering and recording. Rather
than living within our own personal information domains, networking will
enlarge our scope through a shared visual memory which enables us to
"remember" something we have never seen.
With computers as close as the shirts on our backs, interaction will become
more natural. This will improve our ability to do traditional computing while
standing or walking. By letting the computing system function as a second
brain, the system could develop situational awareness, perceptual
intelligence and an ability to see from the wearer's perspective while
assisting him in his day-to-day activities.
Within the next few years, we can expect entirely new modes of human-
computer interaction to arise. Wearable computers will help in the
development of a cyborg - a system in which the camaraderie between
human and machine becomes seamlessly simple. This will bring forward a new
set of technical, scientific and social needs which will have to be addressed
as we take the first steps towards coexisting with wearable computers.
