UbiComp 2003, the Fifth International Conference on Ubiquitous Computing, is the premier forum for presen-
tation of research in all areas relating to the design, implementation, deployment and evaluation of ubiquitous
computing technologies. The conference brings together leading researchers from a variety of disciplines, perspec-
tives and geographical areas, who are exploring the implications of computing as it moves beyond the desktop
and becomes increasingly interwoven into the fabric of our lives.
The full papers and technical notes for UbiComp 2003 are published in the Springer-Verlag Lecture Notes in
Computer Science (LNCS) series, volume 2864. In addition to papers and technotes, UbiComp 2003 is hosting a
wide variety of other presentation forums, including demonstrations, interactive posters, a doctoral colloquium,
a video program, twelve workshops, and a panel. This broad selection of venues and media within the conference
is one of the great strengths of the UbiComp series, and this Adjunct Proceedings volume includes extended
abstracts from each of these forums. While the acceptance rates in these categories were higher than for the full
papers and technical notes, all submissions were subjected to a peer review process designed to ensure high quality.
Firstly, UbiComp 2003 includes a Panel track chaired by Gerd Kortuem. The panel, chaired by Eric Paulos
on the final day of the conference, features participants sharing their views on the prospects for new forms of
“mobile play” engendered by ubiquitous computing technologies.
The Demonstrations program, co-chaired by Eric Paulos and Allison Woodruff, with assistance from Eliza-
beth Goodman, includes approximately forty examples of ubiquitous computing technology, applications and art,
many of which provide opportunities for attendees to directly experience the impacts of ubiquitous computing.
The large collection of demonstrations includes a living sculpture, a WiFi game in the streets of Seattle, cardboard
boxes for configuring an information network, and new location-tracking systems and extended-sensor ‘motes’.
Our Interactive Posters track, co-chaired by Marc Langheinrich and Yasuto Nakanishi, offers a venue for the
presentation of late-breaking and/or controversial results in an informal and interactive setting. This year, over
forty posters were accepted, representing a variety of scientific backgrounds, and including many researchers who
are new to the field of ubiquitous computing.
The participants in the Doctoral Colloquium, chaired by Tom Rodden, are given the opportunity to present
their thesis research plans to a panel of senior researchers representing several areas within ubiquitous computing,
and receive focused, constructive feedback. These students may also choose to present their work to the larger
conference community as posters.
The Videos program, chaired by Jason Brotherton and Peter Ljungstrand, offers another format in which
researchers in ubiquitous computing can present their work. Videos offer an opportunity for authors to present
their work in a scenario of use, creating aspects of the context that may be difficult to replicate on-site at the
conference venue. In addition to the extended abstracts in this volume, the twelve videos this year are also dis-
tributed in DVD format.
Finally, twelve Workshops precede the main conference this year, offering the chance for small groups of par-
ticipants to share understandings and experiences, to foster research communities, to learn from each other and to
envision future directions. This year’s workshop program, chaired by Michael Beigl, covers many emerging topics
in ubiquitous computing, including healthcare, commerce, privacy and intimacy.
We are very grateful to the chairs and authors in all of these participation categories for providing attendees
with new perspectives on and experiences of ubiquitous computing. We also want to express our immense
gratitude to Khai Truong, our webmaster and design guru, for his outstanding work on the conference web pages
and on the visual identity for the conference, which appears on the web site, the student volunteer tee-shirts, the
conference program, the conference DVD cover, and, of course, the cover to this volume.
Several organizations helped provide financial and logistical assistance for the conference, and we gratefully
acknowledge their support. The donations by our corporate benefactor, Intel, and by our corporate sponsors, Fuji
Xerox Palo Alto Laboratory, Hewlett-Packard Laboratories, IBM Research, Microsoft Research, Nokia Research
and Smart Technologies, help us provide a world-class conference experience for all attendees.
Finally, we wish to thank all the people attending the conference, as it is the opportunities to meet and interact
with all of you interesting people that makes the planning of such a momentous event a worthwhile endeavor for
all involved!
N.B. Copyright © 2003 is retained by the respective authors of each of the works contained herein.
Conference Organization
Conference Chair
Joe McCarthy Intel Research Seattle (USA)
Program Chairs
Anind K. Dey Intel Research Berkeley (USA)
Albrecht Schmidt University of Munich (Germany)
Technical Notes Chairs
Tim Kindberg Hewlett-Packard Labs (USA)
Bernt Schiele ETH Zurich (Switzerland)
Demonstrations Chairs
Eric Paulos Intel Research Berkeley (USA)
Allison Woodruff Palo Alto Research Center (USA)
Interactive Posters Chairs
Marc Langheinrich ETH Zurich (Switzerland)
Yasuto Nakanishi University of Electro-Communications (Japan)
Videos Chairs
Peter Ljungstrand PLAY, Interactive Institute (Sweden)
Jason Brotherton Ball State University (USA), and
University College London (UK)
Doctoral Colloquium Chair
Tom Rodden Nottingham University (UK)
Workshops Chairs
Michael Beigl TecO, University of Karlsruhe (Germany)
Christian Decker TecO, University of Karlsruhe (Germany)
Panels Chair
Gerd Kortuem Lancaster University (UK)
Student Volunteers Chair
Stephen Voida Georgia Institute of Technology (USA)
A/V & Computing Chair
James Gurganus Intel Research (USA)
Treasurer
David McDonald University of Washington (USA)
Publications Chair
James Scott Intel Research Cambridge (UK)
Publicity Chair
Mike Hazas Lancaster University (UK)
Webmaster
Khai Truong Georgia Institute of Technology (USA)
Local Arrangements
Ellen Do University of Washington (USA)
Conference Manager
Debra Bryant University of Washington (USA)
Demonstrations: Program Committee
Jeff Burke University of California, Los Angeles (USA)
Elizabeth Churchill FX Palo Alto Laboratory (USA)
Mike Fraser University of Nottingham (UK)
Bill Gaver Royal College of Art (UK)
Lars Erik Holmquist Viktoria Institute (Sweden)
Sherry Hsi The Exploratorium (USA)
Mark Newman Palo Alto Research Center (USA)
Kenton O’Hara Appliance Studio (UK)
Dan O’Sullivan New York University (USA)
James Patten MIT Media Lab (USA)
Marc Smith Microsoft Research (USA)
Mark Smith HP Labs (USA)
John Stasko Georgia Institute of Technology (USA)
Lyndsay Williams Microsoft Research Cambridge (UK)
Ken Wood Microsoft Research Cambridge (UK)
Kenji Oka Tokyo University (Japan)
Mario Pichler Software Competence Center Hagenberg (Austria)
Jaana Rantanen Tampere University of Technology (Finland)
Steffen Reymann Philips Research Laboratories (UK)
Dimitris Riggas Computer Technology Institute (Greece)
Matthias Ringwald ETH Zurich (Switzerland)
Michael Rohs ETH Zurich (Switzerland)
Tobias Rydenhag PLAY, Interactive Institute (Sweden)
Yutaka Sakane Shizuoka University (Japan)
Ichiro Siio Tamagawa University (Japan)
Martin Strohbach Lancaster University (UK)
Yasuyuki Sumi Kyoto University (Japan)
Tsutomu Terada Osaka University (Japan)
Tore Urnes Telenor Research and Development (Norway)
Julien Vayssiere INRIA (France)
Kousuke Yamazaki Tokyo University (Japan)
Tobias Zimmer TecO, Karlsruhe University (Germany)
Videos: Reviewers
Harold Thimbleby University College London (UK)
Matt Jones University of Waikato (New Zealand)
Armando Fox Stanford University (USA)
Brad Johanson Stanford University (USA)
Trevor Pering Intel Research (USA)
Chris Long Carnegie Mellon University (USA)
Khai Truong Georgia Institute of Technology (USA)
Marco Gruteser University of Colorado, Boulder (USA)
Merrie Ringel Stanford University (USA)
James Fogarty Carnegie Mellon University (USA)
Desney Tan Carnegie Mellon University (USA)
Sponsors
Corporate Benefactor Intel
Supporting Societies
UbiComp 2003 enjoys in-cooperation status with the following special interest groups of the Association for
Computing Machinery (ACM):
SIGCHI (Computer-Human Interaction)
SIGSOFT (Software Engineering)
Table of Contents
I Panel
Mobile Play: Blogging, Tagging, and Messaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Eric Paulos
II Demonstrations
Context Nuggets: A Smart-Its Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Michael Beigl, Albert Krohn, Christian Decker, Philip Robinson, Tobias Zimmer, Hans Gellersen, and
Albrecht Schmidt
Platypus Amoeba . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Ariel Churi and Vivian Lin
Noderunner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Yury Gitman and Carlos J. Gomez de Llarena
Pulp Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Tim Kindberg, Rakhi Rajani, Mirjana Spasojevic, and Ella Tallyn
Living Sculpture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Yves Amu Klein and Michael Hudson
Responsive Doors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Greg Niemeyer
Ambient Wood: Demonstration of a Digitally Enhanced Field Trip for Schoolchildren . . . . . . . . . . . . . . . . . . . 100
Cliff Randell, Ted Phelps, and Yvonne Rogers
Demonstrations of Expressive Softwear and Ambient Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Sha Xin Wei, Yoichiro Serita, Jill Fantauzza, Steven Dow, Giovanni Iachello, Vincent Fiano, Joey
Berzowska, Yvonne Caravia, Delphine Nain, Wolfgang Reitberger, and Julien Fistre
Mobile Capture and Access for Assessing Language and Social Development in Children with Autism . . . . . 137
David Randall White, José Antonio Camacho-Guerrero, Khai N. Truong, Gregory D. Abowd, Michael
J. Morrier, Pooja C. Vekaria, and Diane Gromala
The Narrator : A Daily Activity Summarizer Using Simple Sensors in an Instrumented Environment . . . . . 141
Daniel Wilson and Christopher Atkeson
Interfaces
Device-Spanning Multimodal User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Elmar Braun and Andreas Hartl
On the Adoption of Groupware for Large Displays: Factors for Design and Deployment . . . . . . . . . . . . . . . . . 149
Elaine M. Huang, Alison Sue, and Daniel M. Russell
Instructions Immersed into the Real World — How Your Furniture Can Teach You . . . . . . . . . . . . . . . . . . . . . 155
Florian Michahelles, Stavros Antifakos, Jani Boutellier, Albrecht Schmidt, and Bernt Schiele
i-wall: Personalizing a Wall as an Information Environment with a Cellular Phone Device . . . . . . . . . . . . . . . 157
Yu Tanaka, Keita Ushida, Takeshi Naemura, Hiroshi Harashima, and Yoshihiro Shimada
Ambient Displays
Healthy Cities Ambient Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Morgan Ames, Chinmayi Bettadapur, Anind K. Dey, and Jennifer Mankoff
Habitat: Awareness of Life Rhythms over a Distance Using Networked Furniture . . . . . . . . . . . . . . . . . . . . . . . 163
Dipak Patel and Stefan Agamanolis
AudioBored: a Publicly Accessible Networked Answering Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Jonah Brucker-Cohen and Stefan Agamanolis
ContextMap: Modeling Scenes of the Real World for Context-Aware Computing . . . . . . . . . . . . . . . . . . . . . . . 187
Yang Li, Jason I. Hong, and James A. Landay
Prototyping a Fully Distributed Indoor Positioning System for Location-aware Ubiquitous Computing
Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Masateru Minami, Hiroyuki Morikawa, and Tomonori Aoyama
Connectivity Based Equivalence Partitioning of Nodes to Conserve Energy in Mobile Ad Hoc Networks . . . 203
Anand Prabhu Subramanian
Inside/Outside: an Everyday Object for Personally Invested Environmental Monitoring . . . . . . . . . . . . . . . . . 209
Katherine Moriwaki, Linda Doyle, and Margaret O’Mahoney
Applications
Using a POMDP Controller to Guide Persons With Dementia Through Activities of Daily Living . . . . . . . . 219
Jennifer Boger, Geoff Fernie, Pascal Poupart, and Alex Mihailidis
The Chatty Environment — A World Explorer for the Visually Impaired . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Vlad Coroama
IV Doctoral Colloquium
Ubiquitous Support for Knowledge and Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Michael A. Evans
Service Advertisement Mechanisms for Portable Devices within an Intelligent Environment . . . . . . . . . . . . . . 251
Adam Hudson
Towards a Rich Boundary Object Model for the Design of Mobile Knowledge Management Systems . . . . . . 257
Jia Shen
V Videos
DigiScope: An Invisible Worlds Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Alois Ferscha and Markus Keller
Bumping Objects Together as a Semantically Rich Way of Forming Connections between Ubiquitous
Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Ken Hinckley
Ubiquitous Computing in the Living Room: Concept Sketches and an Implementation of a Persistent
User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Stephen Intille, Vivienne Lee, and Claudio Pinhanez
STARS — A Ubiquitous Computing Platform for Computer Augmented Tabletop Games . . . . . . . . . . . . . . . 267
Carsten Magerkurth, Richard Stenzel, and Thorsten Prante
Breakout for Two: An Example of an Exertion Interface for Sports over a Distance . . . . . . . . . . . . . . . . . . . . . 271
Florian Mueller, Stefan Agamanolis, and Rosalind Picard
Concept and Partial Prototype Video: Ubiquitous Video Communication with the Perception of Eye
Contact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Emmanuel Munguia Tapia, Stephen Intille, John Rebula, and Steve Stoddard
Virtual Handyman: Supporting Micro Services on Tab through Situated Sensing & Web Services . . . . . . . . . 285
Dadong Wan
VI Workshops
Ubicomp Education: Current Status and Future Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Gregory D. Abowd, Gaetano Borriello, and Gerd Kortuem
2003 Workshop on Location-Aware Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Mike Hazas, James Scott, and John Krumm
UbiHealth 2003: The 2nd International Workshop on Ubiquitous Computing for Pervasive Healthcare
Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Jakob E. Bardram, Ilkka Korhonen, Alex Mihailidis, and Dadong Wan
2nd Workshop on Security in Ubiquitous Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Joachim Posegga, Philip Robinson, Narendar Shankar, and Harald Vogt
Multi-Device Interfaces for Ubiquitous Peripheral Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Loren Terveen, Charles Isbell, and Brian Amento
Ubicomp Communities: Privacy as Boundary Negotiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
John Canny, Paul Dourish, Jens Grossklags, Xiaodong Jiang, and Scott Mainwaring
At the Crossroads: The Interaction of HCI and Systems Issues in UbiComp . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Brad Johanson, Jan Borchers, Bernt Schiele, Peter Tandler, and Keith Edwards
Part I
Panel
Mobile Play: Blogging, Tagging, and Messaging
Eric Paulos
Intel Research
2150 Shattuck Avenue #1300
Berkeley, CA 94704
[email protected]
PANELISTS
Barry Brown, University of Glasgow, [email protected]
Bill Gaver, Royal College of Art, [email protected]
Marc Smith, Microsoft Research, [email protected]
Nina Wakeford, University of Surrey, [email protected]
You can discover more about a person in an hour of play than in a year of conversation. – Plato, 427-347 BC

ABSTRACT
Ubiquitous computing, by its very definition, aspires to weave computing technologies across the fabric of our everyday lives. Many of the successes and failures encountered during the pursuit of ubiquitous computing will be dictated by the manifest integration of play. It is play that helps us cope with the past, understand the present, and prepare for the future. This panel of experts is passionately interested in engaging in a critical dialogue around the applicability, adoption, and consequences of such elements of play in ubiquitous computing research. As motivation, several tremendously popular ubiquitous computing themes with playful elements will be examined: blogging, tagging, and message play.

Keywords
Play, blogging, tagging, messaging, digital graffiti, SMS, IM, ambiguity, toys, GameBoy, mobile computing, context-aware play.

INTRODUCTION
It is during play that we make use of learning devices, treat toys, people, and objects in novel ways, experiment with new skills, and adopt different social roles [1]. As children we clearly don't play to learn, but we certainly learn from play [2, 3]. Play helps us as children (and adults) to answer the questions: What can I do in this world? What am I good at? What might I become [4]? Many of us attribute our abilities, interests, and even our careers to childhood toys, games, and play [5-7]. Play unquestionably resonates with the very essence of human behavior and our role in society, and will play a vital role in the adoption of ubiquitous computing.

While gaming is a popular and important part of human play, this panel is focused more specifically on the fundamental activity of mobile, situated human play and its role in ubiquitous computing.

CAN UBICOMP COME OUT AND PLAY?
Current ubiquitous computing research has provided marked milestones of systems, tools, and techniques along the path of situated, focused problem solving. While crediting the achievements of this area, we explicitly draw emphasis to the portion of everyday life made up of non-goal-directed activities and play.

We make two important observations about play: (1) humans seamlessly move in and out of the context of play (sometimes on a minute-by-minute basis), and (2) when at play, humans employ a separate mental cognition. The scope of their current activity is more ambiguous [8], and their expectations about people, artifacts, interfaces, tools, etc. are increasingly relaxed. The mind is opened up to wildly fanciful interpretations, connections, and metaphors. The rules of human engagement are completely altered. It is often during this unique "play time" that we serendipitously establish important intellectual connections, leap to improved views of our world and society at large, and resolve conflicting paradigms. In essence, it is often through play that we advance our own substantial, novel contributions in life.

This fundamentally important human phenomenon clearly deserves a forum as a legitimate theme within the context of ubiquitous computing. In fact, as ubiquitous computing researchers we must not only be aware of this human tendency to play but, perhaps more importantly, use it to our advantage. When does play occur? How does it begin and end? When is it appropriate or inappropriate? What elements give rise to play? Quell play?

MOBILITY
Play by its very nature is an active event, promoting coordination, flexibility, and fine motor skills [9]. Often toys, the tools of play, respond to movement and hold our attention. From an early age toys encourage physical play: activity centers for babies, push-pull toys for toddlers, and blocks, balls and climbing frames for older children. Throughout our lifetime, we draw upon these innate skills and experiences to provide a safe and comfortable means of interfacing with others and the world around us through play.

There is no doubt that the current commercial adoption of wireless, mobile ubiquitous computing devices is indirectly spawning novel practices of social, mobile play. The research buzzwords of context awareness, always on, body worn, multi-medial, community awareness, and social networks are in fluid use across diverse non-research communities. Today's personal mobile devices have already been repurposed by independent, passionate users and groups for various forms of mobile play. As ubiquitous computing researchers, we have a primary interest in understanding the methods of such adoption and, more importantly, the evolution of its re-appropriation.

While we are interested in exploring new trends in mobile play, there are numerous currently deployed systems that have been re-appropriated from the context of work to play. The documented evolution of these systems and their current usage models helps drive many of the research questions for future mobile play. We use these systems as a starting point for the debate of mobile play.

BLOGGING
A blog (derived from "web-log") is a web page made up of usually short, frequently updated posts that are arranged chronologically – similar to a "what's new" page or journal. There is no limit to the content or topic of available blogs: links and commentary about other web sites, political issues, news about companies/people/ideas, diaries, photos, poetry, mini-essays, project updates, fiction, journalism, and even personal messages by embedded reporters on today's modern battlefield [10]. Blogs are almost always personal, imbued with the temper of their writers. Perhaps more importantly, to invoke Marx, blogs seize the means of production, bypassing the ancient rituals of traditional publication houses. In some sense blog posts are instant messages to the web.

The technologies to support blogging have been in place since the dawn of the web, yet it has not been until recently that this technique has self-organized itself into a playful social pursuit. With modern wireless mobile PDAs and phones, the urge to share and play with text, images, and sound in real time across vast distances and within a social network of friends (and enemies) is overwhelmingly compelling.

Several of the panelists have extensive experience playing in such worlds as well as building and evaluating tools that use and extend the blogging metaphor of social empowerment.

TAGGING
Tagging is often used within groups and communities to mark ownership or control over an object or territory. Tagging and graffiti are typically viewed as an anathema by the community. However, graffiti is simply defined as an inscription or drawing made on some public surface. Graffiti is an extremely important medium through which we engage in dialog across and within our community. Not just "gang tags" but political stickers, city-produced marks indicating gas lines, discarded receipts, cigarette butts, broken benches, covered parking meters, and scrawled messages are all examples of public-place community message play.

How will ubiquitous computing contribute to play within the space of tagging? What motivates the human passion for marking objects? How do we communicate by, through and with objects and artifacts? Why and how do objects exhibit an aura [11]?

Not surprisingly, nearly every manufactured item already contains a unique "tag". Better recognized as a barcode, this form of tagging has been socially re-purposed by digital, wireless tools to generate independent dialogs about these objects, empowering communities. Similarly, where will radio frequency identification (RFID) tags situate themselves within this space of social community dialogue? How will we tag wireless 802.11 access points? Where will such technologies and techniques give rise to play?

MESSAGE PLAY
From childhood note passing to adult flirtations couched in amusing metaphors, we find humans engaged in message play. We elucidate this continuing motivation for message play by example: the wireless pager. The initial usage model for pagers was that a person would send their phone number to another individual's pager; the recipient would dial the received number on a phone and establish a voice connection. What evolved was an entirely different usage model. In fact, a new cultural vocabulary of numerical messages arose. For example, users defined new encodings such as, "When I send '1-2-3', that means 'thinking of you'; '4-5-6' means 'feed the dog'."

Similar playful re-appropriation occurs with our current personal messaging tools such as cell phones and SMS text messaging. One teen expressed, "I carry my mobile phone around all the time, even in the house.…It's like my little baby, I couldn't live without my mobile, I bring it into the bathroom with me." Similarly, another couple on separate continents (and hence time zones) used SMS to send playful awareness messages to each other with no intention of engaging in dialogue. "When I get up in the morning I send her an SMS message that I'm 'Now making coffee' just to let her know what I'm doing….I guess I want her to be able to imagine me in the kitchen making coffee."

This urge to send playful messages is evident in almost every personal messaging tool in current use: instant messaging (IM), SMS text messaging, mobile phones, and wireless PDAs. For example, corporations created the service of "Caller ID", but its appropriation as an awareness messaging tool through "one ring calls" became a preferred form of message play between users. Fundamentally, humans engage in play and will certainly continue to socially repurpose mobile technology to satisfy this necessary human urge.

This leaves numerous open questions for debate: How will other forms of ubiquitous mobile message play be created? Engaged? Designed for? Encouraged? Diverted? How will mobile play affect human relationships in terms of trust, persuasion, and conflict? How will we map current messaging techniques onto and across such systems? What direct and side effects will result?

PANELISTS (Alphabetically)
The following is an alphabetical listing of each of the panelists that will participate in this panel, along with their position statement on this topic and a brief biography.

Barry Brown
Biography
Barry Brown is a research fellow and ethnographer at Glasgow University where he explores social issues surrounding human leisure and technology. Recently his focus has been on various leisure-enabling technologies such as music listening, museum visiting, and tourism. He has edited a highly respected book that deconstructs many aspects of mobile phone usage [12]. Barry has also investigated the parallels of video game interfaces and their relationship to ubiquitous computing [13].

Position Statement
Designing technologies for leisure presents a number of challenges for technology designers. It is not just that the goals in leisure are more diffuse, or that there are a more diverse set of requirements. In leisure the aim is enjoyment, rather than productivity. How something is done is often more important than the end result. For the tourists we have studied, using a guidebook was enjoyable in itself as well as contributing to their visit [14]. For music enthusiasts, finding new music is not just a goal but an enjoyable process in itself [15].

The importance of enjoyment as part of the experience of using a technology is something we, as ubiquitous computing researchers, can learn from gaming software. For example, gaming software often develops a user's skills in a particular technique, and when that technique is perfected, discards it to encourage the development of new forms of competency. In this way games maintain an interest in learning new and more advanced skills.

Games are also very much social activities (both co-present and online), and much can be learned from how these social experiences are pleasurable and shared. In our current work we are studying groups at play – in situations such as go-kart racing. We are interested in observing how discussion and socializing around an event becomes a powerful component of the enjoyment of the event itself. By designing social support for reflection and follow-up discussion directly into the interface of such systems, the overall experience of the technology can become a more enjoyable one.

Bill Gaver
Biography
Bill Gaver is a Senior Research Fellow at the Royal College of Art. He has pursued research on innovative technologies for over 15 years, working with and for companies such as Apple, Hewlett Packard, IBM and Xerox. Recent projects have included electronic furniture for public areas, information appliances that emphasize emotions and spirituality, and the creation of compelling public experiences from urban pollution sensing and data from Antarctic lakes. He is a principal investigator on the Equator IRC, in which his group is exploring digital devices that offer ludic opportunities for the home.

Position Statement: Designing Ubiquitous Play
Play is ubiquitous. Not only do we play when we're supposed to play – when we're gaming, or blogging, or flirting – but we play when we're doing other things as well. We play with ideas, with interpretations, with our own identities. We're curious, we explore, we fiddle, and doodle. From this point of view, play is not an activity so much as an attitude, one in which we're relatively free from external constraints and defined tasks.

In my research I am trying to understand how to support playful attitudes without defining systems as being 'for play.' For instance, in the ongoing Equator IRC, we are looking at technologies for the home that encourage people to reflect on their own activities, to try on new roles, to day-dream and speculate. None of the things we are designing could be considered 'for play,' yet they all depend on a playful frame of mind. They are intended to sit in a middle ground between work, consumption and entertainment, encouraging people to wander and wonder, rather than focus on clear tasks.

How do we design to allow play without dictating it? A couple of factors seem important. First, we need to embrace subjectivity – our own and others' – in our designs. Rather than seeking to create experiences based on our knowledge about typical desires and activities, it is often more compelling to design for the idiosyncratic and unusual. Second, ambiguity and openness are important factors in creating systems that people can appropriate into their own lives. Rather than dictating what a system is for, or even what it means, it is often more effective to design systems that are suggestive and open to interpretation. For it is in the act of making meaning from ambiguous situations that we are often at our most playful.

Marc Smith
Biography
Marc Smith is a research sociologist leading the Community Technologies Group at Microsoft Research. The focus of the group is to explore and build tools to support association and collective action through networked media.

Position Statement
Play, in the form of exploration, direct manipulation, and collaborative interaction, is a critical component of social life. Information technologies, despite their extensive use in the form of "games", often lack a playful quality and impose instrumental usage patterns. This often leads to significant underutilization of technical capacities as users avoid exploration for fear of stepping beyond the scope of their instrumental skills. The emerging capacities of ubiquitous computing suggest new opportunities for encouraging playful exploration of technical systems by supporting the primary sensory channels of feedback, direct manipulation, inscription, and mutual awareness. At question is how the playful uses of information technologies will be domesticated or will potentially rupture existing social institutions.

Nina Wakeford
Biography
Nina Wakeford is Director of the INCITE research centre at the University of Surrey, UK. Trained in anthropology and sociology, she studied for her PhD at Oxford University, where her thesis focused on the sociology of risk. For the past ten years she has been working on sociological approaches to new technology production and consumption, including studies of email discussion lists, web pages, mobile phone use, web logs and public internet access points, including wireless. One of her current projects uses the route of the number 73 bus in London as a way to sample usage of digital content in the city, including web pages, text messaging and blogging. She is also studying the way in which ethnographers work with interface designers, artists and engineers, and what they learn from each other.

Position Statement
A sociology of ubiquitous computing necessarily involves thinking about the linkages between space and social practice. One way of engaging with digital content in the city of London, for example, is to create mundane light content which might be characterized as playful in nature. Teasing, joking, shaming, and pranking are all routine activities of the set of young people in the UK who characterize themselves as heavy users of mobile phones. Creating a sociological framework around the concept of mobile play involves thinking about the many wider social and structural processes in which these activities are embedded. For example, to characterize an activity as 'playful' draws on wider cultural assumptions of risk, trust and blame. It may also involve notions of intimacy and power. The contemporary sociology of childhood can aid here: young people are no longer seen as invisible and inconsequential subjects, but as active actors with agency. This explains the kinds of digital play which we have observed amongst young people, both on the 73 bus route study and which they have reported in in-depth interviews.

PANEL PLAYTIME
Clearly, the focus of this panel is to use the synergy of the panelists and audience participation to elucidate the grand research challenges in the area of mobile play. As expected, individual panelists will present positions and relevant work to support their arguments at the panel. The inevitable ensuing discussions across panelists and audience will hopefully reveal the foremost research questions associated with mobile play.

However, we are also interested in consciously creating scenarios during the course of the panel that allow the audience to freely enter into a playful state of mind. Not literal game play, but play as a vital part of brainstorming, self-discovery, identity, and creativity.

Come out and play!

REFERENCES
[1] L. S. Newman, "Intentional and unintentional memory in young children: Remembering vs. playing," Journal of Experimental Child Psychology, vol. 50, pp. 243-258, 1990.
[2] G. G. Fein, "Skill and intelligence: The functions of play," Behavioral and Brain Sciences, vol. 5, pp. 163-164, 1982.
[3] J. S. Bruner, "The nature and uses of immaturity," American Psychologist, vol. 27, pp. 687-708, 1972.
[4] C. Adelman, "What will I become? Play helps with the answer," Play and Culture, vol. 3, pp. 193-205, 1990.
[5] D. M. Tracy, "Toy-playing behaviour, sex-role orientation, spatial ability, and science achievement," Journal of Research in Science Teaching, vol. 27, pp. 637-649, 1990.
[6] J. Piaget and B. Inhelder, The Psychology of the Child. New York: Basic Books, 1969.
[7] J. O'Leary, "Toy selection that can shape a child's future," in The Times, 1990.
[8] W. Gaver, J. Beaver, and S. Benford, "Ambiguity as a resource for design," presented at ACM CHI, 2003.
[9] J. A. Byers and C. Walker, "Refining the motor training hypothesis for the evaluation of play," American Naturalist, vol. 146, pp. 25-40, 1995.
[10] A. Harmon, "Improved Tools Turn Journalists Into a Quick Strike Force," in New York Times, Late Edition - Final ed., New York, 2003, p. 1.
[11] W. Benjamin, Illuminations. New York: Schocken Books, 1969.
[12] B. Brown, N. Green, and R. Harper, Wireless World: Social, Cultural and Interactional Aspects of Wireless Technology. Springer Verlag, 2001.
[13] J. Dyck, D. Pinelle, B. Brown, and C. Gutwin, "Learning from Games: HCI Design Innovations in Entertainment Software," in Proceedings of Graphics Interface 2003, 2003.
[14] B. Brown and M. Chalmers, "Tourism and mobile technology," in Proceedings of ECSCW 2003, 2003.
[15] B. Brown, E. Geelhoed, and A. J. Sellen, "The Use of Conventional and New Music Media: Implications for Future Technologies," in Proceedings of Interact 2001, pp. 67-75, M. Hirose, Ed. Tokyo, Japan: IOS Press, 2001.
Part II
Demonstrations
Context Nuggets: A Smart-Its Game
Michael Beigl*, Albert Krohn*, Christian Decker*, Philip Robinson*, Tobias Zimmer*, Hans Gellersen†, Albrecht Schmidt‡
* TecO, University of Karlsruhe   † Lancaster University   ‡ Universität München
* {michael, krohn, cdecker, philip, zimmer}@teco.edu
† [email protected]
‡ [email protected]
Keywords
Ubicomp Platform, Games, (usability, technology) tests
INTRODUCTION
In-situ context generation, processing and communication has advantages in many application areas. This demo shows a platform for application scenarios, consisting of tiny computing devices that are embedded into everyday objects, on people or clothing, or in the environment. Further demonstrated are development libraries and tools for building applications and services for supervising applications and experiments. The demonstration presents this technology platform through one example application. The central component of such a platform is a tiny device called the Smart-Its, which is used to retrieve context information from the environment, run applications and communicate via a wireless network. The first part of the demonstration is a Ubicomp game application; secondly, we give more details on the technology involved. Attendees are invited to take part in or observe the game, and subsequently have a closer look at the enabling software and hardware design of Smart-Its.

Ubicomp games [1] stimulate use of Ubicomp technology, as has been shown at previous Ubicomp conferences - e.g. Pirates [2]. In our game, "Context Nuggets", attendees of the conference are invited to be players by configuring and using a small device. The device can be worn on the waist or adhered to the shirt (figure 1).

Figure 1: Small Smart-It with sensors attached to body or clothing

A major part of the Smart-Its platform [3] is the hardware device, which comes in various forms (e.g. figure 1). It builds the embedded hardware toolset and contains RF-based wireless communication, on-board processing, memory, sensors and actuators. It can produce sensor information from up to 12 sensors, process context information within a local processor, provide adequate storage of context and general information, and host applications such as the game described below. It works independently of any external infrastructure and allows spontaneous, short-range peer-to-peer and ad-hoc exchange of processed data. Smart-Its are tiny, lightweight and have low energy consumption, such that the range of objects into which they can be embedded extends from very small objects to the human body (figure 1). The Smart-Its software can be rapidly developed based on a simple-to-use library providing high-level access to communication, sensing and actuating functionality. Furthermore, generic programs are available for certain application areas, such as usability tests. Analysis tools can be …

While infrastructure is optional, as Smart-Its communicate ad-hoc, PC-based services such as wireless development and maintenance of Smart-Its applications require such integration. Infrastructure equipment enables access to Smart-Its over the Internet and vice versa. Infrastructure-based services may also be a source of additional context information, such as location or a history database. Figure 2 shows a setting with several Smart-Its distributed in a flat or office environment.

… nuggets through a secret formula known only by you. But unfortunately, you cannot use the ingredients you produce yourself. Instead, you have to trade ingredients with other alchemists - one of your lux for one of their magical motions, one of your spells for one of their spells, etc. Based on your formula, "context nuggets" are created, and at the end of the day the alchemist with the most nuggets is the winner. Players influence the progress of the game by entering the secret formula used to make context nuggets. The secret formula describes how many of the ingredients are needed for creating a nugget and therefore determines the strategy for the player. A total of 10 units from the 3 ingredients are required, with at least one lux, one magic motion and one spell. The 7 remaining units can be allocated arbitrarily by the user, based on a calculated guess of the most available ingredients (figure 4).
… buffered data. The "alchemist" with the device containing the most nuggets is the winner.

Technical Setting
To run the game, players are required to wear a tiny electronic Smart-It device and to attach this device to their clothing (figure 1). The device works independently of other computers and networks, and holds the game rules and all information concerning the device's player. The device constantly detects physical attributes such as light, sound and movement.

The device is able to communicate wirelessly and spontaneously in order to exchange ingredient information between participants of the game. Game communication uses the Smart-Its ad-hoc and infrastructure-less networking. Collection of physical data, semantic aggregation and the ad-hoc communication are done implicitly without any user interaction. Any internal processing, like trading and generation of ingredients or nuggets, is stored together with a time stamp in the Smart-Its memory buffer. When the device enters a specially marked area with a connection to the Internet, buffered data is transferred to the Particle DataBase for immediate or later use. Conference attendees are able to monitor the current game status using either their own WiFi-enabled device or a Game Terminal during the game.
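To illustrate this buffer-and-upload behavior, the following is a minimal sketch in Python of how a game device might record timestamped events and flush them to the Particle DataBase once it reaches a connected area. The record fields, the in_upload_area flag and the upload_to_particle_db() call are illustrative assumptions, not part of the Smart-Its API.

```python
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class GameEvent:
    """One buffered entry: what happened, with whom, and when."""
    kind: str            # e.g. "ingredient", "trade", "nugget"
    detail: str          # e.g. "lux" or the partner's device id
    timestamp: float = field(default_factory=time.time)

class GameDevice:
    def __init__(self, player_id: str):
        self.player_id = player_id
        self.buffer: List[GameEvent] = []

    def record(self, kind: str, detail: str) -> None:
        # Every internal action (trading, ingredient or nugget creation)
        # is stored with a time stamp in the local memory buffer.
        self.buffer.append(GameEvent(kind, detail))

    def maybe_upload(self, in_upload_area: bool) -> None:
        # When the device enters the specially marked, Internet-connected
        # area, the buffered data is transferred and the buffer cleared.
        if in_upload_area and self.buffer:
            upload_to_particle_db(self.player_id, self.buffer)  # hypothetical helper
            self.buffer.clear()

def upload_to_particle_db(player_id: str, events: List[GameEvent]) -> None:
    # Placeholder for the actual transfer to the Particle DataBase.
    print(f"uploading {len(events)} events for {player_id}")
```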
Game rules
1) The goal of the game is to create more nuggets than the other players.
2) You create nuggets by collecting ingredients and processing them into nuggets according to the formula you entered at the start of the game. Converting ingredients to nuggets is done automatically when the necessary type and amount of ingredients are available. Only traded materials can be used to produce nuggets, so you can't use your own ingredients.
3) You are able to generate 3 ingredients: lux (created from a light-level sensor), magic motions (created from a movement sensor) and spells (created from an audio sensor), through wearing a tiny magic device. Ingredients are produced automatically without any user interaction.
4) Ingredients that are not traded are perishable - their maximum usability period is about 2 minutes.
5) To create nuggets, you consume 10 ingredients from other alchemists. The ingredients you need are part of the secret formula you enter at the start of the game.
6) You can trade with other participants on a 1-to-1 basis by standing within 5 meters of them or even passing by. The longer you stand next to other wizards, the more you trade.
7) Trading and creating can be done everywhere, as no extra infrastructure is needed. Simply wear your device correctly at the belt or shirt, not inside a bag etc. Otherwise your ingredient production stops and you are not able to trade.
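To make the conversion rule concrete, here is a minimal sketch in Python of how a device could check a traded-ingredient inventory against a player's secret formula (10 units total, at least one of each ingredient) and consume them to mint a nugget. The function names and data layout are assumptions for illustration, not the actual Smart-Its game code.

```python
from collections import Counter

INGREDIENTS = ("lux", "magic_motion", "spell")

def valid_formula(formula: dict) -> bool:
    """A secret formula uses exactly 10 units and at least one of each ingredient."""
    return (sum(formula.values()) == 10
            and all(formula.get(i, 0) >= 1 for i in INGREDIENTS))

def try_make_nugget(traded: Counter, formula: dict) -> bool:
    """Consume traded ingredients according to the formula; return True if a nugget is made."""
    if not all(traded[i] >= formula[i] for i in INGREDIENTS):
        return False
    for i in INGREDIENTS:
        traded[i] -= formula[i]
    return True

# Example: a formula weighted towards lux, guessing that light events are most common.
formula = {"lux": 6, "magic_motion": 2, "spell": 2}
assert valid_formula(formula)

inventory = Counter({"lux": 7, "magic_motion": 2, "spell": 3})
nuggets = 0
while try_make_nugget(inventory, formula):
    nuggets += 1
print(nuggets, dict(inventory))   # 1 nugget made, remainder left in the inventory
```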
What it Demonstrates
The game shows some specific strengths of the Smart-Its platform. First of all, it shows that the Smart-Its are completely self-contained and independent devices. They do not need infrastructure and are able to generate higher-level information - represented here by the context (nuggets) and ingredients - from physical sensors.

The second strength is the ability of a Smart-It to work unobtrusively, as a factor of its small dimensions and long-time operation. Furthermore, the device does not require any administration, maintenance or other explicit interaction to fulfill its task.

The third strength is the de-centralized communication. No master device or access point is necessary. Nevertheless, when connected to a backbone network, additional data analysis and statistics functionality is enabled.

GAME EVALUATION
The evaluation part of the demonstration uses information generated by the "Context Nuggets" game and shows one application of the infrastructure toolset and services. This toolset and these services can be used for a variety of application areas, including supervision of field tests or living lab [4] tests. In our demo setting, it monitors the progress of the game application, computes game data (e.g. the score) and observes technical parameters.

Observation with Smart-Its
In the Smart-Its set-up, simple sensors are the basis for the supervision, in contrast to the complex video surveillance often used in controlled laboratory user studies. An advantage of the use of directly attached simple sensor systems is that they can collect data automatically, with fine granularity and independent of location, e.g. while on the move. Additionally, they are able to measure data constantly without being disturbed by occlusion. For many situations, ad-hoc embedding of the proposed technology is easy to handle and cheap, making the Smart-Its based evaluation suitable for small ad-hoc tests. Due to the lack of video surveillance cameras, the entire user behavior is not supervised. This suggests a move towards privacy-sensitive user monitoring.

There are also some disadvantages. Firstly, only specific parameters can be supervised, and they have to be known beforehand. Secondly, in the absence of additional surveillance of the user, users may fool the system by not using, or inappropriately using, the devices and so adulterate the collected data.

Example scenario: Game supervision
The "Context Nuggets" game builds the application scenario for collecting user-related data. The behavior of players gives hints as to how well the game performs in a given environment. From this data, valuable analyses about the overall performance of the game can be carried out, but also individual player performance data can be shown. For the proposed game, several parameters are of interest:
• How often and when do players generate ingredients
• How often do players meet in general
• How often do players exchange ingredients (combines generation and meeting)

Additionally, for the progress of the game, the information about the average "nugget" production per time is of interest.

Technically, the above parameters are retrieved by querying the Particle DB through a Web-server based script. The ParticleDB holds all events that took place during the game with the correlating timestamp. Using this detailed data, the statistics applications generate graphs and reports. These can then be accessed with a Web browser, either from the Terminal at the demonstration place or from any other computer connected to the network.
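As a rough illustration of such a reporting script - the actual ParticleDB interface is not described here, so the event format and the fetch_events() helper below are assumptions - a statistics pass over timestamped game events might look like this in Python:

```python
from collections import defaultdict
from datetime import datetime
from typing import Iterable, NamedTuple

class Event(NamedTuple):
    player: str
    kind: str        # "ingredient", "trade" or "nugget"
    timestamp: float

def fetch_events() -> Iterable[Event]:
    # Hypothetical stand-in for querying the ParticleDB via the web script;
    # a fixed sample is returned here so the report code is runnable.
    return [
        Event("alice", "trade", 1064140200.0),
        Event("alice", "nugget", 1064140300.0),
        Event("bob", "trade", 1064143900.0),
    ]

def hourly_report(events: Iterable[Event]) -> dict:
    """Count events per player, per kind and per hour of the day."""
    report = defaultdict(int)
    for e in events:
        hour = datetime.fromtimestamp(e.timestamp).hour
        report[(e.player, e.kind, hour)] += 1
    return dict(report)

for (player, kind, hour), count in sorted(hourly_report(fetch_events()).items()):
    print(f"{player:8s} {kind:10s} {hour:02d}:00  {count}")
```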
Figure 5: Example statistic

Example scenario: Technical (network and sensor) set-up evaluation
For the technical evaluation, the ad-hoc and statistical analyses are also of interest. For the "Context Nuggets" game, network coverage or general network problems could be critical. The percentage and average time of backbone access for every player, and the load distribution among access points, are of interest. A low access rate to one of the access points may indicate the need for a relocation of this access point.

Additionally, the context aggregation behavior of Smart-Its is noteworthy in order to optimize the rule-set of the game. The threshold and generating algorithms for units of ingredients can be adjusted according to the measured movements, noise and light.

All these measured parameters can be graphically displayed through the Particle Analyzer program from one of the terminal PCs on the demonstration site.

ACKNOWLEDGMENTS
The Smart-Its project is funded by the Commission of the European Union as part of the research initiative "The Disappearing Computer" (contract IST-2000-25428).

REFERENCES
1. Björk, S., Holopainen, J., Ljungstrand, P. & Åkesson, K-P. (2002). Designing Ubiquitous Computing Games - A Report from a Workshop Exploring Ubiquitous Computing Entertainment. Personal and Ubiquitous Computing, Volume 6, Issue 5-6, pp. 443-458.
2. Björk, S., Falk, J., Hansson, R., & Ljungstrand, P. Pirates! - Using the Physical World as a Game Board. Paper at Interact 2001, IFIP TC.13 Conference on Human-Computer Interaction, July 9-13, Tokyo, Japan.
3. Beigl, M., Zimmer, T., Krohn, A., Decker, C., Robinson, P.: Smart-Its - Communication and Sensing Technology for UbiComp Environments. Technical Report ISSN 1432-7864 (2003).
4. Kidd, Cory D., Robert J. Orr, Gregory D. Abowd, Christopher G. Atkeson, Irfan A. Essa, Blair MacIntyre, Elizabeth Mynatt, Thad E. Starner and Wendy Newstetter. The Aware Home: A Living Laboratory for Ubiquitous Computing Research. Proceedings of the Second International Workshop on Cooperative Buildings, October 1999.
Eos Pods: Wireless Devices for Interactive Musical Performance
SYSTEM DESCRIPTION
The performance was held in a banquet hall at which the Eos Orchestra would be performing for invited supporters. The tables in the hall were arranged according to an instrument layout for an orchestra, with each table representing one instrument. Each of the thirty tables at the banquet featured an interactive centerpiece consisting of a 12" diameter translucent acrylic dome approximately 8" tall. Electronics housed within the dome allowed the guests to tap the dome and trigger a phrase of music, as long as they had received a cue from the conductor. In a manner similar to the MIT Media Lab's "Tribbles" [5], LED lights within the dome were used to signal status to the guest players. Three states were communicated: Inactive, Enabled (cued by the conductor), and Playing. Domes were "served" to the tables by the technical staff to replace the banquet centerpieces after dinner was over, so no wires could be permitted for power or communications. User input sensors in the domes communicated via a microcontroller back to a master computer, sending data on how hard the dome was tapped. The master computer used this data to trigger a musical phrase played by that table's instrument, with a MIDI velocity relative to the force applied to the dome.

Two Akai samplers with 16 audio outputs each were fed the MIDI generated by a master computer running a Max patch and output the 30 player parts and a click track. Audio of the triggered chunk would be fed to the main mix and routed wirelessly to the table that triggered it and fed to a local, powered monitor. A click track would also be output from the master computer and mixed to the main audio feed.

Figure 1. Operator interface in Max

In the final implementation, the wireless audio return path was not implemented. Instead, clusters of local speakers were positioned overhead around the room. Three tables were served by each cluster of speakers. Audio for each of the tables served by a local cluster was routed to the cluster, so that the sound appeared to be coming from the table itself.

SOFTWARE
Interaction, playback and communication software was implemented on Macintosh computers in Cycling '74's Max software. A master Max patch was developed for this application with the following features:
• graphical user interface for operator control of performance parameters
• bidirectional UDP communication with pods
• interactive player subpatch for each pod
• algorithms for MIDI playback, interaction arbitration and progression through the piece
• control of pod LEDs for user feedback

The piece "In C" is composed of 53 individual melodic patterns of varying length to be played in sequence. In the performance instructions, the composer states that each performer must progress through the patterns in order but may make entrances and repeat each pattern numerous times at his or her choosing.

Our patch was designed to simulate these instructions, with entrances determined by the players and progression through the patterns controlled by the patch. A "horse race" algorithm was implemented, with a counter progressing through pattern numbers 1 through 53 and each pod racing towards or past this master number but remaining within +/- 4 of its current value. This was done by periodically incrementing each pod's pattern number, with the odds of incrementing being greater if further behind and less if further ahead of the master number.
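A minimal sketch of such a "horse race" update step, in Python, might look like the following; the probability values are invented for illustration, since the exact odds used in the Max patch are not given here.

```python
import random

NUM_PATTERNS = 53   # "In C" consists of 53 melodic patterns

def step(master: int, pod_patterns: list[int]) -> None:
    """Periodically nudge each pod's pattern number towards the master counter.

    Pods further behind the master are more likely to advance; pods ahead of it
    are less likely, and no pod is allowed to run more than 4 patterns ahead.
    """
    for i, current in enumerate(pod_patterns):
        if current >= NUM_PATTERNS:
            continue                      # this pod has finished the piece
        distance = master - current       # positive means the pod is behind
        if distance > 0:
            odds = 0.8                    # assumed: behind, so likely to catch up
        else:
            odds = 0.2                    # assumed: ahead, so rarely advances
        if random.random() < odds and current + 1 <= master + 4:
            pod_patterns[i] = current + 1

# Example: a master counter at pattern 10 with a few pods straggling or leading.
pods = [6, 9, 12, 10]
step(10, pods)
print(pods)
```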
Players were able to trigger playback of a sequence by tapping on the pod dome. Tap signals were sent via UDP to the host with a message indicating the velocity of the tap and the pod number. If the pod was enabled, a tap triggered playback of the pod's current pattern, with playback volume scaled by both the tap velocity and the pod's level as set in the operator's interface.

Playback of all parts was synchronized to a master eighth-note clock running at a fixed tempo. Entrances occurred on the next eighth note following receipt of a trigger, playing the triggering pod's current pattern synchronized to the master clock.

… enabled pod, with the circle's line thickness representing relative volume level. When a pod was triggered, a blue circle within the green flashed on and off to indicate active playback.

The interface enabled the Max operator to follow the conductor's movement among the tables and assist the conductor in controlling the performance. Using a pen and graphics tablet, the operator could move a circular marquee around the interface, with pen pressure controlling the diameter of the marquee and thus the area of influence. The operator could enable, disable and scale the volume of each of the pods using keystrokes, with the keystrokes' effects applying only to the pods enclosed by or touched by the marquee.

Bidirectional network communication was accomplished using the otudp object [6] to send UDP messages wirelessly between the Max host computer and the pods. Outgoing messages from the host were addressed to each pod on a local IP address and port, with the last byte of the IP number corresponding to the pod's table number. Messages from the pods to the host were sent on a broadcast IP address, enabling both the host and a backup host computer to receive them.

HARDWARE
Hardware for the pods was implemented on a PIC microcontroller, using NetMedia's SitePlayer web coprocessor to manage TCP and UDP connectivity. The entire module was connected wirelessly using an Ethernet-to-WiFi bridge from D-Link. The pods were powered by 12-volt rechargeable motorcycle batteries.
somewhat confused as instructions were rushed. For
RGB LED output was controlled by the PIC based on example, the pods were designed to trigger once a player’s
UDP messages received from the master control software. hand lifted from the pod; not all audience members
The white LED flash on each tap was generated locally by understood this, and some kept their hand on the pod
the PIC. All other LED control was received from the constantly, wondering why it didn’t play. In future
master control software. The Siteplayer HTML interface performances, we will look for a better solution, allowing
could also be used to set LED intensity levels, and was for a wider range of playing styles. Overall, however, the
used during installation for diagnostic purposes, when the audience had an enjoyable experience, and all parties were
master control software was not online. pleased with the result. Eos has hopes for future and more
ambitious performances with the pod system, and we look
forward to further refinement of the system as the design
Message protocol between the pod units and the master team and the orchestra gain more experience with it.
control software was kept as light as possible to minimize
traffic. Two bytes were sent from each pod to the master
control software on each tap: the last octet of the pod’s ACKNOWLEDGMENTS
address, and the velocity value. Three bytes were sent from We thank Jonathan Sheffer and the Eos Orchestra for their
the master control software to each pod: red, green, and support and for their openness to experimentation, without
blue intensity values for the LED’s. Optional fourth and which this project would not have been possible. We also
fifth bytes could be sent to adjust the sensitivity of the thank the staff and students of the Interactive
pod’s FSRs, but this was used only during installation. Telecommunications Program at ITP, many of whom were
Because the pods were to be laid out in a specific order in the performance space, fixed IP addresses were used for each pod, to simplify contact with them during installation. Likewise, the wireless bridges were also given fixed addresses in advance. Although it was not strictly necessary to associate IP addresses with instrument numbers, it was convenient, given the time scale of the project.

IMPLEMENTATION FOR UBICOMP 2003
For UbiComp 2003, our implementation of the Eos Pods will be somewhat less extensive than for the initial performance. Between 6 and 12 pods will be used, with only one sampler or synthesizer. Audio will not be routed to local clusters, but will be designed around a central PA system feeding the entire space of the demonstration.

CONCLUSION
The initial performance of the Eos pods at the orchestra's annual supporters' banquet was a successful test run for the system. All technical components performed as specified, with remarkably few technical problems. Communication between conductor, support staff, and audience was somewhat confused, as instructions were rushed. For example, the pods were designed to trigger once a player's hand lifted from the pod; not all audience members understood this, and some kept their hand on the pod constantly, wondering why it didn't play. In future performances, we will look for a better solution, allowing for a wider range of playing styles. Overall, however, the audience had an enjoyable experience, and all parties were pleased with the result. Eos has hopes for future and more ambitious performances with the pod system, and we look forward to further refinement of the system as the design team and the orchestra gain more experience with it.

ACKNOWLEDGMENTS
We thank Jonathan Sheffer and the Eos Orchestra for their support and for their openness to experimentation, without which this project would not have been possible. We also thank the staff and students of the Interactive Telecommunications Program (ITP), many of whom were drafted into service in the fabrication of the system; and the staff of Audio, Video, & Controls, whose expertise and enthusiasm made for a more pleasant working experience throughout the process.

REFERENCES
[1] T. Riley, “In C”, musical score and performance instructions (1964).
[2] R. Ulyate and D. Bianciardi, “The Interactive Dance Club: Avoiding Chaos in a Multi-Participant Environment”, Computer Music Journal, Volume 26, Number 3, MIT Press (2002).
[3] P. Cook, "Principles for Designing Computer Music Controllers," ACM CHI Workshop in New Interfaces for Musical Expression (NIME), Seattle, April 2001.
[4] M. Weiser and J.S. Brown, "The Coming Age of Calm Technology", http://www.ubiq.com/hypertext/weiser/acmfuture2endnote.htm (October 1996).
[5] J.A. Paradiso, "Dual-Use Technologies for Electronic Music Controllers: A Personal Perspective," Proceedings of the 2003 Conference on New Instruments for Musical Expression (NIME-03), Montreal, Canada.
[6] M. Wright, “otudp 2.4”, CNMAT, UC Berkeley. http://cnmat.cnmat.berkeley.edu/OpenSoundControl/clients/max-objs.html
Wishing Well Demonstration
Tim Brooke and Margaret Morris
Intel Corporation
JF3-377, 2111 N.E. 25th Ave.
Hillsboro, OR 97124 USA
+1 503 264 8512
[email protected], [email protected]
RESEARCH FINDINGS
Orientation to current time is well recognized as a sign of cognitive lucidity. This immediate temporal orientation is assessed in mental status exams and is relatively well supported through calendaring tools. However, our research indicated the importance of broadening the consideration of temporal orientation to include not only the present, but also the distant past and distant future – realms not addressed in most calendaring tools.

Figure 2. Calendars help orient people to the present, and they are often saved to ease recollection of the past. But they are not so helpful with envisioning and goal setting for the distant future.

It is true that orienting to the present is more challenging in old age: retirement can involve a disconnection from the rhythms, rituals and communities associated with workdays and weekends. Cognitive impairment certainly adds to this challenge. Equally if not more consequential, though, are the struggles of orienting to the distant future. We found that many people avoid thinking about their own and their loved ones' old age until forced to do so as a result of health crises. In some of these cases, more planful, proactive decision making may well have pre-empted a number of crises and consequently prolonged periods of independence. In other less dire situations, envisioning the future may have influenced households' choices about where to live, and what social relationships to build, in ways that would have improved quality of life later on.

Following are some examples from ethnographic fieldwork that illustrate the tendency to avoid thinking about the future:

• Joe and Lucinda cared for Lucinda's mother when she developed dementia. This was a rocky couple of years. They moved her mother to a new facility every three months until finding a sufficiently assistive environment. Each move was precipitated by a crisis: a fall, an incident of aggression, wandering. Lucinda's mother was miserably unhappy everywhere but the last place. They now wish they had known her mother's probable trajectory: they think this knowledge would have allowed them to avoid some of the crises by moving to a place that offered graduated levels of assistance.

• Paul and Jenna, who love their urban third floor flat, expressed determination to live there forever. Arthritis in Jenna's knee already makes the stairs a challenge, though. When asked about their plans for the future, her response was “I suppose we'll cross that bridge when we get to it.”

• Sue, a former teacher and successful real estate broker, now suffers from severe vision deterioration that prevents her from driving and a host of other activities. She recently moved to an upscale assisted care environment that she finds stifling. Even though she hasn't driven for ten years, she keeps her car as a symbol of the freedom that she misses.

• “I didn't want to believe this was happening to her,” said a young woman about her grandmother, who has Alzheimer's. In retrospect, she sees that she and her parents overlooked signs of deterioration for years. She feels that they missed an opportunity where medication could have made a big difference in slowing the course of the disease.

So why don't people think about old age?
There are a variety of obstacles to envisioning the future that emerged from our ethnographic research. First is the very daunting prospect of losing health, freedom, and independence. Imagining these changes for oneself or a loved one is so painful that many simply avoid thinking about them. Another is optimism and the accompanying denial about the prospect of negative future events. This is a delicate issue, since an optimistic explanatory style has been associated with better mental and physical health [7][8]. So to some degree, denial about the prospect of illness may actually help ward it off. Denying evidence of existing illness, however, is certainly problematic. Almost every household in our study reported overlooking early signs of dementia and subsequent regret about missing opportunities for treatment, education, and lifestyle planning. Another obstacle is uncertainty about the future: in particular, the resources one will have and the health issues one will have to contend with in old age. Even if these uncertainties weren't there, goal setting and planning can be intimidating. Some worry about not living up to goals: they would prefer to have low or no expectations than to
disappoint themselves. Most, however, lack the preliminary ideas and vision to start concrete planning. They sometimes have only an inkling of what they want. They lack tools to explore these preliminary desires and wishes in a way that is speculative and even playful. Existing planning tools, which tend to be business oriented, are overly specified for loose ideation about the future.

The needs and obstacles that we observed suggest a host of requirements for future envisioning tools that are not part of current calendaring and planning tools. Specifically, the tools should allow people to:

§ Carve out periods of time that are personally salient, while remaining oriented to universally accepted metrics of time
§ Ponder difficult decisions about old age in a nonstressful way
§ Conjecture, “feel out”, play with, and imagine possibilities
§ Examine values and let those guide life decisions
§ Work through obstacles that may impede wishes or goals
§ Plan the way one wants to live, not just milestones
§ Evaluate the kind of community and relationships that are important for one's late phase of life and how to achieve the desired quality of social connectedness
§ Plant wishes and goals without worrying about whether or not they are achievable
§ Reflect and build on previously set goals and wishes

… a “Wishing Well” toolkit. It seems more like a board game than a retirement planning tool, and a fun way to consider the future. The pieces of the toolkit lie in front of him on his desk. He starts to play around with a stone that forms part of the toolkit. Using the stone, Bob starts to select some images that are displayed on a table top display panel. Later, when Bob is with Sue (his wife), they look at the pictures of new houses and neighbourhoods and activities Bob has selected. Discussing his selections, they add new images and remove some from the stone. Sue removes photographs of houses with stairs; her arthritic knee is giving her trouble. Several months later Bob takes his stone to a realtor. The images stored on the stone help the realtor select houses that Bob and Sue would be interested in buying. The images also form a journal of past thoughts and imaginings of Bob and Sue, and aid them in making decisions and planning out how they might live their retirement.

The inspiration for a tool to aid future ideation comes from the experiences of making wishes, such as blowing out candles on a birthday cake or throwing a coin into a wishing well. These experiences are generative, imaginative, playful and hopeful regarding the future. Wishes are fun to make, and can be ambiguous, romantic and emotionally driven. This whimsical spirit of wishing is in contrast to many existing computerized tools; for instance, a travel website might demand an airport (preferably a 3-letter code) and departure and return dates when the user only has a vague idea of when and where travel would be desirable. The Wishing Well interface would invite self-reflection and projection of ambiguous future plans (e.g. “I want to feel like I've been far away” or “I want to travel to Europe”) rather than demanding specifics (e.g. date, time and airports). The intention is to use ambiguity as a resource for design, enabling intriguing and delightful user experiences [9].
The hardware consists of a number of stones and a flat, horizontal touch screen onto which the main interface is displayed. The main digital interface is an image browser that allows the user to navigate through a series of images related to a future ideation. For instance, if the user is planning a new home, then images related to homes, architecture, community and neighbourhood will be displayed. The stones are used to hold moods, which are defined by a collection of images. Images become associated with a stone by placing the stone over an image. The image is then “absorbed” into the stone.

§ Do people want to wish alone or with others?
§ Is it more helpful to use ambiguous or literal stimuli to help with preliminary planning?
§ Do people want their wishes recorded?

Eventually, the Wishing Well and other Lifespan Mapping interfaces will become integrated with the array of proactive health technologies that we are currently prototyping. Our goal is to test these technologies as a home system through clinical trials in 2004.
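Purely as a thought experiment, the stone-and-image interaction described above could be modelled along the following lines; the event names and data structures are invented for illustration and do not describe the actual prototype.

```python
from collections import defaultdict

# stone id -> ordered list of image ids the stone has "absorbed"
stone_contents = defaultdict(list)

def on_stone_placed_over(stone_id, image_id):
    """Tabletop senses a stone resting over an image: absorb the image."""
    if image_id not in stone_contents[stone_id]:
        stone_contents[stone_id].append(image_id)

def on_image_discarded(stone_id, image_id):
    """A partner removes an image from the stone's collection."""
    if image_id in stone_contents[stone_id]:
        stone_contents[stone_id].remove(image_id)

# Bob gathers house images; Sue later removes the one with stairs.
on_stone_placed_over("bob-stone", "house-with-garden")
on_stone_placed_over("bob-stone", "house-with-stairs")
on_image_discarded("bob-stone", "house-with-stairs")
```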
Extended Sensor Mote Interfaces
for Ubiquitous Computing
Waylon Brunette¹, Adam Rea¹, Gaetano Borriello¹,²
¹ Dep't of Computer Science & Engineering, University of Washington, Box 352350, Seattle, WA 98195 USA
² Intel Research Seattle, 1100 NE 45th Street, Suite 600, Seattle, WA 98105 USA
{wrb,area,gaetano}@cs.washington.edu, [email protected]
… reminding device, and as an input device for kiosks and digital public displays. It can also be used as a remote terminal to connect to specialized devices such as the Intel Personal Server [5], which provides personal storage but has no integrated display.

FIGURE 2: “Mite” Handheld RFID Reader

HANDHELD RFID READER
To leverage the growing field of passive RFID technology, we created a small handheld RFID reader that can be used as a personal actuation device. The low-power Mite shown in Figure 2 has a small read range of only a few inches and is based on the Skyetek multi-protocol RFID reader [6]. We enhanced the reader as a sensor node with buttons to create a small, mobile reader with communication and control capabilities. The Mite can read and write information into an assortment of passive RFID tags that have a globally unique ID and storage space for writing additional information. The Mite also has a small rechargeable lithium polymer power source with an accompanying USB charger. Our goal was to create a small, portable handheld reader to allow passive RFID tags to be leveraged in many ubiquitous computing applications.

An envisioned usage model for RFID tags is in smart spaces and location sensing. Tagged objects can contain part history, schematics, or even pointers to product manuals. In addition, tags can moderate physical access control or can contain code that is executed upon a tag read, allowing the Mite to act as an actuator. Tags can also easily be associated with auxiliary data contained in the infrastructure, making the possibilities for configuration almost infinite.

By allowing the user to have a personal reader, the privacy model changes so that the user is in control of his or her data and whereabouts. This is in contrast to fixed readers, where the environment is tracking the tag (on a person or object); in that model, a user is tracked by the infrastructure. With the user having the reader under his control, he is able to collect his own data without having to worry about who has access to potentially sensitive location information. Additionally, in this scenario information always flows to the person using the handheld and his devices, eliminating the intervention of outside infrastructure entirely.

MOTE INTERFACES
There has been a general lack of convenient methods to connect motes to standard PCs and handheld devices. We have developed two prototypes to interface motes via common communication ports. The goal is to lower the barriers to entry for new mote users and to provide a means to utilize motes with computing devices that are not equipped with traditional serial ports. We have developed prototypes of a USB and a PCMCIA based mote (shown in Figure 3) that exhibit near plug-and-play functionality. This makes connecting to existing infrastructure more streamlined and less prone to error. In addition, we also use a compact flash mote called a Canby to fill out our toolkit of mote interfaces [7].

FIGURE 3: Prototypes of Mote Interfaces (PCMCIA based mote and USB based mote)

APPLICATIONS
While the goal was to develop a set of highly flexible sensors and PAN building blocks, the individual components have been designed around a few core application sets. These generalized application sets helped guide the design process and provided a checklist of functionalities that we wanted to maintain throughout the development process. The Mite was designed to be a personal actuation mechanism that would allow the augmentation of objects and spaces with data. The DisplayMote had a simple goal of maximizing input/output capabilities.

An important application of the Mite is its ability to be used as an actuation mechanism. The Mite can be used to cause actions in the environment based on the information that the reader finds within RFID tags. For example, the Mite can be used as an out-of-band connection mechanism. It can send a laptop, PDA, or any other device enough information to bootstrap itself into a wireless network using the information contained in an RFID tag placed in the environment. A Bluetooth capable device can be augmented with an RFID tag containing its MAC address, allowing the discovery process to take less time and giving the user direct control of which devices he chooses to
communicate with. Conference rooms can have RFID tags that contain the necessary data to configure a laptop for that location, such as the SSID and WEP key of the wireless network and the name of the printer and/or projector available in the room. Not only does this allow a convenient method of configuration, it also limits access to the information to people who can physically enter the room. These actuation events don't have to be limited to computer interactions. RFID tags can be placed anywhere and used as widgets to trigger events, such as virtual switches to turn lights on and off. This allows for extremely dynamic environments where widgets can be reprogrammed and reconfigured.
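A rough sketch of what such a room tag might carry, and how a reader could use it, is shown below; the JSON layout, field names, and size limit are our assumptions, since the text does not specify a tag format.

```python
import json

TAG_CAPACITY = 256  # assumed writable bytes on the tag

def make_room_tag(ssid, wep_key, printer=None, projector=None):
    """Serialise a room's network and peripheral configuration for a tag."""
    config = {"ssid": ssid, "wep": wep_key}
    if printer:
        config["printer"] = printer
    if projector:
        config["projector"] = projector
    data = json.dumps(config, separators=(",", ":")).encode("utf-8")
    if len(data) > TAG_CAPACITY:
        raise ValueError("configuration does not fit in the tag's memory")
    return data

def on_tag_read(data):
    """Hand the decoded configuration to the laptop's connection logic."""
    return json.loads(data.decode("utf-8"))
```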
The Mite was also designed for applications that augment objects and spaces with information. These applications allow users to access and control data that has been associated with a particular item. A prime example of this individual control of data is associating repair histories with a specific device. Past repairs and scheduled maintenance can be annotated on an elevator itself as well as in a centralized database. This means that the information needed on the worksite is already at the worksite, without relying on a network connection. In addition, individual parts can now store their own history locally, giving more precise and accurate information. Another strong advantage of having inexpensive, lightweight RFID reader/writers is the ability for people to create personalized content. Business cards are imprinted with a variety of static information (e.g. name, title, email address) and are given to a variety of people. With writable RFID tags, business cards can contain active content that makes them a malleable document full of additional information that can be varied depending on who the intended recipient is. For example, it would be appropriate to embed the URL for a work homepage within the card given to a work colleague, but nice to be able to point a friend to a site of pictures from last week's golf outing using the same business cards. With the reprogrammable memory available with RFID, business cards can now contain sounds, product descriptions, or any other data that can fit on an RFID tag.
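The repair-history idea above can be pictured as appending small records to a part's own writable tag, falling back to the central database when the tag is full; the record format and capacity in this sketch are illustrative only, not a Mite specification.

```python
import json

TAG_CAPACITY = 512  # assumed user memory on the part's tag, in bytes

def append_repair_record(tag_bytes, date, technician, note):
    """Append a repair entry to the history stored on the part itself."""
    history = json.loads(tag_bytes.decode("utf-8")) if tag_bytes else []
    history.append({"date": date, "by": technician, "note": note})
    encoded = json.dumps(history, separators=(",", ":")).encode("utf-8")
    if len(encoded) > TAG_CAPACITY:
        raise ValueError("tag full; archive older entries to the central database")
    return encoded
```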
The DisplayMote general application set is squarely focused on the lightweight I/O capabilities that the platform offers. The DisplayMote was designed to provide access to systems which have no integrated display, like the Intel Personal Server [5]. Our goal was to make a platform the size of a wristwatch to enable short messages from these devices to be displayed to the user, and for the user to give feedback to these devices. Messages might be reminders or lists of surrounding devices that the DisplayMote is able to communicate with. The DisplayMote can be used as a low cost notification system which can be placed outside common spaces (like conference rooms) to dynamically show reservation schedules and the current status of the room. Another application that the DisplayMote was developed to address was the ability to make a “human sensor” a part of the sensor network. By showing messages and giving the user a means of input, a human can interact in real time with a UCB sensor mote on a lightweight platform. This could be useful in the deployment of sensor networks, to ensure that each node is properly configured and working.

CONCLUSION
Our goal is to create a toolkit for the development of specialized personal-area network devices that utilize a standard wireless communication platform. We leveraged the low-power radio and sensor network protocol work already in progress at UC Berkeley and other research institutions to create general purpose I/O devices. By creating a system of programmable I/O devices that share a common programming language and low power communication protocol, developers should be able to utilize these building blocks for application development. Hopefully this approach enables the research community at large to focus on implementing the desired functionality of the device without having to divert their energies to developing the base hardware components.

ACKNOWLEDGMENTS
We would like to thank Intel Research for their support of the project. Roy Want's team took our design and produced a prototype of the DisplayMote. Additionally, we would like to thank Ken Smith of Intel Research Seattle for his help with packaging solutions. Finally, we would like to thank Kurt Partridge and Saurav Chatterjee for advice in building the DisplayMote.

REFERENCES
1. K. Fishkin, K. Partridge, and S. Chatterjee. “Wireless User Interface Components for Personal Area Networks,” Pervasive Computing, Oct 2002, vol. 1, no. 4, pp. 49-55.
2. J. Hill, et al. “System architecture directions for networked sensors.” Proc. 9th Int'l Conf. on Architectural Support for Programming Languages and Operating Systems, 2000, pp. 93-104.
3. M. Weiser, “The Computer for the 21st Century”, Scientific American, Sept. 1991, vol. 265, no. 3, pp. 94-104.
4. K. Partridge, et al., “TiltType: Accelerometer-Supported Text-Entry for Very Small Devices,” Proc. International Conference on User Interface Software and Technology, 2002, pp. 201-204.
5. R. Want, et al., “The Personal Server: Changing the Way We Think about Ubiquitous Computing,” Proc. International Conference on Ubiquitous Computing, 2002, pp. 194-209.
6. SkyeTek M1, http://www.skyetek.com/products/SkyeRead%20M1.pdf
7. Lakshman Krishnamurthy, Intel Corporation. Personal contact.
Palimpsests on Public View:
Annotating Community Content with Personal Devices
Scott Carter, Elizabeth Churchill, Laurent Denoue, Jonathan Helfman, Paul Murphy, Les Nelson
FX Palo Alto Laboratory
340 Hillview Avenue, Building 4,
Palo Alto, CA 94304, USA
+1 650 813 7700
{carter, churchill, denoue, helfman, murphy, nelson}@fxpal.com
ABSTRACT
This demonstration introduces UbiComp attendees to a
system for content annotation and open-air, social blogging
on interactive, publicly situated, digital poster boards using
public and personal devices. We describe our motivation, a
scenario of use, our prototype, and an outline of the
demonstration.
Keywords
Annotation; comment; public bulletin boards; community
content; social blogging
INTRODUCTION
palimpsest (n). “A manuscript, typically of papyrus or
parchment that has been written on more than once, with
the earlier writing incompletely erased and often legible.”
The system we propose to demonstrate allows people to
annotate content on interactive, digital bulletin boards
located in public places (Plasma Posters, Figure 1) using PDAs. We envisage this to be a mechanism by which community members can exchange and explore interests and ideas. By publishing such annotations in public places, linked to the content to which they refer, we create a visible “buzz” of “interest clusters”.

In this demonstration description, we first describe our digital community poster boards, and present user opinions related to commenting on and annotating content published on those boards. We then describe our approach to enabling personal and public annotation of digital community content using public and personal devices. We present a scenario, outline our current prototype, and describe our demonstration at UbiComp 2003.

COMMUNITY CONTENT ON PUBLIC DISPLAY
Plasma Posters are large screen, interactive, digital, community bulletin boards that are located in public spaces [1]. Underlying the Plasma Posters is an information storage and distribution infrastructure called the Plasma Poster Network. We have had three Plasma Posters running in our lab for over a year, and two running in sister labs in Japan for 4 months.

Figure 1: Annotating a Plasma Poster posting using a PDA

Unlike digital advertisement boards (e.g. Adspace Network's CoolSign boards), content that is posted to the Plasma Posters is either generated by community members and sent by email, or automatically selected from the company intranet. Content typically consists of URLs, text, images and short movies. A touch-screen overlay on the plasma displays enables interaction with content, including navigation and browsing of posted content and of hyperlinks within that content.

Usage logs, user surveys and interviews have revealed considerable interaction with content at the Plasma Posters, including printing and forwarding of content to oneself and to others from the Plasma Posters themselves [1]. Content authors have also been emailed with comments regarding their postings (e.g. Figure 2). These comments are persistent, conversational threads [3] between readers and authors of posted content. We have also observed the existence of threaded posts (an item that is sent in response to something previously posted). These threads and comments demonstrate the ways by which posted content becomes the nexus of conversation.
Figure 2: A comment on a community posting emailed to the author of that content. The email contains the comment and a URL to the original posting.

Given people's propensity to interact through and around content in this way, we are developing methods that make content annotation a more prominent feature of the Plasma Poster Network and the Plasma Posters themselves. Inspired by instances of PDAs being used for sharing comments in focused collaboration, meeting and educational situations (e.g. [5,10]), we have extended the Plasma Poster Network to support capture of posted content to personal devices such as PDAs, creation of annotations for that content on the PDA (with text, graphics, and audio), and reposting of the annotated content to the system, and thus to the Plasma Posters. There are precedents for assuming people will post personal content on situated, public displays from personal devices. Examples include the Progress Bar's Meshboard in London, UK, where patrons can send images from cell phones [11], and the Appliance Studio's TxtBoard, where SMS text messages can be sent to public displays [13]. To date, however, these technologies do not support inline annotation of existing content. Further, these technologies so far have focused on what has been called “person-to-place” publishing. We wish to extend this notion to “person-to-place-to-people-to-person” content annotation, augmentation and publication.

ANNOTATION
Annotation involves marking of content where the original remains unchanged. Most examples of digital annotation deal with annotating textual documents, but some do include annotation of audio or video content. Most annotations are text-based or ink-based, although some are audio and pictorial.

We characterize annotation systems as falling broadly into 3 categories: 1. annotations for personal use; 2. collaborative annotations; and 3. public/social annotations. In the first category, the goals are typically to support active reading (e.g. [2,4,9]), to help with content retrieval (including summarization, search and classification), for new document retrieval, and for content reuse in composition of new documents. In the second category, collaborative annotation, the goal is usually to point someone else to interesting parts of a document (e.g. including text, video, voicemail, text-chat), as a method of activity coordination, as a method of ongoing note-sharing in a working situation, and for serendipitous sharing. Finally, social or public annotation is less team-directed than collaborative annotation, allowing people to leave comments for others to happen across. In the last case, most are Web-based (e.g. [5,7,10,14]).

Examples of current uses of public annotation can be found in several applications on the World Wide Web. The most common forms include newsgroups and Web-based discussion forums, bulletin boards or “blogs”. Most are designed to be accessed, contributed to and read by lone individuals from PCs. Our design challenges have been to design easy-to-use and appealing methods for such annotation from mobile devices, and to produce interfaces that effectively display those annotations in public fora.

Figure 3: The Plasma Poster interface, the posting represented on a PDA with the commenting facility visible, and the Plasma Poster display with the created annotation visible. The notes along the right edge of the Plasma Poster are all annotations that have been created by community members from their PCs, PDAs, or the “scribble” interface at the Plasma Poster itself.

ANNOTATING COMMUNITY CONTENT: A Scenario
Before detailing the technical aspects of our demonstration, we present a scenario of the system in use.

While listening to a talk on a new shared note taking application, Jane, a conference attendee, overhears someone near her talking about how they have just implanted a tracking device in their dog. She opens her laptop, does a quick Google search on “rfid dogs”, and e-mails the first link she finds to the address of a nearby Plasma Poster, giving the posting the title “Is rover going robo?” Another attendee, Jason, passing by the Plasma Poster in the lobby
nearby, notices the post and wants to add that such tracking devices are highly controversial, as their safety has not been fully proven. He presses the “comment” button on the display and uses the scribble pad to attach an annotation (“not my dog!”) to the display, adding a pointer to a URL for a Web site where the tags are discussed more critically.

Later, another attendee, Jeffrey, who has just been to a talk on ambient displays, sees the same posting. He approaches the display with his PDA, presses the “grab posting” button, and downloads the current posting to his PDA using the wifi connection. After he sees that his PDA has opened a web page showing the content from the posting and the comment left by Jason, he wanders off to another talk, sketching a response along the way.

Later, other attendees gather around the display and begin talking about the post. They read the comment left by Jason and look through the site he recommended, and conversation begins to focus on where exactly they implant …

System architecture diagram: reading/listening interfaces (the PosterShow public display and a personal Web interface on any device) and writing/recording interfaces (email PosterMail clients, Web posting clients, a sketcher applet for PDAs and public displays, and an eVB audio application for PDAs) communicate with the posting infrastructure (content and overview JSPs, the Annotate JSP, and access, posting, and annotation servlets), which is backed by the hosting and distribution infrastructure (poster/metadata repository, personal repository, and annotation database).

… client platform (e.g., large plasma display or personal computer).
The Annotate JSP allows client applications access to the data in the annotation and personal repository databases. Client-side support for annotations on personal devices includes a sketching tool and an audio-recording tool. The sketching tool is implemented as a Java applet and allows users to draw responses to comments. Users first specify a posting to annotate using an interface served by the Annotation JSP. Once a user has selected a posting, the sketch applet allows use of the PDA stylus to input simple annotations. The audio annotation tool is implemented as an embedded Visual Basic application and allows users to record a brief comment using the device's built-in microphone. Comments are uploaded to the Annotation Servlet using the wifi-enabled PDA.

The Annotate JSP provides client-side interfaces for annotations on public displays. The JSP dynamically displays annotation icons next to their associated postings. In this way users may scroll through and open annotations using simple gestures. Users may also sketch annotations on the public display, using a version of the sketching tool for that device. Also on the client side, a Web-based interface allows users to manage their personal content repository. Users can review postings and associated annotations that they have collected from public displays, or store new content to post at a later time. This interface thus gives users working away from the display a way to see annotations to postings in which they have expressed interest.
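For illustration, the upload step from a PDA might look roughly like this; the servlet URL, form fields, and use of the requests library are our assumptions rather than the actual FXPAL interfaces.

```python
import requests

ANNOTATION_SERVLET = "http://plasmaposter.example.com/annotation"  # hypothetical

def upload_sketch_annotation(posting_id, author, png_bytes):
    """Post a sketched annotation for a given posting to the annotation servlet."""
    response = requests.post(
        ANNOTATION_SERVLET,
        data={"posting": posting_id, "author": author, "type": "sketch"},
        files={"payload": ("sketch.png", png_bytes, "image/png")},
        timeout=10,
    )
    response.raise_for_status()
    return response.status_code
```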
DEMONSTRATION FOR UBICOMP
Before the conference, select members of the UbiComp community will be asked to register with our system and to post some content for public display. Four PDAs will be available at the conference itself to enable attendees to interact with posted content.

We will support content posting, capture and annotation from laptops, PDAs, and PCs. We will support, for example, a laptop user in the conference's internet connection area who wishes to post the web site of a nearby restaurant that he enjoys, as well as plate suggestions and other comments. Similarly, we will support users who, for example, take and upload photos of a demonstration in progress. We will support viewing and annotating content both at the public display itself as well as via personal devices. For example, a person using the public display can leave a sketch or audio response to the posted restaurant suggestion. A PDA user, meanwhile, can press a button on her display that captures the content of the posting and all of its annotations to her PDA. She could then walk over to the demo to witness it herself and attach her comments to the posting. We will also support targeted annotation of specific parts of content. For example, a user of a public display may use a gesture to select and attach annotations to a specific region of text.

Instead of posting the annotation directly to the display, the user might want to take another picture and perhaps make comments about both pictures collectively. In this way she has appropriated publicly posted, social content into a novel piece of content that is once again personal. She could then repost this new content to the display to again transfer the content to another domain of ownership. In future work we intend to explore how users conceptualize such transfers of ownership.

REFERENCES
1. Churchill, E.F., Nelson, L. and Denoue, L. Multimedia Fliers: Information Sharing With Digital Community Bulletin Boards. Proc. Communities and Technologies 2003, September 2003, Kluwer Academic Publishers.
2. Denoue, L. and Vignollet, L. An Annotation Tool for Web Browsers and its Applications to Information Retrieval. RIAO 2000, Paris, France, 2000, pp. 180-195.
3. Erickson, T. Persistent Conversation. Introduction to Special Issue of JCMC 4 (4), June 1999.
4. Golovchinsky, G. Emphasis on the Relevant: Free-form Digital Ink as a Mechanism for Relevance Feedback. Proceedings of ACM SIGIR'98, Melbourne, Australia, 1998.
5. Greenberg, S. and Boyle, M. Moving Between Personal Devices and Public Displays. Workshop on Handheld CSCW, CSCW, November 14, 1998.
6. Gronbaek, K., Sloth, L. and Orbaek, P. WebWise: Browser and Proxy Support for Open Hypermedia Structuring Mechanisms on the WWW. International World Wide Web Conference, Toronto, Canada, 1999, pp. 253-267.
7. Hanna, R. Annotation as Social Practice. In S. Barney (Ed.) Annotation and Its Texts. New York, Oxford: Oxford University Press, 1991.
8. IMARKUP, http://www.imarkup.com, 1999.
9. Marshall, C.C., Price, M.N., Golovchinsky, G. and Schilit, B. Collaborating over Portable Reading Appliances. Personal Technologies, vol. 3, no. 1, 1999.
10. Myers, B. A., Stiel, H., and Gargiulo, R. Collaboration Using Multiple PDAs Connected to a PC. In Proc. CSCW '98, ACM Press, pp. 285-294, 1998.
11. Progress Bar Meshboard, http://news.bbc.co.uk/2/hi/technology/2861749.stm.
12. THIRDVOICE, http://www.thirdvoice.com, 1999.
13. Appliance Studio's TxtBoard, http://www.appliancestudio.com/sectors/smartsigns/txtboard.htm.
14. Yee, K.P. The CritLink Mediator, http://www.crit.org/critlink.html.
Platypus Amoeba
Ariel Churi Vivian Lin
319 Manhattan Ave. #3 135 Washington Ave.
Brooklyn, NY 11211 USA Brooklyn, NY 11205 USA
+1 646 382 6522 +1 718 398 0081
[email protected] [email protected]
ABSTRACT
Platypus Amoeba (Platy) is a reactive sculpture. It knows when someone is petting it and it can indicate how it feels. By petting Platy, the user speaks to it; Platy uses lights and sound to speak to the user. This feedback can indicate happiness, sadness or other emotions. Users begin by trying only to initiate a response from Platy, but quickly change to trying to get a happy response. The user is trying to control Platy by petting it in certain ways, but Platy is controlling the user by indicating which way it would like to be petted.

Keywords
virtual pet, interactive sculpture, responsive technology, zoomorphism, human-robot interaction

Figure 2: Final Platypus Amoeba.

INTRODUCTION
Technology is continually being devised to satisfy people's needs. But how does technology change our needs? How willing are we to change our actions and desires based on technology? Platypus Amoeba is an experiment in human/computer interaction. It asks us: what is our relationship to our technology? It is not technology masquerading as a creature, but rather a creature born of technology. Platypus Amoeba entices with the desire for power, as it allows us to cause exciting light patterns and strange noises. But quickly we see the limitations of that power, as certain interactions cause negative or unsatisfactory responses. We then change our behavior to get the desired response. Is the user controlling Platy, or is Platy controlling the user?

INTERACTION
Interaction with the Platypus Amoeba is most effective with one person at a time. The user must pet from front to back to get a vocal response. If the user fails to be consistent with their patterns of petting, Platy may stop glowing or emit a harsh squeal. Platy can react with different light formations. For example, Platy can follow your hand with lights that mirror your action. Afterwards, Platy can get tired, and its lights start to trail the action of your hand over its body. Like the Public Anemone Robot (2), Platy can choose not to interact with the user.

PHYSICAL FORM
The physical form of the Platypus Amoeba morphed from its original concept, a giant caterpillar, to an organic and zoomorphic shape, unidentifiable but familiar. The shape of Platy is round and bulbous. The nubby legs of Platy can be attributed to the original giant caterpillar concept. When one actually touches Platy, its texture is flexible and resilient. There is resistance against your hand when Platy is touched, due to the thickness of the silicone material. Platy's exterior is made from soft, translucent silicone rubber. Platy's mass is derived from the resilience of the Dragon Skin Q silicone rubber, which allows for the density and resistance when touched. Based on human interaction with Platy, the natural response is to squeeze Platy's body and hold one of its legs. Users are fascinated with its texture and tactility, usually stroking Platy until the point where they feel comfortable enough to squeeze its body.
Figure 1: Original concept of Platypus Amoeba.

ZOOMORPHISM
The zoomorphic shape of Platypus Amoeba is attributed to the giant caterpillar, but also to an inconceivable shape not found in our natural environment. The definition of zoomorphic is “having the form of an animal; of, relating to, or being conceived of in animal form or with animal attributes.” Platy's shape lends itself to no particular creature, but its two eyes (see Figure 3) and many feet/nubs make it seem to be some sort of creature. Noises of an unknown creature emanate from it. Users are able to decipher a beckoning purring or sometimes a less friendly or ambivalent response. What social cues determine emotion through sound? How does the user determine if a certain coo or purr emitted from Platy is a positive or negative response?

In cat behavior studies, cats exhibit signals indicating “Don't Pet Me Anymore” aggression, explaining why cats that seem to enjoy being petted suddenly bite (3). In contrast, cats can emit noises that express a desire for attention, which gives humans the desire to pet. With Platy, the user will continue to pet it and receive positive feedback. If the feedback is negative, the user will question where in their actions Platy signaled “Don't Pet Me Anymore” aggression.

INTERFACE
Platy experiences the outside world through sixteen phototransistors. These detect the shadow of the user's hand as he or she pets Platy. Phototransistors were chosen primarily for aesthetic reasons (see Figure 4). Many other sensor options were discarded because they would not be pleasing to the eye. Photoresistors would look bad, force-sensing resistors would be expensive and unattractive, and QPROX sensors would have been nice, as they could be almost completely hidden, but we were unable to get consistent results with them. Platy provides feedback through the sixty-four LEDs which shine through its back, through the color of its eyes, and through various purring and cooing noises from a hidden speaker. The lights can show red, green, and blue, like a TV screen, as well as white. This gives us a full range of color options to work with. Platy's sounds were created by a human voice. They were based on years of living with pets, while trying to keep away from identification with any particular animal.
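The front-to-back petting rule could be checked with logic along these lines, assuming the sixteen phototransistors are indexed 0 (front) to 15 (back); this is our own sketch, not the PIC firmware.

```python
def is_front_to_back(shadowed_indices, min_span=6):
    """Decide whether a stroke moved from Platy's front towards its back.

    `shadowed_indices` is the ordered list of sensor indices darkened by the
    hand; `min_span` is how much of the body the stroke must cover.
    """
    if len(shadowed_indices) < 2:
        return False
    steps = list(zip(shadowed_indices, shadowed_indices[1:]))
    mostly_forward = sum(1 for a, b in steps if b >= a) >= 0.8 * len(steps)
    covered_enough = shadowed_indices[-1] - shadowed_indices[0] >= min_span
    return mostly_forward and covered_enough
```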
… character design (1). The leg/nubs appear underdeveloped. Overall it looks like the baby of a strange alien.

ARCHITECTURE
Platy is self-contained except for the power source. The exterior shape was first sculpted in oil-based plasticine. From this shape we made a three-part plaster mold, into which we poured the uncured silicone. The electronic components sit inside this silicone shell, with only a wire, for power, protruding. The software resides on a PIC16F877 microcontroller, which controls four MAX7219 light controllers and an ISD1416 ChipCorder sound chip (see Figure 5). Also inside are a small speaker and four arrays of sixteen lights, one array for each color (red, blue, green and white), for a total of sixty-four lights. Economy was a consideration in the design, and the total cost of the internal components was under US$200.

RESULTS
User testing with the general public took place at the ITP Spring Show 2003. In general, users were pleased with the tactility and interactivity of Platy. Many initial responses were to try to pick up Platy and/or squeeze the main body, but Platy reacts only to petting. For video of user responses and interactions with Platy, please visit our video link: http://stage.itp.tsoa.nyu.edu/~vl336/Spring_2003/SD/platypus.html

CONCLUSION
Perhaps Platypus Amoeba can be networked with others to create a small army of responsive creatures. With different personalities formed by how each user treats their Platy, perhaps different Platys could interact with each other and with information from a variety of sources. Ideally, Platy would become completely portable and run on batteries. Platy's accessibility and mobility would then make it more ubiquitous and part of the household.

ACKNOWLEDGMENTS
Cindy Yang, who made the Platypus Amoeba silicone body in a three-part plaster mold without ever having attempted such a thing before; Mallory Whitelaw, who helped with dynamic coding; Cindy Jeffers, for all her help; Greg Shakar, for his invaluable technical support; and Tom Igoe, our professor.

REFERENCES
Pictures, video, schematic and instructions on building your own are available at: http://stage.itp.tsoa.nyu.edu/~ac1065/sculptwdatabody.htm
1. AIC/Yoyogi Animation Gakuin, How to Draw Manga: Making Anime, Graphics-Sha, 1996 (Japanese) and 2003 (English).
2. Breazeal, C., The Public Anemone Robot, SIGGRAPH 2002 Conference Abstracts and Applications, MIT Media Lab, Cambridge MA, 2002.
3. Hetts, S., Ph.D., Certified Animal Behaviorist, Explaining Cat Aggression Towards People. Available at http://www.catcaresociety.org/aggression.htm
M-Views: A System for Location-Based Storytelling
ABSTRACT
M-Views is a system for creating and participating in context-sensitive, mobile cinematic narratives. A Map Agent detects participant location in 802.11-enabled space and triggers a location-appropriate video message, which is sent from the server to the participant's “in” box.

Keywords
context-aware systems, participatory media, wireless indoor location awareness, mobile cinema, storytelling

INTRODUCTION
As handheld computing becomes more popular, it will gradually incorporate context-aware features into everyday usage [1] [2]. Information selection will become easier because devices will infer what their users want—even before they pick up a stylus. While location-based marketing and instant messaging seem certain, less attention has been paid to the creative possibilities of context-aware, ubiquitous computing until recently.

• Client-server architecture, allowing multiple clients to connect to a story server, which analyzes their context/location data and sends each client the next piece of its personalized experience
• Scripting language and authoring software [4], giving authors the tools they need to create and test location-based narratives
• Location awareness engine, which uses wireless network signal strength analysis to estimate the location of each handheld client
Mobile Cinema is augmented by physical surroundings and social engagement. As the participant navigates physical space, s/he triggers distinct media elements that often depict events at the location where they appear. The individual media segments are acquired at discrete times and places, with allowances for the serendipitous augmentation of the whole experience through instant messaging (done with the M-Views client). Since any system is only as good as its content, our research has also included the production of three mobile “movies” of this kind, which range from a mystery, to a college drama, to our latest story: an action thriller called 15 Minutes.

M-Views was designed for Mobile Cinema, but its robust features allow it to have other capabilities as well. The platform can be used to support many types of applications.

TECHNOLOGY

M-Views Client
The M-Views client operates on the Windows CE operating system (Pocket PC). Each new event is dropped into a message queue, which is visibly represented as the user's inbox. In addition to the message manager interface shown in Figure 3, the client also features a map viewer/editor tool. This permits users to see their server-calculated positions and those of others. It also allows administrators to calibrate map coordinates using only the standard client. The software is modular and can be augmented for new functionality and sensors. It uses third-party programs (such as Windows Media Player) to play streaming media over the network. When a message arrives with an associated media URL, the streaming media player is launched. The information flow is diagrammed in Figure 4.
Communication
Communication between the client and server is carried out via HTTP POST requests. Using this protocol provides both stability and portability. Every update cycle (approximately once per second), the client transmits authentication information, communication settings, and sensor data to the server, which then validates the information and sends back messages, story events, and location estimates. This communication scheme eliminates the need for a logon/logoff mechanism, and it is very fault-tolerant. If the connection is interrupted (perhaps due to losing wireless network coverage), the client will keep trying to send the last request until a connection is made or the program is terminated. To allow for roaming between wireless networks, the client attempts to reinitialize its wireless network card and DHCP address after any connection timeout or interruption. In practice, it takes about 10-30 seconds to reacquire a new network connection after the previous one has been lost.
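The update cycle amounts to a polling loop with retry. The sketch below captures that behaviour in Python; the real client is a Windows CE application, and the URL, field names, and use of the requests library here are invented for illustration.

```python
import time
import requests

SERVER_URL = "http://mviews.example.edu/update"  # hypothetical endpoint

def run_update_cycle(auth, settings, read_sensors, handle_reply):
    """Send one state report per second, retrying the last request on failure."""
    while True:
        payload = {"auth": auth, "settings": settings, "sensors": read_sensors()}
        while True:
            try:
                reply = requests.post(SERVER_URL, data=payload, timeout=5)
                reply.raise_for_status()
                break                 # delivered; stop retrying this request
            except requests.RequestException:
                time.sleep(1)         # coverage lost; keep resending the same data
        handle_reply(reply)           # messages, story events, location estimates
        time.sleep(1)                 # roughly one update cycle per second
```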
M-Views Server
The M-Views server is written in Java and runs as a servlet with the appropriate container software, such as Apache Tomcat. After initialization, the server maintains all story, message, and user information as memory-resident XML data. XML management is done using the Apache Xerces 2 package.

The server features a messaging framework that is specifically designed to support narrative structures but flexible enough to be used for a full range of applications. Under this framework, all messages—whether they are client-to-client instant messages or events encountered in a location-based story—are processed using the same mechanism. All messages and events are stored in either a story script or the general message forum (to which all users are subscribed and where client-to-client messages are created). Additionally, all messages, even those sent by clients, can be made context-dependent and can have associated media URLs. These features, coupled with familiar functionality (i.e. message forwarding and group mailing), allow for an intuitive, robust, context-aware messaging experience.

Scripting
… content, and an associated Media URL. The scripting system is used to specify story behavior based on user activity, and each event element contains user variable requirements and results. If current variable values (maintained in the account data of each user) meet event requirements, the event is considered encountered, and the user's variables are changed according to any update rules that may also be defined for that event.

Location Awareness
MapAgent is the default location awareness engine written for M-Views. M-Views clients monitor the Received Signal Strength Indicator (RSSI) for all 802.11 wireless access points in range. These measurements are averaged over a small time window and transmitted to instances of MapAgent running on the server. For each subscribed map, the associated MapAgent compares the RSSI averages to measurements recorded previously by an administrator at known locations, which are called hotspots. Hotspots have a threshold, and they are represented on the map with translucent circles, as in Figure 6. The MapAgent algorithm uses a combination of nearest neighbor matching, triangulation, and trajectory estimation to determine client locations. The average accuracy is between 1 and 5 meters, depending on the environment, map resolution and calibration layout. It functions both indoors and outdoors. MapAgent also keeps track of all clients currently appearing on the map, allowing applications to incorporate a location-based social component.
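A minimal sketch of just the nearest-neighbour part of such matching (leaving out the triangulation and trajectory estimation the system also uses) might look as follows; the data layout is our assumption.

```python
def nearest_hotspot(observed, hotspots, floor=-95.0):
    """Match averaged RSSI readings to the closest calibrated hotspot.

    `observed` maps access-point IDs to averaged signal strengths (dBm);
    `hotspots` maps hotspot names to calibration fingerprints of the same
    form. Access points missing from either side are treated as very weak.
    """
    def distance(fingerprint):
        aps = set(observed) | set(fingerprint)
        return sum((observed.get(ap, floor) - fingerprint.get(ap, floor)) ** 2
                   for ap in aps) ** 0.5

    return min(hotspots, key=lambda name: distance(hotspots[name]))

# Two calibrated hotspots and one live reading:
calibration = {"lobby": {"ap1": -40, "ap2": -70}, "cafe": {"ap1": -75, "ap2": -45}}
print(nearest_hotspot({"ap1": -42, "ap2": -68}, calibration))  # -> "lobby"
```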
STORY DESIGN
The need for good content has prompted the creation of numerous M-Views stories. These have included two large productions by students at the MIT Media Lab: a campus-wide mystery (designed as a time-dependent scavenger hunt) and a dramatic tour through the lives of students at MIT. Each production stressed different aspects of Mobile Cinema—in particular, nonlinearity and the connection with space.

Nonlinearity refers to the modularity of story clips. Authors must accept the possibility that clips will be seen at odd times or in strange orders. Therefore, the story and each clip that composes it must be able to withstand these uncertainties. M-Views authors have discovered that every clip should be entertaining independent of the other story material; each scene must have its own miniature “story arc.”

Connecting with space is essential to the mobile experience. The small screen of a handheld device is a disadvantage in this regard. Therefore, it is up to the author to anticipate the interest and curiosity of the user. Carefully planned cinematography is the key here. Authors of Mobile Cinema have learned to give their audience spatial awareness and dramatic focus through the use of motion, extreme close-ups, and wide establishing shots.

SIGNIFICANCE
Previous context-aware mobile media systems, such as the Cyberguide system [5], the Guide system [6] and the Hippie project [7], are all aimed at providing location-based experiences for visitors, city travelers, or museum tourists. All these systems adopt client-server architectures similar to M-Views, but differ in that they do not focus on the narrative aspect. In addition, few systems are full development platforms for mobile applications. None of this past research has focused on the development of cinematic narrative, and little effort has been made to purposely support multiple kinds of mobile applications using these architectures.

M-Views breaks new ground by giving people the chance to author and experience Mobile Cinema with unlimited freedom—use it to create your desired type of mobile movie or game, or build your own context-aware application. Or simply write about your own life using the space around you as your medium.

ACKNOWLEDGEMENTS
The authors wish to acknowledge the contribution of all project team members: Carly Kastner, Lilly Kam, Debora Lui, Chris Toepel, and Dan Bersak. Special thanks goes to our Interactive Cinema colleagues: Barbara Barry, Paul Nemirovsky, Aisling Kelliher, and Ali Mazalek. We also thank Prof. Alan Brody, Prof. Bill Mitchell, Prof. Donald Sadoway, and Prof. Ted Selker for excellent acting, advice, and support. We gratefully acknowledge Thomas Gardos from Intel, Taka Sueyoshi from Sony, Steve Whittaker from BT, and Franklin Reynolds from Nokia for their kindness and support. This work is supported in part by grants from the MIT Media Lab's Digital Life Consortia and the Council for the Arts at MIT.

REFERENCES
[1]. G. Chen and D. Kotz, "A Survey of Context-Aware Mobile Computing Research," Technical Report TR2000-381, Dept. of Computer Science, Dartmouth College, November 2000.
[2]. J. Hightower and G. Borriello, "Location Systems for Ubiquitous Computing," Computer, special issue on location-aware computing, vol. 34, no. 8, pp. 57-66, 2001.
[3]. http://newhome.weblogs.com/historyOfWeblogs
[4]. Pengkai Pan, Carly Kastner, David Crow, and Glorianna Davenport, "M-Studio: an Authoring Application for Context-Aware Multimedia," ACM Multimedia 2002, Juan-les-Pins, France, 2002.
[5]. Gregory D. Abowd, Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper, and Mike Pinkerton, "Cyberguide: a Mobile Context-Aware Tour Guide," Wireless Networks, 3(5):421-433, October 1997.
[6]. Cheverst, K., N. Davies, K. Mitchell, A. Friday, and C. Efstratiou, "Developing a Context-Aware Electronic Tourist Guide: Some Issues and Experiences," Proc. of CHI 2000, Netherlands, pp. 17-22, April 2000.
[7]. Reinhard, M. Specht, and I. Jaceniak, "Hippie: A Nomadic Information System," In Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing (HUC '99), pp. 330-333.
Stanford Interactive Workspaces Project
Armando Fox, Terry Winograd, and the Stanford Interactive Workspaces group
Computer Science Department, Stanford University
Stanford, CA 94305
+1 650 723 9558
{fox,winograd}@cs.stanford.edu
iROS: Interactive Workspace Middleware
The dynamism and heterogeneity in ubiquitous computing environments on both short and long time scales implies that middleware platforms for these environments need to be designed from the ground up for portability, extensibility and robustness. We have developed the iROS (iRoom Operating System) middleware platform for augmented room-sized ubicomp environments through the use of three guiding principles: economy of mechanism, client simplicity, and use of levels of indirection. Apart from theoretical arguments and experimental results, our experience through several deployments with a variety of applications, in most cases not written by the original designers of the system, provides some validation in practice that the design decisions have in fact resulted in the intended portability, extensibility and robustness [6,4]. An important lesson drawn from our experience so far is that a logically-centralized design and physically-centralized implementation enables the best behavior in terms of extensibility and portability along with ease of administration, and sufficient behavior in terms of scalability and robustness.

The fundamental design stance of iROS is that a major challenge of ubicomp middleware is design for integration. We will inevitably continue to encounter situations in which the goal is to "integrate" a new behavior, controller, or service into an existing environment not designed to accommodate it; therefore the design goal of all our middleware is to make the integration task as easy as possible. This is reflected at the lowest layer in the iROS EventHeap [7], at the application/UI integration layer by iStuff [1] and the Patch Panel [2], and at the UI generation layer by Interface Crafter [10].
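The EventHeap [7] is a tuplespace-style coordination layer. The following toy sketch, written for this overview rather than taken from iROS, is meant only to convey the flavor of that integration style: clients subscribe with a template, and any other client can post a matching event, here a hypothetical Multibrowsing-style "show this URL on that display" event. The class, method, and field names below are invented for illustration and are not the iROS API.

    import queue

    class ToyEventHeap:
        def __init__(self):
            self.subscribers = []  # list of (template, queue) pairs

        def subscribe(self, template):
            q = queue.Queue()
            self.subscribers.append((template, q))
            return q

        def post(self, event):
            # Deliver the event to every subscriber whose template fields all match.
            for template, q in self.subscribers:
                if all(event.get(k) == v for k, v in template.items()):
                    q.put(event)

    heap = ToyEventHeap()
    # A display client waits for events addressed to it...
    inbox = heap.subscribe({"type": "MultibrowseEvent", "target": "front-wall"})
    # ...and any other client can move content there by posting a matching event.
    heap.post({"type": "MultibrowseEvent", "target": "front-wall",
               "url": "http://iwork.stanford.edu"})
    print(inbox.get()["url"])

Decoupling producers and consumers through matching on posted events, rather than through direct calls, is what makes it possible to integrate new controllers and services into an environment that was not designed for them.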
Interactive Workspace Applications and Technologies
iROS has been the basis of numerous technologies and applications, many in regular use in the Stanford iRoom and other interactive workspaces. Although each is the subject of one or more refereed publications, we try to give a sense of the breadth of work that has been enabled by this platform:

The Workspace Navigator is a suite of tools designed to facilitate capture, recall and reuse of information in an interactive environment.

iStuff [1] is a framework for prototyping physical UIs by building inexpensive physical devices and integrating them rapidly with existing iRoom behaviors and applications. The Patch Panel [2] provides a generic and easy-to-use software mechanism to intermediate between iStuff and existing applications, suitable for a range of sophistication from non-programmers to power users.

The AmbienTable explores issues involving the use of tables as ambient displays. The table displays several visualizations relevant to iRoom users, including the status of the room's EventHeap, recent activity in the room, and the contents of the iClipboard. For example, the EventHeap Visualization provides awareness of the flow of information between machines within our environment, and has also been used to identify bugs and breakdowns in the system.

iWall (Interactive Wall) is a software framework for easing the development and deployment of multi-display post-desktop applications for ubiquitous computing environments. Multiple general-purpose graphical "views" run on several devices of varying capabilities and platforms and are controlled by applications through a simple but powerful EventHeap-based protocol. iWall interacts smoothly with other iROS-based technologies such as iStuff: for example, a user can play iPong (a table-tennis game spanning multiple displays, using iWall as a substrate) using iStuff and the PatchPanel to select among multiple physical "paddle controllers" as they play.

The iClipboard, PointRight [8], and Multibrowsing [9] together provide the essential mechanisms necessary to easily move data (Web pages and documents) back and forth between users' personal devices and large shared displays. PointRight allows a single mouse and keyboard to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen, and keyboard control is simultaneously redirected to that machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. Multibrowsing allows any pointer to direct the movement of content from any display to any other display. The iClipboard allows cutting-and-pasting of content across machines (shared or personal, Windows or Mac), and integrates with PointRight to do the right thing (a user can cut something on one screen, use PointRight to have the same mouse move the pointer onto another screen of a different type of machine, then Paste).

Although most iRooms deployed to date are fixed facilities built into infrastructure-rich rooms, we have also explored encapsulating much of the functionality of the iRoom in an "appliance" that also addresses the needs of roaming/nomadic users. The MeetingMachine [5] provides a substantial amount of the functionality of an iRoom in a projector-like appliance, and can be immediately deployed in a facility with no other infrastructure. The MeetingMachine's design decisions reflect important differences that arise when trying to accommodate nomadic as well as "fixed" users in interactive workspaces.

DEMONSTRATION SCHEDULE
We will have three kinds of demonstrations at Ubicomp'03:

"Let us show you" demos will consist of guided demonstrations highlighting Interactive Workspace applications and technologies, including the Workspace Navigator, iStuff for physical UI prototyping, the Patch Panel for incremental integration and reconfiguration, and the AmbienTable for visualizing interactive workspace activity. Not all applications will be demonstrated during the entire demo session; please visit our booth for the detailed schedule.

"Exploratorium" style demos¹ will provide conference attendees the opportunity to freely interact with iRoom demos as they wish; researchers will be available to explain "what's going on" in each demo, and posters adjacent to the demo booth will give further details.

"Try it, you'll like it" will allow conference attendees with Windows XP laptops to try the MeetingMachine for themselves. Whereas the demo iRoom is intended to simulate the permanent iRoom at Stanford, the MeetingMachine provides a substantial amount of the functionality of an iRoom in a projector-like appliance, a kind of "iRoom-in-a-Box" that can be immediately deployed at a meeting or brainstorming session even if there is no network infrastructure in place. The client-side Windows software for the MeetingMachine will be available on CD-ROMs and USB Flash/CompactFlash drives for attendees to download immediately to their laptops. In addition the MeetingMachine will support media transfer via USB, CompactFlash, and RFID tags.

SUMMARY
iROS and its associated applications have been successfully used in a number of experimental and production scenarios, including design brainstorming sessions by professional designers, construction of class projects built on the iROS system, training sessions for secondary school principals, construction management, collaborative writing in a Stanford English course, and of course, our own weekly group meetings. iROS technology has also been deployed in two classrooms in Stanford's new Wallenberg Hall, and Multibrowse and PointRight have been readily adopted by instructors of courses ranging from engineering to Classics. Overall results have been positive, with many suggestions for further development and improvement; public deployments of iRooms for student use in libraries and dormitories are in the planning stage, using the MeetingMachine appliance [5] for rapid deployment. We have been encouraged by comments from programmers who have appreciated how easy it is to develop applications with our framework. Finally, the adoption and spread of our technology to other research groups also suggests that our system is meeting the needs of the growing community of developers for interactive workspaces.

For more information and publications on the Interactive Workspaces project, to see photos and videos of the iRoom, or to download the iROS software (including easy installers for Windows NT/2000/XP), visit http://iwork.stanford.edu.

¹ Inspired by the "do, then read" hands-on exhibits at the Exploratorium museum in San Francisco, for those readers who have visited it.

REFERENCES
1. Ballagas, R., Ringel, M., Stone, M., and Borchers, J. iStuff: A Physical User Interface Toolkit for Ubiquitous Computing Environments. In Proc. Intl. Conf. on Computer/Human Interaction (CHI) 2003 (to appear).
2. Ballagas, R., Szybalski, A., and Fox, A. The Patch Panel: Enabling Control-Flow Interoperability in Ubicomp Environments. Submitted for publication.
3. Johanson, B., Fox, A., and Winograd, T. The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. IEEE Pervasive Computing Magazine 1(2), April-June 2002.
4. Ponnekanti, S., Johanson, B., Kiciman, E., and Fox, A. Portability, Extensibility and Robustness in iROS. Proc. IEEE International Conference on Pervasive Computing and Communications (Percom 2003), Dallas-Fort Worth, TX, March 2003.
5. Barton, J., Hsieh, T., Vikram, V., Shimizu, T., Johanson, B., and Fox, A. The MeetingMachine: Interactive Workspace Support for Nomadic Users. Proc. Fifth IEEE Workshop on Mobile Comp. Sys. and Apps. (WMCSA) 2003, Monterey, CA, October 2003.
6. Johanson, B., and Fox, A. Tuplespace-based Coordination Infrastructures for Interactive Workspaces. Journal of Software and Systems (JSS), to appear in 2003.
7. Johanson, B., and Fox, A. The Event Heap: A Coordination Infrastructure for Interactive Workspaces. In Proc. Fourth IEEE Workshop on Mobile Computing Systems and Applications (WMCSA 2002), Callicoon, NY, June 2002.
8. Johanson, B., Hutchings, G., Winograd, T., and Stone, M. PointRight: Experience with Flexible Input Redirection in Interactive Workspaces. In Proc. Symp. on User Interface Sys. and Tech. (UIST) 2002.
9. Johanson, B., Ponnekanti, R., Sengupta, C., and Fox, A. Multibrowsing: Moving Web Content across Multiple Displays. Proc. Intl. Conf. on Ubiquitous Computing (UBICOMP) 2001.
10. Ponnekanti, S., Lee, B., Fox, A., Hanrahan, P., and Winograd, T. ICrafter: A Service Framework for Ubiquitous Computing Environments. In Proc. Intl. Conf. on Ubiquitous Computing (UBICOMP), 2001.
Picture of Health: Photography Use in Diabetes Self-Care
Jeana Frost
The Media Laboratory
MIT
Cambridge, MA 02139
USA
[email protected]

Brian K Smith
School of Information Sciences and Technology
College of Education
The Pennsylvania State University
University Park, PA 16802
USA
[email protected]
capture and examine diet, exercise, and other routines to understand the connection between behavior and blood sugar measurements.

In this project, we outline and test the concept of a behavior meter. This meter includes a novel data format to supplement the blood glucose readings: photographs. These photographs offer literal portraits of events in a person's life. By juxtaposing these images with blood sugar values, a diabetic and health care providers can begin to understand the intersection of behavior and blood sugar control for both creating a treatment plan and critiquing its efficacy. Images in this capacity function as data for reflection and review. Educational researchers have studied how learners can analyze behaviors depicted in still and/or moving images, generate and test hypotheses about how and why these behaviors occur to develop deeper understandings of various concepts [1-3,5,6,10,11,14,15]. In the area of healthcare, asthma patients were asked to videotape their daily routines [9]. These results showed inconsistencies between the amount of allergens patients reported exposure to and those captured on video. This work suggests that people cast their activities in a "healthy" light even while enacting unhealthy practices. We think diabetics may act similarly, explaining a healthy lifestyle to medical professionals while a closer examination of their daily activities might reveal examples of unhealthy routines.

The remainder of the paper describes a series of design studies that introduced photography into diabetes self-monitoring practices. We report on a project done in a class on diabetes self-care, where newly diagnosed diabetics often go to learn how to cope with their disease. We introduced photography into those courses to help students connect lecture materials to their daily lives as captured on film. And, we describe a computer-based visualization tool designed to help diabetics see relationships between their blood sugar measurements and photographs of their daily routines.

CLASSROOM STUDY
Background
We began our inquiry through observing a course on diabetes self-management held in a local hospital to understand class procedure and curricula requirements. The class ran for ten weeks and met once a week for an hour. In this class, diabetics learn about self-care practices such as eating the right number of portions of a particular food group and caring for feet and eyes that often suffer complications. A different specialist teaches each session in his or her own teaching style. Generally, these are hour-long lectures in which attendees listen passively. About 10 people came to each session with 3 or 4 missing any particular meeting.

Intervention
For the next course, we worked with the nurse practitioner in charge to introduce photography into appropriate sections of the class. We prototyped the project using disposable cameras distributed to the class attendees. We asked people to take pictures of meals, exercising and social events, anything that people thought might impact blood sugar. People took the cameras home between sessions to take pictures and returned them for processing at a subsequent class.

Results
Diabetics shared these images with other people in the class. These pictures served as focal points of health discussions. Instead of talking about subjects in the abstract, the students discussed specific situations from people's experience. For example, people used the language of the classroom such as ideal portion size to discuss meals pictured like the one in Figure 1. In addition, the nutritionists and nurse practitioners were allowed "into" the homes of these patients through these pictures. They saw what their patients regularly ate, what is in the refrigerator, where they walk and generally what their lives are like. This information changed the level of specificity with which health care providers discussed problem solving with patients.

Figure 1: A photograph of one student's dinner. This image prompted discussions about portion sizes, food preparation, and balanced nutrition.

Consistent with Rich et al.'s work with asthmatics, diabetics seemed to report their behaviors in a positive light even while engaging in unhealthy behaviors. In Figure 2, the photographer reported taking the picture to show the syringes and healthy foods. Other students asked about the soda with sugar and the beer. Through the social interaction of the class, inconsistencies between the patient's view of a situation and the health conscious outsider's view came to light. These discrepancies fueled discussion.
Instances where very high readings co-occurred with images such as in Figure 4. Such examples from the data set motivated discussion. Additionally, Dan did experiments using this tool. For example, he "tested" whether dancing would lower his blood sugar by taking pictures during a party. Yet, without social support such as was available in the diabetes classroom, Dan did not seem to question his previously held beliefs about his health. The data visualization with blood sugar data and images did allow for exploration and reflection of personal health. But, while Dan generated explanations for events, he did not change his underlying theories about his own self-care. Meeting with a health care provider or other diabetics may be critical in utilizing these image technologies for improving self-care.

GENERAL DISCUSSION
Imaging technology is becoming more cheaply available, and increasingly ubiquitous. Generally, health care records are composed of physiological data that reflects the state of an individual's health versus the causes of a particular condition. Image technology allows for new types of health records for personal reflection and sharing with health care providers. In these projects, we have explored how such a visual record of behavior could be made and the utility of such a record in both a classroom setting and on an individual basis.

FUTURE WORK
We are currently testing the value of data collection in implementing behavioral changes. To do so we have enlisted a group of college-aged diabetics at Pennsylvania State University. They have used our software to synchronize blood sugar and image data and have discussed these data with researchers. Currently, we are analyzing results from this study.
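The synchronization step mentioned above amounts to aligning timestamped glucose readings with timestamped photographs. As a rough illustration only, and not the authors' software, the pairing could be done as in the sketch below; the timestamps, readings, and file names are invented.

    from datetime import datetime, timedelta

    readings = [(datetime(2003, 5, 1, 8, 0), 210),     # (time, mg/dL) - hypothetical values
                (datetime(2003, 5, 1, 13, 30), 95)]
    photos = [(datetime(2003, 5, 1, 7, 45), "breakfast.jpg"),
              (datetime(2003, 5, 1, 13, 0), "walk.jpg")]

    def nearest_photo(ts, photos, window=timedelta(hours=2)):
        # Return the photo closest in time to the reading, if one falls within the window.
        best_time, best_file = min(photos, key=lambda p: abs(p[0] - ts))
        return best_file if abs(best_time - ts) <= window else None

    for ts, mgdl in readings:
        print(ts, mgdl, nearest_photo(ts, photos))

Juxtaposing each reading with its nearest photograph is what lets a viewer ask why a particular value was high or low at a particular moment.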
Acknowledgements
We would like to thank Shelley Leaf, R.N. for allowing us to observe and introduce our photographic experience into her diabetes education courses. We also thank Dr. Hector Sobrino for medical consulting and assistance with experimental design. This research was sponsored by an NSF CAREER award (NSF REC-9984773) granted to the second author and the MIT Media Laboratory's Information: Organized consortium.

References
1. Bransford, J.D., Sherwood, R.D., Hasselbring, T.S., Kinzer, C.K., & Williams, S.M. (1990). Anchored instruction: Why we need it and how technology can help. In D. Nix & R. Spiro (Eds.), Cognition, Education, and Multimedia: Exploring Ideas in High Technology (pp. 115-141). Hillsdale, NJ: Lawrence Erlbaum Associates.
2. Cappo, M. & Darling, K. (1996). Measurement in Motion. Communications of the ACM, 39(8): 91-93.
3. Collins, A. & Brown, J.S. (1988). The computer as a tool for learning through reflection. In H. Mandl & A. Lesgold (Eds.), Learning Issues for Intelligent Tutoring Systems (pp. 1-18). New York: Springer-Verlag.
4. Committee on Health and Behavior. (2001). Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences. Washington, DC: National Academy Press.
5. Goldman-Segall, R. (1997). Points of Viewing Children's Thinking: A Digital Ethnographer's Journey. Mahwah, NJ: Lawrence Erlbaum Associates.
6. Gross, M.M. (1998). Analysis of human movement using digital video. Journal of Educational Multimedia and Hypermedia, 7(4): 375-395.
7. National Institute of Diabetes & Digestive & Kidney Diseases. Diabetes Overview. 1998, National Institutes of Health: Bethesda, MD.
8. Norris, S.L., Engelgau, M.M., & Narayan, K.M.V. (2001). Effectiveness of self-management training in Type 2 diabetes. Diabetes Care, 24: 561-587.
9. Rich, M., Lamola, S., Amorty, C., & Schneider, L. (2000). Asthma in life context: Video intervention/prevention assessment (VIA). Pediatrics, 105(3): 469-477.
10. Rubin, A., Bresnahan, S., & Ducas, T. (1996). Cartwheeling through CamMotion. Communications of the ACM, 39(8): 84-85.
11. Rubin, A. & Win, D. (1994). Studying motion with KidVid: A data collection and analysis tool for digitized video. In Conference Companion to CHI '94 (pp. 13-14). New York: ACM Press.
12. Schiffrin, A. & Belmonte, M. (1982). Multiple daily self-glucose monitoring: Its essential role in long term glucose control in insulin-dependent diabetic patients treated with pump and multiple subcutaneous injections. Diabetes Care, 5(5): 479-484.
13. Smith, B.K. (2002). You prick your finger, we do the rest: Glucose meter evolution. User Experience: The Magazine of the Usability Professionals' Association, 3: 31-34.
14. Smith, B.K., Blankinship, E., Ashford III, A., Baker, M., & Hirzel, T. (1999). Inquiry with imagery: Historical archive retrieval with digital cameras. In ACM Multimedia 99 Proceedings (pp. 405-408). New York: ACM Press.
15. Smith, B.K. & Reiser, B.J. (1998). National Geographic unplugged: Classroom-centered design of interactive nature films. In Proceedings of the CHI 98 Conference on Human Factors in Computing Systems (pp. 424-431). New York: ACM Press.
Noderunner
Yury Gitman
Wireless Artist
250 E. Houston St. apt PHB
New York, NY 10008 USA
+1 646 263 5554
[email protected]

Carlos J. Gomez de Llarena
Media Architect
17 W 54th St. Apt. 8-D
New York, NY 10019
+1 212 765 4364
[email protected]
Each four-person team was given a WiFi enabled laptop, a digital camera, taxi fare, and two hours to get from Bryant Park in midtown to Bowling Green in Lower Manhattan, both free wireless parks. Teams earned points by taking their portraits in the exact spots where they were able to connect to wireless access points. They also earned points by using scanning software to sniff all the nodes along the way, even those that were password protected or too weak to transfer pictures. The teams collected logs recording hundreds of closed or weak nodes, but scored more points when they were actually able to use a node to upload a picture.

The simple rule set forced players to develop strategies for planning the most rewarding routes within the city. For example, the East Village was a popular route destination because it offered a large number of open nodes. Participants also needed technical savvy to troubleshoot connection problems and upload pictures despite fragile connections. Spending too much time on a weak node could have been the difference between winning and losing, so teams moved quickly through the city with a

Noderunner sessions highlight overlaps between information and the urban environment, encouraging the use of public spaces for creative endeavors. As wireless access becomes more prevalent in our cities, this paradigm offers new opportunities for applications that treat public space as an interface. This work draws on spatially based games like tag, scavenger hunts, and hide-and-go-seek, as well as graffiti art, skateboarding, and urban bicycling that characterize cities like New York. Recently, new technologies have expanded the scope of these activities, spawning a diverse community of artists, entrepreneurs and activists developing location-based models for social movements, advertising, urban services and pervasive gaming [2]. Instead of making our video games look more realistic, we now have the ability to turn our reality into a video game, a city's infrastructure into a play space. Our cities are becoming game engines and software, as citizens collectively program, code, or update the place where they live.
This diverse collective action means that even in the same city, like New York, Noderunner's playing field is in constant flux as WiFi continues to proliferate. At first glance this would appear to make Noderunner easier, but as WiFi spreads, new legislation, use patterns, and technologies emerge. Will new security measures limit open access despite an increase in nodes and improvements in transmission distance? Played over time, Noderunner games help answer these questions by providing empirical data about our culture's adoption of wireless technology. Noderunner is in itself an exemplar of an emerging culture. A culture where smart and wireless environments are as much an object of play as is a grass field or an open lake.

THE ART AND CULTURE OF OPEN WIRELESS
The open wireless movement is being built by the end-users, one node at a time. Drawing on the original spirit of the Internet, WiFi enthusiasts embrace open standards, peer-to-peer dynamics, and user-centered innovation. As artists, we combined game design with the existing culture of the open wireless movement. Instead of creating an artificial game environment, we tapped into the revolution that was already happening around us. Our goal was not just to contribute to a new genre of public art, but also to actively engage the general public in a vital cultural and technological transformation. Node Runner is continually re-invented by the citizens who build the network and run the streets. The game is an entrance point to the political and social movements behind wireless. We offer Node Runner as a celebration of free and open wireless connectivity and as a symbol of the city's cultural flexibility and potency.

ACKNOWLEDGMENTS
Noderunner was created by Yury Gitman and Carlos J. Gomez de Llarena for an exhibition called We Love NY: Mapping Manhattan with Artists and Activists (www.eyebeam.org/ny). The exhibition was produced by Eyebeam, a new media arts organization, and curated by Jonah Peretti and Cat Mazza. Like other Eyebeam R&D projects, Noderunner is a form of empirical research and political engagement as well as an art project. Noderunner was developed in collaboration with New York City Wireless (www.nycwireless.net), a non-profit organization dedicated to providing free wireless Internet, and supported by a grant from the New York State Council of the Arts. Thanks to Jonah Peretti for helping with the editing of this paper.

REFERENCES
1. Wired/Unwired: The Urban Geography of Digital Network. Available at http://www.mit.edu/~amt/.
2. SmartMobs Website. Available at: http://www.smartmobs.com.
UCSD ActiveCampus - Mobile Wireless Technology for
Community-Oriented Ubiquitous Computing
William G. Griswold+, Neil G. Alldrin+, Robert Boyer+, Steven W. Brown+, Timothy J. Foley+, Charles P. Lucas+, Neil J. McCurdy+, R. Benjamin Shapirox

+ Department of Computer Science & Engineering, University of California San Diego, La Jolla, CA 92093-0114
{wgg,nalldrin,rboyer,tfoley,cplucas,nemccurd,[email protected]

x Department of Learning Sciences, Northwestern University
[email protected]
Figure 1: The Map and Buddies services. The Map service shows a map of the user's vicinity, with buddies, sites, and activities overlaid as links at their location. The Buddies service shows colleagues and their locations, organized by their proximity. Icons to the left of a buddy's name are buttons that show the buddy on the map, send the buddy a message, and look at graffiti tagged on the buddy. Other services are reached by the navigation bar or clicking items embedded in the views.

in stages to our base of Jornada users for beta testing and structured events such as games. As part of a broader project using the campus as a living laboratory, researchers in the department of Communication are conducting ethnographies to understand ActiveCampus's impact on campus life.

In the following we first identify a set of sociological issues and place them in a conceptual framework that clarifies how technology can contribute. Second, we define a base set of services necessary to sustain a community through mobile computing. Third, we demonstrate these services with a particular design and implementation suitable for small form-factor wireless devices.

Theory and Requirements
Learning activities, spontaneous and otherwise, are heavily mediated by a university campus through its structural configuration and its institutions.¹ First, the campus organization itself brings people with complementary interests into close proximity, easing communication and increasing the chances of serendipitous interactions. The campus not only brings learners and teachers together, but also concentrates area specialists by organizing the campus into schools and departments of expertise (such as schools of Engineering and departments of Computer Science). A department is not just an aggregation of interest, but is a full-blown institution providing services for its aggregate of people, including working spaces, meeting spaces, seminars, opportunities for chance interaction, equipment, curricula, degree programs, funding, etc., to enable and encourage the processes of learning.

¹ Here, we interpret institution broadly, including entities like departments, libraries, seminar series, and even people. The notions of mediated learning described herein are informed by the work of Michael Cole [2].

Because these institutions operate through proximity, they function less well when people are not "there" on a full-time or full-attention basis. Moreover, it can take considerable time for someone to internalize the workings—the culture—of an institution. If someone does not know the internal workings of an institution (for example, how talks are scheduled and where they normally occur) its mediating power is lost on them, and indeed possibilities are disguised (when it is possible to drop in for a talk). When such obfuscation is combined with a busy schedule, conflicting priorities, distractions, interruptions (most UCSD undergraduates possess cell phones), it is not surprising that many opportunities are missed. Further complicating matters is that many campus institutional structures crosscut each other, creating ambiguity, but also richness. For example, UCSD is divided into residential college neighborhoods. Each department sits in a college neighborhood and is nuanced by it, yet it actually belongs to a school, not the college. Each faculty member belongs to a college, however, and of course a department.

We hypothesize that mobile computing applications, by mediating the institutional mediation of learning, can accelerate one's on-going acclimation process, thereby mitigating time and attention deficit. In such a role, ActiveCampus is not a replacement or proxy for extant institutions, but rather a facilitator. Such a role befits mobile devices, given (on the negative side) their limited form factor, interface, and computing power, as well as (on the positive side) their mobility and relative unobtrusiveness.

Building on the idea that a campus organizes institutions for mediating learning, it is natural to consider reifying (displaying) contextual information about (a) you (the learner), (b) mediating institutions, and (c) the sources of learning enabled by those institutions such as a professor, friend, book, event, or another institution like a lab. Since a campus institution is typically a physically aggregated entity, displaying an institution in a transparent form and showing its mediated sources of learning "inside" it (or even next to it) is a natural way to convey mediating relationships. Depending on the
possible relationships between the learner and the learning source (including role reversal), participants may need the ability to talk—as well as see—through walls. Gradually, then, through experience, a participant learns to parametrically associate the institution with learning sources, imbuing the institution with its full power.

There are many research efforts on augmenting the physical world with information from virtual spaces, albeit without an explicit focus on communities, culture, and learning. At AT&T Research Cambridge, users wear goggles which overlay information to enhance their knowledge of what they are already seeing [9]. GUIDE [3], CyberGuide [7], Hippie [10], and a host of other electronic tour guides provide information for the user about the local surroundings using a mapping metaphor to abstract the world, making physical boundaries transparent, and thereby expanding the horizons of the user. These interfaces typically include links to allow the user to drill down for more information. HP's Cooltown creates a web presence for people, places, and things to support users as they go about their everyday tasks [11]. IR beacons, RFID tags, and bar codes identify entities in the environment.

ACTIVECAMPUS SCENARIO
Sarah, a UCSD computer engineering sophomore who transferred from Mesa Community College last quarter, walks out of her morning Engineering 53 lecture, introduction to electrical engineering. This isn't what I signed up for, she's thinking, wondering where was the engineering her Dad had told her about—building things that improved people's lives? Flipping open her PDA, ActiveCampus shows a map of her vicinity, and she sees a link to a talk with "human" in the title (Figure 1, left).² Clicking through, she sees there's a talk just starting in the engineering building on the human-machine interface. Curious, she decides to go. Although the talk gets technical quickly, the introduction has shown her a link between people and computer engineering.

Realizing she's hungry, Sarah heads to the Price Center for lunch. Her usual table of friends is probably gone by now. Really wanting to talk to someone about adjusting to UCSD, she checks ActiveCampus (Figure 1, right) and sees that her "buddy" Brad is nearby and active (both location and message icon highlighted in blue), clicks on him and sends him a "Wanna go eat?" with a couple of clicks. Brad notices the "dome" on his PDA flashing,³ and flips it open to see that Sarah has sent him a message and is nearby. Now both looking for each other, they see each other through the lines of people and sit down to talk about their day.

After lunch, Sarah decides to go to the library to get a head start on her Engineering 53 homework. Later, leaving the library, she notices that the tree outside the library is not dead, as she'd thought—it's made out of metal and talking quietly. That's so weird. Flipping open her PDA, she clicks over to the digital graffiti page of ActiveCampus, since a friend told her there was lots of arts stuff in there (by default graffiti is not shown on the map since it can clutter). There is a list of graffiti that's been "tagged" in the area, including a "living dead tree" link near the top. Clicking on different parts of the tree leads to different parts of an interactive artwork. Clicking on the tree's roots leads to a story about the tree, pointing her to other talking trees on campus, and gives the lowdown on UCSD's Stuart art collection. Now she begins to understand all the weird stuff she'd been seeing on campus! Clicking on the spray can to the left of the graffiti's subject line, she is taken to a page where she "tags" the interactive tree with a "Thanks tree!" note to be seen by others who view the living dead tree via ActiveCampus. Walking off, she thinks, huh, I wonder if there is a role for art in engineering? She'd have to ask Mark about that.

DISCUSSION
Sarah's day reveals several crucial properties of ActiveCampus. The most notable is that it helps campus denizens see through the unintended barriers created by institutions. Sarah can see that there is a talk starting nearby, even though it was only officially disseminated to the campus via posters in engineering building hallways. Even if she had seen these posters earlier, it would not have been in the context of her frustrating day and probably long forgotten. Seeing a talk with "human" in the title, and in an engineering building, was her cue that this talk might be especially relevant to her. This is a function of mediation—the particulars in the scope of the general establish a context for interpretation.

ActiveCampus has similar benefits at the Price Center food court and library. In the Price Center, the mere concentration of people is the barrier created by the institution, but the context is eating, which implies relatively unstructured time—a friend of hers at the Price Center is probably free to chat. ActiveCampus merely provided the "final mile" solution, timely and contextualized information about her surroundings.

Colleague Interactions. Sarah's use of the buddy and instant messaging features is indicative of ActiveCampus's facilitator role. After helping her notice that her friend Brad was nearby, she used messaging and his displayed location to purposefully find him. One-click messaging short-cuts are available for typical meeting-directed communications, for example, "Are you free?" If many friends were nearby, she could have messaged all nearby or active buddies in one action to speed the process. In this way, Sarah is using ActiveCampus to maintain and even develop her social network in a chaotic context. Sarah could meet new people by modifying her privacy settings from the default "visible to buddies only" to "visible to buddies and others". Her location can be suppressed independently from her on-line status.

² ActiveCampus uses the PDA's report of all sensed 802.11b access points and their relative signal strengths to infer a location [6].
³ The flashing dome feature has been prototyped but is not yet deployed. ActiveCampus also uses the second line of each page to convey events like a new message arrival.
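As footnote 2 notes, location is inferred from the set of 802.11b access points a PDA hears and their relative signal strengths [6]. One simple way such an inference could be made, offered here only as an illustration and not as the deployed ActiveCampus algorithm, is a signal-strength-weighted centroid over access points with known positions. The access point names, coordinates, and strengths below are made up.

    # Hypothetical AP positions in some campus coordinate frame.
    ap_positions = {"ap-engineering": (120.0, 40.0),
                    "ap-price-center": (200.0, 80.0)}

    def estimate_location(scan):
        # scan maps AP id -> normalized received signal strength (higher = closer).
        total = sum(scan.values())
        x = sum(ap_positions[ap][0] * s for ap, s in scan.items()) / total
        y = sum(ap_positions[ap][1] * s for ap, s in scan.items()) / total
        return x, y

    print(estimate_location({"ap-engineering": 0.7, "ap-price-center": 0.3}))

Coarse as it is, this kind of estimate is sufficient to place a user near a building or food court, which is the granularity the scenario above relies on.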
Revealing one's location on ActiveCampus could lead to unwanted interactions. Thus, before Sarah could see Mark on her PDA (or vice versa), both she and Mark had to add each other as buddies—a mutual acceptance policy. In an ad hoc community, it might be hard to buddy-up spontaneously with such a method. At UCSD Sarah can use Mark's campus e-mail name to add him to her list. She didn't have to ask him for his ActiveCampus ID or exchange contacts with someone else. In fact, UCSD has a "finger" service that maps names to e-mail names.

Digital Graffiti. Sarah used digital graffiti to answer the question "What is this tree?" because there was no official link for the tree. Consequently, she found out not only what the tree was, but what other people thought about it. This is beneficial to Sarah because she is discovering that this is not just a campus of busy, stuffy professors lecturing to quietly listening undergraduates, but a place where people just a bit "ahead" of her are participating in the campus's academic life. Thus, as with discovering the talk, Sarah has—conceptually—seen through the walls of an art studio to see the campus in action. In actually posting her own graffiti, Sarah has taken an important step from being a passive visitor to a campus citizen involved in community discourse.

Not all of digital graffiti's potential is revealed in Sarah's day. Any ActiveCampus entity can be tagged: a static object such as a restaurant (e.g., "Get the ham sandwich, it's great!"), physical location (e.g., someone's favorite sunset locale), transient object (a buddy), or other graffiti. Through artistic expressions, political debates, and the like, graffiti can become a valued record of campus life. For example, a student might learn what others thought about recent concerts held at a campus venue, find links to band web sites, etc., helping people choose amongst opportunities.

Early Experience. Our own use of ActiveCampus has not been unlike that of our character Sarah. The following are a few typical examples of serendipitous interactions assisted by ActiveCampus.

Ben drops by Bill's office, but he's not there. Ben checks his PDA and sees that Bill is at the cafeteria across the quad. Ben heads over to the cafeteria and joins Bill and Jens for lunch.

Bill is stuck in a late meeting and sees that Pat is still in his office. A quick message confirms that Pat will still be there in a half hour for a much-needed meeting.

Bill is late for a meeting, but has to pick up lunch first. The group waiting for him sees that he's in the "line area" at the food court, and concludes that he'll arrive shortly.

Bob is waiting for Bill to return to his office, while continuing to work in the lab. When Bill shows up on his buddy list as being in "Griswold's at APM", he walks over.

While at his favorite cafe, Bill sees a graffiti claiming that the croissants are the best on campus, and he makes a note to try one sometime.

Acknowledgements. We thank Jim Hollan for his philosophical and technical guidance on technology-sustained communities. We thank UCSD's Facilities office, in particular Roger Andersen, Robert Clossin, and Kirk Belles, for their time, expertise, and resources. Thanks to Ed Lazowska for reading a draft of this paper. We thank Jeremy Weir, Jolene Truong, David Harbottle, Andrew Emmett, David Hutches, Jadine Yee, Justin Lee, Daniel Wittmer, Antje Petzold, Jean Aw, Linchi Tang, Adriene Jenik, and Jason Chen for their assistance on the project. Finally, we thank Intel's Network Equipment Division for donating network processors and Symbol Technologies for their software technical support.

REFERENCES
1. J. Burrell and G. K. Gay. E-graffiti: Evaluating real-world use of a context-aware system. Interacting with Computers, 14:301–312, 2002.
2. M. Cole. Cultural Psychology: A Once and Future Discipline. Harvard University Press, Cambridge, MA, 1996.
3. N. Davies, H. Cheverst, K. Mitchell, and A. Efrat. Using and determining location in a context-sensitive tour guide. IEEE Computer, 34(8):35–41, 2001.
4. F. Espinoza, P. Persson, A. Sandin, H. Nystrom, E. Cacciatore, and M. Bylund. GeoNotes: Social and navigational aspects of location-based information systems. In Ubicomp 2001, pages 2–17, Berlin, 2001. Springer.
5. W. G. Griswold, R. Boyer, S. W. Brown, and T. M. Truong. A component architecture for an extensible, highly integrated context-aware computing infrastructure. In 2003 International Conference on Software Engineering (ICSE 2003), 2003.
6. W. G. Griswold, R. Boyer, S. W. Brown, T. M. Truong, E. Bhasker, G. R. Jay, and R. B. Shapiro. ActiveCampus - sustaining educational communities through mobile technology. Technical Report CS2002-0714, UC San Diego, Department of CSE, July 2002.
7. S. Long, R. Kooper, G. D. Abowd, and C. G. Atkeson. Rapid prototyping of mobile context-aware applications: The Cyberguide case study. In Proceedings of the 2nd ACM International Conference on Mobile Computing and Networking (MobiCom'96), November 1996.
8. J. F. McCarthy and E. S. Meidel. ACTIVEMAP: A visualization tool for location awareness to support informal interactions. In Intl. Symposium on Handheld and Ubiquitous Computing (HUC'99), pages 158–170, 1999.
9. J. Newman, D. Ingram, and A. Hopper. Augmented reality in a wide area sentient environment. In Proceedings of the 2nd IEEE and ACM International Symposium on Augmented Reality (ISAR 2001), New York, 2001.
10. R. Oppermann and M. Specht. Context-sensitive nomadic exhibition guide. In Ubicomp 2000, pages 127–142, Berlin, 2000. Springer.
11. S. Pradhan, C. Brignone, J. H. Cui, A. McReynolds, and M. T. Smith. Websigns: Hyperlinking physical locations to the web. IEEE Computer, 34(8):42–48, 2001.
12. H. Rheingold. The Virtual Community. MIT Press, Cambridge, revised edition, 2000.
The Location Stack: Multi-sensor Fusion in Action
Jeffrey Hightower and Gaetano Borriello
Dep’t of Computer Science & Engineering Intel Research Seattle
University of Washington 1100 NE 45th Street, Suite 600
Box 352350 Seattle, WA 98105
Seattle, WA 98195 +1 206 633 6555
+1 206 543 1695 {jeffrey.r.hightower,gaetano.borriello}@
{jeffro,gaetano}@cs.washington.edu intel.com
48
RFID tags, ultrasonic ranging badges, active radio proximity tags, global positioning system receivers, infrared laser range-finders, and 802.11b wireless clients. Information is pushed up the stack as sensors generate new information about the changing state of the physical world. This demonstration uses passive RFID tags and ultrasonic ranging badges.

Measurements
Each sensor driver discretizes and classifies the data produced into measurements of type Distance, Angle, Proximity, or Position as well as several aggregate types such as Scan (a distance-angle combination). For example, infrared badges and RFID sensors both produce proximity measurements with likelihood models based on the power of the infrared emitters and the range and antenna characteristics of the radio. These measurement likelihood models describe the probability of observing a measurement given a location of the person or object. Such a model consists of two types of information: first, the sensor noise and, second, a map of the environment. The problem of constructing maps of indoor environments receives substantial attention in the robotics research community and is not our focus in this work.

Figure 2: Measurement likelihood model for ultrasound tags. Darker areas represent higher likelihood.

Figure 3: Measurement likelihood model for infrared proximity badges. Darker areas represent higher likelihood.

Figure 2 shows the likelihood model at all locations in our lab for a specific 4.5 meter ultrasound distance measurement. The likelihood function is a ring around the location of the sensor where the width of the ring is the uncertainty in the measured distance. Such noise may be represented by a Gaussian distribution centered at the measured distance. Furthermore, since ultrasound sensors frequently produce measurements that are far from the true distance due to reflections, all locations in the environment have some likelihood, as indicated by the gray areas in the map. White areas are blocked by obstacles. Figure 3 illustrates the sensor model for the infrared badge systems. Infrared sensors provide only proximity information, so likelihood is a circular region around the receiver. RFID tags are also a proximity technology and behave similarly.
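To make the shape of such a model concrete, the following is a minimal sketch of a ring-shaped likelihood for a single ultrasound distance measurement: a Gaussian centered on the measured distance plus a small uniform floor that accounts for reflections. The parameter values are illustrative, and the map-of-the-environment component described above (blocked areas having zero likelihood) is omitted for brevity.

    import math

    def ultrasound_likelihood(location, sensor_xy, measured_d, sigma=0.3, floor=0.02):
        # Gaussian ring around the measured distance, with a floor for reflections.
        d = math.dist(location, sensor_xy)
        ring = math.exp(-0.5 * ((d - measured_d) / sigma) ** 2)
        return max(ring, floor)

    # Likelihood of a 4.5 m reading if the person were actually ~4.2 m from the sensor.
    print(ultrasound_likelihood((3.0, 3.0), (0.0, 0.0), 4.5))

An infrared or RFID proximity model would follow the same pattern, with the ring replaced by a disc of roughly uniform likelihood around the receiver.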
Fusion
The Fusion layer continually merges measurements into a probabilistic representation of objects' locations and presents a uniform programming interface to this representation. In this demonstration we illustrate estimating the location of multiple people where each person wears an RFID tag and an ultrasonic ranging badge. Due to these sensors' low accuracy (relative to robotics and motion capture sensors like precision scanning laser range finders), the belief over each person's location is typically very uncertain and often multi-modal, hence we apply a Bayesian filtering technique called the particle filter, which is commonly used in robot localization and is optimized for this type of scenario. Particle filters can naturally integrate information from different sensors. Refer to [3] for a general survey of Bayesian filtering techniques for location estimation or [4] for an in-depth treatment of particle filters and Monte Carlo statistical techniques.
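For readers unfamiliar with particle filters, the following bare-bones sketch shows the shape of one update step of the kind described above: diffuse the particles with a simple motion model, weight each particle by the likelihood of the new measurement, and resample. This is an illustration of the general technique, not the Location Stack's Java implementation, and the motion and noise parameters are invented.

    import math, random

    def likelihood(p, sensor_xy, measured_d, sigma=0.3, floor=0.02):
        # Gaussian ring around the measured distance, with a floor for reflections.
        return max(math.exp(-0.5 * ((math.dist(p, sensor_xy) - measured_d) / sigma) ** 2), floor)

    def particle_filter_step(particles, sensor_xy, measured_d):
        # 1. Motion model: diffuse each particle slightly (random walk).
        moved = [(x + random.gauss(0, 0.2), y + random.gauss(0, 0.2)) for x, y in particles]
        # 2. Weight each particle by the likelihood of the observed measurement.
        weights = [likelihood(p, sensor_xy, measured_d) for p in moved]
        # 3. Resample particles in proportion to their weights.
        return random.choices(moved, weights=weights, k=len(moved))

    particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
    particles = particle_filter_step(particles, sensor_xy=(0.0, 0.0), measured_d=4.5)

Because the update only needs a likelihood function per sensor type, measurements from ultrasound, infrared, and RFID sensors can all be folded into the same set of particles, which is what makes the approach a natural fit for multi-sensor fusion.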
Figure 4: Sensor fusion of infrared and ultrasound sensors. Density of the particles reflects the probability posterior of the person's location.

Figure 4 shows snapshots from a typical sequence projected onto a map of the environment. In this example, the person is wearing an infrared badge and ultrasound tag and starts in the upper right corner as indicated by the icon. Since the start location is unknown to the system, the particles are spread uniformly throughout the free-space of the environment. The second picture (top right) shows the location probability after the person has moved out of the cubicles and into the upper hallway. At this point, the samples are spread over different locations. After an ultrasound sensor detects the person, their location can be estimated more accurately, as shown in the third (bottom left) picture in Figure 4. Later, after moving down the hallway on the left, the samples are spread over a larger area, since this area is only covered by infrared sensors that only provide very coarse location information (bottom right).

A single sensor fusion service running on a modern PC (1.8 GHz Pentium 4 with 512 MB memory) can perform real-time multi-sensor probabilistic tracking of more than 40 objects at a sustainable rate of 2 measurements per second per object. Objects are tracked in 7 dimensions (x, y, z, pitch, roll, yaw, and linear velocity). Higher performance (more objects or a faster measurement rate) can be realized by reducing the state space to two dimensions or through more advanced techniques such as our technique of constraining the particle filters to Voronoi graphs of the environment discussed below. Another way to increase performance is to distribute computation across multiple fusion services, although applying certain Arrangements layer operators then poses additional challenges.

There are two pieces of additional research we have contributed but are not highlighting in this demonstration. First, we have shown how particle filters can be used more efficiently by constraining possible locations of a person to locations on a Voronoi graph of free space that naturally represents typical human motion along the main axes of the environment. In experiments we found that such Voronoi graph tracking results in better estimates with less computation. Furthermore, the Voronoi graph structure can be used to learn high-level motion patterns of a person. For example, the graph can capture information such as "Rebecca goes into room 22 with probability 0.67 when she walks down hallway 9." More details on using Voronoi graphs with particle filters and on applying high-level learning can be found in [5,6]. Second, although also not shown in this demonstration, other work of ours at the Fusion layer has addressed the problem of estimating objects' identities in situations where explicit identity information is not provided by all the sensors. In particular, we have introduced a technique to combine highly accurate anonymous sensors like scanning infrared laser range finders with less accurate identity-certain location technologies like infrared and ultrasonic badges [7].

Arrangements
We provide two operators to relate the locations of multiple objects. We provide a test for multi-object proximity given a distance and a test for containment within a map region. Because we operate directly on the location probability posteriors of each object, the results of these tests can also be probabilistic. For example the proximity test produces a pairwise confidence matrix that a given group of objects are within 4 meters of one another. Taken together, these operators provide a probabilistic implementation of the "programming with space" metaphor as used with great success in the AT&T Sentient Computing project [8]. Future work in our implementation of the Arrangements layer is to provide an additional operator to test for more general geometric formations of multiple objects.
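Because each object's posterior is represented by particles, a probabilistic proximity test of the kind described above can be computed directly from two particle sets. The sketch below illustrates the idea by sampling particle pairs; it is an illustration of the operator's semantics, not the Arrangements layer's actual interface, and the thresholds and particle clouds are invented.

    import math, random

    def proximity_confidence(particles_a, particles_b, threshold=4.0, samples=1000):
        # Estimate P(distance(A, B) <= threshold) by sampling pairs of particles.
        hits = 0
        for _ in range(samples):
            a = random.choice(particles_a)
            b = random.choice(particles_b)
            if math.dist(a, b) <= threshold:
                hits += 1
        return hits / samples

    a = [(random.gauss(2, 0.5), random.gauss(2, 0.5)) for _ in range(200)]
    b = [(random.gauss(4, 0.5), random.gauss(3, 0.5)) for _ in range(200)]
    print(proximity_confidence(a, b))

Applying such a test to every pair in a group yields the pairwise confidence matrix mentioned above, and a containment test works the same way with a region predicate in place of the pairwise distance.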
Context and Activities
The Contextual Fusion layer combines location information with other contextual information such as personal data (schedules, email threads, contact lists, task lists), temperature, and light level while the Activities layer categorizes contextual information into semantic states defining an application's interpretation of the world. Our implementation of the Context and Activities layers is in its infancy because few ubiquitous computing systems have been deployed which take sensor information all the way up to the level of human activity inference. To make inroads, we are collaborating with the Assisted Cognition research group, a group seeking to create novel computer systems that will enhance the quality of life of people suffering from Alzheimer disease and similar cognitive
disorders [9]. Our goal for this collaboration is to design general interfaces for the Context and Activities layer based on usage patterns of the existing Fusion and Arrangements layers in support of these higher level learning tasks.

SUMMARY
Our demonstration highlights the primary capabilities of our Location Stack implementation: We show a highly flexible system which can track multiple people using statistical sensor fusion of information from multiple sensor technologies, in this case, RFID proximity tags and ultrasonic distance measurement badges.

The Location Stack abstractions structure location systems into a layered architecture with robust separation of concerns allowing us to partition the work and research problems appropriately. Our implementation is a publicly available Java package containing a complete framework for operating a multi-sensor location system in a ubiquitous computing environment. The implementation is typical of a modern ubiquitous computing system: a set of reliable distributed services communicating using asynchronous XML messages and linked using dynamic service discovery capability in the middleware. The Location Stack is deployed in our laboratory and workspace at Intel Research Seattle, operates nearly 24x7, and is used by other research projects as a reliable source of location information.

REFERENCES
1. Hightower, J. and Borriello, G. Location systems for ubiquitous computing. Computer, 34(8):57–66, August 2001.
2. Hightower, J., Brumitt, B., and Borriello, G. The location stack: A layered model for location in ubiquitous computing. Proceedings of the 4th IEEE Workshop on Mobile Computing Systems & Applications (WMCSA 2002), pages 22–28, Callicoon, NY, June 2002. IEEE Computer Society Press.
3. Fox, D., Hightower, J., Liao, L., Schulz, D., and Borriello, G. Bayesian Filtering for Location Estimation. IEEE Pervasive Computing, vol. 2, no. 3, pp. 24-33, IEEE Computer Society Press, July-September 2003.
4. Doucet, A., de Freitas, N., and Gordon, N., editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, New York, 2001.
5. Liao, L., Fox, D., Hightower, J., Kautz, H., and Schulz, D. Voronoi tracking: Location estimation using sparse and noisy sensor data. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2003.
6. Rabiner, L. R. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE. IEEE, 1989. IEEE Log Number 8825949.
7. Schulz, D., Fox, D., and Hightower, J. People Tracking with Anonymous and ID-Sensors using Rao-Blackwellised Particle Filters. Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), 2003.
8. Addlesee, M., Curwen, R., Hodges, S., Newman, J., Steggles, P., Ward, A., and Hopper, A. Implementing a sentient computing system. Computer, 34(8):50–56, August 2001.
9. Kautz, H., Etzioni, O., Fox, D., Weld, D., and Shastri, L. Foundations of assisted cognition systems. UW-CSE 03-AC-01, University of Washington, Department of Computer Science and Engineering, Seattle, WA, March 2003.
A Novel Interaction Style for Handheld Devices
James Hudson and Alan Parkes
Computing Department, Lancaster University, Bailrigg, Lancaster, LA1 4YR, UK
{j.a.hudson@, app@comp.}lancs.ac.uk
interaction. By using a mouse, pen, or touchpad, the user simply draws a 2D symbol to execute an action; we will refer to this as stroke or gesture interaction. However, gestural input is partly a consequence of implementing visual overloading, since it is necessary to resolve issues of layer interaction. To avoid the overhead of manipulating layers, such as moving them about, to address, for example, elements or widgets which are beneath a layer, gestural interaction is used to provide the necessary context.

IMPLEMENTATION
Our implementation takes the form of a mock-up of a mobile phone interface with a standard menu or list driven interface on a 12x5cm display. This approach was taken to assist in rapid prototyping and to avoid any difficulties with device specific limitations. We chose to use simple animated black and white transparent GIFs. This we did to show that processor intensive alpha blending was not essential and adequate results could be achieved with simple, well chosen animations.

Overview
Commands can be executed with either the standard "point & click" approach, or the user can circumvent intrusive hierarchical menu interaction by drawing a symbol that starts over the relevant list item or button, which takes the user directly to the required dialogue or executes the desired command. Note that a stroke is not restricted in size.

Figure 1. The initial screen contains a list of frequently dialed numbers and two animated overloaded controls. The darker traces show the execution of a stroke.

In addition, two overloaded control elements, depicted in Figure 1, are superimposed over the menu items: one of an envelope to access messaging functions and the other of the word 'register', to access the call register, which demonstrates the overloading of text.

We now discuss the interface components and consider some interaction scenarios to help explain the use and benefits of this interface design.

Gesture Activated Buttons and List Elements
In Figure 2 we see the use of the gesture activated "Name" button to search for a given phone number. By drawing a 'T' over it (left) the interface lists all telephone number entries that begin with the letter 'T', and by drawing a 'P' (middle) the list is further optimized to all elements that begin with the letter 'T' and contain the letter 'P'. This approach drastically cuts down on executions for selecting an entry, whilst possessing a greater cognitive salience.

Figure 2. A gesture activated 'Name' button is used to make a search for a telephone number.

To further optimize the interface, drawing a symbol or tapping on the left of the list executes a command, such as a double-click to call a number or drawing a 'd' to access the 'list details' dialogue, whereas a symbol drawn on the right side of the list will further refine the search to any remaining items that contain the desired letter.

Redundancy of Interaction Styles
This form of interaction model is not restricted to gestural interaction alone; it can be used in the same way as a conventional mobile phone or by using the gesture optimizations. This allows the user to learn these gesture optimizations as they become familiar, thus avoiding any significant learning overhead. To access a list element the user can either tap over it or gesture over it. For example, as depicted in Figure 1 (middle), the user can simply draw a 'd' starting over the list element to go straight to the desired 'list details' dialogue, in this case from the item marked 'sport centre'. Alternatively, looking at the list of frequently called numbers (Figure 1), to access the details of a telephone number the user can click on the menu button and navigate a series of submenus to access the 'get details' option. Similarly, in the example from Figure 2 the user can dispense with the gesture interaction and use a series of hierarchical menus by simply tapping on the option button and accessing a number in the conventional fashion.

A necessary example is that of dialing a number (see Figure 3); the use of gestures would be a less than adequate means of carrying out this task, so the approach resorts to a more conventional one where necessary.
Figure 3. How a number can be dialed. The appropriate gesture is executed over the 'Menu' button to access a dialogue to dial a number.

To dial a number, the user either clicks on the 'Menu' button, gaining access to a hierarchical menu containing an option to 'Dial a number' (as in conventional interfaces), or executes the appropriate gesture (an upward line) over the menu button to go directly to a conventional dialogue more suited to the desired task. This approach demonstrates the practical integration of the two models of interaction.

Overloaded Icons
The initial screen has two overloaded icons (Figure 1). As expected, executing the appropriate gesture over a list item will execute a command. However, if the gesture starts within the region of an overloaded control and the gesture relates to that overloaded control element, the appropriate command is executed, thus disambiguating between competing overloaded controls and menu items.

For example, drawing an 'M' stroke over the 'register' overloaded icon, demonstrated in Figure 1 (left), accesses the 'Missed calls' dialogue, whereas executing an 'r' accesses the 'Received calls' dialogue.
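A minimal sketch of the disambiguation rule just described: a stroke that starts inside an overloaded control's region and matches one of that control's gestures triggers that control's command; any other stroke falls through to the list item beneath its starting point. The regions, gesture names and command identifiers here are hypothetical, not taken from the prototype.

# Hypothetical sketch: route a stroke either to an overloaded control
# or to the list item under its starting point.
OVERLOADED_CONTROLS = {
    # region (x, y, width, height) -> {gesture letter: command}
    (0, 0, 40, 20): {"M": "missed_calls", "r": "received_calls"},
    (40, 0, 40, 20): {"C": "compose", "I": "inbox", "O": "outbox"},
}

def inside(region, point):
    x, y, w, h = region
    px, py = point
    return x <= px < x + w and y <= py < y + h

def route_stroke(start_point, gesture, item_under_start):
    """Return the command that a recognised stroke should execute."""
    for region, gestures in OVERLOADED_CONTROLS.items():
        if inside(region, start_point) and gesture in gestures:
            return gestures[gesture]                    # the control wins the stroke
    return ("list_gesture", item_under_start, gesture)  # otherwise treat as a list gesture

print(route_stroke((5, 5), "M", None))              # -> 'missed_calls'
print(route_stroke((10, 60), "d", "sport centre"))  # -> handled as a list-item gesture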
Text Input
Figure 4. A text input dialogue that embodies the same approach for the overloaded text input panel as used for the 'Register' overloaded icon (Figure 1).

Referring back to Figure 1 (top left), drawing a 'C' for "compose" over the animated envelope would open a text input dialogue (Figure 4), whereas an 'I' or 'O' would invoke the 'Inbox' and 'Outbox', respectively. The text input or "Compose" dialogue makes use of an overloaded text input panel.

A letter is selected by starting a simple gradient gesture over a group of letters, as shown in Figure 4 (middle, left). The direction of the line determines the letter selected. In this example the 'L' has been selected, whereas an upward stroke would select 'K' and a left-up stroke would select the letter 'J'.

This approach to text input enables the user to enter text easily, without a complex combination of keystrokes, via an adequately sized soft keyboard.
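The direction-to-letter mapping described above could be resolved along these lines; this is only a sketch, and the angular thresholds and the 'J/K/L' grouping are assumptions made for the example rather than details of the actual recogniser.

import math

# Hypothetical sketch: the direction of a short stroke drawn over the
# "JKL" key group selects one of its three letters.
def select_letter(start, end, group=("J", "K", "L")):
    dx = end[0] - start[0]
    dy = start[1] - end[1]                        # screen y grows downwards
    angle = math.degrees(math.atan2(dy, dx))      # 0 deg = right, 90 deg = up
    if 60 <= angle <= 120:
        return group[1]    # roughly upward stroke   -> 'K'
    if angle > 120:
        return group[0]    # up and to the left      -> 'J'
    return group[2]        # remaining directions    -> 'L'

print(select_letter((10, 50), (11, 10)))   # upward        -> K
print(select_letter((30, 50), (5, 20)))    # left-up       -> J
print(select_letter((10, 50), (40, 30)))   # right-leaning -> L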
With respect to the design requirements discussed earlier, the benefits of our proposed design of a mobile phone interface can be summarised as follows:
• Practical one-handed manual touch screen interaction
• Maintains adequately sized control elements
• Optimization of limited screen real-estate
• Avoids the use of memory-intensive hierarchical menus
• Reduction in the cognitive overhead of a visual search schema, e.g., scanning for a list element
• A greater cognitive purchase afforded by the gesture interaction
• Greater redundancy in the functionality of controls
• More efficient number look-up, e.g., the selection of a phone number within 1-3 gesture executions, rather than 3-8+ button presses
• The incorporation of standard point & click with the overloaded gesture interaction exploits a redundancy of interaction styles, thus optimizing learnability

PRELIMINARY EVALUATION
Ten subjects used our interface design to carry out a range of tasks such as those discussed above. The tasks were first carried out in the conventional way (through hierarchical menus), and then by the stroke-optimized route. After spending a short time learning to use the interface, the users readily completed the tasks unaided, and expressed a preference for the gesture-optimized shortcuts and overloaded icons over conventional interaction styles.

The subjects reported that they did not favor devices that relied on additional interaction aids, such as a stylus, and preferred our model, which supports manual operation. Subjects also commented that our interface is less awkward to use than systems without gesture interaction.

Moreover, we discovered that, with appropriate training, a user can input a text message, without using a stylus, at rates comparable to those of standard single-finger soft keyboards (i.e., around 40 wpm). This is achieved without the cumbersome interaction associated with common mobile devices. This represents a significant improvement over conventional text input for handheld devices with small display screens.
It has to be said that there is a slight overhead in learning the appropriate gestures, although a user could always resort to the conventional form of interaction if difficulties were encountered. Users commented that they found the gesture approach to be both novel and useful, and many reported they felt motivated to learn the necessary gestures. We intend to reduce the learning overhead by using the more familiar 'Graffiti' handwriting recognition alphabet found in many handheld devices.

CONCLUSION
This paper has proposed a solution to the problems and shortcomings of existing text input schemes, particularly for small devices. A prototype system, making use of gestures and visual overloading, was also described. It was demonstrated that this prototype makes effective use of screen area, and preserves the portability of the device, while providing a rich set of easily accessible features.

Our current work involves investigating the application of our techniques to support interaction for large screen devices such as Databoards, for public information displays, the desktop and for very small, e.g., wearable, devices [12]. We are exploring the effectiveness of visual overloading itself, and seeking to improve touch screen interaction, among other things. We also intend to explore the use of our techniques in a predictive text application.

FURTHER RESEARCH
In continuation of our work we intend to explore ways of providing better affordance, since poor affordance can be a major drawback of gesture interaction. We will explore the use of visually overloaded help-prompts to provide for goal navigation and goal exploration, such as gestures being used to call up an overloaded layer of commands related to a control.

We are currently designing experiments to support our theory that gesture interaction and animated icons are suitable for creating highly usable small devices, and to examine the acceptability of animated transparencies with respect to distractibility.

Finally, we recognize that our future research will benefit from an investigation into theories of perception. Such work may help us to minimize, and govern the effects of, visual rivalry, perhaps by introducing 3D elements and dynamic shading [4,14].

REFERENCES
1. Baecker, R., Small, I. and Mander, R. Bringing icons to life. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (April 1991), ACM, 1-6.
2. Bartlett, J. Transparent Controls for Interactive Graphics. WRL Technical Note TN-30, Digital Equipment Corp., Palo Alto, CA (July 1992).
3. Belge, M., Lokuge, I. and Rivers, D. Back to the future: a graphical layering system inspired by transparent paper. InterCHI '93 Adjunct Proceedings (April 1993), ACM Press, 129-130.
4. Bier, E., Stone, M., Pier, K., Buxton, W. and DeRose, T. Toolglass and Magic Lenses: The See-Through Interface. Proceedings SIGGRAPH '93 (August 1993), ACM Press, 73-80.
5. Goldstein, M. and Chincholle, D. The Finger-Joint Gesture Wearable Keypad. Workshop on Mobile Devices, INTERACT '99 (Edinburgh, UK, August 1999).
6. Harrison, B., Ishii, H., Vicente, K. and Buxton, W. Transparent layered user interfaces. Proceedings CHI '95 (May 1995), ACM Press, 317-324.
7. Hudson, J. and Parkes, A. Visual Overloading. Adjunct Proceedings HCI International 2003 (June 2003), 67-68.
8. Lokuge, I. and Ishizaki, S. GeoSpace: An interactive visualization system for exploring complex information spaces. Proceedings CHI '95 (May 1995), ACM Press, 409-414.
9. Kamba, T., Elson, S., Harpold, T., Stamper, T. and Sukaviriya, P. Using small screen space more efficiently. Proceedings CHI '96 (April 1996), ACM, 383-390.
10. Kurtenbach, G. and Buxton, W. User learning and performance with marking menus. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (April 1994), ACM Press, 258-264.
11. Masui, T. An Efficient Text Input Method for Handheld and Ubiquitous Computers. Lecture Notes in Computer Science 1707, Handheld and Ubiquitous Computing, Springer Verlag, 289-300, 1999.
12. MacKenzie, I. S., Zhang, S. and Soukoreff, R. W. Text entry using soft keyboards. Behaviour & Information Technology 18 (1999), 235-244.
13. Meyer, A. Pen computing: a technology overview and a vision. ACM SIGCHI Bulletin 27, 3 (July 1995), ACM, 46-90.
14. McGuffin, M. and Balakrishnan, R. Acquisition of expanding targets. Proceedings of CHI 2002 (April 2002), ACM Press, 57-64.
15. Silvers, R. Livemap: a system for viewing multiple transparent and time-varying planes in three-dimensional space. Conference Companion CHI '95 (May 1995).
WiFisense™: The Wearable Wireless Network Detector
Milena Iossifova and Ahmi Wolf
Interactive Telecommunications Program
New York University
[email protected], [email protected]
a fashion image. Creating various designs might make WiFisense appealing to more people.

Experimenting with light as a gentle means of communication, the bag uses LEDs for the physical display of information. When WiFisense does not see a wireless network, the LEDs look like simple beads. When a network is discovered, the LEDs light up in patterns displaying the number of networks at a certain physical location and their corresponding signal strengths. Attempting to capture one's attention only when the device finds relevant information, lights from the bag color the environment as a means of ambient feedback.

PROCESS
WiFisense scans for the presence of 802.11b networks. When it discovers a network, it uses the signal strength and encryption status to create patterns of light announcing the network's availability, quality and accessibility.

WiFisense goes beyond the currently marketed 2.4 GHz detectors [6, 7]. It uses an embedded controller and an 802.11 wireless card to implement its functionality. In a passive manner it scans for the presence of 802.11 management frames broadcast by all access points. The presence of these frames, and the information that lies within them, informs the device of an existing wireless node and its various features. Characteristics such as signal strength and sometimes the encryption status are available within these frames.

The controller uses eight rows of eight LEDs to announce a network's features, with each row representing a single network. The strength of the signal is mapped to the eight LEDs: the higher the signal strength, the more LEDs light up. The LEDs flicker due to fluctuating signal strength, which depends on the proximity to the base station, the position in space relative to it, other activity on the same access point, and the building materials of the space.

The bag can display up to 8 networks at a time, creating a beautiful collage of multiple networks with various strengths overlapping in space.
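As a rough sketch of the signal-to-LED mapping just described, and not the bag's actual firmware, assume the controller reports each network's signal strength as a percentage and drives one row of eight LEDs per network:

# Hypothetical sketch of mapping detected networks to an 8x8 LED matrix:
# one row per network, with the number of lit LEDs in a row growing with
# the network's signal strength.
ROWS, LEDS_PER_ROW = 8, 8

def leds_for_signal(signal_percent):
    """Map a 0-100 signal strength to the number of LEDs to light in a row."""
    signal_percent = max(0, min(100, signal_percent))
    return round(signal_percent / 100 * LEDS_PER_ROW)

def render(networks):
    """Return an 8x8 frame (True = LED on) for up to eight detected networks."""
    frame = [[False] * LEDS_PER_ROW for _ in range(ROWS)]
    for row, net in enumerate(networks[:ROWS]):
        for led in range(leds_for_signal(net["signal"])):
            frame[row][led] = True
    return frame

scan = [{"ssid": "cafe", "signal": 90}, {"ssid": "office", "signal": 35}]
for row in render(scan):
    print("".join("*" if lit else "." for lit in row))

Re-rendering this frame as the measured strength fluctuates would produce the flicker described above.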
RESULTS & FUTURE WORK
Seeing us wear the WiFisense bag in the streets of New York City, people have started conversations, amazed to find out that there is such a thing as wireless Internet access. A potential implication of this is that WiFisense can be successful as a means of increasing public awareness of this new technology.

We have also thought about possible subsequent functionality, such as moving away from the passive act of scanning towards joining and actively communicating on available networks.

Lastly, we are currently exploring various physical forms for future iterations of WiFisense.

CONCLUSION
WiFi is still an emerging standard for wireless Internet communication. By broadcasting information about the networks found, in the form of dynamic light patterns, we intend to increase public awareness of this new technology.

WiFisense explores the boundaries between the tangible and perceptible world and the rest that surrounds us: the intangible yet present. It turns a person's movement through space into a display of the unseen yet increasingly ubiquitous world of connectivity.

REFERENCES
1. Weiser, M. and Brown, J. Designing Calm Technology. PowerGrid Journal, v1.01, 1996.
2. Wisneski, C., Ishii, H., Dahley, A., Gorbet, M., et al. Ambient Displays: Turning Architectural Space Into an Interface between People and Digital Information. Lecture Notes in Computer Science, Springer Verlag, Vol. 1370, p. 22, 1998.
3. Dunne, A. and Raby, F. http://www.mediamatic.nl/Doors/Doors2/DunRab/DunRab-Doors2-E4.html
4. Norman, D. A. The Design of Everyday Things. Basic Books, New York, USA, 2002.
5. Warchalking website: http://www.warchalking.org
6. Kensington Technology Group. http://www.kensignton.com/html/3720.html
7. SmartID Technology, Pte. Ltd. http://www.smartid.com.sg/prod01.htm
Tejp: Ubiquitous Computing as Expressive Means of
Personalising Public Space
Margot Jacobs
Play Studio, Interactive Institute
Hugo Grauers gata 3, 412 96 Göteborg, Sweden
www.playresearch.com
+46-(0)734055867
[email protected]

Lalya Gaye, Lars Erik Holmquist
Future Applications Lab, Viktoria Institute
Box 620, 405 30 Göteborg, Sweden
www.viktoria.se/fal
+46-(0)31-7735562
{lalya, leh}@viktoria.se
otherwise turning aside from the normal course or purpose" [1], and it is usually used as a critique of the information content pervading public space, as in many of the examples we encountered during our study.
PROTOTYPES
We describe Audio Tags and Glitch as first examples of the
types of prototypes we develop and experiment with in this
project.
Audio Tags
Audio tags illustrate the notion of overlaying personal
traces on public space. An audio tag contains an audio
message that once recorded can be left at hidden places in
public spaces. When passers-by lean towards the device,
this personal message is whispered to their ears. People then have the possibility to record over the existing messages with their own.

The prototypes are made from hacked low-cost gadgets and are only a few cm³ in size. They consist of a small microphone through which an audio message of up to 10 seconds can be recorded onto a small sample buffer while holding a button, and a small speaker that reveals the content of the message when an IR sensor senses the proximity of a person (Fig. 1). After recording their message, people can attach the tags to walls or other structures in public space.

Figure 1: Audio tags: adding a layer of personal audio on physical structures
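The tag's behaviour, record while the button is held and replay when the IR sensor detects a nearby person, could be sketched as follows. The button, IR sensor, microphone and speaker objects are hypothetical placeholder drivers, not the hacked gadget's real interface.

import time

MAX_SECONDS = 10   # an audio tag holds at most a ten-second message

def audio_tag_loop(button, ir_sensor, microphone, speaker):
    """Event loop for one tag; the four arguments are assumed driver objects."""
    message = None
    while True:
        if button.is_pressed():
            # Record over any existing message while the button is held down.
            started = time.time()
            samples = []
            while button.is_pressed() and time.time() - started < MAX_SECONDS:
                samples.append(microphone.read_sample())
            message = samples
        elif message and ir_sensor.person_nearby():
            # Whisper the stored message to the passer-by.
            speaker.play(message)
        time.sleep(0.05)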
Virtual annotations of space have been explored by several projects, such as GeoNotes [3], in which location-specific text annotations on public space, authored by the users themselves, are browsed with PDAs. In its approach to augmenting public space with location-based audio, the Audio Tag experiment is also related to projects
like Augment-able Reality [7] in which virtual voice notes
and photographs are accessible through an augmented
reality wearable system, and Hear&There [8] where personal
audio imprints are virtually linked to physical locations.
In the case of Audio Tags, we were interested in exploring physical rather than virtual interaction with the audio space, so that people do not need any particular device to access the information and the audio is better integrated into the public space. The Voice Boxes [4], which record a personal audio message when opened, are similar to our experiment in this respect. However, while users of the Voice Boxes trigger the messages by manipulating the devices, the Audio Tag messages are triggered by physical proximity, in an implicit way. By being fixed on physical structures in the environment as parasites, and by only making themselves discreetly heard within a certain radius, as when whispering to someone, the tags open a space of intimacy inside the public realm. This proximity triggering, combined with the small size of the tags that makes them almost disappear into the environment, helps ensure a serendipitous discovery.
Glitch
As opposed to overlaying information, Glitch is about revealing a hidden layer of personal communication in public space. Interference caused by passers-by's messages and phone calls is loudly broadcast at a public place with high traffic potential, such as bus stops or busy street corners. If the speaker array is, for example, disposed linearly along a usual pedestrian path, the glitches stalk the mobile user during the whole phase of mobile communication initiation (Fig. 2).

The Glitch prototypes are arrays of powered-on loudspeakers picking up electromagnetic interference from mobile phones. Some of them use a standard antenna and can be installed in a grid formation, while others parasite off existing metallic urban structures such as fences or trash cans in the city, re-using them as antennas in a parasitic way.

Figure 2: Glitch: revealing a layer of meaning by re-situating familiar phenomena in unusual settings.

Earlier projects such as Live Wire [9] and Placebo [2] also make otherwise hidden communication networks visible in a way that is integrated into everyday contexts, respectively with a wire dangling in proportion to the amount of activity in a computer
network, and with furniture reacting to electromagnetic fields produced by mobile phones or other leaking electronic objects. A more recent example of this is WiFisense [10], a handbag covered with light diodes that light up when it detects wireless networks.

Glitch, on the other hand, follows the situationist tactic of détournement [1], by re-situating the familiar auditory phenomenon of speakers picking up incoming calls or text messages before the mobile phone does, which usually takes place at home or in the office, into the unexpected setting of outdoor urban environments. As the nature and origin of the noises are familiar to most people and easily identifiable, while the speakers remain hidden, a situation of interruption is created, highlighting the virtual and pervasive layer of mobile phone communication. Moreover, Glitch differs from the previously named projects by its parasitic nature.

OUTCOME
Our hope is that through the accidental collaboration of various actors in the public realm, the project will result in physical networks of meaning, aesthetics and perhaps a critique of the everyday environment. The Tejp prototypes are tested on site through specifically crafted tactics and placement. Testing procedures and experiments range from outdoor workshops to stake-outs and video-based analysis. Once we have experimented with users in real urban settings, we will derive informed design implications based upon recurring patterns of people's (mis)use of the prototypes and emerging narratives, both from the perspective of active and of accidental participants within the project. This implies observing changing content, placement, modes of initiation and interaction behaviours. Based on these design implications, we will be able to draw conclusions for and about expressive ubiquitous computing environments.

CONCLUSION
We have presented the project Tejp, which is a step towards a more poetic, strange, and personal expression in ubiquitous computing. Tejp explores how to empower people with open, pervasive means of structuring and personalising their everyday environment through overlaying and uncovering meaning on public, physical space. Besides the two examples we have described, we will be experimenting with a series of other low-tech prototypes, resulting in informed design implications for this field.

ACKNOWLEDGMENTS
We would like to thank all of the participants and interviewed individuals, specifically MABE, LADY and EMMI. We would also like to thank Tobias Skog, Ramia Mazé, Daniel Rehn, Daniel Skoglund, as well as PLAY, FAL, and the Mobile Services project members for their comments and support. This project is funded by the Swedish Foundation for Strategic Research through the Mobile Services project and the Interactive Institute.

REFERENCES
1. Debord, G.-E.: Methods of Détournement. Les Lèvres Nues #8 (1956).
2. Dunne, A. and Raby, F.: Design Noir: The Secret Life of Electronic Objects. August/Birkhäuser (London, UK and Basel, Switzerland, 2001).
3. Espinoza, F., Persson, P., Sandin, A., Nyström, H., Cacciatore, E. and Bylund, M.: GeoNotes: Social and Navigational Aspects of Location-Based Information Systems. Proc. of UbiComp '01 (Atlanta, USA, 2001).
4. Jeremijenko, N.: Voice Boxes: 3D Sound Icons for Real Space. Ars Electronica Prix 95 (Linz, Austria, 1995).
5. Johnson, S.: Interface Culture: How New Technology Transforms the Way We Create and Communicate. Harper San Francisco (San Francisco, USA, 1997).
6. Martin, N. M.: Parasitic Media: Creating Invisible Slicing Parasites and Other Forms of Tactical Augmentation. http://www.carbondefense.org/cdl_writing_7.html
7. Rekimoto, J. and Ayatsuka, Y.: Augment-able Reality: Situated Communication through Physical and Digital Spaces. Proc. of ISWC '98 (Pittsburgh, USA, 1998).
8. Rozier, J., Karahalios, K. and Donath, J.: Hear&There: An Augmented Reality System of Linked Audio. Proc. of ICAD '00 (Atlanta, USA, 2000).
9. Weiser, M. and Brown, J.: Designing Calm Technology. PowerGrid Journal (1996).
10. WiFisense project: http://wifisense.com
Telemurals: Catalytic Connections for Remote Spaces
goal here is to make it clear that the users' words are affecting the space, without necessarily requiring 100% accuracy of the speech recognition system.

A current implementation of Telemurals is shown in Figure 1. Silhouettes of the participants in the local space are rendered in orange. The participants at the remote end are rendered in red. When they overlap, that region becomes yellow. The aim of this cartoon-like rendering is to transmit certain cues, such as the number of participants and the activity level, without initially revealing the identity of the participants.

Figure 1: Current Telemurals implementation.

Participation is required for this communication space to work. To reinforce a sense of involvement, we provide the system with some intelligence to modify its space according to certain movements and speech patterns. That is, the more conversation and movement between the two spaces, the more image detail will be revealed to the participants at each end. The silhouettes slightly fade to become more photo-realistic. This prompts the participants to move closer into the space to see. If conversation stops, the images fade back to their silhouette rendering. We want the participants to choose their own level of commitment in this shared space [6]. The more effort they exert, the more they see of both spaces.
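The rendering and fading behaviour described above can be approximated by a simple compositing step. This is a simplified NumPy sketch rather than the installation's actual pipeline, and the colours and the activity-to-detail mapping are assumptions made for illustration.

import numpy as np

ORANGE, RED, YELLOW = (255, 120, 0), (220, 0, 0), (255, 220, 0)

def compose_frame(local_mask, remote_mask, local_rgb, remote_rgb, activity):
    """Blend cartoon silhouettes with photo-realistic video.

    local_mask/remote_mask: boolean arrays marking each side's silhouettes.
    local_rgb/remote_rgb:   the corresponding camera frames (H x W x 3).
    activity:               0.0 (silence) .. 1.0 (lots of talk and motion);
                            more activity reveals more photographic detail.
    """
    h, w = local_mask.shape
    cartoon = np.zeros((h, w, 3), dtype=float)
    cartoon[local_mask] = ORANGE                      # local people in orange
    cartoon[remote_mask] = RED                        # remote people in red
    cartoon[local_mask & remote_mask] = YELLOW        # overlapping regions turn yellow

    video = np.where(local_mask[..., None], local_rgb, remote_rgb).astype(float)
    alpha = np.clip(activity, 0.0, 1.0)               # fade factor
    frame = (1 - alpha) * cartoon + alpha * video     # silhouette -> photo-realistic
    frame[~(local_mask | remote_mask)] = 0            # background stays dark
    return frame.astype(np.uint8)

The actual installation performs segmentation and blending in its C and OpenCV pipeline, described later; the sketch only illustrates the mapping from activity to visual detail.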
Much thought has been given to the design of the renderings in Telemurals. We wanted to maintain the benefits of video in their simplest form. Adding video to a communication channel improves the capacity for showing understanding, attention, forecasting responses, and expressing attitudes [5]. A simple nod of the head can express agreement or disagreement in a conversation. Gestures can convey concepts that aren't easily expressed in words; they can express non-rational emotions and non-verbal experiences.

Yet these cues are not always properly transmitted. There may be dropped frames or audio glitches. Lack of synchronicity between image and audio can influence perceptions of, and trust in, the speaker at the other end. Other challenges include equipment placement. For example, camera placement has long been a cause of ambiguous eye gaze in audio-video links. A large camera offset gives the impression that the person you are speaking to is constantly looking elsewhere.

With Telemurals, we are creating an environment where rendered video maintains subtle cues of expression such as posture and hand motion, yet also enhances other cues. For example, changes in volume alter the style of the rendered video. By adding another layer of abstraction into the video stream, we can enhance cues in a manner that is not possible in straight video streams.

In this project, the abstraction of the person, the blending of participants, the graffiti conversation, and the fading from abstract to photo-realistic are the social catalysts for the experience. This new wall created by filtering creates an icebreaker, a common ground for interaction, and an object for experimentation. How will one communicate in this abstracted space? How will their behavior affect their appearance and the appearance of the setting? How different is communication using photorealistic vs. non-photorealistic video? The goal here is to create new styles of movement and speech interaction by providing a common language across the two spaces.

Telemurals currently connects two common area halls of the MIT graduate dormitories Ashdown and Sidney-Pacific. The Telemural in Ashdown is located to the right of the main lobby. In Sidney-Pacific, the Telemural is placed in a high-traffic crossway connecting the gym, the laundry room, and the elevators (see Figure 2). This connection came about as the under-construction Sidney-Pacific dormitory committee was looking to put public art in its public areas and create spaces that encourage students to gather. Ashdown, the oldest graduate dormitory on campus, was similarly undergoing renovations to create public spaces for social gatherings, and the two dormitories were open to the idea of creating a shared communication link. The sites within the dorms were chosen because they have traffic, are public to the community, and because a large video wall aesthetically blends into the space.

Figure 2: Telemural installation in Sidney-Pacific Dorm.

EVALUATION
This work combines the disciplines of technology, design, and communication. Evaluation of this work is therefore threefold.
Engineering
We evaluate whether the system functions. Does it work? That is, does it transmit audio and video? Is the sound quality acceptable? Are the video quality and speed acceptable? Are the interface and networks reliable?

Design
This is in the form of a studio critique. Professors from various architecture and design departments and research scientists have been invited and have volunteered to participate in a series of critiques.

Ethnography
The field for this observation study is the semi-public space within the two chosen dormitories. The participants are graduate students who live in the respective dormitory and their friends. We are primarily interested in seeing (1) how people use Telemurals, (2) whether the catalysts attract them, and (3) how we can improve the system.

DISCUSSION
As an engineering project, Telemurals works. It runs on the school network and typically uses less than 1 MB of bandwidth, with audio latency varying from 500 ms to 1 second depending on network usage. The networking, audio and image libraries are all written in C over UDP, and we use the Intel OpenCV library for image segmentation.

The video was reliable, the audio had acceptable lag, and the system ran continuously for over two months. The one technical challenge that could use improvement is the audio. Using just one microphone does not cover the intended space, and the acoustics of each space play a huge role. We are experimenting with microphone arrays and with physical objects, containing the microphone, that one interacts with.

Telemurals evolved throughout its construction and connected installation period. We experimented with several different renderings of people at each end, changed the fading algorithm, changed the hours of operation, and changed the Telemural wall site at Sidney-Pacific. These changes were made according to suggestions and critiques throughout a five-month period.

The Telemurals observation took place in May and June of 2003. Initially, Telemurals ran for two hours each Wednesday and Sunday night in conjunction with a coffee hour/study break. Signage was placed in the entryways of both spaces to describe what is being transmitted and the privacy concerns of the project.

We had requests from both spaces to increase the hours of the connection. Telemurals then ran every night for two hours, and then ran continuously twenty-four hours a day.

We performed three different types of observations:
• Observation while immersed in the environment
• Observation from mounted camera video
• Observation from abstract blended video
The footage from these tapes was used to annotate patterns of use for this study and was then discarded. Initially, we were interested in observing:
• How long people speak using Telemurals
• The number of people using the system at any one time
• The number of people present but not interacting
• The number of unique users (if possible)
• The number of repeat users (if possible)
• The number of times and the duration that people use Telemurals in one space only
• Repeated patterns of interaction: gestures, kicks, jumps, screams
These are factors that we believe are indicative of levels of interaction. However, one must always be open to the unexpected and attempt to find other underlying patterns as well in studying the social catalysts.

Privacy
When running such a project and study, it would be irresponsible to ignore privacy concerns. The audio and video transmitted in the Telemurals interface is not saved or stored in any way. We hope to mitigate this problem with proper signage.

Summary
The overall time schedule, social events, signage, trust, site selection, and a changing environment all proved to influence population mass at the Telemural sites. The motion of people, the ambient noise, and the graffiti stemming from your own words and those of your remote companions kept people at the site.

We discovered we had a larger population of use when Telemurals was up for shorter intervals of time. We believe it became more of an event - something that should not be missed. Nevertheless, we continued getting requests to run it continuously.

Dorm events such as meetings and social hours attracted large crowds. Oftentimes it was for comic relief; other times it was because of the quantity of people. One person at the Telemural, whether at the local or remote end, tended to attract more people. A wedding party proved to be the most interactive period, with children repeatedly running back and forth across the wall. Food associated with these events also attracted people. Moving food in the field of view of the mural made it a popular spot.

There was a tremendous difference with and without instructional signage. There was original signage explaining that Telemurals was an audio-video connection. However, people were not convinced, either because there was no one at the other end or because it was unfamiliar. Later, detailed instructional signage was added and usage increased fivefold. With time, people also began to trust and understand the link better.

Changing the Telemural site at Sidney-Pacific from a bright room with high ceilings and glass walls to the other corner, which was dimmer and closer to the elevators, similarly increased interaction levels. The new space provided more of a surprise, better mural visibility, and more time to interact.

Ashdown and Sidney-Pacific have an interesting history. A good number of the inhabitants of Sidney-Pacific lived in Ashdown the previous year. Some students arranged meeting times at their respective Telemural.
People preferred more abstracted silhouettes to photorealistic images, due to the ubiquitous nature of the connection.

The speech recognition algorithm provided positive feedback loops, with people doing their best to hit a successfully translated phrase, as well as comic relief.

Above all, having a person present at either end is a big attractor. This was even surprising during observation sessions when we sat near the Telemurals taking notes; we thought participants would be self-conscious about being watched. However, people would come just because someone was watching the wall.

Whether there was a person at the remote end or the local end, they attracted more people. William Whyte was right: "What attracts people most is other people". With Telemurals, we hope to facilitate that.

ACKNOWLEDGMENTS
We would like to thank Michael Bove and Stefan Agamanolis for their useful comments, the Sociable Media Group for our discussions, and the Things That Think Consortia for their support of this work.

REFERENCES
1. Agamanolis, S., Westner, A. and Bove, V.M. Reflection of Presence: Toward More Natural and Responsive Telecollaboration. Proc. SPIE Multimedia Networks, 3228A, 1997.
2. Bly, S. and Irwin, S. Media Spaces: Bringing people together in a video, audio and computing environment. Comm. ACM 36, 1, 28-47.
3. Galloway, K. and Rabinowitz, S. Hole in Space. Available at http://www.ecafe.com/getty/HIS/
4. Goffman, E. Behavior in Public Places: Notes on the Social Organization of Gatherings. New York: The Free Press, 1963.
5. Isaacs, E. and Tang, J. What Video Can and Can't Do for Collaboration: A Case Study. Multimedia '93.
6. Jacobs, J. The Death and Life of Great American Cities. New York: The Modern Library, 1961.
7. Jancke, G., Venolia, G., Grudin, J., Cadiz, J.J. and Gupta, A. Linking Public Spaces: Technical and Social Issues. Proceedings of CHI 2001.
8. Krueger, M. Artificial Reality II. Addison-Wesley, May 1991.
9. Pedersen, E.R. and Sokoler, T. AROMA: abstract representation of presence supporting mutual awareness. Proceedings of CHI '97.
10. Whyte, W.H. City: Rediscovering the Center. New York: Doubleday, 1988.
Fluidtime:
Developing an Ubiquitous Time Information System
Michael Kieslinger
Interaction Design Institute Ivrea
Via Montenavale 1, 10015 (TO) Ivrea, Italy
[email protected]
business factor. In some cases, time has been the only product that was sold.

Samuel Langley was one of the first ever to market time as a product, at the end of the 19th century [8]. His product was standard time, which was used to set a common time standard on which a train schedule could be based. Langley broadcast the observatory's time signal, and other cities paid him in order to receive and use this standard time.

Especially today, people are willing to pay for time, since it is a highly valued commodity. A survey done after the implementation of the new traffic information system in Turin [5] indicates that people would be willing to pay for real-time arrival information. A report by Alpern et al. [2] shows possible revenue models for how money could be made on real-time information for public transport.

In many operations, from travel services to home services, accurate time information is a by-product of upgrading the business operations to digital systems. It is the decision of individual management whether this by-product is offered to customers for free (in order to improve the service experience), or whether it is used to generate revenue. The future will show which models will work and which ones the customers will not accept.

Fluidtime
The Fluidtime project aims to contribute to these developments by finding engaging, convenient and effective means to view and interact with real-time information. Especially through advances in wireless Internet technology, it is possible to create ubiquitous access to real-time information.

Current systems have the drawback that they are not accessible through easy-to-use interfaces or products whether the customer is at home, in the office or on the move. For instance, travelers first need to go to the train station in order to find out if the train is delayed, or, in the case of SMS-based updating systems, any timing changes are not reflected until the next SMS is sent. If, however, every fluctuation in the schedule produces an SMS message, the recipients could easily be flooded with too many messages.

With the Fluidtime project, we want to investigate the use of ambient displays that reside in the background, or wireless mobile devices that allow the user to monitor the information constantly, in order to exploit the advantages of real-time information. We hope to build a pervasive information environment that is subtle and pleasing to use.

PROTOTYPE DEVELOPMENT
The Fluidtime system
We developed a time information system and interface prototypes in order to investigate the opportunities and impact of using real-time information. The system works by tapping into already existing real-time logistical information from bus companies and laundry services and making it available to the Fluidtime users via wired and wireless networks. Using SMS, e-mail and the mobile and stationary Internet, the service performs simple tasks, such as time monitoring and user reminding.

Prototype contexts
The two contexts that were chosen for the prototype development were the public transport system in the city of Turin [1] and the laundry service at the Interaction Design Institute Ivrea.

On average 20,000 people use the public transport facilities in Turin every day. Turin transport authorities have already implemented a system that tracks all the buses and trams. The first service prototype makes this data visible to travellers at home, at work or on the move. They can find dynamic information on mobile screen-based devices; while at home or at the office, people can get the same information on mechanical display units.

The second service prototype is a scheduling and time monitoring system to help Interaction-Ivrea students organise their use of shared laundry facilities. The 50 students and researchers share the use of a washing machine. Having to book a time slot, remember to bring the dirty laundry, keep the appointment in mind, and check the washing machine in the basement to see when it is finished all adds up to a less than comfortable experience.

Using different interface modalities, the service performs simple tasks, such as reminding users in the morning to bring their laundry to the Institute, or letting them know when their laundry slot is ready or their washing is done. Since the system knows the users' profiles and how busy the day is, it can adjust its behaviour regarding reminders, from strict to more relaxed.

Ambient devices allow the laundry users to monitor the progress of the machine and know when it is time to collect the laundry. The system prototype also allows users to take advantage of a free laundry slot with enough advance notice if needed. It does this by both checking the schedule and getting confirmation from those who are affected.
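As a loose illustration of how the reminder behaviour described above might adapt to a user's profile and to how busy the day is, consider the following sketch; the thresholds, profile flag and message wording are invented for this example and are not part of the Fluidtime system.

from datetime import datetime, timedelta

def plan_reminders(slot_start, bookings_today, wants_strict_reminders):
    """Return (when, text) reminders for one laundry slot.

    slot_start:             datetime of the user's booked slot.
    bookings_today:         number of slots booked today (a proxy for how busy
                            the shared machine is).
    wants_strict_reminders: True for users whose profile asks for strict nagging.
    """
    strict = wants_strict_reminders or bookings_today >= 6   # busy days leave no slack
    reminders = [(slot_start.replace(hour=8, minute=0),
                  "Bring your laundry to the Institute today.")]
    if strict:
        reminders.append((slot_start - timedelta(minutes=30),
                          "Your laundry slot starts in 30 minutes."))
    reminders.append((slot_start, "Your laundry slot is ready."))
    return reminders

for when, text in plan_reminders(datetime(2003, 10, 12, 14, 0), 7, False):
    print(when.strftime("%H:%M"), text)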
INTERFACES
The challenge for interface design was to create a simple and effective system of interactions. The intrinsic problem with time planning systems is that they require time to be used. On the one hand, they help us free up our time or organise our activities in a better way; on the other hand, they require time to be operated, thus reducing the overall effectiveness.

We developed two categories of interfaces: one that was meant to be mobile and accessed anytime and anywhere, and a second category that was stationary and designed to be used in the context of the home or office. It is worth mentioning that with the physical object interfaces we focused on exploring the quality of interaction and information representation. We don't see them as proposals for products that should be built and go to market tomorrow, but as explorations of some basic functionality and quality. Alternatively, using a generic mobile phone allowed us to explore interfaces that are on the market now and wouldn't require special investment from customers.
Mobile Interfaces
The interface is based on a Java software application that runs on a standard mobile phone (we used the Nokia 6610), connects to a server to get the real-time information, and then visualises the data.

We created an optional wristband that allowed test users to wear the phone interface on the lower arm, just like a regular watch. Once the application was activated, it allowed them to check any changes in the time information just by looking at the display, since the application was always on and always connected.

FI1 (Fluidtime interface 1): Perspective visualisation
The interface shows how far a certain pre-selected bus is away from the chosen stop (see Figure 1). The application permanently updates the visualization with data originating from the Turin transport authorities.

FI2: Iconic representation of time
An icon on the upper part of the screen indicates the state the user should be in in order to catch the next bus (see Figure 1). If the icon displays a tranquil character, the user can be relaxed. If the icon is a running figure, the user knows that the bus is due to arrive.

Physical object interfaces
FI6: Mechanical display unit with icons
This is an object for the transport context. It has the dimensions of a small hi-fi stereo (see Figure 3) and is meant to be placed in the home or office environment. Through the glass front of the object, the users can see small iconic representations of the buses that move from the background to the right-hand corner in the front. The position of the miniature bus tells the users how far the bus is from the bus stop. The user can configure the bus routes and stops through a web interface.

FI7: Mechanical display unit with shoes
This object looks just like a small shoe rack that people keep in their homes (see Figure 3). When the user activates the object, the movement of the shoes indicates the distance of the actual bus. If the shoes move slowly, the bus is still far away, and the user could walk slowly and still catch the bus. If the pairs of shoes start to "run", then the user would also need to run in order to catch the bus. Since the movement of the shoes creates an acoustic pattern, the user can listen to the information even when not in the same room as the object.
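The logic behind FI2's icon (and, analogously, the pace of FI7's shoes) amounts to comparing the bus's estimated arrival time with the time the user needs to reach the stop. A minimal sketch, with invented thresholds:

def display_state(bus_eta_min, walk_to_stop_min):
    """Map real-time arrival data to the state shown by FI2's icon
    (or, analogously, to the pace of FI7's shoes)."""
    slack = bus_eta_min - walk_to_stop_min
    if slack > 5:
        return "relaxed"   # tranquil character / shoes barely moving
    if slack > 0:
        return "walk"      # time to leave at a normal pace
    return "run"           # running figure / shoes start to "run"

for eta in (15, 7, 3):
    print(eta, "min away ->", display_state(eta, walk_to_stop_min=5))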
All of the candidates were commuters who used buses every day to travel to and from work. The candidates were interviewed three times during the trial period. These interviews aimed to capture their use habits regarding the prototypes, the functional value of the interfaces, the usability and aesthetic quality, and the emotional and social attitudes of the test candidates.

Looking at the use habits, we concluded that the interfaces were consulted on a daily basis. Each of the users interacted with them either on the way to the bus stop or once they had arrived. Only in cases where the apartment or office was close to the bus stop did they start the application before leaving the place. If people could estimate the time it would take them to get to the stop (e.g. using the elevator), they only started the application once out on the street.

Over time, the users gained experience in estimating their timings. One of the users adjusted her "leaving the office" routine over time. She would start the application and, if the bus was still distant, surf the web or chat with colleagues until the bus was due to arrive.

The application release used by the trial candidates did not allow the storage of frequently used bus routes and stops. This turned out to be the biggest handicap to adoption. As mentioned in the introduction to the interfaces section, if time applications take too much time to operate, the value gained from the time information does not equal the effort put into accessing the information.

A second usability handicap was the fact that the application can only be started from within the application menu of the mobile phone. Again, this effort is too big for it to be useful on a frequent or daily basis.

All users found the interfaces aesthetically pleasing and gave them marks like "beautiful", "cute", or "entertaining".

On a social and psychological level, the team found it interesting that real-time services not only support those who like to plan ahead and want to compare different route possibilities in order to save time or be more efficient, but also give people less inclined to plan more possibilities to seize the moment. This supported our hypothesis that time information devices don't necessarily save time, since this depends on the person using them. In the case of Fluidtime, the aim is to give people more control over time; it is the user's choice how to deal with the information.

CONCLUSIONS
The aim of the project was to develop a prototype infrastructure and a set of interfaces that allowed users to access real-time information in the context of everyday life (commuting and doing the laundry).

It was important to provide the fully functional prototypes to the test users in their particular everyday life contexts in order to study the direct influence of the new technology on their daily habits and rituals.

In ubiquitous computing environments, the flow of everyday interaction has to be as smooth as possible. The value gained from new applications is often not equal to the effort put into learning and using them.

Ubiquitous solutions are difficult to test in the everyday life context, since many factors influence the results of the investigation. Nevertheless, we found it particularly helpful to spend time with the users while they employed the system on the streets in their everyday environment.

ACKNOWLEDGMENTS
We thank the Interaction Design Institute Ivrea for supporting this work and the entire team that made it possible, including Joanna Barth, Crispin Jones, Alberto Lagna, William Ngan, Laura Polazzi, Antonio Terreno, and Victor Zambrano. We also want to thank the faculty and staff at the Institute for providing helpful comments and feedback for this project.

REFERENCES
1. 5T consortium Turin: http://www.5t-torino.it/index_en.html
2. Alpern, M., Bush, J., Culah, R., Hernandez, J., Herrera, E., Van, L. M-Commerce opportunities and revenue models in mass public transportation scheduling. http://www.alpern.org/files/MobileBus%20Report.pdf
3. Barth, J. Fluidtime survey: competitive research examples. http://www.fluidtime.net/download/Fluidtime_survey.pdf, 2002.
4. Beecham, L. Missed GP appointments cost NHS money. http://bmj.com/cgi/content/full/319/7209/536/c, 1999.
5. Brardinoni, M. Telematic Technologies for traffic and transport in Turin. www.matisse2000.org/Esempi.nsf/0/45bb8686703c9e3ac12566a40036100e/$FILE/5T.doc
6. Dyer, O. Patients will be reminded of appointments by text messages. http://bmj.com/cgi/content/full/326/7402/1281-a, 2003.
7. Kreitzman, L. The 24 Hour Society. Profile Books, London, 1999.
8. Levine, R. A Geography of Time. Basic Books, New York, 1997.
9. NextBus Inc.: http://www.nextbus.com
Pulp Computing
Tim Kindberg, Rakhi Rajani, Mirjana Spasojevic, Ella Tallyn
Mobile and Media Systems Lab
Hewlett-Packard Labs
Palo Alto, CA 94304 USA
{timothy, rarajani, mirjana, etallyn}@hpl.hp.com
Figure 2: Dynamic annotations of web-based images

Demonstration 2. Active Photos
An "active photo" is one with both web-based and printed representations, each of which has links to annotations in the form of text, audio, video or hyperlinks to web pages. The annotations may apply to the entire image or to parts of the image. The photos can be taken and annotated on the fly, e.g. during a meeting or at a conference (Figure 2). The annotations can be viewed from prints of the photos as well as on the web.

We shall demonstrate how a conference attendee can annotate themselves in a group photo with text such as their home page, or with a spoken message. Those annotations can be experienced on a public display at the conference or at any time from the printed "informal proceedings" of the conference (Figure 3). Printed images have certain advantages, such as being of higher resolution and easy to pass around and share. Our current implementation uses the Seiko InkLink technology [8] for identifying "hot spots" in a printed image.
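One way to picture the data behind an active photo, as described above, is a record shared by the printed and web representations that maps image regions to annotations. The class and field names below are illustrative assumptions, not the project's actual data model.

from dataclasses import dataclass, field

@dataclass
class Annotation:
    kind: str      # "text", "audio", "video" or "link"
    content: str   # the text itself, or a URL to the media

@dataclass
class ActivePhoto:
    image_url: str                                          # web representation
    print_id: str                                           # identifier resolvable from the print
    photo_annotations: list = field(default_factory=list)   # apply to the whole image
    region_annotations: dict = field(default_factory=dict)  # (x, y, w, h) -> [Annotation]

    def annotate_region(self, region, annotation):
        self.region_annotations.setdefault(region, []).append(annotation)

photo = ActivePhoto("http://example.org/group.jpg", "photo-042")
photo.annotate_region((120, 40, 60, 80),
                      Annotation("link", "http://example.org/~attendee"))

Resolving a pen stroke on the printed photo to a region key would then retrieve the same annotations shown on the web or public display.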
REFERENCES
1. Barton, J., Goddi, P., and Spasojevic, M. Creating and Experiencing Ubimedia. HP Labs Tech Report HPL-2003-38, 2003.
2. Filofax home page: http://www.filofax.com/.
3. Frohlich, D., Kuchinsky, A., Pering, C., Don, A., and Ariss, S. Requirements for photoware. Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work, November 16-20, 2002, New Orleans, Louisiana, USA.
4. Kindberg, T. Implementing physical hyperlinks using ubiquitous identifier resolution. In Proc. 11th International World Wide Web Conference.
5. Ljungstrand, P., Redström, J. and Holmquist, L. E. WebStickers: Using Physical Tokens to Access, Manage and Share Bookmarks to the Web. In Proceedings of Designing Augmented Reality Environments (DARE) 2000, ACM Press, 2000.
6. Paper++ site, ETH Zurich: http://www.globis.ethz.ch/research/paperpp/index.html
7. The Pulp Computing home page: http://purl.org/net/PulpComputing/home.
8. Seiko InkLink. http://www.siibusinessproducts.com/.
9. Sellen, A., and Harper, R. The Myth of the Paperless Office.
10. Siio, I., Masui, T., and Fukuchi, K. Real-world interaction using the FieldMouse. Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology, 1999, pp. 113-119.
11. Stifelman, L. Augmenting Real-World Objects: A Paper-Based Audio Notebook. In Proceedings CHI '96.
Living Sculpture
Yves Amu Klein, Michael Hudson
[email protected], [email protected]
Lorax Works, 12415 N. 61st Place, Scottsdale, AZ 85254
(480) 991-4470
process. In a sense, the principles of natural selection are applied to a Living Sculpture in progress.

A Living Sculpture should be interesting (appealing to our senses) not only in appearance but also in behavior. A sculpture should respond appropriately as people interact with it, and different people should provoke different responses from the sculpture. Living Sculptures are imbued with both instinctive behaviors and more complex responses. For example, an aggressive viewer may trigger a defense mechanism within the sculpture, while a gentle viewer may experience a more subtle and pleasing reaction. The more time the viewer invests in developing a constructive relationship with the sculpture, the more interesting the response should become. We attempt to make pieces that show depth of design as well as depth of behavior. Functionality and aesthetics are tightly linked within the works and create a sense of unity and homogeneity. Our audience will enjoy all these features of a living sculpture, which are implemented in Octofungi. However, it is sometimes difficult to construct behavior in a demo set-up, since the learning process can take hours.

THE PHILOSOPHY OF LIVING SCULPTURE
Living Sculpture is also a philosophical as well as a social-political quest that asks questions and challenges our existence with our creations. Creating life has been portrayed by many writers as a dangerous and insane enterprise. The idea of another intelligent life form frightens religious institutions, and they condemn it as profanity. History has shown that we as a species are both attracted to and afraid of the unknown. We feel that caution is definitely in order. However, if taken seriously, artificial life can help us understand our own existence. It may even provide us with a means to transform into our next evolutionary form, a being of the cosmos. Living Sculpture is about bringing these ideas with their pros and cons into a dynamic and interactive dialogue. Here the sculptures strive to understand us as we try to grasp their significance.

Some questions that we might ask ourselves include: Should we have Artificial Intelligence and Artificial Life in our society? If we do achieve artificial life forms, what rights should those life forms have? What is life? Is life dependent on carbon-based structures found on earth, or is it information, which is independent from its basic material?

OCTOFUNGI
Octofungi has eight legs and eight eyes and lives on a pedestal in a museum's gallery or a collector's favorite spot. Viewers interact with Octofungi by waving their hands over its eyes. Octofungi reacts to viewers by moving its body based on its interpretation of the viewer's actions. Octofungi is a reactive piece. It is sensitive to changes in light and reacts upon these changes. To interact with the sculpture, a person only needs to move his hands above the eight light sensors placed around the brain frame. Depending on the interaction of the participant, Octofungi will manifest different behaviors.

When Octofungi is at rest or in an emotionally neutral state, it is an inanimate object like a vase; Octofungi is a symmetrical sculpture in a classical way. To stimulate Octofungi, people cast shadows of varying degrees of darkness, speed, and direction relative to its eight photocell "eyes" by moving their hands above the eight light sensors placed around the brain frame.

The photocells use an analog-to-digital converter to convert the analog stimulus into a digital signal that is sent to a Kohonen neural network. The neural network knows its environment; it can distinguish between an intruder and a periodic activity such as a fan or a slight variation in light from a window. In the Kohonen neural network, the neurons compete to decide the winner, or next position. The network considers Octofungi's current state, including the position of its eight legs, each of which may be in 1 of 256 positions. Since Octofungi has eight legs and each leg can only accept a single winning position, our neural network can have from none to eight winners.

The winning positions are transmitted from the neural network processor to the shape memory alloy (SMA) driver processor via a serial line, which sends a pulse width modulated (PWM) signal to each of the eight legs. As the legs move, a digital encoder gives feedback on the position of each leg. The feedback is also sent to the neural network, where it is used to learn the discrepancies of the leg mechanisms and environment, such as friction and obstacles. The neural network acts as the brain by controlling the sculpture's form, giving it life-like behavior.
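A highly simplified sketch of the sensing-to-actuation loop described above: eight photocell readings feed a small winner-take-all layer (standing in for the sculpture's Kohonen network), the winning units set target positions for the eight legs, and encoder feedback is folded back into the weights. The weights, scaling and update rule are invented for illustration and do not reflect the real controller.

import numpy as np

N_LEGS, N_POSITIONS = 8, 256
rng = np.random.default_rng(0)
weights = rng.random((N_LEGS, 8))        # one competing unit per leg, 8 sensor inputs

def step(photocells, current_positions, weights):
    """One control step: pick winning units and derive a target for each leg."""
    x = np.asarray(photocells) / 255.0                  # ADC counts -> 0..1
    activation = weights @ x                            # each unit's response
    winners = activation > activation.mean()            # anywhere from 0 to 8 winners
    targets = current_positions.copy()
    targets[winners] = (activation[winners] / activation.max()
                        * (N_POSITIONS - 1)).astype(int)
    return targets

def learn_from_encoders(weights, targets, encoder_positions, gain=0.01):
    """Nudge the weights using the discrepancy reported by the leg encoders."""
    error = (targets - encoder_positions) / N_POSITIONS
    return weights + gain * error[:, None]              # crude corrective update

positions = np.zeros(N_LEGS, dtype=int)
targets = step([200, 40, 180, 90, 60, 220, 30, 120], positions, weights)
weights = learn_from_encoders(weights, targets, positions)
print(targets)

In the installation the resulting targets would be converted to PWM commands by the SMA driver processor; here they are simply printed.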
Is Octofungi Alive? Octofungi is quite a complex system, but it lacks elements that are indispensable to any life form. First, it does not look for food. If we consider the intake of energy as eating, Octofungi is fed intravenously, but it does not "know" that it needs to eat and it does not know how to get food. However, plants are also "wired" in a sense to the nutrients in the ground and to the sun's energy. Plants, however, know how to search for these nutrients and energy by moving their roots or leaves closer to the sources.

Octofungi's awareness at present is purely instinctual. It has no higher thought processes. It is probably comparable to a non-social insect such as a moth, or to a mollusk such as a snail. Although Octofungi presents some of the same elements as simple life forms, it still lacks full autonomy.
Only Octofungi's behaviors are presently controlled by its migratory birds. This is what Arius and many of its
will. Nonetheless, we believe that the line that separates descendents will be. A flock of hydrogen balloons that no
the living from the inert is fuzzier than we think. one wants to mess with due to their explosive contents.
The flock of Arius will float away along the pacific coast
as some strange birds from a different time. As creatures
THE FUTURE OF LIVING SCULPTURE of the sky and sea, they use the sun as their source of
Living Sculpture started by making simple reactive energy and water, and as their food. They can fly at great
sculptures. Today we are making evolvable behavioral art speeds with their hydrogen engines or glide peacefully at
forms and tomorrow we hope to bring more technologies sunset. They can communicate with each other using
to my pallet in order to show how biology and technology sounds and light signals or use GPS to provide their
could merge into our future forms. Here we describe four positioning via radio signals. However, most of the time,
of the projects to come: they will use their intelligence and senses to navigate.
Arius is one step closer to a fully independent creature.
Lumedusa
Lumedusa is an intelligent micro-robotic wearable Living Space Ribbon
Sculpture. Lumedusa is hydra/squid-like creature with six
tentacles that lives in a pendent containing its aqueous Space Ribbon is a sculpture that will travel in orbit around
solution. Lumedusa will be approximately 4-5 mm across our planet to remind us of the beauty of Mother Nature.
when fully assembled. Lumedusa is designed so that it can Our transformation begins in our mind. I hope to be able
be micro-machined as a flat device and then fold up into a to create a colony of space ribbons so that we can admire
3 dimensional robot using its EAP actuators. Lumedusa's their dance at sunset and sunrise. For a few minutes each
body will be composed of silicon plates connected by poly- day they will remind us that we should all play and smile.
pyrrol(PPy) micro-actuators and illuminated by poly- They will then disappear to re-introducing us to our
pyrrol(PPy) LED's. Lumedusa will be tethered to it's beautiful night sky. They will remind us not to pollute our
pendent via a flexible umbilical cord used to communicate atmosphere with light and chemicals, as we can already see
with it's brain, a micro-controller built into the back of the our night vanish in the glow of our cities. Space Ribbon is
pendent. composed of a body and two long ribbons made of electro-
active polymers that can bend like the tentacles of the
nautilus. Each side of the ribbon is covered with gold-
Cello plated electrodes that reflect the sun's light while at the
Our cells are a tight swarm of microorganisms all working same time move it's tentacles.
for the colony... us. As we eat we are essentially recycling
organic mater to repair our damaged cells and give birth to
ACKNOWLEGEMENTS
new ones. When the swarm matures it will spend a great
deal of time and energy for the sole purpose of creating a We would like to thank Martin Kirk and Jason Harris for
new colony. And then the cycle continues… Cello their work on Octofungi. Thanks to Keith Causey for his
embraces this idea. Cello is a sculpture that is made of work on the FORTH Board, our current embedded
hundreds of cells that can combine together according to controller. Thanks to John Kariuki for editing this paper.
their genetic code. During its lifespan, Cello will add cells We would also like to thank Dr. Elizabeth Smela and the
to the colony to grow or replace cells where there's Small Smart Systems Center (S.S.S.C) at the University of
damage. After a genetically dictated period of time Cello Maryland for their contribution to the Lumedusa Project.
will die so that other Cellos will live by reusing the cells.
Cello's cells are identical in shape but not in behavior. As
REFERENCES
the body is being formed cells become specialized to serve
the colony. A Cello begins life as one cell, but for Cello 1. De La Croix, H., Tansey R., and Kirkpatrick, D.
life never ends. Gardner’s Art Through the Ages vol. 2 Renaissance
and Modern Art. Harcourt Brace Jovanovich. New
York, NY. 1991.
Arius
Some Living Sculptures are wild creatures going on about
their business eluding us, like a pack of lions or a flock of
73
Place Lab’s First Step:
A Location-Enhanced Conference Guide
Anthony LaMarca (1), David McDonald (3), Bill N. Schilit (1),
William G. Griswold (4), Gaetano Borriello (1,2), Eithon Cadag (3), Jason Tabert (3)
(1) Intel Research Seattle, 1100 NE 45th Street, 6th Floor, Seattle, WA 98105
(2) Dept. of Computer Science and Engineering, University of Washington, Seattle, WA 98195
(3) Information School, University of Washington, Seattle, WA 98195
(4) Dept. of Computer Science and Engineering, UC San Diego, La Jolla, CA 92093
Figure 2: The main page of the Place-Enhanced Conference Guide presents images of interesting "sights" from around the conference venue (the conference hotel). The user's location is detected through WiFi hotspots that have been previously mapped. The content (images, factoids, opinions, and links) is both manually created and culled from the Web prior to the conference, then categorized, geo-coded, and placed in an install package. When a particular sight is selected, more detailed information is displayed. The entire web site runs without network connectivity and uses beacons from the last seen WiFi hotspot to approximate location.
only requires a mobile computer to have standard WiFi capability. Whenever a user's computer is in the presence of a beaconing access point, it looks up the MAC addresses of nearby hotspots in a cached directory and determines the user's location. This location can then be made available to local applications as well as to enterprise and web services. In the case of our demo, the user's location is used to present relevant tourist information such as historical nuggets, nearby restaurants, hotels and points of interest. Achieving the full Place Lab vision [6] requires that a number of technical and social issues be addressed, including: how to build, maintain, and distribute the hotspot database; privacy mechanisms to help users control who can see their location data; and how to code web content for its relevance to different locations.

THE CONFERENCE GUIDE APPLICATION
At UbiComp 2003 we are demonstrating a proof-of-concept system to launch our community development effort. We have developed a stand-alone system that conference participants can download and install onto their laptops that will give them a location-aware conference guide for the neighborhood that surrounds the UbiComp '03 venue, similar to GUIDE [3] and Cyberguide [5]. We now describe what we hope the user experience will be, how the demo system will be architected, and finally what we expect to gain from the experience.

In our demo, users will interact with the conference guide via a standard web browser accessing HTML pages. Sample pages are shown in Figures 1 and 2. All of the pages generated by the system will be coded for location relevance. The map view on each page will place the user on a map of downtown Seattle (or a detailed map of the conference hotel). The page will also present images of nearby locales. Users can drill down from the basic view to find interesting images, facts and opinions.

One of our concerns in designing the conference guide was that the location algorithms we are using provide rough-grained information. Although we expect that in time other researchers will apply better algorithms to improve this aspect of Place Lab, we knew that it was possible that position reports could be off by a city block or more! Our first interface had a text-based style and included specific descriptions of computed position. We decided to generalize the interface with imagery, including a map containing a few blocks, in order to avoid confusion if the positioning broke down.

The user should find the tool to be responsive to changes in their location, with a fairly high density of places of interest within a few blocks of the conference venue. To simplify the design of the system and to make it available to as many users as possible, the prototype will not require that the user have network connectivity. Rather, the content will be bundled with the software, allowing the Place Lab pages, as well as the pages for the places of interest, to be cached locally. The demo system the users install will contain four main components. The first is a WiFi spotter that can identify locally available base stations. The second component is a cache of web pages for the places of interest around the conference. The third is a customized database that maps both base station MAC addresses and web pages to geographic locations. Lastly, the system contains a small web server to make web pages available to a web browser.

When a user points their browser at the server port on their local machine, the web server runs a PHP script that takes the user's location as provided by the spotter and selects and renders an appropriate set of nearby places of interest. Using a local web server limits the types of services that we can provide, but eliminates a number of hard technical problems such as how to ensure users' privacy (not a problem since they are running locally), how to handle periods of disconnection (not a problem since we don't rely on connectivity), how to deal with a large database of MAC addresses (our database will be very small), and how to tie legacy content to physical locations (we plan to do this by hand for our sample corpus).

OFF SITE VERSUS ON SITE
One of the risks we see with our demo is that people may not take their notebook computers when they leave the conference center or hotel. In a way we are presuming a usage model of mobile workers who pull out their notebook computers in coffee shops, airports, copy stores, internet cafes and hotel lobbies, and this may not match the conference attendees. In an ideal world the conference guide would be provided on smaller devices that people do carry. In the future, as WiFi PDAs are more widely adopted, we will likely follow that avenue. In the meantime, we are very interested in learning how and why people travel with and open their notebook computers, and this demonstration will be an opportunity, through informal interviews, to gather data.

In order to give the conference guide experience to people who never take notebooks offsite, we will provide smaller-grained information and services keyed to the hotspots in the conference center and hotel. Much of this content will have to be created by hand, but we see a nice opportunity to create color and fun. For example, our team might photograph and interview the bartender and include a hotel bar page, as well as scan the hotel restaurant menus.

For location inside the conference center we will use the existing conference APs and may add "warmspot" access points with a smaller range. These additional beacons may be necessary to provide complete coverage of interesting areas.

CONCLUSIONS
Our demonstration will be continuous and will be available to volunteer users from the start of the conference. During the demonstration session, we will show the location-enhanced browser to all attendees and seek more volunteers. The feedback provided by these users will be invaluable for our research and development team. We will ask volunteers to release their usage (location and click) data to us for anonymous and aggregate analysis. (The release will be handled through a clear and concise form they can optionally sign when they download the demo. If they do release their usage data, we'll ask them to email us the data when they return home. No one will be required to divulge any data without their explicit permission.) We hope to learn about the coverage hotspots currently provide for the paths taken by users, and about the breadth and depth of web navigation at the different locations.

More importantly, however, our hope is to use this conference venue as a springboard for the start of our community-building effort and our approach to collaborative ubiquitous computing research. Our goal is to build awareness of our activities and enlist collaborators and users. This demonstration is a first step in an iterative design process that we expect to gather momentum and a wide range of applications.

REFERENCES
1. William G. Griswold, Patricia Shanahan, Steve W. Brown, Robert Boyer, Matt Ratto, R. Benjamin Shapiro, Tan Minh Truong, "ActiveCampus – Experiments in Community-Oriented Ubiquitous Computing."
2. Simon Byers and Dave Kormann, "802.11b access point mapping," Communications of the ACM, 46(5), 2003, pp. 41-46.
3. Cheverst K., Davies N., Mitchell K. & Friday A., "Experiences of Developing and Deploying a Context-Aware Tourist Guide: The GUIDE Project," Proceedings of MOBICOM 2000, Boston, ACM Press, August 2000, pp. 20-31.
4. Jim Krane, "Burgers, Fries, And Wi-Fi," Information Week, March 11, 2003.
5. Gregory D. Abowd, Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper, and Mike Pinkerton, "Cyberguide: a mobile context-aware tour guide," ACM Wireless Networks, 3(5), 1997, pp. 421-433, Kluwer Academic Publishers.
6. Bill N. Schilit, et al., "Challenge: Ubiquitous Location-Aware Computing and the Place Lab Initiative," to appear in The First ACM International Workshop on Wireless Mobile Applications and Services on WLAN Hotspots (WMASH 2003), San Diego, CA.
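To make the lookup pipeline above concrete, here is a minimal sketch of the idea: match the MAC addresses reported by the WiFi spotter against a pre-mapped hotspot directory, average the matching positions, and return the cached pages geo-coded nearby. This is an editorial illustration in Python, not the demo's PHP implementation; the directory contents, place database, radius, and function names are hypothetical.

```python
import math

# Hypothetical cached directory shipped with the install package:
# access-point MAC address -> (latitude, longitude) of the mapped hotspot.
AP_DIRECTORY = {
    "00:02:2d:aa:bb:cc": (47.6128, -122.3340),
    "00:02:2d:11:22:33": (47.6141, -122.3312),
}

# Hypothetical content database: each place of interest is geo-coded and
# points at a locally cached HTML page.
PLACES = [
    {"name": "Pike Place Market", "lat": 47.6097, "lon": -122.3422, "page": "pike_place.html"},
    {"name": "Seattle Art Museum", "lat": 47.6075, "lon": -122.3380, "page": "sam.html"},
]

def estimate_location(visible_macs):
    """Average the mapped positions of all recognized beacons; None if none match."""
    known = [AP_DIRECTORY[m] for m in visible_macs if m in AP_DIRECTORY]
    if not known:
        return None
    lat = sum(p[0] for p in known) / len(known)
    lon = sum(p[1] for p in known) / len(known)
    return lat, lon

def nearby_places(location, radius_km=0.5):
    """Select cached pages for places within a rough radius of the estimate."""
    lat, lon = location
    def dist_km(place):
        # Small-area flat-earth approximation; adequate over a few city blocks.
        dlat = (place["lat"] - lat) * 111.0
        dlon = (place["lon"] - lon) * 111.0 * math.cos(math.radians(lat))
        return math.hypot(dlat, dlon)
    return [p for p in PLACES if dist_km(p) <= radius_km]

if __name__ == "__main__":
    fix = estimate_location(["00:02:2d:aa:bb:cc"])
    if fix:
        for place in nearby_places(fix):
            print(place["name"], "->", place["page"])
```

A real deployment would presumably also weight beacons by signal strength and discard stale sightings, details the abstract leaves open.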
AuraLamp: Contextual Speech Recognition in an
Eye Contact Sensing Light Appliance
Aadil Mamuji, Roel Vertegaal, Jeffrey S. Shell,
Thanh Pham and Changuk Sohn
Human Media Lab
Queen’s University
Kingston, ON, Canada K7L 3N6
{mamuji, roel, shell, pham, csohn} @cs.queensu.ca
amount of eye contact made with an interlocutor [10].
Humans use eye contact in the turn taking process for two
reasons:
1) Eye fixations provide the most reliable indication of
the target of a person’s attention, including their
conversational attention [12].
2) Eye contact is a nonverbal visual signal, one that can
be used to negotiate turns without interrupting the
verbal auditory channel.
In many cases, the eye gaze of the user, as an extra channel
of input, provides an ideal candidate for ubiquitous devices
to sense when their user is paying attention to them, or to
another device or person. By tracking whether a user
ignores or accepts requests for attention, interruptions by
ubiquitous appliances can be made more subtle and
sociable. As demonstrated by Maglio et al. in a Wizard of
Oz experiment, when users interact with devices using a
speech interface, they do indeed tend to look at the device
at which the command is directed [4]. This principle is
known as Look-to-Talk [6], and it allows for devices to
deduce when to listen to the user.
AuraLamp
AuraLamp (Figure 1) illustrates an attentive gaze and speech enabled appliance, or EyePliance [7]. It is a lava lamp augmented with an eye contact sensor and speech recognition capability. By looking at the lamp, a person indicates attention to the device, thereby activating its speech engine. When the user does not look, its speech engine deactivates and does not listen to the user. This avoids problems of multiple appliances listening at the same time, removing ambiguity in user speech command interpretation. Since only one appliance is the active listener, users can use deictic references when referring to the device. Having only one of several appliances be the active listener allows the use of a single centralized speech recognition engine, as it greatly reduces the speech processing load for the total set of appliances. AuraLamp responds only to the two actions it is capable of: turning on and turning off. By switching the active speech recognition lexicon on the server to that of the EyePliance currently in focus, the accuracy of speech recognition is increased, while at the same time presenting the user with a small reusable vocabulary. AuraLamp is a model for how we may use visual attention with speech to interact with any household appliance. Each speech command in the lexicon is associated with an X10 home automation command. A serial interface routes these commands from the speech processing server to the electricity grid [13]. Over standard electrical wiring, the commands reach a simple controller unit capable of turning the appliance on or off. The X10 interface makes it easy to extend our interaction model to any appliance in the household.

Figure 1. AuraLamp light fixture with embedded eye contact sensor.

Sensing Eye Contact
AuraLamp senses the user's looking behavior through an embedded eye contact sensor mounted on top of the device. Eye contact sensors are cheap eye tracking input devices designed especially for implementing Look-to-Talk with ubiquitous appliances. Unlike traditional eye trackers, their only requirement is to detect the user looking straight at the device. We designed a sensor that can be built for less than $500, consisting of a camera that finds pupils within its field of view using a simple computer vision algorithm [11] (see Figure 2). A set of infrared LEDs is mounted around the camera lens. When flashed, these produce a bright pupil reflection (red eye effect) in eyes within range. Another set of LEDs is mounted off-axis. Flashing these produces a similar image, with black pupils. By syncing the LEDs with the camera clock, a bright and dark pupil effect is produced in alternate fields of each video frame. A simple algorithm finds any eyes in front of the camera by subtracting the even and odd fields of each frame [5]. The LEDs also produce a reflection from the surface of the eyes. These appear near the center of the detected pupils when the onlooker is looking at the camera, allowing the detection of eye contact without any calibration. Eye contact sensors obtain information about the number and location of pupils, and whether these pupils are looking at the device. When mounted on a ubiquitous device, the current prototypes can sense eye contact at up to a distance of 2 meters.
[Figure 2: on-axis LEDs and off-axis LEDs]
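A rough sketch of the bright/dark pupil subtraction described above, using NumPy: the field captured with the on-axis LEDs shows bright pupils, the field captured with the off-axis LEDs shows dark ones, and differencing the two isolates pupil pixels. The thresholds, the single-centroid grouping, and the glint test below are simplifying assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def find_pupils(even_field, odd_field, diff_threshold=40, min_pixels=15):
    """Locate candidate pupils by differencing bright- and dark-pupil fields.

    even_field / odd_field: 2-D grayscale arrays captured while the on-axis
    and off-axis LEDs, respectively, were flashed (assumed already aligned).
    Returns a list of (row, col) pupil centroids.
    """
    diff = even_field.astype(np.int16) - odd_field.astype(np.int16)
    mask = diff > diff_threshold   # bright-pupil pixels survive the subtraction
    if not mask.any():
        return []
    ys, xs = np.nonzero(mask)
    # Collapse all bright pixels into one centroid; a real system would label
    # connected components so that two eyes yield two candidates.
    centroid = (float(ys.mean()), float(xs.mean()))
    return [centroid] if len(ys) >= min_pixels else []

def looking_at_camera(pupil, glint, max_offset=3.0):
    """Declare eye contact when the corneal reflection sits near the pupil center."""
    dy, dx = pupil[0] - glint[0], pupil[1] - glint[1]
    return (dy * dy + dx * dx) ** 0.5 <= max_offset
```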
EyeReason allocates voice control only to the EyePliances currently in focus, so it allows duplicate voice grammar definitions across devices. EyeReason uses the Microsoft Speech API 5.1 SDK to implement these context-sensitive grammars through XML-based lexicons. Processing speech by AuraLamp through EyeReason involves two steps. First, the AuraLamp device driver detects activity information representing the attention of the user by polling the associated eye contact sensor over a TCP/IP connection. When a sufficient level of eye contact is detected, the driver loads the EyePliance's context-specific grammar. When an EyePliance driver activates its grammar, EyeReason automatically deactivates grammars for EyePliances not in the focus of user attention.

OTHER EYEPLIANCE PROTOTYPES
We have developed a number of other EyePliance prototypes that form part of the EyeReason architecture. Apart from Look-to-Talk interfaces, these include appliances that use eye contact sensing in novel ways to streamline interactions with the user with a minimum of interruptive requests for attention. The Attentive TV uses an eye contact sensor to determine whether someone is watching it [7]. If nobody is watching, the TV pauses its feed. When the viewer returns, the program resumes. This concept can generalize to other devices that are fitted to use visual cues of attention to perform meaningful actions. EyeProxy [3] is an attentive desk-phone that consists of a pair of actuated eyeballs augmented with an eye contact sensor. The proxy acts as a surrogate for a remote person's eyes. It demonstrates how a device like a phone may request attention from its user by simulating eye contact, rather than by producing a disruptive auditory notification. When a remote person wishes to engage in a phone conversation with the user, EyeProxy conveys that person's interest by orienting its eyeballs towards the user's eyes. The user can pick up the phone by producing a prolonged fixation at the EyeProxy. If the user does not wish to answer the call, he simply looks away.

We are currently in the process of evaluating the principle of turn-taking EyePliances. Initial results are encouraging, suggesting that the use of eye contact sensing to regulate communications with ubiquitous appliances may in fact improve the efficiency of verbal interactions.

CONCLUSIONS
We presented AuraLamp, an attentive gaze and speech enabled appliance, or EyePliance. AuraLamp is a lava lamp augmented with an eye contact sensor and speech recognition capability. The lamp listens to simple voice commands such as "On" or "Off", but only when the user focuses his attention on the lamp. AuraLamp demonstrates how ubiquitous speech-enabled appliances may enter into a turn taking process with the user, allowing the use of deictic references to refer to any appliance. Focusing the active speech grammar to that of the currently active EyePliance increases speech recognition accuracy, while at the same time presenting the user with a small reusable vocabulary.

REFERENCES
1. Duncan, S. Some Signals and Rules for Taking Speaking Turns in Conversations. Journal of Pers. & Social Psychology 23, 1972.
2. Horvitz, E., Jacobs, A., and Hovel, D. Attention-sensitive alerting. In Proceedings of UAI '99. Stockholm: Morgan Kaufmann, 1999, pp. 305-313.
3. Jabarin, B. et al. Establishing Remote Conversations Through Eye Contact With Physical Awareness Proxies. In Extended Abstracts of CHI 2003. Ft. Lauderdale: ACM Press, 2003, pp. 948-949.
4. Maglio, P. et al. Gaze and Speech in Attentive User Interfaces. In Proceedings of the Third International Conference on Multimodal Interfaces (2000). Beijing, China.
5. Morimoto, C. et al. Pupil Detection and Tracking Using Multiple Light Sources. Image and Vision Computing 18, 2000.
6. Oh, A. et al. Evaluating Look-to-Talk. In Extended Abstracts of CHI 2002. Minneapolis: ACM Press, 2002, pp. 650-651.
7. Shell, J. S. et al. EyePliances: Attention-Seeking Devices that Respond to Visual Attention. In Extended Abstracts of CHI 2003. Ft. Lauderdale: ACM Press, 2003, pp. 770-771.
8. Shell, J. S. et al. Interacting with Groups of Computers. Communications of the ACM 46(3), March 2003, pp. 40-46.
9. Short, J., Williams, E., and Christie, B. The Social Psychology of Telecommunications. London: Wiley, 1976.
10. Vertegaal, R. and Ding, Y. Explaining Effects of Eye Gaze on Mediated Group Conversations: Amount or Synchronization? In Proceedings of CSCW 2002. New Orleans: ACM Press, 2002, pp. 41-48.
11. Vertegaal, R. et al. Designing Attentive Cell Phones Using Wearable Eyecontact Sensors. In Extended Abstracts of CHI 2002. Minneapolis: ACM Press, 2002, pp. 646-647.
12. Vertegaal, R., Slagter, R., Van der Veer, G., and Nijholt, A. Eye gaze patterns in conversations: There is more to conversational agents than meets the eyes. In Proceedings of CHI 2001. Seattle: ACM Press, 2001, pp. 301-308.
13. X10 Home Solutions, http://www.x10.com, 2003.
The Ubiquitous Computing Resource Page
(www.ucrp.org)
Joseph F. McCarthy, Intel Research, 1100 NE 45th Street, 6th Floor, Seattle, WA 98105 USA, [email protected]
J. R. Jenkins and David G. Hendry, The Information School, University of Washington, Mary Gates Hall, Seattle, WA 98195-2840, {jrj4,dhendry}@u.washington.edu
An online resource for intelligent environments (www.research.microsoft.com/ierp) was created shortly after a symposium on that topic in 1998 [Coen, 1998]. This collection of material on projects, organizations, hardware and events relating to intelligent environments is clearly relevant to ubiquitous computing, but as it has not been maintained, it now represents a snapshot of the state of research in this area circa 1998-2000.

THE UBIQUITOUS COMPUTING RESOURCE PAGE
We are creating the ubiquitous computing resource page (UCRP), an online resource that can be used by researchers, educators, students, the press and the general public to learn more about the field of ubiquitous computing, to allow members of the community to share information easily outside of conferences, workshops and other gatherings, and to do so within a framework that allows easy maintenance via contributions from members of the community. The resource page is available at www.ucrp.org.

The development work is divided into two phases. The aim of phase I, collection development, is to assemble a collection of resources of sufficient size to attract members of the ubiquitous computing community. At present, the collection emphasizes breadth of coverage rather than depth, and primarily focuses on people, projects, and organizations. The aim of phase II, community development, is to enable the collection of resources to be self-sustaining through peer review and discussion. At present, people may submit resources to a moderator for inclusion. In the future, we would like to move to an open model with rich opportunities for open-ended collaboration and collection development.

The UCRP has been implemented in ASP.NET using the Community/Portal starter kit (available at http://asp.net/Default.aspx?tabindex=9&tabid=47). The starter kit provides a variety of features for content management and online communities.

Resources are organized into the following categories: People, Projects, Organizations, UbiSites, News, and Events. These categories are now described.

People
Each person listed in the UCRP is represented by their name, affiliation and publications. The person's name is linked to their homepage, the affiliation is listed, and publications are represented by links to other resources (currently the ACM Digital Library, www.acm.org/dl, and the DBLP Bibliography Server, www.informatik.uni-trier.de/~ley/db/index.html).

We have seeded the resource with members of the UbiComp 2003 Conference & Program Committees. The resource page is now open to any person who wishes to add themselves to the listing or submit resources. The site is moderated to prevent abuse.

Projects
Each project listed in the UCRP is labeled with its name, which is linked to the project homepage, and a brief description of the project, copied or adapted from text on the project home page or from a publication associated with the project. The space limitation is intended to ease scrolling through the list of all projects, which are listed in alphabetical order.

Although we have considered adding links to both people and organizations associated with the projects (unidirectional or bidirectional), our initial version does not have such links, in order to simplify the initial organization and subsequent maintenance of the collection.

We have further simplified the ontology represented by this collection by including large-scale initiatives – which incorporate many projects, sometimes from many organizations – in the list of "projects". Thus our notion of projects spans a broad range of endeavors by researchers in ubiquitous computing.

Organizations
Organizations represent another category with varying levels of granularity (as with projects and initiatives). For example, academic researchers are associated with their university, their school, their department, and any number of centers, groups or labs. Academic researchers with joint appointments have even more options, and researchers with joint appointments across industry and academia more still.

While representing a hierarchical organization structure in the UCRP that reflects the organizations in the ubiquitous computing community would be ideal, we believe the maintenance of such a structure would be burdensome, and so we are currently implementing a flat organizational structure.

UbiSites
UbiSites is a generic term used for resources that fall outside projects and organizations but have potential relevance to those interested in UbiComp. Example UbiSites include previous resource lists, online publications, and recommended reading lists. UbiSites are open to submission and are moderated.

News
News items can be posted on the UCRP, with a title, brief textual description, and a link to the source of the news item. Currently, only administrators may post news items, but we hope to soon allow anyone to submit a proposed news item to a moderator, and eventually to have the system be more self-regulated (for news, and in general).
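As a rough illustration of the flat, moderated model described in the preceding sections, the sketch below shows one plausible way to represent resources, categories and the moderation queue. The class and field names are hypothetical; the actual site is built on the ASP.NET Community/Portal starter kit, whose schema is not described here.

```python
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    PEOPLE = "People"
    PROJECTS = "Projects"
    ORGANIZATIONS = "Organizations"
    UBISITES = "UbiSites"
    NEWS = "News"
    EVENTS = "Events"

@dataclass
class Resource:
    name: str
    url: str
    category: Category
    description: str = ""     # brief text, adapted from the resource itself
    approved: bool = False    # set by a moderator before the entry is listed

@dataclass
class ResourceCollection:
    entries: list = field(default_factory=list)
    pending: list = field(default_factory=list)

    def submit(self, resource: Resource):
        """Submissions go to a moderation queue rather than straight to the listing."""
        self.pending.append(resource)

    def moderate(self, resource: Resource, accept: bool):
        self.pending.remove(resource)
        if accept:
            resource.approved = True
            self.entries.append(resource)

    def listing(self, category: Category):
        """Approved entries in a category, alphabetically, as on the site."""
        return sorted((r for r in self.entries if r.category == category),
                      key=lambda r: r.name.lower())
```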
Events
Events of interest are currently associated with a calendar on the UCRP. Events include conferences, workshops and other gatherings of interest to the community. As with news items, events may be submitted to a moderator for consideration.

Other Resources
While we hope the UCRP becomes the primary online resource for the field of ubiquitous computing, there are a number of other resources developed by other people and organizations that will continue to be useful to the community. We will link to these resources from the UCRP (and hope the UCRP, in turn, is linked to from them).

DISCUSSION
Methods used to gather resources include such popular search engines as Google, Teoma, and Vivisimo, as well as online publication databases such as DBLP and the ACM Bibliography. Initial search results have revealed previous attempts to create a UbiComp web page, bibliography or resource list, but most sites represent a small area or time-frame in the last five years, rather than a living collection.

There are a variety of community features provided by the ASP.NET framework which, for example, allow discussions, newsletter generation and e-mail updates, enable content to be syndicated with RSS, enable use of Web Services, and so on. As we gain experience with the UCRP, we hope to explore such features with the aim of developing a self-sustaining community.

Ubicomp.org, the home page for the annual Conference on Ubiquitous Computing, has many features that are intended to foster online community awareness and discussion, through its Community Directory and Discussion Forums sections. We hope to work with the webmaster so that the UCRP and ubicomp.org can complement each other effectively. Ubicomp.org has traditionally provided support for the conference, whereas we hope ucrp.org will provide greater continuity and persistence, and prove to be a valuable resource for a broader population.

One of our most basic challenges has been the determination of whether a resource is indeed related to ubiquitous computing. With so many definitions and terms – ubiquitous, pervasive, disappearing, sentient, ambient – it is difficult to characterize what is not an example of ubiquitous computing. We hope the UCRP will serve as a forum for discussing the very definition of ubiquitous computing.

ACKNOWLEDGMENTS
The authors gratefully acknowledge the contributions of the following people in helping us formulate and refine the ideas for the ubiquitous computing resource page: Gaetano Borriello, Sunny Consolvo, Anthony LaMarca, James Landay and Mike Perkowitz.

REFERENCES
1. Abowd, Gregory D., and Elizabeth D. Mynatt. 2000. Charting Past, Present and Future Research in Ubiquitous Computing. ACM Transactions on Computer-Human Interaction (Special Issue on HCI Research in the New Millennium), Vol. 7, No. 1, pp. 29-58.
2. Abowd, Gregory D., and Bill N. Schilit (organizers). 1997. Workshop on Ubiquitous Computing: The Impact on Future Interaction Paradigms and HCI Research, at the 1997 ACM Conference on Human Factors in Computing Systems (CHI '97).
3. Abowd, Gregory D., Barry Brumitt and Steven A. Shafer (Eds.). 2001. Proc. of the Int'l. Conf. on Ubiquitous Computing (UbiComp 2001), Atlanta, Georgia, September 2001. Lecture Notes in Computer Science, Vol. 2201, Springer-Verlag.
4. Borriello, Gaetano, and Lars Erik Holmquist (Eds.). 2002. Proc. of the Int'l. Conf. on Ubiquitous Computing (UbiComp 2002), Gothenburg, Sweden, October 2002. Lecture Notes in Computer Science, Vol. 2498, Springer-Verlag.
5. Coen, Michael (Ed.). 1998. AAAI Spring Symposium on Intelligent Environments. AAAI Tech Report SS-98-02.
6. Gellersen, Hans W. (Ed.). 1999. Handheld and Ubiquitous Computing: Proceedings of the First International Symposium (HUC '99), Karlsruhe, Germany, September 1999. Lecture Notes in Computer Science, Vol. 1707, Springer-Verlag.
7. Perlman, Gary. 1999. The HCI Bibliography: Ten Years Old, But What's It Done for Me Lately? ACM interactions, v.6, n.2, pp. 32-35.
8. Weiser, Mark. 1991. The Computer for the 21st Century. Scientific American, 265(3), pp. 94-104.
9. Weiser, Mark, and John Seely Brown. 1997. The Coming Age of Calm Technology. In Peter J. Denning & Robert M. Metcalfe (Eds.), Beyond Calculation: The Next Fifty Years of Computing. Springer-Verlag, pp. 75-85.
Proactive Displays &
The Experience UbiComp Project
Joseph F. McCarthy, David H. Nguyen, Al Mamunur Rashid, Suzanne Soroczak
Intel Research
1100 NE 45th Street, 6th Floor
Seattle, WA 98105 USA
{mccarthy,dnguyen,arashid,ssoroczak}@intel-research.net
the Alien Technology 915 MHz readers and tags. We may make provision for the inclusion of other sensing technology and/or communication protocols, such as Bluetooth [cf. Want, et al., 2002].

Application Clients & Servers
The RFID reader for each application will be connected to a local computer, which will run the application and access a server containing both profile information about the attendees and other sources of content that might be shown on the proactive display. The profiles will reside on a central server so that any updates made during the conference can be propagated immediately to the different client applications. Each application client will provide the capability for an administrator to stop the application, in case of unexpected and unwanted behavior.

Profile Creation & Maintenance
Conference attendees will be given the option to opt in to any or all of the proactive display applications by creating profiles during the registration process. No information will be used in proactive display applications unless an attendee provides explicit consent to use that information. Attendees will also be given the option of creating or modifying their profiles during the conference at a computer adjacent to the conference registration table, and at one or more kiosks in the Demonstration & Posters area of the conference.

PROACTIVE DISPLAY APPLICATIONS FOR UBICOMP
We plan to deploy three applications at the conference: AutoSpeakerID, which displays the picture, name and affiliation of a person asking a question at the microphone during a question & answer period following a paper or panel presentation; Ticket2Talk, which displays explicitly specified content (a "ticket to talk" [Sacks, 1992]) for any single person as he or she approaches a proactive display in the coffee break area; and Neighborhood Window, which displays a visualization of implicit or "discovered" content (from explicitly provided homepage information) for a group of people who are in the neighborhood of a proactive display in an informal, open area at the conference. These applications are described in more detail in the sections below.

AutoSpeakerID
After a paper presentation during UbiComp (and other conferences), people often approach a microphone stand in the audience to ask questions about the work described in the presentation. Everyone in the audience knows who the presenter is, but they don't always know much about the person asking the question. A diligent session chair may remind the questioner to state his or her name and affiliation, but this is often not the case, and even when encouraged to identify themselves, questioners' names or affiliations may not be heard clearly by others in the audience (especially if the questioner is hurrying to get to his or her question).

Since conference attendees ought to be prepared to state their name and affiliation, verbally, anytime they rise to ask a question during a paper (or panel) presentation, we propose to augment this common practice by using a proactive display as a visual aid. An RFID reader at the microphone stand will identify the RFID tag worn by the person approaching the microphone, and communicate this to the AutoSpeakerID application, which will, in turn, display a picture of the person, along with his or her name and affiliation, on a display near the front of the room.

Those who do not wish to have their profile information displayed when they approach a microphone stand can opt out of participating, at registration time or at any point during the conference using a kiosk in the registration area, or they may simply remove the RFID tag from their badge or leave their badge at their seat when they go to ask the question. They may also, of course, choose to "game" the system by wearing another person's tag.

We are, with this application and the others, very eager to learn whether, how and why people participate in the system.

Ticket2Talk
A paper / panel presentation session is a rather formal context in which to deploy a proactive display. We also have applications we plan to deploy in more informal contexts, such as a break area or a demo or poster session. One such application is Ticket2Talk, which will run on a large plasma display – in a portrait mode orientation [cf. Churchill, et al., 2003] – and cycle through visual content explicitly contributed by attendees that represents "tickets to talk": some visual marker for a topic about which the attendee would be happy to talk with someone. This may be a research poster the attendee is presenting at this, or another, conference, the cover of a recently published book, or a picture of a favorite pet, vacation spot or piece of art.

The ticket to talk will be displayed in the central region of the screen, with a picture and name of the attendee who posted it appearing at the top, and a collection of thumbnail pictures and names of other people whose RFID tags have been detected near the display appearing in a row at the bottom. Each image will be selected for display based on a priority determined by both the recency of the attendee's badge being detected (higher priority for more recently sighted badges) and the recency of the attendee's ticket having been shown (higher priority for less recently displayed tickets). Images will be displayed for a preset interval, probably in the range of 5 to 10 seconds. There will also be a time limit on the duration for which a ticket might be in the queue of potential content to display: although we want to focus on content for those currently gathered nearby, we also might maintain a small amount of "history" about people who have passed by recently.
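The scheduling rule just described (favor recently sighted badges, favor tickets that have not been shown for a while, and age people out of the queue) can be sketched as follows. The weights, timeout and function names below are assumptions for illustration; the abstract specifies only the two recency factors and the 5 to 10 second display interval.

```python
import time

# Hypothetical per-attendee state, keyed by RFID tag ID.
# last_seen / last_shown are POSIX timestamps.
attendees = {}

HISTORY_WINDOW = 120.0   # seconds a badge stays in the queue after last sighting (assumed)
DISPLAY_INTERVAL = 7.0   # seconds each ticket is shown, within the stated 5-10 s range

def sighting(tag_id, now=None):
    """Record that the RFID reader near the display just saw this badge."""
    now = now or time.time()
    attendees.setdefault(tag_id, {"last_shown": 0.0})["last_seen"] = now

def next_ticket(now=None):
    """Pick the tag whose ticket should be shown next."""
    now = now or time.time()
    candidates = {t: s for t, s in attendees.items()
                  if now - s.get("last_seen", 0.0) <= HISTORY_WINDOW}
    if not candidates:
        return None
    # Higher priority for more recently sighted badges, then for tickets
    # that have gone longer without being displayed.
    def priority(item):
        state = item[1]
        time_since_sighting = now - state["last_seen"]   # smaller is better
        time_since_shown = now - state["last_shown"]     # larger is better
        return (-time_since_sighting, time_since_shown)
    tag, state = max(candidates.items(), key=priority)
    state["last_shown"] = now
    return tag
```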
We will deploy this proactive display next to a table used for a coffee urn during a break. The serial nature of the movement of people through the line will correspond to the sequencing of tickets, providing each person who comes through the line – and who has chosen to participate – an opportunity both to learn more about those nearby in the line and to allow those same people to learn more about him or her.

The goal of this application (and Neighborhood Window) is to provide opportunities for conversation for attendees who do not already know each other. However, we also want to ensure plausible ignoreability, i.e., no one should feel compelled to strike up a conversation with a fellow attendee who happens to be nearby. By cycling through content, one can simply notice the stream of tickets without acting on any particular one. Even if the opportunity for direct conversation is not taken, we expect that the displays will contribute to raising the level of awareness about other attendees' interests – helping people learn things about their colleagues that they may later choose to act on (e.g., at a demonstration or poster session, or the conference reception).

Neighborhood Window
Another context in which we plan to explore the utility of proactive displays in a conference setting is the demonstration and poster session. Attendees often mill about such a session, forming ad-hoc groups as they cluster around a demonstration or poster of interest. The Neighborhood Window application will display a visualization of the interests of those in its vicinity, based on the collection of words found on their respective homepages.

Although we could simply run the Ticket2Talk application on a display in the demonstration and poster session, we want to take advantage of this context to explore other dimensions of proactive display applications (and people's experience with them). Neighborhood Window utilizes implicit or latent profile information that can be obtained through attendees' explicit profiles, and generates visualizations of this content based on the group that is nearby.

In addition to offering attendees the capability of providing their pictures, names, affiliations and/or tickets to talk, we also offer them the option of providing a link to their homepages in the registration process. An offline application then analyzes the content of their homepages, collecting words and phrases, and constructing a profile vector that can be used to select content that is likely to represent interests shared by those near the display, but not widely shared among the more general population.

For example, two UbiComp attendees approaching the urn may have references to "motes" or "ambient displays" on their homepages, and these phrases may be highlighted in the visualization that depicts people's names, associated words and phrases, and the links between them. Our goal is to provide opportunities for attendees to start topical conversations, or at least become more aware of the interests they share with others in the community.

EVALUATION
Our goal is to introduce technology to bridge the gap between people's digital profiles and their presence in the physical world, to enhance the conference experience for all. We are assuming that the applications we have designed will have a positive impact, but we will be carefully assessing the experience at the conference, to see how these applications impact attendees' experience – and why.

We want to allow others to learn from our experience, so that the community as a whole may be able to better design future proactive display applications, and other types of applications that seek to enhance the experience of groups of people using information from digital profiles.

Our plan is to collect data using both qualitative and quantitative methods. Observations and on-site interviews will be conducted throughout the conference. This data will then be coded and evaluated for trends and themes in interaction. A follow-up questionnaire will also be conducted to gauge the impact of the proactive displays on the attendees' overall conference experience, and to identify areas for further research and development.

RELATED WORK
Previous work [Woodruff, et al., 2001] has explored the use of technologies to encourage conversations among small groups during museum visits; we are seeking to broaden the context and scope of people who might engage in conversation, and to use situated, peripheral displays rather than handheld devices. Other researchers have explored the use of ambient displays [Mankoff, et al., 2003; Weiser & Brown, 1997] and other forms of public displays [O'Hara, et al., 2003]. We seek to extend this work through the use of sensing technologies (in this case, RFID) that enable public displays to be more proactive – responding to the people nearby, as well as to other elements of the local context.

GROUPCAST [McCarthy, et al., 2001] is an earlier application that runs on a large display that responds to the people nearby. However, GROUPCAST ran in a corporate environment where all the passersby were members of the same company (indeed, most were members of the same research group within the organization), and had profiles for approximately 20 people. We seek to extend this work by deploying applications in a less restricted context, with a much larger number of people from multiple organizations.

There has also been some other promising research into the use of technology to enhance the conference experience for attendees. The IntelliBadge system [Kindratenko, et al., 2003] included a suite of visualization applications based on aggregate information collected through active radio frequency (RF) tags worn by approximately 20% of the
attendees of the SC 2002 conference. As an example, one application showed the distribution of interests among the people attending each parallel session (e.g., the number of compiler people vs. middleware people, etc.). Our work explores applications that directly react to the small number of people in the vicinity of the displays, rather than showing more general, aggregate data regarding the overall conference population.

nTAGs (http://www.ntag.com, see also Borovoy, et al. [1998]) are devices that include infrared and radio frequency communication capabilities, as well as a small display and buttons for interaction. These devices have also been deployed at a conference, with a goal similar to that of our work (creating conversation opportunities and raising mutual awareness among the people attending the conference). We believe that the use of large, situated displays that react to RFID tags embedded in ordinary conference badges worn by attendees fits more closely into existing practices at conferences. Also, showing content that may spark conversations on a peripheral display leaves more room for plausible ignoreability – it is easier to glance at (and ignore) a display on the periphery than to ignore content shown on a display worn by a person in front of you – and thus will engender different types of interactions (and reactions) among the conference attendees.

Yet another approach to enhancing the conference experience is being explored by the SpotMe Conference Navigator (http://www.spotme.ch), a handheld device that people can use to detect other devices used by attendees with similar interests. The profiles used by SpotMe contain many of the same elements as the profiles we have designed, but as with the nTAGs, we believe that using a handheld device is less proactive, and deviates further from existing conference practices, than the use of displays that may show content on the periphery of attention.

One of the reasons we are planning extensive evaluations during and after the conference is to facilitate our ability to compare experiences with proactive displays with experiences with other technologies and approaches at other conferences.

CONCLUSION
We have designed a suite of proactive display applications intended to enhance the conference experience for attendees by providing conversation opportunities and fostering greater awareness among the community. UbiComp 2003, as a community that is exploring the use and implications of new display and sensing technologies, will provide an ideal venue in which to deploy these applications, assess their impact, and further the research agenda in this area.

ACKNOWLEDGMENTS
The authors gratefully acknowledge the contributions of the following people in helping us formulate and refine the ideas for the proactive display applications we propose to deploy at UbiComp 2003: Gaetano Borriello, Sunny Consolvo, Anind Dey, Anthony LaMarca, Sean Lanksbury, David McDonald, Eric Paulos, Trevor Pering and Bill Schilit.

REFERENCES
1. Barrows, Matthew. 2002. The Signs have Ears: Two Billboards will Scan Car Radios and Tailor Pitches to Match Listening Preferences. Sacramento Bee, November 24, 2002.
2. Borovoy, Richard, Fred Martin, Sunil Vemuri, Mitchel Resnick, Brian Silverman and Chris Hancock. 1998. Meme Tags and Community Mirrors: Moving from Conferences to Collaboration. In Proc. of the ACM 1998 Conf. on Computer Supported Cooperative Work (CSCW '98), pp. 159-168.
3. Chai, Winston, and Richard Shim. 2003. Benetton Takes Stock of Chip Plan. CNET (news.com), April 7, 2003.
4. Churchill, Elizabeth F., Les Nelson and Laurent Denoue. 2003. Multimedia Fliers: Information Sharing with Digital Community Bulletin Boards. To appear in Proc. of the Int'l. Conf. on Communities and Technologies (C&T 2003).
5. Gellersen, Hans-W., Albrecht Schmidt and Michael Beigl. 2003 (to appear). Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts. Mobile Networks and Applications.
6. Kahn, J. M., R. H. Katz and K. S. J. Pister. 1999. Next Century Challenges: Mobile Networking for "Smart Dust". In Proc. of the Fifth Annual ACM/IEEE Int'l. Conf. on Mobile Computing and Networking, pp. 271-278.
7. Kindratenko, Volodymyr, Donna Cox and David Pointer. 2003. IntelliBadge: Towards Providing Location-Aware Value-Added Services at Academic Conferences. To appear in Proc. of the Fifth Int'l. Conf. on Ubiquitous Computing (UbiComp 2003).
8. Mankoff, Jennifer, Anind K. Dey, Gary Hsieh, Julie Kientz, Scott Lederer and Morgan Ames. 2003. Heuristic Evaluation of Ambient Displays. In Proc. of the 2003 ACM Conf. on Human Factors in Computing Systems (CHI 2003), pp. 169-176.
9. McCarthy, Joseph F., Tony J. Costa and Edy S. Liongosari. 2001. UNICAST, OUTCAST & GROUPCAST: Three Steps toward Ubiquitous Peripheral Displays. In Proc. of the Int'l. Conf. on Ubiquitous Computing (UbiComp 2001), Lecture Notes in Computer Science, Vol. 2201, Springer-Verlag, pp. 332-345.
10. O'Hara, Kenton, Mark Perry, Elizabeth Churchill and Daniel Russell. 2003 (to appear). Public and Situated Displays: Social and Interactional Aspects of Shared Display Technologies. Kluwer Academic Publishers.
11. Sacks, Harvey. 1992. Lectures on Conversation. Basil Blackwell, Oxford.
12. Want, Roy, Trevor Pering, Gunner Danneels, Muthu Kumar, Murali Sundar, and John Light. 2002. The Personal Server: Changing the Way We Think About Ubiquitous Computing. In Proc. of UbiComp 2002: 4th Int'l. Conf. on Ubiquitous Computing, Springer LNCS 2498, pp. 194-209.
13. Weiser, Mark, and John Seely Brown. 1997. The Coming Age of Calm Technology. In Peter J. Denning & Robert M. Metcalfe (Eds.), Beyond Calculation: The Next Fifty Years of Computing. Springer-Verlag, pp. 75-85.
14. Woodruff, Allison, Margaret H. Szymanski, Paul M. Aoki and Amy Hurst. 2001. The Conversational Role of Electronic Guidebooks. In Proc. of the Int'l. Conf. on Ubiquitous Computing (UbiComp 2001), Lecture Notes in Computer Science, Vol. 2201, Springer-Verlag, pp. 332-345.
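The Neighborhood Window selection step described earlier, choosing terms shared by the people currently nearby but uncommon across the attendee population at large, resembles a document-frequency weighting. The following sketch is an editorial illustration of that idea, not the authors' offline analysis code; profiles are modeled simply as sets of extracted terms, and the scoring formula and thresholds are assumptions.

```python
from collections import Counter

def shared_but_distinctive(nearby_profiles, all_profiles, top_n=5):
    """Score terms by how many nearby people mention them, discounted by
    how common each term is across all attendee homepages.

    Each profile is a set of words/phrases extracted from a homepage.
    """
    nearby_counts = Counter(t for p in nearby_profiles for t in p)
    global_counts = Counter(t for p in all_profiles for t in p)
    total = max(len(all_profiles), 1)
    scores = {
        term: count * (1.0 - global_counts[term] / total)   # IDF-like discount
        for term, count in nearby_counts.items()
        if count >= 2                                        # shared by at least two people nearby
    }
    return [t for t, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]]

# Example: "motes" is shared by the two people at the urn but rare overall,
# so it would surface in the visualization.
nearby = [{"motes", "ambient displays", "hci"}, {"motes", "sensor networks"}]
everyone = nearby + [{"hci", "privacy"}, {"hci", "location"}, {"ambient displays"}]
print(shared_but_distinctive(nearby, everyone))
```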
Networking Pets and People
Dan Mikesell
Interactive Telecommunications Program
254 E. 7th St. Apt. 24
New York, NY 10009 USA
212 673 0696
[email protected]
Technology Prototype [Fig. 2]
Responsive Doors
Greg Niemeyer
Dept. of Art Practice
345 Kroeber Hall
Berkeley, CA 94720 USA
+1 510 642 5376
[email protected]
1 day, 1 week and 1 year. The averaged data generates an innovative graphic which displays the CO2 values inside and outside as segments of a ring. The display is animated: changes in the levels of CO2 generate an action surplus, an animation which dramatizes the change of levels and shows which climate, indoor or outdoor, features better air.

The subtle drama of the graphics is designed to reflect the subtle, but increasingly vital, question of air quality. In office settings, air quality is a significant factor in productivity, and the Responsive Door device can inform people about this question in an ambient, intuitive and aesthetic fashion. The Responsive Door device could also improve monitoring of air quality conditions in heavy industry settings, where CO2 deprivation or other air quality factors can lead to fatigue and fatal accidents. Even in a home setting, a front door with a Responsive Door device could enhance behavior: the device could inform occupants of the need to let in fresh air, or it could inspire them to bike to work on a day when air quality is poor outdoors.

Discussion
The relevant advantage of the Responsive Door device over normal air quality meters is the dramatized comparison between two sets of data. Traditional sensors answer the question: what is the air quality in parts per million of CO2? This device answers the question: which side of the door has better air, or will "win the air quality battle"? That question is much more accessible for most audiences, and also leads to more effective modification of behaviors. Nobody wants to be on the losing side of the battle.

The basic concept is to use game and entertainment principles, as well as narrative strategies, to engage viewers in the consideration of fairly dry data. The main purpose of narratives is to make information interesting, engaging and memorable, but most narratives deal with static information. Information technology can make stories out of real-time information in real time. Thereby, our highly developed sense of understanding stories can be invested in understanding difficult and abstract sets of data very directly.

This observation requires further study, as it provides a connection between traditional media, such as television, and information technology. In combining the two, dramatic renderings of real-time data could become the news of the future, and new media technology would be much more invested in the authoring of media content. In games, and particularly in pervasive games, players can generate narratives for information they acquire as the game unfolds. The narrative itself is an emerging history which makes information relevant and memorable.

Immediate applications of this concept are also conceivable for financial markets, where large sets of data often confuse observers, and the dramatic representation of such sets of data in real time would provide observers with rapid cognition and therefore competitive advantages.

Conclusion
This project has not been extensively tested at this time. First observations confirm that information can generate non-quantitative values such as ambiance, community, poetry, reflection, luxury and comfort if the interfaces for input and output are carefully tailored to non-quantitative interpretations of the information in question.

The question of developing such interfaces touches on three traditional areas of the arts in addition to the technology: Drama, Architecture, and the Visual Arts.

Qualities of such interfaces include action surplus, coupling, and responsiveness. Action surplus is the amplification a system provides for a user action; it can exceed, match or disappoint the user. Coupling is the presentation of intangible information through several tangible means, such as image and sound. Coupling can be too explicit and feel didactic, too vague and feel confusing, or "just right" and feel self-evident (no manual or wall label needed). Responsiveness is the speed of the interaction between a human body and a system. Responsiveness can be too slow: then it makes users think the system is dull. It can be too fast: then users feel the system is too hard to control. It can be "just right," in sync with human response rates and other patterns relevant to the human sensory system; then users feel the system is an extension of their own bodies, or can even feel the system as being "alive."

Resulting "just right" interfaces are specific to the types of information provided. One general problem with information technology is that the standard interface is not tailored to specific types of content. The resulting interface is not particularly well matched with any type of content; it is usually bland. Often, interfaces are also not thoughtfully contextualized within the environment of the interface. Artists and designers can help solve this problem by matching interfaces to content more deliberately, with greater aesthetic variation and with more consideration for the changing relation between a device, its interface and its context. For example, windows for different programs on a PC could look distinct; they do not all need to have the same fonts, frames and borders. Windows could also look different depending on the location of a computer: why does an interface look the same in Toronto as it looks in Tijuana?

The Responsive Door is a possible candidate for a successful match between content and interface. A door regulates indoor and outdoor relations. It is therefore a good site to place a device (coupling) which describes air quality inside and outside. The display itself is well matched to the door, with blue elements describing the
blue side of the door and red elements describing the red side of the door. The responsiveness of the system to changes matches that of the expected dissipation of air in a room. Neither irritating nor dull, the display elegantly draws the user's attention only if there is a dramatic change in air quality.

In conclusion, I think that the viability of using information technology for non-quantitative purposes depends on the degree to which the interface connects humans to information on human terms.

ACKNOWLEDGMENTS
I thank Intel Corp. and Dana Plautz for supporting this media art project, and I thank the following collaborators for sharing their expertise in realizing the Responsive Doors: Julie Daley, Ben Dean, Richard Mortimer Humphrey, Scott Snibbe, and Preetam Mukherjee.
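The comparison the Responsive Door dramatizes, which side of the door currently has better air and whether the change is large enough to warrant an animation, reduces to a small computation. The sketch below is illustrative only; the sensor interface, averaging windows, threshold value and the ring-segment rendering are assumptions rather than the installation's code.

```python
from statistics import mean

def air_quality_state(indoor_ppm_history, outdoor_ppm_history, change_threshold=50):
    """Summarize the indoor/outdoor CO2 'battle' from recent ppm readings.

    Returns which side currently features better (lower-CO2) air and whether
    the latest change is large enough to trigger the dramatized animation
    (the 'action surplus').
    """
    indoor_avg = mean(indoor_ppm_history)
    outdoor_avg = mean(outdoor_ppm_history)
    winner = "indoor" if indoor_avg < outdoor_avg else "outdoor"
    latest_change = abs(indoor_ppm_history[-1] - indoor_ppm_history[0])
    return {
        "winner": winner,
        "indoor_avg_ppm": indoor_avg,
        "outdoor_avg_ppm": outdoor_avg,
        "animate": latest_change >= change_threshold,
    }

# Example: stale air indoors, fresher air outside.
print(air_quality_state([650, 700, 780], [420, 415, 425]))
```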
Squeeze Me: A Portable Biofeedback Device for Children
and contact is made with the surface-mounted thermistor. Based on that reading (which is continuous), LEDs of a specific color appear (blue for cool, green for normal, yellow for slightly warm and red for warm). If the child squeezes a leg of the starfish, the color temperature feedback display shuts off and is replaced with multi-colored light patterns reflecting the child's hand pressure. The patterns can then guide a child through breathing exercises. The continuous readings and feedback allow the child to see if the exercises are working. Through repetition of the exercises, focused concentration on the activity, and the visual and tactile appeal of the device, the child may be calmed and/or distracted, with the likely result of reduced stress.

The portable device does not require a child to be tethered to a computer, giving him or her freedom to play in a relaxing environment. It also allows children to share what they have learned and compare their bio-readings.

PROCESS
Audience selection
After researching the characteristics of various age groups, we focused on ages 8-11. Children of this age are able to connect cause and effect, and are capable of logical and organized thought [2]. For this reason, we believed this age group to be ready to learn more about their bodies and able to conduct self-care activities. Research into the current toy market showed that this age group seems to be maturing away from babyish toys, but is still playful, curious and interested in natural forms.

Biofeedback consultation & research
In parallel with the audience research, we investigated the current state of non-invasive biofeedback applications for children. We consulted a pediatrician to understand the medical perspective on biofeedback. Her guidance led us to focus on a soothing and entertaining application and to steer away from a traditional therapeutic application. She felt that most of what exists is dumbed down and not engaging for children. She also advised considering more general feedback, rather than quantitative readings (e.g. temperature). The staff at Montefiore agreed with this sentiment, as they felt high temperature readings in particular might cause heightened stress in a child who was already ill and anxious. We then consulted a child psychologist who uses biofeedback, to understand her typical practices and patient needs. Subsequent web-based research provided us with examples of screen-based applications that were narrative or generally game-like.

After evaluating what we learned about current practices, we critiqued the existing methods and held several brainstorming sessions about characteristics we thought could improve upon existing solutions. We developed a few general concepts and then discussed them with peers, classmates, and instructors. We then presented one to the Montefiore hospital staff, including teen-aged volunteers who worked closely with patients in our age range, and who in some cases had previously been patients themselves. We focused in on the handheld, light-based feedback features, to allow children portability and also to offer clear but non-distracting feedback (sound and music were considered as feedback but identified as potentially disruptive in a hospital setting). Other concepts included a screen projector or installation for visual feedback, pulsing sound-based feedback, and a networked application to allow the devices to communicate with each other.

Design of form and interaction
Shape, materials and interaction design happened in parallel and simultaneous phases. Our team began materials research and quickly identified rubber for the object body. We had evaluated popular toys among our age group earlier in our process and recognized that squishy matter was popular with our audience. Thus we wanted to offer an inviting, touchable surface that would allow for squeezing.

We developed several prototype shapes, including both abstract and representative shapes in different silicone hardnesses. We tested the shapes with our classmates initially, to determine if certain shapes were more hand-friendly than others. We focused on an apple and several shells, as those seemed to be the easiest to hold. With some basic functionality -- light-based feedback responding to hand temperature -- implanted into the molds, we then tested more extensively with several children in our target age range.

The children responded well to the sticky and squishy traits of the Dragon Skin and favored the shells for shape. They also enjoyed seeing the lights respond to their touch. Many children tested the limits of our prototypes, squeezing as hard as they possibly could. A few commented that they wanted to see a response to their squeeze in addition to their hand temperature. (This made sense in the context of children without stress symptoms using the device.) With these observations, we decided to pursue both temperature and hand pressure feedback for our project. We also discovered that this age group liked a range of colored lights. One child did remark that he wanted to see his exact temperature. Since SqueezeMe is not intended to be a thermometer, but instead a general indicator, we noted this feedback but did not consider it enough to make it a development priority.

The shape was still in question, as we had not received much detailed feedback in that direction from the testing. We then asked several children in our target age group to play with some Sculpey clay. Their assignment was to create as many shapes as they wanted. The only requirement was that the shape be something they would like to hold, carry around with them, and play with whether they felt well or sick. All of the shapes produced were animal-related -- from dog bones to clam shells. The starfish model that came out of this 'test' provided our best option to date. It
was recognizable, hand-friendly for a variety of hand sizes,
and offered a good surface for visual feedback. The shape
then led us to our current functionality and we refined the
interaction. The thermistor could be mounted on the
starfish body, where we observed most people would touch
when picking up the object. The legs, which could be
squeezable, each had the potential to trigger a different type
of feedback.
Sensors and circuit construction
Our sensor evaluation included thermistors and galvanic
skin response (GSR) for the hand touch feedback; force
sensing resistors (FSR) and flex sensors for the squeeze
feedback. The first two attempts with thermistors brought
failure -- both types were too delicate to be touched
repeatedly, too slow in capturing and transferring the data,
and too sensitive for heat mounting to other parts of the
circuit. In consultation with YSI Temperature we selected
a high precision thermistor with a sizeable surface area that would withstand repeated touch and capture the temperature data quickly. Due to deadline limitations we only briefly evaluated GSR and were not able to produce reliable results. With the thermistor functioning, we decided to postpone GSR evaluation to later in the project lifecycle.

[Photo: Colored LEDs responding to 'normal' hand temperature]
For squeeze feedback, we looked at flex sensors first and
quickly dismissed them as an option. The ones we had
access to were too delicate for repeated squeezing and
bending. We tested various sizes of the FSRs and found
them to be reliable and responsive to the squeeze.
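As a rough illustration of the sensing logic described above (not the project's actual firmware), the sketch below maps a thermistor temperature to the four LED color bands and switches to pressure-driven patterns when the force sensing resistor registers a squeeze; the band boundaries and squeeze threshold are invented for the example.

def led_color(temperature_c):
    """Map a hand-temperature reading to the color bands described above."""
    if temperature_c < 31.0:
        return "blue"      # cool
    if temperature_c < 34.0:
        return "green"     # normal
    if temperature_c < 36.0:
        return "yellow"    # slightly warm
    return "red"           # warm

def feedback(temperature_c, squeeze_force):
    """Temperature display by default; squeezing overrides it with light patterns."""
    if squeeze_force > 0.2:                       # hypothetical FSR threshold
        return f"pattern mode (pressure={squeeze_force:.2f})"
    return f"steady {led_color(temperature_c)} light"

if __name__ == "__main__":
    print(feedback(33.2, 0.0))   # resting: steady green light
    print(feedback(35.1, 0.6))   # squeezed: multi-colored patterns follow the pressure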
Additional Testing
With a working prototype (see photos above right) we
demonstrated Squeeze Me for the hospital staff, and then
participated in a semi-annual group show at NYU.
Approximately 250 people interacted with Squeeze Me
during the show, from grandparents to children. Feedback
was positive, and observing usage was invaluable. Most people expressed interest in the device, and felt it would result in stress reduction. We were surprised at how roughly some children interacted with it, which led us to consider a more protected environment for the light circuit. Generally it performed well, with some issues with drift in thermistor readings due to heat that we're currently addressing.

[Photo: Pattern feedback responding to hand pressure]
PROJECT PARTICIPANTS
Development Team: Christine Brumback, Ed Guttman and
Amy Parness collaborated on concept, design and
development of this project.
Advisors: Dr. Jan Leupold (child psychologist), Marianne
Petit (ITP instructor), Dr. Kim Putalik (pediatrician), Jeb
Weisman and staff (Montefiore Children's Hospital),
provided insight and feedback on the conceptual, design
and behavioral aspects of this project. Ken Allen (YSI
Temperature), Tom Igoe, Greg Shakar, and Jeff Feddersen
(ITP instructors), provided technical advice and assistance
in sensor selection and circuit design.
The Personal Server:
Personal Content for Situated Displays
Trevor Pering, John Light, Murali Sundar, Gillian Hayes,
Vijay Raghunathan, Eric Pattison, and Roy Want
Intel Research
[email protected]
ABSTRACT
The Personal Server is a small, lightweight,
and easy-to-use device that supports personal
mobile applications. Instead of relying on a small
mobile display, the Personal Server enables
seamless interaction with situated displays in the
nearby environment. The current prototype is
supported by emerging storage, processing, and
communication technologies. Because it is
carried by the user and does not require data to
be either hosted in the local infrastructure or
retrieved from a remote web-site, it provides a
platform that increases users’ control over their
personal data. Furthermore, it enables additional
novel applications, such as a personal location
history, that would not be appropriate for the
computing infrastructure.
OVERVIEW
The Personal Server (PS) [1] is a system
designed to provide access to a user's personal applications and data, stored on their mobile device, through large-screen displays in the infrastructure. The device itself does not have a built-in display, allowing it to exist as a small, yet powerful, mobile device. By providing a flexible platform for personal information access, the PS concept explores issues in personal information control, trade-offs between mobility and situated displays, and environmental customization.

Figure 1: Personal Server Prototype

The Personal Server is designed to overcome several shortcomings of current mobile systems, some of which are listed below:

• Usability – most mobile devices have a small screen that makes it very difficult and inconvenient to access content. By enabling access through displays located in the nearby environment, the Personal Server allows the use of large-screen displays to access one's data without having to carry a bulky laptop around.

• Accessibility – the Personal Server enables quick and easy access from multiple potential access points, not requiring access through the device itself, which may be conveniently and safely located in the user's bag or pocket.

• Attention – the Personal Server platform is capable of automatically interacting with the local environment on the user's behalf, not requiring them to immediately respond to location-triggered events or notifications.

The underlying concept behind the Personal Server is creating and presenting an individualized digital presence surrounding the user, making it easier to access personal content and also allowing the environment to adapt to personal preferences. A crucial metric in evaluating mobile systems is often ease of use and the user's attention level. By allowing easy access through any nearby convenient display, and not restricting access to a phone or laptop, the Personal Server enables streamlined
ubiquitous interaction and thus ranks very highly with respect to the aforementioned metrics.

The current operational prototype of the Personal Server is an instantiation of the overall concept, and is designed to demonstrate the novel characteristics of the device. Although currently a stand-alone device, in the future the Personal Server may be integrated with other mobile devices such as a cell-phone, laptop, or wristwatch – providing the same functionality without burdening the user with an additional device. Rapid advances in three technology areas directly enable the Personal Server concept:

• High density storage – high-density storage technologies, both solid state and magnetic, are increasing at an extremely high rate, doubling approximately every 12 months.

• Power efficient processing – both the power efficiency and computational capability of embedded processors are rapidly increasing, enabling smarter and more powerful devices that also have longer battery lifetimes.

• Short range communication – emerging short-range wireless standards afford easy, low-power, ubiquitous point-to-point wireless connectivity.

Specifically, the current prototype has an Intel® XScale™ family processor, a Bluetooth™ wireless radio, and a compact flash slot for permanent storage. The resulting device is about the size of a deck of cards, and supports a full Linux distribution with up to 4GB of removable storage. As a baseline, it supports web-browser and file-share access, but is also capable of running any compatible client- or server-side application.
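To illustrate the baseline web access just mentioned, the sketch below shows how a situated display application might pull content from a nearby Personal Server over plain HTTP once the device has been discovered; this is only an assumed interaction, and the address and resource paths are hypothetical rather than the prototype's actual interface.

from urllib.request import urlopen

# Address assumed to have been learned through local wireless discovery (hypothetical).
PERSONAL_SERVER = "http://192.168.0.42:8080"

def fetch(path):
    """Retrieve one resource (an index page, a photo, a document) from the device."""
    with urlopen(PERSONAL_SERVER + path) as response:
        return response.read()

if __name__ == "__main__":
    index_html = fetch("/index.html")          # hypothetical personal home page
    photo = fetch("/photos/japan/001.jpg")     # hypothetical item from a photo collection
    print(len(index_html), "bytes of HTML and", len(photo), "bytes of image data")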
Three applications demonstrate the unique capabilities of the Personal Server:

• Personal data access – personalized content, such as a photograph collection, music collection, or working documents, can be stored on the Personal Server platform and easily accessed from nearby situated displays.

• Location collection – information from short-range beacons in the environment is collected and managed by the device, allowing for location-based services that do not constantly require the user's attention.

• Environmental customization – personal preferences, such as music selections or immersive game profiles, can be automatically transferred to the environment, allowing proactive customization of the immediate vicinity without direct user involvement.

These applications highlight how the Personal Server overcomes the difficulties with current mobile platforms by exploiting three important emerging technology trends. It provides a small, powerful, and non-obtrusive platform for supporting mobile interactions. As technology becomes more ubiquitous, the connection between mobile users and the environment around them will become more important, strengthening the need for personalized mobile systems, such as the Personal Server.

DEMO APPLICATION HIGHLIGHTS
For the conference demonstrations, the three applications mentioned above highlight the Personal Server's core capabilities: personal data access, location collection, and environmental customization. Multiple devices, each carried by, and associated with, a particular individual, provide the personalized content for each of these applications. By exposing the unique data contained on each device, these applications highlight how advances in mobile storage, processing, and communication can be used to enable new types of personal interactions.

For example, Fred's Personal Server may contain pictures from his recent vacation to Japan, a web-page describing him and his general interests, and his personal collection of rare bluegrass music. Additionally, the device could contain detailed research data describing his power and latency measurements of emerging wireless networking protocols. Also, his personal profile may indicate that he loves Thai food, hates coffee, and likes to browse through antique shops.

The personal data stored on Fred's mobile device can be easily accessed through any number of nearby situated displays, allowing convenient access to data without relying on a small-screen display. For example, Fred could walk up to an available display and show his friend a collection of photographs from Japan. Similarly, he could show another colleague his latest research results. Streamlining this basic interaction through a simple web and file-sharing interface supports a mobile lifestyle without requiring a bulky mobile platform, such as a laptop.

The second application, termed the Ubiquitous Walkabout, receives information from nearby information beacons and other
devices to form a picture of where users travel and who/what they have been around. Data is collected in real-time as the user passes by nearby points of interest, and can be viewed later on a situated display. Because the Personal Server gathers and records the data, users maintain control over their personal information: it allows them to track themselves, but does not require the trust of any third party or the use of infrastructure such as GPS. Additionally, since the system knows that Fred is partial towards Thai food and antique shops, it will highlight any Thai restaurants or antique stores he regularly walks by, but doesn't notify him about coffee shops.

Finally, the Personal Server provides a platform for customizing the music or audio present in communal spaces. Because of its significant storage capacity, Fred can store a considerable collection of bluegrass music on his device, creating, in essence, a "ubiquitous MP3 warehouse" that makes his music available through music players in the environment. Although his tastes in music are rare, he can listen to his music when he likes, even though he is not likely to find his favorite bluegrass playing on the radio. Furthermore, the environment can combine music from other nearby users to automatically mediate the music played in a particular space, customizing the local experience. This concept is similar to MusicFX [3], except that music is sourced from users' personal devices, instead of being provided through a centralized agency.

As an alternative to playing entire songs, the system can play a short sound chirp or show a representative graphic associated with each of the participants in the immediate vicinity, served from their mobile devices. For example, one person might choose the sound of a chirping bird, while another, a snare drum hit. This conglomeration of personal media signatures automatically constructs a dynamic environment based on the identity of nearby participants, creating an immediate and dynamic demonstration of environmental adaptation as individual participants come and go.

Current mobile devices already possess many of the technologies necessary to implement a Personal Server, such as processing, storage, and communication. However, accessing stored content through situated displays and other devices has yet to be fully explored. The Personal Server concept provides a platform that will spur many of these explorations and discussions.

SUMMARY
The Personal Server demo environment consists of several demonstration stations that detect and respond to devices representing individuals. The display stations, either in the form of large public displays or smaller touch-screen displays, will show content served from nearby users' Personal Server devices. At any given time, only a few devices will be in the vicinity of the display station, adapting the local environment to the preferences of nearby individuals.

The individual demonstrations have been selected to highlight personal control over information. Although it relies on public infrastructure to access content stored on the user's mobile device, the Personal Server controls access to personal data, providing a balance between mobile and ubiquitous computing. These demonstrations provide a concrete discussion point for conference attendees to explore ideas surrounding personal information control and access.

ACKNOWLEDGEMENTS
Brian Landry, Lamar Jordan (sp?) – Mtunes, Adam Rea (RFID), David Nguyen (Proactive Displays), Robbie Adler (iMotes)

REFERENCES
[1] R. Want, T. Pering, J. Light, M. Sundar, "The Personal Server - Changing the Way We Think about Ubiquitous Computing", Proceedings of Ubicomp 2002: 4th International Conference on Ubiquitous Computing, Springer LNCS 2498, Goteborg, Sweden, Sept 30-Oct 2, 2002, pp. 194-209.
[2] T. Pering, R. Want, J. Light, M. Sundar, "Photographic Authentication for Un-trusted Terminals", Intel Research; accepted for IEEE Pervasive Computing, Issue #5, March 2003.
[3] J. F. McCarthy and T. D. Anagnost, "MusicFX: An arbiter of group preferences for computer supported collaborative workouts", in Proceedings of the ACM 1998 Conference on Computer Supported Cooperative Work, pp. 363-372, ACM Press, New York, 1998.
Ambient Wood: Demonstration of a Digitally Enhanced
Field Trip for Schoolchildren
Cliff Randell Ted Phelps Yvonne Rogers
Department of Computer Science School of Cognitive and Computer Sciences School of Cognitive and Computer Sciences
University of Bristol University of Sussex University of Sussex
[email protected] [email protected] [email protected]
ABSTRACT
This demonstration shows parts of the Ambient Wood experience project which has taken place in an English woodland setting during the past year. The project provides a playful learning experience for schoolchildren on a digitally enhanced field trip. A WiFi network was installed in the woods to enable communication with PDAs, and a collection of innovative devices was designed to aid interactive exploration of the woods. Most of the devices that were employed are available for conference attendees to use along with a facilitator's terminal. A video of the schoolchildren using the devices in the woodland is also shown.

Funding for this work is received from the U.K. Engineering and Physical Sciences Research Council, Grant No. 15986, as part of the Equator IRC. Further support is provided by Hewlett Packard's Art and Science programme.

Figure 1: Using the probing device to find (i) moisture and (ii) light levels and (iii) reading the resultant visualisation on a PDA screen

Introduction
The Ambient Wood project is a playful learning experience which takes the form of an augmented field trip in English woodlands. Pairs of children equipped with a number of devices explore and reflect upon a physical environment that has been prepared with a WiFi network and RF location beacons. The intention is to provoke the children to stop, wonder and learn when moving through and interacting with aspects of the physical environment (see Figure 1). The children are able to communicate with a remote facilitator using walkie-talkies, and are sent questions and information by the facilitator over the network to handheld PDAs.

A variety of devices and multi-modal displays were used to trigger and present the added digital information, sometimes caused by the children's automatic exploratory movements, and at other times determined by their intentional actions. A field trip with a difference was thus created where children discover, hypothesize about, and experiment with biological processes taking place within a physical environment.

Two spaces were designed for the initial trial run, and each activity space offered its own aims with a focus on the different kinds of technologies and activities that have an overall link into habitat distributions and dependencies. These aims are: Exploring, Consolidating, Hypothesising, Experimenting, Reflecting. Pairs of children around the age of 10 years collaboratively discover a number of aspects about plants and animals living in the various habitats in the wood during a visit lasting around one hour. Their experiences are later reflected upon in a 'den' area where both pairs of children share their findings with each other and the facilitators. The children hypothesise about what will happen to the wood in the long term under various conditions, e.g. drought or lack of light through the trees.

Following on from a successful run late in 2002, the experience was enhanced for children visiting the wood in June 2003. Building on the experiences of the previous year we continued exploring our theme of augmenting the experience with digital tools. An 'Ambient Horn' was added to enable the children to have more control over when digital sounds within the wood were heard. The Horn provided a way to access sounds representing processes invisible to the eye, and events that had happened at a different time.

The Demonstration
The demonstration consists of most of the devices which were used as part of this project; a simplified wireless network which enables a remote facilitator's application to be shown in conjunction with handheld Jornada PDAs; and a display showing a video of the children using the devices in the woodlands. The devices, laptop and Jornadas are all interconnected and functioning as designed and used.

The Network Infrastructure
The project required that data should be collected by the children, that their positions in the woods be monitored, and that location-based information could be triggered. This was achieved by the use of 418MHz license-exempt transmitters with
facilitator’s laptop PC using the WiFi network. The cards
showed images of plants and wildlife; illustrations of natu-
ral processes such as photosynthesis; or alternatively could
pose questions to stimulate the children’s thought processes.
The facilitators were also able to monitor the progress of the
children through the woods by using a GPS tracking system.
GPS data [6].
Pinging Probe A Pinging Probe was designed to provide
interaction between the physical world, by sensing mois-
ture and light levels, and the digital world by graphically
displaying the results on the PDA. Again a simple data-
packet is constructed with bytes representing the values
measured and which type of measurement the children were
interested in as indicated by a rotary switch. The Pinging
Probe was set to transmit at 10Hz to ensure that there was
no detectable latency in the interaction.
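A minimal sketch of the kind of packet just described, packing the rotary-switch measurement type and the sensed value into a couple of bytes and emitting them at roughly 10Hz; the field layout and type codes here are assumptions for illustration, not the project's actual wire format.

import struct
import time

MOISTURE, LIGHT = 0, 1   # hypothetical codes for the rotary-switch position

def make_packet(kind, value):
    """Pack one probe reading into two unsigned bytes: measurement type, then value."""
    return struct.pack("BB", kind, value & 0xFF)

def parse_packet(packet):
    kind, value = struct.unpack("BB", packet)
    return kind, value

if __name__ == "__main__":
    for reading in (42, 44, 41):                  # pretend moisture samples
        print(parse_packet(make_packet(MOISTURE, reading)))
        time.sleep(0.1)                           # roughly the probe's 10Hz transmit rate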
Ambient Horn A novel audio player, the Ambient Horn, was designed to play tracks cued by Location Pingers, and to transmit ping notifications each time a sound is played. During the first run of Ambient Wood, experiments with hidden loudspeakers failed to generate consistent interaction with the children - the sounds were too ambient. This device was subsequently designed with the intention of providing the children with a greater level of stimulus using the prerecorded audio effects. The audio tracks were stored on a sound chip and then cued when a location trigger was received. The Horn produced a 'honking' sound and LEDs flashed when the new track was cued; the track played when a push button was activated. A physical horn extension provided both an organic metaphor for the device and encouraged the children to listen to, and to probe for, sounds (see Figure 3).

Figure 3: Children using the Horn, PDA and Walkie-Talkie.

Device Performance
The Pinging Probe device - used for both collecting and subsequent viewing of the data - provided a thoroughly engrossing experience. The pairs of children made frequent probes for both moisture and light, usually with one child doing the probing and the other holding the PDA, reading off the visualisation. Sometimes both children would look at the PDA screen together, and at other times the one holding it would tell the other what they had seen on the screen. The probe design was particularly successful as the digital information resulting from the children's activities was tightly coupled with the activity, and the children readily understood the connection between the two.

Initially the Location Pingers were less successful. While the technology performed as intended, we had engineered the digital information to be presented to the children in a more pervasive way, i.e. where their bodily presence in an area triggered the digital information to appear on the PDA, or sounds to be played through nearby wireless loudspeakers. In these contexts, the children did not have control, but relied on the serendipity of their movements as to whether they passed in the vicinity of the Location Pinger. The children were never quite certain when this would happen and were often surprised when they heard a sound or saw an image on the PDA screen. Part of our intention in using this pervasive technique was indeed to introduce an element of surprise and the unexpected. Another reason was to augment their physical experience, by drawing their attention to certain aspects of the habitat they might not have noticed otherwise, and providing relevant contextual knowledge that they could integrate with what they saw. Sometimes this approach worked, and the children related the digital information that was being sent to them on the PDA with what they saw in the wood in front of them (e.g. a real thistle). However, at other times, the children were too engrossed in another activity and so would miss the beginning part of a voice-over or not even notice a sound. In these moments, the children were often reluctant to switch their attention from what they were already doing to what was happening on the PDA.

The audio playing Horn device was designed to address this problem and was successful in giving control of the sound playing to the children. While this was less 'ambient', it still gave the opportunity for the serendipitous triggering of sounds and also enabled the children to replay particular sounds on request. The similar physical design of the Horn and Probe encouraged the children to seek sounds associated with locations by probing with the Horn. We repeatedly observed the children associating sounds with locations.

The GPS Pinger performed well, enabling positions to be recorded for all the children's activities. The need for the Dead Reckoning Pinger was largely obviated by the use of a high-gain active patch antenna with the GPS receiver. Nevertheless, initial results from the DR Pinger indicated that this approach could be useful in situations where poor GPS reception is experienced. Figure 4 illustrates the combined positioning performance of the GPS and DR Pingers. We also experimented with virtual location beacons created using the GPS data; however, these were found to be unsatisfactory due to inaccuracy, drift and occasional spurious readings.

The PAN, though simple with no protocol stack or handshaking, worked well partly due to the redundancy inherent in
the design. By setting the transmission rate of the Pinging Probes to be significantly higher than for the GPS and Location Pingers it was ensured that the Probes appeared to function with no latency and took priority over the other Pingers. Any delay in receiving a location ping was not critical as the user interaction appeared to be serendipitous in any case. The GPS pings provided a monitoring function and were not critical to the progress of the trials. While we estimate that around 5% of the pings were lost, in practice the users of the system were not aware of any latency or data loss in the PAN.

Figure 4: Aerial Photograph showing Position Sensing using GPS and Dead Reckoning. The white pixels represent the readings from a GPS receiver, the black pixels show the positions estimated by dead reckoning.

Contribution
This project is notable for its location away from any infrastructure whatsoever. It required careful consideration of power requirements and the effects of woodland on RF propagation under differing climatic conditions. It also benefited from a lack of any possible external RF interference. The range of uses of the Pinger technology is unusual and its integration to form a PAN for collecting minimal data packets extends the concept of using devices such as Smart-Its [7] and the Berkeley Motes [8] for the collection of pervasive data. The Probe and Horn devices both had great appeal to the children who enjoyed using them constructively to learn about the environment. We believe that these inventions may inspire others to develop further interesting ways of interacting with ubiquitous computing systems.

Acknowledgements
The Ambient Wood is an Equator IRC project and we thank our collaborators, especially Eric Harris, Sara Price, Paul Marshall, Hilary Smith, Mia Underwood and Rowanne Fleck of the School of Cognitive Sciences (COGS), University of Sussex; Mark Thompson, Mark Weal and Danius Michaelides of the Intelligence, Agents, Multimedia Group (IAM) at the University of Southampton; Henk Muller of the Department of Computer Science at the University of Bristol; Danae Stanton of the School of Computer Science at the University of Nottingham; and Danielle Wilde of the Royal College of Art. Thanks also go to the children and teachers of Varndean School who approached this project with such enthusiasm and without whom it would not have been possible.

REFERENCES
1. R. Hull, P. Neaves, and J. Bedford-Roberts. Towards situated computing. In Proceedings of The First International Symposium on Wearable Computers, pages 146-153, October 1997.
2. D. Wilde, E. Harris, Y. Rogers, and C. Randell. The periscope: Supporting a computer enhanced field trip for children. In Proceedings of The First International Conference on Appliance Design, May 2003.
3. B. Segall and D. Arnold. Elvin has left the building: a publish/subscribe notification service with quenching. In Proceedings of AUUG97, September 1997.
4. R. Bartle. Interactive multi-user computer games. Technical report, BT Martlesham Research Laboratories, December 1990.
5. E.F. Churchill and S. Bly. Virtual environments at work: on-going use of muds in the workplace. In Proceedings of the International Joint Conference on Work Activities Coordination and Collaboration (WACC99), pages 99-108, 1999.
6. C. Randell, C. Djiallis, and H. Muller. Personal position measurement using dead reckoning. In Proceedings of The Seventh International Symposium on Wearable Computers, October 2003.
7. L.E. Holmquist, F. Mattern, B. Schiele, P. Alahuhta, M. Beigl, and H-W. Gellersen. Smart-its friends: a technique for users to easily establish connections between smart artefacts. In UbiComp 2001: International Conference on Ubiquitous Computing, pages 116-122, September 2001.
8. J. M. Kahn, R. H. Katz, and K. S. J. Pister. Next century challenges: Mobile networking for "smart dust". In International Conference on Mobile Computing and Networking (MOBICOM), pages 271-278, 1999.
‘Wall_Fold’: The Space between 0 and 1
Ruth Ron [1]
Archi-TECH-ture
www.ruthron.com
+1 312 753 5064
[email protected]
ABSTRACT
The Wall_Fold installation analyzes personal space in the
contemporary reality of portable computing and wireless
communication. It conveys a more sensitive and complex
environment than the typical Modernist white cube. The
physical architectural element generates an ambiguous
spatial condition: smooth and flexible folds between the
inside and the outside, open and closed. The space thus
becomes continuous and dynamic.
Six pairs of servomotors, connected by flexible bands,
create a smooth surface. The motors alternate between two
positions (0° & 180°), stretching the binary ON/OFF into a
continuous transition, a whole grayscale or gradient
between 1 and 0.
Keywords
Personal space, Smooth space, interactivity, installation
INTRODUCTION
Wall_Fold is a theoretical prototype for a ‘smart’
architectural partition with programmed behavior and
changing patterns. It may suggest a domestic or public interior wall partition, or an interactive opening. It can be developed further into a full three-dimensional spatial version. The installation generates a subjective, hybrid, flexible, immersive and dynamic personal space. It leaves the existing Modern space intact and undermines it with digital media.

CONTEXT AND BACKGROUND
Modern architecture
Modern architecture, which includes many of the spaces we inhabit today, emerged out of the industrial revolution. It is based on standard, industrialized, rational, functional, efficient and orthogonal spaces. It evolved from Le Corbusier's 'Radiant City' and the C.I.A.M (Congrès Internationaux d'Architecture Moderne, founded in Switzerland in 1928) proposals for 'The Functional City'.

In contrast to the traditional city patterns, Le Corbusier envisioned hygienic, regimented, large-scale high-rise towers, set far apart in a park-like landscape. His rational city would be separated into discrete zones for working, living, transportation and leisure [2]. Consequently, C.I.A.M committed itself to standardized functional cities with a similar scheme at its 1933 congress [3]. These ideas had a profound influence on public authorities in post-war Europe. The rigid coding scheme was adopted in many urban reconstructions after World War II. However, the functional planning strategy was later criticized for being inhuman, inhospitable, socially destructive and damaging to the urban fabric.

Zoning
In the practice of 'Urban Planning', the preparation of zoning maps and strict coding documents is still the standard and most common approach to planning. In response to the increasing criticism of the crudeness and rigidity of modernism, the four categories of C.I.A.M - dwelling, work, recreation and transportation - were extended to include more groups, such as industry, commercial district, retail, natural resorts, public services and more. 'Mixed-use' areas started to appear on the planning maps, breaking the zoning blocks into finer grains. For example, the same building was divided into commercial areas on the lower floors and residential areas above them.

Zoning in 'my scale'
At present, technological and communication developments, such as the Internet, wireless phones, modems and hand-held computers, have a major impact on our lives. The work environment has been tremendously
influenced; a large part of the work is done with computers, and Internet connectivity has altered communication with clients and co-workers. Time and place are now much more flexible (24/7). Our social lives and leisure time are changing as well.

The modernist zoning (the assignment of human activities into separate areas) has become obsolete. In the same manner, the functional Modern apartment design, which 'zones' family activities of leisure, work, eating, resting and bathing, must be adjusted. With increased possibilities to stay at home (for work, education, communication and more), the design of the personal space needs to change. Technology is getting closer to the personal scale and at the same time allows the individual to connect to 'everyone' from 'everywhere', as a node in the global network. Our customized and intimate relationship with technology should challenge architecture to evolve from the 'standard' and 'universal' values of modernism to support these new needs and living patterns.

Modern architecture was characterized by its reference to new building materials, such as steel, concrete and glass, and by the industrialized production process that became available due to the technological inventions of that era. In the same manner, contemporary architecture should respond to the current technological developments in computation and communication, which affect our everyday lives. This project employs micro computing and sensors to explore new ways of architectural expression.

Alternative contemporary theories
Looking for alternative theories for complex and sensitive spaces, I turned to the French philosopher Gilles Deleuze and the contemporary architect and theorist Greg Lynn.

Gilles Deleuze
In '1440: The Smooth and the Striated' [4], Deleuze and Guattari define a 'smooth space', in contrast to a 'striated space', as amorphous, heterogeneous, nomad, intensive, rhizomatic and haptic. They point out that in reality the 'smooth space' co-exists in a mixture with the 'striated space'.

Greg Lynn
In 'The folded, the pliant and the supple', Greg Lynn recounts the advantages that architecture can gain by introducing 'smooth' systems: "Pliancy allows architecture to become involved in complexity through flexibility. It may be possible neither to repress the complex relations of differences with fixed points of resolution nor arrest them in contradiction, but sustain them through flexible, unpredicted, local connections" [5]. The fold encourages architecture to become more sensitive to the complex changing needs of the contemporary person, and the 'smooth mixture' allows continuous co-existence of different conditions, while maintaining their identity.

Context in Contemporary Art
The work of some contemporary artists can serve as precedents for the formal approach, the space transformations and the use of new technology in artwork.

Contemporary sculptures
James Turrell's installations are powerful examples of space deformation with immaterial assets. They succeed in altering the viewers' perceptions of air, light and shape. He creates conditions that are neither 'object' nor 'image' and manipulates space using light and form.

Gordon Matta-Clark explored architecture's inextricable relationship to private and public space, urban development and decay. His provocative approach to conventional building and his social criticism undermined the rationale and function of buildings, using 'negative' actions like subtracting material from walls and floors. His site-specific installations, the "building cuts," in which he cut into and dismantled abandoned buildings, created unexpected aesthetic qualities, views and accessibility in an unconventional spatial way.

Anish Kapoor creates curved biomorphic shapes that exist as an indeterminate form between object and space. Many of his pieces have been incorporated into the walls and floors of exhibition areas. He intends to provoke the audience into a permanent doubt about the way it comprehends reality. The theme of duality reappears in many of his pieces: positive and negative, physical and mental, present and absent, form and non-form, light and dark, male and female, place and non-place, solid and intangible.

Kapoor has often incorporated into large-scale works more literal versions of interiority, being drawn repeatedly towards the use of concave and convex shapes to create areas of emptiness, pockets of absence within dense material. His work challenges our sense of natural boundaries, interior and exterior, and undermines the conventional space with new geometry. He establishes physical precedents of 'smooth' space deformations.

Kinetic and Electronic Art
Kinetic art explores how things look when they move. It is about processes of motion and evolution. It creatively employs inert materials as carriers of forces, so as to extend three-dimensional works beyond the static occupation of space into time and motion. Some kinetic sculptures engaged the viewers' interaction with moving forces, and are generally regarded as precursors to the digital, computer, and laser art of today.

The artist Alan Rath [6] manipulates electronics as both formal and metaphorical elements. He creates inventive sculptures that comment on the symbiotic relationship between humans and machines. Unlike the mobility in many kinetic works, which depends on chance elements such as air movement and temperature, Rath programmed his machines to 'understand' and respond to
their environment. Some of his sculptures are programmed to move in response to the presence of people around them. Some robots interact with each other and some have an algorithm of randomness. His work focuses not only on the movement of the sculpture but on its behavior and movement patterns -- how it reacts and actively responds to the dynamic environment and the viewers.

Rath's work is an example of robotic aesthetics that embodies human gestures and organic qualities. His work choreographs form, movement and interaction to create new meaning.

The Wall_Fold installation is interested in continuing Rath's investigation, using new media and virtual space to convey doubt in, and to deform, real space. It explores the alteration of the physical space through the use of digital media. This allows me to add new attributes to architecture, such as interaction with the viewer, dynamic changes over time, sound, movement and immateriality, while preserving the physical nature of the space itself.

Previous investigation
In my previous work [7] I investigated the relationship between architecture and media, while criticizing modernism's rigidity and reductiveness. I have experimented with two main strategies (or platforms):

Web Art: bringing space into media
Extension of screen-based applications by exploring three-dimensional (3D) space and navigation (using 3D modeling software, animation and interactive programs, such as Maya, Flash and QTVR).

Example: VOLUME 1.0 - 2002 [with Inbar Barak]
The term volume refers to the intensity of sound and to the dimensions of a space. In this work, the volume is interpreted in the same duality - SOUND and SPACE are defining, and evolving around, each other. The position of the sound-object deforms the space by changing its perspective and depth. In return, the transformation of the space influences the sound level and panning. This project simulates reality by positioning sound in space, and at the same time extends the real into the potential of the virtual, by allowing the user to move the usually static space around the sound-objects.

Installations: bringing media into space
Merging and overlapping real and virtual, in an attempt to deform the architectural space by using images and 3D models (Maya, VRML, Director, C, sensors). These installations took advantage of the efficiency and availability of the Modern space and undermined it, while leaving it intact and trying to activate Deleuze's 'retroactive smoothing' [8]. Modern space and media were blended together to create smooth space and extend their dimensions beyond the traditional perception (i.e., media was materialized into a three-dimensional space and the Modern 'white cube' was stretched beyond its limited orthogonal, rigid characteristics).

Example: FluxSpace, Ross Gallery, New York, 1999 (Maya/VRML 3D animation, projectors and speakers) [9]
Using a 3D virtual model of the gallery, and projecting it back into the same space, real and virtual spaces overlapped. The superimposition of sound, light, text and color reconstructed, distorted and deformed the virtual model, and thus influenced our perception of the real space. The gallery functioned as a filter of data and media. The project allowed the viewer to be simultaneously in real and virtual spaces and to perceive these spaces from the inside, as an immersive environment, rather than as a detached spectator. The gallery was projected with a rendered reality and was in a constant state of flux.

INSTALLATION
Concept
The goal of 'Wall_Fold' is to create a 'smart' physical architectonic element with programmed behavior and changing patterns, in order to generate visual and tactile qualities. Computation and media are used in a physical way, trying to achieve a subjective, temporary, hybrid, flexible, immersive and dynamic personal space.

This installation takes advantage of the availability, efficiency and rationality of Modern design. At the same time, it criticizes the rigidity and stiffness of Modern architecture. I propose a strategy which opposes the basic approach of Le Corbusier and the modernists ('destroy and rebuild'): leave the Modern space intact and 'undermine' it with digital media. This is the act of smoothing out ("retroactively") using embedded computers (micro controllers).

Development
First prototype: one-dimensional LED sequence
An experiment with a simple system of two micro controllers (e.g. a microprocessor which operates as an embedded system; in this case I used a PIC 16F877) connected by wires to each other, and light-emitting diodes (LEDs). The micro controllers were programmed with a simple logic code, which consisted of 'IF' statements, and sent 0 or 1 signals between them. Every time a signal was sent (0 or 1), the program turned a correlated LED ON or OFF. An adjustable delay period was set by the viewer's input (in this case, using a potentiometer: a component with variable resistance). This experiment created a closed, linear, binary system: the LEDs turn ON or OFF in a sequence over time. It was a one-dimensional situation: LED = point (0 dimensions) turning ON/OFF on the axis of time, while the state of each LED was determined by the state of its adjacent LEDs.
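A rough software analogue of this first prototype (not the actual PIC firmware) is sketched below: a 0 or 1 signal is passed along a short chain, each received signal sets the correlated LED, and a viewer-adjustable delay paces the sequence; the chain length, delay and step count are illustrative assumptions.

import time

def run_sequence(num_leds=2, delay=0.5, steps=8):
    """Propagate a 0/1 signal along the LED chain, switching each correlated LED in turn."""
    leds = [0] * num_leds
    signal = 1
    for step in range(steps):
        leds[step % num_leds] = signal          # the received signal sets this LED ON or OFF
        print(f"step {step}: LEDs = {leds}")
        signal = 1 - signal                     # pass the opposite signal onward
        time.sleep(delay)                       # delay the viewer would set with the potentiometer

if __name__ == "__main__":
    run_sequence()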
Second prototype: Servo sculpture
In this phase I challenge the setup of the one-dimensional LED sequence and transform it into a two-dimensional surface. I translate the logic of the code into spatial architectural qualities. The surface is made out of pairs of servomotors connected by flexible vinyl strips (see image
in the first page). Instead of switching LEDs ON and OFF, it turns and folds the surface inside out. Like a 'Moebius strip' (a single topological surface with only one side and one edge) which continues from the inside to the outside, this experiment creates a pliant system that dynamically evolves through different variations, and flips the space from inside to outside and from closed to open. My intention is to create a continuous transition, a whole grayscale between 1 and 0. The experiment is generated by simple code, but results in a much richer spatial condition. Similarly to the users' input in the first prototype, this version may in future development react to the viewers' proximity by changing the speed of the motors and the rotation patterns.

The limited static conditions of 'open', 'closed', 'inside' or 'outside' are now only a single option in this multiple and variable set of complex positions, which are dynamically changing to adjust to individual needs and wishes. For example, the partition can be 10% closed at the top while 90% is open, or 40% inside and 60% outside. In this way, I materialize the 'smooth mixture' concept, described by Greg Lynn as: "intensive integration of differences within a continuous yet heterogeneous system. Smooth mixtures are made up of disparate elements which maintain their integrity while being blended within a continuous field of other free elements" [10].

In the 'mixing' and 'folding' process I experiment with the following dualities: input/output, on/off, front/back, single/plural, light/dark, sedentary/dynamic, shiny/matte, 0°/180°, open/closed, and inside/outside.

Prototype Technical Description
Pairs of servomotors are mounted on a 2' x 2' Plexiglas frame and controlled by micro controllers (PIC16F877), with future input from proximity sensors. The micro controllers' code consists of 'IF' statements, sending signals to rotate the motors in relation to the positions of adjacent servomotors, programmed patterns and input from the sensors. Between each pair of servos a horizontal vinyl fabric band is stretched, creating a surface that follows the logic of the program. The motors alternate between two positions, from 0° to 180° and back, translating the binary ON/OFF signals into a continuous transition. The fabric strips have two distinct sides: a silver, smooth and shiny front and a white, interwoven and matte back. I read this configuration as a two-dimensional condition, as a surface which is dynamically twisting between inside and outside, open and closed.

ACKNOWLEDGMENTS
I would like to thank Tirtza Even and Tom Igoe for their guidance and support.

REFERENCES
1. Architect and New Media Artist; M.S.A.A.D (Advanced Architectural Design), Columbia University, 2000; M.P.S. (Interactive Telecommunications), New York University, 2003; B.Arch., Israel Institute of Technology (Technion), 1998.
2. Le Corbusier (Etchells, F., translation), The City of To-Morrow and Its Planning (1929), Dover Publications, 1987.
3. C.I.A.M - an avant-garde association of architects intended to advance Modernism and internationalism in architecture. The 1933 congress had the theme "The Functional City"; its conclusions were published in the controversial document "The Athens Charter".
4. Deleuze, G., and Guattari, F. (Massumi, B., translation), A Thousand Plateaus, University of Minnesota Press, 1987.
5. Lynn, G., Folds, Bodies & Blobs: Collected Essays, La Lettre Volée, 1998, p. 111.
6. Rath, A., ROBOTiCS, RAM Publications, 1999.
7. See: http://www.ruthron.com
8. Deleuze, G., and Guattari, F. (Massumi, B., translation), A Thousand Plateaus, University of Minnesota Press, 1987.
9. In collaboration with Renate Weissenboeck, Atsunobu Maeda and Gernot Riether.
10. Lynn, G., Folds, Bodies & Blobs: Collected Essays, La Lettre Volée, 1998, p. 112.
Digital Poetry Modules
James G. Robinson
Interactive Telecommunications Program
Tisch School of the Arts / NYU
c/o 142 Nelson Street, #3
Brooklyn, NY 11231 USA
+1 347 613 6239
[email protected]
ABSTRACT
This article details a system of digital word modules, based on the popular phenomenon of refrigerator magnet poetry, that alleviates the tedium of public "in-between" places by providing a means of interactive play.

Keywords
Social awkwardness, digital word modules, magnetic poetry, digital text.

CONTEXT / MOTIVATION
This project was originally conceived as a digital solution to the social awkwardness endemic to elevators. Thus, its design parameters reflect the limitations of its original location. However, since many public spaces share the psychic and physical characteristics of elevators, it has the potential to be useful in spaces far beyond its original context.

The Elevator Space
Muzak is regarded by many as a lite-pop monstrosity that is to elevators what bubonic plague was to medieval Europe. But muzak originally served a purpose, "piped into elevators to help people feel safe in this new form of technology." [1]

Nowadays, of course, elevators are considered a very old form of technology, and most people feel comfortable enough in them for muzak to serve as more of an irritant than a comfort. But for many people, anxiety remains, even if it is more of a social fear than a physical one. A number of emotions can be felt between various combinations of people, such as boredom, shyness, flirting, or awkwardness – few of them comfortable. The goal of this project was to eliminate, or at least minimize, these emotions.

DESIGN PARAMETERS
As noted, this has not been the first attempt at making the elevator experience more meaningful. We decided that there have historically been three broad strategies used to engage elevator riders -- that is to say, music (as in Muzak), glass (views of the outside) and screens (to display news, weather, etc.).

What all of these strategies had in common was that they relied on distraction, rather than interaction. We felt that this was a limiting view of how to improve the elevator experience, especially with the opportunities for interaction provided by digital technology. Our challenge was to build an installation that could solve the same problems in a more interactive way -- not only between people and the outside world, but between each other.

Theoretical Parameters
The first step in this project was to list a set of general design parameters that this project would have to follow to be successful. In our view, any elevator installation would have to be:

• Immediately understandable, since one's stay in an elevator is an ultimately brief one;

• Unobtrusively engaging, because people should feel at ease when interacting with the technology, yet still absorbed in the experience;

• Easily ignorable, as riders sometimes do not want to be disturbed, whether they are already interacting with a friend in the elevator or simply want to be left alone; and

• Warmly inclusive, to encourage riders to interact with each other, not just with the technology.

Practical Considerations
Of course, elevators have their own specific, practical demands. Electricity is often difficult to access, and an installation cannot be too large or obtrusive in the cramped space due to fire codes, building regulations, and the comfort of its passengers. Thus an installation should ideally be small and self-powered. Since the solitude of an elevator can also invite larceny or vandalism, the installation would also ideally be self-contained and inexpensive.
THE SOLUTION
The best inspiration for a digital installation that could satisfy each of these parameters was found in the now-famous magnetic poetry sets found stuck on refrigerators across the country. After all, an elevator is like a refrigerator -- a cramped gateway to a more interesting destination.

Magnetic poetry sets consist of hundreds of tiny magnets, each imprinted with a different word in various parts of speech. They can be arranged on a refrigerator in combinations from the ridiculous to the sublime, allowing for the entertainment of the "author" while providing a means for indirect communication of jokes, ideas, and various degrees of poetic thought to others visiting the space.

selected. Each module contains around 200 words in a given part of speech; there are noun modules, verb modules, adjective modules, and so forth. Arranged together, they form phrases that range from the cryptic to the profound to the entertaining to the baffling. Since the words are pre-selected, what results is not necessarily poetry per se, but rather more like one-sentence proverbs or brief unfinished haikus.
Networked Modules
Because of the lack of a network connection, the wordlists in the prototype had to be hard-coded onto the chip before installation. In network-enabled environments, a connection could be set up to update the lists dynamically in real time. Additionally, the words selected could be broadcast to a website, reflecting the compositions presented in a given place to the world at large.
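As a rough illustration of that networked variant (the prototype itself had no connection, and the endpoint URLs and JSON field names below are invented for the example), the update and broadcast steps might look like this:

```python
import json
import urllib.request

# Hypothetical endpoints; the actual prototype hard-coded its wordlists.
WORDLIST_URL = "http://example.org/elevator/wordlists.json"
COMPOSITIONS_URL = "http://example.org/elevator/compositions"

def fetch_wordlists():
    """Download the current word modules, e.g. {"noun": [...], "verb": [...]}."""
    with urllib.request.urlopen(WORDLIST_URL) as resp:
        return json.loads(resp.read().decode("utf-8"))

def broadcast_composition(location, words):
    """Post the phrase currently shown in one elevator to the public website."""
    payload = json.dumps({"location": location, "words": words}).encode("utf-8")
    req = urllib.request.Request(COMPOSITIONS_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    modules = fetch_wordlists()
    broadcast_composition("lobby elevator 2", ["quiet", "strangers", "rise"])
```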
Thus, the modules are ideal for "transitory spaces" -- public areas where people often pass by -- allowing for virtual self-expression and subtle, anonymous communication between strangers. In this sense they are best thought of as ephemeral graffiti, although where graffiti is used mainly by gangs to mark territory, these modules are used by everyday people to communicate, however cryptically. They are therefore not merely objects of distraction but true artifacts of interaction that can, we hope, serve to relieve the social awkwardness of public spaces.

ACKNOWLEDGMENTS
Thanks to Eric Liftin, adjunct professor at NYU's Interactive Telecommunications Program, for his useful feedback throughout the design process. Thanks also to the anonymous reviewers who provided helpful feedback on this project proposal after it was submitted to the 2003 Ubiquitous Computing conference.
The Verse-O-Matic
James G. Robinson
Interactive Telecommunications Program
Tisch School of the Arts / NYU
142 Nelson Street, #3
Brooklyn, NY 11231 USA
+1 347 613 6239
[email protected]
DEVICE DESIGN
The Verse-O-Matic is designed to look exactly like a regular printing calculator, with one exception: the usual digits are replaced by nine words, each representing a different poetic theme or emotion: LOVE, HAPPINESS, BEAUTY, HUMOR, AGE, NATURE, SEPARATION, SADNESS, and DESPAIR. Despite the transformed key meanings, the universally recognized calculator format allows new users to easily grasp how the device is meant to be used, without special instructions.

INTERACTION
When a key is pressed, the calculator searches its memory to select all of the poems that refer to that theme. Additional themes can be added ("+" = AND) or subtracted ("-" = AND NOT) from the "poetic equation" simply by pressing the appropriate keys. When the user presses "=", the equation is completed and the calculator prints a randomly selected poem that fulfills all of the thematic boundaries the user has set.

For instance:

"LOVE" + "SEPARATION" + "SADNESS" =

"This bud of love, by summer's ripening breath,
May prove a beauteous flower when next we meet."
WILLIAM SHAKESPEARE [4]

If no poems are found, the device emits a warning. The equation resets and the user is prompted accordingly.
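The paper describes this selection logic only in prose (the actual implementation is a Perl program, described under Software below). As a hedged sketch in Python, with a toy poem table invented for the example, the "poetic equation" amounts to an include set and an exclude set of themes:

```python
import random

# Toy database: each excerpt is tagged with the themes it touches on.
POEMS = [
    {"text": "This bud of love, by summer's ripening breath, / May prove a "
             "beauteous flower when next we meet.  -- Shakespeare",
     "themes": {"LOVE", "SEPARATION", "SADNESS"}},
    {"text": "...", "themes": {"HUMOR", "AGE"}},
]

def evaluate_equation(added, subtracted, poems=POEMS):
    """Return a random poem containing every '+' theme and none of the '-' themes."""
    matches = [p for p in poems
               if added <= p["themes"] and not (subtracted & p["themes"])]
    if not matches:
        return None  # the device would emit a warning and reset the equation
    return random.choice(matches)["text"]

print(evaluate_equation({"LOVE", "SEPARATION", "SADNESS"}, set()))
```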
OUTPUT
In the prototype, the poem is printed on a mailing label, rather than a supermarket receipt (as originally conceived). This allows the poem to be easily shared once read; it can be used to seal an envelope or affixed to a personal calendar. The printout also affords a tactile intimacy with the words that cannot be matched in the hulking glare of a computer monitor.

Thus, the project's original purpose of distributing verse is achieved on two levels. First, the user is introduced to a snippet of poetry that touches on the themes he or she has selected. Second, the sticker invites the user to share that verse, either personally or anonymously, by forwarding it along to a friend or displaying it in a public place. The shared verse thus travels far beyond the original digital database, appearing in a multitude of non-digital spaces. It is no longer a static resident of a digital database but a dynamic, living object to be experienced in our everyday lives.

When this project was first demonstrated at NYU's Interactive Telecommunications Program, students found a host of novel uses for the poetry stickers. Snippets of verse can now be found in the most unexpected places, from trash can lids in the student lounge to microchip programmers in the physical computing lab. This proliferation of verse in unexpected places is the best expression of the usefulness of the poetry calculator.

PROTOTYPE
Hardware
The original prototype for this project was built using a custom-made keypad and a standard commercial label printer, interfaced serially with a Toshiba 335CDS laptop running Linux. The laptop and label printer, while bulky, were used so that a preliminary prototype could be easily constructed and tweaked according to user feedback. In later iterations of this project the laptop will be replaced by a microcontroller and the label printer by a custom serial printer, each embedded in the calculator itself, so that the entire device is completely portable for use anywhere. As noted below, future prototypes will also incorporate networked elements.

Software
Poems are stored in a simple database on the host computer, mediated by a Perl program that monitors the input from the keypad and distributes text to the serial printer accordingly. New verse is entered into the database via a web-based CGI form on the local machine's Apache server, accessible either locally or through a connected LAN. In future prototypes the Verse-O-Matic will be networked to the Internet via an embedded Ethernet controller so that poems can be collected from around the world.
POETRY AND TECHNOLOGY
Despite a generally positive response to this project, several individuals have raised concerns about the implications of trying to represent the intricate, emotional art of poetry through a mechanical, "logical" device. Others have questioned the use of a typically mass-produced device to distribute verse, arguing that it may evoke "the commodification of expression and aesthetics". Another typical response is that the presentation of excerpts, rather than complete poems, abandons the depth and complexity of the author's original intent in favor of a less meaningful soundbite.

These questions are all valid, provocative responses to the project. However, we believe that they are not unique to this device, but will be asked of any effort to distribute verse to a wider audience. Since any attempt to tackle these questions is to tempt participation in a host of broader, more contentious intellectual debates about poetry and literature in general, we think that within the context of this paper it is best to address them through the original intent of the piece.

"Emotional Art" vs. "Rational Calculator"
One of the largest challenges in designing this project was selecting the nine themes for the calculator's keypad. Of course, the reason this was so difficult is that any attempt to reduce all verse to nine themes is patently ridiculous. How irrational to argue that the calculus of thematic interpretation is in base ten! Rather, the decision to select nine themes was made not to issue any grand statement about the structure of poetry but simply to mirror the familiar structure of a calculator keypad, so as to make the device as simple as possible for anyone to use.

the user's original emotions and assumptions is in many ways far preferable to one that exactly reflects them.

"Commodification of Aesthetics"
Because the calculator can be loaded with any snippet of verse, classified by the contributor, we would argue that it represents the sharing, rather than the commodification, of aesthetics. In this sense, the calculator can be seen less as a static reference and more as a highly structured, asynchronous instant-messenger device. If these devices were to proliferate, our Verse-O-Matic would not be the same as yours; it would contain different poems from a different circle of contributors, from myself to my friends, family, and classmates. Thus, each calculator would be like a literary iPod -- a highly individualized representation of a circle of aesthetic expression.

Poetry vs. Soundbite
The small format of the printed sticker meant that in most cases each verse in the Verse-O-Matic is an excerpt of a larger poem, rather than a complete poem. Does an excerpt fully represent the depth and complexity of a poet's complete piece? Of course not. But wonderful ideas can be found in the simplest of sentences, even if they are merely small components of the artist's broader concept.

For this same reason, people often feel comfortable using snippets of poetry in other contexts. If pieces of verse can be used to introduce and enliven essays, prose and speech, why can they not serve as epigraphs for our daily lives?
CONCLUSION
There is a Chassidic tradition that insists that everything in the world contains a joy that we must continually discover and unlock. The Verse-O-Matic was inspired by that philosophy. Even a humble calculator can be a gateway to revelation, to happiness, to thought and introspection. If anything, it is a challenge not to poetry or literature but rather to the idea that the joy of beautiful verse can only be discovered in the musty halls of libraries. Rather, such verse should surround, envelop and inspire us wherever possible, freed from the typical boundaries that sequester it in the realm of academia.

ACKNOWLEDGMENTS
We are indebted to Dr. Natalie Friedman, Director of the Writing Center at Marymount College of Fordham University, for her useful perspectives on poetic themes, and to Camille Norment, adjunct professor at ITP, whose patient encouragement inspired us to pursue this idea to completion. Thanks also to the anonymous reviewers who provided invaluable feedback after the first submission of this project to the 2003 Ubiquitous Computing conference committee.

REFERENCES
1. Pearce, J. When Poetry Seems to Matter. The New York Times (February 9, 2003).
2. Rosenberg, H. 'NewsHour' Finds Poetry in the Soul of America. The Los Angeles Times (May 1, 2000), p. F1.
3. Coeyman, M. To Her, Every Spot Needs a Touch of Poetry. The Christian Science Monitor (April 3, 2001), p. 17.
4. Shakespeare, William. Romeo and Juliet, act 2, sc. 2, ll. 121-122.
AURA: A Mobile Platform for Object and Location
Annotation
Marc Smith, Duncan Davenport, Howard Hwa
Microsoft Research
One Microsoft Way
Redmond, WA 98052 USA
+1 425 706 6896
{masmith, duncand, a-hhwa}@microsoft.com
(Figure: AURA system diagram; labeled components include Symbology-Mapping, Payload Cache, and Input Data.)
calculations, and other tasks. The local data stores contain user profiles, barcodes, ratings, and written and speech annotations, held in a SQL2000 database. Information on books and UPCs is provided by multiple remote data stores, including the Amazon Web Service for books and music and the ServiceObjects Web Service for UPC lookup.

MOBILE CLIENT SOFTWARE
The client is a standalone application on the Pocket PC (as opposed to a web front end), to support improved user interactivity. Network connectivity is not assumed to be continuous for the mobile client. The client application provides queuing and retry services for the storage and retrieval of data to and from the backend servers; these services are not possible for a thin web-based client. Caches or local stores on the client can dramatically reduce the demand on network access for content. In addition, a client-side application allows for a richer user interface, which matters especially given delays and intermittent network connectivity.
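The paper does not detail these queuing and retry services; the sketch below is one minimal way to express the idea, with an invented upload callable standing in for the real backend interface:

```python
import time
from collections import deque

class UploadQueue:
    """Hold annotations locally and push them when the network is available."""

    def __init__(self, upload, retry_delay=30):
        self.upload = upload          # callable that sends one item to the server
        self.retry_delay = retry_delay
        self.pending = deque()

    def submit(self, annotation):
        """Queue an annotation (barcode, rating, voice note, ...) for upload."""
        self.pending.append(annotation)

    def flush(self):
        """Try to send everything queued; stop at the first failure and retry later."""
        while self.pending:
            item = self.pending[0]
            try:
                self.upload(item)
                self.pending.popleft()
            except OSError:           # no connectivity -- keep the item queued
                time.sleep(self.retry_delay)
                return
```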
CLIENT INTERFACE COMPONENTS
Users can log in to the system by creating a unique username and password combination, either from the mobile device or through the web portal interface. Without an account the device can still be used to scan objects, but the device creates an Anonymous User account and all comments created in that context are public by default.

When a user sees an object that interests them and finds a bar code printed on or affixed to it, they point the head of the device at the bar code from a distance of about 6-12 inches and press the scan trigger button, which we mapped to the thumb button normally used to invoke the voice recorder feature of the Pocket PC. If the device acquires the tag's data, the application gives the user feedback and, based on some properties of the bar code data, sends a series of network queries out to appropriate web services.

We have initially created or linked to services to support three types of bar codes: tags created for a local art gallery, UPC (Universal Product Code) codes commonly used to tag consumer products and foods, and ISBN (International Standard Book Number) codes for books. Any number of additional or alternate payloads are possible within this framework to provide services for these or other forms of object identifiers.

Figure 3. User scenario for grocery and related retail environments. The query highlighted the recall of the breakfast cereal by the FDA.

These payloads are linked to the resolution service registry, which contains pairs of pattern matches and pointers to related web resources. When a tag is scanned it is matched to an appropriate payload on the basis of the structure of the identifier string. For example, ISBN codes start with "978" and have a total of 13 digits. All bar codes starting with that series of numbers and with that number of digits are assumed to be ISBNs and are submitted to the web services in the client's directory of resolution services that are registered as resolving such codes. We made use of a web service offered by Amazon.com that returns metadata about books and music when passed an ISBN number.
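A sketch of how such a resolution registry might look, using the ISBN rule described above; the resolver functions and the UPC pattern are placeholders rather than AURA's actual registry entries:

```python
import re

def resolve_isbn(code):
    # In AURA this would call a book/music metadata web service such as Amazon's.
    return {"type": "ISBN", "code": code}

def resolve_upc(code):
    # In AURA this would call the ServiceObjects UPC lookup service.
    return {"type": "UPC", "code": code}

# Registry of (pattern, resolver) pairs: the first pattern that matches wins.
RESOLUTION_REGISTRY = [
    (re.compile(r"^978\d{10}$"), resolve_isbn),   # 13-digit codes starting with 978
    (re.compile(r"^\d{12}$"), resolve_upc),       # 12-digit UPC-A codes
]

def resolve(identifier):
    """Match a scanned identifier string against the registry and dispatch it."""
    for pattern, resolver in RESOLUTION_REGISTRY:
        if pattern.match(identifier):
            return resolver(identifier)
    return None  # unknown symbology; no registered payload

print(resolve("9780262011709"))
```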
UPC codes are looked up through ServiceObjects.Net, a commercial web service provider. This service returns a set of metadata about the object, and the client presents this data and creates hyperlinks to search engines based on the results. For example, when a box of breakfast cereal is scanned, the resulting display provides two-tap access to search results, the first of which notes that the product has been recalled due to food safety issues related to undocumented ingredients that might cause fatal allergic reactions in some people (Figures 4 and 5).

Figure 5. Search results linked from UPC metadata.

WEB PORTAL
Users can access the system through a web portal as well as the mobile device. Users can log into the web site and view their scan history sorted by various properties of the items. Scans can be sorted by time, by product category (books, foodstuffs, etc.), or by the ratings or comments of other users or data found in other systems. This creates a simple way to assemble inventories of tagged objects, for example a collection of books, videos or music CDs. Alternatively, it creates a diary-like history of the series of objects scanned while, for example, browsing through a shopping mall or museum gallery.

CONCLUSION
A wave of annotation systems for physical objects seems about to break. Cell phones are already integrating digital cameras and have the processing power needed to natively decode bar codes. As pocket computers merge with cell phones, the resulting hybrids will no doubt combine a vision system with network connectivity and computation. The widespread distribution of such devices is likely to have dislocating effects in many sectors of life. Retail environments seem the most likely to change, as consumers bring the power of the Internet to bear at the point of sale.

REFERENCES
1. Rheingold, Howard. Smart Mobs: The Next Social Revolution. Cambridge, MA: Perseus Publishing, 2002.
2. Fiore, Lee Tiernan and Smith, 2001.
3. Service Objects Universal Product Code Web Service. http://www.serviceobjects.com/products/dots_upc.asp?bhcp=1
4. Abowd, G. D., et al. "Cyberguide: a Mobile Context-aware Tour Guide." Wireless Networks, vol. 3 (1997), pp. 421-433.
5. Kindberg, T., et al. "People, Places, Things: Web Presence for the Real World." Proceedings of WMCSA 2000 (2000).
6. Ljungstrand, P., J. Redström, and L. E. Holmquist. "Webstickers: Using Physical Tokens to Access, Manage and Share Bookmarks to the Web." Proceedings of Designing Augmented Reality Environments (DARE) 2000 (2000).
7. MIT (2002). Project Oxygen. http://oxygen.lcs.mit.edu/
8. Priyantha, N. B., A. Chakraborty, and H. Balakrishnan. "The Cricket Location-Support System." 6th ACM International Conference on Mobile Computing and Networking (2000).
9. Schilit, B., and R. Want. "Context-Aware Computing Applications." IEEE Workshop on Mobile Computing Systems and Applications (1994).
10. Want, R., et al. "The PARCTAB Ubiquitous Computing Experiment." Technical Report CSL-95-1, Xerox Palo Alto Research Center (1995).
11. Want, R., et al. "Bridging Physical and Virtual Worlds with Electronic Tags." In Proceedings of CHI 1999 (1999).
Anatomy of a Museum Interactive:
"Exploring Picasso's 'La Vie' "
Leonard Steinbach, Chief Information Officer, Cleveland Museum of Art, 11150 East Boulevard, Cleveland, Ohio 44106, 216 707 2642, [email protected]
Holly R. Witchey, Ph.D., Manager, New Media Initiatives, Cleveland Museum of Art, 11150 East Boulevard, Cleveland, Ohio 44106, 216 707 2653, [email protected]
ABSTRACT
"Exploring Picasso's 'La Vie,'" a gallery installation that formed part of a major exhibition, demonstrates how an interactive display can address various learner styles, foster both social and individual interaction, and seamlessly command a fundamental understanding of the rather complex relationship of artists' methods, artists' life stories and the scientific methods that enable their discovery. The interactive demonstrates the roles of x-radiography and infra-red reflectography as important tools in understanding the artist's processes. The museum found that the interactive gave visitors the information and insight they needed to embrace new ways of looking at art. Its effectiveness may have been enhanced by the use of nearby, static, complementary material. Additionally, by conforming the installation of the interactive to the aesthetic of the exhibition, it seemed to be more readily accepted by both museum visitors and staff, which may have added to its effectiveness. Various aspects of intent, design, user experience, and lessons learned are also discussed.

Keywords
interactive, constructivist, learning, art, museum

INTRODUCTION
In the fall of 2001 the Cleveland Museum of Art presented the exhibition Picasso: The Artist's Studio. For Pablo Picasso (1881-1973), the studio was the crossroads of all that occurred in his life and contemporary society. Approximately 36 paintings and 9 drawings demonstrated the central place of this theme in Picasso's work and presented the remarkable variety of ways in which he explored the artist's studio through portraiture, still lifes, interiors, landscapes, and allegories of artists at work. Picasso developed distinctive methods of creating, destroying, and revising images. Because he derived meaning from the very act of creation, studying his process can be crucial to unlocking the meaning of his art. This understanding is revealed to conservators and art historians in great part through x-radiography, infrared reflectography, and other forms of scientific analysis. Therein lies a tale of art, artist, science, and discovery that the Museum wanted to tell. And Picasso's La Vie would help tell it.

This presentation demonstrates and explores how the Cleveland Museum of Art developed and exhibited a large-scale interactive display which addresses various learner styles, fosters both social and individual interaction, and seamlessly commands a fundamental understanding of the rather complex relationship of artists' methods, artists' life stories and the scientific methods that enable their discovery. At the same time, the interactive strove to inspire users to return to the real object of delight, the nearby painting itself. As such, it served to augment and enhance the personal experience of the painting, rather than distract from it. The interactive also had to meet the aesthetic rigors of a major art museum exhibition, as well as be easily used by a large number of visitors of diverse ages, group sizes, and cultural and technological experience.

THE INTERACTIVE: "EXPLORING PICASSO'S 'LA VIE'"
Exploring Picasso's 'La Vie' was presented on a 50" diagonal plasma screen, mounted on a wall in vertical orientation, thereby suggesting the size, orientation and gallery context of the painting as well as echoing the proportions and scale of the actual work. The aim was to give the visitor the sense of "seeing through" layers of the work and personally uncovering the secrets revealed by the investigative techniques of the conservation department. Forward-facing speakers were mounted beneath the screen. A wireless mouse was placed on a small pedestal approximately 10' from the screen. (See Figure 1.)

Interface Design
The interface design would only be successful if it were immediately intuitive, if content could be reached in a minimum of steps, if it fostered both group and individual experiences, and if the overall design respected differences in learning styles, remaining responsive to a broad range of visitors. For example, it would have to accommodate constructivist learning methods for the self-directed learner.
These would be the visitors who would want to create their own learning experiences from non-linear encounters with various types of rich media -- in this case graphics, video, audio, narratives, and interactive tools for exploring the painting. The interactive would also have to respect the needs of the more traditional learner, who requires that material be presented in a more sequential, less demanding didactic form. Both of these responses would have to use the same media objects and interface. To achieve this, the following design features were employed (see Figure 2):

Figure 1. "Exploring Picasso's 'La Vie'" as installed. Other gallery walls (not shown) displayed static, back-lit x-ray and infra-red images of the painting.

• Instantly expanding navigation bars along the themes of "Introduction," "Stories," "Explore," and "Examination Techniques" burst to the left when their iconic representations were rolled over at any time during the interactive's use.

• The Introduction bar allowed the user to view an Introduction, Picasso biographical information (QuickTime movies) or Credits.

• The Stories bar allowed users to experience illustrated main themes of discoveries about the artist's process through an animated, narrated, detailed look at specific areas of the painting. A section of the painting was panned in either normal, x-ray or infrared view, as appropriate. Iconic cues and story names helped users choose stories. At any time during the narrative they could hit an "Interact" button, which would switch the image to the area of the painting being discussed. They could then use a slide bar to morph the image between x-ray, infrared or normal states, as pertinent. A time bar helped users easily decide whether they wanted to view the whole narrative, proceed to "Interact," or move to another section entirely. We believed that this information helped the visitor make the most efficient use of his or her time and eliminated the frustration of not knowing how long a narrative would take. Finally, a small representation of the entire painting highlighted the area being discussed. In all of these ways, the visitor experience could range from a sequential and rather passive playing of a series of interesting stories to a non-linear discovery of stories (or parts of stories) and personal explorations.

• The Explore bar provided a choice of "magnifying glasses" with which users could examine a magnified view of the painting, the infra-red image, or the x-ray image. If the user passed over a significant area, a pop-up text box would tell its story. A "Reveal Clues" button caused the painting to be overlaid with white circles where the stories could be found. This section of the interactive served two important functions. First, it reinforced the Story narratives (or vice versa) through more of a discovery approach. Second, it familiarized the museum public with how to read infra-red and x-ray images, much as a conservator would do. This newly acquired skill could be put to good use as visitors looked at the large static infra-red and x-ray images hung on the walls nearby.

• The Examination Techniques bar brought users to six scientific tools of conservation discovery: x-radiography, infrared reflectography, optical microscopy, ultraviolet light analysis, sampling and cross-sections, and scanning microscopy. These features included animations (e.g. x-ray penetration) and behind-the-scenes videos showing conservators applying these techniques on real works of art in the Museum's conservation lab. We believe that an understanding of process results in a better understanding of result. Also, museum visitors are often intrigued by "behind the scenes" activities, and some visitors, more interested in science and technology than art, might use this section as an entry point and be intrigued enough to explore the rest.

• Overall, this interactive provided visitors with ample opportunity to pursue their own approach and interests. The traditional learner could literally "start at the top" and work down through the introductions to stories to explorations to the techniques, with very little demand for interaction -- no content ever requires more than one click. [A second click moved from a story to "Interact," or changed modes of magnifying glass.]
On the other hand, more discovery-oriented users could explore all the options, carefully choosing those items that seemed of interest at any moment and in any order, allowing knowledge to be built in a more personalized way.

Upon opening the exhibition, staff believed that it would be helpful for volunteers to assist visitors with the interactive. This proved counter-productive, as described below.

In addition to assuring the interactive's ease of use, the museum recognized that many visitors were unlikely to wait long to use it or to spend a lot of time experiencing each feature. These concerns were accommodated in two ways. First, the use of the large screen and a distant pedestal with mouse made group viewing feasible and comfortable. Visitors could easily benefit from the stories or other activities that the user was initiating, either while waiting their turn or in lieu of it. Second, large, static, rear-illuminated x-radiography and infra-red reflectography images of La Vie were in the same room as the interactive, providing analogous insights from the conservation research. These served as a preparatory resource for those who were waiting to use the interactive, bolstered the information gleaned from the interactive, or provided information in lieu of using the interactive.

Finally, regardless of visitors' learning style or comfort with technology, the goal of this project was to inspire them to return to and look more closely at the art. We also hoped that visitors would internalize this experience and apply their new insights to the way they viewed Picasso's other paintings. We believe we were successful.

FINDINGS AND LESSONS LEARNED
Rather than pursue a formal evaluation, the Museum chose to rely on periodic observation by staff and anecdotal feedback for its overall assessment. Findings and lessons learned follow:

• Because some staff believed that computer experience among museum visitors would be very low, a volunteer was initially present to help visitors use the interactive. However, rather than fostering the visitors' personal exploration, the volunteer often became a guide through the content, and visitors remained passive and complacent about this. This defeated the purpose of self-directed learning and exploration. We believe this situation occurred for several reasons. Both volunteers (or docents) and museum visitors are accustomed to, and comfortable with, the traditional museum education/lecture/tour model, whereby visitors are for the most part rather passive receivers of structured information. Therefore it was easy for both groups to fall back into these roles. It is also likely that the visitors' seeming appreciation of the assistance (even among those who might have liked to just give it a go without being observed by a staff member) reinforced the situation. This pointed out the need to reorient the volunteers to the objective of their assistance: the comfort of visitors with a new means of self-directed discovery and education, rather than use of the device as a teaching or demonstration tool. It should also be noted that this type of experience did not preclude visitor use; many visitors did try, and usually had little problem with the interactive. Nonetheless, between the slight dissuasion of some visitors from use of the interactive and the apparently higher-than-expected level of user computer proficiency, the use of volunteers was abandoned.

Figure 2. Full-screen view of "Exploring Picasso's 'La Vie,'" showing all menu bars "burst" to the left, for illustrative purposes only.

In the absence of volunteers, we observed that users who did not immediately grasp how the interactive worked seemed to work it through, and often received help from other members of their party (such as their children) or even other visitors (simply as a polite gesture, or because they were waiting to use it themselves).
This help was an interesting phenomenon, and we can surmise that it was at least in part borne of the open and shared experience the installation presented (more on this below). Conversely, if this interactive had been constructed as a single-person or small-group device, we do not think such unsolicited help would have been forthcoming; and had it occurred, it might have been perceived more as a means of hurrying the hesitant user along than as purely benevolent assistance. If we decide in the future that visitors should receive assistance, then those providing it would have to be trained to focus on the visitors' independent use of the device.

• Clusters of visitors, both users of the interactive and observers, appeared to find it engaging simultaneously. We attribute this both to the quality of the content and to the comfort and ease with which the experience could be shared.

• In family groups, parents seemed pleased to see their children enthusiastically engaged with the interactive; they sometimes had to drag the kids away.

• A gender difference in how the interactive was used was observed. It seemed that women most often engaged in a random hunt-and-peck through the menu system and sampled content, while men seemed to engage sections in more depth, were more likely to interact with the content, and would go through more stories sequentially and completely. However, this observation may be biased by the placement of this portion of the exhibition near the exhibition gift shop; we suspect that the interactive proved a good diversion for men waiting for wives to finish shopping.

• We were surprised at the importance of the static images in the gallery. For many parties, a sort of "teamwork" occurred: while one member was using the interactive, the other(s) would study complementary information in the wall-mounted images; then they sometimes switched roles. In sum, the combination of the two activities appeared to lengthen the overall duration of their experience with this section of the exhibition. The static images also provided more opportunity for visitors to focus on a single aspect of the painting. Additionally, we observed that while some visitors did not use the interactive and only referred to the static images, virtually everyone who used the interactive also referred to the static images; virtually no one relied on the interactive alone. This suggests that the effectiveness of interactives that portray rich information and complex concepts might benefit from accompanying complementary and reinforcing material. However, we do not know how effective the interactive alone would have been. It is also possible that the existence of the static images mainly allowed interactive users to pursue an interest that had been piqued, while allowing someone else to try the device; perhaps in part the interactive acted as a dynamic sampler of the static exhibit. In the future we will give more consideration to the use of supplemental material.

• We believe the resemblance of the installation to a painting hung on a wall, and its accord with the exhibition's overall aesthetic, helped engender its broad acceptance and success. Although the vertical orientation of the interactive's image required the resolution of some interesting programming issues, it was well worth the effort: the images provided an excellent proxy for the actual painting. The ready association of the interactive's screen image with the art made the "technology" more transparent and brought greater focus to the content. Yet, for the visitor, it did not at all replace the experience of the original object. Rather, viewers went back in search of the actual painting, which was several rooms away. Having learned that there were hidden images within the painting, some of which were indeed somewhat perceptible if one knew where to look, many visitors sought out the actual painting and took ownership of the ability to discern the heretofore undiscernible. This has significant implications for the potential of museum interactives to teach visitors tools and techniques for their understanding and appreciation of art, beyond factual or contextual information.

• Curators and exhibition designers were not accustomed to technological augmentation of traditional art exhibitions, and were concerned that an interactive device near an object would distract from the original. Yet the effort visitors made to compare the information from the interactive with the actual art demonstrates how interactive media can stimulate interest in, rather than supplant, the art experience.

• Feedback about the interactive from staff, visitors, the Trustees, the press and others has been overwhelmingly positive, and has helped engender support for the continued use of interactives in permanent and temporary exhibitions. In 2002, the project received an American Association of Museums Muse Award.

CONCLUSION
Far-reaching and complex goals were established for the production of Exploring Picasso's "La Vie," and we believe that our goals were substantially met. The number of users who returned to the actual work of art to take a closer look is especially noteworthy. In the future, more consideration will be given to the role of supplementary materials as reinforcements or adjuncts to interactives, and to the ways in which tools and techniques, as compared with facts and context, may be taught to visitors.

ACKNOWLEDGEMENT
The authors wish to acknowledge the contribution of Cognitive Applications, Inc., Brighton, England, and Washington, D.C., who created Exploring Picasso's 'La Vie' with the Cleveland Museum of Art.
Facilitating Argument in Physical Space
children -- the beginnings of rhetorical education [7,13]. Meanwhile, work in the field of computer-supported cooperative argument has focused on rhetoric in its most accomplished forms: industrial negotiation and legal argument [4,9]. There has, however, been very little work focused on the first exercises in persuasive rhetoric that are used to lead the student step by step to the heights of rhetorical complexity. We have chosen to bridge this gap and focus on the classical rhetorical exercises of encomium and vituperation, where a student praises or criticises a topic or an individual. These exercises break down the construction of an argument into a series of manageable steps that ensure the participants cover all of the necessary ground and organise their knowledge and the fruits of their research as effectively as possible.

Both the rhetorical focus of our system and our approach to ubiquitous computing were designed for use in schools, to facilitate the part of the English national curriculum [11] that teaches argumentation and discussion skills to students (see Figure 1). It is particularly useful to see visualisations of argument structure in the classroom. Teaching argument demands that the teacher be able to refer explicitly to the argument structures being developed by the children, in order to provide a relevant critique of a malformed argument, or to explain ways the argument could be made more persuasive. In addition to this natural fit to the classroom context, we also believe that it is especially valuable to design ubiquitous computing systems that are constrained by a specific application domain. Many ubiquitous computing research projects have created products, middleware or technical architectures that have no clear application. To avoid this trap, we voluntarily accepted the strict design constraints of the school environment, and of the highly prescriptive English National Curriculum, in order to focus our activities on the creation of a system that addressed a genuine need. We have called this research strategy Curriculum Focused Design [12].

Figure 1. Argument Formation Cycle

We are, however, convinced that the classroom is not the only forum that will benefit from computer support for the learning of skills in argument and persuasion. We intend to explore further the possibilities of using our system with older children and adults.

TECHNICAL APPROACH
One of the constraints imposed by the classroom context is that the technology base for the ubiquitous computing system must be extremely robust. The classroom is a physically demanding environment, with little tolerance for equipment failure. We therefore selected a well-established communications and sensing infrastructure based on radio frequency ID (RFID) tags and readers. The physical tokens of argument contributions are augmented with RFID tags, and the argument structure is represented by a series of RFID readers. The RFID readers are networked to a central server, which generates a real-time visualisation of the developing argument for projection onto the wall of the classroom.

Users interact with the application by placing statements which are augmented with RFID tags on the readers. Each reader has a prompt, and together they form a trail which takes the user through an argument -- either for a position, against it, or showing understanding of both sides -- in small and easily managed steps. Every time the user places a statement on a reader, this change of state in the TUI is reflected in the GUI. The aim is to use the GUI and TUI in combination to allow the user to do two things: firstly, to organise the statements relevant to an argument according to the loose structure provided by the prompts on the readers; and secondly, to deliver a speech for the point of view she has set out, using both the TUI and GUI as visual aids.
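A simplified sketch of this reader-to-visualisation path, with invented reader prompts and a print stub standing in for the projected GUI (the paper does not give the server implementation):

```python
# Each RFID reader in the physical trail carries a prompt in the argument.
READER_PROMPTS = {
    "reader-1": "State your claim",
    "reader-2": "Give evidence for it",
    "reader-3": "Consider the other side",
}

# Argument state: which tagged statement currently sits on which reader.
argument_state = {reader_id: None for reader_id in READER_PROMPTS}

def redraw_projection(state):
    """Stand-in for the projected GUI: print the developing argument structure."""
    for reader_id, statement in state.items():
        print(f"{READER_PROMPTS[reader_id]}: {statement or '(empty)'}")

def on_tag_event(reader_id, tag_id, statements):
    """Called by the RFID middleware whenever a tag is placed on a reader."""
    argument_state[reader_id] = statements.get(tag_id, f"unknown tag {tag_id}")
    redraw_projection(argument_state)   # the TUI change is reflected in the GUI

# Example: a child places the statement tagged 'tag-42' on the evidence reader.
on_tag_event("reader-2", "tag-42", {"tag-42": "Three witnesses reported a sighting"})
```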
EVALUATION
We have evaluated our design approach over a period of six months, with a range of prototypes exploring the technical approach above. Our iterative prototyping design method commenced with "low fidelity" prototypes that explored the use of the spatial interface within an actual classroom lesson, but provided only limited automated functionality through the use of RFID. Some automated functionality was simulated during these experiments via the "Wizard of Oz" technique [8], where a researcher controlled the computer interface to test alternative designs with minimal development effort. After ten generations of prototypes, we have developed an effective and technically operational system that has been evaluated under lesson conditions [14].

Evidence Selection
Our early prototypes focused on the collection and labelling phases of the Argument Formation Cycle. We had observed that children read source web pages, then evaluated and selectively highlighted them, and then grouped together relevant pieces of evidence. These groups were then named (e.g. 'trust', 'sightings', 'evidence and backing up') and claims or statements on each theme were used to structure the argument. We gave children small stands incorporating a whiteboard on which to write a statement, and a set of clips to attach collected evidence supporting that statement (Figure 2).
We intended that RFID tags in the documents would be

Figure 4. Selection tags, each with an RFID tag and LED.

Argument Presentation
The final stage of argument formation is linearization: the argument structure is turned into a linear form which can be delivered as a speech. By arranging the TUIs, the users have to construct an argument which is also represented on a projected display. Users can use the TUI to trigger the GUI to display content relating to the specific section of the argument they are verbally presenting.
of ubiquitous computing. This provides a far richer outcome from ubiquitous computing than previous attempts to integrate video and screen-based visualisation. This argumentation system, while a novel use of technology, also provides a good example of user-centred design. Careful iteration and attention to the needs of users will help ensure a socially appropriate interface for collaborative argumentation which is more likely to be adopted by potential users. Our system will promote natural interaction with evidence, presented via TUIs and paper documents. The demonstration should be of interest both as an illustration of this type of application in use, and as a novel way for delegates to engage with an important question for the future of ubiquitous computing.

ACKNOWLEDGMENTS
This research is funded by European Union grant IST-2001-34171. This paper does not represent the opinion of the EC, which is not responsible for any use of this data. The industrial design of the prototypes is thanks to Chris Vernall. We would like to thank Philip Wise and Gordon Williams for their assistance in preparing the demo photography.

REFERENCES
1. Barnard, P., May, J. & Salber, D. Deixis and points of view in media spaces: An empirical gesture. Behaviour and Information Technology 15 (1), 1996, 37-50.
2. Buckingham Shum, S., V. Uren, G. Li, J. Domingue, and E. Motta. Visualizing Internetworked Argumentation. Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-Making. Springer Verlag, Great Britain, 2003, p185-203.
3. Buckingham Shum, Simon. Graphical argumentation and design cognition. Human-Computer Interaction 12(3), 1997, p267-300.
4. Carr, Chad S. Using Computer Supported Visualization to Teach Legal Argumentation. Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-Making. Springer Verlag, Great Britain, 2003, p75-96.
5. Conklin, Jeff. Dialog Mapping: Reflections on an Industrial Strength Case Study. Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-Making. Springer Verlag, Great Britain, 2003, p117-136.
6. Corbett, Edward P.J. and Robert Connors. Classical Rhetoric for the Modern Student. 4th Ed. Oxford UP, Oxford, 1999.
7. Druin, Allison, Jason Stewart, David Proft, Ben Bederson, Jim Hollan. KidPad: A Design Collaboration Between Children, Technologists, and Educators. CHI 97, p463-470.
8. Erdmann, R.L., Neal, A.S. Laboratory vs. Field Experimentation in Human Factors -- An Evaluation of an Experimental Self-Service Airline Ticket Vendor. Human Factors 13, 1971, p521-531.
9. van Gelder, Tim. Enhancing Deliberation through Computer Supported Argument Visualization. Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-Making. Springer Verlag, Great Britain, 2003, p97-115.
10. Horn, Robert E. Infrastructure for Navigating Interdisciplinary Debates: Critical Decisions for Representing Arguments. Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-Making. Springer Verlag, Great Britain, 2003, p165-184.
11. National Curriculum: http://www.nc.uk.net/home.html
12. Rode, J., M. Stringer, E. Toye, A. Simpson, and A. Blackwell. Curriculum Focused Design. Interaction Design and Children (2003), 119-126.
13. Stanton, Danae, et al. Classroom Collaboration in the Design of Tangible Interfaces for Storytelling. CHI 2001, p482-489.
14. Stringer, M., J. Rode, E. Toye, A. Blackwell. Iterative Design of Tangible User Interfaces. BCS-HCI 2003. (In press)
15. Viégas, Fernanda B. and Judith S. Donath. Chat Circles. CHI '99. ACM, p9-16.
16. Weiser, Mark. The Computer for the 21st Century. Scientific American, September 1991, p94-104.
17. Weiser, Mark. Some computer science issues in ubiquitous computing. Communications of the ACM 36 (7), 75-84.
Box. Open System to Design your own Network
Victor Vina
Researcher
Interaction Design Institute Ivrea
Ivrea, TO 10015 Italy
+39 0125 422 11
[email protected]
Server Application
The server application, developed in Lingo (Macromedia Director and Macromedia Shockwave Multiuser Server), maintains the networks and visualises them through a visual language: the type and location of the boxes, the channels for the flow of data, the objects that collect information from web databases, and other constructs which dictate how the information is transformed and transmitted between each one of the physical devices.

Visual Language
A simple visual language has been integrated with the system to allow the creation and visualization of dynamic information structures. This visual language, based on a model that represents the flow of information, allows the visualization of a variety of information networks: from a web log to an e-mail list, from an ATM machine to a cell-phone voice messaging system.

Their size responds to the need for objects that are portable -- objects that you can place in any location but do not carry around with you. They present a small antenna to indicate that they can communicate wirelessly with other entities.
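The server is written in Lingo and is not listed in the paper; as a language-neutral sketch of the underlying idea, the visual language can be read as a small dataflow graph in which boxes are nodes and channels carry and transform values (the example network below is invented):

```python
class Box:
    """A node in the network: receives a value, transforms it, passes it on."""

    def __init__(self, name, transform=lambda value: value):
        self.name = name
        self.transform = transform
        self.channels = []            # downstream boxes

    def connect(self, other):
        self.channels.append(other)

    def receive(self, value):
        out = self.transform(value)
        print(f"{self.name}: {value!r} -> {out!r}")
        for target in self.channels:
            target.receive(out)

# A toy network: an input box feeds a transforming box that drives an output box.
sensor = Box("input box")
shout = Box("transform", transform=lambda text: text.upper())
display = Box("output box")
sensor.connect(shout)
shout.connect(display)

sensor.receive("a leaf has fallen")
```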
numbers of users with an internet connection can collaboratively view and modify this internal structure. As the structure is stored on-line, boxes can be placed in remote locations, allowing platforms for communication and information exchange that combine boxes far away from each other.

construction of platforms for communication and information exchange. As soon as members of the community are able to define a code and agree on the role each box undertakes, they can actively participate in the construction of information networks.
He contends that the best way to develop better intuitions about decentralized systems is to construct and play with such systems. The Box system follows this line of enquiry, proposing an open platform to encourage research and experimentation, allowing new devices to be incorporated into the system and providing an environment where self-regulated networks combining these objects can be created.

PHYSICAL COMPUTING
The Box system has been integrated into the academic program of Interaction Design Institute Ivrea, in order to teach the fundamentals of physical computing and networked appliances. Visiting professor Bill Verplank directed the course. Students were asked to create a network of two devices: one input and one output box.

Fig 10: Luther Thie and Belmer Negrillo's Whispering to Birds, an exploration based on the Box system for the Physical Computing course at Interaction-Ivrea.

Outcomes covered a broad range of interactions, from explorations of physical behaviors representing emotions to a network where the fall of a leaf on the input Box would trigger the sound of a bird on the output Box located in a faraway tree.

CONCLUSIONS
When users are allowed to set up and configure their own personal networks based on the recombination of simple modules, emergent platforms will appear that best reflect the social networks that maintain them. Thus we, as designers, can create open systems, open for interpretation, integrating participants into the design process, encouraging creativity, and turning them from passive consumers into active interpreters of information.

With the proliferation of ubiquitous technological devices, the development of a semantic web that will be integrated with these devices, and the extensive use of computer networks to play, work and communicate, we, as designers, need to consider the issues, values and opportunities offered by these new technologies.

By abstracting the basic elements of these networks and experimenting with them free from commercial constraints, this program aims to raise issues about current trends in the information society: which values are being imposed on information consumers, and whether they are desirable at all.

ACKNOWLEDGMENTS
Thanks to the students, professors and administrative staff of Interaction Design Institute Ivrea for their support and contributions during the development of this project. In particular to Gillian Crampton-Smith, Dag Svanaes, Casey Reas, Massimo Banzi and Bill Verplank for their valuable insights.

LINKS
http://projects.interaction-ivrea.it/box

REFERENCES
1. Ishii, H., Ullmer, B. Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. Proceedings of CHI '97. ACM Press.
2. Jeremijenko, N. Delusions of Immateriality. Doors of Perception 6: Lightness (2000). Available online at: http://museum.doorsofperception.com/doos/doors6/transcripts/jeremijenko.html
3. Norman, D. A. (1998). The Invisible Computer. Basic Books, 232-239.
4. Resnick, M. Behavior Construction Kits. Commun. ACM 36 (1998).
Demonstrations of
Expressive Softwear and Ambient Media
Sha Xin Wei1, Yoichiro Serita2, Jill Fantauzza1, Steven Dow2, Giovanni Iachello2, Vincent Fiano2,
Joey Berzowska3, Yvonne Caravia1, Delphine Nain2, Wolfgang Reitberger1, Julien Fistre4
1 School of Literature, Communication, and Culture / GVU Center, Georgia Institute of Technology; [email protected], {gtg760j, gtg937i, gtg711j}@mail.gatech.edu
2 College of Computing / GVU Center, Georgia Institute of Technology; {seri, steven, giac, ynniv, delfin}@cc.gatech.edu
3 Faculty of Fine Arts, Concordia University, Montreal, Canada; [email protected]
4 [email protected]
sound synthesis; (5) media choreography based on statistical physics.

We demonstrate new applications that showcase elements of recent work. Although we describe them as separate elements, the point is that by walking from an unprepared place to a space prepared with our responsive media systems, the same performers in the same instrumented clothing acquire new social valence. Their interactions with co-located less-instrumented or non-instrumented people also take on different effects as we vary the locus of their interaction.

Softwear: Augmented Clothing
Most of the applications for embedding digital devices in clothing have utilitarian design goals such as managing information, or locating or orienting the wearer. Entertainment applications are often oriented around controlling media devices or PDAs, and around high-level semantics such as user identity [1, 7] or gesture recognition [28]. Our approach to softwear as clothing is informed by the earlier work of Berzowska [2] and Orth [19].

We study the expressive uses of augmented clothing, but at the more basic level of non-verbal body language, as indicated in the provisional diagram (Fig. 1). The key point is that we are not encoding classes of gesture into our response logic; instead we are using such diagrams as necessarily incomplete heuristics to guide human performers.

Performers, i.e. experienced users of our "softwear" instrumented garments, will walk through the floor of the public space performing in two modes: (1) as human social probes into the social dynamics of greetings, and (2) as performers generating sound textures based on gestural interactions with their environment. We follow the performance research approach of Grotowski and Sponge [10, 25] that identifies the actor with the spectator. Therefore we evaluate our technology from the first-person point of view. To emphasize this perspective, we call the users of our technologies "players" or "performers" (however, our players do not play games, nor do they act in a theatrical manner). We exhibit fabric-based controllers for expressive gestural control of light and sound on the body. Our softwear instruments must first and foremost be comfortable and aesthetically plausible as clothing or jewelry. Instead of starting with devices, we start with social practices of body ornamentation and corporeal play: solo, parallel, or collective play.

Using switching logic from movements of the body itself, and integrating circuits of conductive fiber with light-emitting or image-bearing material, we push toward the limit of minimal on-the-body processing logic but maximal expressivity and response. In our approach, every contact closure can be thought of and exploited as a sensor. (Fig. 1)

Fig. 1. Solo, group and environmental contact circuits.

Demonstration A: Greeting Dynamics (Fantauzza, Berzowska, Dow, Iachello, Sha)
Performers wearing expressive clothing instruments walk through a conference or exhibition hall. They act according to heuristics drawn from a provisional phenomenological schema of greeting dynamics -- the social dynamics of engagement and disengagement in public spaces, built from a glance, nod, handshake, embrace, parting wave, backward glance. Our demonstration explores how people express themselves to one another as they approach friends, acquaintances and strangers via the medium of their modes of greeting. In particular, we are interested in how people might use their augmented clothing as expressive, gestural instruments in such social dynamics. (Fig. 2)

Fig. 2. Instrumented, augmented greeting.

In addition to instrumented clothing, we are making gestural play objects as conversation totems that can be shared as people greet and interact. The shared object shown in the accompanying video is a small pillow fitted with a TinyOS mote transmitting a stream of accelerometer data. The small pillow is a placeholder for the real-time sound synthesis instruments that we have built in Max/MSP. It suggests how a physics-based synthesis model allows the performer to intuitively develop and nuance her personal continuous sound signature without any buttons, menus, commands or scripts. Our study of these embedded dynamical physics systems guides our design of expressive clothing using wireless sensors, conductive fabrics and on-the-body circuit logic.
Whereas this first demonstration studies the uses of softwear as intersubjective technology, we can of course also make softwear more explicitly designed for solo expressive performance.

Demonstration B: Expressive Softwear Instruments Using Gestural Sound (Sha, Serita, Dow, Iachello, Fistre, Fantauzza)
Many of the experimental gestural electronic instruments cited directly or indirectly in the Introduction have been built for the unique habits and expertise of individual professional performers. A more theatrical example is Die Audio Gruppe [16]. Our approach is to make gestural instruments whose response characteristics support the long-term evolution of everyday and accidental gestures into progressively more virtuosic or symbolically charged gesture.

In the engineering domain, many well-known examples are mimetic of conventional, classical music performance [15]. Informed by work at IRCAM, for example, but especially by work associated with STEIM, we are designing sound instruments as idiomatically matched sets of fabric substrates, sensors, statistics and synthesis methods that lie in the intersection between everyday gestures in clothing and musical gesture.

We exhibit prototype instruments that mix composed and natural sound based on ambient movement or ordinary gesture. As one moves, one is surrounded by a corona of physical sounds "generated" immediately at the speed of matter. We fuse such physical sounds with synthetically generated sound parameterized by the swing and movement of the body, so that ordinary movements are imbued with extraordinary effect. (Fig. 3)

The performative goal is to study how to bootstrap the performer's consciousness of the sounds by such estranging techniques, in order to scaffold the improvisation of intentional, symbolic, even theatrical gesture from unintentional gesture. This is a performance research question rather than an engineering question, and its study yields insights for designing sound interaction.

Gesturally controlled electronic musical instruments date back to the beginning of the electronics era (see extensive histories such as [13]). Our preliminary steps are informed by extensive and expert experience with the community of electronic music performance [25, 31, 32].

Fig. 3. Gesture mapping to sound and video.

The motto for our approach is "gesture tracking, not gesture recognition." In other words, we do not attempt to build models based on a discrete, finite and parsimonious taxonomy of gesture. Instead of deep analysis, our goal is to perform real-time reduction of sensor data and map it with the lowest possible latency to media texture synthesis, providing rich, tangible, and causal feedback to the human.

Other gesture research is mainly predicated on linguistic categories, such as lexicon, syntax and grammar. McNeill [17] explicitly scopes gesture to those movements that are correlated with speech utterances. However, given the increasing power of portable processors, sophisticated sub-semantic, non-classifying analysis has begun to be exploited (e.g. [30]). We take this approach systematically.
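The cited instruments are built in Max/MSP and are not described at the code level; as a hedged sketch of the "reduce and map" idea, a single smoothed activity value derived from a mote's accelerometer stream could drive a single synthesis parameter (all names and constants below are illustrative):

```python
import math

class GestureMapper:
    """Reduce raw accelerometer samples to one smoothed activity value
    and map it directly to a synthesis parameter -- no gesture classes."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.level = 0.0

    def update(self, ax, ay, az):
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        # Exponential smoothing keeps the mapping low-latency but stable.
        self.level = self.smoothing * self.level + (1 - self.smoothing) * magnitude
        return self.to_parameter(self.level)

    def to_parameter(self, level):
        # Hypothetical mapping: clamp activity into a 0..1 amplitude control.
        return max(0.0, min(1.0, level / 3.0))

mapper = GestureMapper()
for sample in [(0.0, 0.1, 1.0), (0.4, 0.9, 1.2), (1.5, 0.2, 0.8)]:
    print(mapper.update(*sample))
```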
Interaction Scenario
In all cases, performers wearing softwear instruments will interact with other humans in a public common space. But when they pass through a space that has been sensitized with tracking cameras or receivers for the sensors tracking their gesture, their actions made in response to their social context take on other qualities, due to the media that is generated in response to their movement. This prompts us to build responsive media spaces using our media choreography system.

Ambient Media
After Krueger's pioneering work [14] with video, classical VR systems glue inhabitants' attention to a screen or a display device and leave the body behind. Augmented reality games like Blast Theory's Can You See Me Now put some players into the physical city environment, but still pin players' attention to (mobile) screens [4].

Re-projection onto the surrounding walls and onto the bodies of the inhabitants themselves marks an important return to embodied social play, but mediated by distributed and tangible computation.

The Influencing Machine [12] is a useful contrasting example of a responsive system. The Influencing Machine sketches doodles apparently in loose reaction to slips of colored paper that participants feed it. Like our work, their installation is also not based on explicit language.
fact it is designed ostensibly along "affective" lines. It is interesting to note how published interviews with the participants reveal that they objectify the Influencing Machine as an independent affective agency. They spend more effort puzzling out this machine's behavior than in playing with one another.

In our design, we aim to sustain environments where the inhabitants attend to one another rather than a display. How can we build play environments that reward repeated visits and ad hoc social activity? How can we build environments whose appeal does not become exhausted as soon as the player figures out a set of tasks or facts? We are building responsive media spaces that are not predicated on rule-based game logic, puzzle solving or exchange economies [3], but rather on improvisatory yet disciplined behavior. We are interested in building play environments that offer the sort of embodied challenge and pleasure afforded by swimming or by working clay.

This motivates a key technical goal: the construction of responsive systems based on gesture-tracking rather than gesture-recognition. This radically shortens the computational path between human gesture and media response. But if we allow a continuous open set of possible gestures as input, however reduced, the question remains how to provide aesthetically interesting, experientially rich, yet legible media responses.

The TGarden environment [25] that inspired our work is designed with rich physicalistic response models that sustain embodied, non-verbal intuition and progressively more virtuosic performance. The field-based models sustain collective as well as solo input and response with equal ease.

By shifting the focus of our design from devices to processes, we demonstrate how ambient responsive media can enhance both decoupled and coordinated forms of playful social interaction in semi-public spaces.
Our design philosophy has two roots: experimental theater transplanted to everyday social space, and theories of public space ranging from urban planners [20, 33] to playground designers [11]. R. Oldenburg calls for a class of so-called "third spaces," occupying a social region between the private, domestic spaces and the vanished informal public spaces of classical socio-political theory. These are spaces within which an easier version of friendship and congeniality results from casual and informal affiliation in "temporary worlds dedicated to the performance of an act apart." [18]

Demonstration C: Social Membrane (Serita, Fiano, Reitberger, Varma, Smoak)

How can we induce a bit more of a socially playful ambience in a dead space such as a conference hotel lobby? Although it is practically impossible in an exhibition setting to avoid spectacle with projected sound or light, we can insert our responsive video into non-standard geometry or materials.

We suspend (pace T. Erickson [8]) a translucent ribbon onto which we project processed live video that transforms the fabric into a magic membrane. The membrane is suspended in the middle of public space where people will naturally walk on either side of it. People will see smoothly varying (in time and space) transformations of people on the other side of the membrane. (Fig. 4) The effects will depend on movement, but will react additionally to passersby who happen to be wearing our softwear augmented clothing.

The challenge will be to tune the dynamic effects so that they remain legible and interesting over the characteristic time that a passerby is likely to be near the membrane, so that the affect induces play but not puzzle-solving. Sculpturally, the membrane should appear to have a continuous gradient across its width between zero effect (transparency) and full effect. Also, it should take about 3-4 seconds for a person walking at normal speed in that public setting to clear the width of the inserted membrane.

Fig. 4. Two players tracked in video, tugging at a spring projected onto common fabric.

Above all, the membrane should have a social Bernoulli effect that will tend to draw people on opposite sides to one another. The same effects that transform the other person's image should also make people feel some of the safety of a playful mask. The goal is to allow people to gently and playfully transform their view of the other in a common space with partially re-synthesized graphics.

Artistic Interest and Craft
We do not try to project the spectator's attention into an avatar as in most virtual or some augmented reality systems. Instead, we focus the performer-spectator's attention in the same space as all the co-located inhabitants. Moreover, rather than mesmerizing the user with media "objects" projected onto billboards, we try to sustain human-human play, using responsive media such as calligraphic, gesture/location-driven video as the medium of shared expression. In this way, we keep the attention of the human inhabitants on one another rather than having them forget each other, distracted by a "spectacular" object [6].

By calligraphic video we mean video synthesized by physicalistic models that can be continuously transformed by continuous gesture, much as a calligrapher brushes ink onto silk. Calligraphic video as a particular species of
time-based media is part of our research into the precon-
ditions for sense-making in live performance. [10, 5].
ARCHITECTURE
For high quality real-time media synthesis we need to
track gesture with sufficiently high data resolution, high
sample rate, low end-to-end latency between the gesture
and the media effect. We summarize our architecture,
which is partly based on TinyOS and Max / Macintosh
OS X, and refer to [24, 25] for details.
Our current strategy is to do the minimum on-the-body
processing needed to beam sensor data out to fixed com-
puters on which aesthetically and socially plausible and rich effects can be synthesized. We have modified the TinyOS environment on CrossBow Technologies Mica and Rene boards to provide time series data of sufficient resolution and sample frequency to measure continuous gesture using a wide variety of sensory modalities. This platform allows us to piggy-back on the miniaturization curve of the Smart Dust initiative [13], and preserves the possibility of relatively easily migrating some low-level statistical filtering and processing to the body. Practically, this frees us to design augmented clothing where the form factors compare favorably with jewelry and body ornaments, while at the same time retaining the power of the TGarden media choreography and synthesis apparatus. (Some details of our custom work are reported in [24].)

Now we have built a wireless sensor platform based on Crossbow's TinyOS boards. This allows us to explore shifting the locus of computation in a graded and principled way between the body, multiple bodies, and the room.

Currently, our TinyOS platform is smaller but more general than our Linux platform since it can read and transmit data from photocell, accelerometer, magnetometer and custom sensors such as, in our case, customized bend and pressure sensors. However, its sample frequency is limited to about 30 Hz per channel.

Our customized TinyOS platform gives us an interesting domain of intermediate-data-rate time series to analyze. We cannot directly apply many of the DSP techniques for speech and audio feature extraction because, to accumulate enough sensor samples, the time window becomes too long, yielding sluggish response. But we can rely on some basic principles to do interesting analysis. For example, we can usefully track steps and beats for onsets and energy. (This contrasts with musical input analysis methods that require much more data at higher, audio rates [21].)
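As an illustration of the kind of onset and energy tracking that remains feasible at roughly 30 Hz per channel, the sketch below flags a step or beat whenever the instantaneous movement energy jumps well above a slowly adapting baseline. It is a plausible reading of the approach rather than the authors' actual filter; the thresholds and names are ours.

```python
# Illustrative onset detector for a ~30 Hz accelerometer channel:
# flag a "step" or "beat" when instantaneous energy jumps well above
# a slowly adapting baseline. Constants are illustrative only.

class OnsetDetector:
    def __init__(self, baseline_decay=0.95, ratio=2.5, refractory=6):
        self.baseline_decay = baseline_decay  # how slowly the baseline adapts
        self.ratio = ratio                    # energy must exceed ratio * baseline
        self.refractory = refractory          # samples between onsets (~0.2 s at 30 Hz)
        self.baseline = 1e-6
        self.cooldown = 0

    def update(self, sample):
        energy = sample * sample
        onset = False
        if self.cooldown > 0:
            self.cooldown -= 1
        elif energy > self.ratio * self.baseline:
            onset = True
            self.cooldown = self.refractory
        # update the baseline after the comparison
        self.baseline = (self.baseline_decay * self.baseline
                         + (1 - self.baseline_decay) * energy)
        return onset, energy
```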
The rest of the system is based on the Max real-time media control system, with instruments written in MSP sound synthesis and Jitter video graphics synthesis, communicating via OSC on Ethernet. (Fig. 5)
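The fragment below sketches how sensor frames might be forwarded to such an OSC network from a Python process. It uses the python-osc package; the host, port, and the /softwear/accel address pattern are assumptions of ours, not the authors' actual configuration.

```python
# Sketch: forward sensor frames to a Max/MSP patch listening for OSC.
# Requires the python-osc package; the host, port and the /softwear/accel
# address are hypothetical, not taken from the authors' setup.

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.0.10", 9000)  # machine running Max/MSP

def send_accel_frame(node_id, x, y, z):
    # one message per sample keeps end-to-end latency low
    client.send_message("/softwear/accel", [node_id, x, y, z])

send_accel_frame(3, 0.12, -0.98, 0.05)
```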
Fig. 5. Architecture comprises clothing; sensing: TinyOS, IR camera; logic and physical synthesis in OSC network: Max, MSP, Jitter; projectors, speakers.

Technical Comment on Lattice Computation
Our research aims to achieve a much greater degree of expressivity and tangibility in time-based visual, audio, and now fabric media. In the video domain, we use lattice methods as a powerful way to harness models that already simulate tangible natural phenomena. Such models possess the shallow semantics we desire based on our heuristics for technologies of performance. A significant technical consequence is that such methods allow us to scale efficiently (nearly constant time-space) to accommodate multiple players.
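A toy version of such a lattice model is sketched below: a damped diffusion update over a fixed grid whose per-frame cost depends only on the grid size, so adding players only adds excitation points. The update rule and constants are our illustration of the scaling argument, not the authors' Jitter implementation.

```python
# Toy lattice update: damped diffusion on a fixed grid. The per-frame
# cost is O(grid size) regardless of how many players excite it, which
# is the scaling property discussed above. Not the authors' model.

import numpy as np

def step(field, excitations, diffusion=0.2, damping=0.99):
    # average of the four neighbours (wrap-around boundary)
    neighbours = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                  np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 4.0
    field = damping * ((1 - diffusion) * field + diffusion * neighbours)
    for (row, col, energy) in excitations:   # one entry per tracked player
        field[row, col] += energy
    return field

field = np.zeros((240, 320))
field = step(field, [(120, 160, 1.0), (60, 80, 0.5)])  # two players
```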
ACKNOWLEDGEMENTS
We thank members of the Topological Media Lab, and in particular Harry Smoak, Ravi Varma and Kevin Stamper for assisting with the experimental construction, and Junko Tsumuji and Shridhar Reddy for documentation. Tazama St. Julien helped adapt the TinyOS platform. Erik Conrad and Jehan Moghazy worked on the prior version of the TGarden. Pegah Zamani contributed to the design seminar.
We thank Intel Research Berkeley and the Graphics, Visualization and Usability Center for providing the initial set of TinyOS wireless computers. And we thank the Rockefeller Foundation and the Daniel Langlois Foundation for Art, Science and Technology for supporting part of this research.
This work is inspired by creative collaborations with Sponge, FoAM, STEIM, and alumni of the Banff Centre for the Arts.

REFERENCES
1. Aoki, H., and Matsushita, S. Balloon tag: (in)visible marker which tells who's who. Fourth International Symposium on Wearable Computers (ISWC'00), 77-86.
2. Berzowska, J. Electronic Fashion: the Future of Wearable Technology. http://www.berzowska.com/lectures/e-fashion.html
3. Bjork, S., Holopainen, J., Ljungstrand, P., and Akesson, K.P. Designing ubiquitous computing games -- a report from a workshop exploring ubiquitous computing entertainment. Personal and Ubiquitous Computing, 6, 5-6 (2002), 443-458. Springer-Verlag, 2002.
4. Blast Theory. Can You See Me Now? http://www.blasttheory.co.uk/v2/game.html
5. Brook, P. The Empty Space. Touchstone Books, Reprint edition, 1995.
6. Debord, G. Society of the Spectacle. Zone Books, 1995.
7. Eaves, D. et al. NEW NOMADS, an exploration of Wearable Electronics by Philips, 2000.
8. Erickson, T., and Kellogg, W.A. Social Translucence: An Approach to Designing Systems that Support Social Processes. ACM Transactions on Computer-Human Interaction, 7(1):59-83, March 2000.
9. f0.am. txOom Responsive Space, 2002. http://f0.am/txoom/
10. Grotowski, J. Towards a Poor Theater. Simon & Schuster, 1970.
11. Hendricks, B. Designing for Play. Aldershot, UK and Burlington, VT: Ashgate, 2001.
12. Hook, K., Sengers, P., and Andersson, G. Sense and sensibility: evaluation and interactive art. Proceedings of CHI 2003, Computer Human Interaction, 2003.
13. Kahn, J.M., Katz, R.H., and Pister, K.S.J. Emerging Challenges: Mobile Networking for 'Smart Dust'. Journal of Communications and Networks, Vol. 2, No. 3, September 2000.
14. Krueger, M. Artificial Reality 2 (2nd Edition). Addison-Wesley, 1991.
15. Machover, T. Hyperinstruments project, MIT Media Lab. http://www.media.mit.edu/hyperins/projects.html
16. Maubrey, B. Die Audio Gruppe. http://home.snafu.de/maubrey/
17. McNeill, D. Hand and Mind: What Gestures Reveal About Thought. University of Chicago Press, 1995.
18. Oldenburg, R. The Great Good Place. Marlowe & Company, 1999.
19. Orth, M. Ph.D. Thesis, MIT Media Lab, 2001.
20. PPS, Project for Public Spaces. http://pps.org
21. Puckette, M.S., Apel, T., and Zicarelli, D.D. Real-time audio analysis tools for Pd and MSP. ICMC 1998.
22. Reddy, M.J. The conduit metaphor. In Metaphor and Thought, ed. A. Ortony. Cambridge University Press, 2nd edition, 1993, pp. 164-201.
23. Richards, T. At Work with Grotowski on Physical Actions. London: Routledge, 1995.
24. Sha, X.W., Iachello, G., Dow, S., Serita, Y., St. Julien, T., and Fistre, J. Continuous sensing of gesture for control of audio-visual media. ISWC 2003 Proceedings.
25. Sha, X.W., Visell, Y., and MacIntyre, B. Media choreography using dynamics on simplicial complexes. GVU Technical Report, Georgia Tech., 2003.
26. Sonami, L. Lady's Glove. http://www.sonami.net/lady_glove2.htm
27. Sponge. TGarden, TG2001. http://sponge.org/projects/m3_tg_intro.html
28. Starner, T., Weaver, J., and Pentland, A. A wearable computer based American Sign Language recognizer. ISWC 1997, pp. 130-137.
29. Topological Media Lab, Georgia Institute of Technology. Ubicomp video. http://www.gvu.gatech.edu/people/sha.xinwei/topologicalmedia/tgarden/video/gvu/TML_ubicomp.mov
30. Van Laerhoven, K., and Cakmakci, O. What shall we teach our pants? Fourth International Symposium on Wearable Computers (ISWC'00), 77-86.
31. Vasulka, Steina, and Vasulka, Woody. Instrumental video. Langlois Foundation Archives. http://www.fondation-langlois.org/e/collection/vasulka/archives/intro.html
32. Wanderley, M. Trends in Gestural Control of Music. IRCAM - Centre Pompidou, 2000.
33. Whyte, W.H. The Social Life of Small Urban Spaces. Project for Public Spaces, Inc., 2001.
Mobile Capture and Access for Assessing Language and
Social Development in Children with Autism
David Randall White1, José Antonio Camacho-Guerrero2, Khai N. Truong1,
Gregory D. Abowd1, Michael J. Morrier3, Pooja C. Vekaria3, and Diane Gromala1
1 GVU Center, Georgia Institute of Technology, Atlanta, GA 30332 USA
{drwhite, khai, abowd}@cc.gatech.edu, [email protected]
2 Instituto de Ciencias Matematicas e de Computacao, Universidade de Sao Paulo, Sao Carlos/SP, Brazil
[email protected]
3 Emory Autism Center, Emory University School of Medicine, Atlanta, GA 30322 USA
[email protected], [email protected]
Treatment plans for CWAs are written at the beginning of each child's tenure at Walden. The plans are reviewed quarterly and updated annually to meet each child's changing needs. Plans are divided into goals — such as improved language development, social interactions and engagement, and independent-living and school-readiness skills — which are then broken into measurable objectives set progressively over the school year. Data on these objectives are collected daily, in quantitative experiments incorporated into classroom routines. Research assistants also observe CWAs unobtrusively and capture data on video or on a paper spreadsheet known as a Pla-Chek (pronounced "PLAY-check"; Figure 1(a)), on which these variables are recorded:
• proximity to adult (within three feet)
• adult interacting with CWA
• proximity to typical child
• typical child interacting with CWA
• proximity to another CWA
• other CWA interacting with target CWA
• verbalization (words listed in dictionary)
• engagement
• focus on an adult (if the child is engaged)
• focus on another child (if the child is engaged)
• focus on a toy (if the child is engaged)
• autistic behaviors
Video data are coded later but for the same variables, except proximity to other CWAs, interactions with other CWAs, and autistic behaviors. This difference exists because research assistants may not know which children in videos are CWAs. Because of this similarity, we chose the Pla-Chek for our prototype.

Pla-Cheks place cognitive burdens on research assistants. They observe children for intervals of ten seconds, which are counted mentally, then record values in a line of cells. Each line is followed by ten more seconds of observation. The next line is filled and the process repeated until twenty intervals are done. Counting time complicates the recording, which requires strict objectivity.

Pla-Cheks for each CWA are recorded on ten consecutive days each calendar quarter. Classroom coordinators tabulate the data quarterly. Because sessions are not videotaped, they cannot be reviewed for accuracy, or be used for demonstrating visually to parents that progress is being made. The assistant director uses the tabulated data to prepare reports that indicate progress on each objective and can easily be fifteen pages long.

Parents receive these reports quarterly, and discuss them with classroom coordinators. However, parents can obtain visual evidence of their children's progress only by observing classroom activities through one-way mirrors or by watching videotapes. There is no artifact that combines visual evidence with expert assessment. We believe our system will do this effectively.

RELATED WORK
Our prototype follows the principle of "voluntary, explicit, task-appropriate interaction" that Arnstein et al. support in the second version of Labscape [1]. The cell-biology lab for which Labscape was designed is similar to Walden in that data must be recorded with scientific rigor. The first version of Labscape relied on sensors that could not "provide the detail, completeness, and reliability sufficient to the task."

Steurer et al. have chosen a sensor-based approach for another education environment, the Smart Kindergarten [6]. The authors suggest that data collected by sensors in a classroom can help teachers identify and address the learning problems of individual children.

DESIGN OF PROTOTYPE
With our prototype — designed in Macromedia Director and later implemented in Java — we transferred the Pla-Chek to a Tablet PC (Figure 1(b)). The prototype captured handwritten data as well as video from a webcam worn at the research assistant's beltline. The system tabulated the data as they were collected rather than requiring a teacher to do so later. The interface reduced the research assistants' cognitive load by providing a timer that counted two ten-second intervals for each line of data: an observation interval, then a handwriting interval.
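A bare-bones version of that alternating timer is sketched below; the callback names and the blocking-loop structure are our own choices for illustration, not details of the Director or Java implementations.

```python
# Sketch of the alternating ten-second capture cycle: observe for ten
# seconds, then write for ten seconds, for twenty intervals per session.
# Structure and names are illustrative, not from the deployed prototype.

import time

def run_session(on_observe, on_record, intervals=20, seconds=10):
    for i in range(1, intervals + 1):
        on_observe(i)        # prompt: watch the child
        time.sleep(seconds)
        on_record(i)         # prompt: fill in this interval's line
        time.sleep(seconds)

# run_session(lambda i: print("observe", i), lambda i: print("record", i))
```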
Figure 1: The paper Pla-Chek (a) was the template for our initial capture interface (b), in which we maintained, as much as possible, the look and feel of the original. User feedback led to the second iteration of the interface (c).

The access interface (Figure 2(a)) contained the video and two visualizations of the data: a "macro" timeline of the ten sessions recorded quarterly for each child, and a
"micro" timeline of the session being viewed. Data were represented on these timelines by dots. Variable names were displayed on the Y-axis and grouped by dot colors: red for proximity to and interaction from adults, gray for proximity to and interaction from typical children, green for proximity to and interaction from other CWAs, black for verbalization, blue for engagement and focus, and pink for autistic behaviors. Graphed on the X-axis of the macro timeline were the ten quarterly sessions; on the X-axis of the micro timeline, numbers indicated the progression of time, measured in minutes, through the video.

Dots in the micro timeline were uniform in size, and represented single positive recorded occurrences of variables; dot sizes in the macro timeline varied to indicate the percentage of positive results recorded in each session. There were five sizes of dots, representing values in 20-percent increments. We considered using more sizes for finer granularity, but we believed that constraints of screen real estate would prevent clear distinctions in sizes. When the user rolled over a dot in the macro timeline, the interface displayed the percentage represented. In both timelines, the percentages and number of occurrences of each variable were displayed at the end of the line. The user selected a session for review by clicking on its column in the macro timeline. That session's micro timeline and video then appeared. A vertical line moved along the micro timeline to help viewers relate variables to the actions displayed in the video. The access interface does not necessarily have to be viewed on the Tablet PC, although doing so would allow access in many settings.
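The macro-timeline encoding described above amounts to quantizing each session's percentage of positive recordings into one of five dot sizes; a minimal sketch follows, with pixel radii chosen arbitrarily for illustration.

```python
# Map a session's percentage of positive recordings to one of five
# dot sizes (20-percent bins). The pixel radii are arbitrary examples.

DOT_RADII = [2, 4, 6, 8, 10]   # smallest bin .. largest bin

def dot_radius(percent_positive):
    if not 0 <= percent_positive <= 100:
        raise ValueError("percentage out of range")
    bin_index = min(int(percent_positive // 20), 4)  # 100% falls in the top bin
    return DOT_RADII[bin_index]

assert dot_radius(0) == 2 and dot_radius(55) == 6 and dot_radius(100) == 10
```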
Figure 2: The access interface (a) has at the bottom a "macro" timeline that shows an overview of a child's ten quarterly Pla-Chek sessions. The micro timeline at the top right shows the results of the selected session, and the video for that session appears at the top left. A researcher performs capture during naturally occurring classroom activities, using a Tablet PC with a head-mounted camera attached (b).

SYSTEM IMPLEMENTATION
The Walden system was developed on top of the INfrastructure for Capture and Access Applications (INCA) toolkit [7]. INCA provides abstractions and reusable components that address capture-and-access concerns and facilitate application development.

The system has three INCA modules: a capture module to record annotations and video; a storage module to hold that information for later access; and an access module to provide synchronous access to multiple integrated streams of information gathered from context-based queries.

The capture interface is built on INCA's capture module, which supports the recording of video data and behavioral variables (Figure 3(a)). The video and handwritten annotations captured — with metadata describing when, what, and for which child information is being captured — are stored in a relational database using the storage module (Figure 3(b)). The access module draws on this database to compose the access interface (Figure 3(c)). In this interface, each marked behavior is an index into the video (Figure 3(d)).

The first capture interface used the Quill toolkit as a gesture recognizer, with a few changes that allowed for automatic interpretation and tabulation of the observers' data [2]. While this design supported a familiar method of data input, its deployment on a Tablet PC failed. Writing on a tablet was different from writing on paper in two important ways: calibration and resolution. Annotating boxes in the electronic form that were the same size as those on a paper version proved to be noticeably difficult, and the imperfect handwriting recognition resulted in a significant amount of time and effort being spent correcting the data. The research manager also found it difficult to keep children in the video frame while observing and annotating behaviors.

We redesigned the prototype to simplify capture. We used screen real estate more economically by replacing the spreadsheet with click boxes for "yes," "no," and "can't tell" (Figure 1(c)). The same set of boxes is used for each recording interval, with the number of the interval noted at the top. We replaced the cells for writing the names of teachers and classroom activities with drop-down menus from which the names can be selected. We added buttons
that can be used to place marks in the timeline when teachers or activities change; these marks remind the research assistants to make the changes using the drop-down menus after the session, avoiding interruptions. Handwriting and gesture recognition are no longer issues.

Figure 3: The capture interface (a) is built on the capture module of INCA, which supports the recording of video data and behavioral variables. The storage module (b) saves the data for use by the access module (c) in composing the access interface (d).

Each ten-second interval is added to a canvas that renders a quick review of the CWA's behavior throughout the session. A head-mounted bullet camera — which ensures all data are recorded during the heads-up observation interval — replaced the beltline webcam (Figure 2(b)). A notepad was also added, allowing the research assistants to associate handwritten notes with each recorded interval.

FUTURE WORK
We will add a harness to support the weight of the Tablet PC, as well as a belt-worn pack to hold the battery and controller for the bullet camera. We will develop a plan for deploying the capture and access modules, recording and reviewing quarterly data for several children, and evaluating the usefulness and usability of the system.

ACKNOWLEDGMENTS
We are grateful to the staff, parents, and children of the Walden Early Childhood Center.

REFERENCES
1. Arnstein, L., Borriello, G., Consolvo, S., Franza, R., Hung, C.-Y., Su, J., and Zhou, Q.H., "Labscape: Design of a Smart Environment for the Cell Biology Laboratory." Intel Research Seattle Technical Report IRS-TR-02-008, 2002.
2. Long, A.C. Jr., Landay, J.A., Rowe, L.A., and Michiels, J., "Visual Similarity of Pen Gestures," in Proceedings of CHI 2000, The Hague, The Netherlands.
3. Mackay, W.E., Fayard, A.-L., Frobert, L., and Médini, L., "Reinventing the Familiar: Exploring an Augmented Reality Design Space for Air Traffic Control," in Proceedings of CHI 1998, Los Angeles, California.
4. Maurice, C., Green, G., & Luce, S.C. (eds.). Preface to Behavioral Intervention for Young Children with Autism: a Manual for Parents and Professionals. Austin, Texas: PRO-ED Inc., 1996.
5. Romanczyk, R.G., "Behavioral Analysis and Assessment: the Cornerstone to Effectiveness." In Maurice, C., Green, G., and Luce, S.C. (eds.), Behavioral Intervention for Young Children with Autism: a Manual for Parents and Professionals (pp. 195-217). Austin, Texas: PRO-ED Inc., 1996.
6. Steurer, P., and Srivastava, M.B., "System Design of the Smart Table," in Proceedings of PerCom 2003, Dallas-Fort Worth, Texas.
7. Truong, K.N., and Abowd, G.D. "Enabling the Generation, Preservation & Use of Records and Memories of Everyday Life." Georgia Institute of Technology Technical Report GIT-GVU-02-02, January 2002.
The Narrator: A Daily Activity Summarizer Using Simple Sensors in an Instrumented Environment

Daniel Wilson
Robotics Institute
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15217 USA
[email protected]

Christopher Atkeson
Robotics / Human Computer Interaction
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15217 USA
[email protected]
automatically by a tracker in an instrumented environment. This summary represents important daily events in a compact, readable format, although the tracker provides many thousands of second-by-second location predictions. On the most basic level, the Narrator could produce an English account of the second-by-second location predictions. In our instrumented environment there were an average of 2000 readings per day. This scheme would produce volumes of not very useful information. Instead, we make a few simplifying assumptions and provide user-scalable levels of abstraction.

We make two assumptions. First, although we track several occupants simultaneously, we choose to create summaries for one occupant at a time. We also report only movement information and do not attempt activity recognition, except for sleeping. For sleeping we use a simple rule – if an occupant spends more than four hours in the bedroom, that time is tagged as sleeping. Second, the Narrator directly uses the maximum likelihood predictions of the tracker. Each of these predictions has an associated posterior probability, which we ignore for now. In future work we plan to incorporate this confidence measure into the Narrator's output.

We identify two areas in which reporting may be abstracted. First, we use the duration of time spent in a location to scale the amount of information reported on that movement. Second, we use sensor granularity to scale reporting from room level up to house level.

Transient Locations
Some locations are less interesting than others, because they are traversed constantly and quickly in order to reach end locations. Usually, transient locations are stairways and hallways. These locations demonstrate a marked decrease in the average amount of time spent compared to other locations. For example, in our experiments the staircases had mean durations of 5.5 seconds and hallways had mean durations of 10.3 seconds. On the other hand, the living room and study had a mean of 8.2 minutes.

The transience property of a location determines in what detail to report travel through that location. We use a threshold on mean duration spent in a room to identify transient spaces. We fit a Gaussian to the amount of time spent in these rooms to obtain an overall measure of transience. The Narrator tags travel through any room as transient if the amount of time spent there is within the transient mean and variance. In this way we simplify the summary without restrictive rules that completely ignore certain areas. With this information the user may choose to fully or partially ignore transient locations, and focus instead upon end locations where the occupant spends the most time. The sentences below were generated by the Narrator and demonstrate the three scales.

• Daniel entered the first floor hallway and stayed for 2 seconds. Daniel entered the kitchen and stayed for 10 minutes.
• Daniel passed through the first floor hallway, entered the kitchen and stayed for 10 minutes.
• Daniel walked to the kitchen and stayed for 10 minutes.
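The sketch below illustrates the transience test described above: a Gaussian is fit to the durations observed in hallway- and stairway-like rooms, and a visit is tagged as transient when its duration falls within the fitted mean and variance. The one-standard-deviation form of the test and the names are our reading of the description, not the authors' code.

```python
# Sketch of the transience test: fit a Gaussian to durations observed in
# hallway/stairway-like rooms, then tag a visit as transient when its
# duration lies within the fitted mean and variance. Our reading of the
# description; the exact cutoff used by the Narrator may differ.

from statistics import mean, stdev

def fit_transient_model(transient_durations):
    return mean(transient_durations), stdev(transient_durations)

def is_transient_visit(duration, model):
    mu, sigma = model
    return abs(duration - mu) <= sigma

model = fit_transient_model([5.5, 4.8, 6.0, 10.3, 9.7, 11.1])  # seconds
print(is_transient_visit(7.0, model), is_transient_visit(490.0, model))
```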
Sensor Granularity
The tracker can predict location at the granularity of individual sensors, although the current implementation reports at room level. The Narrator allows the user to scale the granularity from room level to floor level and to the entire house. The sentences below demonstrate room level, floor level, and house level granularity, respectively.
• Daniel woke at 8am. He walked to the bathroom and stayed for 15 minutes. He walked downstairs to the kitchen and stayed for 10 minutes. He passed through the foyer to the front porch and left the house.
• Daniel woke at 8am. He stayed on the second floor for 15 minutes. He went to the first floor and stayed for 10 minutes. He left the house.
• Daniel woke at 8am. He stayed home for 25 minutes. He left the house.

Algorithm
The Narrator algorithm is a conceptually simple deterministic finite state machine. It is composed of a set of states, an input alphabet, and a transition function that maps symbols and states to the next state. The states represent English words and phrases, while the input alphabet is composed of sensor readings and times. To add some variety to the language, some states have more than one transition for a given symbol. A lookup table maps the room and occupant ids reported by the tracker to room and occupant names.
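A drastically simplified rendering of that summarization step is sketched below: a lookup table turns tracker ids into names, and phrase templates are selected for each (room, duration) event, with a random choice among templates standing in for the multiple transitions per symbol. All names, templates, and thresholds are illustrative.

```python
# Minimal sketch of the Narrator's flavor of summarization: a lookup
# table maps tracker ids to names, and simple phrase templates are chosen
# from the (room, duration) input events. Purely illustrative.

import random

ROOM_NAMES = {0: "kitchen", 1: "first floor hallway", 2: "bathroom"}
OCCUPANT_NAMES = {7: "Daniel"}

TEMPLATES = [
    "{who} walked to the {room} and stayed for {mins} minutes.",
    "{who} entered the {room} and stayed for {mins} minutes.",
]

def narrate(occupant_id, events):
    # events: list of (room_id, duration_in_seconds) from the tracker
    sentences = []
    for room_id, seconds in events:
        if seconds < 30:            # skip transient pass-throughs
            continue
        sentences.append(random.choice(TEMPLATES).format(
            who=OCCUPANT_NAMES[occupant_id],
            room=ROOM_NAMES[room_id],
            mins=round(seconds / 60)))
    return " ".join(sentences)

print(narrate(7, [(1, 2), (0, 600)]))
```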
TRACKER
We wish to estimate the state of a dynamic system from sensor measurements. In our case, the dynamical system is one or more occupants and the instrumented environment. For this paper we track people at the room level, so a person's state, x, indicates which of N rooms they are in. Measurements include data from motion detectors, pressure mats, drawer and door switches, and radio frequency identification (RFID) systems. We solve the tracking problem off-line with a technique commonly known as smoothing, which uses information from both past and future time steps, providing higher accuracy for off-line purposes, such as a daily summary of movement activity.

Technological Infrastructure
We instrumented a house in order to conduct experiments using real data. The three-story house is home to two males, one female, a dog, and a cat. Our environment contains forty-nine sensors and twenty different rooms.
• Radio Frequency Identification (RFID): We use low frequency RFID to identify occupants entering and leaving the environment. Each occupant and guest is given a unique transponder, or 'tag'. When the credit-card-sized tag nears the RFID antenna it emits a unique identification number. Upon recognition of a tag the
tracker places a high initial belief that the occupant is at the antenna location. Note that using this tag is no different than using a house key; it is not necessary to carry the tag throughout the environment.
• Motion detectors: We use wireless X10 Hawkeye™ motion detectors. Upon sensing motion a radio signal is sent to a receiver, which transmits a unique signal over the power line. This signal is collected by a CM11A device attached to a computer. The detectors are pet-resistant, require both heat and movement to trigger, and run on battery power for over one year. There are twenty-four motion detectors installed.
• Contact switches: Inexpensive magnetic contact switches indicate a closed or open status. They are installed on every interior and exterior door, selected cabinet drawers, and refrigerator doors. There are twenty-four contact switches.

The sensors are monitored by a single Intel Pentium IV 1.8 GHz desktop computer with 512 MB RAM. We use an expanded parallel port interface to monitor contact switches, a serial interface to a CM11A device to monitor motion detector activity, and a serial interface to the RFID reader. All activity is logged in real-time to a MySQL database.

Tracking Formulation
Our goal is to estimate the probability distribution for each person's location, conditioned on sensor measurements. This probability distribution, the tracking system's "belief" or "information" state, is encoded as a length-N vector whose elements give the probability of being in the respective rooms. We use a discrete state Bayes filter to maintain the belief state Bel. Our belief that a person u is in room i at time t is:

Bel_t^u[i] = p_t^u(x = i | y_1, ..., y_t).

Here p() indicates probability and y_1, ..., y_t denotes the data from time 1 up to time t. Given a new sensor value, we can update the beliefs for all rooms. For room i:

Bel_{t+1}^u[i] = η · p_{t+1}^u(y | x = i) · Σ_{j=1..N} p^u(x_{t+1} = i | x_t = j) · Bel_t^u[j].

The variable η is a normalizing constant, so that the elements of any Bel vector sum to 1. In using a Bayes filter, we assume that our room-level states are Markov. This is an approximation, and one research question is whether we can accurately track people after making this approximation. We assume that each person u has a different motion model p^u(x_{t+1} = i | x_t = j) and sensor model p^u(y | x).
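The update above translates almost directly into code. The sketch below implements the same normalize(sensor-likelihood × motion-prediction) step for a single occupant, with toy model tables standing in for the learned motion and sensor models.

```python
# Discrete Bayes filter update for one occupant, mirroring the equation
# above: Bel'[i] is proportional to p(y | x=i) * sum_j p(x'=i | x=j) * Bel[j].
# The toy motion/sensor tables stand in for the learned models.

import numpy as np

def bayes_update(belief, motion, sensor_likelihood):
    # belief: length-N vector; motion: N x N with motion[j, i] = p(x'=i | x=j)
    predicted = belief @ motion                  # sum_j p(x'=i | x=j) * Bel[j]
    updated = sensor_likelihood * predicted      # multiply by p(y | x=i)
    return updated / updated.sum()               # eta: normalize to sum to 1

belief = np.array([0.5, 0.3, 0.2])               # three rooms
motion = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.1, 0.2, 0.7]])
likelihood = np.array([0.1, 0.7, 0.2])           # p(current reading | room)
print(bayes_update(belief, motion, likelihood))
```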
Data Association
Each sensor reading must be assigned to at least one occupant or to a noise process. This is the data association step. Our solution is to use an EM process to iteratively 1) estimate the likelihood of each occupant independently generating a given sensor sequence, and then 2) maximize by re-assigning ownership of sensor values [10]. We use the forward-backward algorithm to estimate the posterior beliefs, and then maximize the following quantity:

Σ_x p_t^u(y | x) · Bel_t^u(x).

Occupant Independence
Currently, we assume that occupants behave independently, an obvious approximation. In reality occupant movements are highly correlated. Conditioning on the presence of several other occupants increases the computational complexity of the problem, while including guests causes further growth in the number of required models. For this paper we were interested in testing the performance of a simpler model.

Motion Model
The equation p^u(x_t | x_{t-1}) represents the motion model for a specific occupant. This model takes into account where the occupant was at the previous time step and predicts how likely the current room is now. Our data is a time series of sensor measurements. All occupants are constantly generating streams of data that are combined in the database. For this reason, we learn motion models for each occupant using the entire database of sensor readings in which that occupant is home alone. We map each sensor to a state that represents a room and count transitions to generate an [N x N] table of transition probabilities.
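That counting procedure can be written in a few lines. The sketch below builds the N x N transition table from a room-level training sequence and row-normalizes it; the small additive smoothing term is our own addition so that unseen transitions keep nonzero probability.

```python
# Build a motion model as an N x N table of transition probabilities by
# counting consecutive room pairs in a training sequence. The additive
# smoothing constant is our own addition, not part of the paper.

import numpy as np

def learn_motion_model(room_sequence, num_rooms, smoothing=0.1):
    counts = np.full((num_rooms, num_rooms), smoothing)
    for prev_room, next_room in zip(room_sequence, room_sequence[1:]):
        counts[prev_room, next_room] += 1
    return counts / counts.sum(axis=1, keepdims=True)  # rows sum to 1

model = learn_motion_model([0, 0, 1, 2, 2, 2, 0], num_rooms=3)
print(model[2])   # p(next room | currently in room 2)
```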
EXPERIMENTS
We performed an uncontrolled experiment on a single occupant using 1288 sensor readings from when that occupant was home alone, collected over a two-day period. During this time one person moved through the house, visiting every sensor and moving with varying speed and direction. The occupant conducted several common tasks, such as making a sandwich and using the computer. The system was not running while the occupant slept. The tracker used a motion model trained for the occupant being tracked. Accuracy is measured as the fraction of time that the room location was predicted correctly. We performed 10 trials, training motion and sensor models on 90% of the data and testing on a rolling 10%. Using smoothing we found an accuracy of 99.6% ± 0.4.

We also report results from five days of continuous, unplanned, everyday movement of one to three people in the house. We measured tracker performance over a continuous five-day period. The tracker used individual motion models for the three occupants. There were no guests during this period. To evaluate performance we had
to hand-label the data. To make hand labeling feasible we gathered additional information from eight wireless keypads. The keypads have one button for each of the three occupants and one for guests. During that week, when anyone entered a room with a keypad, they pushed the button corresponding to their name. This information acted as road signs to help the human labeler disambiguate the data stream and correctly label the movements and identity of each occupant.

There were approximately 2000 sensor readings each day for a total of 10441 readings. When the house was occupied, on average there was one occupant at home 13% of the time, two occupants home 22% of the time, and all three occupants home 65% of the time. Note that each night every occupant slept in the house. On the whole, the tracker correctly classified 74.5% of sensor readings, corresponding to 84.3% of the time. There was no significant difference in accuracy between occupants. The tracker was accurate 84.2% of the time for one occupant, 81.4% for two occupants, and 87.3% for three occupants. Accuracy for three occupants drops to 74.5% when sleeping periods are removed.

CONCLUSION
We described the Narrator, a service that uses information from a tracker to provide daily movement summaries. We described algorithms that exploit information from binary sensors to perform tracking of several occupants simultaneously. We validated our algorithms using information gathered from an instrumented environment in a series of experiments and provided example output of the Narrator.

REFERENCES
1. Abowd, G., Atkeson, C., Bobick, A., Essa, I., MacIntyre, B., Mynatt, E., and Starner, T. (2000). Living Laboratories: The Future Computing Environments Group at the Georgia Institute of Technology. In Proceedings of the 2000 Conference on Human Factors in Computing Systems (CHI 2000), The Hague, Netherlands, April 1-6, 2000.
2. Addlesee, M., Curwen, R., Hodges, S., Newman, J., Steggles, P., Ward, A., and Hopper, A. Implementing a Sentient Computing System. IEEE Computer Magazine, Vol. 34, No. 8, August 2001, pp. 50-56.
3. Bennewitz, M., Burgard, W., and Thrun, S. Learning motion patterns of persons for mobile service robots. In Proc. of the IEEE Int. Conference on Robotics & Automation (ICRA), 2002.
4. Burgio, L., Scilley, K., Hardin, M., Janosky, J., Bonino, P., Slater, S., and Engberg, R. (1994). Studying Disruptive Vocalization and Contextual Factors in the Nursing Home Using Computer-Assisted Real-Time Observation. Journal of Gerontology, Vol. 49, No. 5, pages 230-239.
5. Burgio, L., Scilley, K., Hardin, M., and Hsu, C. (2001). Temporal patterns of disruptive vocalization in elderly nursing home residents. International Journal of Geriatric Psychiatry, 16, 378-386.
6. Clarkson, B., Sawhney, N., and Pentland, A. (1998). Auditory Context Awareness via Wearable Computing. In Proceedings of the Perceptual User Interfaces Workshop, San Francisco, CA.
7. Davis, L., Buckwalter, K., and Burgio, L. (1997). Measuring Problem Behaviors in Dementia: Developing a Methodological Agenda. Adv. Nurs. Sci., 20(1), 40-55.
8. Huang, T., and Russell, S. Object identification in a Bayesian context. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI-97), Nagoya, Japan, August 1997. Morgan Kaufmann.
9. Kanade, T., Collins, R., Lipton, R., Burt, P., and Wixson, L. Advances in cooperative multi-sensor video surveillance. In Proceedings of the 1998 DARPA Image Understanding Workshop, volume 1, pages 3-24, November 1998.
10. McLachlan, G.J., and Krishnan, T. (1997). The EM Algorithm and Extensions. Wiley Series in Probability and Statistics, 1997.
11. Mozer, M.C. (1998). The neural network house: An environment that adapts to its inhabitants. In M. Coen (Ed.), Proceedings of the American Association for Artificial Intelligence Spring Symposium on Intelligent Environments (pp. 110-114). Menlo Park, CA: AAAI Press.
12. Pasula, H., Russell, S., Ostland, M., and Ritov, Y. Tracking many objects with many sensors. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, 1999.
13. Schulz, D., Fox, D., and Hightower, J. People Tracking with Anonymous and ID-Sensors using Rao-Blackwellised Particle Filters. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), 2003.
14. Sidenbladh, H., and Black, M.J. (2001). Learning image statistics for Bayesian tracking. In IEEE International Conference on Computer Vision (ICCV), Vol. 2, pp. 709-716.
15. VanHaitsma, K., Lawton, M.P., Kleban, M., Klapper, J., and Corn, J. (1997). Methodological Aspects of the Study of Streams of Behavior in Elders with Dementing Illness. Alzheimer Disease and Associated Disorders, Vol. 11, No. 4, pp. 228-238.
Part III
Interactive Posters
Device-Spanning Multimodal User Interfaces
Elmar Braun, Andreas Hartl
Telecooperation Group
Department of Computer Science
Darmstadt University of Technology
Alexanderstr. 6, 64283 Darmstadt, Germany
{elmar, andreas}@tk.informatik.tu-darmstadt.de
cific modality and/or device. As a result, physical widgets are device dependent. The mapping subsystem utilizes context metadata such as the device used, its primary interaction method (graphics based or voice based) and additional information about the capabilities of the interface (e.g. the voice recognizer used, window metrics, etc.).

Physical widgets are registered to the mapping subsystem with information about what logical widget they map to, what modality they implement, and what constraints they have. Several physical widgets may be registered for one logical widget, e.g. for different modalities or for different implementations of one modality. At runtime, the mapping subsystem searches for the physical widget which best fits the device and chooses it to substitute the logical widget.

In the second step, the physical widgets render themselves onto the user interface. How this is done is modality specific. For voice based interaction, this could involve using text-to-speech for doing the output and generating context free grammars for specifying the input. The equivalent physical widget for GUIs may just call the appropriate element of the operating system's widget toolkit.
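A compact sketch of that registration-and-selection step is given below; the class names, the constraint format, and the selection rule are invented for illustration and do not reflect the authors' actual toolkit API.

```python
# Sketch of the two-step mapping: physical widgets register which logical
# widget and modality they implement, and at runtime the best fit for the
# target device is chosen. All class names and the selection rule are
# illustrative, not the authors' API.

from dataclasses import dataclass

@dataclass
class PhysicalWidget:
    logical: str        # e.g. "choice", "text_input"
    modality: str       # e.g. "gui", "voice"
    constraints: dict   # e.g. {"screen_width": 200}

    def fits(self, device):
        # every constraint must be satisfied by the device's capabilities
        return all(device.get(k, 0) >= v for k, v in self.constraints.items())

REGISTRY = [
    PhysicalWidget("choice", "gui",   {"screen_width": 200}),
    PhysicalWidget("choice", "voice", {}),
]

def map_logical_widget(logical, device):
    candidates = [w for w in REGISTRY
                  if w.logical == logical
                  and w.modality == device["modality"]
                  and w.fits(device)]
    return candidates[0] if candidates else None

phone = {"modality": "gui", "screen_width": 320}
print(map_logical_widget("choice", phone))
```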
4. ASSOCIATION AND MULTIPLE DEVICES
A user interface can obviously not exceed the limitations of the device it runs on. When mapping an interface to a small mobile target device, it may allow basic interaction in the absence of a better terminal. However, mobile devices are not always used in isolation. Often, the surrounding infrastructure could provide additional means of interaction. We intend to dynamically associate mobile devices with devices from the infrastructure in order to overcome their limitations regarding interaction. Since the number of possible combinations of devices is rather large, hand-coding a specialized UI for each combination is infeasible. The mapping subsystem will provide a scheme to render an interface on a federation of multiple devices. This concept has so far only been considered for playback of multimedia content [4].

Before such a federation can be established, one must detect that a device is within the user's range and that the device can be associated. We currently use two methods for detecting possible association between users and devices. One is the TA, which determines its wearer's head position (using two cameras tracking an infrared beacon on the TA) and gaze direction. The other consists of tags on each device, which transmit their ID using short range infrared, and badges on each user, which receive a tag's ID if the user is standing in front of the tagged device, and relay it to the network. The advantage of the latter solution is the low cost.

Mapping a user interface to span multiple devices introduces a number of novel problems:
• When adapting for a single device, there is no choice regarding which device to present a widget on. If several devices are available, the mapping needs to decide how to distribute widgets to devices, factoring in usability and device characteristics.
• While there are some dynamic device characteristics (e.g. battery status), most characteristics of a single device are fixed. If users move in and out of range of associated devices at runtime, the virtual target device of the mapping changes drastically at runtime, making mapping the UI for multiple devices a much more dynamic process.
• Despite constantly changing context, the mapping should not present the user with a constantly changing interface. This would inhibit usability since the user would have no chance to become accustomed to the UI. Therefore a history of how a UI was presented to a user before needs to be considered as an additional form of context.
• In the case of a single device, each widget is rendered exactly once. When using a federation of devices, it can make sense to render an element more than once, e.g. in different modalities to achieve multimodality, or on different devices to create some form of remote control.

We are investigating several mapping methods that take these criteria into account. Currently we are building a test bed that allows us to create distributed UIs, and to automatically send components of these to different devices in the room infrastructure. This allows us to experiment with distributed UIs, and to try out mapping algorithms for the distributed case. In future work it will be used for user studies.

5. CONCLUSION
We have presented a way to create multimodal applications whose user interface may span across several devices. The approach is based on a generalized concept of widgets as interaction elements. We have developed a mapping subsystem that determines the appropriate mapping of a logical widget at runtime based on the target device. By mapping at runtime we can support several modalities concurrently.

We have shown how several devices may be integrated into federations in order to define the target for device-spanning user interfaces. While the mapping subsystem is designed to cope with several modalities, distributed user interfaces pose additional challenges to the mapping which we have identified and are currently working on.

REFERENCES
[1] A. Hartl. A Widget Based Approach for Creating Voice Applications. In Proceedings of MobileHCI, Udine, Italy, 2003. To appear.
[2] M. Mühlhäuser and E. Aitenbichler. The Talking Assistant Headset: A Novel Terminal for Ubiquitous Computing. In Microsoft Summer Research Workshop, Cambridge, Sep 2002.
[3] D. R. Olsen, S. Jefferies, T. Nielsen, W. Moyes, and P. Fredrickson. Cross-modal interaction using XWeb. In Proceedings of the 13th annual ACM symposium on User Interface Software and Technology, pages 191-200, San Diego, USA, 2000. ACM Press.
[4] T. Pham, G. Schneider, and S. Goose. A Situated Computing Framework for Mobile and Ubiquitous Multimedia Access Using Small Screen and Composite Devices. In Proceedings of the 8th ACM international conference on Multimedia, pages 323-331, Marina del Rey, USA, 2000. ACM Press.
On the Adoption of Groupware for Large Displays:
Factors for Design and Deployment
Elaine M. Huang
College of Computing
GVU Center, Georgia Institute of Technology
Atlanta, GA, 30332-0280 USA
+1 404 385 1102
[email protected]

Alison Sue, Daniel M. Russell
IBM Almaden Research Center
USER Group
650 Harry Road
San Jose, CA, 95120 USA
{alisue, daniel2}@us.ibm.com
ABSTRACT
Groupware systems on large displays are becoming increasingly ubiquitous in the workplace. While these applications face many of the same challenges to adoption as conventional desktop-based groupware, the public and shared nature of these systems heightens these challenges and presents additional difficulties that can affect adoption and success. Our field study of seven large display groupware applications (LDGAs) uncovered several factors of their design and deployment that influenced their adoption and usage within the workplace.

Keywords
Large displays, groupware, collaboration, adoption patterns

INTRODUCTION
In his seminal CSCW article, Grudin outlined a number of challenges for the successful creation of groupware applications [1]. In the realm of LDGAs, we have found that common characteristics of these systems that distinguish them from desktop applications heighten the existing challenges and present new ones. Four of these characteristics are:
• Form factor – The size and visual impact of large displays cause users to perceive and interact differently.
• Public audience and location – The location in shared space affects the amount of attention users direct at LDGAs as well as the visibility and privacy of interactions.
• Not in personal workspace – The location outside of users' personal workspaces affects the amount and type of interaction and exploration in which users engage.
• Not individually owned – The lack of personal ownership of LDGAs affects the extent to which people use them or interact with the content.
We conducted a study involving three different groups: a) researchers working on LDGAs, b) members of workgroups in which LDGAs were deployed, and c) salespeople for a corporation that produces large displays and LDGAs. Our goal was to identify common factors affecting the success of adoption of these applications. Our study entailed face-to-face interviews, telephone interviews, and observations of seven systems that had had varying success in being adopted into normal workgroup tasks.

FACTORS AFFECTING THE ADOPTION OF LDGAs
Our research uncovered five important factors that were common across many of the systems we studied. Each stemmed from the four common characteristics of LDGAs that we identified. The factors are a combination of technical and social issues that influence system design as well as techniques for deployment that affect adoption and usage.

1. Task specificity and integration
The value and usefulness must be more evident than for conventional groupware because users may spend less time exploring and experimenting with LDGAs.
In many LDGAs, the specificity of the tasks involved was crucial to the adoption of a tool that seemingly supported general collaboration practices. Systems introduced for the sake of promoting specific collaboration or information sharing tasks generally were more successfully adopted than those introduced for general collaboration purposes. Tools designed or deployed to support specific tasks were more likely to be successful if they were deployed either for a task for which their use was critical or for a task whose content itself was critical to the user. In one example, professors teaching certain classes chose to make use of a collaborative display for teaching and class discussions. The use of and interaction with the technology was critical for the tasks of taking or teaching the class; students taking the class used the display not because they were required or told to do so, but because it was deeply integrated into critical tasks involved with being a part of the class. In another case, an LDGA was introduced and adopted for space exploration planning, a critical task whose inherently collaborative nature increased scientists' ability to carry out the task efficiently.

2. Tool flexibility and generality
LDGAs that support general collaborative practices may be adopted by new user groups or for novel tasks because of their high exposure and public and shared nature.
Although LDGAs introduced for specific tasks or tightly integrated with important tasks have had good success in being adopted, we have also observed the value of broad and
flexible collaboration support in their design. Most successful systems we observed provided support for a breadth of different practices that people employ to collaborate, even though the systems were deployed to support specific tasks. In short, tools that offer a variety of interaction methods that users can select as needed have been more widely adopted than those that lock users into very specific interactions.

A flexible tool that is deployed to support a specific task may also be appropriated for other tasks as people realize the tool's potential. A system that supports a broad set of collaborative practices may be used beyond its intended purpose. In one case, a tool designed to help visiting scientists collaborate was appropriated by teams of resident engineers because it provided them with general tools for creating shared digital artifacts as well as an easy method of distributing documents among users.

3. Visibility and exposure to others' interactions
The interactions of others demonstrate usage and value because the form factor and public nature of these applications can make user behaviors highly visible.
Although certain features existed of which users were aware, they were exposed to the potential value of the features only after observing others making use of them. In one particular instance, the item forwarding feature of an information sharing application in an LDGA existed in the interface for approximately three months before it received use. Though the feature was highly visible and people were aware of it, users did not perceive it as useful until they saw others using it. Through seeing people forwarding items, and possibly from receiving forwarded items, users began to use that feature and it became widely adopted. Because large displays are perceived as more public than desktop systems [2], the value of exposure to others' interactions on LDGAs can influence usage and the perception of value.

4. Low barriers to use
Barriers must be low so users can quickly discover value because LDGAs may be less amenable to exploration and have a lower frequency of use than desktop groupware.
It is important that users be able to interact successfully and easily with the system early in their usage in order for the system to be adopted into normal tasks. Systems that require significant time to install or configure, have time-consuming steps to initiate use, or have functionality that is not visible tend to find small audiences or a drop in usage after the initial deployment. In one application that requires user-submitted content, users have the option of posting information via a web form or an email address. Because email is perceived as quicker and easier than going to a form and filling it out, it is often used to post, while the web form is not. Another system that requires users to install and configure an application on their desktop machines in order to use the LDGA is used by only a small portion of its workgroup, despite a steady, long-term deployment. The researchers attributed this to the lack of an easy installation process.

5. Dedicated core group of users
Advocates and a core set of users early on help others to perceive usefulness and reduce hesitancy to use the system stemming from its form factor and location.
With all groupware applications, achieving critical mass is crucial to adoption [1]. Because LDGAs are generally less amenable to exploration and experimentation than desktop groupware, they are more likely to fall into disuse soon after deployment. Researchers who developed systems that were not very task specific found that adoption was aided by having a dedicated core group of users early in the deployment. This group, which often included the researchers, used the system regularly and encouraged usage by others after the initial burst of "novelty use" died down. Continued use by the core group ensured that displays remained dynamic and content fresh rather than stale. The perception that displays were being used and viewed encouraged further adoption into everyday use by a wider audience. Additionally, the core group advocated others' use by directly encouraging others to use the applications. For one application designed to share user-submitted items, core users encouraged coworkers to post information onto the displays that they had previously emailed to others. This encouragement was positive feedback to the senders of the information and helped lower initial hesitancy they felt about interacting with a new system, both technically and culturally.

FUTURE WORK AND CONCLUSIONS
The shared and public nature of LDGAs poses unique challenges for their design and deployment in addition to the challenges faced by conventional groupware. By surveying several systems, we identified some common factors affecting their success of adoption. Future work includes applying these lessons to our own LDGAs and refining our findings to better understand the dimensions, roles, and usage of these systems within workgroups.

ACKNOWLEDGMENTS
The authors would like to thank E. Churchill, A. Fass, R. Grawet, S. Greenberg, P. Keyani, S. Klemmer, L. Leifer, A. Mabogunje, A. Milne, J. Trimble, R. Wales, and T. Winograd for sharing their projects, reflections, and valuable insights with us.

REFERENCES
1. Grudin, J. Groupware and social dynamics: Eight challenges for developers. Communications of the ACM, 37, 1, 1994, 92-105.
2. Tan, D.S., and Czerwinski, M. (2003). Information Voyeurism: Social Impact of Physically Large Displays on Information Privacy. Extended Abstracts of CHI 2003, Fort Lauderdale, FL.
150
Super-Compact Keypad
Roman Ilinski
Cybernetics Council Labs, Moscow, Russia
CRS DM, 141 N 76 St, Seattle, WA 98103
http://www.geocities.com/senskeyb
[email protected]
position sensors. If any key is touched, its sensor sends a corresponding identity to the application for preview. If any key is pressed, then a mechanical key sends a common input signal, because the application already knows which key identity needs to be entered. Each mechanical pushbutton key does not need to send the key's identity signal to the application – only the input command needs to be sent.

fingertip of the user. Such a thin and elastic cover surface can be used for speed typing and is versatile enough to be made in different sizes and shapes to fit a design of ultra-portable devices.
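The preview-then-commit behaviour described above can be summarised in a few lines. The sketch below is purely illustrative; the class and callback names are assumptions, not taken from the abstract.

# Illustrative sketch: touch sensors report a key identity for preview,
# and the mechanical press sends a single, identity-free commit signal.
class SuperCompactKeypad:
    def __init__(self, on_preview, on_commit):
        self.on_preview = on_preview   # e.g. show the candidate character
        self.on_commit = on_commit     # e.g. append it to the text field
        self.previewed_key = None

    def touch(self, key_id):
        # Position sensor fires: the application learns which key the
        # fingertip is resting on, but nothing is entered yet.
        self.previewed_key = key_id
        self.on_preview(key_id)

    def press(self):
        # Mechanical pushbutton fires; the application already knows
        # which key identity to enter.
        if self.previewed_key is not None:
            self.on_commit(self.previewed_key)

pad = SuperCompactKeypad(on_preview=lambda k: print("preview:", k),
                         on_commit=lambda k: print("entered:", k))
pad.touch("a")
pad.press()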
EnhancedMovie: Movie Editing on an Augmented Desk
Yoko Ishii*, Yasuto Nakanishi*, Hideki Koike*, Kenji Oka**, Yoichi Sato**
moved to a location between the two pictures where the releasing gesture was done (Fig. 1a). When the place is not between two pictures, the picture is not moved. When the user moves the fist outside of the desk keeping the hand closed, the picture is cut (Fig. 1b).

(a) Moving some pictures at once: {1, 2} grabbing pictures with both hands; {3} moving the pictures; {4} releasing the pictures.

gestures [3]; those gestures include drawing a circle, drawing a rectangle, and waving a finger. We will make the following functions correspond to these gestures: forwarding and rewinding a movie, inserting a frame for texts, and undoing an operation (Fig. 4a). We will implement other gestures that use the direction of hand movement, joining both hands, and sliding a hand. These gestures will correspond to the following functions: grouping pictures or movies, and adding an animation effect to the movie (Fig. 4b). When the user joins both hands on two pictures, the pictures between the two will be grouped together. An animation effect will be added according to the direction of the moving hand. For example, when the user moves a hand on a picture to the right, the system will add a slide-out effect to the right.
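A dispatch table makes the planned gesture-to-function mapping concrete. The sketch below is purely illustrative: the gesture names, the handler functions, and the particular pairing of gestures to functions are assumptions, not taken from the EnhancedMovie implementation.

# Illustrative only: hypothetical gesture names, arbitrary pairing.
def forward_movie():      print("forward the movie")
def insert_text_frame():  print("insert a frame for text")
def undo_operation():     print("undo the last operation")

GESTURE_ACTIONS = {
    "circle": forward_movie,        # pairing chosen arbitrarily for the sketch
    "rectangle": insert_text_frame,
    "wave_finger": undo_operation,
}

def on_gesture(gesture_name):
    handler = GESTURE_ACTIONS.get(gesture_name)
    if handler:
        handler()

on_gesture("circle")   # -> forward the movie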
Instructions Immersed into the Real World–
How Your Furniture Can Teach You
Florian Michahelles¹, Stavros Antifakos¹, Jani Boutellier¹, Albrecht Schmidt², Bernt Schiele¹
¹ETH Zurich, Switzerland ²University of Munich, Germany
{michahelles, antifakos, janbo, schiele}@inf.ethz.ch [email protected]
http://www.vision.ethz.ch/projects/furniture
provides help as needed without requiring any special
action or initiative on the part of the user.
FURNITURE INSTRUCTIONS
For the furniture application we identified five types of
feedback the user should receive:
1. direction of attention
2. positive feedback for right action
3. negative feedback for wrong action
4. fine grain direction
5. notification of finished task
This enables users to explore how the furniture has to be assembled. Users unwrap the flat-pack and their attention gets directed immediately (1) to the parts they are supposed to start with. Users' actions, such as turning and moving boards, are sensed, and blinking green light patterns indicate which edges have to be connected in which manner. If boards are aligned in the proper way, a synchronized green light pattern (Fig. 1) indicates a well-performed action (2).
Fig. 2: Architecture Diagram
For the output functionality we have developed a custom layout board carrying eight dual green/red LEDs. Those boards are attached to the connecting edges of each furniture part (Fig. 3).
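A small illustrative sketch of how the five feedback types listed above might be selected from sensed assembly state; the state variables and the decision order are assumptions, not the authors' firmware.

from enum import Enum

class Feedback(Enum):
    DIRECT_ATTENTION = 1   # (1) point the user at the next parts
    POSITIVE = 2           # (2) right action: synchronized green pattern
    NEGATIVE = 3           # (3) wrong action
    FINE_GRAIN = 4         # (4) blink the edges that must be connected
    FINISHED = 5           # (5) whole task complete

def select_feedback(step_started, edges_aligned, wrong_part_used, all_steps_done):
    if all_steps_done:
        return Feedback.FINISHED
    if wrong_part_used:
        return Feedback.NEGATIVE
    if edges_aligned:
        return Feedback.POSITIVE
    if step_started:
        return Feedback.FINE_GRAIN
    return Feedback.DIRECT_ATTENTION

print(select_feedback(step_started=True, edges_aligned=True,
                      wrong_part_used=False, all_steps_done=False))
# Feedback.POSITIVE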
i-wall: Personalizing a Wall as an Information Environment
with a Cellular Phone Device
Yu Tanaka, Keita Ushida, Takeshi Naemura, Hiroshi Harashima
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
+81 3 5841 6781
{yu, ushdia, naemura, hiro}@hc.t.u-tokyo.ac.jp
Yoshihiro Shimada
NTT Cyber Space Laboratories, NTT Corporation
1-1 Hikari-no-Oka, Yokosuka-shi, Kanagawa 239-0847, Japan
+81 468 59 3114
[email protected]
displayed in the user's window, not on the whole wall. To see the whole image, he/she has to move along the wall, which gives the user the feeling of seeking treasures on the wall.
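A minimal sketch of the per-user window idea described above: only the slice of the wall-wide image around the user's current position is rendered in that user's window. The parameter names and units are assumptions, not the i-wall implementation.

def visible_slice(wall_image_width, window_width, user_x):
    # Clamp the window so it stays within the wall image.
    left = max(0, min(user_x - window_width // 2,
                      wall_image_width - window_width))
    return left, left + window_width

# A user standing 2.4 m along a 10 m wall, with a 1 m-wide window
# (expressed here in pixels):
print(visible_slice(wall_image_width=1000, window_width=100, user_x=240))
# (190, 290)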
Healthy Cities Ambient Displays
Morgan Ames1, Chinmayi Bettadapur1, Anind Dey1,2, Jennifer Mankoff1
1Group for User Interface Research, EECS Dept., University of California, Berkeley
2Intel Research, Berkeley, Intel Corporation
[email protected]
ABSTRACT
The Healthy Cities project addresses the lack of publicly-available information about city health. Through interviews and surveys of Berkeley residents, we have found that city health includes a wide variety of economic, environmental, and social indicators. We are building public ambient displays that make city health more visible and encourage change by highlighting the value of individual contributions.
Keywords: ambient displays, peripheral displays, city health, sustainability indicators
INTRODUCTION
City datasets such as air quality, crime rates, energy usage, or recycling amounts can be powerful indicators of city health; however, it is often difficult for city residents to access this information or interpret it. Despite the wealth of information collected about various aspects of city health, residents know little about this information or how they can make a noticeable contribution, leading to feelings of frustration or helplessness. The Healthy Cities project aims to make city health information more publicly visible by displaying easily interpretable health indicators in public places such as transit hubs, shopping districts, or public buildings. We hypothesize that this information will empower residents to improve city health by giving them a better sense of what they can do and by making them feel like their actions are visible.
Healthy Cities
We have chosen to display city health information in the form of an ambient display, which provides a continuous stream of information in a simple format that can be interpreted at a glance. Because our target locations are places where people will be passing through and will have only peripheral awareness of their surroundings, the easily-readable nature of ambient displays lends itself well to these locations. We have also noted that few ambient displays have been built for the general public, and were interested in exploring this design space.
Ambient Displays
Ambient displays are devices that peripherally provide a continuous stream of information. Ambient displays show non-critical information in a simple, intuitive, and aesthetic way, reducing the cognitive load of users. Researchers at PARC, M.I.T. Media Lab, Carnegie Mellon University, Georgia Tech., Viktoria Institute, and elsewhere have designed various displays, including a "dangling string" that twitches with network activity [1], a water lamp that casts rippling shadows, pinwheels that provide awareness through sound and air flow [2], a pixellated ambient display [3], a "Digital Family Portrait" that gives peripheral awareness of remote family members [4], and informative art pieces [5].
METHOD
We began our investigation of city health by conducting in-depth, exploratory interviews of six East Bay residents. Participants were recruited from flyers in grocery stores and posts on Craigslist (an online community forum). We followed up the interviews with a culture probe [6], consisting of four postcards that encouraged our six participants to provide additional details on their day-to-day perceptions of city health.
Responses were categorized into broad topics, which were used to create a follow-up survey. The survey included 33 yes-no and Likert-scale questions and ten written-response questions, asking about the importance of various indicators of city health. Questions were divided into ten groups: neighbors and safety, diversity, environment and conservation, public events, city history, volunteerism, shopping and economics, schools, transportation, and individual health. Surveys were distributed to over 300 people in post offices and farmers' markets in Berkeley, and a link to an online survey was published on Craigslist.
RESULTS
The interviews and surveys showed us that city health includes myriad indicators such as public school conditions, air quality, effective minimum wage, maintenance of houses and streets, unemployment, individual health, racial diversity, pedestrians, public events, and more. Of these indicators, the ones that are quantitative and are updated often are more suitable for public ambient displays.
Interviews
The interviews and culture probe postcards gave us a qualitative sense of city health. The participants were two women and four men, with ages ranging from 25-55 years. Three were Caucasian, and the three others were Lebanese, Asian and Latina, respectively. Although our participants had diverse definitions of city health, most or all mentioned certain indicators: the number of locally-based businesses in the community (all 6 participants), the number of parks or amount of green space (5), diversity (5), uniqueness (5), safety and poverty (4), pedestrians (4), and public events (4). These gave us a sense of areas to cover in our survey.
Surveys
145 residents of Berkeley and nearby Oakland, El Cerrito, and Richmond completed the survey, 95 from in-person recruiting and 50 online. Of these, 90 were female and 52 male, and the ethnic and income distribution was very similar to Berkeley's 2000 census data, suggesting that we succeeded in getting a uniform sample by ethnicity and income, though not by gender.
In our analysis of the survey, we found that thirteen indicators received average ratings of 4.0 or above out of 5 in terms of their importance to city health (5 being "very important"). All of these had modes of 5. These indicators are summarized in Table 1.
Table 1. Indicators that received average ratings of at least 4 out of 5 in importance to city health.
Displays
While all of these indicators could be used to develop interesting displays, the two indicators we chose to focus on first are electricity usage, as part of resource management, and recycling. Although these were not brought up in our interviews, we chose them because they were important to our survey takers (which had a much larger sample size than our interview pool) and are quantitative, measurable, constrained, and frequently updated, and have accessible data sources. These characteristics are important because the display should be credible and should noticeably change for people who will see it on a daily basis.
Unfortunately, we could not gain access to citywide data for either source, so we have focused on the activity in one recycling bin as a microcosm of city recycling, and light pollution levels at night as an estimate of electricity usage.
We have designed a preliminary recycling display, which will use load cells to sense a can thrown into a particular recycling bin. A visual meter rises when the weight in the bin changes to give users a sense of what their contribution was worth. The interface runs on a Sony Clio, and currently has been made to work in a simulated environment where the addition of a can is simulated with the clicking of a button on the touch-screen.
We have also designed a preliminary electricity display, which uses computer vision to sense the amount of light pollution given off by lights in the city of Berkeley at night. Multiple cameras are used to collect aerial views of the city every few minutes. These images are analyzed for brightness characteristics and aggregated across cameras. The resulting brightness information is overlaid on a map of Berkeley and presented on a screen to users.
FUTURE WORK
We plan to continue design on our two display prototypes, and possibly design more displays for other city health indicators such as air quality or public events. These displays should be evaluated for their effects on public awareness and action. If successful, Healthy Cities displays could be extended to other cities to raise awareness of city health.
ACKNOWLEDGMENTS
We would like to thank Joseph McCarthy, Greg Niemeyer, David Gibson, and Timothy Brooke for their feedback and suggestions.
REFERENCES
1. M. Weiser, J. S. Brown. Designing Calm Technology. http://www.ubiq.com/weiser/calmtech/calmtech.htm. December 1995.
2. H. Ishii, B. Ullmer. Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. Proceedings of the Conference on Human Factors in Computing Systems, pages 234-241. ACM Press, March 1997.
3. J. Heiner, S. Hudson, K. Tanaka. The Information Percolator: Ambient Information Display in a Decorative Object. ACM Symposium on User Interface Software and Technology, pages 141-148. ACM Press, November 1999.
4. E. Mynatt, J. Rowan, A. Jacobs, S. Craighill. Digital Family Portraits: Supporting Peace of Mind for Extended Family Members. Proceedings of CHI 2001, pages 333-340. ACM Press, March 2001.
5. J. Redström, T. Skog, L. Hallnäs. Informative Art: Using Amplified Artworks as Information Displays. Proceedings of DARE 2000, pages 103-114. ACM Press, April 2000.
6. W. Gaver, T. Dunne, E. Pacenti. Cultural Probes. Interactions, pages 21-29. ACM Press, Jan/Feb. 1999.
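As a minimal sketch of the brightness aggregation described above for the electricity display: the array shapes, the 0-255 scaling, and the simple mean across cameras are assumptions, not the project's code.

import numpy as np

def frame_brightness(frame):
    # Mean pixel intensity of an 8-bit grayscale night-time frame, scaled 0..1.
    return float(frame.mean()) / 255.0

def city_light_level(frames):
    # Aggregate brightness across all cameras (a simple mean here).
    return sum(frame_brightness(f) for f in frames) / len(frames)

# Example with two fake camera frames:
cams = [np.full((480, 640), 30, dtype=np.uint8),
        np.full((480, 640), 90, dtype=np.uint8)]
print(round(city_light_level(cams), 3))   # ~0.235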
LaughingLily: Using a Flower as a Real World Information
Display
Stavros Antifakos and Bernt Schiele
ETH Zurich, Switzerland
{antifakos, schiele}@inf.ethz.ch
that can droop its petals or show its bud in full bloom, thus representing a sad or a happy state. The flower stands in the middle of the meeting table and changes its stature depending on the surrounding sound. If nobody is talking, the flower lets its petals droop. If a conversation at an intermediate volume is going on, the flower moves towards full bloom. If an argument breaks out, the flower starts drooping again.
To be able to react to the audio activity of the participants, the flower is connected to a microphone. Using multiple directed microphones, each connected to an individual flower, the display can show which participants in the meeting are dominating the discussions and which have not spoken for some time. A similar effect can be achieved by placing two or three flowers on the table at different positions. This way, the meeting participants are not directly exposed as too loud or too silent, but one side of the table is accused as a whole.
LaughingLily - Implementation
LaughingLily is an artificial lily extended with an electro-mechanical system. The microphone on a Smart-Its [4] sensor board was used to capture an audio signal. The onboard processor (PIC microcontroller) calculates the energy level of the signal, representing the loudness of the people speaking. To make the flower move, a servo motor was controlled directly from the sensor board. A shaft connecting the servo motor with a cup-shaped plastic part actuates the flower's petals. The whole system can either be powered by batteries (working for several days using 4xAA batteries) or directly from the mains.
First Impressions and Discussion
On the first afternoon LaughingLily was standing on the coffee table in our hall, many comments from office colleagues were made about how sad or how pretty the flower looked. They soon learnt that when someone is talking in the environment, the flower starts lifting up her petals. After the novelty wore off, everybody continued with their everyday business. LaughingLily had become integrated into the physical space and became a peripheral display.
We believe the quick integration of LaughingLily resulted mainly from having a physical object as the display itself. An object can be moved around in space and be placed amidst people. In this way the display is adapted to the situation instead of having the people adapt to the display.
As expected, due to the natural association between LaughingLily and a real flower, first experiments showed how people's emotions can be invoked. A drooping flower is naturally associated with sadness, whereas a flower in full bloom can trigger happiness to a certain extent. Exploiting these associations people have with physical objects could be a powerful tool in interface design.
OTHER APPLICATIONS
Beyond the meeting application presented in the previous section, many more applications are imaginable. In office environments LaughingLily could be used to warn computer users about repetitive strain injuries by letting the petals droop if someone hasn't had a break for a long time. Further, LaughingLily could display to co-workers how interruptible one is, depending on approaching deadlines, calendar information or e-mail load.
In the domestic environment LaughingLily could act as a progress bar for simple procedures. For example, the flower could show how far along the washing machine is by slowly elevating its petals. It could show how long the cake has been in the oven. The petals would then simply start to droop again if the cake has been in for too long.
Finally, LaughingLily can display the interaction level between conference participants and the poster presenter at a conference such as UbiComp 2003.
CONCLUSIONS AND FUTURE WORK
In this paper we have presented LaughingLily – an ambient display embodied by a flower. Although we have yet to conduct a comprehensive user study, we have shown how well such an ambient display - in the form of a flower - can integrate into the environment. We believe that displaying feedback to the user in a physical object is the key to making ubiquitous computing applications calmer and more suitable to human needs.
Besides exploring LaughingLily's effects on meeting participants in a larger user study, we want to continue developing further physical feedback devices.
ACKNOWLEDGEMENTS
The Smart-Its project is funded in part by the Commission of the European Union under contract IST-2000-25428, and by the Swiss Federal Office for Education and Science (BBW 00.0281).
REFERENCES
1. Holmquist, L. E., and Skog, T. Informative Art: Information Visualization in Everyday Environments. In Proceedings of Graphite 2003.
2. Johanson, B., Fox, A., and Winograd, T. The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. In IEEE Pervasive Computing 1(2), 2002.
3. Pinhanez, C. The Everywhere Displays Projector: A Device to Create Ubiquitous Graphical Interfaces. In UbiComp 2001.
4. Smart-Its Project: http://www.smart-its.org/
5. Weiser, M., and Brown, J. S. Designing Calm Technology. December 1995. http://www.ubiq.com/hypertext/weiser/calmtech/calmtech.htm
6. Wisneski, C., Ishii, H., Dahley, A., Gorbet, M., Brave, S., Ullmer, B., and Yarin, P. Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information. In Proceedings of the International Workshop on Cooperative Buildings (CoBuild '98).
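A minimal sketch of the loudness-to-petal mapping described in the implementation section above; the thresholds, angles and function names are assumptions, and the real system runs on a PIC microcontroller rather than in Python.

def audio_energy(samples):
    # Mean squared amplitude of one audio frame, as a loudness estimate.
    return sum(s * s for s in samples) / len(samples)

def petal_angle(energy, quiet=0.01, loud=0.25, droop_deg=0, bloom_deg=90):
    if energy < quiet:        # nobody talking: petals droop
        return droop_deg
    if energy > loud:         # argument breaking out: droop again
        return droop_deg
    return bloom_deg          # intermediate volume: move towards full bloom

frame = [0.2, -0.18, 0.22, -0.19]         # fake audio frame
print(petal_angle(audio_energy(frame)))    # 90: intermediate volume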
Habitat: Awareness of Life Rhythms over a Distance Using
Networked Furniture
Dipak Patel and Stefan Agamanolis
Human Connectedness group
Media Lab Europe, Sugar House Lane, Bellevue, Dublin 8, Ireland
{dipak, stefan}@medialabeurope.org
Each station consists of a networked Linux computer, an RFID tag reader and a video projector.
Two people having a long-distance relationship (Figure 1) use the Habitat system as follows: When objects (with RFID tags embedded inside) are placed on the coffee table, they are sensed by the tag reader, which uniquely identifies each object. The tag reader is polled regularly by the computer to check if any items have been added or removed. Such events cause messages to be sent to the coffee table in the remote partner's living space. The remote coffee table displays a corresponding representation of the opposite person's activity (Figure 2) and their overall daily cycle on the surface of the table, using an appropriately mounted video projector. When items are removed, the displaying coffee table gradually fades away that representation.
Fig. 2 - A typical sequence within a visualisation
Habitat takes into consideration several design guidelines in creating connectedness applications [1]:
• The system should behave like an appliance that is always on and connected, to foster a sense of continuity - an open link between the users.
• Participating with Habitat should require no change in the user's normal behavior and not alter the furniture's original use.
• The visualisations should be non-distracting, so they can be viewed across the room and in the periphery of vision without distraction. The visualisations are designed to indicate presence of the remote partner over a duration of time, so that observers are free to move around the living space and not have to constantly watch the display.
• The system should express the notion of a digital wake. A digital wake is a visual construct that allows the users to ascertain the history of previous interactions. When an activity ends, its representation gradually fades out but is never completely removed from the display. This gives users who return to their living space a mechanism to interpret what took place while they were absent.
Privacy and trust issues are dealt with implicitly, as the furniture only connects into the personal space of a loved one, a person with whom a high level of trust is already shared. Users are also made well aware of the specific artifacts that trigger the communication between Habitat stations. Reciprocity is important for limbic regulation; since each station is a duplicate, awareness flows in both directions in a continual feedback loop.
CURRENT STATUS AND FUTURE DIRECTION
The first phase of Habitat is complete: a proof-of-concept demonstrator system which acts as a platform for conducting experiments and extending ideas. A range of visualisations that describe remote activities have been created. A forthcoming trial will be used to determine the effectiveness and appeal of these different visualisations to potential users.
Future versions of Habitat will concentrate on the capture of more complex routines and activities. We plan to use biomedical technologies in concert with the connected furniture platform to monitor users' body temperatures, heart rates and other well-known metrics for tracking biorhythms with additional accuracy. Humans have several bodily rhythms that affect how we feel in addition to circadian rhythms, such as ultradian (~90 minutes), infradian (many days) and circannual (~1 year). There are also several environmental factors that alter or reset body clocks (known as zeitgebers) that could be accounted for within visualisations.
The aim of this research is to determine if we can successfully convey awareness of rhythms over a distance and if doing so can provide similar levels of reassurance and intimacy as physical proximity of partners in a domestic setting.
The eventual goal would be to install suitably evolved iterations of the technology with many groups of people outside of the laboratory environment and assess their use in a study - prime candidates being people who endure separation from family and partners for prolonged periods of time, such as off-shore workers or military personnel.
REFERENCES
1. Agamanolis, S. "Designing Displays for Human Connectedness," in Kenton O'Hara et al., eds., Public and Situated Displays, Kluwer, 2003.
2. Bentley, E. Awareness: Biorhythms, Sleep and Dreaming. Routledge, 1999.
3. Lewis, T., Amini, F. and Lannon, R. A General Theory of Love. Vintage Books USA, 2001.
4. Siio, I. et al., "Peek-a-drawer…," in CHI '02 Extended Abstracts, ACM Press, 2002.
5. Streitz, N. et al., "Roomware: The Second Generation," in CHI '02 Extended Abstracts, ACM Press, 2002.
6. Wisneski, C. et al., "Ambient Displays…," Proc. CoBuild '98, Springer, 1998.
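The add/remove behaviour described above can be summarised as a small polling loop. The sketch below is only illustrative: the function names, the event vocabulary, and the polling interval are assumptions rather than the Habitat implementation.

import time

def poll_loop(read_tags, send_event, interval=1.0):
    # read_tags() returns the set of RFID tag IDs currently on the table;
    # send_event(kind, tag) ships an event to the remote coffee table.
    previous = set()
    while True:
        current = set(read_tags())
        for tag in current - previous:
            send_event("added", tag)     # remote table starts a representation
        for tag in previous - current:
            send_event("removed", tag)   # remote table fades that one out
        previous = current
        time.sleep(interval)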
Smart Home in Your Pocket
Louise Barkhuus
Department for Design and Use of IT
The IT University of Copenhagen
Glentevej 67, Copenhagen 2400, Denmark
[email protected]
Anna Vallgårda
Department of Computer Science
University of Copenhagen
Universitetsparken 1, Copenhagen 2100, Denmark
[email protected]
The final example alerts the user if he is about to forget his cell phone in the morning. If he opens the front door between 7 and 9 in the morning and the cell phone is still located in its cradle, the cell phone alerts the user with a loud beep that he has not taken it with him.
RELATED WORK
A fair amount of research has focused on developing smart homes; one example is The Aware Home at Georgia Institute of Technology [5]. Here, the purpose was to make the sensors learn about the users' habits to facilitate the development of human-centered applications for a rich sensor infrastructure. MIT's House_n is built with another goal: to teach and motivate the user to take control in a sensor-augmented house instead of having the smart house override the user's actions with inappropriate behavior [4]. In our view, the goal of a smart home is to assist individual inhabitants with everyday tasks by tailoring functions to their habits and behavior.
Other relevant work includes context-aware applications for handheld units such as the Tour Guide and the Cyber Guide [2,1]. These applications change their content according to the surrounding context, for example location, time and identity of the user. Finally, iCAP is a system that enables users to create context-aware applications [7]. But where iCAP focuses on end-user programming on a desktop computer, HYP goes all the way and enables mobile users to define their own criteria 'on the go'.
IMPLICATIONS OF THE HYP APPROACH
While HYP is in essence still an outer layer of the prototype, it illustrates a new way of specifying a smart home. It is our goal to empower the users by giving them simple options for dynamic functions. By making it easy to revise existing sub-applications, the chance that users will reject context-aware functions is diminished, because the user can change them to better fit his needs. However, in order to keep HYP simple, the options provided are limited. When creating a timer, for example, the user is left with few selections (see figure 2), which in some cases might not satisfy the user's needs. It is likely that the user wishes to create applications that are not possible and finds that it is difficult to define the right criteria for a specific action. Most people lead irregular lives, resulting in exceptions that might initiate the action at the wrong time. However, the HYP approach makes users understand why the system acts like it does, because they specify the conditions themselves.
Since we have not performed any formal user evaluation of HYP, this is the next step. Testing how users interact with it, and seeing if they are able to create desirable applications, is essential for further deployment. Our second goal is to connect the HYP prototype to a sensor-equipped smart home in order to develop the application further and get real user feedback. Finally, it should be considered which other environments would likely benefit from a similar approach.
REFERENCES
1. Abowd, G. et al. (1997): Cyberguide: a mobile context-aware tour guide, Wireless Networks 3(5):421–33.
2. Cheverst, K. et al. (2000): Developing a context-aware electronic tourist guide: some issues and experiences, Proc. of CHI, pp. 17–24.
3. Java 2 Platform, Micro Edition, sun.java.com/j2me.
4. Intille, S. S. (2002): Designing a Home of the Future, Pervasive Computing, April–June, pp. 80–86.
5. Kidd, C. et al. (1999): The Aware Home: A Living Laboratory for Ubiquitous Computing Research, Proc. of the 2nd Int. Workshop on Cooperative Buildings, Integrating Information, Organization, and Architecture, pp. 191–198.
6. Meyer, S. and Rakotonirainy, A. (2003): A Survey of Research on Context-Aware Homes. Proc. of the Australasian Information Security Workshop Conference on ACSW Frontiers 2003, pp. 159–168.
7. Sohn, T. and Dey, A.K. (2003): iCAP: an informal tool for interactive prototyping of context-aware applications, Proc. of CHI '03 Extended Abstracts, pp. 974–975.
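As a concrete (but invented) illustration of the kind of user-specified condition-action rule in the forgotten-phone example above; the event names and helper functions are assumptions rather than HYP's actual data model.

from datetime import time

def forgot_phone_rule(event, now, phone_in_cradle, beep):
    # User-specified conditions: front door opened between 7 and 9 am
    # while the cell phone is still in its cradle.
    if (event == "front_door_opened"
            and time(7, 0) <= now <= time(9, 0)
            and phone_in_cradle):
        beep()

forgot_phone_rule("front_door_opened", time(7, 45), True,
                  beep=lambda: print("BEEP: you forgot your cell phone!"))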
SiteView: Tangibly Programming Active Environments
with Predictive Visualization
Chris Beckmann
Computer Science Division
University of California at Berkeley
[email protected]
Anind K. Dey
Intel Research, Berkeley
Intel Corporation
[email protected]
Figure 1: The condition composer is at front; the large screen is the environment display. The laptop is the rules display, and there is a lightbulb interactor on the WIM floorplan. The configuration shown creates a rule to turn on the north lamp on rainy Monday mornings.
(weather, day, and time). The user can also use the environment display to simply check automation settings for a particular set of conditions, including the current ones.
The rules display shows the rule as it is created and shows other rules applicable for the given set of conditions. The rules display provides the user with explicit feedback about the internal state of the control system, which supports a more transparent user understanding of system behavior. As rules are being created, SiteView displays them as English-like sentences. SiteView also displays the relevant set of existing rules as the user specifies predicate conditions.
USE SCENARIO
As an illustration, consider the following scenario. On a rainy morning, Dana finds her workspace too dark and too cold and wants to adjust the lighting and room temperature. She consults the SiteView rules display, which, by default, shows the rules active in the current situation. She notes that the active control rule handles weekday mornings in general, but not rainy weekday mornings in particular. Rather than manually changing the temperature and light conditions using the available thermostat and light switches, Dana uses SiteView to add a new rule, so the active environment will behave appropriately now and in the future. First, she places the interactors signifying rain, morning and weekdays in the appropriate slots on the condition composer (left, center and right, respectively, in Figure 1). The rules display (far left of Figure 1) shows all applicable control rules for those conditions, including the (currently active) one for a more general condition, weekday mornings, and the visualization display shows an image of the office similar to the office's current appearance. Next, she places the light-on interactor on the portion of the WIM signifying her floor lamp. Now that Dana has specified a valid rule – both a condition and an action – the rules display shows it as an English-like sentence: if it is raining and a weekday and morning, then turn on the north lamp. She sets the thermostat interactor to a warmer temperature and places it in the WIM. The rules display now shows the new rule, which handles light and temperature on rainy mornings, along with the original set of rules. The environment display reflects her new rule, and shows her office lit by her floor lamp on rainy mornings. SiteView then turns on the floor lamp and adjusts the temperature.
EVALUATION
An initial user study of SiteView demonstrated that end users could create rules that control their environment. The tangible interface appears to be intuitive, and the environment display and rules display are useful for helping users create rules and view the effects of these rules. Overall, the system was usable for generating a variety of rules, each using one to three rule conditions, and each triggering one or both of the lights and the thermostat settings. The system also made the effects of composing multiple active rules transparent. For example, a rule that turned on the lights in the evenings was understood to be combined with another rule that set the temperature at 55 degrees on overcast weekends to turn on the lights and set the temperature to 55 on a Saturday evening. One confusion that arose during the study was the duration of time-based rule conditions. While the use of natural words for time-of-day appeared transparent, one user was unsure if a rule that specified turning down the heat at 8 PM would still be in effect at 8:15 PM or later. Our future work includes further user evaluation of SiteView to determine the types of tasks it is appropriate for, providing support for disjunctive relationships, and exploring how the tangible nature of SiteView can be used to constrain user input for novices.
REFERENCES
1. Åkesson, K-P. et al. "A toolkit for user re-configuration of ubiquitous domestic environments". Companion to UIST 2002. (2002).
2. Blackwell, A.F. and Hague, R. "AutoHAN: An architecture for programming the home". IEEE Symposia on Human-Centric Computing Languages and Environments. pp. 150-157. (2001).
3. Gorbet, M.G. et al. "Triangles: Tangible interfaces for manipulation and exploration of digital information topography". CHI '98. pp. 49-56. (1998).
4. Ishii, H., and Ullmer, B. "Tangible bits: towards seamless interfaces between people, bits and atoms". CHI '97. pp. 234-241. (1997).
5. Mozer, M.C. "The Neural Network House: An environment that adapts to its inhabitants". AAAI Symp. on Intelligent Environments. pp. 110-114. (1998).
6. Stoakley, R. et al. "Virtual reality on a WIM: Interactive worlds in miniature". CHI '95. pp. 265-272. (1995).
7. Smarthome X10 Kit. http://www.smarthome.com/
8. Home Director. http://www.homedirector.com
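To make the rule structure in the scenario above concrete, here is a small illustrative sketch (not SiteView's code) that renders a set of condition predicates and actions as the kind of English-like sentence the rules display shows; the thermostat value is an invented example.

def render_rule(conditions, actions):
    # Conditions are predicate phrases, actions are device commands.
    return ("if " + " and ".join(conditions)
            + ", then " + " and ".join(actions) + ".")

print(render_rule(["it is raining", "it is a weekday", "it is morning"],
                  ["turn on the north lamp", "set the thermostat to 72 F"]))
# if it is raining and it is a weekday and it is morning,
# then turn on the north lamp and set the thermostat to 72 F.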
Towards Ubiquitous End-User Programming
Rob Hague, Peter Robinson, Alan Blackwell
University of Cambridge Computer Laboratory
William Gates Building
15 JJ Thomson Avenue
Cambridge CB3 0FD UK
{Rob.Hague, Peter.Robinson, Alan.Blackwell}@cl.cam.ac.uk
result is usually discarded.)
The two types of information that are most commonly discarded when translating a script from one form to another are secondary notation, such as comments, and higher level structure, such as loops. Both of these may vary greatly from language to language. Lingua Franca allows multiple secondary notation elements to be associated with a part of a script; each such element is tagged with a notation type, to allow language environments to determine which (if any) to display. Higher level structure is represented by grouping; again, each group is tagged with a type (such as "while loop"), which may imply a particular structure, and language environments may use this to determine how to display the group's members. Unlike secondary notation, any environment that can display Lingua Franca can display any group, as in the worst case it can simply display it as a grouped collection of primitive operations.
We have implemented an interpreter that stores the "corpus" of scripts that have been entered into the system. Language environments communicate with this interpreter via HTTP, allowing them to read, add to and update the Lingua Franca code (represented as XML) that makes up the corpus. In addition, the interpreter is responsible for executing Lingua Franca code, and interfacing the Lingua Franca environment with the rest of the ubiquitous computing system.
A MENAGERIE OF PROGRAMMING LANGUAGES
A wide variety of scripting languages are being developed in order to demonstrate the flexibility and range of the Lingua Franca architecture. These languages are designed to complement each other, in that they may be used to perform different manipulations on the same script with ease. Each language is embodied in a language environment that provides an interface via which the user can view and/or manipulate a particular notation, translates between the notation and Lingua Franca, and communicates with the Lingua Franca interpreter via HTTP.
A textual language provides an interface familiar to those with experience of conventional scripting languages. It is envisioned that this will be primarily used for editing substantial scripts, a task most likely to be undertaken by someone with at least some programming background. (It is of course possible to manipulate Lingua Franca directly in XML form, but this is needlessly difficult and carries the risk of introducing malformed code into the database, or accidentally removing or modifying data associated with another language environment.)
Two forms of visual language are in development, serving slightly different needs. The first is a purely presentational diagram that cannot be used to create or edit scripts, but only to display them. This allows it to be specialized in order to facilitate searching, navigation and comprehension of scripts. The second, a mutable diagram, allows scripts to be edited, and is likely to be the main environment for the manipulation of mid-sized scripts.
Perhaps the most unusual of the language environments being developed for use with Lingua Franca is the Media Cubes language. This is a "tactile" programming language, in other words, a language where programs are constructed by manipulating physical objects—in this case, cubes augmented such that they can determine when they are close to one another. The faces of the cube signify a variety of concepts, and the user creates a script by placing appropriate faces together; for example, to construct a simple radio alarm clock, the "Do" face of a cube representing a conditional expression would be placed against a representation of the act of switching on a radio, and the "When" face against a representation of the desired time. In an appropriately instrumented house, the representation can often be an existing, familiar item, or even the object itself. In the above example, a time could be represented using an instrumented clock face, and turning the radio on could be represented by the radio or its on switch.
The Media Cubes language is intended to be easy for those unfamiliar with programming, and as such would provide a low-impact path from direct manipulation to programming. However, the language as it stands is unusual in one very significant respect—scripts do not have any external representation. This means that it is only feasible to construct small scripts, and that, once created, scripts may not be viewed, and hence may not be modified. However, as the language exists within the Lingua Franca framework, we do not need to abandon the language, with its substantial advantages. Lingua Franca makes it feasible to include niche languages such as the Media Cubes in a system without sacrificing functionality.
REFERENCES
1. Blackwell, A.F., Hewson, R.L. and Green, T.R.G. (2003) Product design to support user abstractions. Handbook of Cognitive Task Design, E. Hollnagel (Ed.), Lawrence Erlbaum Associates.
2. Blackwell, A.F. and Hague, R. (2001). AutoHAN: An Architecture for Programming the Home. Proceedings of the IEEE Symposia on Human-Centric Computing Languages and Environments, pp. 150-157.
3. Blackwell, A.F., Robinson, P., Roast, C. and Green, T.R.G. (2002). Cognitive models of programming-like activity. Proceedings of CHI '02, 910-911.
4. Green, T.R.G., Petre, M. and Bellamy, R.K.E. Comprehensibility of visual and textual programs: A test of superlativism against the 'match-mismatch' conjecture. Empirical Studies of Programmers: Fourth Workshop, J. Koenemann-Belliveau, T.G. Moher, S.P. Robertson (Eds.), Norwood, NJ: Ablex, 1991.
5. Peyton Jones, S., Blackwell, A. and Burnett, M. (in press) A user-centred approach to functions in Excel. To appear in Proceedings of the International Conference on Functional Programming.
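As a purely invented illustration of the tagging ideas described above (the real corpus is XML, and its schema is not reproduced here), the sketch below models a script group tagged with a type and secondary-notation elements tagged with notation types, and shows how a language environment might filter for the notations it can display.

# Invented illustration, not the real Lingua Franca representation.
radio_alarm = {
    "group_type": "when-do",                      # higher-level structure, tagged
    "members": [
        {"op": "at-time", "args": {"time": "07:30"}},
        {"op": "switch-on", "args": {"device": "radio"}},
    ],
    "secondary_notation": [
        {"notation_type": "text-comment", "value": "wake-up alarm"},
        {"notation_type": "media-cubes-layout", "value": "cube layout hint"},
    ],
}

def displayable_notes(script, supported_types):
    # An environment shows only the notation types it understands.
    return [n["value"] for n in script["secondary_notation"]
            if n["notation_type"] in supported_types]

print(displayable_notes(radio_alarm, {"text-comment"}))   # ['wake-up alarm']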
TunA: A Mobile Music Experience
to Foster Local Interactions
Arianna Bassoli, Cian Cullinan, Julian Moore, Stefan Agamanolis
Human Connectedness Group
Media Lab Europe, Sugar House Lane, Bellevue, Dublin 8, Ireland
{arianna, cian, julian, stefan}@medialabeurope.org
ABSTRACT
Can the Walkman become a social experience? Can anyone become a mobile radio station? With the TunA project we are investigating a way to use music in order to connect people at a local scale, through the use of handheld devices and the creation of dynamic and ad hoc wireless networks. TunA gives the opportunity to listen to what other people around are listening to, synchronized to enable the feeling of a shared experience. Finally, TunA allows users to share their songs in many situations, while moving around, fostering a sense of awareness of the surrounding physical environment.
Keywords
802.11, music, synchronisation, local networks, shared experience, ad hoc networks
INTRODUCTION
R. D. Putnam claimed a few years ago: "Social networks based on computer-mediated communication can be organised by shared interests rather than by shared space" [1]. As the market of PDAs spreads and new wireless technologies are being improved, we research instead a way to create and support social networks of people who share the same physical space. In the application we are currently developing, music constitutes the main interest around which communities, virtual and real, can be formed.
We wish, in general, to contribute to the understanding of how wireless networks, so far mainly considered for their "globalising" potential, could also make people more aware of their local reality. By connecting PDAs in an ad hoc way with 802.11b, we focus on the creation of dynamic local networks in which users are able to share information and resources with others who are in range.
In order to find a subtle and non-intrusive way to connect people who are nearby through mobile devices, we decided to explore the concept of a "shared music experience." Music is commonly used as a form of mobile entertainment, through personal devices such as Walkmans or digital players. While so far listening to music when moving around has been mostly an individual and quite isolating process, we are here suggesting making it a fun and socialising experience.
MOTIVATION
The TunA project is about being able to access the playlists of other users who are near, and to listen synchronously to what someone else is listening to. This application has been developed following a recent social study that we conducted for a project called WAND (Wireless Ad hoc Network for Dublin) [2]. WAND is an infrastructure based on 802.11b that is in the process of being installed in the city centre of Dublin. It is designed to support and run applications that exploit an ad hoc, decentralised, and peer-to-peer type of communication. An ethnographic study was organised in order to understand the socio-cultural dynamics of the area covered by the network, to involve users in the project development, and to inform and inspire content and service providers for the design of new applications. In this framework, we see TunA as targeted to some of the communities identified during this study, in particular students, skaters, and commuters. The goal of the project is not only to create new social links but also to strengthen existing ones; established communities like the skaters could in fact use TunA to reinforce their identity, and to express themselves in new creative ways.
Fig. 1: Example scenario of TunA usage—people on a bus
TECHNOLOGY
TunA is ideally meant to work on any handheld device that supports 802.11 technologies. We are now working on a prototype for iPaqs, running the GPE 0.7 version of Linux Familiar, connected in ad hoc mode through 802.11b. TunA can be used as a standard mp3 player for personal music; at the same time it visualises, in one single screen, all the other TunA users who are in range, and gives options to access their playlists, their profiles, and the songs they are listening to. The user has an option to
"tune in" and start listening to what another person is listening to. An important aspect of this work is the synchronisation of the listening experience. The "tune in" option in fact gives access only to the song another user is currently listening to, and this is what we refer to as a "shared music experience". Finally, in order to keep track of the songs and the users encountered, TunA gives the possibility to keep a record of "favourites".
Fig 2: TunA interface in development
SCENARIOS
TunA can accommodate a number of occasions in which people gather during the course of the day. While conducting the ethnographic study for WAND, previously mentioned, we ran across some recurring situations happening in the city centre of Dublin, where TunA could play an active role in connecting people who are nearby.
Queuing for the Bank. On Thursdays most of the employees receive their salary. A wide number of people gather in the main branch of AIB (Allied Irish Banks) to collect the money to spend over the weekend. To make the action of queuing more interesting and engaging, music enthusiasts could use TunA to feed their curiosity about what other people in the queue are listening to.
Commuters. The 123 bus is one of the main links between opposite sides of the city. Many commuters spend part of their daily routine on this bus, sometimes getting curious about each other's presence. TunA could provide a platform for light-weight interactions, in which people can discover who else commutes during the same hours, find out if they have music tastes in common, and finally listen to what others are listening to.
Skaters of the Central Bank Square. A well-established community of teenagers gathers every day in front of one of the main buildings of the city centre. They have in common their passion for skating, along with a specific set of rules and behaviours. TunA could help this community to reinforce their identity through music. Instead of bringing their stereo and listening to their songs loudly, which would cause problems for the surrounding environment, they could use TunA to have a shared music experience, while still keeping their privacy and an individual listening process. At the same time they could provide a source of music, a sort of "skaters' radio station", for other people around.
RELATED WORK
The recent success of the new version of Apple iTunes, which uses the Rendezvous technology to share music playlists over the same local network, has proven the potential of wireless peer-to-peer applications that count on the physical proximity of the users. iTunes is mostly suitable for office spaces or in general "static" settings, while TunA focuses on a mobile fruition of music, and on the social dynamics fostered by an ad hoc shared music experience. It is moreover based on handheld devices instead of desktop computers, and this makes it a very flexible application.
Along the same lines as TunA, the SoundPryer project [3] is about a peer-to-peer wireless exchange of music files through devices, especially designed for car travellers. TunA, targeted mainly to people moving around in an urban environment, translates the profiling process that SoundPryer uses to identify vehicles into a more personal one. With TunA the identity of each source of music is linked to the information users want to give about themselves. Moreover, the shared experience TunA wishes to provide is connected to the concept of synchronisation, which is for us at this stage one of the main technical issues to face.
FUTURE WORK
In order to make TunA progressively more flexible and engaging, we plan to implement, in future versions, ad hoc networking protocols to allow search options. We also see TunA as ideally integrated with an Instant Messaging application; messages exchanged among users could in fact become the result of the shared music experience.
ACKNOWLEDGMENTS
This research has been supported by sponsors and partners of Media Lab Europe.
REFERENCES
1. Putnam, R., Bowling Alone, Simon & Schuster, New York, 2000, p. 172.
2. Bassoli, A. et al., Social research for WAND and new media adoption on a local scale, Proc. of dyd02.
3. Axelsson, F., Östergren, M., SoundPryer: Joint Music Listening on the Road, Adjunct Proc. of UBICOMP '02.
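A minimal sketch of the synchronised "tune in" idea described above; the message format and function names are assumptions, not TunA's actual protocol.

import json, time

def make_announcement(user, song_id, started_at):
    # The source periodically announces which song it is playing and how far
    # into it playback is.
    return json.dumps({"user": user, "song": song_id,
                       "position": time.time() - started_at})

def tune_in(announcement, play):
    # A tuned-in listener starts local playback at the same offset.
    msg = json.loads(announcement)
    play(msg["song"], msg["position"])

started_at = time.time() - 42.0                  # source is 42 s into the song
msg = make_announcement("user-a", "track-17", started_at)
tune_in(msg, play=lambda song, pos: print("playing %s from %.0f s" % (song, pos)))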
AudioBored: a Publicly Accessible
Networked Answering Machine
Jonah Brucker-Cohen and Stefan Agamanolis
Human Connectedness group
Media Lab Europe
Sugar House Lane, Bellevue, Dublin 8, Ireland
{jonah, stefan}@medialabeurope.org
Figure 2. Close up of LCD display and slider
SCENARIOS
Below are a few specific examples of possible applications of the system.
1. Voice-based Online Forums: AudioBored allows people to contribute to a shared online public space without a computer. Since standard telephones (including mobile and fixed lines) are ubiquitous and exist in far greater numbers than computers, they provide an alternative entry point to the Internet. Using VXML as the voice input system, the project opens up the landscape for public contribution to distributed online audio forums where a greater number of people can potentially contribute to the discussion. Since most online bulletin boards exist in text format, identity and authenticity of users can be concealed. Voice message posting can still maintain anonymity, but it potentially adds a more personal touch to messaging applications. For instance, users who communicated on a text-based web board could use AudioBored as a means of "hearing" each other's voices for the first time, which may ultimately bring their community closer together by adding a more human element to their previous interactions.
2. Situated Voice Posting: AudioBored provides a shared public outlet for people to post candid voice messages on the Internet from any phone. This becomes especially interesting in the midst of events where Internet access is not easily available. For example, people in the midst of a crowded protest march could voice their opinions from the center of the action. These candid comments might better reflect the electric atmosphere and excitement of such a live event, adding a sense of immediacy to the collected messages. Each voice message is immediately recorded, stored in a database and made public for people to listen to on the device or online. Since all messages are sorted by topic, this would allow for an ongoing protest to take place through contributors experiencing the event online.
3. Public Voice Histories: With most physical answering machines and voice-mail systems, there is a limited amount of message storage and no way to sort incoming messages into separate storage mailboxes. AudioBored addresses this by storing all threaded messages on a server that can be instantly accessed through the physical interface. The device gains importance in public spaces where PC access to messages might be awkward or prohibitive, and it exists as a shared community resource. Over time, personal voice histories of messages left by community members can accumulate, while the hardware architecture can scale to adjust for the new messages. This database of public voice messages could possibly provide an invaluable historical resource for future generations.
FUTURE RESEARCH
Future versions of AudioBored will allow for more customized message information that will be catalogued along with individual clips and made into a directory searchable by contributor and subject matter. We plan additional work on interactive visualizations of information collected by the system, such as the geographic origin of messages. The device could also gain Internet access through public wireless hotspots, allowing it to be placed in a wider variety of public spaces in order to maximize its user base. A detailed study is also planned on uses of the system, along with an analysis of message content, to gain inspiration for potential deployment locations and future refinements.
ACKNOWLEDGMENTS
This research has been supported by sponsors and partners of Media Lab Europe.
REFERENCES
(All web references last visited 3/03)
1. Kerbango (discontinued), 3COM Corporation, http://www.rnw.nl/realradio/features/html/kerbango010322.html
2. Schneidler, T., Remote Home, http://www.remotehome.org/
3. VoiceMonkey, http://www.voicemonkey.com
4. AudioBlog, http://www.audblog.com
5. Lakshmipathy, V., Schmandt, C., & Marmasse, N., "Talkback: a conversational answering machine", Proc. of UIST '03, Vancouver, Canada (to appear).
Dimensions of Identity in Open Educational Settings
Alastair Iles
Energy and Resources Group
U.C. Berkeley
[email protected]
Matthew Kam
Computer Science Division
U.C. Berkeley
[email protected]
Daniel Glaser
Interdisciplinary Doctoral Program
U.C. Berkeley
[email protected]
ABSTRACT
Based on our deployments of Livenotes, a Tablet-based application for collaborative note-taking in open educational settings, we observe that communication breakdowns, potentially affecting learning, arise from imperfect knowledge about other users' identities. This leads us to argue that user identity is an under-explored topic in ubicomp. We show that the concept of identity needs to be expanded to include digital, social, and physical features. We conclude with preliminary design implications.
Keywords
identity, education, tablet computing, proximity, familiarity
INTRODUCTION
We study how people learn via distributed dialogue. Livenotes (LN) [1] is an application for collaborative note-taking and drawing in classrooms. Using LN, groups of 3-7 students are wirelessly connected to one another via their handheld tablets, such that students may exchange notes synchronously on a multi-user, multi-page whiteboard with peers from the same group. LN users are currently identified by being assigned unique ink colors and through logging in, with login names defaulting to machine names.
The most prevalent method for users to identify themselves to a computer is through logins, an explicit form of input. Nonetheless, traditional logins center heavily on the desktop model, assuming a single user who is bound to a given computer terminal for a substantial period of time. In contrast, Livenotes uses the "common pool" model, in which Tablet PCs do not have fixed users and can be easily swapped between users in a session. Pea and Rochelle [2] argued for device mobility in the education context. In this model, however, users can lose track of who is engaged in communication at a specific moment.
Ubicomp therefore becomes important as a way to address this problem. Abowd et al. [3] highlight the relevance of context, such as identity, to ubicomp, where applications accurately keep track of their users through implicit sensing, instead of relying on logins. We argue, however, that ubicomp needs not only to focus on digital identity, but also on social and physical identities where educational and collaborative work settings are concerned.
OBSERVATIONS
We made observations while analyzing five multi-session deployments of LN in educational settings (at UC Berkeley and the University of Washington). The deployments were not in controlled settings [1], but in open contexts including a graduate seminar (STS), reading group (TSD), design studio (DMG1, DMG2) (Figure 1), and undergraduate lecture (CS). The data includes transcripts of the written conversations (~500 pages) and, in some cases (~12 hours), video and audio recordings.
Figure 1. Livenotes deployed in an architectural studio review session DMG1. Graduate students and faculty swapped, picked up, and set aside tablets at will.
In the deployments, we discovered a number of disruptions to small group dialogue, and we then explored the mechanisms that people develop to resolve these problems. In each deployment, groups were confused over who was making what inputs on the whiteboard at different points because users would drop out of the dialogue, swap Tablets, or come and go from the classroom. Group dialogue improved over time with greater familiarity with technology and user identity, provided that groups remained stable and did not swap Tablets freely. Still, break-downs occurred from time to time because of user identity issues. In a computer science lecture in April 2003, for example, group dialogue stopped when the group realized that a member had just entered the room, and wondered "who is red?"
175
They asked "red" to identify himself, resuming dialogue when he did so.

To avoid such communication breakdowns, users can "challenge" one another and identify themselves throughout a session, particularly at the outset. Once, users even performed a "roll call" where people took the initiative to report who they are (e.g. red: "roll call", red: "mark", green: "john", blue: "jeremy", green: "hi", as seen in a computer science lecture). Identities are established through a social process that everyone can witness and participate in. The group becomes more aware of each other.

Finally, we observed that group members developed a sense of user identity through non-explicit but physical means, such as associating Tablet use with screen activity, or gesturing to and looking at each other.

ANALYSIS
To explain how user identity is one important factor shaping collaborative group dialogue, and how users resolve identity problems in the absence of cues provided by LN user interfaces, logins, or social processes like roll-calls, we developed a framework that extracts four dimensions of each educational setting that LN is deployed in. These dimensions are: physical stability (did people come and go, or change groups), temporal stability (did people stay with the tablet conversation), proximity (were people sitting near each other), and social familiarity (did users know each other previously). Each dimension affects how much group members are aware of each other. The higher the level of all dimensions, the more likely it is that groups will effectively resolve identity issues and generate sustained dialogue.

We did an initial analysis to measure all five deployments in terms of the framework, and created a relative scale to compare them along each dimension: see Table 1. This scale runs from low to high, based on our joint judgments of how much of each dimension each group appeared to have.

Table 1. Dimensions of each educational setting for the five deployments (rows: STS, TSD, DMG1, DMG2, CS; columns: physical stability, temporal stability, proximity, familiarity; each cell rated low, medium or high).

Deployments varied greatly in their dimensions and therefore the level of their distributed dialogue, measured by learning metrics such as: amount of dialogue, extent of participation by everyone, or the depth of ideas generated. Two architecture studio groups differed markedly in their dialogue rate and content because one group (DMG2) sat together and could see who the users were, while the other group (DMG1) was more dispersed and swapped Tablets frequently. DMG2 had high scores on all dimensions. However, when people overcome the lack of identity knowledge through social, participative processes like a roll-call, they appear to engage in greater dialogue. Other variables such as personality and the classroom setting (lecture or studio) also affect the level of dialogue. Developing this framework leads us to conclude that the concept of identity needs further development in ubicomp. In the ubicomp literature, key distinctions between digital, social, and physical identities are usually not made.

DESIGN IMPLICATIONS
Potential design solutions for identity issues exist to aid ubicomp applications in open educational settings. These solutions can use our framework to determine how identity is being continuously influenced in conditions where people swap Tablets, drop out and re-enter dialogue, come and go from classrooms, or are mobile.

One solution has been proposed by Maniatis et al. [4]: the introduction of a "person" layer to the network protocol stack used in wireless, mobile systems, or routing messages by recipient instead of machine names. Another solution is to change the user interface to enable a roll-call feature that helps people identify each other through social, participative means. Another is that user activity can be incorporated into the group awareness display, thus augmenting user login and ink color information. Data from other sources of input (Active Badges, computer video cameras, and microphones) can also be cross-referenced to help determine identity. Hence, there are computational ways of enhancing stability and familiarity, overcoming the challenges that open classroom settings and workplaces pose to discourse. All these solutions can co-exist and target social, physical, and digital identities jointly. We plan to investigate how the solutions can be integrated in future iterations of LN interface design and deployments.
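One of these solutions, augmenting the awareness display with login, ink color and recent activity, can be sketched briefly. The following is only an illustrative sketch in Python, not the Livenotes implementation; the class names, the idle threshold, and the rendering format are assumptions.

from dataclasses import dataclass
from time import time

@dataclass
class Participant:
    ink_color: str
    login: str                # defaults to the machine name in Livenotes
    last_stroke: float = 0.0  # timestamp of the most recent whiteboard input

class AwarenessRoster:
    """Illustrative group-awareness display: fuses ink color, login and
    recent activity so a group can answer "who is red?" at a glance."""
    def __init__(self):
        self.by_color = {}

    def register(self, ink_color, login):
        self.by_color[ink_color] = Participant(ink_color, login)

    def note_stroke(self, ink_color):
        # called whenever a stroke in this color appears on the whiteboard
        if ink_color in self.by_color:
            self.by_color[ink_color].last_stroke = time()

    def render(self, idle_after=120):
        now = time()
        lines = []
        for p in self.by_color.values():
            status = "active" if now - p.last_stroke < idle_after else "idle"
            lines.append(f"{p.ink_color}: {p.login} ({status})")
        return "\n".join(lines)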
ACKNOWLEDGMENT
We gratefully thank MS Research for providing the Tablet PCs in this study. Also to John Canny and Ellen Yi-Luen Do for their support.

REFERENCES
1. A. Iles, D. Glaser, M. Kam, and J. Canny, "Learning via Distributed Dialogue: Livenotes and Handheld Wireless Technology", in Proceedings of Computer Support for Collaborative Learning '02 (Boulder CO, January 2002), Lawrence Erlbaum Associates, Inc., NJ, 408-417.
2. J. Roschelle and R. Pea, "A walk on the WILD side: How wireless handhelds may change computer-supported collaborative learning," International Journal of Cognition and Technology, vol. 1, pp. 145-168, 2002.
3. G. Abowd, E. Mynatt, and T. Rodden, "Human Experience," IEEE Pervasive Computing, vol. 1, pp. 48-57, 2002.
4. P. Maniatis, M. Roussopoulos, E. Swierk, K. Lai, G. Appenzeller, X. Zhao, and M. Baker, "The Mobile People Architecture," ACM Mobile Computing and Communications Review, vol. 3, no. 3, July 1999.
Digital Message Sharing System in Public Places
Seiie Jang and Woontack Woo, KJIST U-VR Lab., Gwangju 500-712, S. Korea, +82-62-970-2226, {jangsei,wwoo}@kjist.ac.kr
Sanggoog Lee, SAIT Ubicomp Lab., Suwon 440-600, S. Korea, +82-31-280-6953, [email protected]
establishing the connection, it delivers the context to the Server. The interface transfers the user's identity to ubiSensor. The identity specifies the right of access to the shared information, classified by the name of a user or group. Note that unspecified persons in public places belong to an "All" group. The resulting messages are provided in the form of the Web and delivered to the PDA according to the user's identity. Also, cPost-it provides a user with personalized information services, such as classified messages, by exploiting the user profile about the message of interest entities.

cPost-it guarantees to keep the individual notes and to share personalized messages among just the group members. Because all messages are categorized into three parts, 'Personal', 'Group', and 'All', it provides users in public places with the proper messages according to the access right which the user specifies. As long as the user's access right is preserved, private messages can be safely shared in public places. In addition, all services of cPost-it are protected by the security mechanism of a Web server.
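The three access levels can be illustrated with a small filter. The Python sketch below is illustrative only (the message fields, group handling and names are assumptions, not cPost-it's actual code); it simply applies the Personal/Group/All rule described above.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    text: str
    owner: str
    level: str                 # 'Personal', 'Group', or 'All'
    group: Optional[str] = None

def visible_messages(messages, user, user_groups):
    """Return the messages a given user may read in a public place."""
    visible = []
    for m in messages:
        if m.level == "All":
            visible.append(m)
        elif m.level == "Group" and m.group in user_groups:
            visible.append(m)
        elif m.level == "Personal" and m.owner == user:
            visible.append(m)
    return visible

# Example: an anonymous passer-by (in the "All" group) only sees public notes.
board = [Message("lab meeting at 3pm", "alice", "Group", "uvr-lab"),
         Message("lost umbrella at the lobby", "bob", "All"),
         Message("call home", "alice", "Personal")]
print([m.text for m in visible_messages(board, user="guest", user_groups=set())])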
The Spookies: A Computational Free Play Toy
Tobias Rydenhag¹,², Jesper Bernson¹, Sara Backlund¹,², and Lena Berglin¹
¹ ToyLabs Ltd and ² PLAY, Interactive Institute
Hugo Grauers Gata 3
SE-41296 Gothenburg, Sweden
{tobias, jesper, sara, lena}@toylabs.se
in the network. Enriching the creative use of Spookies, all units can also be physically connected to each other, combining their abilities in order to create more complex functions. All units are connectable to each other in a consistent model of physical assembly, without any limitation on how many Spookies can be included in one combination. Spookies are combined by magnet connectors hidden under the texture surface on the top, bottom, left and right sides of the units. When connected, the state of a unit is important. Active units (defined by an input sensor threshold or a signal sent from the transmitter unit) can force or permit the activation of other physically connected units, depending on the pattern of assembly. This physically distributed network is controlled by IR-diodes, enabling sending and receiving of information through the texture.

… already ongoing play of Hide-and-Seek, enriching it with the ability of secretly perceiving and communicating information about the seeker among the hiders. Used separately, Spookies proved a good tool for supporting active play events like sneaking, hiding, seeking and running, stimulating spontaneous and physically active play. By combining different Spookies as bricks or building blocks, the children could create new patterns of functionality, supporting their creativity but also stimulating their understanding of logic. Most interestingly, the children were able to easily come up with new areas or ways of usage not previously thought of. This supports our idea of Spookies as a tool for inventive Free Play behaviour.
k:info: An Architecture for Smart Billboards for Informal
Public Spaces
Max Van Kleek
MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)
200 Technology Square
Cambridge MA, 02139 USA
[email protected]
Keywords
Context-aware systems, smart spaces, semantic web, agent architecture

1. INTRODUCTION
Context-aware systems are computing systems that provide relevant services and information to users based on their situational conditions [3]. Among the critical research issues in developing context-aware systems are context modeling, context reasoning, knowledge sharing, and user privacy protection. To address these issues, we are developing an agent-oriented architecture called the Context Broker Architecture (CoBrA) that aims to help devices, services and agents become context aware in smart spaces such as an intelligent meeting room, a smart vehicle, and a smart house.

By context we mean a collection of information that characterizes the situation of a person or a computing entity [3]. In addition to location information [6], an understanding of context should also include information that describes system capabilities, services offered and sought, the activities and tasks in which people and computing entities are engaged, and their situational roles, beliefs, desires, and intentions.

Research results show that building pervasive context-aware systems is difficult and costly without adequate support from a computing infrastructure [1]. We believe that creating such an infrastructure requires the following: (i) a collection of ontologies for modeling context, (ii) a shared model of the current context, and (iii) a declarative policy language that users and devices can use to define constraints on the sharing of private information and the protection of resources.

The need for common ontologies. An ontology is a formal, …

The need for a shared context model. CoBrA maintains a model of the current context that can be shared by all devices, services and agents in the same smart space. The shared model is a repository of knowledge that describes the context associated with an environment. As this repository is always accessible within an associated space, resource-limited devices will be able to offload the burden of maintaining context knowledge. When this model is coupled with a reasoning facility, it can provide additional services, such as detecting and resolving inconsistent knowledge and reasoning with knowledge acquired from the space.

The need for a common policy language. CoBrA includes a policy language [5] that allows users and devices to define rules to control the use and the sharing of their private contextual information. Using this language, users can protect their privacy by granting or denying the system permission to use or share their contextual information (e.g., do not share my location information with agents that are not in the CS building). Moreover, the system behavior can be partially augmented by requesting it to accept new obligations or dispensations, essentially giving it new rules of behavior (e.g., you should inform my personal agent whenever my location context has changed).
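A rule of the kind quoted above can be sketched as follows. CoBrA's actual policy language is the declarative language cited in [5]; the Python below is only an illustrative stand-in, and the rule structure, agent attributes and default behaviour are assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    location: str

@dataclass
class Rule:
    info_type: str                       # e.g. "location"
    effect: str                          # "grant" or "deny"
    condition: Callable[[Agent], bool]   # predicate over the requesting agent

def may_share(rules, info_type, requester, default_effect="deny"):
    """Evaluate the user's sharing rules for one piece of context information."""
    for rule in rules:
        if rule.info_type == info_type and rule.condition(requester):
            return rule.effect == "grant"
    return default_effect == "grant"

# "Don't share my location information with agents that are not in the CS building."
rules = [Rule("location", "deny", lambda a: a.location != "CS building"),
         Rule("location", "grant", lambda a: a.location == "CS building")]
print(may_share(rules, "location", Agent("meeting-scheduler", "CS building")))  # True
print(may_share(rules, "location", Agent("ad-service", "Library")))             # False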
2. CONTEXT BROKER ARCHITECTURE
Our architecture differs from the previous systems [3, 7] in the following ways:

• We use Semantic Web languages such as RDF and the Web Ontology Language OWL [8] to define ontologies of context, which provide an explicit semantic representation of context that is suitable for reasoning and knowledge sharing. In the previous systems, context is often implemented as programming-language objects (e.g., Java class objects) or informally described in documentation.

* This work was partially supported by DARPA contract F30602-97-1-0215, Hewlett Packard, NSF award 9875433, and NSF award 0209001.
• CoBrA provides a resource-rich agent called the context broker to manage and maintain a shared model of context¹. The context brokers can infer context knowledge (e.g., user intentions, roles and duties) that cannot be easily acquired from the physical sensors, and can detect and resolve inconsistent knowledge that often occurs as the result of imperfect sensing. In the previous systems, individual entities are required to manage and maintain their own context knowledge.

• CoBrA provides a policy language that allows users to control their contextual information. Based on the user-defined policies, a broker will dynamically control the granularity of a user's information that is to be shared, and select appropriate recipients to receive notifications of a user's context change.

Figure 1: A context broker acquires contextual information from heterogeneous sources and fuses it into a coherent model that is then shared with computing entities in the space.

Figure 1 shows the architecture design of CoBrA. The context broker is a specialized server entity that runs on a resource-rich stationary computer in the space. In our preliminary work, all computing entities in a smart space are presumed to have a priori knowledge about the presence of a context broker, and the high-level agents are presumed to communicate with the broker using the standard FIPA Agent Communication Language [4].

3. EASYMEETING: AN INTELLIGENT MEETING ROOM
To demonstrate the feasibility of our architecture, we are prototyping an intelligent meeting room system called EasyMeeting, which uses CoBrA as the foundation for building context-aware systems in a meeting room. This system will provide different services to assist meeting speakers, audiences and organizers based on their situational needs.

We have created an ontology called COBRA-ONT [2] for modeling context in an intelligent meeting room. This ontology, expressed in the OWL language, defines typical concepts (classes, properties, and constraints) for describing places, agents (both human and software agents), devices, events, and time. We have also prototyped a context broker in JADE² that can reason about the presence of a user in a meeting room. In our demonstration system, as a user enters the meeting room, his/her Bluetooth device (e.g., a SonyEricsson T68i cellphone or a Palm Tungsten T PDA) sends a URL of his/her policy to the broker in the room³. The broker then retrieves the policy and reasons about the user's context using the available ontologies. Knowing that the device owned by the user is in the room and having no evidence to the contrary, the broker concludes the user is also in the room.
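The broker's default reasoning step (conclude presence from a detected device unless there is evidence to the contrary) can be sketched as follows. In CoBrA this inference is performed over OWL ontologies; the Python below is only an illustrative approximation, and the device and user names are made up.

# Hedged sketch of the default presence inference: a user is assumed to be
# where his or her device is, unless contrary evidence exists.
owner_of = {"t68i-cell": "harry", "tungsten-pda": "anna"}   # illustrative bindings

def infer_presence(devices_in_room, contrary_evidence=frozenset()):
    """Return the set of users the broker concludes are in the room.

    devices_in_room:   device ids detected via Bluetooth in the meeting room.
    contrary_evidence: users known (from other sensors) to be elsewhere.
    """
    present = set()
    for device in devices_in_room:
        user = owner_of.get(device)
        if user is not None and user not in contrary_evidence:
            present.add(user)
    return present

print(infer_presence({"t68i-cell"}))              # {'harry'}
print(infer_presence({"t68i-cell"}, {"harry"}))   # set(): the default is overridden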
4. FUTURE WORK AND REMARKS
We believe an infrastructure for building context-aware systems should provide adequate support for context modeling, context reasoning, knowledge sharing, and user privacy protection. The development of CoBrA and the EasyMeeting system is still at an early stage of research. Our short-term objective is to define an ontology for expressing privacy policy and to enhance a broker's reasoning about users and activities by including temporal and spatial relations. A part of our long-term objective is to deploy an intelligent meeting room in the newly constructed Information Technology and Engineering Building on the UMBC main campus.

REFERENCES
[1] Chen, G., and Kotz, D. A survey of context-aware mobile computing research. Tech. Rep. TR2000-381, Dartmouth College, Computer Science, Hanover, NH, November 2000.
[2] Chen, H., Finin, T., and Joshi, A. An ontology for context-aware pervasive computing environments. Special Issue on Ontologies for Distributed Systems, Knowledge Engineering Review (2003).
[3] Dey, A. K. Providing Architectural Support for Building Context-Aware Applications. PhD thesis, Georgia Institute of Technology, 2000.
[4] FIPA. FIPA ACL Message Structure Specification, December 2002.
[5] Kagal, L., Finin, T., and Joshi, A. A policy language for a pervasive computing environment. In Proceedings of the IEEE 4th International Workshop on Policies for Distributed Systems and Networks (2003).
[6] Priyantha, N. B., Chakraborty, A., and Balakrishnan, H. The Cricket location-support system. In Proceedings of MobiCom 2000 (2000), pp. 32–43.
[7] Schilit, B., Adams, N., and Want, R. Context-aware computing applications. In Proceedings of the 1st IEEE WMCSA (Santa Cruz, CA, US, 1994).
[8] Smith, M. K., Welty, C., and McGuinness, D. OWL Web Ontology Language Guide. http://www.w3.org/TR/owl-guide/, 2003.

¹ Notice that we have a broker associated with a given space, which can be subdivided into smaller granularities with individual brokers. This hierarchical approach, with collaboration fostered by shared ontologies, helps us avoid the bottlenecks associated with a single centralized broker.
² Java Agent DEvelopment Framework: http://sharon.cselt.it/projects/jade/
³ The description of the URL is sent to the broker in a vNote via the Bluetooth OBEX object push service.
Containment: Knowing Your Ubiquitous System’s Limitations
Due to the dynamicity and complexity present in the ubiquitous world, it is unrealistic to expect humans to be able to reason and act effectively to address security risks. We propose a new security paradigm that aims to mitigate security risks and threats present in contexts for data objects by automatic, proactive data format management.

As data objects are viewed at sub-file granularity, the format management can, in addition to bulk file operations, provide fine-grained transformations such as anonymization, partial data quality degradation and other types of selective data constraining.

We identify the three main aspects of the proactive data management system as: the policy definition language, reasoning …

¹ Referred to as containment.

1.3.0.1 View

Figure 1. A spherical and a directed view.

A view (Figure 1) represents one-hop reachability within a communications channel. Each view has a view generator
and a view type. For example, if a PDA is IrDA-equipped and is within range of an IrDA-capable mobile phone, we say that the mobile phone is within a view of the PDA; the view type is IrDA and the view generator is the PDA. Consequently, we define a visible relation, which is reflexive, antisymmetric and intransitive. By migrating through the environment, entities dynamically enter and leave views.

1.3.0.2 Container

Figure 2. Nested containment.

The notion of a container defines a physical enclosure (Figure 2). Containers may be nested. The main characteristic of a container is that any movement action and its consequences are directly reflected onto enclosed entities. For example, a data object is contained within a PDA; the PDA is contained within a car; as the car moves, so do the PDA and the data object. The contains relation is irreflexive, antisymmetric and transitive. Physical enclosure, i.e. a container, can be determined through the visibility property, in the presence of landmarks, or by using dedicated infrastructural support (1.3.0.5).

1.3.0.3 Container-View relations
A container can be within a view of another entity. A container can be either transparent or opaque to a view. The former means that the contents are in the direct view as well. The only per se inference that can be made is that if a container is not within a view then its contents are not within the view either. To support other types of inference, the model provides for constraints to be specified. Furthermore, we define inter-container and inter-view paths to denote one-hop links along which data objects can migrate among different containers and views; e.g. a door between rooms or a bridge between IEEE 802.11 and GPRS, respectively.
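A minimal sketch of these relations follows, assuming a toy in-memory representation (the class and function names are invented, not the authors' formal model); it encodes the transitive contains relation and the per se inference about transparent and opaque containers described above.

class Container:
    def __init__(self, name, transparent=True):
        self.name = name
        self.transparent = transparent   # opaque containers hide their contents from a view
        self.contents = []               # directly enclosed entities or containers

    def add(self, entity):
        self.contents.append(entity)

def enclosed(container):
    """Transitive closure of the contains relation (irreflexive, transitive)."""
    found = []
    for item in container.contents:
        found.append(item)
        if isinstance(item, Container):
            found.extend(enclosed(item))
    return found

def contents_status(container, container_in_view):
    """The per se inferences the model allows about a container's contents."""
    if not container_in_view:
        return "not in view"   # contents of a container outside the view are outside it too
    if container.transparent:
        return "in view"       # a transparent container exposes its contents to the view
    return "unknown"           # opaque: nothing can be inferred without extra constraints

# A data object inside a PDA inside a car: moving the car moves everything enclosed.
car, pda = Container("car"), Container("pda", transparent=False)
data_object = "data object"
car.add(pda)
pda.add(data_object)
print(data_object in enclosed(car))                     # True
print(contents_status(pda, container_in_view=False))    # not in view
print(contents_status(pda, container_in_view=True))     # unknown (opaque)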
1.3.0.4 Formal model
Owing to envisaged device constraints, heterogeneity of device capabilities and differing model usage, the decision was made to model physical containment and views separately. As physical containment is highly hierarchical, we are inclined to use lattices to model it. Apart from being computationally feasible, lattices aid reasoning about neighbors, ancestors and descendants, and can incorporate the notion of paths. Modeling views, on the other hand, is more demanding, as the model has to be chosen based on the nature of a view type and its propagational characteristics, e.g. directional vs. omni-directional. We intend to develop a taxonomy of views to aid appropriate model choice. Individual models can be aggregated into a bigger picture based on individual application requirements. Our approach facilitates partial, distributed model evaluation and constrained reasoning at differing levels of granularity, as required by the highly heterogeneous environment.

1.3.0.5 Collaboration model
As entities migrate through the ubiquitous environment they will experience differences in the quality and quantity of available Container-View model data. This will be reflected in the accuracy of the model. The model should support three ways of obtaining relevant information: through environment-embedded services which provide precomputed models as required; by using hints based on an entity's sensing capabilities or obtained from other entities present locally; and through an inference process. The model will also incorporate a trust management infrastructure for the collaboration, and will support reasoning about containment information capturing confidence.

1.3.0.6 Inference
To provide entities with a certain level of independence from ubiquitous infrastructures, we are currently working on suitable inference mechanisms. There are two stages in the model operation at which inference is needed: determining, i.e. capturing, the current model state, and reasoning about the model. For the former, in cases where the model information is unobtainable from trusted third parties, we are focusing on Bayesian inference methods [2, 3]. Reasoning is to be supported by an algebra roughly based on Egenhofer's Container-Surface algebra [1], substantially extended to support the considerable difference between physical surfaces and views, container-view relationship constraints, mobility, and information vagueness and indeterminacy.

1.4 Summary
By considering security issues in ubiquitous computing, we have identified a need to address the problem of frequently changing threat models for migrating data objects. We propose a system for proactively managing data object format. As a first step, we define context as containment with respect to the physical world and communications channels. The Container-View model represents a formalization of the notion of containment based on local inter-entity relationships and is independent of absolute location and location infrastructures. We are set to evaluate the expressiveness and applicability of the Container-View model as envisaged.

REFERENCES
[1] M. Egenhofer and A. Rodríguez. Relation algebras over containers and surfaces: An ontological study of a room space.
[2] N. Friedman, D. Geiger, and M. Goldszmidt. Bayesian network classifiers. Machine Learning, 29(2-3):131–163, 1997.
[3] P. Korpipää, J. Mäntyjärvi, J. Kela, H. Keränen, and E.-J. Malm. Bayesian approach to sensor-based context awareness. Personal and Ubiquitous Computing, 7:113–124, 2003.
[4] C. A. Patterson, R. R. Muntz, and C. M. Pancake. Challenges in location-aware computing. IEEE Pervasive Computing, 2(2):80–89, Apr. 2003.
[5] A. Schmidt, M. Beigl, and H.-W. Gellersen. There is more to context than location. Computers and Graphics, 23(6):893–901, 1999.
ContextMap: Modeling Scenes of the Real World for
Context-Aware Computing
Yang Li, Jason I. Hong, James A. Landay
Group for User Interface Research, Computer Science Division
University of California, Berkeley
Berkeley, CA 94720-1776 USA
{yangli, jasonh, landay}@cs.berkeley.edu
ABSTRACT
We present a scenegraph-based schema, the ContextMap, to model context information. Locations with hierarchical relations are the skeleton of the ContextMap, where nodes of people, objects and activities can be attached. Context information can be collected by traversing the ContextMap. The ContextMap provides a uniform method to represent physical and social semantics for context-aware computing. In addition, context ambiguity can be modeled as well.

Keywords
Context-aware computing, scenegraph, context ambiguity

INTRODUCTION
Context is the glue to link the real world with the virtual world. Context is "any information that can be used to characterize a situation" [4]. We call the situation a scene of the real world. The information can be the temperature of a region. It also can be the activity of a person, e.g., reading a book, or the activity of a group, e.g., having a meeting.

Both the physical and the social semantics of a situation are required by context-aware computing. Social semantics are embodied through physical activities, and physical activities can be fully understood only under certain social circumstances. For example, we can see "running" as a status of a person at a physical level. It can mean "catching a bus" at a social level. Activity theory [1] sees an activity as functionally subordinated hierarchical levels, i.e., activities, actions, and operations. Each action performed by a human being has not only intentional aspects but also operational aspects. This reveals how social activities can be performed through physical actions and objects.

Context information itself is recursively related. For example, linguistically, the context of a word is the sentence, which in turn gets its context from the paragraph. The Berkeley campus has the climate context of the City of Berkeley, which inherits it from the San Francisco Bay Area of California, based on location containment.

To leverage the abundant interaction semantics of context, it is necessary to have an efficient way to model the context. We devised the ContextMap (see Figure 1) to model the situation of the real world for context-aware computing as a scenegraph-like structure. The ContextMap provides a consistent way to model context information and addresses the correlation and ambiguity of context data.

RELATED WORK
The Active Map [5] provides a basic organization of context that consists of a hierarchy of locations with a containment relation. We employed the location hierarchy as the skeleton of the ContextMap, but we include relations in addition to location containment.

Crowley et al. [2] described context as a network of situations concerning a set of roles and relations. Roles may be "played" by one or more entities. Dey formulated three kinds of entities for context-aware computing: people, places and things (or objects) [4]. We model these roles and entities as nodes and edges of a ContextMap.

The scenegraph [6] has been widely used in computer graphics. Its dynamic propagation of graphical attributes greatly simplifies the representation of a scene, and it has proved an efficient way to model complicated scenes. To model scenes of the real world, we extended the scenegraph to deal with the context semantics of the real world.

Figure 1: An example ContextMap. Rectangles indicate Place nodes. Diamonds stand for Activity nodes. People nodes are represented as ellipses, and Object nodes are ellipses in gray.

INTRINSIC AND RELATIONAL CONTEXT ATTRIBUTES
The context information of an entity can be classified into intrinsic and relational attributes. Intrinsic attributes of an entity can be described without referring to others, e.g., the identity of a person can be his name. A person's status can be his age or health condition. However, relational attributes of an entity can only be specified by its relations with other entities. For example, the position of an entity can usually be described as a relative spatial relation with other entities, e.g., near or far and in or out.
NODES AND EDGES OF A CONTEXTMAP
Like a traditional scenegraph, a ContextMap is a directed acyclic graph (see Figure 1), and the context attributes are collected by a depth-first traversal. An entity, i.e., a place, a person or an object, is represented as a node of the graph. Each node maintains the intrinsic attributes of the entity that it represents. Relational attributes of an entity are represented by edges directly or indirectly linked to its node. So the context of an entity is represented not only by the attributes in its node but also by the node's position in the entire ContextMap. A ContextMap is a view of the real world that can be shared by multiple applications.
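A minimal sketch of such a graph and its depth-first attribute collection follows, under the assumption of a simple in-memory encoding (the node kinds, relation names and returned tuple format are illustrative, not the authors' implementation).

class Node:
    def __init__(self, kind, name, **intrinsic):
        self.kind = kind            # 'Place', 'Person', 'Object', or 'Activity'
        self.name = name
        self.intrinsic = intrinsic  # e.g. temperature, health condition
        self.edges = []             # outgoing (relation, target, confidence) triples

    def link(self, relation, target, confidence=1.0):
        self.edges.append((relation, target, confidence))

def collect_context(node, visited=None):
    """Depth-first traversal gathering intrinsic and relational attributes."""
    if visited is None:
        visited = set()
    if node in visited:
        return []
    visited.add(node)
    context = [(node.name, node.intrinsic)]
    for relation, target, conf in node.edges:
        context.append((node.name, relation, target.name, conf))
        context.extend(collect_context(target, visited))
    return context

# A tiny fragment of Figure 1: the campus contains Soda Hall, which contains Soda 523.
campus = Node("Place", "UC Berkeley Campus")
soda = Node("Place", "Soda Hall")
room = Node("Place", "Soda 523")
campus.link("contain", soda)
soda.link("contain", room)
print(collect_context(campus))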
Another kind of node in a ContextMap is the Activity node, which represents the social semantics of an entity or a group of entities, e.g., reading a book or having a seminar. It can be applied to a sub-graph of a ContextMap, like the dynamic propagation of graphical attributes in a scenegraph. It means that the activity is conducted by people with certain tools (physical objects) at a certain location. For example, in Figure 1, "UI Research" happens in Soda 523, and it indirectly indicates the activity of Bob, Alice and the tools they are using to achieve this activity.

Place nodes stand for entities that are places or sites. They can refer to a large region ("California") or a small area ("close to whiteboard"). The containment relation between Place nodes is stable and hierarchically structured, e.g., the UC Berkeley campus contains Soda Hall and will always do so. Place nodes and their containment relations constitute the skeleton of a ContextMap, which can be enriched by nodes describing people, physical objects, and activities. A ContextMap can be built by establishing a static Place hierarchy first. Directional edges from Place nodes can indicate contain relations for physical containment and happen relations for locations where some events, i.e., social activities or roles, happen. For example, "education & research" happens on the "UC Berkeley Campus". An Object node is for a physical object, e.g., a pen, which can have directional contain edges to its sub-components. Contain relations are transitive.

A Person node represents a person entity. Directional edges from a Person node can indicate conduct or use relations, specifying that the person is conducting an action or using a physical object (tool), respectively. A use relation can transfer the semantics of a contain relation. For example, the fact that "Bob" is in "Soda 523" and he is using the "pen" indicates that the "pen" is also in "Soda 523".

A node can be referenced by multiple nodes. For example, in Figure 1, both "Bob" and "Alice" are using the "whiteboard". The multi-reference to a node can also be used to model context ambiguity. For example, "Alice" could be either in "Soda Hall" or "Cory Hall" in Figure 1.

Intrinsic attributes of a node can be tagged with a timestamp to indicate when they are updated, or a time span to indicate their validity. Moreover, a directional edge can be tagged to indicate the valid period of a relation.

MODELING CONTEXT AMBIGUITY
In reality, both sensed and interpreted context is often ambiguous [3]. The ContextMap models context ambiguity by tagging edges and the intrinsic attributes of nodes with confidence values. For example, the confidence of the intrinsic attribute "health condition" of "Alice" could be 0.8. In Figure 1, the confidence of "Alice" in "Soda 523" is 0.9 and in "Cory Hall" it is 0.3. Edges without labelled values have the default confidence value 1.0.

Here we describe a simple method to calculate the confidence of transitive relations: given x →(α) y and y →(β) z, we infer x →(α·β) z, i.e. the inferred relation carries the product of the two confidences. For example, the confidence of "Alice" using "computer" is 0.9. Since the confidence of Alice in Soda 523 is also 0.9, the confidence of "computer" in "Soda 523" is 0.81.

However, the confidence of "whiteboard" in "Soda 523" is the average of the confidences of all paths from "Soda 523" to "whiteboard". It is 0.95, based on [Soda 523, Bob, whiteboard] = 1 and [Soda 523, Alice, whiteboard] = 0.9.
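The product-and-average rule can be sketched directly with the Figure 1 numbers. The encoding of edges below is an assumption made for illustration; only the two rules themselves come from the text.

# Edges of the Figure 1 fragment, tagged with confidence values.
edges = {
    ("Soda 523", "Bob"): 1.0,        # Bob is in Soda 523 (default confidence)
    ("Soda 523", "Alice"): 0.9,      # Alice is in Soda 523 with confidence 0.9
    ("Bob", "whiteboard"): 1.0,
    ("Alice", "whiteboard"): 1.0,
    ("Alice", "computer"): 0.9,
}

def path_confidence(path):
    """Product rule along one path: x ->(a) y and y ->(b) z give x ->(a*b) z."""
    confidence = 1.0
    for src, dst in zip(path, path[1:]):
        confidence *= edges[(src, dst)]
    return confidence

def relation_confidence(paths):
    """Average the confidences over all paths between the same two nodes."""
    return sum(path_confidence(p) for p in paths) / len(paths)

print(path_confidence(["Soda 523", "Alice", "computer"]))           # ~0.81
print(relation_confidence([["Soda 523", "Bob", "whiteboard"],
                           ["Soda 523", "Alice", "whiteboard"]]))   # 0.95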
CONCLUSION AND FUTURE WORK
The ContextMap enables an efficient representation of complicated situations, particularly for relational context, by using dynamic attribute propagation and transitive relations. Both the social and physical semantics of context can be represented in a consistent manner. Attributes and relations of nodes can be updated based on sensed information, e.g., a person's location and its confidence, or manually, e.g., an Activity node can be manually added or manipulated beforehand or at runtime. ContextMaps will be provided as an infrastructure service to applications. We are continuing to refine the representation and evolution mechanisms of the ContextMap, and to enable easy construction of and access to ContextMaps.

REFERENCES
1. Bertelsen, O.W. and Bødker, S. Activity Theory. In HCI Models, Theories, and Frameworks, ed. by Carroll, J.M. Morgan Kaufmann Publishers, 2003, pp. 291-324.
2. Crowley, J.L., Coutaz, J., Rey, G. and Reignier, P. Perceptual Components for Context Aware Computing. Proceedings of UbiComp 2002, Sweden.
3. Dey, A.K., Mankoff, J., Abowd, G.D. and Carter, S. Distributed mediation of ambiguous context in aware environments. Proceedings of UIST 2002, pp. 121-130.
4. Dey, A.K., Salber, D. and Abowd, G.D. A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications. Human-Computer Interaction, 2001, 16(2-4), pp. 97-166.
5. Schilit, B. and Theimer, M. Disseminating Active Map Information to Mobile Hosts. IEEE Network, Vol. 8, pp. 22-32, 1994.
6. Strauss, P.S. and Carey, R. An Object-Oriented 3D Graphics Toolkit. ACM Computer Graphics, 1992, 26(2).
Service Platform for Exchanging Context Information
Daisuke Morikawa Masaru Honjo Akira Yamaguchi Masayoshi Ohashi
KDDI R&D Laboratories Inc.
2-1-15 Ohara Kamifukuoka, Saitama 356-8502 JAPAN
+81 49 278 7883
{morikawa, honjo, yama, ohashi}@kddilabs.jp
and ALI setting to SP and/or CU. The CM should be under the complete control of the CR and should maintain independence from other functions, for the purpose of the user's privacy protection.

Figure 1. Schematic illustration of a service platform for exchanging user's context information. (Labels in the figure: 1) capturing context information; 2) registering context info with open level indicator (OLI); 3) setting the access level indicator (ALI); 4) request for user's context info; 5) messaging; Context Registrar (CR, mobile terminal); Context Manager (CM, personal server: context repository, management of context, access control); context-based Service Providers (SPs, networked servers); a case of context: the user's activities, based on the relation between "object" (sofa, TV, PC) and "activity" (having a rest, watching TV, Web access / email).)

CONTEXT EXCHANGING SERVICE
We designed a messaging service for exchanging users' activities with each other, which is provided by an SP. Mobile terminal A (MTA) has the functions of both registering CRA's context information and requiring CUB's and CUC's context information. The functions equipped on MTB and MTC are determined in the same manner as in MTA. An example sequence for exchanging user's context information is presented in Fig. 2. The context exchange service is based on triggers from the CU and the CR.

Fig. 2 (sequence labels): trigger, require CRB's context, register, context forwarding, access control, re-formatting, result; participants: context user (mobile terminal), context-based SP (networked server), context manager (personal server), context registrar (mobile terminal).
The State Predictor Method for Context Prediction
Jan Petzold, Faruk Bagci, Wolfgang Trumler, and Theo Ungerer
University of Augsburg
Institute of Computer Science
Eichleitnerstr. 30, 86159 Augsburg, Germany
{Petzold, Bagci, Trumler, Ungerer}@Informatik.Uni-Augsburg.DE
ABSTRACT

CAPTURING INTERACTIONS BY MULTIPLE SENSORS

Figure 1: Setup of the ubiquitous sensor room (ubiquitous sensors: video camera, microphone, IR tracker; per-user video camera, IR tracker and LED tag; PC).

INTERPRETING INTERACTIONS

Figure labels: summary video of the user's entire visit; list of highlighted scenes during the user's visit; annotations for each scene (time, description, duration); staying; coexistence; gazing at an object; joint attention; attention focus (socially important event); conversation.
Ubiquity in Diversity – A Network-Centric Approach
Rajiv Chakravorty, Pablo Vidales, Boris Dragovic, Calicrates Policroniades, Leo Patanapongpibul
Cambridge Open Mobile Systems (COMS) Project Initiative
University of Cambridge Computer Laboratory and Engineering Department
William Gates Building, JJ Thomson Avenue
Cambridge CB3 0FD, U.K.
COMS Web: http://www.cl.cam.ac.uk/coms/
Wireless networking has witnessed strong growth recently due to the popularity of WiFi (802.11b-based WLANs) and the world-wide deployment of wide-area wireless networks such as GPRS and 3G. Devices that can connect to multiple networks (e.g., GPRS-WLAN cards) are becoming increasingly affordable, and in future, mobile devices such as laptops, PDAs and handhelds will be equipped to connect to multiple different networks. As the environment becomes more diverse and heterogeneous, with a range of networks, devices and services to choose from, a key issue that will need to be addressed is that of heterogeneity. In this poster abstract, we discuss our practical efforts in building a truly ubiquitous environment for secure heterogeneous networking.

Using an experimental testbed that creates a heterogeneous environment, we are investigating the following: handoffs within and across different networks, and to other en…

… device or transmitted through a communication link in the heterogeneous space. However, if the context in the heterogeneous space is known, we can easily identify relevant security and privacy threats that the data object is exposed to, and then mitigate the identified risks by proactively managing the data object format. The challenge in this context model is to match heterogeneity with device capabilities, quality as well as confidence levels available from the model, while at the same time tapping the full potential of myriad technologies for sensing the context.

In the Cambridge Open Mobile Systems project [1], we are investigating how we can achieve this vision of secure heterogeneous networking. As a first step, we have already investigated the extent to which Mobile IPv6 can be used to successfully migrate TCP connections during inter-network handovers [2].
and detection steps can overlap, as there are scenarios where the decision process may require more probing of the network (for example, duplicate address detection time).

We have partitioned the handoff (execution) latency into three components: detection, configuration and registration times. We have investigated the extent to which Mobile IPv6 could be used to successfully migrate TCP connections during inter-network handoffs. Using the testbed, we have evaluated the impact layer-3 hard handoffs have on transport protocols such as TCP; a more thorough description is available in the form of a separate technical report [2]. Besides, we have experimentally evaluated schemes that improve vertical handovers (Fast Router Advertisements (RAs), RA Caching and Binding Update simulcasting in Mobile IPv6, smart buffer management using a TCP proxy in GPRS, and soft handovers), which improve TCP performance dramatically [2, 3].

Building further on this work, our ongoing research is focused on broadening the concepts of secure and efficient heterogeneous networking under the aegis of the COMS project [1]. As previously discussed, we have already evaluated schemes that improve handover performance, and we are currently focused on exploiting several potential areas for secure heterogeneous mobility: mobility management and networking with context, using feedback information from this context to provide fine-grained adaptation for data, and identifying the threats to this data model.

Other practical applications of the testbed include two potential research areas for mobile networking: context-aware networking using the Sentient Car, and the Mobile Access Router (MAR) [4]. The two research areas are closely knit, and both require a good understanding of mobility in heterogeneous environments.

Sentient Car for Context-Aware Networking. In this project, we are investigating how networking context (situation awareness) based on location, movement direction and speed can be used to make better, informed decisions during inter-network handovers. … mobile environments, we will use the Sentient Car, which is situation aware based on its location (using GPS), movement direction and speed. The Sentient Car is an outcome of joint research of different departments of the University of Cambridge.

Mobile Access Router (MAR). MAR [4] is a system consisting of a MAR Client, a multimode mobile device used as a mobile access router and connected to different wireless networks simultaneously (e.g., GPRS, 3G, WLAN), which communicates with a MAR Server proxy located in the wired infrastructure. The MAR client is a mobile access router to be placed in a car, bus, train etc., and it performs bandwidth striping (aggregation) across multiple network interfaces to exploit the distributed spatial diversity available from different wireless access networks. Diversity provides a highly reliable "always-on" wireless communication channel. The MAR project can extend the use of Mobile IPv6 in this environment.
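Bandwidth striping of this kind can be sketched with a simple proportional scheduler. This is not the MAR system's actual algorithm; the interface names, bandwidth estimates and credit-based policy below are assumptions chosen only to illustrate splitting traffic across links in proportion to their capacity.

def stripe(packets, interfaces):
    """Assign packets to interfaces in proportion to estimated bandwidth.

    interfaces: dict mapping interface name -> estimated bandwidth (e.g. kbit/s).
    Returns a dict mapping interface name -> list of packets sent over it.
    """
    assignment = {name: [] for name in interfaces}
    credit = {name: 0.0 for name in interfaces}   # how far behind its fair share each link is
    total = sum(interfaces.values())
    for packet in packets:
        for name, bandwidth in interfaces.items():
            credit[name] += bandwidth / total
        best = max(credit, key=credit.get)        # the most "owed" interface gets the packet
        assignment[best].append(packet)
        credit[best] -= 1.0
    return assignment

# Example: GPRS, UMTS and WLAN links with rough (assumed) downlink estimates.
links = {"gprs": 40, "umts": 384, "wlan": 5000}
print({name: len(pkts) for name, pkts in stripe(list(range(100)), links).items()})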
Our poster illustrates several such practical intricacies using a real testbed, and provides a sound description of our ongoing research on secure heterogeneous networking. Please visit our project COMS web page, http://www.cl.cam.ac.uk/coms/, for further details and information about our ongoing research and papers.

REFERENCES
1. Cambridge Open Mobile System Project. http://www.cl.cam.ac.uk/coms/
2. R. Chakravorty, P. Vidales, L. Patanapongpibul, K. Subramanian, I. Pratt and J. Crowcroft. "On Inter-network Handover Performance using Mobile IPv6". University of Cambridge Computer Laboratory Technical Report, May 2003. http://www.cl.cam.ac.uk/coms/publications.htm
3. R. Chakravorty, P. Vidales, K. Subramanian, I. Pratt and J. Crowcroft. "Practical Experiences with Wireless Networks Integration using Mobile IPv6". Poster and 2-page extended abstract in ACM MOBICOM 2003, San Diego, October 2003. http://www.cl.cam.ac.uk/coms/publications.htm
A Peer-To-Peer Approach for Resolving RFIDs
Christian Decker, Michael Leuchtner, Michael Beigl
TecO, University of Karlsruhe
Vincenz-Priessnitz-Str. 1, 76131 Karlsruhe, Germany
http://www.teco.edu
{cdecker, leuchtner, beigl}@teco.edu
vides a fixed identification string of 8 bytes and a service identification string of 44 bytes from its memory. The enquirer uses the service identification to query the resolver. The network replies with peer advertisements matching the service identification. At this point the authenticity of the resolving service has not yet been proven. The enquirer therefore connects to all advertised peers. A message Mq containing a randomly chosen session ID, the service identification and the RFID from the transponder is encrypted with the public key of the resolving service, signed using the enquirer's private key, and then sent to each connected peer. A resolving service can now verify the authenticity of the message using the public key of the enquirer and decrypt the message using its private key. The query request can then be fulfilled by the resolving service. A message Mr containing the received session ID, the service description and the response data is then encrypted, signed and sent back to the enquirer, which can now prove the authenticity of the resolving service. Figure 2 summarizes the resolving mechanism.

Figure 2: Resolving Mechanism (message flow between enquirer and P2P resolving service: query(service), reply(peers), connect, sigE(pkRS(Mq)), sigRS(pkE(Mr))).
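The sign-and-encrypt exchange can be sketched in a few lines. The authors' implementation builds on JXTA [3] and GnuPG [4]; the Python sketch below instead uses RSA keys via the cryptography package purely for illustration, and the message layout, key sizes and helper names are assumptions.

import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Illustrative key pairs for the enquirer (E) and the resolving service (RS).
enquirer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
resolver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def make_query(session_id, service_id, rfid):
    """Enquirer side: build Mq, encrypt it for the resolver, sign the ciphertext."""
    mq = json.dumps({"session": session_id, "service": service_id, "rfid": rfid}).encode()
    ciphertext = resolver_key.public_key().encrypt(mq, OAEP)
    signature = enquirer_key.sign(ciphertext, PSS, hashes.SHA256())
    return ciphertext, signature

def answer_query(ciphertext, signature, lookup):
    """Resolver side: verify the enquirer's signature, decrypt Mq, return signed Mr."""
    enquirer_key.public_key().verify(signature, ciphertext, PSS, hashes.SHA256())  # raises on failure
    mq = json.loads(resolver_key.decrypt(ciphertext, OAEP))
    mr = json.dumps({"session": mq["session"], "data": lookup(mq["rfid"])}).encode()
    reply_ct = enquirer_key.public_key().encrypt(mr, OAEP)
    reply_sig = resolver_key.sign(reply_ct, PSS, hashes.SHA256())
    return reply_ct, reply_sig

ct, sig = make_query("s-42", "example-service", "A1B2C3D4E5F6A7B8")
reply_ct, reply_sig = answer_query(ct, sig, lambda rfid: f"item record for {rfid}")
resolver_key.public_key().verify(reply_sig, reply_ct, PSS, hashes.SHA256())
print(json.loads(enquirer_key.decrypt(reply_ct, OAEP)))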
Our tests showed an average response time of six seconds for a query, mainly caused by the encryption algorithm and the delays while waiting for replies to peer advertisements.

DISCUSSION AND APPLICATIONS
Apart from strengths like anonymity, authenticity and security, there are also weaknesses. The exchange of the public keys is an overhead during protocol initialization, making the setup of new resolving services and enquirers inconvenient. An initial direct and secure connection between enquirer and resolving service can be applied. Furthermore, the management of possibly several thousand keys on a machine requires a large effort to secure the enquirers and resolving services. There are also performance issues: the signature of all messages arriving at the resolving service must be checked against each known enquirer, which causes a huge load when the network scales up. Advanced features like the group creation implemented in JXTA might be helpful to balance the load. On the application side we see a huge potential when manufacturers can electronically trace their items. Applications in the field of SCM and CRM systems might benefit from the ubiquity of extensive information about items, which becomes easily and securely accessible through our approach. The major strengths of the P2P approach are the non-authoritative extensibility, achieved by just adding another resolving service or enquirer, and the anonymity. A manufacturer providing a resolving service does not need to share any information with an authoritative organization, and can use his own identification scheme for his items. Anonymity guarantees that queries for item identifications are not traceable by others. Furthermore, the asymmetric encryption ensures authenticity and protects exchanged data. The control of information is completely on the manufacturer's side. Therefore we also see an application area in workflow management systems controlling processes interwoven between various manufacturers.

RELATED WORK
The Auto-ID Center [5] aims to create standards for an "Internet of things". Identification of objects is based on RFID transponders. The resolving service uses a DNS-like tree-based system called the Object Naming Service (ONS), returning a resource address for extensive information about an object. With CueCat [6], users could scan an item's barcode, which was sent encrypted over the Internet to CueCat's manufacturer, returning the URL of an appropriate website about the item. The encryption was cracked, and it was found that the manufacturer collected personal data from each scanner device. In research on security in P2P networks, reputation-based approaches and protocols like XREP [2] were developed to handle various attacks. However, reputations need to be shared, and since in our scenario enquirers don't share information, this method cannot be applied here.

CONCLUSION AND FUTURE WORK
We presented a system design and its implementation for resolving RFIDs using a P2P network where queries and responses are encrypted and signed. This approach is marked by anonymity, security and non-traceability of queries and responses. Furthermore, it enables easy ad hoc and non-authoritative extension and redundancy. Ubicomp applications benefit from this system as it provides a middleware for resolving associations between real-world objects and their virtual presence. Future investigations will look into group creation for performance and redundancy reasons, and into possibilities of using this system as a generic resolving mechanism.

REFERENCES
1. Kindberg T. et al. (2000). People, Places, Things: Web Presence for the Real World. WMCSA 2000, p. 19.
2. Damiani E. et al. A reputation-based approach for choosing reliable resources in peer-to-peer networks. ACM CCS 2002, 207-216.
3. Project JXTA. http://www.jxta.org [accessed: 7/10/2003]
4. GNU Privacy Guard (GnuPG). http://www.gnupg.org [accessed: 7/10/2003]
5. Auto-ID Center. http://www.autoidcenter.com [accessed: 7/10/2003]
6. CueCat. http://www.cuecat.com [accessed: 7/10/2003]
Single Base-station 3D Positioning Method using
Ultrasonic Reflections
Esko Dijk¹,², Kees van Berkel¹,², Ronald Aarts², Evert van Loenen²
¹ Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands, Phone: +31-40-2742256, [email protected]
² Philips Research Laboratories Eindhoven, Prof. Holstlaan 4, 5656 AA Eindhoven, The Netherlands, [email protected]
Since fewer base stations lead to lower cost and easier setup, a novel method is presented that requires just one base station. The method uses information from acoustic reflections …

Keywords
Location awareness, location systems, ultrasonic positioning

1. INTRODUCTION
In future consumer electronics, context awareness will play an important role. Often, the locations of people, devices and objects are part of the required context information of which consumer devices need to be 'aware'. Within the PHENOM project [1], several application scenarios were developed that require in-home 3D device position information.

The required position accuracy (typically ≤ 1 m) can not be delivered by wide-area systems like GPS. Therefore, a specialized indoor positioning system is required. It may use radio waves (RF), magnetic fields, ultrasonic waves, or combinations thereof. We investigate systems based on ultrasonic waves, because of the potential high accuracy at low cost. State-of-the-art ultrasonic systems calculate distances from ultrasound time-of-flight measurements, and then use triangulation algorithms to calculate a 3D position. A disadvantage of this approach is that several units of infrastructure are required at fixed known positions in a room. Generally four base stations (BS) are required in a non-collinear setup to estimate a 3D position. In special cases like ceiling-mounted BSs, three are sufficient. Fewer BSs would make positioning systems cheaper, and easier to set up. Therefore we investigate whether a positioning system can work with fewer BSs, or with just one BS (of small size) in the extreme case.

Figure 1: Measured signature at a receiver position. The horizontal axes show time (top) and the corresponding distance interval [0, 10] m.

…trasonic waves against the walls, floor and ceiling of a room. How these reflections may help in position estimation will be explained in this section. A typical (processed) ultrasonic signal measured at some receiver in a box-shaped room is shown in Fig. 1. At time t = 0 a source emits a burst-like signal. Using time synchronization between devices (e.g. by an RF link such as in the Cricket system [4]), the receiver can measure the time-of-flight of ultrasonic signals, and then calculate the distance to the source. In the figure, the first peak at 2.89 m is the line-of-sight distance. The subsequent peaks are caused by reflections. These reflections were found to contain information about the position of the receiver. The information is contained within the pattern of amplitude peaks, called the signature, shown in the figure.

Note that the fixed BS can be chosen to be either transmitting or receiving ultrasound. We chose it to be a transmitter, to allow many mobile device receivers to co-exist without causing ultrasonic interference problems between devices.
Figure 2: 2D top view of a room, containing one acoustic source and one receiver. Two acoustic reflections (arrows) and associated image sources (crosses) are shown.

The following example will show the model's principle. Figure 2 shows a top view of a room with an ultrasound source. Two reflections of ultrasonic waves off walls are shown. These reflected waves can be considered as originating from two conceptual image sources marked by crosses. Many more image sources than those shown exist in a room, which can be calculated using the image method [2]. From here on we assume that the source shown is a BS at a fixed known position. It will give rise to many image sources, which can be seen as virtual base stations (VBSs). We can think of VBSs as possible replacements for real BSs, thereby reducing the number of real BSs. To calculate the positions of VBSs, the room dimensions have to be known. The current room model includes 91 VBSs, and room dimensions are measured to ±5 cm accuracy.
ducers are included in the acoustic model. The model fur- basic method are being considered for increased robustness
thermore includes the attenuation of ultrasound in air, res- and calculation speed. One approach is a tracking system
onance characteristics of piezo-electric ultrasound transduc- that integrates information from several measurements over
ers, acoustic interference effects between reflection peaks (in time. Other approaches are based on small-sized transducer
case reflections arrive approximately at the same time), and arrays, embedded in the base station.
wall reflection attenuation factors [3].
REFERENCES
2.2 Signature matching method [1] PHENOM project, 2003. www.project-phenom.info.
Using the acoustic model, it is possible to calculate an ex-
pected acoustic signature given a 3D position and orienta- [2] J. Allen and D. Berkley. Image Method for Efficiently
tion. However, the reverse problem, of directly calculat- Simulating Small-Room Acoustics. J. Acoust. Soc. Am.,
ing 3D position and orientation given a measured signature, 65(4):943–951, 1979.
proves to be much harder. Therefore the former approach [3] E. O. Dijk, C. van Berkel, R. Aarts, and E. van Loenen.
was used as our initial method for 3D position estimation, Ultrasonic 3D Position Estimation using a Single Base
the signature matching method. It simply tries a set C of Station. In Proc. European Symposium on Ambient
mobile device 3D candidate positions in the room, calculates Intelligence (EUSAI), Veldhoven, The Netherlands,
an expected signature at these positions using the model, and 2003. Springer Verlag (to be published).
compares those to the measured signature. Finally the best-
matching candidate position is picked as the likely mobile [4] N. Priyantha, A. Miu, H. Balakrishnan, and S. Teller.
device 3D position. Note that set C is a well-chosen subset The Cricket Compass for Context-Aware Mobile
of all possible room positions. Its size Nc ranged from 7243 Applications. In Proc. ACM 7th Int. Conf. on Mobile
to 11131 in our experiments, with a space between candi- Computing and Networking (MOBICOM), pages 1–14,
date positions of ≤ 5 cm. The current computational load Rome, Italy, 2001.
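To make the matching step concrete, the sketch below shows a brute-force search over the candidate set C. It assumes a hypothetical model function expected_signature(pos, orientation) standing in for the acoustic room model, and uses a simple sum-of-squared-differences match score; both are illustrative choices, not the authors' actual implementation or metric.

import numpy as np

def estimate_position(measured, candidates, orientation, expected_signature):
    """Return the candidate 3D position whose modelled signature best matches
    the measured amplitude-peak pattern (smallest squared error)."""
    best_pos, best_err = None, np.inf
    for pos in candidates:                           # set C of candidate positions
        expected = expected_signature(pos, orientation)
        err = float(np.sum((np.asarray(measured) - expected) ** 2))
        if err < best_err:
            best_pos, best_err = pos, err
    return best_pos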
Subsequent peaks are caused by reflections. These reflections were found to contain information about the position of the receiver. The information is contained within the pattern of amplitude peaks, called the signature, shown in the figure.

Note that the fixed BS can be chosen to be either transmitting or receiving ultrasound. We chose it to be a transmitter, to allow many mobile device receivers to co-exist without causing ultrasonic interference problems between devices.

Since the acoustic model also needs a candidate orientation to calculate a signature, this orientation either has to be known in advance or estimated on-the-fly. Initially the former approach was used [3], but methods of orientation estimation are currently being developed.

3. RESULTS
A measurement setup was built to test the method. It consists of one piezo-electric ultrasound transmitter base station (BS) and one receiver, both connected to a measurement PC. All processing steps are implemented in software. Preliminary experiments have been performed in an empty office room, to verify the acoustic room model and to test the method in best-case conditions. The transmitter BS was fixed at a wall and the mobile receiver was placed at 20 different positions. A good position estimate was found in 18 positions, all with a positioning error of less than 20 cm. Two positions had higher errors of 0.77 m and 1.20 m. The errors were caused by a combination of three effects in the measured signature ('missing' peaks, 'noise' peaks, and random deviation of peak amplitude from its expected value) that will be further investigated.

4. CONCLUSIONS AND FUTURE WORK
It can be concluded that measured ultrasonic signals contain useful information about the mobile device's 3D position. We propose to use this information to perform device position estimation, using a single base station per room. The signature matching method was developed for this purpose. Initial experiments show that the method works within an empty office room.

Future work is aimed at applying the method in realistic, non-empty rooms. To realize this, several improvements to the basic method are being considered for increased robustness and calculation speed. One approach is a tracking system that integrates information from several measurements over time. Other approaches are based on small-sized transducer arrays embedded in the base station.

REFERENCES
[1] PHENOM project, 2003. www.project-phenom.info.
[2] J. Allen and D. Berkley. Image Method for Efficiently Simulating Small-Room Acoustics. J. Acoust. Soc. Am., 65(4):943–951, 1979.
[3] E. O. Dijk, C. van Berkel, R. Aarts, and E. van Loenen. Ultrasonic 3D Position Estimation using a Single Base Station. In Proc. European Symposium on Ambient Intelligence (EUSAI), Veldhoven, The Netherlands, 2003. Springer Verlag (to be published).
[4] N. Priyantha, A. Miu, H. Balakrishnan, and S. Teller. The Cricket Compass for Context-Aware Mobile Applications. In Proc. ACM 7th Int. Conf. on Mobile Computing and Networking (MOBICOM), pages 1–14, Rome, Italy, 2001.
200
Prototyping a Fully Distributed Indoor Positioning System
for Location-aware Ubiquitous Computing Applications
Masateru Minami
Shibaura Institute of Technology
3-9-14 Shibaura, Minato-ku, Tokyo, Japan
[email protected]

Hiroyuki Morikawa
Graduate School of Frontier Sciences, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
[email protected]

Tomonori Aoyama
Graduate School of Information Science and Technology, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
[email protected]
ABSTRACT
This paper describes an indoor positioning system called DOLPHIN (Distributed Object Localization System for Physical-space Internetworking) that enables various physical objects to obtain their location in a fully distributed manner. We present a prototype implementation and an experimental evaluation of the DOLPHIN system built from off-the-shelf hardware.

KEYWORDS
Indoor Positioning System, Distributed Algorithm

INTRODUCTION
In ubiquitous computing environments, the physical location of indoor objects is one of the key pieces of information needed to support various applications. To obtain indoor location information, several positioning systems have been proposed. Active Bat [1] and Cricket [2] use ultrasonic pulse TDOA (Time Difference of Arrival) to measure high-precision 3D position and orientation in indoor environments, but they require an extensive hardware infrastructure. Moreover, such systems usually require manual pre-configuration of the locations of reference beacons or sensors. The setup and management costs would be unacceptably high if we applied them to a large-scale environment such as an office building. The ad-hoc localization mechanism described in [3] can be applied to this problem: its authors proposed a collaborative multilateration algorithm that solves the localization problem in a distributed manner, and performed a detailed simulation-based analysis of a distributed localization system. To design a practical location information infrastructure, we believe that experimental analysis is also needed to discover the practical problems in distributed localization systems. From this point of view, we have developed a distributed positioning system called DOLPHIN (Distributed Object Localization System for Physical-space Internetworking) that can determine objects' positions using only a few manually configured references. The system is made from off-the-shelf hardware devices, and implements a simple but practical distributed positioning algorithm.

Positioning Algorithm
Figure 1 shows an overview of the DOLPHIN system. The system consists of a number of DOLPHIN nodes, each containing a 2400 bps RF transceiver, several 40 kHz omni-directional ultrasonic transducers, and a HITACHI H8S/2215 16 MHz CPU. The CPU calculates the location of the node; the RF transceiver is used for time synchronization and message exchange among nodes.

The key idea in our positioning algorithm is hop-by-hop localization. For example, in the bottom left of figure 1, node D can determine its position by receiving ultrasound pulses from the reference nodes A, B, and C. However, nodes E and F cannot receive ultrasonic pulses from the reference nodes due to physical obstacles such as walls. Here, if the position of node D is determined, and node E can receive ultrasonic pulses from node D, node E can compute its position using the distances from nodes B, C, and D. If the locations of nodes D and E are determined, node F can compute its position using nodes C, D, and E. In this way, all nodes in the DOLPHIN system can be located. There are two main advantages to this mechanism. First, the system requires only a few (minimum three) reference nodes to determine all node positions. Second, nodes can determine their positions even if they cannot receive ultrasound from any reference node directly.

The positioning algorithm runs by exchanging several messages, as shown in figure 2: an ID notification message (IDMsg), a measurement message (MsrmtMsg), and a location notification message (LocMsg). The nodes in the system play three different roles: there is one master node, one transmitter node, and the rest are receiver nodes. Consider the example depicted in figure 1, where nodes A, B, and C are reference nodes, and nodes D, E, and F are normal nodes (the positions of these nodes are unknown). Here, we assume that nodes A, B, and C have node lists [B, C], [A, C], and [A, B] respectively. We also assume that nodes E and F cannot receive ultrasonic pulses from node A because of an obstacle such as a wall.

Now consider that node A acts as a master node. Figure 2 shows the timing chart of our positioning algorithm. First, node A chooses one node randomly from its node list [B, C]. If node B is chosen, node A transmits a MsrmtMsg including the ID of node B. On receiving the message, node B becomes the transmitter node and generates ultrasonic pulses. At the same time, nodes C, D, E, and F become receiver nodes and start their internal counters (synchronization phase). When a receiver node detects ultrasound from node B, it stops its internal counter and calculates its distance from node B. After several ms (this depends on the time taken by the overflow of the internal counter), node B sends a LocMsg to notify the receiver nodes of its position. Receiver nodes that could detect the ultrasound pulse from B store the
201
location of node B and their distances to node B in their position table (measurement phase). After that, all nodes listen for an IDMsg for several ms (advertisement phase). If there is a node that could determine its position based on three or more distances, it advertises its ID in this phase. This ID is added to the node list of every other node. In the above example, because nodes D, E, and F cannot determine their positions, no IDMsg is sent in this phase. The sequence of the above phases defines one cycle of the positioning algorithm in the DOLPHIN system.

In the next cycle, node B, which acted as a receiver node in the previous cycle, becomes the master node, and the positioning algorithm proceeds in the same manner. After three or more cycles of positioning, node D can determine its position based on the measured distances from nodes A, B, and C, at which time node D can send its IDMsg in the advertisement phase. All other nodes that received the IDMsg from node D add the ID of node D to their node lists, and node D is recognized as a candidate master node. After node D becomes a master node, nodes E and F can measure their distances from node D. Then node E can determine its position and advertise its IDMsg. Finally, based on nodes C, D, and E, node F can determine its position. In this way, we can locate all nodes in the DOLPHIN system.
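As an illustration of the computation each node performs once it has distance measurements to three or more located nodes, the sketch below solves the multilateration problem with a linearized least-squares step. It is a generic formulation under our own assumptions (the function and variable names are ours, not the DOLPHIN firmware); with only three non-collinear references it yields a planar fix, and a fourth non-coplanar reference is needed for an unambiguous 3D position.

import numpy as np

def multilaterate(ref_positions, distances):
    """Estimate a node's position from located reference positions and measured
    distances by linearizing |x - p_i|^2 = d_i^2 against the first reference and
    solving the resulting linear system in the least-squares sense."""
    p = np.asarray(ref_positions, dtype=float)   # shape (n, dim)
    d = np.asarray(distances, dtype=float)       # shape (n,)
    A = 2.0 * (p[1:] - p[0])
    b = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2) - d[1:] ** 2 + d[0] ** 2
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example: a node measuring distances (in metres) to three 2D references.
refs = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
print(multilaterate(refs, distances=[2.5, 3.20, 1.80]))   # approx. [1.5, 2.0]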
In the DOLPHIN system, we have to prepare for two types of failure, node failure and recognition failure, in order to continuously execute the above-mentioned positioning algorithm. A node failure means that a node suddenly stops because of an unpredictable accident; a recognition failure means that the IDMsg transmitted by a node capable of acting as a master node does not reach other nodes because of a bad communication channel or a message collision. To recover from these failures, each node in the system has a recovery timer and an advertisement timer. The recovery timer is set when nodes receive a MsrmtMsg, and expires if there has been no MsrmtMsg for a certain period (e.g. 1.5 seconds). If the recovery timer expires, a node is randomly chosen to become the master node, and the positioning algorithm continues. If a node capable of acting as a master node does not receive a MsrmtMsg from other nodes within a certain period (e.g. 10 seconds), the advertisement timer in the node expires; this means the node is not recognized as a candidate master node by the other nodes. In this case, the node retransmits its IDMsg in the advertisement phase of each positioning cycle. Note that to avoid IDMsg collisions in the advertisement phase, the node sends the IDMsg with a certain probability, which is determined by the number of nodes in its node list.

Experimental Result and Future Work
We placed seven nodes as shown in figure 3, and computed the average and the variance of the measured position of each normal node (nodes D-G) over 1000 cycles. The results showed that the system could determine objects' positions with an accuracy of around 15 cm in an actual indoor environment. However, the positioning error increases at nodes E-G compared to that at node D. This is because the positioning error at node D affects the position determination of nodes E-G, which determine their positions based on node D. Although this error propagation problem is inherently unavoidable in the DOLPHIN system, we expect to minimize the positioning error by placing reference nodes at appropriate locations.

Since the current prototype is a handmade system, its performance may be insufficient to support many indoor location-aware applications. In addition, the number of nodes is too limited to measure the performance in a large-scale environment. We are currently designing an improved version of the system that can handle practical problems such as multipath propagation and node mobility, as well as the scalability problem in large-scale environments.

References
[1] A. Ward, et al.: A New Location Technique for the Active Office. IEEE Personal Communications Magazine, Vol. 4, No. 5, October 1997.
[2] N. Priyantha, et al.: The Cricket Compass for Context-aware Mobile Applications. Proc. MOBICOM 2001, July 2001.
[3] A. Savvides, et al.: Dynamic Fine Grained Localization in Ad-Hoc Sensor Networks. Proc. MOBICOM 2001, July 2001.
202
Connectivity Based Equivalence Partitioning of Nodes to
Conserve Energy in Mobile Ad Hoc Networks
Anand Prabhu Subramanian
School of Computer Science and Engineering,
College of Engineering, Guindy,
Anna University, Chennai – 600 025
Tamil Nadu, India
[email protected]
ABSTRACT
The nodes in Mobile Ad Hoc Networks (MANETs) work on low-power batteries, so reducing energy consumption has been a recent focus of wireless ad hoc network research. The power in the nodes dissipates even when the network interface is idle. In this paper, we present a topology maintenance algorithm, the Equivalence Partitioning method, which is based on the connectivity among the nodes in the network. This algorithm partitions the network into equivalence sets in which one of the nodes in the set is active and the other nodes in the set turn off their radio. The algorithm takes care that the capacity and connectivity of the network do not diminish significantly. It is a simple, distributed, randomized algorithm in which nodes make local decisions to form the equivalence partitions and go to the on or off state. In addition, this topology maintenance algorithm can be made to work along with the 802.11 power saving mode to improve communication latency and system lifetime.

Keywords
Equivalence partitioning, on state, off state, active node

INTRODUCTION
Wireless multi-hop ad hoc networking has been the focus of many recent research and development efforts for its applications in military, commercial and educational environments. Most of the protocols that have been proposed to provide multi-hop communication in wireless ad hoc networks [2, 3] are evaluated in terms of route length [4], routing overhead, and packet loss rate. But minimizing energy consumption is an important challenge in mobile networking. Since the network interface may often be idle, power could be saved by turning off the radio when not in use. However, the coordination of power saving with routing in ad hoc wireless networks is not straightforward. The subject of this paper is a topology maintenance algorithm which partitions the network in such a way that one of the nodes in each partition must be active, so that the connectivity of the network does not diminish, while the other nodes can turn off their radios. The responsibility of the active node is randomly changed so that every node is treated equally and the lifetime of the overall network is increased.

RELATED WORKS
Reducing energy consumption has been the recent focus of wireless ad hoc network research. The Geographic Adaptive Fidelity (GAF) [5] scheme of Xu et al. self-configures redundant nodes into small groups based on their geographic locations and uses a localized, distributed algorithm to control node duty cycles to extend network operational lifetime. But in many settings, such as indoors or under trees where GPS does not work, location information is not available. The dependency on global location limits GAF's usefulness. In addition, geographic proximity does not always lead to network connectivity. The SPAN [1] scheme of Chen and Jamieson proposes a distributed algorithm for approximating connected dominating sets in an ad hoc network that also appears to preserve connectivity. SPAN elects coordinators, actively preventing redundant nodes, by using randomized slotting and damping. Equivalence partitioning differs from GAF in that it constructs the partitions based on connectivity information rather than the geographic location of the nodes. Also, unlike SPAN, it constructs equivalence partitions and randomly rotates the active nodes within each partition.

EQUIVALENCE PARTITIONING DESIGN
In the Equivalence Partitioning technique, we divide the network into different sets of equivalent nodes, so that one of the nodes in each partition can be active in order to maintain connectivity while the rest remain in their power saving mode. The role of the active node is randomly chosen so that the burden of forwarding, sending and receiving data is distributed evenly to all nodes.

Partitioning the network into Equivalence Sets

Figure 1: A network with five nodes
203
This is a distributed randomized algorithm for constructing equivalence partitions among the nodes in the network. Consider the network shown in Figure 1. The nodes B, C and D are on the path between nodes A and E. In this case all three nodes need not be awake to forward packets from node A to E. We treat the nodes B, C and D as forming an equivalent partition; it is sufficient for one of these nodes to be awake to maintain connectivity. The Equivalence Partitioning algorithm is as follows (a sketch of the partition-forming step is given after the list).

• The node Ni constructs its neighbor set by sending HELLO packets to its one-hop neighbors. The nodes hearing this packet respond with a HELLO reply so that the node Ni can construct its neighbor set. Let NHi be the neighbor set of node Ni.

• Now Ni advertises its neighbor set to its one-hop neighbors, so that it can find out the number of pairs of its neighboring nodes connected via this node.

• Find the intersection between the neighbor sets of the adjacent nodes. Let C be the cardinality of the intersection set with the first neighbor.

• If the cardinality is equal to or more than two, then form an equivalence partition and assign a unique partition id to the nodes.

• Consider the next neighbor. Let C′ be the cardinality of the intersection set between the node Ni and the neighbor currently considered. If C′ > C, a new group is formed between the node Ni and this neighbor, destroying the previous partition.

• If C′ = C with the same elements, then add the new neighbor to the same partition and assign it the partition id.

• Repeat the above process until each node has received the neighbor set from all its one-hop neighbors.

Each and every node is in exactly one of the partitions.
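The sketch below illustrates the neighbor-set comparison at the heart of the steps above: for each neighbor, a node computes the cardinality of the intersection of the two neighbor sets and keeps the neighbor(s) with the largest qualifying intersection in its partition. It is our own simplified, single-node view under assumed data structures, not the paper's implementation, and it omits the HELLO exchange, the same-elements check and the distributed assignment of partition ids.

def form_partition(node_id, neighbor_sets):
    """neighbor_sets: dict mapping each node id to its set of one-hop neighbors.
    Returns the set of neighbors grouped with `node_id`, following the
    largest-intersection rule (threshold of two common neighbors)."""
    my_neighbors = neighbor_sets[node_id]
    partition, best_c = set(), 0
    for nbr in my_neighbors:
        c = len(my_neighbors & neighbor_sets[nbr])   # common neighbors
        if c < 2:                                    # below threshold: skip
            continue
        if c > best_c:                               # strictly better: new group
            partition, best_c = {nbr}, c
        elif c == best_c:                            # tie: join the same partition
            partition.add(nbr)
    return partition

# Example for the five-node network of Figure 1, assuming hypothetical
# neighbor sets in which A and E each reach B, C and D:
neighbor_sets = {
    "A": {"B", "C", "D"}, "B": {"A", "C", "D", "E"}, "C": {"A", "B", "D", "E"},
    "D": {"A", "B", "C", "E"}, "E": {"B", "C", "D"},
}
print(form_partition("B", neighbor_sets))   # -> {'C', 'D'}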
Active Node Announcement
Once the equivalence partitions have been constructed and the nodes have their partition id, the active node in each partition must be elected. The following strategies can be used to elect the active node.

• When we start with a new network, all the nodes will have the same power. In this case, the node with the least id in the partition can be chosen to become the active node.

• When the power among the nodes in the partition is not equal, the node with the maximum power or the maximum estimated lifetime can be chosen to be active.

The nodes remain active for a time of T seconds, which depends on the application. The active nodes can be rotated randomly, in round-robin fashion, or based on heuristics which take the expected lifetime of each node into consideration.

Compatibility with 802.11 Power saving mode
This topology maintenance algorithm can be used along with the 802.11 power saving mode to improve the system lifetime. An interesting question is how a node in the off state handles traffic originating from it or destined to it. In the former case, if the node has data to send, it can simply power on its radio and send out the data. In the latter case, the 802.11 power saving mode can be used, in which the active nodes temporarily buffer data for the nodes in the off state and send the data later.

RESEARCH CHALLENGES AND FUTURE WORK
The simplicity and fast convergence of the Equivalence Partitioning algorithm open up many research challenges. We are currently working on finding the optimal way of choosing the active node in a partition and the random rotation policy. Different heuristics related to the rotation of the active nodes are being analyzed so that all the nodes in the network are treated evenly and the overall network lifetime increases. More evaluation of the partitioning algorithm should be performed to determine its convergence time and its adaptability to network mobility. The cases in which the active node moves far from the remaining nodes, and the optimal time after which the partitioning algorithm must be rerun, should also be analyzed. We have presented a topology maintenance algorithm and have shown its benefits. It is our belief that this approach opens up new areas of research in energy conservation in mobile ad hoc networks. We have provided a basis for discussion of a number of research issues that need to be addressed to improve the performance of the overall network.

REFERENCES
1. B. Chen, K. Jamieson, H. Balakrishnan, R. Morris. SPAN. In Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom), Rome, Italy, July 2001.
2. C. Perkins. Ad hoc on demand distance vector (AODV) routing. Internet-Draft, draft-ietf-manet-aodv-04.txt, pages 3-12, October 1999. Work in Progress.
3. J. Broch, D. B. Johnson, and D. A. Maltz. The dynamic source routing protocol for mobile ad hoc networks. Internet-Draft, draft-ietf-manet-dsr-03.txt, October 1999. Work in Progress.
4. J. Broch, D. Maltz, D. Johnson, Y. Hu, and J. Jetcheva. A performance comparison of multihop wireless ad hoc network routing protocols. In Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking, pages 85-97, October 1998.
5. Xu, Y., Heidemann, J., Estrin, D. Geography-informed energy conservation for ad hoc routing. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom), Rome, Italy, July 2001, pp. 70-84.
204
Self-configuring, Lightweight Sensor Networks
for Ubiquitous Computing
Christopher R. Wren and Srinivas G. Rao
Research Laboratory
Mitsubishi Electric Research Laboratories
201 Broadway; Cambridge MA USA 02139
205
[Two panels: "Graph Estimated from Ground truth Distances" and "Graph Estimated from Delays".]
Figure 1: The ground truth overlap (left) compared to the statistical transition probability matrix (right).
Figure 2: The ground truth distance map (left) compared to the peak-delay map (right). Distance in meters.
206
Grouping Mechanisms for Smart Objects Based On
Implicit Interaction and Context Proximity
Stavros Antifakos, Bernt Schiele
ETH Zurich, Switzerland

Lars Erik Holmquist
Viktoria Institute, Göteborg, Sweden
209
disrupts nor fundamentally alters the object's original use, but enhances the functionality already present. When presenting the collected digital data back to the user, the decorative qualities of the handbag are accented, creating a spontaneous street "performance" for the user and casual observers. People who do not own an Inside/Outside bag can benefit from being able to view and interpret the data presented on bags carried by other people on the street.

Personally Invested Information Access
Inside/Outside can function as a stand-alone personal environmental monitoring system or be part of a distributed sensor network. The environmental sensing capabilities of the bag belong personally to the user. This creates a sense of identification and empowerment, as data is collected and stored locally, allowing users to decide for themselves how to use and interpret the information they receive. When a network of bags forms and collective readings of the sensors' input are examined, detailed and locally specific information about "micro-climates" of pollution can be identified for the community, possibly changing behavioral patterns in the city over time.

IMPLEMENTATION
The Inside/Outside handbag is integrated with environmental sensors and smart textiles and utilizes the DAWN [5] wireless network infrastructure (DAWN is a Trinity College wireless network test-bed). Initial conceptual designs were based on informal surveys and workshops conducted with city dwellers and pedestrians from Dublin, Ireland and Los Angeles, California. The two cities were selected to maximize differences in lifestyle, culture, and urban behavior.

Figure 2. System Diagram

The Handbag
The Inside/Outside handbag uses an air quality sensor and audio microphone input connected to a microcontroller. As the user carries the bag through the city, changes in ambient air quality and noise levels cause conductive embroidery on the bag surface to heat and subsequently cool [Fig. 1]. Thermo-chromic pigments mixed with acrylic paint and applied onto a fabric substrate create a visible color change that is both controlled and programmable.

Network Communication and System Design
Inside/Outside sits on top of the DAWN ad-hoc network [Fig. 2]. Sensor data is sent through the communications stack to the desktop application. The aggregate data provides a data diary of environmental exposure levels. The project has the potential to make use of all the ad-hoc networking capabilities of DAWN, as the modular design of the system will allow additional functionality to be added easily as development of the project continues.

CONCLUSION & FUTURE RESEARCH
Initial prototypes for Inside/Outside are complete. Early evaluations show promising results. There is interest in the project from individuals who are usually uninterested in computing gadgets, though detailed user studies need to be conducted to confirm this. New scenarios and prototypes, which address intercommunication between the Inside/Outside handbag and other environmental elements and bag nodes, will be developed, as well as continued exploration and exploitation of the ad-hoc networking capabilities of the project, especially in relation to mobility and parasitic deployment of sensor networks within the urban zone. As a wearable everyday object, Inside/Outside provides a compelling context for research into public space and urban behavior.

ACKNOWLEDGMENTS
This research is supported by the TRIP project at Trinity College Dublin.

REFERENCES
1. Post, E.R., Orth, M., Russo, P.R., Gershenfeld, N. E-broidery: Design and fabrication of textile-based computing. IBM Systems Journal, Vol. 39, Nos. 3&4, 2000, 840-860.
2. Hallnäs, L. and Redström, J. Abstract Information Appliances: Methodological Exercises in Conceptual Design of Computational Things. In DIS2002: Serious reflection on designing interactive systems, pp. 105-116. ACM.
3. Ishii, H. and Ullmer, B. Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. In Proceedings of the Conference on Human Factors in Computing Systems (CHI '97), Atlanta, March 1997, ACM Press, pp. 234-241.
4. Van Laerhoven, K., Schmidt, A. and Gellersen, H.W. "Pin&Play: Networking Objects through Pins". In Proceedings of Ubicomp 2002. Springer.
5. O'Mahony, D. & Doyle, L. Beyond 3G: Fourth-Generation IP-based Mobile Networks. In Wireless IP and Building the Mobile Internet, Ed. Dixit, S., Artech House, Norwood, MA, 2002. Chapter 6, pp. 71-86.
210
i-Beans: An Ultra-low Power Wireless Sensor Network
Sokwoo Rhee, Deva Seetharam, Sheng Liu, Ningya Wang, Jason Xiao
Millennial Net. 201 Broadway, Cambridge, MA - 02139.
{sokwoo, dseetharam, sliu, nwang, jxiao}@millennial.net. http://www.millennial.net
ABSTRACT
This paper presents a newly developed short-range, ultra-low power wireless device called the "i-Bean", an ad hoc, self-organizing network protocol, and their application to low data-rate ubiquitous computing applications.
Keywords
Wireless sensor networks, low-power sensor networks,
low data-rate networks, i-Beans.
1. i-Bean (or Endpoint) - These are the devices that are directly connected to sensors and embedded in the operating environments. They are tiny (25 x 15 x 5 mm) and power efficient. Each endpoint provides four 8-bit analog input channels, four digital I/O channels, and a UART port for interfacing with

2.2.1 Power Efficiency
Power efficiency is a critical factor in wireless sensor networks. Although power consumption must be minimized at all points in the system, the power consumed by endpoints must be optimized to a higher degree, since there are more endpoints in the network than any other
211
device, and also replacing their batteries would be more difficult, as they could be deployed in inaccessible operating environments.

We employ the following techniques to optimize the power consumed by i-Beans:

• Dual Processors - Each endpoint has two processors: 1. a high-speed processor that usually executes tasks related to the RF circuitry; 2. a low-speed processor that usually executes conventional computing and I/O tasks. A process called the coordinator, running on one of these processors, allocates tasks in such a way that tasks are run on the slower of the two processors and the unused processor is placed in sleep mode. A substantial amount of power is saved by putting the high-speed processor in sleep mode most of the time.

• Heterogeneous Nodes - Endpoints, repeaters and gateways perform totally different functions. Endpoints can be either the source or the destination of network data, but cannot forward data for any other nodes. This frees endpoints from active listening, and they can conserve power by being in sleep mode while not communicating or computing. The repeaters are solely responsible for routing data in the network. Further, i-Beans conserve power by transmitting low-power signals; the repeaters in the vicinity forward their packets to the destination using high-power signals.

• Bottom-Up Networking - Endpoints do not waste precious power listening to periodic beacon signals; instead they stay in power saving mode most of the time and wake up occasionally according to their own communication schedule.

Please see our paper [4], which focuses on power conservation strategies, for complete details.

2.2.2 Robust Network
The devices in the i-Bean network self-organize into a network and reconfigure themselves if there is any change in the network. The network is self-organizing, self-healing and yet power efficient. As shown in Figure 1, the topology of the i-Bean network is a star-mesh hybrid. This hybrid topology takes advantage of the power efficiency and simplicity of the star topology for connecting i-Beans to routers, and of the reliability and reach of mesh networks for interconnecting routers, to achieve fault tolerance and range.

We also utilize several other innovative techniques, such as generating true random numbers from RF noise and progressive search (devices search using short messages and employ complete messages only after establishing connections), to increase the reliability of these networks. Please see the publications on our website for further details.

3. RELATED WORK
Researchers have developed several wireless sensor networking platforms. A few prominent ones are Smart Dust [2], BTnodes [1], and Pushpin Computing [3]. The i-Bean network is different from these platforms in the following respects:

1. These systems are composed of homogeneous nodes (identical hardware) that perform specialized functions at runtime by using different software, whereas the i-Bean network is composed of three different types of devices. The heterogeneous system makes it possible to assign complex functionality to routers and to simplify endpoints, thereby reducing their power consumption.

2. They intend to be general-purpose sensor networking platforms, whereas the i-Bean network is tuned for low data-rate applications.

3. Their end nodes are capable of performing relatively complex computations. We use endpoints only to interface with sensors and actuators.

4. DISCUSSIONS AND FUTURE WORK
From our preliminary studies, we find that power consumption in i-Bean networks is extremely low. For instance, when powered by a small coin battery (CR2032) with a capacity of 220 mAh, the average current consumed by an i-Bean is approximately 100 µA when the sampling rate is one sample per second, and therefore the battery will last for about 80 days. If the sampling rate is decreased to one sample per 120 seconds, the average current consumption drops to 1.92 µA, increasing the battery life to about 13.1 years.¹

We need to perform more experiments to understand the impact of our design decisions and tradeoffs when the network is extremely large (> 1000 nodes), since even simple protocols and algorithms can exhibit surprising complexity at scale.

We are also working on further optimizing our algorithms, protocols and hardware.
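As a quick check of the battery-life figures quoted above, the snippet below divides the nominal CR2032 capacity by the average current draw. It is a naive estimate of our own that ignores battery derating, self-discharge and radio overhead, which is presumably why the idealised numbers come out slightly higher than the ~80 days measured at one sample per second.

HOURS_PER_DAY, HOURS_PER_YEAR = 24, 24 * 365

def battery_life_hours(capacity_mah, avg_current_ma):
    """Idealised battery life: capacity divided by average current."""
    return capacity_mah / avg_current_ma

print(battery_life_hours(220, 0.100) / HOURS_PER_DAY)     # ~91.7 days at 100 uA
print(battery_life_hours(220, 0.00192) / HOURS_PER_YEAR)  # ~13.1 years at 1.92 uA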
REFERENCES
[1] J. Beutel et al. Bluetooth smart nodes for mobile ad-hoc networks. Technical report, Swiss Federal Institute of Technology, Zurich, Switzerland, 2003.
[2] J. Kahn et al. Next century challenges: Mobile networking for smart dust. In MobiCom '99, pages 217-224, 1999.
[3] J. Lifton et al. Pushpin computing system overview: A platform for distributed, embedded, ubiquitous sensor networks. In International Conference on Pervasive Computing, 2002.
[4] S. Rhee et al. Strategies for reducing power consumption in low data-rate wireless sensor networks. Submitted to ACM HotNets 2003.

¹ Any number more than 10 years may be meaningless, since the battery shelf life itself may be less than the computed time.
212
A Rule-based I/O Control Device for Ubiquitous Computing
Tsutomu Terada, Masahiko Tsukamoto, Tomoki Yoshihisa, Yasue Kishino, Shojiro Nishio
Grad. School of Information Science and Technology, Osaka University

Keisuke Hayakawa, Atsushi Kashitani
Internet Systems Research Laboratories, NEC Corp.
ABSTRACT
In this paper, we describe a rule-based I/O control device for constructing ubiquitous computing environments, where we can acquire various services using embedded computers anytime and anywhere. The capability of our device is very limited; however, it has the flexibility to change its function dynamically by applying rule-based technologies to describe the behavior of the device. We design the behavior description language and develop a prototype of this device.

Keywords
I/O Control, ECA Rule, Active Database

INTRODUCTION
In this paper, we propose a new computing style using rule-based I/O control devices for the realization of ubiquitous computing environments, where we can acquire various services with embedded computers anytime and anywhere. In ubiquitous computing environments, the following three characteristics are required for computers:

(1) Autonomy: computers process automatically without human operations

In conventional active databases, database operations such as SELECT, INSERT, DELETE, and UPDATE are considered events and actions. Since ubiquitous computers may have little processing power and small memory, we simplify the language specification of the ECA rule while keeping the capability to fulfill various requirements in ubiquitous computing environments.

As shown in Figure 1, we suppose that various sensors and devices are connected to our device. The device evaluates inputs from these sensors and devices, and outputs information to the connected devices. With this assumption, we defined the events and actions as shown in Tables 1 and 2.

[Figure 1: the rule-based I/O control device connected to sensors and devices such as a button, an LED, a buzzer, and other computers.]
213
Table 3. Commands for the SEND_COMMAND action
Name          Contents
ADD_ECA       Adding a new ECA rule
DELETE_ECA    Deleting specific ECA rule(s)
REQUEST_ECA   Request for specific ECA rule(s)
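To make the rule-based behavior concrete, the sketch below shows one way an ECA (event-condition-action) rule set like the one managed through the ADD_ECA and DELETE_ECA commands could be represented and evaluated. The rule structure, field names and the example events are our own illustrative assumptions, not the device's actual behavior description language.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ECARule:
    event: str                          # e.g. "BUTTON_PRESSED"
    condition: Callable[[Dict], bool]   # predicate over the event payload
    action: Callable[[Dict], None]      # output to a connected device

rules: List[ECARule] = []

def add_eca(rule: ECARule) -> None:     # rough analogue of the ADD_ECA command
    rules.append(rule)

def dispatch(event: str, payload: Dict) -> None:
    """Evaluate every registered rule for this event and fire matching actions."""
    for rule in rules:
        if rule.event == event and rule.condition(payload):
            rule.action(payload)

# Example: sound the buzzer when the button is pressed late in the evening.
add_eca(ECARule("BUTTON_PRESSED",
                condition=lambda p: p.get("hour", 0) >= 22,
                action=lambda p: print("BUZZER_ON")))
dispatch("BUTTON_PRESSED", {"hour": 23})   # prints BUZZER_ON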
PROTOTYPE DEVICE
We developed a prototype of the rule-based I/O control device, as shown in Figure 3.

Figure 3. A Prototype of the Rule-based I/O Control Device

This device consists of two parts: one is the core-part (34 mm), which has a micro processor (PIC16F873); the other is the cloth-part (59 mm), which has a Li-ion battery and connectors for attaching sensors and devices. As shown in Figure 4, the core-part has 6 input ports (IN1-6), 12 output ports (OUT1a-6a, 1b-6b), 6 power-supply ports (VCC),

Figure 5. An Example of Connections

CONCLUSION
In this paper, we designed and developed the rule-based I/O control device for ubiquitous computing. Using our devices, we can construct a ubiquitous computing environment based on a rule-based architecture, as shown in Figure 6.

[Figure 6: a ubiquitous computing environment built from rule-based I/O control devices (UCs) that exchange events, conditions and actions with users, sensors, actuators and each other.]
214
Smart Things in a Smart Home
Elena Vildjiounaite, Esko-Juhani Malm, Jouni Kaartinen, Petteri Alahuhta
Technical Research Centre of Finland
Kaitovayla 1, Oulu, P.O.Box 1100, 90571 Finland
{Elena.Vildjiounaite, Esko-Juhani.Malm, Jouni.Kaartinen, Petteri.Alahuhta}@vtt.fi
215
context with the task requirements. For these tasks the symbolic context value is simply this decision. (E.g. for the task of finding the parts of a business suit, an object is "bad" if it is waiting in the bathroom to be washed; the decision is made based on location context. Similarly, a food product can be "bad" if it contains more fat than specified in the task requirements.) In the "journey" task the objects cannot decide whether they are "good" or "bad" at this stage, but send their movement type as the context value.

Fig. 2 Tasks of each smart object

The next step for each object is to compare the contexts of all group members. This results in the creation of a list of "bad" IDs (objects which are absent or fail to satisfy the task requirements) and the choice of a speaker (a decision on whether the item should send this information by radio itself or let another item send it). For the "journey" task, objects can decide which are "bad" (forgotten) with greater or lesser certainty, depending on the user's preferences. Objects are considered "bad" with a high degree of certainty in two cases: 1) after disappearing from the communication range of the other group members; 2) if the movement type of several other group members is "shaking" while they have a different movement pattern. Objects are considered "bad" with less certainty if they stay in the same place while the other group members are leaving. In this case false alarms are more probable, but both this and the detection by the "shaking" movement type help to identify missing objects before they pass out of communication range.
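A compact way to read the certainty rules above is as a small classification function. The sketch below is our own restatement of those rules with assumed field names and an assumed threshold of two "shaking" group members; it is not the Smart-Its code, and it ignores the radio protocol and the choice of speaker.

def classify(obj, group):
    """Return 'bad (high certainty)', 'bad (low certainty)' or 'ok' for one
    object, given the latest context reports of the other group members."""
    others = [m for m in group if m["id"] != obj["id"]]
    shaking_others = sum(m["movement"] == "shaking" for m in others)

    if not obj["in_range"]:                                   # rule 1: out of range
        return "bad (high certainty)"
    if shaking_others >= 2 and obj["movement"] != "shaking":  # rule 2: others shaken
        return "bad (high certainty)"
    if obj["movement"] == "still" and all(m["movement"] == "moving" for m in others):
        return "bad (low certainty)"                          # left behind while group leaves
    return "ok"

# Example: a wallet lying still while the other items are picked up and shaken.
group = [
    {"id": "wallet", "in_range": True, "movement": "still"},
    {"id": "keys",   "in_range": True, "movement": "shaking"},
    {"id": "phone",  "in_range": True, "movement": "shaking"},
]
print(classify(group[0], group))   # -> bad (high certainty)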
The energy awareness of the Smart-Its is based on the fact that all boards run an identical program and that the battery status is affected mostly by the number of temporal sets in which the object has taken part and the number of messages sent. The choice of a speaker (Fig. 2) means that objects' conclusions (result_data messages) are sent by the object with the freshest battery, either according to timing requirements specified in the task description or upon shaking of the objects. Each object decides for itself whether it should send a result_data message or not. If some objects are out of the communication range of other objects, they also send a result_data message with their own conclusions, and the central node should summarise the received messages.

Since the system is intended to be deployed everywhere in an ad-hoc manner, and since the computing resources of smart objects are very limited, the system includes certain interaction capabilities [3] in order to help the user resolve ambiguous situations.

1. The items know of special situations which increase the certainty of context detection and help to choose the moment for receiving the system's opinion. For physical objects, one such situation is shaking, and it is very easy to distinguish items which are simultaneously shaken hard [2]. Shaking helps to give an alarm at the right moment, much earlier than if the system were to wait until one or more objects disappeared from the communication range. Further, the system would normally be silent when nothing is missing, but it sends an "OK" message upon shaking.

2. The system includes explanation capabilities intended to correct both the user's mistakes and its own. Objects send explanation data either upon request from the central node or upon shaking. Possible sources of mistakes are a change in the usual contents of a container or the effect of reflection on the beacons' communication range. It is sometimes necessary to move a beacon half a metre to tune the system, but the user needs to know which beacon's data caused the error in the system. This option also facilitates the addition of new objects to a database.

3. The system allows the user to add or remove items from a group at any moment. This helps, e.g., to deal with objects left somewhere intentionally or bought after the user has left home.

CONCLUSIONS
The group work of smart objects with limited computing resources was implemented by enabling each object to make conclusions about the joint context of the group without comparing them with the opinions of other group members. According to our tests, members' opinions differ mainly when the situation changes (e.g. items start or stop moving); however, adding the ability to analyse the conclusions of other group members could be useful. Group work by smart objects reduces the workload of a central node, which is important if there are many objects and many central nodes and all of them have limited resources.

REFERENCES
1. M. Beigl, H. Gellersen. Smart-Its: An embedded platform for Smart Objects. Smart Objects Conference 2003, Grenoble, France.
2. Holmquist, L.E., Mattern, F., Schiele, B., Alahuhta, P., Beigl, M., Gellersen, H.-W.: Smart-Its Friends: A Technique for Users to Easily Establish Connections between Smart Artefacts. Ubicomp 2001.
3. Vildjiounaite, E., Malm, E.-J., Kaartinen, J., Alahuhta, P.: A Collective of Smart Artefacts Hopes for Collaboration with the Owner. HCII 2003.
216
Resource Management for Particle-Computers
Tobias Zimmer, Frank Binder, Michael Beigl, Christian Decker and Albert Krohn
Telecooperation Office (TecO) University of Karlsruhe
Vincenz-Priessnitz-Strasse 1, 76131 Karlsruhe, Germany
http://www.teco.edu
{zimmer,binder,michael,cdecker,krohn}@teco.edu
217
memory required depends on the number of tasks in all task-sets and the maximum number of instances of these tasks. It can be computed as
(96 + MaxNumberOfInstances * 4 + NumberOfTasks * 4) bytes.
This is feasible, as the Particle-Computers are equipped with 32 Kbytes of program memory and 1536 bytes of data memory, leaving enough resources for user applications; e.g. a typical test configuration we used, containing 4 real-time tasks and a background task, needs about 132 bytes of data memory.
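The formula above can be transcribed directly; the example parameters below (5 tasks including the background task, at most 4 task instances) are our own guess at the test configuration, chosen because they reproduce the quoted figure of about 132 bytes.

def prms_data_memory_bytes(max_instances: int, num_tasks: int) -> int:
    """Data memory needed by the P-RMS bookkeeping, per the formula above."""
    return 96 + max_instances * 4 + num_tasks * 4

print(prms_data_memory_bytes(max_instances=4, num_tasks=5))   # -> 132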
The P-RMS runtime environment provides various functionalities to the applications running on the Particle-Computers. This includes the management of periodic tasks by setting the period length and ensuring that they are started periodically. Furthermore, the runtime environment creates sporadic tasks based on input events and sets their starting times. The runtime environment also includes the service routines for switching between the different predefined task-sets.

Scheduler
The scheduler is responsible for assigning the processor and the other allocated system resources to activated tasks in the order of their priority, and for running the background task when the processor is not assigned to a real-time task. Priorities of the real-time tasks are assigned following the EDF (earliest deadline first) scheduling strategy. The P-RMS scheduler works very efficiently because the schedulability tests are performed at development time of the application software. This guarantees that only schedulable task-sets are contained in any given application.
from 299 cycles to 26 cycles at the expense of some loss in
resource2
resource1 comfort in reading the current time and data. Another im-
processor provement we already identified for future implementation
0 1 2 3 4 5 6 7 8 9
time is the introduction of a hierarchical ordering of the re-
task2 sources. This will simplify the reservation of compound
resource2 resources.
resource1
processor REFERENCES
0 1 2 3 4 5 6 7 8 9
time 1. Michael Beigl, Tobias Zimmer, Albert Krohn, Christian
Independent resource allocation
Decker, and Philip Robinson. Smart-its - communica-
tion and sensing technology for ubicomp environments.
task1 Technical Report ISSN 1432-7864 2003/2, April 2003.
resource2 2. The Smart-Its Project. http://smart-its.teco.edu. 2003.
resource1
processor 3. Q. Zeng and K. G. Shin. On the ability of establishing
0 1 2 3 4 5 6 7 8 9
time real-time channels in point-to-point packed-switched
task2 networks. IEEE Transactions on Communications, vol.
resource2 42(2/3/4) :1096-1105, February/March/April 1994.
resource1
processor 4. Frank Binder. Ressourcenverwaltungssystem für Partic-
0 1 2 3 4 5 6 7 8 9
time
le-Computer. Master thesis at the TecO, University of
Collective resource allocation Karlsruhe. May 2003.
Figure 2: Assignment of resources performing independ- 5. Ron Jeffries, Ann Anderson and Chet Hendrickson.
ent or collective allocation Extreme Programming Installed. Addison Wesley.
Resource assignment in general can be performed inde- ISBN 0-201-708-426 October 2000.
pendent for each available resource or collective for all
218
Using a POMDP Controller to Guide Persons With
Dementia Through Activities of Daily Living
Jennifer Boger and Geoff Fernie
Centre for Studies in Aging
2075 Bayview Ave., Toronto, Canada, M4N 3M5
+1 416 480 5858
[email protected]

Pascal Poupart
Dept. of Computer Science, University of Toronto
10 King's College Rd., Toronto, Canada, M5S 3G4
[email protected]

Alex Mihailidis
Simon Fraser University
2628-515 West Hastings St., Vancouver, Canada V6B 5K3
[email protected]
ABSTRACT
Researchers at the Centre for Studies in Aging and at Simon Fraser University are developing ubiquitous assistive technology to aid persons with dementia in completing routine activities. To ensure that the system is useful, effective, and safe, it must be able to adapt to the user and guide him/her in an environment that may not be fully observable. This paper discusses the merits of using partially observable Markov decision process (POMDP) algorithms to model this problem, as POMDPs are able to provide robust and autonomous control under conditions of uncertainty. A POMDP controller is being designed for the current prototype, which guides the user through the activity of handwashing.

Keywords
POMDP, dementia, Alzheimer disease, ADL, assistive technology, cognitive orthosis.

INTRODUCTION
It is estimated that 1 in 3 people over the age of 85 has dementia, with Alzheimer disease (AD) accounting for 60-70% of cases. The number of Americans with AD is estimated at 2.3 million and is expected to reach 14 million by 2050 if present trends continue [1,2]. At the onset of dementia, a family member will often assume the role of caregiver. However, as dementia worsens, the caregiver will experience greater feelings of burden, which frequently result in the care recipient being placed in a long term care facility. A solution to relieve some of the financial and physical burden placed upon caregivers and health care facilities is a ubiquitous, autonomous system that will allow aging in place by improving the quality of life for both the care recipient and their caregiver.

People with advanced dementia may have difficulty completing even simple activities of daily living (ADL) and require assistance from a caregiver to guide them through the steps needed to complete an activity. Examples of ADL are handwashing, dressing, and toileting. While there have been several cognitive aids designed to assist ADL completion, all of them require explicit feedback from the user, such as a button press, to indicate that a step has been completed. This makes them unsuitable for persons with moderate-to-severe dementia, as this group does not possess the capacity to learn the required interactions.

OBJECTIVE
Our objective is to design a more robust control system by using partially observable Markov decision process (POMDP) algorithms to model the activity of handwashing. We anticipate that using POMDPs will enable the device to guide users more effectively and offer a model that can be readily expanded to more complex activities.

APPROACH
The current prototype, dubbed COACH, uses colour-based tracking software to follow the user's hand position through a camera mounted over the sink as the user performs the ADL of handwashing. Figure 1 depicts six steps of handwashing and the various alternative pathways by which the user could correctly wash their hands. Our first artificially intelligent (AI) agent employed neural networks to associate hand position with corresponding steps, and a simple vector search through the taxonomy

[Figure 1: Acceptable sequences of steps required to complete the ADL of handwashing (activity started; use soap / turn on water; wet hands; rinse hands; turn off water / dry hands; activity finished). Note wetting hands is considered optional in the prototype as liquid soap is used.]
219
constructed from Figure 1. This identified what step in the ADL the user was attempting to complete and whether the user's actions were correct. If the user seemed unsure of the next step or attempted an inappropriate action, COACH played an audio cue to guide the user to the next appropriate step. If the user did not respond to prompts, the caregiver was called to intervene. COACH has been tested through clinical trials involving AD inpatients in a retrofitted washroom at Sunnybrook and Women's College Hospital's long term care facility [4]. It was found to significantly decrease the number of caregiver interventions, by about 75%.

The current prototype assumes full observability of its washroom environment. This simplification does not account for the inherent uncertainty in step identification introduced through factors such as instrumentation noise and obscured views. Upgrading the AI agent to a POMDP-based controller provides a solution to this problem by directly modeling the uncertainty. The incomplete and noisy information provided by the tracking system is translated into a probability distribution over the possible conditions of the user and the washroom environment. This distribution is continuously updated to reflect observations made by the tracking system as time progresses. By combining this distribution with a stochastic model of the user's future behaviour and a cost function measuring the consequences of playing various prompts, the POMDP agent is able to optimize the choice and timing of prompts despite uncertainty. Following the principles of utility theory, the agent selects the course of action that minimizes expected future cost based on the estimate of the user's status (modeled as a probability distribution). Please see [3] for a more detailed review of POMDPs. The ability of the COACH system to make good decisions under uncertainty is especially crucial for complex ADLs, such as toileting, where observations will likely be limited and the costs of poor control are high.

[Figure 2: Interaction of modules that constitute the COACH controller.
Image Analysis - location of user and hand position; location of task-specific objects.
Task Identification - determines which step the user is attempting.
Planning - decides if the user is attempting an appropriate task; which task the user should be prompted to attempt.
Action - whether or not a cue should be played; level of detail of the cue.]
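The continuously updated distribution described above is the standard POMDP belief state; a minimal sketch of one update step is shown below. The transition and observation models here are toy placeholders of our own, not COACH's actual state, observation and cost models.

import numpy as np

def update_belief(belief, action, observation, T, O):
    """One POMDP belief update.
    belief: P(s) over hidden states; T[action][s, s']: transition probabilities;
    O[action][s', o]: observation probabilities. Returns P(s' | action, observation)."""
    predicted = belief @ T[action]                      # predict: sum_s P(s) T(s'|s,a)
    weighted = predicted * O[action][:, observation]    # weight by P(o | s', a)
    return weighted / weighted.sum()                    # normalise

# Toy example with two hidden states and two possible observations.
T = {"prompt": np.array([[0.7, 0.3], [0.2, 0.8]])}
O = {"prompt": np.array([[0.9, 0.1], [0.3, 0.7]])}
belief = np.array([0.5, 0.5])
print(update_belief(belief, "prompt", observation=1, T=T, O=O))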
DESIGN AND BENEFITS
A POMDP model is being constructed to guide a user through the ADL of handwashing. COACH is separated into four modules, as can be seen in Figure 2. POMDP algorithms will create a central controller that encompasses the step identification, planning, and action modules. A great advantage of using a POMDP model is that it eliminates the requirement of explicit user feedback, such as a button press, because the agent autonomously estimates when a step has been completed through observation of the activity. Another challenging aspect of this research is to design an effective method of determining user preferences, as there is no user feedback. POMDPs provide an excellent solution to this difficulty by obtaining and incorporating user preferences autonomously. For example, by keeping track of which cues have been observed to be the most effective in the past, the system can not only tailor itself to the user, but also be sensitive to the changes in user performance that accompany the progressive nature of AD, and accommodate them accordingly. The self-tailoring ability of a POMDP controller also eliminates the need for extensive interaction with the caregiver, making this technology user friendly.

SIGNIFICANCE AND OUTCOMES
Results from this research are applicable to the development of ubiquitous intelligent monitoring and prompting for all people with cognitive limitations, including those with traumatic brain injuries, learning disabilities, and Alzheimer's disease. Successful application of POMDPs to the COACH handwashing problem would represent one of the most advanced applications of this technology.

ACKNOWLEDGMENTS
This research has been funded in part by the Alzheimer Society of Canada.

REFERENCES
1. Alzheimer's Association. Caregiver network helps temper significant hardship in labouring for Alzheimer's relatives. Primary Psychiatry, 3, (1996), 92-94.
2. Cummings, J., and Cole, G. Alzheimer Disease. Journal of the American Medical Association, 287, 18 (May 2002), 2335-2338.
3. Kaelbling, L., Littman, M., and Cassandra, A. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101, (1998), 99-134.
4. Mihailidis, A., Barbenel, J., and Fernie, G. The use of artificial intelligence in the design of an intelligent cognitive orthosis for people with dementia. Assistive Technology, 13, (2001), 23-39.
220
The Chatty Environment – A World Explorer for the
Visually Impaired
Vlad Coroama
Institute for Pervasive Computing
Swiss Federal Institute of Technology (ETH) Zurich
8092 Zurich, Switzerland
+41 1 63 26087
[email protected]
221
environment object has been presented to her by the device, the user is capable of selecting this object. The user is then presented with a standardized audio interface to the object. In the current implementation, the interface consists of four options:

Information
By choosing this option, the user receives further information about the chosen entity. This information is highly dependent on what kind of object was selected. For a supermarket product, the information could for example be: "producer", "ingredients list", and "expiration date". For a train, the information might be: "final destination", "departure time", "next stop", and "list of all stops". Some of these points may in turn provide further details. "Ingredients" may have as subitems "vegetarian (yes/no)", "organically produced (yes/no)", and "display complete ingredients list".

Actions
Some of the objects in our chatty environment will allow the user to take some action on them. One example is a train or bus allowing the user to open its nearest door. This is a well-known problem for visually impaired people, for whom it is easy to miss a bus or train because they are unable to find its doors during its brief stop at the station.

Leave traces
The user can also decide to leave virtual post-its for herself or other users on an object. These will typically be audio files reminding her of something that she noticed the last time passing by. On a traffic light, for example, one could leave the information: "Big crossroad ahead, must be crossed very quickly". Information left like this would be automatically pushed onto the user's device the next time she passes this object. Our current prototype features only two options for leaving or hearing a message: leaving messages just for oneself or for anybody else, and hearing just personal messages or hearing everybody's messages. This approach obviously needs to be refined in future versions of the system.

Take me there
By choosing this option, the user is guided to the currently described entity, e.g., to an item on a sign.

Virtual Information Boards
Sighted people orient themselves in a new and unknown environment not only by the objects they are able to see. They also learn about distant or hidden objects through signs. By mapping visual signs to audio signs for the visually impaired, they can learn about objects not only in their immediate neighborhood, but also further away. To realise this goal, signs in our chatty environment are enhanced by the same beacons used by all other objects. But instead of revealing themselves to the user, a sign tells

We are currently working on integrating a navigation feature using a locally developed location system. The system relies on the signal strength of WLAN 802.11, Bluetooth and active RFID tags.

User Input
Currently, the user can only interact with the system by listening to the list of nearby objects (with support for skipping back and forth) and then choosing one of the four options described above. Future versions should also allow the user to actively search for an environment entity, either using Braille or voice input. For example, it should be possible to find a pharmacy, even if it is neither in the immediate neighbourhood nor on a virtual signboard.

Communication Issues
There is a huge amount of data to be transferred from the environment objects to the user device. Since the tags are typically small devices with limited resources, only the object identity, some basic information and a hyperlink are stored on the object itself. By following that link through the device's Bluetooth or WLAN 802.11 network interface, arbitrary additional information can be gathered from the wide-area computing infrastructure. Note that in case of intermittent connectivity, the world explorer's text-to-speech engine can still render the human-readable object identity stored directly on the tag (this could be aided by a dictionary in foreign-language environments).
or other users on an object. These will typically be audio Information Filtering and Selection
files reminding her of something that she noticed the last A challenging issue is choosing which information should
time passing by. On a traffic light, for example, one could be presented to the user. For example, when entering a shop
leave the information: “Big crossroad ahead, must be the third time, a user might not want to receive the same
crossed very quickly”. Information left like this would be information again. A similar problem arises when the user
automatically pushed onto the user’s device the next time enters an area with so much information that it cannot be
she would pass this object again. presented in a timely fashion. These issues of information
Our current prototype features only two options for leaving filtering and selection are currently under investigation and
or hearing a message: leaving messages just for oneself or will be addressed in future prototypes.
for anybody else, and hearing just personal messages or ACKNOWLEDGEMENTS
hearing everybody’s messages. This approach obviously
needs to be refined in future versions of the systems. Jürgen Bohn has contributed many ideas in early stages of
the “Chatty Environment” project, while Jürgen Müller
Take me there provided many helpful pointers regarding the daily routine
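As a rough illustration of the communication pattern just described, the following Java sketch reads the identity, basic information and hyperlink assumed to be stored on a tag, fetches the full description from the wide-area infrastructure when a connection is available, and otherwise falls back to speaking only what is on the tag. All class and method names are illustrative and not the project's actual API.

    // Illustrative sketch of the tag-dereferencing pattern described above.
    // Tag, the speech output and the network check are hypothetical stand-ins.
    public class ObjectDescriber {

        static class Tag {                 // data assumed to be stored on the beacon itself
            String identity;               // human-readable object identity
            String basicInfo;              // a few bytes of essential information
            String hyperlink;              // URL pointing into the wide-area infrastructure
            Tag(String id, String info, String link) {
                identity = id; basicInfo = info; hyperlink = link;
            }
        }

        static String fetch(String url) throws java.io.IOException {
            // Follow the hyperlink over WLAN/Bluetooth to get the full description.
            java.io.InputStream in = new java.net.URL(url).openStream();
            java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
            int b;
            while ((b = in.read()) != -1) out.write(b);
            in.close();
            return out.toString();
        }

        static void describe(Tag tag, boolean connected) {
            String text;
            if (connected) {
                try {
                    text = fetch(tag.hyperlink);
                } catch (java.io.IOException e) {
                    text = tag.identity + ": " + tag.basicInfo;  // degraded but usable
                }
            } else {
                // Intermittent connectivity: speak only what is stored on the tag.
                text = tag.identity + ": " + tag.basicInfo;
            }
            System.out.println("[text-to-speech] " + text);     // stand-in for the speech engine
        }

        public static void main(String[] args) {
            Tag door = new Tag("tram door", "line 6 to the main station",
                               "http://example.org/objects/tram-door-42");
            describe(door, false);  // offline: falls back to the on-tag identity
        }
    }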
Information Filtering and Selection
A challenging issue is choosing which information should be presented to the user. For example, when entering a shop for the third time, a user might not want to receive the same information again. A similar problem arises when the user enters an area with so much information that it cannot be presented in a timely fashion. These issues of information filtering and selection are currently under investigation and will be addressed in future prototypes.
ACKNOWLEDGEMENTS
Jürgen Bohn has contributed many ideas in early stages of the “Chatty Environment” project, while Jürgen Müller provided many helpful pointers regarding the daily routine of the visually impaired.
This work has been funded by the Gottlieb Daimler- and Karl Benz-Foundation, as part of the “Living in a Smart Environment – Implications of Ubiquitous Computing” project.
REFERENCES
1. Araya, A.A. Questioning ubiquitous computing. Proceedings of the 1995 ACM 23rd annual conference on Computer science, 1995, 230-237.
2. Weiser, M. The Computer for the 21st Century. Scientific American, 265(3), September 1991, 94-104.
3. Berkeley Motes. http://webs.cs.berkeley.edu/tos/.
4. The Smart-Its Project. http://www.smart-its.org/
Support for Nomadic Science Learning
Sherry Hsi, Robert J. Semper
Center for Learning and Teaching
The Exploratorium
San Francisco, CA 94123 USA
+1 415 674 2809
[email protected], [email protected]

Mirjana Spasojevic
Mobile and Media Systems Lab
Hewlett-Packard Labs
Palo Alto, CA 94304 USA
+1 650 857 8655
[email protected]
teachers charged with training and coaching other teachers in their school districts. While this audience has some unique goals, we believe that many of their needs could be addressed through tools developed for helping Explainers, as well as tools for capturing and extending museum visits.
DESIGN CHALLENGES
In the process of creating usage scenarios and prototypes, we identified several design challenges:
Data-driven versus inquiry-driven – One design tension is what type of learning to support: learning that can occur because of rich media delivery or learner-centered inquiry that is supported by careful prompting. Because volumes of online science content exist, a tendency in the design process is to focus on data-driven models of learning rather than providing guidance for learner-driven inquiry, group gaming activities, or collaborative learning. We plan to address this issue by conducting studies that compare different instructional designs with Explainers.
Complex environment – The Exploratorium typically has several hundred exhibits about science, art, and perception. Many of the exhibits are noisy and involve sand, water, electricity, magnetism, heat, or soap. The exhibits are frequently relocated within the museum as part of a continual prototyping process. Some exhibits involve observation or one-handed manipulation to move a knob or lever, while others involve two-handed manipulation or whole body interaction. Visitors often complain they are overwhelmed by the many choices, activities, and noise. Introducing nomadic computing technologies into this environment requires deliberate design that doesn’t contribute to the complexity of the environment and improves the user experience.
Addressing multiple stakeholder interests – Collecting stakeholders’ viewpoints is critical to identifying key design issues in the scenarios. Stakeholders include the end users, museum staff, designers, technologists, industry partners, and others. Listening to stakeholders enables us to understand the barriers to adopting nomadic computing tools in the museum. Adoption of a particular solution by the end user will only happen if we fully understand the existing context in which the technology is being introduced.
Addressing these design challenges requires the development of tools that go well beyond the existing research on mobile guides [1,5]. We are building on our prior work which has established the feasibility of the basic components and the infrastructure. Over a hundred users have already tested the prototypes, helping us understand device form factors, interfaces and usability issues for various audiences [2].
ACKNOWLEDGMENTS
We thank HP Labs and the I-Guides research group at the Exploratorium, especially Steve Kearsley for his artistry. I-Guides is supported by the National Science Foundation under Grant No. 02056654. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the NSF.
REFERENCES
1. Cheverst, K., et al. Developing a Context-aware Electronic Tourist Guide: Some Issues and Experiences. In Proceedings of CHI 2000, pp. 17–24.
2. Fleck, M., et al. From Informing to Remembering: Ubiquitous Systems in Interactive Museums. IEEE Pervasive Computing, April–June 2002, v. 1, no. 2, pp. 13-21.
3. Hsi, S. The Electronic Guidebook: A Study of User Experiences Mediated by Nomadic Web Content in a Museum Setting. Journal of Computer-Assisted Learning, September 2003, Vol. 19, No. 3.
4. Tinker, R. & Krajcik, J.S., eds. Portable Technologies: Science Learning in Context. Netherlands: Kluwer Publishers, 2001.
5. Woodruff, A., et al. Electronic Guidebooks and Visitor Attention. In Proceedings of the International Cultural Heritage Informatics Meeting, 2001, pp. 437–45.
Figure 1: Sample scenario of informal science learning: general museum visitor with a smart watch
Development of an Augmented Ring Binder
Magnus Ingmarsson, Mikael Isaksson and Mats Ekberg
Department of Computer and Information Science
Linköping University
SE-581 83 Linköping, Sweden
{magin, x02mikis, x02matek}@ida.liu.se
ABSTRACT
The era of ubiquitous computing gives rise to a variety of new technology. We have developed an augmented binder that supports document handling and workflow. This binder can provide automatic tracking of document flow, linking physical and virtual documents.
We needed our binder to fulfill a few basic functional requirements. The most essential was that the binder should be able to detect insertions and removals of documents. Also, many applications will require some kind of alarm when certain conditions are fulfilled. For example, important documents could be marked as such, so that the binder may warn if they are missing. All these requirements had to be accommodated while keeping the restrictions of weight, space and battery time in mind.
Keywords
Ubiquitous computing, Collaborative work, Distributed Cognition, Document handling, Office application, Workflow, TINI, Bluetooth, RFID
Figure 1: Front of binder with pushbuttons and the display visible.
INTRODUCTION
As shown by Luff et al. [1], artifacts play a crucial supporting role in today’s collaborative workplaces. For example, Bång [2] points out that clinicians depend heavily on patient folders in their daily work. In an office, a lot of activity is centered on documents that are in binders. Therefore, by augmenting the binders with ubiquitous computing technology, it is our hope that the work can be made more efficient.
Target audience
Because binders are used in so many different contexts it is difficult to identify the typical user. However, it is our conviction that most of the people that use binders should benefit from this. We have therefore concentrated our work on the common denominators we have found. Among those is the ability of the binder to detect insertions and removals, as well as registering the history of such actions. On this basic functionality, it is then possible to build more complex and customized software for specific applications.
Uses
The use of patient folders in a medical setting is an area that may benefit from our approach even at the current cost levels. An augmented binder may warn healthcare workers if any documents are currently missing. If new data for the patient is available that has not yet been printed on paper, the binder may say so. Thus, the clinicians can avoid making decisions on incomplete information, thereby reducing the risk of mistakes.
TECHNOLOGY
Our goal was to construct a wireless and portable device, small and light enough to integrate with a ring binder. As we shall see, this turned out to be a very general task. The resulting design should be useful in many similar circumstances.
The identified technical requirements, as obtained from the functional ones, included:
• Internet capability to enable access to external information, such as a central document server.
• A small display fixed to the front of the binder to provide information and feedback to the user.
• A user interface no more complex than a number of pushbuttons.
• An RFID reader capable of reading multiple tags inside the binder.
• Readily available tools for easy software development and prototyping.
• A battery operating time of a couple of hours.
CPU
Early on we concluded that one of the several available micro Java platforms [3] would readily satisfy about half of our requirements. We eventually chose the TINI platform, mostly because of its small form factor and low power requirements, but also because it is a mature product with a large user base. The TINI runs Java programs; however, one should keep in mind that the TINI is limited to a subset of the JDK 1.1 specification. This is normally no problem when developing from scratch, but may cause significant rewrites when porting existing applications.
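The essential detection step can be pictured with the following sketch: poll the RFID reader and diff the current set of tag IDs against the previous scan to report insertions and removals. The reader interface is hypothetical; the sketch sticks to JDK 1.1-era classes (Vector, anonymous inner classes) since the TINI only supports a subset of JDK 1.1.

    import java.util.Vector;

    // Sketch of insertion/removal detection by diffing successive RFID scans.
    // RfidReader is a hypothetical interface standing in for the real reader driver.
    public class BinderMonitor {

        interface RfidReader {
            Vector scanTagIds();   // returns the tag IDs currently visible in the binder
        }

        private Vector lastSeen = new Vector();

        void poll(RfidReader reader) {
            Vector current = reader.scanTagIds();
            for (int i = 0; i < current.size(); i++) {          // new tags = insertions
                Object id = current.elementAt(i);
                if (!lastSeen.contains(id)) System.out.println("Document inserted: " + id);
            }
            for (int i = 0; i < lastSeen.size(); i++) {         // missing tags = removals
                Object id = lastSeen.elementAt(i);
                if (!current.contains(id)) System.out.println("Document removed: " + id);
            }
            lastSeen = current;
        }

        public static void main(String[] args) {
            BinderMonitor monitor = new BinderMonitor();
            // Two simulated scans standing in for the real reader.
            final Vector first = new Vector();  first.addElement("tag-17"); first.addElement("tag-42");
            final Vector second = new Vector(); second.addElement("tag-42"); second.addElement("tag-99");
            monitor.poll(new RfidReader() { public Vector scanTagIds() { return first; } });
            monitor.poll(new RfidReader() { public Vector scanTagIds() { return second; } });
        }
    }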
Wireless
The wireless property was a problem from the start. Our options seemed limited. Many of the common solutions (Wi-Fi, Bluetooth) were unavailable to us because they require interfaces the TINI doesn't have, such as USB or PC-card. We eventually found a product, Blue2Link, which essentially is a virtual ethernet cable over Bluetooth.
Software
None of the hardware came with suitable drivers. Luckily, most of the protocols involved turned out to be very simple, but the software writing still took a good part of our 6-month project. We eventually produced about 10000 lines of Java code and 70 classes for the binder and supporting software (a simple document server and a PC binder management GUI).
Figure 2: Hardware as mounted in the binder. 1. TINI (viewed from side), 2. TINI experimental platform, 3. Blue2Link, 4. Display, 5. RFID connector adapter, 6. RFID reader.
RFID Reader
We expected the selection of an RFID reader to be a straightforward task, but it turned out to be not quite that simple. Especially the capability to read multiple tags still seems to be unstable. Eventually we ended up with the Feig MR100 reader, which works very well, but turned out to be more expensive and power hungry than we expected from our first quick look at the options available to us.
Display
There are many options available in this area. We were however limited somewhat by the low speed of our chosen CPU. Therefore, we restricted our search to displays with built-in memory and processing capability, for example character plotting and line drawing. This increases the cost of the display, but the extra cost is accompanied by a corresponding gain in responsiveness as well as application programmer productivity because of the supplied high-level API. We chose a GLC24064 from the US company Matrix Orbital. With a display area of 132 x 39 mm and a resolution of 240 x 64 pixels, it is among the largest such displays available.
Battery
A standard accumulator pack of 1500 mAh provides about 10 hours of operating time, more than enough for a prototype such as ours. All hardware runs on the 12 V accumulator, except the display, which requires 5 V. Since the display has very low power usage, we have simply given it its own power supply consisting of regular battery cells.
Cost
The total cost of the hardware in the project is about €1000. The largest individual costs were the wireless ethernet devices at €400 for a pair, the RFID reader at €270 and the LCD display at €230.
FUTURE RESEARCH
We have so far identified four plausible directions to pursue:
• Location and tracking. The system can be enhanced with online tracking and location of documents. This approach could for instance be used to compare supposed workflow to actual workflow.
• Linking physical and virtual documents. By adding links between physical and virtual documents one can for instance obtain easy access to documents in the computer. For example, when a physical document is removed from a binder, the same document could be opened in its virtual form on the computer, removing the need for a possibly laborious manual search.
• Usability studies / UI design. The current prototype does not emphasize usability studies or UI design. This is an important aspect to consider since we want the usage of the folder to be kept as simple and streamlined as possible.
• Version tracking. The user can immediately know if the document they have in their hand is the latest version. This is useful in any situation where people collaborate on a set of documents, for example a patient folder in a hospital setting.
SUMMARY
We have built a prototype wireless document handling aid in the form of a binder, using off-the-shelf products. The result has many promising areas of use. However, any specific application would require significant customization of software.
ACKNOWLEDGEMENT
We want to extend special thanks to Vinnova, the Swedish Agency for Innovation Systems, for the grant, P22459-1A, given to the Department of Computer and Information Science to make this project a reality.
REFERENCES
1. Luff P., Heath C. and Greatbatch D., 1992. Tasks-in-interaction: paper and screen based documentation in collaborative activity. In Proceedings of CSCW'92, New York: ACM Press, 163-170.
2. Bång M., Berglund E. and Larsson A., 2002. A Paper-Based Ubiquitous Computing Healthcare Environment. In Adjunct Proceedings of Ubicomp 2002, Göteborg: Teknologtryck, 3-4.
3. Andersson O. and Olsson P-O., 2002. Java in Embedded Systems, Master Thesis, Linköping University, Linköping: Unitryck.
Meaningful Traces: Augmenting Children’s Drawings with
Digital Media
Nassim Jafarinaimi, Diane Gromala, Jay David Bolter, and David VanArsdale
School of Literature, Communication, & Culture, Georgia Institute of Technology
Industrial Design Program, College of Architecture, Georgia Institute of Technology
[email protected], {diane.gromala, [email protected]
[email protected]}
interest in having a digital copy as a back-up although they
believe that these copies will not replace the actual physical
drawings on paper. However, they believe the digital
copies can replace some of the ones which are less
important to them. Seven out of ten parents currently
annotate the drawings mainly with dates, and four wished
they had time to do so. Four out of ten write descriptions
and stories down on some of the pieces.
Consequently, the goals of Meaningful Traces are 1) semi-
automatic capture which does not require parents’
involvement and time 2) portability and ease of use in
different positions by the child.
PROPOSED PROTOTYPE
The Meaningful Traces tablet is specifically designed for
the child to draw on. It is equipped with sensors on its
surface to record the pen strokes, a detachable sheet feed
scanner, a tagging system, a built-in microphone, and a
limited memory to store a number of records.
Figure 1: The drawing tablet
What is captured?
Every record in the digital archive consists of: 1) the scanned copy of the drawing, 2) the date, 3) the ID tag, 4) the child’s audio description, 5) “traces”, which refer to the process of the piece’s creation (they take the form of snapshots of the work in progress or an animation of how the work was created), and 6) parents’ text annotations.
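A record with these six elements might be represented roughly as follows; the field and class names are illustrative only, not the actual design.

    import java.util.Date;
    import java.util.Vector;

    // Illustrative record structure for one archived drawing, mirroring the six
    // elements listed above. All names are hypothetical.
    public class DrawingRecord {
        byte[] scannedImage;        // 1) scanned copy of the finished drawing
        Date   created;             // 2) date
        int    tagNumber;           // 3) ID tag printed on the back of the paper
        byte[] audioDescription;    // 4) child's audio description
        Vector traceSnapshots;      // 5) "traces": snapshots of the work in progress
        String parentAnnotation;    // 6) parents' text annotation, added later

        DrawingRecord(int tagNumber) {
            this.tagNumber = tagNumber;
            this.created = new Date();
            this.traceSnapshots = new Vector();
        }

        public static void main(String[] args) {
            DrawingRecord record = new DrawingRecord(42);
            record.parentAnnotation = "Drawn after the zoo visit";
            System.out.println("Record " + record.tagNumber + " created " + record.created);
        }
    }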
Scenario of Use
When the child starts drawing1, the device senses the activity and the built-in sensors begin recording the pen strokes (traces). Once finished drawing, the child initiates scanning by pressing two buttons on the top of the device (Figure 1). The device also attaches a tag (a number) to the back of the paper at this step. Audio recording can be initiated at any time during or after the time of drawing. The system automatically attaches the audio to the most recent record (records may be modified later). The records are downloaded to a computer to be viewed, annotated and modified in an interface specifically designed for this purpose. Viewers can also search for keywords in the annotations, print the drawings, or email them. They can input the tag number to retrieve the media related to a drawing on paper. The tablet can also be used to scan and input drawings that have not been created on it.
1 Almost any utensil can be used. Traces may not be recorded for mediums such as water color which apply very low pressure. Finger paints may also be used, but to capture finger paint another set of (heat-sensitive) sensors is also required. The scanner gap prevents wet paper from being smudged and can input thicker paper and collages.
Challenges and Next Steps
The current design of the physical prototype can only input standard-sized paper (8”x11”) and does not accommodate children’s tendency to draw on larger sizes. The device should be light and child-proof: i.e., it should be safe, should not break if food or drink is spilled on it, or if it falls to the ground.
The user study will be extended and children’s behavior concerning drawing activity will be studied. A non-functional prototype of the tablet will be tested with children. The results will be used to revise the design. Later, a functioning prototype will be developed and tested.
CONCLUSION
Meaningful Traces can be used to keep a record of a child’s artistic development. Parents can use it to archive and preserve their children’s creations as reminiscences of their development. This preliminary research only addresses the needs and requirements of parents as the end users. However, psychologists, art therapists, social workers, and teachers can all potentially benefit from the proposed tool.
REFERENCES
1. Cox, M., The Child's Point of View. New York, NY, The Guilford Press, 1991.
2. Stevens, M., Vollmer, F. and Abowd, G. D., “The Living Memory Box: Form, Function and User Centered Design,” in Extended Abstracts of CHI 2002, Minneapolis, MN, pp. 668-669.
3. Goodnow, J., Children Drawing. Cambridge, MA, Harvard University Press, 1977.
4. Dourish, P., Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA, The MIT Press, 2001.
5. Stifelman, L., Arons, B., and Schmandt, C., “The Audio Notebook: Paper and Pen Interaction with Structured Speech,” in Proceedings of CHI 2001, Seattle, WA, pp. 182-189.
6. Johnson, W., Rao, R., Hellinek, H., Klotz, L., Card, S., “Bridging the Paper and Electronic Worlds: Paper as a User Interface,” in Proceedings of INTERCHI 1993, Amsterdam, The Netherlands, pp. 24-19.
The Junk Mail to Spam Converter
Michael Weller, Mark D. Gross, Jim Nicholls and Ellen Yi-Luen Do
Design Machine Group / Department of Architecture / University of Washington
Box 355720
Seattle, WA 98195 USA
+1 206 543 1604
{philetus, mdgross, jnicholl, ellendo}@u.washington.edu
http://dmg.caup.washington.edu
the handy board’s analog sensor ports. A battery-powered laser pointer directed at this light sensor from above functions as a break beam. The handy board control loop listens for a drop in the light level due to a letter obstructing the laser beam. When a drop in the light level is detected, the document-and-destroy sequence is initiated.
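The detection logic amounts to a threshold test in a polling loop. The sketch below illustrates it in Java with a hypothetical sensor interface and threshold (the actual prototype runs on a Handy Board rather than a JVM).

    // Sketch of the break-beam logic: poll the light sensor and start the
    // document-and-destroy sequence when the laser beam is interrupted.
    // LightSensor and the threshold value are hypothetical.
    public class BreakBeamWatcher {

        interface LightSensor { int read(); }       // 0..255 analog reading

        static final int THRESHOLD = 100;           // below this, the beam is blocked

        static void watch(LightSensor sensor, int samples) {
            boolean beamBlocked = false;
            for (int i = 0; i < samples; i++) {
                int level = sensor.read();
                if (level < THRESHOLD && !beamBlocked) {
                    beamBlocked = true;
                    System.out.println("Letter detected: starting document-and-destroy sequence");
                } else if (level >= THRESHOLD) {
                    beamBlocked = false;            // beam restored, ready for the next letter
                }
            }
        }

        public static void main(String[] args) {
            // Simulated readings: beam clear, then briefly interrupted by a letter.
            final int[] readings = {200, 210, 40, 35, 205};
            watch(new LightSensor() {
                int i = 0;
                public int read() { return readings[i++ % readings.length]; }
            }, readings.length);
        }
    }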
... branch post office there is a button next to the picture that says ‘shred’. If you click the shred button the letter is shredded as soon as it arrives at your local branch.
Opt-in Virtual Mail
If you sign up for this program with the post office, when letters addressed to you arrive at the processing center they are opened by a machine rather than being routed to your local branch. The envelope and contents are scanned front and back, and then everything is immediately shredded. The image files are sent to your email account. By combining this system with optical character recognition software, your mail could be run through a spam filter with the rest of your email to automatically filter out junk mail.
Part IV
Doctoral Colloquium
Communication from Machines to People with Dementia
T D Adlam
Bath Institute of Medical Engineering
Wolfson Centre, Royal United Hospital
Bath. BA1 3NG
+44 1225 824 107 / [email protected]
ABSTRACT
In this paper, I describe work in progress investigating effective means of communicating messages to people with dementia that will be understood and in some situations effect a behaviour change. Different media will be investigated for their effectiveness. Communications will be evaluated in domestic and laboratory contexts using hardware designed for this work and existing hardware from the Gloucester Smart House Project.
Keywords
Dementia, communication, machine, behaviour, media, human/machine interface.
INTRODUCTION
Dementia
Dementia is defined as ‘a progressive global impairment of cognitive function in a conscious person that is usually untreatable.’ It mostly, but not exclusively, affects older people. Its primary symptom is the loss of short-term memory. Other symptoms include an inability to plan task execution, the loss of the ability to reason, the loss of the ability to learn, temporal disorientation, and social disinhibition.
Communications
It is the objective of this work to be able to present to a human/machine interface designer a series of guidelines for the design of interfaces acting between machines and people with dementia; specifically the most effective means of communicating information from a machine to a person with dementia.
This work is part of the Gloucester Smart House [2] project, which aims to develop devices and technology that will enable people with dementia to live more independently whilst being supported by technology. For these devices to be successful, they need to be able to communicate information to a person with dementia.
Similar work is in progress in Canada [1], where the washroom has been used as a context to evaluate the response of people with dementia to verbal prompts during a daily living task.
MESSAGES
The research is addressing two main classes of message to be communicated to the person with dementia.
The first is informative and does not necessarily require a response. It informs the user that, for example, an action by a device has been completed or that a person will be calling to visit shortly.
The second class of message is directive and is intended to modify the behaviour of its recipient. For example, a message may be generated to discourage a person from leaving the house at night in cold weather or to encourage a person to go to the toilet in the bathroom when they get up at night. Other messages may combine these two classes.
MEDIA
Many different media are available. Most people are familiar with visual communications from televisions, computers, advertising hoardings, books and magazines; and audible communications from the telephone, radio, CD or record player or public address system. There are other media that are not usually associated with communication that may prove useful in this research, such as music, odour and directed lighting.
Medium: Audio
Audio is a versatile medium for messaging in a building and is frequently used in large public spaces.
Audio is pervasive – the message is present (for a hearing person) in all parts of the room simultaneously, whatever direction the person is directing their attention in. It is a means of communication that people are accustomed to.
Audio is transient. When message transmission has ceased, the message is no longer present except in the hearer’s memory, which in the case of a person with dementia will be poor. It may be possible to loop an audio message, but this could be very irritating for the hearer.
Other questions present themselves, such as: whose voice should be used to deliver the message? Should the message use the first or third person? Should the message be delivered by a concealed or visible audio device? A concealed device allows for a physically present speaker.
Using a familiar existing audio device such as a radio may habituate the user to acting on instructions, whereas a new device may need to be introduced early in the course of the dementia so that the user is comfortable with the device.
Medium: Text
Text is another versatile message delivery medium used in public spaces. It is persistent: it doesn’t disappear on delivery and is there for a reader to come back to. It does not imply that the author is physically present.
Text is localised and requires the gaze and attention of the reader to be directed towards it for it to be effective. When a message is delivered, the attention of the user must be gained before any communication can begin.
Other issues that must be addressed when designing a text message are the type or script used, colours and size.
Medium: Video
Video too has advantages and disadvantages that do not make it an obvious choice as a communication medium.
Like audio, video with sound is transient unless it is looped. If video is used without sound it is localised and persistent like text.
Video does not imply the physical presence of the actor and can be used with real or animated faces. It is possible that an animated face will be perceived as a machine, whereas a recorded face will be perceived as a remote actor.
Video requires large electronic hardware overheads, which have cost implications for communications devices and networks (if the video is not stored locally in each device).
Other Media
Other media such as odour (which is a powerful stimulus of memory) and lighting will be introduced to highlight specific messages from other messaging devices, or to stimulate the memory of a previous communication.
Hardware requirements
A versatile communication device is being developed that will be able to transmit audio, video and text to the user for the purposes of information and behaviour modification. Control of lighting will be achieved with the installation of a bus system in the house. In a domestic context this will be a wireless bus.
EXPERIMENTAL WORK
The first stage of the experimental work will compare the responses of people with and without dementia to instructions given for a simple task, to determine key differences in the way that people with dementia respond to instructions when compared to each other and to people without dementia.
The arbitrary task selected is to present the subject with an instruction to turn one of two knobs to a particular numbered position. At the time of writing, the task equipment is being designed and built at BIME.
Secondly, people with dementia will be observed in their homes by carers and non-video sensors as they respond to prompts around key areas of the home. These areas are the kitchen, bathroom, front door and the whole of the house at night-time.
‘Wizard of Oz’ experiments with people with dementia will enable the testing of simulated devices in context, allowing changes to be made quickly and reactively. A concealed operator (the ‘Wizard of Oz’) can simulate the actions of an intelligent device interacting with the subject of the experiment.
Hardware developed for the Gloucester Smart House Project is being used for medium-term evaluation of user response to messaging systems. A compact battery-powered long-term data logger will record the user responses.
DEFINITION OF MESSAGES
The messages used for this work are being defined for four domestic contexts.
The kitchen – a cooker monitor has been developed for the Gloucester Smart House project that can intervene to prevent a dangerous situation and inform the user of actions taken.
The bathroom – a bath and basin monitor has been developed for the Gloucester Smart House project that can intervene to prevent a flood and inform the user of actions taken.
The front door – a reminder system has been developed that, with a timer and proximity sensors, will prompt a user on appropriate exit from the building.
The bedroom – a system (the Night Light) has been developed that uses lighting and prompts to guide a person at night-time.
These systems in context currently use their own messaging systems (audio) but will be developed to use a general-purpose messaging device being designed for this project.
ACKNOWLEDGEMENTS
I would like to thank my supervisors for their help and advice in preparing this programme of research.
Dr. Roy Jones, The Research Institute for the Care of the Elderly, University of Bath, UK.
Dr. Roger Orpwood, Bath Institute of Medical Engineering, University of Bath, UK.
Dr. Ian Walker, Department of Psychology, University of Bath, UK.
REFERENCES
1. Mihailidis, A., Barbenel, J.C., Fernie, G. (in press). The efficacy of an intelligent cognitive orthosis to facilitate handwashing by persons with moderate-to-severe dementia. Neuropsychological Rehabilitation.
2. Orpwood R, Adlam T, Gibbs C, Hagan S, Jepson J. The Gloucester Smart House. 6th Annual National Conference of the Institute of Physics and Engineering in Medicine; 2000; Southampton: Institute of Physics and Engineering in Medicine; 2000.
Context Information Distribution and Management
Mark Assad
School of Information Technologies
Madsen Building, F09
University of Sydney, NSW, 2006 AUSTRALIA
+61 2 9351 5711
[email protected]
Both of these models revolve around the user’s context data being stored in a central infrastructure-controlled database. This means that the user does not have complete control over who has access to their data.
The single database model would be able to achieve the scenario, but at the cost of giving up the user’s privacy to store the CD collection. The distributed database model does not support tracing the user’s location as they pass a music store.
I aim to develop a system that will allow users to efficiently access their context information regardless of their location. I want the user to be in complete control of their data by storing the information on their local resources.
Figure 3: Proposed model with decentralised database
PROBLEM STATEMENT
A problem arises when users start to travel from one area to another. The infrastructure must be able to detect and identify these people regardless of where they are initially from. Also, the users should be able to be detected without prior arrangement with the local environment. I have developed applications that use the Bluetooth transmitters in mobile phones as a kind of “Active Badge” [2]. This technique passively detects the Bluetooth hardware address of the user’s phone, and matches this to a known profile for the user. The Bluetooth hardware address is not a hierarchical name, and as a result there is no simple way of doing a global lookup between the user’s phone’s address and their profile.
My research work aims to develop an infrastructure that combines the features of the hierarchical naming structure for static entities in the environment (such as rooms, buildings, and locations), and a distributed peer-to-peer database for mobile entities that can move between geographic areas (such as people or mobile phones) (Figure 3).
Each user is associated with a data storage solution in a local home network; this would be similar to a user’s mail server. In this way, the user is in control of his/her data, and they have the option as to what data is made available to querying applications. Pointers back to these servers are entered into the distributed hash table as a means of locating the individual server.
Using the scenario as an example, I envision a system where the client would be able to leave their home network, and upon arriving in a foreign area the network would be able to detect the user by their Bluetooth mobile phone. The Bluetooth ID would then be used as a key to the distributed table, returning a pointer to the user’s home database. A message could then be sent to the user’s home system, alerting it to the user’s immediate surroundings. This information could then be used to inform the user about the availability of the CD.
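The envisioned lookup chain can be sketched as follows: a detected Bluetooth address is used as a key into a distributed table, which returns a pointer to the user's home server, which is then told about the user's surroundings. A plain in-memory map stands in for Chord here, and all names are illustrative.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the proposed lookup chain: Bluetooth address -> distributed hash
    // table -> pointer to the user's home context server -> notification.
    // A plain Map stands in for Chord; HomeServer is a hypothetical interface.
    public class ForeignAreaDetector {

        interface HomeServer {
            void notifySurroundings(String description);
        }

        // In the real design this table would be a Chord-style DHT spread over many nodes.
        private final Map<String, HomeServer> distributedTable = new HashMap<>();

        void register(String bluetoothAddress, HomeServer home) {
            distributedTable.put(bluetoothAddress, home);
        }

        void onPhoneDetected(String bluetoothAddress, String localDescription) {
            HomeServer home = distributedTable.get(bluetoothAddress);
            if (home == null) {
                System.out.println("Unknown device, no home server registered");
                return;
            }
            // The user's own server decides what to do with the context update.
            home.notifySurroundings(localDescription);
        }

        public static void main(String[] args) {
            ForeignAreaDetector area = new ForeignAreaDetector();
            area.register("00:0A:95:9D:68:16", new HomeServer() {
                public void notifySurroundings(String description) {
                    System.out.println("Home server informed: user is near " + description);
                }
            });
            area.onPhoneDetected("00:0A:95:9D:68:16", "a music store stocking the wanted CD");
        }
    }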
EVALUATION
I aim to implement this strategy using the Elvin [3] content-based messaging service as a means of handling the fixed-location entities, and using a distributed hash table such as Chord [4] for creating pointers into this network for mobile devices. I wish to then evaluate the effectiveness of this method as a globally distributed context framework.
REFERENCES
1. Dearle, A., et al., Architectural Support for Global Smart Spaces, in Lecture Notes in Computer Science 2574. 2003, Springer. p. 153-164.
2. Want, R., et al., The Active Badge Location System. 1992, Olivetti Research Ltd. (ORL): Cambridge.
3. Segall, B. and D. Arnold. Elvin has left the building: A publish/subscribe notification service with quenching. In AUUG97. 1997. Brisbane, Australia.
4. Stoica, I., et al. Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications. In ACM SIGCOMM. 2001. San Diego, CA.
Publish/Subscribe Messaging: An Active
Networking Approach
Michael Avery
School of Information Technologies
Madsen Building F09, The University of Sydney, NSW, Australia
[email protected]
servers to run. Before a user can send a publish or a subscribe message, it needs to locate a server. This is unreasonable for some devices because they can be mobile or they may have very little processing power.
Another, more powerful type of content-based messaging would require the subscriber to send its subscription in the form of code. When a message is received by the server, the subscription code could then be executed with the message as input, and the result of the code could be used to determine whether to send the message to a subscriber. This method gives subscribers full control over the messages they receive but has a number of drawbacks in terms of processing power required and security.
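The "subscription as code" idea can be made concrete with a small sketch in which each subscriber supplies a predicate that the broker executes against every incoming message. The interfaces below are illustrative placeholders, not Elvin's or any existing system's API.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Sketch of content-based matching where the subscription itself is code:
    // the broker runs each subscriber's predicate against every incoming message.
    public class CodeSubscriptionBroker {

        interface Subscription {
            boolean matches(Map<String, Object> message);   // arbitrary subscriber-supplied logic
        }

        interface Subscriber {
            void deliver(Map<String, Object> message);
        }

        private final List<Subscription> subscriptions = new ArrayList<>();
        private final List<Subscriber> subscribers = new ArrayList<>();

        void subscribe(Subscription s, Subscriber target) {
            subscriptions.add(s);
            subscribers.add(target);
        }

        void publish(Map<String, Object> message) {
            for (int i = 0; i < subscriptions.size(); i++) {
                if (subscriptions.get(i).matches(message)) {
                    subscribers.get(i).deliver(message);
                }
            }
        }

        public static void main(String[] args) {
            CodeSubscriptionBroker broker = new CodeSubscriptionBroker();
            // Subscription expressed as code: only temperature readings above 30.
            broker.subscribe(
                m -> "temperature".equals(m.get("type")) && ((Integer) m.get("value")) > 30,
                m -> System.out.println("Alert delivered: " + m));
            broker.publish(Map.of("type", "temperature", "value", 35));
            broker.publish(Map.of("type", "temperature", "value", 20));   // filtered out
        }
    }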
The Active Networks approach [4] attempts to place computing power inside network nodes. Active network nodes receive packets containing code which they then execute. With this approach it is possible to upgrade routers ‘on the fly’ and install new protocols simply by putting the code for them on the network. This approach aims to improve network efficiency and reliability.
PROPOSAL
We plan to investigate how active networking technology can be applied to the problem of content-based messaging and then see how this can be used in a ubiquitous computing environment. We hope this will lead to the development of a messaging system that is efficient, fault-tolerant and able to support mobility.
We plan to view the publish and subscribe messages as code that is to be executed by the messaging system. We will then investigate allowing subscribers to send complex subscription messages to see what implications this has on performance and scalability.
A major advantage that an active network messaging system has over traditional server-based messaging systems is that publishers and subscribers will not have to search for a messaging server. Instead, they will simply publish their messages to the network and any active network node that picks them up will be able to execute them. This is a very useful property to have in a network filled with devices with low processing and battery power.
We also plan to investigate the use of distributed hash tables, like Chord [5], in a messaging system. Chord provides a method of locating objects in a distributed network, as well as providing support for fault-tolerance. These properties should prove useful in a content-based messaging system.
REFERENCES
1. Segall, B., et al. Content based routing with Elvin4. In AUUG2K. 2000. Canberra.
2. Carzaniga, A., D.S. Rosenblum, and A.L. Wolf. Achieving Scalability and Expressiveness in an Internet-Scale Event Notification Service. In Symposium on Principles of Distributed Computing. 2000. Portland.
3. Strom, R.E., et al., Gryphon: An Information Flow Based Approach to Message Brokering. Symposium on Software Reliability Engineering, 1998.
4. Tennenhouse, D.L., et al., A Survey of Active Network Research, in IEEE Communications Magazine. 1997. p. 80-86.
5. Stoica, I., et al., Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications. 2002, MIT: Cambridge.
Workspace Orchestration to Support Intense Collaboration
in Ubiquitous Workspaces
Terence Blackburn
Dept of CIS, University of South Australia
Mawson Lakes SA 5095
[email protected]
Issues such as coordination and synchronisation are critical for teams to achieve their goals. Specialists who are participants in these activities are often co-located in specially designed rooms that foster face-to-face collaborative activities, but often the supporting technological infrastructure adds little to achieving their goals.
Many of these activities have a defined flow of events. For example, an emergency response planning session generally has a formally defined, procedural set of activities. Workflow engines could potentially be used to support these procedural aspects, and inferencing engine models may assist when more flexible approaches are required. These aspects will be explored further as part of this PhD research.
Figure 1: The LiveSpaces architecture in the e-World lab at UNISA
In addition to the procedural side of intense collaboration, group cognition also needs to be considered as part of the orchestration process. The aim of this work is to identify, model and automate some of the group cognitive processes such as group awareness and decision making. Approaches to be researched in this regard include Distributed Cognition theory [5], which focuses on changes in cognitive states at a system level, and Activity Theory [6], which focuses on individuals along with the activities they are engaged in.
WORKSPACE ORCHESTRATION
Workspace orchestration services support both procedural and cognitive aspects of intense collaboration, and two approaches are required to explore these aspects. The first is to investigate the procedural, structured processes that lend themselves to automation and the second is to identify and model the group cognitive processes that produce the less structured, ad hoc activities.
An orchestration service needs to be partly autonomic and partly interactive. For example, the service may prefetch and load data automatically as required, but a user should also have the flexibility to request ad hoc data sets. This means that the service can coordinate activities according to a preselected sequence of events with the flexibility to change the order as determined by cognitive actions. It should augment the cognitive work activities of the users in the workspace and at the same time provide procedural guidance. The service should monitor workspace activities and context to coordinate devices, displays and applications, and it should operate primarily in the background as a ubiquitous service.
Observations of laboratory experiments and ethnographic studies in our candidate domains will help to identify and model the cognitive and procedural processes in collaborative activities, and these processes will be mapped to computational artefacts. An experimental orchestration service will be developed to explore implementation approaches based on the use of workflow concepts, an inferencing engine or other candidate approaches. This experimental apparatus will be used within a LiveSpaces environment to evaluate workspace orchestration concepts in two application domains: disaster relief planning and scientific “tiger teams”.
CONTRIBUTIONS
This work will test the hypothesis that “workspace orchestration can enhance a team’s ability to achieve its goals in intense collaborative activities within ubiquitous workspaces”. It will provide a workspace orchestration model for building applications for collaborative workspace activities and it will model some of the cognitive and procedural properties of intense collaborative activities.
REFERENCES
[1] Vernik, R., Blackburn, T. and Bright, D., (2003): Extending Interactive Intelligent Workspace Architectures with Enterprise Services. Proc Evolve Conference, Enterprise Information Integration, Sydney, Australia, 2003.
[2] Johanson, B., Fox, A. & Winograd, T., (2002): The Interactive Workspaces project: experiences with ubiquitous computing rooms. Proc IEEE Pervasive Computing, 2002.
[3] Bond, A., "ODSI: Enterprise Service Co-ordination," CRC for Enterprise Distributed Systems Technology, St Lucia, Queensland, 2001.
[4] Mark, G., "Extreme Collaboration," Communications of the ACM, vol. 45, pp. 89-93, 2001.
[5] Hutchins, E., Cognition in the Wild. Cambridge, Mass: MIT Press, 1995.
[6] Halverson, C. A., "Activity theory and distributed cognition: Or what does CSCW need to DO with theories?," CSCW: An International Journal, vol. 11, pp. 243-67, 2002.
Visualisations of Digital Items in a Physical Environment
David Carmichael
School of Information Technologies
University of Sydney
NSW 2006 Australia
61-2-9351-5711
3. PROBLEM STATEMENT
The continued expansion of Ubiquitous Computing has resulted in the physical environment having a multitude of computational devices and digital items added to it. These additions continue the evolution of the Physical Environment into an Intelligent Environment. However, such computing environments are difficult for people to understand. The question which my research seeks to answer is how to provide a view of the digital information which resides in the physical environment, in a way which is comprehensible.
The motivation for this project is that there is no good way to see or filter all this information. It would be useful to be able to visualise this information and also be able to browse through it or see aggregated views. This should be accessible on a range of devices, for example a PC, PDA or mobile phone.
4. PLANNED RESEARCH
My research aims to represent physical and digital items within an intelligent environment. Before examining the representation of these items we first categorise the items of interest; finally, we look briefly at the architecture required to show these representations of digital and physical items.
4.1 Physical and Digital Items
Items within an intelligent environment fall into a number of categories. The first category contains physical items with an area of effect in the environment. This includes WiFi access points, Bluetooth beacons, proximity sensors (such as those on door locks), motion detectors and video cameras.
The next category is physical items with some digital/physical function. Items in this category include light switches, door locks and projectors.
The final category contains purely digital items. These can vary in the context in which they exist. Their locational context can vary from a single point (e.g. a message only visible at the exact spot) to a large area (e.g. the area a service is available in). An example in the first category might be digital post-it notes, while the second category might contain logical services such as "can print" or "can control lighting".
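The three categories, and the varying locational context of purely digital items, can be summarised in a small data-model sketch; all names and values are illustrative, not the planned architecture.

    // Illustrative data model for the three categories of items described above.
    public class EnvironmentItems {

        enum Category { PHYSICAL_WITH_AREA_OF_EFFECT, PHYSICAL_WITH_FUNCTION, PURELY_DIGITAL }

        static class Item {
            String name;
            Category category;
            double x, y;          // anchor point in the physical environment
            double radius;        // 0 for a single point, larger for an area of effect
            Item(String name, Category category, double x, double y, double radius) {
                this.name = name; this.category = category;
                this.x = x; this.y = y; this.radius = radius;
            }
        }

        public static void main(String[] args) {
            Item accessPoint = new Item("WiFi access point", Category.PHYSICAL_WITH_AREA_OF_EFFECT,
                                        2.0, 5.0, 30.0);
            Item postIt = new Item("digital post-it note", Category.PURELY_DIGITAL,
                                   4.5, 1.0, 0.0);   // only visible at the exact spot
            System.out.println(accessPoint.name + " covers a radius of " + accessPoint.radius + " m");
            System.out.println(postIt.name + " is anchored at a single point");
        }
    }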
4.2 Representation to User
My plan is to use a number of different visualisations to present information about the intelligent environment to the user. The first is to generate a virtual environment which mirrors the physical environment. Representations of digital items related to the physical environment are then placed appropriately in the virtual environment.
The level of detail to which the physical environment is modelled can be varied in order to make the digital items more or less prominent. Using this system the user may also be able to interact with the digitally controlled systems via the virtual environment and have the changes reflected in the physical world. Previous work has been done on using virtual reality in control systems but not on a worldwide scale.
Another way of viewing the digital items would be using augmented reality systems. In order to work effectively this would require accurate location tracking to be able to put digital information in arbitrary locations. A more restricted option would be to put up visual markers recognisable to an augmented reality toolkit [5]. This allows digital information to be displayed next to physical items tagged and known to the system.
Figure 1: An example view using Augmented Reality to show that there is mail waiting.
The final way of representing the digital items is on 2-dimensional maps. This approach is more restricted in terms of interaction, but can be displayed on devices with lower computational power.
5. REFERENCES
[1] Brown, P.J. The stick-e document: a framework for creating context-aware applications. In Proceedings of Electronic Publishing 1996, pp. 259-272.
[2] Cory D. Kidd, Robert Orr, Gregory D. Abowd, Christopher G. Atkeson, Irfan A. Essa, Blair MacIntyre, Elizabeth Mynatt, Thad E. Starner and Wendy Newstetter. The Aware Home: A Living Laboratory for Ubiquitous Computing Research. In Proceedings of CoBuild ’99: Second International Conference on Cooperative Buildings, 191-198.
[3] Dearle, A, Kirby, GNC, Morrison, R, McCarthy, A, Mullen, K, Yang, Y, Connor, RCH, Welen, P, Wilson, A. In: Lecture Notes in Computer Science 2574, Chen, M-S, Chrysanthis, PK, Sloman, M, Zaslavsky, AB (eds), Proc. 4th International Conference on Mobile Data Management (MDM 2003), Melbourne, Australia, pp 153-164. Springer, ISBN 3-540-00393-2. 2003.
[4] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto, K. Tachibana. Virtual Object Manipulation on a Table-Top AR Environment. In Proceedings of ISAR 2000, Oct 5th-6th, 2000.
Identity Management in Context-Aware
Intelligent Environments
Daniel Cutting
School of Information Technology
University of Sydney
Sydney, NSW 2006, Australia
+61 2 9351 5711
[email protected]
Figure 1: The relationships between an entity, the entity’s information, nyms and the intelligent environment.
In general, it seems clear that most people would like to limit the amount of information they provide about themselves to the intelligent environment, or at least provide the requisite information in such a way that it cannot be easily traced back to them unless absolutely necessary.
Although the purpose of nyms is to present very specific sets of limited information to the environment, it is easy to imagine situations where nyms could be maliciously combined either by a single party or by colluding parties to allow the discovery of additional information, or even the discovery of the entity underlying the nyms themselves.
PROBLEM STATEMENT
I am interested in exploring mechanisms for automatically creating or modifying nyms to provide as little information as possible to an IE while still providing enough for applications to be useful.
Further to this, I am interested in finding ways of reducing the probability that an entity can be discovered (or linked to data) based on the nyms they expose.
APPROACH
To understand and develop such nym-based mechanisms, it may be worth considering the classification of types of information referenced by a nym, such that automated reasoning can be applied to reduce or eliminate discovery of an entity or deduction of further information. For example, if an entity’s address details are classified as extremely sensitive, a nym-based framework may disallow inclusion of them in a nym that is intended for public use.
To take this further, such a framework could disallow the use of multiple nyms which include an entity’s address in different contexts so that it cannot be used as a way of tying together otherwise apparently unrelated nyms. Such automated schemes for enforcing containment of information have been explored in specific domains such as cooperative collaboration tools [3] with some success.
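One way to picture this kind of automated reasoning is a sketch in which every piece of information carries a sensitivity classification and a nym intended for public use refuses to include anything above the public level. The classes and rules below are purely illustrative.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a nym-construction rule: information classified as sensitive may
    // not be included in a nym intended for public use. All names are illustrative.
    public class NymBuilder {

        enum Sensitivity { PUBLIC, RESTRICTED, EXTREMELY_SENSITIVE }
        enum Audience { PUBLIC_USE, TRUSTED_SERVICE }

        private final Audience audience;
        private final Map<String, String> fields = new HashMap<>();

        NymBuilder(Audience audience) { this.audience = audience; }

        boolean add(String key, String value, Sensitivity sensitivity) {
            // A public-use nym must not reveal anything above PUBLIC sensitivity.
            if (audience == Audience.PUBLIC_USE && sensitivity != Sensitivity.PUBLIC) {
                System.out.println("Refused to add '" + key + "' to a public nym");
                return false;
            }
            fields.put(key, value);
            return true;
        }

        public static void main(String[] args) {
            NymBuilder publicNym = new NymBuilder(Audience.PUBLIC_USE);
            publicNym.add("occupation", "Lecturer", Sensitivity.PUBLIC);
            publicNym.add("home address", "42 Example St", Sensitivity.EXTREMELY_SENSITIVE); // rejected
        }
    }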
The problem can also be approached from a different, though complementary direction, that of identity fusion [7, 5]. Similar to sensor fusion, this is the concept of constructing probabilities of a nym relating to a particular entity based on the accretion of low-level sensor data such as an entity’s location or passage through security doors. Instead of trying to reduce the leakage of information or entifiers, identity fusion is at least partially concerned with exploiting such weaknesses. It would thus be beneficial to explore this concept to strengthen research into reducing entity discovery.
REFERENCES
[1] Clarke, R. Authentication Re-visited: How Public Key Infrastructure Could Yet Prosper. 16th International eCommerce Conference, Bled, Slovenia, 9-11 June 2003.
[2] Clarke, R. Certainty of Identity: A Fundamental Misconception, and a Fundamental Threat to Security. Republished in Privacy Law & Policy Reporter 8, 3 (September 2001), 63-65, 68.
[3] Godefroid, P., Herbsleb, J.D., Jagadeesan, L.J., and Li, D., Ensuring Privacy in Presence Awareness Systems: An Automated Verification Approach. ACM Conference on Computer Supported Cooperative Work, Philadelphia, 2000.
[4] Goldberg, I. A Pseudonymous Communications Infrastructure for the Internet. PhD thesis, Computer Science Department, University of California, Berkeley, 2000.
[5] Li, L., Luo, Z., Wong, K.M., and Bossé, E., Convex Optimization Approach to Identity Fusion For Multi-Sensor Target Tracking. IEEE Trans. Syst., Man and Cybernetics, 31, 3 (May 2001), 172-178.
[6] Microsoft, Microsoft .NET Passport Review Guide. http://www.microsoft.com/net/downloads/passport reviewguide.doc
[7] Stillman, S., and Essa, I., Towards Reliable Multimodal Sensing in Aware Environments. http://citeseer.nj.nec.com/stillman01towards.html
Towards a Software Architecture for Device Management in
Instrumented Environments
Christoph Endres
Saarland University
Saarbrücken, Germany
[email protected]
2. ARCHITECTURE OF THE PROTOTYPE
2.1 Design goals
The design of the device manager is guided by several constraints. In the FLUIDUM project it will be used in three differently scaled instrumented environments, with a potentially widely varying number of devices and applications. Also, in order to cooperate with other, similar projects at the same office, the device manager has to be reusable in other contexts. In order to achieve these goals, there are several important considerations.
Since the architecture has to be open to new applications and new devices, the interfaces have to be well defined and simple. The architecture has to be sufficiently flexible for unforeseen future devices. This will be achieved by the way devices are classified, as described below.
Figure 1: High level view of the system
2.3 Classification of devices
As mentioned above, one main issue in device classification is the uncertainty about future devices. At the current pace of hardware evolution, it is very hard to tell which kind of devices will have to be integrated in the system in a few years, and next to impossible to find a classification of devices that could handle them. Therefore, we decided not to classify the devices, but instead to classify the different properties of a device (video capturing, text entering, infrared sensing, etc.) and model a device as a list of those properties. This approach has turned out to be very flexible and useful so far.
2.4 Plugboard architecture and device manager
The architecture of the plugboard reflects the approach of device classification. A device is modelled as an object containing a list of parameter/value pairs (e.g. “name=camera01”) and a list of property APIs. The inclusion of such a property API, e.g. “video in”, means that the device has this property. If a property of this type is missing, we can assume that the device cannot perform that task. The advantage of modelling those properties as APIs is that besides getting information about the device, we also acquire access to its features. The APIs are standardized, so on encountering a certain property API we know which functions can be called.
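The device model just described (parameter/value pairs plus a list of property APIs) might look roughly like the following sketch; the property interfaces shown are examples, not the FLUIDUM implementation.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the device model: a device is a set of parameter/value pairs plus
    // the property APIs it implements. Property interfaces here are examples only.
    public class DeviceHandleSketch {

        interface VideoIn { void startCapture(); }
        interface TextEntry { void type(String text); }

        static class DeviceHandle {
            final Map<String, String> parameters = new HashMap<>();
            final Map<Class<?>, Object> properties = new HashMap<>();

            <T> void addProperty(Class<T> api, T implementation) { properties.put(api, implementation); }

            boolean has(Class<?> api) { return properties.containsKey(api); }

            @SuppressWarnings("unchecked")
            <T> T get(Class<T> api) { return (T) properties.get(api); }
        }

        public static void main(String[] args) {
            DeviceHandle camera = new DeviceHandle();
            camera.parameters.put("name", "camera01");
            camera.addProperty(VideoIn.class, () -> System.out.println("camera01 capturing"));

            // A service only needs to check for the property API it requires.
            if (camera.has(VideoIn.class)) {
                camera.get(VideoIn.class).startCapture();
            }
            System.out.println("Can enter text: " + camera.has(TextEntry.class));
        }
    }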
The central part of the plugboard is the device manager server. It is a lookup service to which devices can connect or from which they can disconnect. On the other hand, services can also connect to the server and request information about devices. Each connected service will be automatically informed if there are important changes in the plugged devices. Some of those services take care of the connection and exchange of data to the central data pool.
[Figure: the device manager server maintains a table of devices and their handles; each device handle holds parameters and property APIs, and monitor and plug/unplug services connect to the server.]
3. DISCUSSION ISSUES
There are some unresolved issues in the current system that I would like to discuss.
... nection, there is no sophisticated mechanism yet to detect failure of a device without previous disconnecting.
3.3 Resource management
The device manager server is a useful lookup service to find available devices and to find out about their features. A feature and concept for scheduling devices to applications is still missing. In particular, a reliable locking mechanism for devices or device features in use is missing. Also, mutual locking of different properties on the same device is missing. For instance, a camera currently in use in the system is not capable of simultaneously broadcasting a video stream and capturing a high-resolution photo. Those dependencies have to be modelled.
3.4 Inclusion of future devices
This is a point which should be solved with our approach of device properties. The author would like to discuss it and gather some more opinions.
3.5 Dealing with virtual devices
Some properties, for instance recognition of visual markers, do not have a hardware equivalent but are over... egant way to do this.
4. ACKNOWLEDGMENTS
This work has been funded by the German Research Council (DFG) and the Chair for AI at the University of Saarbrücken, Germany.
246
Ubiquitous Support for Knowledge and Work
Michael A. Evans
Knowledge Acquisition and Projection Lab
501 N. Morton, Ste 212
Bloomington, IN 47404 USA
+1 812 856 1363
[email protected]
ABSTRACT Even if you’re in a priority situation there’s (sic) a lot of
Knowledge management (KM) presents a challenge to things going on in a ship; they don’t have the time to cut a
human-computer interaction (HCI). Indeed, a reassessment message with you.
of how knowledge and work distributed across structural Don: Yeah and, in fact, just to further what BS’s saying
and cultural boundaries of organization are supported may like again the chat came in to play [in a recent
be in order. This dilemma can be summarized by stating troubleshooting action aboard a ship deployed in the
that the problem concerns how knowledge is Persian Gulf]. Because what I was doing was I was
conceptualized and at what level of organization chatting with LANT [FTSCLANT – Fleet Technical
interventions are proposed. Consequently, my dissertation Support Centre, Atlantic Division in Norfolk, VA] almost
draws upon three theories—Communities of Practice, nightly. Almost every night about what my problem was
Activity Theory, and Institutional Theory—that emphasize and, you know, what I mean and then they were in turn
knowledge and work as collective processes to counter this calling Richard [at the Naval Surface Weapons Center,
challenge. A case of the collaborative practice of virtual Crane in south-central Indiana] and actually doing, you
teams in the U.S. Navy is presented to illustrate. know, calling Richard on the phone saying, ”Yeah, you
Keywords know…[Don’s]…got these parts – he can do this, this and
Knowledge management, human-computer interaction, this,” and then they would get back on chat [to continue
Communities of Practice, Activity Theory, Institutional troubleshooting with me] and it’s all real time.
Theory The above discussion between Bill, an electronics
INTRODUCTION engineer, and Don, a subject-matter expert technician,
The U.S. Department of Navy (DON) is in a monumental encapsulates the current, yet evolving practice of
period of transition. In essence, to counter a radical maintaining and troubleshooting at a distance the
downsizing in on board personnel and to leverage what shipboard systems in the U.S. Navy. To review, a subject-
has been championed as the critical asset of tacit, expert matter expert (SME) on a “tech assist” in the Persian Gulf
knowledge as well as advanced information technologies, exploits both mundane and advanced information
the DON has formulated a strategy that promotes technologies to leverage geographically-dispersed
knowledge management and eGovernment initiatives expertise. The mission was to troubleshoot and resolve a
throughout the enterprise. To emphasize the impact of this critical problem with a complex electronic countermeasure
transition, an exchange on evolving collaborative system aboard a ship deployed to defend troops landed in
troubleshooting practice in the U.S. Navy between two Iraq.
long-time civilian employees follows: DESIGN FOR DISTRIBUTED KNOWLEDGE AND WORK
Bill: In the old days that [an exchange between at-sea The above excerpt and scenario capture nicely the hurdles
sailors and shore-based technicians engaged in a to be overcome to support sailors, engineers, and
troubleshooting action] would have been handled by technicians servicing complex electronic systems aboard
satellite phone (MRSAT) or message traffic. So the U.S. Navy ships. The matter is more critical given the
SIPRnet [the Secret Internet Protocol Router Network, DON’s explicit interest in knowledge management (KM)
used to transmit classified information about ships] has initiatives.
really helped, being able to send email because sometimes Consequently, one interpretation of this strategic initiative
it would take a day to get a message out. is to develop a knowledge management and performance
support system to aid at a distance the collaborative
troubleshooting actions of military and civilian technicians
maintaining electronic countermeasure systems aboard
U.S. Navy warships. To this end, the Knowledge
Acquisition and Projection Lab at Indiana University is
247
attempting to meet this challenge by participating in the including the object of the activity). The standard view in
Knowledge Projection Project – a joint undertaking with these situations is to deduce an ultimate set of operations
Naval Sea Systems Command (NAVSEA), Naval Surface from an abstract use activity and apply these to design and
Weapons Center (NSWC) Crane, EG&G Technical analysis. This article argues that the user interface fully
Services and Purdue University. The proposed system is reveals itself to us only when in use (p.171-172).
intended to leverage both intellectual capital (i.e., tacit In this dissertation I wish to extend her framework to
knowledge) and advanced information technologies (e.g., include Communities of Practice and Institutional Theory.
Case-Based Reasoning and High Performance Knowledge The reasons for this are threefold. First, over the past
Bases) to facilitate the collaboration between shore-based fifteen years there have been tremendous advances to
civilian technicians and on board sailors within a network theoretically-informed analyses of knowledge and work.
of distributed practice. The goals of the design team at IU Nonetheless, few attempts have been made to integrate
are to exploit KM thinking and techniques to impact key perspectives. Second, there are shortcomings to Activity
organizational variables, including a reduction in total cost Theory, particularly a lack of attention to the issue of
of ownership, an improvement in the efficiency and power that can be addressed by the other two perspectives.
effectiveness of maintenance and troubleshooting, and an Finally, a brining together of these theories can aid to
increase in fleet readiness. facilitate the continued interdisciplinary nature of HCI. By
Understandably, this presents a unique challenge to incorporating theories that are now used in cognate fields
human-computer interaction (HCI). To meet this challenge such as educational psychology, performance technology,
the suggestion forwarded here is to expand traditional information science, and organizational theory, this
frames of reference to more fully incorporate social and agenda can be further advanced.
cultural features of organization that may influence the
CONCLUSION
effective distribution of knowledge and work across My aim has been to reveal the challenges that knowledge
enterprise boundaries. As will be illustrated in the case of management initiatives bring to the theory and practice of
this collection of military and civilian technicians, social human-computer interaction. The issue is that distributed
features arise as performance is essentially a collaborative knowledge and work inevitably involve the crossing of
and distributed practice across specialized work units; social and cultural boundaries of organization. What is
cultural features arise as this coordination cuts across encouraging is that we now have appropriate,
functional identities, defined both by their status in the theoretically-based perspectives that can assist in meeting
organization (military or civilian) and role in the end-to- this endeavor.
end process (primary maintainer or first-line support). To
assist with accounting for these social and cultural ACKNOWLEDGMENTS
features, I will enlist concepts and principles from three I thank NAVSEA, NSWC Crane, and the men and women
perspectives that are appearing with increasing regularity in the U.S. Navy who maintain and operate the “Slick-32”
in the HCI literature — Communities of Practice [4], for their cooperation and participation.
Activity Theory [2], and Institutional Theory [5]. REFERENCES
Juxtaposing these three theories may better permit for the 1. Bodker, S. (1989). A human activity approach to user
examination of inherent, yet unrecognized, tensions in the interfaces. Human-Computer Interaction, 4, 171-195.
concepts of knowledge (object-process) and work
(individual-organizational) that knowledge management 2. Engeström, Y. (1987). Learning by expanding: An
principles and initiatives present. activity-theoretical approach to developmental
research. Helsinki, Finland: Orienta-Konultit.
THREE PERSPECTIVES ON KNOWLEDGE AND WORK
3. U. S. Department of the Navy (2002) Information
Almost fifteen years ago, Susanne Bødker [1] wrote:
Management & Information Technology Strategic Plan
This article presents a framework for the design of user FY2002-2003. Available at http://www.don-
interfaces that originates from the work situations in which imit.navy.mil/default.asp.
computer-based artifacts are used: The framework deals
4. Wenger, E. (1998). Communities of practice: Learning,
with the role of the user interface in purposeful human
meaning, and identity. New York: Cambridge
work…I deal with human experience and competence as
University Press.
being rooted in the practice of the group that conducts the
specific work activity…The main conclusions are: The 5. Zilber, T. B. (2002). Institutionalization as an interplay
user interface cannot be seen independently of the use between actions, meanings, and actors: The case of a
activity (i.e., the professional, socially organized practice rape crisis center in Israel. Academy of Management
of the users and the material conditions for the activity, Journal, 45(1), 234-254.
248
Anonymous Usage of Location-Based Services over
Wireless Networks
Marco Gruteser
Department of Computer Science
University of Colorado at Boulder
Boulder, CO 80309
[email protected]
addresses, which is typically only known to Internet Service every quasi-identifier. In other words, there are at least
Providers. Thus, this type of identification attack is available other individuals that any given record could pertain to.
to any provider of a location-based service.
For data mining purpose entries can be perturbed before stor-
age by adding a random value [11]. A reconstruction proce-
3. APPROACH dure then estimates the approximate distribution of a large
The privacy enhancing mechanisms seek to maintain a min- number of values; however, no specific value can be linked
imum level of anonymity. Inspired by the -anonymity con- to an individual.
cept [5] for databases, we define the level of anonymity as ,
where the adversaries observations of an individuals move-
Short Bio
ments must be undistinguishable from at least other in- Marco Gruteser is currently a Ph.D. candidate in computer
dividuals. We plan to extend this model to take into account science at the University of Colorado at Boulder. His re-
continuous data updates (i.e., location information changing search interests include privacy, context-aware applications,
over time). and wireless networks.
We address the WLAN tracking problem at the link layer During a one-year leave at the IBM T.J. Watson Research
through disposable MAC addresses. Compared to solutions Center, he developed a software infrastructure that inte-
such as directional antennas, this lightweight mechanism grates sensors to support context-aware applications in the
that can be deployed without extensive hardware modifica- BlueSpace smart office project. This work led to four pend-
tions. When addresses are switched frequently enough, it ing patents, a refereed conference publication, and coverage
prevents an adversary from tracking the movements of in- from US news media such as the New York Times and ABC
dividuals. More sophisticated adversaries, however, may be Television News.
able to link several addresses to the same individual through
monitoring signal-to-noise ratio or traffic analysis. We plan Marco received a Master’s degree in Computer Science from
to analyze WLAN traces to judge how frequently addresses the University of Colorado at Boulder (2000) and completed
must be disposed for a given level of anonymity and how a Vordiplom at the Technical University Darmstadt, Ger-
vulnerable this approach is against the more sophisticated many (1998). He is a student member of the ACM.
attacks.
REFERENCES
The system uses cloaking algorithms that change the accu- [1] P. Chou, M. Gruteser, J. Lai, A. Levas, S. McFaddin, C. Pinhanez,
and M. Viveros. Bluespace: Creating a personalized and
racy of location information, when the system intentionally context-aware workspace. Technical Report RC 22281, IBM
reveals it to third parties, such as location-based services. Research, 2001.
To date, we have designed a system architecture and algo- [2] Sastry Duri, Marco Gruteser, Xuan Liu, Paul Moskowitz, Ronald
rithms [6] that adaptively control the accuracy of transmitted Perez, Moninder Singh, and Jung-Mu Tang. Framework for security
location information so that the message could have origi- and privacy in automotive telematics. In 2nd ACM International
nated from at least users. Based on automotive traffic sim- Worksphop on Mobile Commerce, 2002.
ulations we found that 100–200m accuracy is usually suf- [3] Roy Want, Andy Hopper, Veronica Falco, and Jonathan Gibbons.
ficient on city and highway streets to maintain a minimum The active badge location system. ACM Transactions on Information
Systems (TOIS), 10(1):91–102, 1992.
level of 5-anonymity. We plan to extend this work with al-
[4] Paul Castro, Patrick Chiu, Ted Kremenek, and Richard Muntz. A
gorithms that do support more sophisticated location queries probabilistic room location service for wireless networked
than asking for a single point and with algorithms that do not environments. In Ubicomp, 2001.
rely on a central trusted server. [5] L. Sweeney. -anonymity: A model for protecting privacy.
251
their personal server. If these external devices are to make An important part of my research will be to recognize how
use of the services, they first need to know that they exist. the changes I propose to make to client-server interactions
Therefore the personal server needs a way to advertise can also change the nature of applications within an
what services it has to offer as it enters a network, so that intelligent environment. What applications do this new
the client devices will access them upon discovery. service advertisement mechanism enable that were
It is this requirement for a reverse discovery method, where previously difficult, or even impossible, to implement? For
the server informs the client, rather than the client querying example, what changes will it enable for identity checking,
the server, that forms the basis of my research direction. location tracking and personalisation applications?
Applications such as these are major areas of research
EXAMPLE SCENARIO within ubiquitous computing research today, so my
Take the example of a landline phone in an office, which is research has the potential to offer new ways to approach
equipped with an LCD display. As you approach the them.
phone, wearing your personal server, the phone becomes
aware of its presence and the services it is offering. It Finally, I intend to build a working system where personal
utilises the phonebook service on offer to present you with servers, or devices acting in a similar fashion (such as
a list of your contacts on the LCD display, from which you customised PDAs), can wirelessly join a network and make
can easily pick out who you want to call. When you hang their services known. These services should include some
up and walk away, the phone knows that your personal that the new mechanisms have made possible, as discussed
server is no longer offering these services, and ceases to above.
use them. ACKNOWLEDGMENTS
In order to make this work, there are a number of issues I would like to thank my supervisor Bob Kummerfeld and
that need to be addressed: my associate supervisor Aaron Quigley for their guidance
and support. I would also like to thank the Smart Internet
- How does the phone know that the personal server is Technology CRC for their ongoing support of my PhD.
there?
- How does it know what services the personal server is REFERENCES
offering? 1. Want, R., Pering, T., Danneels, G., Kumar, M., Sundar,
- How does the phone authenticate itself to get access to M., Light, J.: “The Personal Server: Changing the Way
the services? We Think About Ubiquitous Computing”, Proceedings
- What data representation do the devices use to of Ubicomp 2002, Goteburg, Sweden, September 30th –
communicate? October 2nd 2002, pp 194-209.
- How does the phone know when the personal server is 2. Want, R., Pering, T., Borriello, G., Farkas, K. I.:
no longer available? “Disappearing Hardware”, IEEE Pervasive Computing,
It is my intention that my research will lead me to come up Vol. 1, Issue 1, April 2002, pp 36-47.
with suitable solutions for each of these problems. 3. Mayo, R.: “TN-60 -- Reprint of the Factoid Web Page”,
RESEARCH DIRECTIONS http://www.research.compaq.com/wrl/techreports/abstra
Current networking models do not allow personal servers cts/TN-60.html, July 2001.
to introduce their services in the ad-hoc fashion that we
wish them to. Therefore my research is going to be directed 4. Staudter, T.: “The Core of Computing”,
towards finding new mechanisms for client-server http://www.research.ibm.com/thinkresearch/pages/2002
interactions and investigating what possibilities they open /20020207_metapad.shtml, February 2002.
up. 5. Moore, D. J., Want, R., Harrison, B. L., Gujar, A.,
Firstly, I need to fully investigate existing service Fishkin, K.: “Implementing Phicons: Combining
discovery and advertisement mechanisms. This allows me Computer Vision with InfraRed Technology for
to identify what problems they encounter when applied to Interactive Physical Icons”, Proceedings of ACM
devices wishing to offer network services. This knowledge UIST’99, Ashville, N.C., November 8th – 10th 1999, pp
can then be used to develop a protocol suite similar to that 67-68.
used by zeroconf, to make it simple to introduce devices 6. Mockapetris, P.: “Domain Names – Concepts and
and their services to the network. Facilities”, STD 13/RFC 1034, November 1987.
Consideration needs to be given to how external devices 7. “Zero Configuration Networking”, zeroconf IETF
authenticate with the personal server to access services. working group home page, http://www.zeroconf.org.
This would probably involve services available with 8. “Understanding UPnP: A White Paper”,
different levels of clearance, ranging from low-level http://www.upnp.org/download/
statistical information open to all, to the most privileged UPNP_UnderstandingUPNP.doc
ability to write to all parts of the disk.
252
ME: Mobile E-Personality
PEKKA JÄPPINEN
Department of Information Technology
Lappeenranta University of Technology
P.O.Box 20, 53851 Lappeenranta FINLAND
[email protected]
253
home network. This may not be possible for ser- so that any service can use it? How user can define
vices provided ubiquitously. The same problem ex- what information is available to what service? How
ists for trusted third party approaches such as Lib- is user authenticated for changing the stored data?
erty Architecture [3] and Microsoft .NET passport How does ME affect on business?
[4]. Connection to the trusted third party is required
The PhD thesis is not going to answer to all of these
for personal information retrieval. Third party ap-
questions. Since the thesis is done for the computer
proach may also require some kind of payment to
science department and the laboratory of communi-
the third party for it’s services.
cations engineering, the focus is on the communica-
In this PhD research the personal information is tion between ME and the services (Figure 1).
stored in mobile device, where the user has the con-
Initial evaluation of various personal information
trol over it. The information is delivered to the ser-
properties that can affect to the location where the
vices on request by Mobile E-Personality service
given piece of information should be stored was first
(ME). Therefore there is less need for service to have
done and published at: [6]. First version of personal
huge databases of customers personal information.
information transfer from mobile device to internet
service based on vCard transfer between browser
2 ME and research tasks plug-in and mobile phone[7]. More generic transfer
was defined for transparent services[8]. Next steps
The goal of ME research is to define the ways for for research is to define privacy rules for mobile
communicating with the mobile device holding per- device and define general structure for Mobile E-
sonal information. The access of information in ME Personality service on mobile device.
is designed so that after configuration user actions
are minimised but the privacy is preserved.
References
Transparent
Service 1. Bauer, G.W., User data management (2003) , Available at:
http://www.mozilla.org/projects/ui/communicator/
Internet browser/wallet/ [Accessed March 27, 2003]
2. Thai, B. , Wan, R., Seneviratne, A., Rakotoarivelo, T., Inte-
grated Personal Mobility Architecture: A Complete Personal
Mobility Solution, Mobile Networks and Applications vol 8
2) Internet service
AP issue 1, ACM Press (2003)
Access 3) 3. Liberty Alliance, Liberty Architecture Overview (2002),
Point (AP) Available at: http://www.projectliberty.org/ [Accessed April
ME 11, 2003]
1)
4. Microsoft, Microsoft .net passport: re-
Service view guide (2002), Available at:
Mobile Accessing
Device (SAD) http://www.microsoft.com/netservices/passport/
Device
passport.asp [Accessed March 27, 2003]
Fig. 1. Mobile E-personality and services 5. Bettstetter, C., Kellerer, W., Eberspächer, J., Personal Pro-
file Mobilty for Ubiquitous Service Usage, Book Of Visions
2000, Wireless Strategic Initiative (2000)
6. Jäppinen, P., Porras, J., Analyzing the Attributes of Personal-
In order to create universally functional Mobile E- ization Information Affecting Storage Location, Proceedings
Personality there is several questions that need to on IADIS International Conference on E-Society, Lisbon,
Portugal (2003)
be addressed. What are the benefits and drawbacks 7. Yrjölä, M., Jäppinen, P., Porras, J., Personal information
of having single device holding lot of special in- transfer from mobile device to web page, Proceedings on
formation? What are the risks? How different types IADIS International Conference on WWW/Internet, Al-
of services can request the personal information or garve, Portugal (2003)
8. Jäppinen, P., Porras, J., Transfer of Personalisation Informa-
how they even know they can request it? How user tion from Mobile Device to Transparent Services, Proceed-
privacy can be ensured i.e. how much automation ings on IASTED International Conference on Computer Sci-
can be provided? How the information is notated ence and Technology, Cancun, Mexico (2003)
254
User Location and Mobility for Distributed Intelligent
Environment
Teddy Mantoro
Department of Computer Science, Australian National University, ACT-0200, Australia
+61-2-6125 3878
[email protected]
A location is the most important aspect providing a context Proximate location detected by WLAN is an interesting
for mobile users, e.g. finding the nearest resources, proximate sensor in an AO because it can be used to access
navigation, locating objects and people. Numerous location the network and also to sense user location on the scale of a
room or an office.
We used the Bluetooth access point as a sensor for several
rooms within the range. For example, when a user is close
to a certain access point, his location will be proximately
close to the access point and it could represent user
location from several rooms.
255
WiFi does not only have a higher speed and longer range current user’s location is determined. If not, then we check
than Bluetooth but the signal strength of Wi-Fi also can be using aggregate proximate location data.
used to detect user location. We have two scenarios to Precise Location
determine user location using WiFi. Firstly, by determining User UserLoc Location RegId Loc
Id
Uid date time
the signal strength from the WiFi capable device which pc3
Ibutton4
323
125
cwj
tm
9-9
9-9
9-9
9-9
Vr2
235
125
bk
rh
9-9
9-9
9-9
9-9
signal strength from the WiFi access points and storing the MacAddress
xx-xx-xx-xx-xx-xx
Uid
cwj cwj
Id
323
Cat
Pc
Proximate Location
AP1 AP2 … APn
signal strength data in the local IE repository with the xx-xx-xx-xx-xx-xx bk 323 Px room1 999 999 … 999
sensing is in the access point, so we do not require a user’s Uid LocId Uid
Predicted Location (History)
LocId Date Time Dev
tm
…
203
…
030111 11.30
… …
ibutton
artificial neural network to cluster the signal strength data. UserLoc in IE repository
Once we get the signal strength cluster allocation in the Figure 1. Aggregate users’ location in AO
local IE, we can directly get current user location. CONCLUSION AND FURTHER EXPERIMENTS
In our experiment using WiFi, we used 11 access points to In an AO, a user has a regular work schedule. A user has a
measure signal strength in two adjacent buildings, already routine activity that can be used to predict his location in a
installed for WLAN access. specific timestamp. A user’s activity can be represented by
user mobility, and user mobility can be seen from the
The result was good enough to predict current user
user’s changing location on a significant scale. So, in an
location. On the 2nd level of one building, we found that
AO once we can capture a user’s location then we can map
most places had a good signal from more than two access
a pattern of user mobility.Our experiment using WiFi and
points and we could predict accurately (96%) in rooms of 3
Bluetooth as proximate location in an AO has showed their
meters width. On the 3rd level, where not all locations were
good results in sensing user location. The result can be
covered by more than one access point, we had only a
improved by developing interoperability between sensors
reasonable degree of accuracy (75%) in predicting a user’s
to get aggregate sensor data.
current location.
Further experiments that can be considered arising from
PREDICTED USER LOCATION this work are aggregation smart sensor (more
Since IE is also a ubiquitous and ambient computing interoperability between sensor) using notification system
environment, we assume that sensors and actuators, and to notify the difference between current location and
computer access will be embedded and available in every previous notification and managing location information in
area. We identify the user’s location by recording a Merino service layer architecture, i.e. format
historical database of events, whenever the representation, conflict resolution, privacy of location
receptor/sensor/actuator captures the user’s identity in a information.
certain location.
REFERENCES
We develop historical data from precise users locations. 1. Dey, A. K., G. D. Abowd, et al. A Context-Based
The history data can be used to predict user location. Infrastructure for Smart Environments. 1st International
Probabilistic model also possible to develop to find the Workshop on MANSE, 1999.
most probable location of user based on a certain policy. 2. Harter, A. and A. Hopper A Distributed Location System for
We’ve implemented the policies for user’s location the Active Office, IEEE Network Vol 8, No 1, (1994).
checkpoints i.e. the same day of the week (at almost the 3. Kummerfeld, B. Quigley A. et al.. Merino: Towards an
same time and same day of the week) and all the days in a intelligent environment architecture for multi-granularity
one week range (almost the same time within a week) [4]. context description Workshop on User Modelling for
We use simple extended SQL query to implement the Ubiquitous Computing, 2003..
above policies to find user location via a Java Speech 4. Mantoro, T. and C. W. Johnson. Location History in a Low-
interface. cost Context Awareness Environment. Workshop on
WICAPUC, ACSW 2003. Adelaide, 2003.
DISCUSSION
In figure 1, we show how an AO processes the information 5. Schmidt, A., M. Beigl, et al. There is more to context that
to determine user’s location by aggregating the relationship location. Computer & Graphics 23(1999): 893-901.
between user data and location data. 6. Small, J., A. Smailagic, et al.. Determining User Location For
Context Aware Computing Through the Use of a Wireless
Aggregate of precise location has first priority and is
LAN Infrastructure. Pittsburgh, USA, Institute for Complex
followed by proximate and predicted locations Engineered Systems, (2000)
respectively. This means that when the AO receives
information from aggregate precise location, then the
256
Towards a Rich Boundary Object Model for the Design of
Mobile Knowledge Management Systems
Jia Shen
[email protected]
Department of Information Systems
New Jersey Institute of Technology
University Heights, NJ, 07102
257
model). More “ portable context” [2] can be carried with technicians and between technicians, secretaries and
rich boundary objects. The understanding of context is not customers.
only from a computational perspective focusing on physical The proposed research is innovative because it
environment [3], but also from social and organizational operationalizes the conceptually important yet ambiguous
perspectives incorporating the organizational and social idea of context in mobile knowledge management systems
context [4]. The central hypothesis is that rich boundary design, using mobile multimedia data capture technologies.
objects afford multiple levels of contextualization, and The result is not only a database with information objects
enhance tacit as well as explicit knowledge transfer. and events, but a common information space [2] where
METHODOLOGY meanings of information objects can be interpreted and
To test and refine the model, it is proposed that a series of shared.
studies be conducted focusing on the exchange of case ACKNOWLEDGEMENTS
stories, which are messages that tell the particulars of an I would like to thank my advisor Dr. Quentin Jones for
occurrence or course of events that is directly related to guidance and support, Dr. Roxanne Hiltz and Dr. Steve
work processes. Similar to stories, a case story is told for a Whittaker for constructive comments, and MacKinney Oil
particular purpose. Different from other forms of stories, Company for allowing us to conduct the field study.
such as jokes, news, or notifications, case stories focus on
REFERENCES
work process experiences. Such case stories have been used
1. Ackerman, M.S., Augmenting the Organizational
to share informal information; transfer tacit knowledge;
Memory: A Field Study of Answer Garden. Proceedings
share organizational culture and norms; help form
of CSCW, 1994.
communities of practice; and catalyze organizational
change. Orr’s pioneering research, which examined Xerox 2. Bannon, L. and Bodker, S., Constructing Common
photo-copier repair technicians in 1986, showed how the Information Spaces. Proceedings of the 5th European
exchange of “ war stories” could help a community of Conference on CSCW, 1997.
practice diagnose problems, circulate information, and 3. Dey, A.K., Abowd, G.D., and Salber, D., A Conceptual
celebrate identity [8]. Framework and a Toolkit for Supporting the Rapid
Exchange of case stories and the model are being examined Prototyping of Context-Aware Applications. HCI, 2001.
through field studies at a company that provides fuel, house 16(2-4).
boiler and air conditioner repair, and maintenance service 4. Dourish, P., Seeking a Foundation for Context-Aware
to about 3000 customers in central New Jersey. The Computing. Human-Computer Interaction, 2001. 16.
company has four secretaries, seven technicians, managers
5. Erickson, T., Ask Not for Whom the Cell Phone Tolls:
and oil drivers. A significant proportion of the technicians’
Some Problems with the Notion of Context-Aware
activities can be described as mobile knowledge work, and
Computing. Communications of the ACM, 2001, 2001.
a significant proportion of the office activities support the
mobile technicians. Three studies are being proposed, each 6. Halverson, C.A. and Ackerman, M.S., "Yeah, the Rush
addressing a specific question in the context of the field Ain’t Here yet - Take a Break": Creation and Use of an
study site: Artifact as Organizational Memory. Proceedings of the
36th Annual Hawaii International Conference on System
1. Study 1 –what are the uses and limitations of boundary
Science, 6-9 Jan 2003, 2003: pp. 113 -122.
objects in current organizational knowledge sharing?
7. Lutters, W.G. and Ackerman, M.S., Achieving Safety:
2. Study 2 – what rich boundary objects are considered
A Field Study of Boundary Objects in Aircraft Technical
useful in knowledge sharing?
Support. CSCW 2002, 2002.
3. Study 3 – can the rich boundary object model be
8. Orr, J., Narratives at Work: Story Telling as
utilized to effectively guide the design of a mobile
Cooperative Diagnostic Activity. CSCW, 1986.
knowledge management system?
9. Star, S.L., The Structure of Ill-Structured Solutions.
Part of study 3 involves the development of the CAse sTory
Gasser, L. & Huhns M. (eds), Distributed Artificial
capture and Sharing (CATS) system, which will enable
Intelligence-Volume II. Morgan Kaufmann, 1989: pp.
capture of rich in situ data and the creation and sharing of
37--54.
rich boundary objects. The prototype CATS system being
built will use digital pictures and voice recording on pocket 10. Weiser, M., The Computer for the 21st Century.
PCs with digital camera attachment, and will synchronize Scientific American, 1991. 265(3): pp. 94-104.
data via 802.11-enabled network. 11. Wiberg, M., Roamware: An Integrated Architecture
STATUS AND CONTRIBUTIONS for Seamless Interaction in between Mobile Meetings.
Currently study 1 is being conducted at the field site. A Proceedings of the 2001 International ACM
variety of boundary objects and their limitations are being SIGGROUP Conference on Supporting Group Work,
identified in current case story sharing processes among 2001: pp. 288-297.
258
Part V
Videos
DigiScope: An Invisible Worlds Window
Alois Ferscha Markus Keller
Research Institute for Pervasive Computing Research Institute for Pervasive Computing
Altenberger Straße 69 Altenberger Straße 69
4040 Linz, AUSTRIA 4040 Linz, AUSTRIA
[email protected] [email protected]
ABSTRACT DIGISCOPE
Smart appliances, i.e. wirelessly networked mobile With our work we aim at supporting “human to ubiquitous
information devices have started to populate the “real computer interaction” processes by bringing back visual
world” with “hidden” or “invisible” services, thus building clues to the user on how to interact. Once computers have
up an “invisible world” of services associated with real disappeared from desks, hiding in the background, their
world objects. With the embedding of invisible technology services will most likely still be there. New artefacts and
into everyday things, however, also the intuitive perception smart appliances [7] are evolving that “carry” invisible
of “invisible services” disappears. In this video we present services, such that manipulating the appliance controls a
how we can support the perception of smart appliance service. Even if the service is not integrated into the artefact
services via novel interactive visual experiences. We have but merely “linked” to a background system [5], the
developed and built a see-through based visual perception manipulation of the physical object can manipulate their
system for “invisible worlds” to support interactive theatre virtual representative on that background system
experience in mixed reality spaces, which we call respectively. To this end it is necessary to link the physical
DigiScope. In the video we shown how e.g. the “invisible world with the virtual world [2], i.e. the linking of physical
services” of our SmartCase, an Internet enabled suitcase, objects with their “virtual counterparts” [6]. Tangible
can be visualized via graphical hyperlink annotations interface research [4] has contributed to this issue of
Keywords
physical-virtual linkage by considering physical artefacts as
Computational Perception, Smart Appliances, MR. representations and controls for digital information. A
physical object thus represents information while at the
same time acts a control for directly manipulating that
SMART THINGS information or underlying associations.
“The most profound technologies are those that disappear.
They weave themselves into the fabric of everyday life until
they are indistinguishable from it“ was Mark Weiser’s This video presents the use of DigiScope, a 6DOF visual
central statement in his seminal paper [8] in 1991. His see-through tablet we have developed to support an
conjecture, that “we are trying to conceive a new way of intuitive “invisible service” – or more generally: “invisible
thinking about computers in the world, one that takes into world” – inspection: the invisible services of the smart
account the natural human environment and allows the appliance “SmartCase” – which has been developed as a
computers themselves to vanish into the background” has demonstrator for a contextware framework [2] [3] – are
fertilized not only the embedding of ubiquitous computing inspected. We exploit the metaphor of digital annotations
technology into a natural human environment which for real world objects, and display these annotations along
responds to people’s needs and actions in a contextual the line of sight to real world objects that are seen through a
manner, but has also caused “hidden” functionality and holographic display. The user gets the ability to interact
services volatilize out of sight of humans. “Smart Things” with the virtual object and its digital information by
functionality is characterized by the autonomy of their viewing the corresponding real (physical) artefact. With
programmed behaviour, the dynamicity and context- DigiScope, the user is handling a holographic display tablet
awareness of services and applications they offer, the ad- just like a 6 DOF window that opens a view into the virtual
hoc interoperability of services and the different modes of world. The tablet is an optical see-though display which
user interaction upon those services. Since many of these allows for a very natural viewing and scene inspection. To
objects are able to communicate and interact with global implement correct views into the scene, the angle and
networks and with each other, the vision of “context-aware” perspective of the DigiScope is being tracked, instead of
[1] smart appliances and smart spaces – where dynamically tracking the position and orientation of the user. Thus the
configured systems of mobile entities by exploiting the user is freed from any system hardware obstacles like
available infrastructure and processing power of the HMDs, stereoscopic glasses, trackers, sensors, markers,
environment – has become a reality. tags, pointers and the such. To support free navigation in
the scene, the DigiScope can be fully tilt and rotated in
space by hand. The projecting beamer is fixed in the right
261
projecting angle within a 6DOF mounting frame, and is object (e.g. shirt) has been put into the SmartCase, this
used to project the computer generated image encoding the service can be queried to check whether the shirt is in the
scene annotation onto a holographic display. The case or not. A straightforward way to access this
DigiScope software architecture is based on standard information would be via a classical http interface to the
building blocks for AR application frameworks: (i) a 6 embedded web-server. Observed via the DigiSpace
DOF tracking library for position and orientation tracking however, changes to the SmartCase inventory are displayed
of the DigiScope frame, (ii) Java and Java3D for 3D scene as a graphical annotation of the real world.
modelling, rendering and implementing user interaction,
and (iii) ARToolkit for visual object tracking and scene
recognition.
CONCLUSIONS
This video presents DigiScope, a 6DOF visual see-through
INSPECTING SMARTCASE inspection tablet, as an approach towards emerging problem
In previous work we have developed SmartCase [2], a of developing intuitive interfaces for the perception and
context aware smart appliance [3]. The hardware for the inspection of environments populated with an increasing
SmartCase demonstration prototype uses an embedded number of smart appliances in the pervasive and ubiquitous
single board computer integrated into an off-the-shelf computing landscape. DigiScope envisions a new type of
suitcase, which executes a standard TCP/IP stack and MR interface with two main features: (i) a new exploration
HTTP server, accepting requests wirelessly over an experience of the physical world seamlessly merged with its
integrated IEEE802.11b WLAN adaptor. A miniaturized digital annotations via a non-obtrusive MR interface, and
RFID reader is connected to the serial port of the server (ii) an integration of ubiquitous context-awareness and
machine, an RFID antenna is integrated in the frame of the physical hyperlinking at the user interface level. The
suitcase so as to enable the server to sense RFID tags DigiScope is demonstrated in operation.
contained in the SmartCase. A vast of 125KHz and 13,56
MHz magnetic coupled transponders are used to tag real
REFERENCES
world objects (like shirts, keys, PDAs or even printed
1. Dey, A.K.: Understanding and Using Context. Personal
paper) to be potentially carried (and sensed) by the suitcase.
and Ubiquitous Computing, Special Issue on Situated
In addition, the SmartCase is equipped with optical markers
Interaction and Ubiquitous Computing, Vol. 5 No. 1
so as to enable visual recognition and tracking with the
(2001)
ARToolkit framework.
2. Ferscha, A.: Contextware: Bridging Virtual and Physical
Worlds. Reliable Software Technologies, AE 2002.
LNCS 2361, Berlin (2002) 51-64
3. Gellersen, H.W., Beigl, M., Schmidt, A.: Sensor-based
Context-Awareness for Situated Computing, Proc. of
Workshop on Software Engineering for Wearable and
Pervasive Computing, Ireland, June, (2000) 77-83
4. Gorbet, M.G., Orth, M., Ishii, M.: Triangles: Tangible
Interface for Manipulation and Exploration of Digital
Information Topography. Proc. CHI1998 (1998) 49-56
5. Kindberg, T., Fox, A.: System Software for Ubiquitous
Computing. IEEE Pervasive Computing, Vol. 1 No. 1
(2002) 70-81
6. Römer, K., Schoch, T., Mattern, F., Dübendorfer, T.:
Smart Identification Frameworks for Ubiquitous
Computing Applications. Proc. PerCom, (2003)
Figure 1: The DigiScope 7. Schmidt, A. Van Laerhoven, K: How to Build Smart
Appliances?, IEEE Personal Communications 8(4),
August, (2001) 66-71
A unique ID associated with every real world object is the
8. Weiser, M.: The Computer of the Twenty-First Century.
ID encoded in its RFID tag. It is sensed by an RFID reader
Scientific American (1991) 94-104.
which triggers a script to update the state information on the
embedded Web server. Considering now the inventory of
the SmartCase as an “invisible” service, then, once an
262
Bumping Objects Together as a Semantically Rich Way of
Forming Connections between Ubiquitous Devices
Ken Hinckley
Microsoft Research, One Microsoft Way
Redmond, WA 98052 USA
Tel: +1 425 703 9065 email: [email protected]
ABSTRACT
This research explores the use of distributed sensors to form (a)
dedicated and semantically rich connections between
devices. For example, by physically bumping together the
displays of multiple tablet computers that are facing the
same way, dynamic display tiling allows users to create a
temporary larger display. If two users facing one another
instead bump the tops of their tablets together, this creates a
collaborative face-to-face workspace with a shared
whiteboard application. Each tablet is augmented with
sensors including a two-axis linear accelerometer, which
provides sufficient information to determine the (b)
relationship between the two devices when they collide.
Keywords
Distributed sensors, context aware, multi-user interaction
INTRODUCTION
Establishing meaningful connections between devices is a
problem of increasing practical concern for ubiquitous
computing [3][4]. Wireless networking and location sensing
can allow devices to communicate and may provide
information about proximity of other devices. However, Fig. 1 (a) Dynamic display tiling by bumping together two
tablets that are facing the same direction. (b) The tablets
with many devices nearby, how does a user specify which
form a temporary larger display, with the image expanding
devices to connect to? Furthermore, connections need across both screens. Small arrows provide feedback of the
semantics: What is the connection for? Is the user edges involved in the dynamic display connection.
collaborating with another user? Is the user combining the
input/output resources of multiple devices to provide (a)
increased capabilities? Users need techniques to intuitively
form semantically rich connections between devices.
This research proposes physically bumping two devices
together as a means to form privileged connections.
Bumping introduces an explicit step of intentionality, which (b)
users have control over, that goes beyond mere proximity of
the devices to form a specific type of connection. For
example, dynamic display tiling [2] enables users to
combine the displays of multiple devices by bumping a
tablet into another one lying flat on a desk (Fig. 1). Users
can also establish a collaborative face-to-face workspace
[1] by bumping the tops of two tablets together (Fig. 2).
Bumping generates equal and opposite hard contact forces (c)
that are simultaneously sensed as brief spikes by an Fig. 2 (a) Face-to-face collaboration by bumping the tops
accelerometer on each tablet. The software synchronizes the of two tablets together. The sketch is shared with the other
data over an 802.11 wireless connection; two spikes are user for annotation. Also shown: feedback for (b) making or
considered to be simultaneous if they occur within 50ms of (c) breaking a collaboration connection.
one another.
263
The two orthogonal sensing axes of each accelerometer breaks the face-to-face connection if one of the users walks
provide enough information to determine which edges of away (walking can be sensed using the accelerometer [1]).
the tablets have collided, allowing tiling of displays along Users can also exchange information by bumping tablets
any edge (left, right, top, or bottom) or sensing that the together just as people at a dinner table might clink glasses
tablets are facing one another when bumped together in the together for a toast. This is distinguished from display tiling
case of face-to-face collaboration. Example accelerometer by sensing that both tablets are being held (as opposed to
data from bumping two devices together is shown in Fig. 3, one being stationary on a desk). Finally, one user can
as well as simultaneous but incidental handling of the “pour” data from his tablet into that of another user by
devices. The software can ignore most such sources of angling the tablet down when the users bump their tablets
false-positive signals. Details of synchronization and together [2]. These variations shown in the video suggest
gesture recognition appear in [2]. additional ways to enrich the semantics of connections that
can be formed based upon bumping objects together.
RELATED WORK
Smart-Its Friends and ConnecTables form distinguished
connections between multiple devices. Smart-Its Friends
infers a connection when two devices are held together and
shaken. ConnecTables [4] are wheeled tables with mounted
LCD displays that can be rolled together so that the top
edges of two LCD’s meet, forming a connection similar to
the collaborative face-to-face workspace proposed here.
Both [3] and [4] can form only one type of connection,
Fig. 3 Left: Example accelerometer signature for whereas bumping two objects together can support multiple
bumping two tablets together, with forward-back and left- types of connections. Furthermore, bumping can specify
right accelerometer axes for the local and remote devices. additional parameters, such as which edges of two separate
Right: Incidental handling of both tablets at the same time displays to join, or determining which tablet is the
results in signals that are distinct from intentional bumping.
connecting tablet (as opposed to the base tablet) to provide
For dynamic display tiling, one tablet (the base tablet) rests a direction (hierarchy) to the connection.
flat on a desk surface, and a second tablet (the connecting CONCLUSION
tablet) is held by a user and bumped into the base tablet This work contributes a novel and intuitive mechanism to
along one of the four edges of its screen bezel. Note that form specific types of connections between mobile devices.
this creates a hierarchy in the connection. The connecting When bumping two tablets together, a connection is formed
tablet temporarily annexes the screen real estate of the base in the physical world by manipulating the actual objects of
tablet. The software currently distinguishes the connecting concern, so no naming or selection of devices from a list is
tablet from the base tablet using capacitive touch sensors to needed. Bumping can support several different types of
determine which of the two tablets is being held. connections, including dynamic display tiling, face-to-face
Appropriate feedback confirming that a connection has collaboration, or “pouring” data between tablets. Here we
been established is crucial to the techniques. Users are focus on multiple Tablet PC’s, but in the future,
shown the type of connection being formed using overlaid dynamically combining multiple heterogeneous devices
icons on the screen as shown in Fig. 2 (b, c) for face-to-face could lead to compelling new capabilities for mobile users.
collaboration; analogous “connection arrow” icons for
REFERENCES
dynamic display docking can be seen in the video.
1. Hinckley, K., Distributed and Local Sensing Techniques
Furthermore, because the techniques involve two users, one
for Face-to-Face Collaboration, to appear in ICMI-
user’s attention may not be focused on the tablets; hence it
PUI'03 5th Intl. Conf. on Multimodal Interfaces.
is important to provide audio feedback as well. Tiling two
displays together makes a short metallic clicking sound 2. Hinckley, K., Synchronous Gestures for Multiple Users
suggestive of a connection snapping together. A different and Computers, to appear in ACM UIST'03.
sound reminiscent of slapping two hands together occurs 3. Holmquist, L., Mattern, F., Schiele, B., Alahuhta, P.,
when users establish face-to-face collaboration. Beigl, M., Gellersen, H., Smart-Its Friends: A
For display tiling, picking up a tablet removes it from the Technique for Users to Easily Establish Connections
shared display. By contrast, for face-to-face collaboration, between Smart Artefacts, Ubicomp 2001: Springer-
users may want to move their tablets apart but continue Verlag, 116-122.
collaborating; hence moving the tablets apart does not 4. Tandler, P., Prante, T., Müller-Tomfelde, C., Streitz,
break the connection in this case. Instead, users can N.A., Steinmetz, R., Connectables: dynamic coupling of
explicitly break the connection by drawing a slash across displays for the flexible creation of shared workspaces,
the handshake icon (Fig. 2b), or the system automatically UIST 2001, 11-20.
264
Ubiquitous Computing in the Living Room:
Concept Sketches and an Implementation of a Persistent
User Interface
Stephen S. Intille1 Vivienne Lee1 Claudio Pinhanez2
1 2
Massachusetts Institute of Technology IBM T.J. Watson Research
1 Cambridge Center, 4FL 19 Skyline Drive - office 2N-D09
Cambridge, MA 02142 USA Hawthorne, NY 10532 USA
[email protected] [email protected]
267
Audio Devices of additional input and output devices and the augmentation
A public loudspeaker is available to emit ambient audio of single playing pieces with information technology.
samples or atmospheric music. STARS also integrates
RELATED WORK
headsets that allow players to receive computer-generated
Mandryk et al [2] have developed a computer augmented
private messages or utter verbal commands. STARS
tabletop game called False Prophets. Similar to STARS,
provides a speech generation and a speech recognition
False Prophets’ goal is to combine the strengths of
module based on the Microsoft Speech API.
traditional tabletop gaming and computing devices. As in
BENEFITS OF THE PLATFORM STARS, mobile computers are integrated for private
Playing hybrid tabletop games in a ubiquitous computing information. However, False Prophets does not attempt to
environment offers potential benefits over traditional board create a general purpose platform for multiple games, but is
games. currently limited to a single exploration game.
Persistency and Game Session Management Björk et al. [1] presented a hybrid game system called
STARS game sessions can be interrupted and continued at Pirates! that does not utilize a dedicated game board, but
any time with the current state of a game session being integrates the entire world around us with players moving in
automatically preserved for later continuation. The RF-ID the physical domain and experiencing location dependent
reader unit at the game table assigns game sessions to RF- games on mobile computers. Thereby, Pirates! follows a
ID tags, so that the tags operate like physical bookmarks. very different, but very interesting approach to integrate
This makes session management much more intuitive and virtual and physical components in game applications.
natural than GUI-based interfaces.
ACKNOWLEDGMENTS
Complex Game Rules We thank our colleagues Norbert Streitz, Peter Tandler, and
Complex traditional board games such as conflict Carsten Röcker for their helpful feedback on our work.
simulations or role-playing games usually either involve a Also, we especially thank our student staff member Sascha
lot of table reading and dice rolling that hamper game play Nau for his dedicated efforts to complete the video
or they suffer from oversimplified rules to make them more realization in time.
manageable. In STARS, the more complex game rules are Parts of this work are supported by a grant from the
put into the digital domain, so that an accurate simulation of Ladenburger Kolleg “Living in a smart environment” of the
the game world can be realized without slowing down the Daimler-Benz foundation.
game flow.
REFERENCES
Dynamic Information Visualization 1. Björk, S, Falk, J., Hansson, R., Ljungstrand, P.: Pirates!
The interactive table display allows providing the players Using the Physical World as a Game Board. In:
with dynamic game boards. This includes alterations to the Proceedings of Interact’01, Tokyo, Japan.
boards at runtime, e.g. a fog-of-war might be lifted when
new areas of the board are explored. Also, the presentation 2. Mandryk, R.L., Maranan, D.S., Inkpen, K.M.: False
of the boards can be automatically adjusted to real-world Prophets: Exploring Hybrid Board/Video Games. In:
properties such as the positions and viewing angles of the Extended Abstracts of CHI’02, 640-641.
players. 3. Magerkurth, C., Stenzel, R.: Computer-Supported
Generic Development Architecture
The STARS software architecture relieves the game developer of many mundane tasks such as device integration or game board management, so that she can concentrate on creating rules and providing content. So far, we have realized a role-playing game called KnightMage and a Monopoly clone called STARS Monopoly. Both games make use of the heterogeneous device setup: in KnightMage, for example, the wall display shows a public map of the explored game area, while the PDAs are used for inventory management and character attributes.

CONCLUSIONS
We have presented the STARS platform for computer-augmented tabletop games. Apart from writing new games for the platform, our next steps will include the integration

Pirates!, in contrast, does not utilize a dedicated game board, but integrates the entire world around us, with players moving in the physical domain and experiencing location-dependent games on mobile computers. Thereby, Pirates! follows a very different, but very interesting, approach to integrating virtual and physical components in game applications.

ACKNOWLEDGMENTS
We thank our colleagues Norbert Streitz, Peter Tandler, and Carsten Röcker for their helpful feedback on our work. Also, we especially thank our student staff member Sascha Nau for his dedicated efforts to complete the video realization in time.
Parts of this work are supported by a grant from the Ladenburger Kolleg "Living in a smart environment" of the Daimler-Benz foundation.

REFERENCES
1. Björk, S., Falk, J., Hansson, R., Ljungstrand, P.: Pirates! Using the Physical World as a Game Board. In: Proceedings of Interact'01, Tokyo, Japan.
2. Mandryk, R.L., Maranan, D.S., Inkpen, K.M.: False Prophets: Exploring Hybrid Board/Video Games. In: Extended Abstracts of CHI'02, 640-641.
3. Magerkurth, C., Stenzel, R.: Computer-Supported Cooperative Play - The Future of the Game Table (in German). To appear in Proceedings of M&C'03.
4. Streitz, N.A., Tandler, P., Müller-Tomfelde, C., Konomi, S.: Roomware: Towards the Next Generation of Human-Computer Interaction based on an Integrated Design of Real and Virtual Worlds. In: J. A. Carroll (Ed.): Human-Computer Interaction in the New Millennium, Addison Wesley, 553-578, 2001.
5. Tandler, P.: Software Infrastructure for a Ubiquitous-Computing Environment Supporting Collaboration with Multiple Single- and Multi-User Devices. Proceedings of UbiComp'01, Lecture Notes in Computer Science, Springer, Heidelberg, 96-115, 2001.
6. Weiser, M.: The Computer for the Twenty-First Century. Scientific American, 94-100, 1991.
268
A-Life: Saving Lives in Avalanches
Florian Michahelles, Bernt Schiele
Perceptual Computing and Computer Vision Group, ETH Zurich
Haldeneggsteig 4, IFW C29, Zurich, Switzerland
{michahelles, schiele}@inf.ethz.ch, http://www.vision.ethz.ch/projects/avalanche
269
snowboarders and could achieve reasonable measurements of heart rate and oxygen blood saturation (Fig. 1). Further, subjects reported that the sensor would not have disturbed them during their activities; once wrapped around a toe or finger, they soon forgot about the sensor. However, we are aware of the fact that severe cold may cause retreat of blood from the extremities, referred to as centralization, such that peripheral measurements at the toe may become unreliable under harsh conditions. A more promising way of detecting heart rate is contact-free measurement through radar based on the Doppler phenomenon. Currently, this technology has been deployed for detecting people in earthquakes and at border controls. However, customization for on-body measurements could offer a solution which is robust against both centralization and displacement, as this technique works contactless.

Another important source of information is the existence of air-pockets in the snow, closed air bubbles in front of mouth and nose, as they protect victims against asphyxiation for up to 90 minutes [6]. As an initial study, we investigated the use of oxygen sensors for air-pocket detection. Unfortunately, oxygen sensors did not turn out to be an appropriate method for air-pocket detection: even compact snow contains air, such that the exhaled air of a victim does not deviate significantly from normal snow.

Knowledge about a victim's orientation can be very helpful for rescuers during extrication. As accelerometers measure all forms of acceleration, in the stationary case these sensors can report orientation derived from the direction of gravity. We explored how a two-axis accelerometer can be applied to detect the orientation of one's spine.
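As a rough sketch of how a stationary two-axis reading could be mapped to a posture estimate (axis conventions and the 45-degree threshold are assumptions, not the authors' method):

```python
# Sketch: estimate body orientation from a stationary 2-axis accelerometer.
# With the victim at rest, the sensor measures only gravity (about 1 g total).
import math

def spine_orientation(ax, ay):
    """ax: acceleration along the spine, ay: across the body (both in g, at rest)."""
    # The ratio of the two axes tells us how far the spine is tilted from vertical.
    tilt = math.degrees(math.atan2(abs(ay), abs(ax)))  # 0 = spine vertical, 90 = horizontal
    posture = "upright" if tilt < 45 else "lying"
    return posture, tilt

print(spine_orientation(0.95, 0.10))   # close to vertical -> ("upright", ~6 degrees)
print(spine_orientation(0.05, -0.98))  # close to horizontal -> ("lying", ~87 degrees)
```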
VISUALIZATION
Avalanche rescue is a situation under immense pressure. Nevertheless, today's devices still require a lot of training: guidance with periodical beeps, or support with small arrows and a rough distance reading, is rather difficult for untrained users. A visual user interface displaying more appropriate information could make usage much easier.

Fig. 2: Screen design of the prototype

With the introduction of unique identifiers (rather a standardization problem among manufacturers than a technical challenge), multiple victims can be discriminated. As putting the vital functions, air-pocket existence and orientation of all victims into one interface would be too much, we propose a separation into location and urgency (Fig. 2). First, a visual map presentation of the victims' spatial distribution enables the user to select victims such that ways can be kept short. Secondly, the separation by urgency provides rescuers with a global view of the emergency, which allows better focus on the most urgent victims. For that, we introduce a decision tree defined as follows: heart rate is the primary criterion, air-pocket is second, oxygen blood saturation is third and orientation is fourth. In case of unavailable sensor information, the fundamental concept is to always assume the worst case. Now multiple victims can be aligned on a one-dimensional scale where victims' physical states can be easily compared – even under stress conditions. With this user interface, rescuers can select victims either based on location or urgency in order to obtain more details on their vital signs, which are displayed in the right column.
CONCLUSIONS
We motivated the use of sensors in avalanche rescue by the importance of time in the rescue process. We discussed and described how sensor technology can be used to provide rescuers with a valuable tool for better planning of rescue procedures. For demonstration and evaluation purposes we have developed a first prototype; technical details and experiences can be found in [7].

ACKNOWLEDGEMENTS
The Smart-Its project is funded in part by the Commission of the European Union under contract IST-2000-25428, and by the Swiss Federal Office for Education and Science (BBW 00.0281).

REFERENCES
1. Brugger, H. and Falk, M. Le quattro fasi del seppellimento da valanga. Neve e Valanghe 16:24-31, 1992 (Italian).
2. Hereford, J. and Edgerly, B. 457 kHz Electromagnetism and the Future of Avalanche Transceivers. In Proceedings of the International Snow Science Workshop (ISSW 2000), Big Sky, MT, USA, Oct. 2000.
3. Tschirky, F., Brabec, B. and Kern, M. Avalanche rescue systems in Switzerland: experience and limitations. In Proceedings of the International Snow Science Workshop (ISSW 2000), Big Sky, MT, USA, Oct. 2000, pp. 369-376.
4. Genswein, M. and Harvey, S. Statistical analyses on multiple burial situations and search strategies for multiple burials. In Proceedings of the International Snow Science Workshop (ISSW 2002), British Columbia, Canada, Oct. 2002.
5. Brugger, H., Durrer, B., Adler-Kastner, L., Falk, M. and Tschirky, F. Field management of avalanche victims. Resuscitation 51:7.
6. Falk, M., Brugger, H. and Adler-Kastner, L. Avalanche survival chances. Nature 368:21, 1994.
7. Michahelles, F., Matter, P., Schmidt, A. and Schiele, B. Applying Wearable Sensors to Avalanche Rescue: First Experiences with a Novel Avalanche Beacon. Computers & Graphics 27:6, 2003.
270
Breakout for Two: An Example of an Exertion Interface for
Sports over a Distance
ABSTRACT
Breakout for Two is the first prototype of a physical,
exertion sport that you can play over a distance. We
designed, developed, and evaluated Breakout for Two, which
allows people who are miles apart to play a physically
exhausting ball game together. Players interact through a
life-size video-conference screen using a regular soccer ball
as an input device. In a test of 56 volunteers, the Exertion
Interface users said that they got to know the other player
better, became better friends, felt the other player was more
talkative and were happier with the transmitted audio and
video quality, in comparison to those who played an
analogous game using a non-exertion keyboard interface.
Keywords
Exertion interface, physical interface, sports interface,
social bonding, computer mediated communication,
interpersonal trust, funology, sport, video-conferencing

Figure 1: Breakout for Two

INTRODUCTION
"You can discover more about a person in an hour of play than in a year of conversation" (Plato, 427-347 BC). This quotation conveys the motivation for our work perfectly.

BREAKOUT FOR TWO
How cool would it be if you could play football with your friend, even though he just moved miles away? What about playing tennis with a famous tennis player on another continent who is preparing for a grand slam?
With Breakout for Two, you can. Breakout for Two is the first prototype of a physical, exertion sport that you can play over a distance. It's a cross between soccer, tennis, and the popular computer game "Breakout". The players share a court, but stay on their side of the field, like in tennis. They see and hear each other through a life-size videoconference, which feels like they're separated by a glass wall.
271
Figure 3: The system also works with tennis balls
Figure 5: Breakout for Two also supports two-on-two

They have to strike semi-transparent blocks, which are overlaid on the video stream. These virtual blocks are connected over the network, meaning they are shared between the locations. If, for example, one player hits the block on the upper left, the block on the upper right is hit for the other player. The goal is to hit all the blocks before the other player hits them. You win if you hit more blocks than the other player.
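One way to picture the shared, mirrored block state (an illustrative sketch; the actual networking and data structures are not described in the abstract):

```python
# Sketch: blocks shared between two sites; hitting a block at one site removes
# its horizontally mirrored counterpart for the remote player. Illustrative only.

COLS, ROWS = 6, 4
blocks = {(c, r): True for c in range(COLS) for r in range(ROWS)}  # True = intact
score = {"local": 0, "remote": 0}

def mirror(col):
    """Upper-left for one player corresponds to upper-right for the other."""
    return COLS - 1 - col

def hit(col, row, player):
    """Apply a ball hit reported by `player`; remote hits are mirrored first."""
    key = (col, row) if player == "local" else (mirror(col), row)
    if blocks.get(key):                 # only score blocks that are still intact
        blocks[key] = False
        score[player] += 1

hit(0, 0, "local")    # local player strikes the upper-left block
hit(0, 0, "remote")   # remote player strikes their upper-left = our upper-right
print(score, sum(blocks.values()), "blocks remaining")
```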
CONCLUSION
Breakout for Two is only one example of an Exertion Interface, which supports Sports over a Distance. Augmenting a gaming environment with exertion will greatly enhance the potential for social bonding, just as playing an exhausting game of squash or tennis with a new acquaintance or co-worker helps to "break the ice" and build friendships. You can now have fun playing sports with your local and remote friends!
272
Concept and Partial Prototype Video:
Ubiquitous Video Communication with the Perception of
Eye Contact
Emmanuel Munguia Tapia Stephen S. Intille John Rebula Steve Stoddard
Massachusetts Institute of Technology
1 Cambridge Center, 4FL
Cambridge, MA 02142 USA
emunguia | intille @mit.edu
275
Attribute Controlled | Explicit Control | Implicit Control | Audio Feedback | Visual Feedback
Camera state: Stop to Play | Click play button | None | Camera clicking; camera rotating | LEDs on; camera rotates to face you; mirrored video
Camera state: Pause to Play | Click play button | Telecommuter sits in chair; family/friend leaves room | Same as above; camera twitches | Same as above; camera twitches
Camera state: Play to Stop | Click stop button; block camera with hand; touch off button | None | Camera rotating | LEDs off; camera rotates to face the wall; mirrored video
Camera state: Play to Pause | Click pause button | Telecommuter stands up out of chair; family/friend enters room | Same as above | Same as above
Camera state: Pause to Stop | Click stop button; block camera with hand; touch off button | Telecommuter leaves the room for an extended period of time | None | Mirrored video
Capturing angle | Adjust physical or graphical slider | Change in camera state | Camera rotating | Slider position; camera position; mirrored video
Video fidelity | Adjust physical or graphical control | None | None | Control position; mirrored video
Audio link | Move hand over microphone base | None | Own voice | None
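Read as a control model, the table above is essentially a small state machine in which explicit or implicit events trigger camera-state transitions and feedback. A compact sketch (state and event names follow the table; everything else is an assumption):

```python
# Sketch: camera state machine derived from the table. Events may be explicit
# (button presses) or implicit (sitting down, leaving the room). The feedback
# strings summarize the table's audio/visual feedback columns.

TRANSITIONS = {
    ("stop",  "play_button"):          ("play",  "camera clicks and rotates; LEDs on; mirrored video"),
    ("pause", "play_button"):          ("play",  "camera twitches; LEDs on; mirrored video"),
    ("pause", "telecommuter_sits"):    ("play",  "camera twitches; LEDs on; mirrored video"),
    ("play",  "stop_button"):          ("stop",  "camera rotates to face the wall; LEDs off"),
    ("play",  "pause_button"):         ("pause", "camera rotates to face the wall; LEDs off"),
    ("play",  "telecommuter_stands"):  ("pause", "camera rotates to face the wall; LEDs off"),
    ("pause", "telecommuter_leaves"):  ("stop",  "mirrored video only"),
}

def step(state, event):
    """Apply one event; unknown (state, event) pairs leave the camera unchanged."""
    return TRANSITIONS.get((state, event), (state, "no change"))

state = "stop"
for event in ["play_button", "telecommuter_stands", "telecommuter_leaves"]:
    state, feedback = step(state, event)
    print(event, "->", state, "|", feedback)
```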
276
Hello.Wall – Beyond Ambient Displays
Thorsten Prante, Carsten Röcker, Norbert Streitz, Richard Stenzel, Carsten Magerkurth
Fraunhofer IPSI, AMBIENTE – Workspaces of the Future
Dolivostr. 15, D-64293 Darmstadt, Germany
{prante, roecker, streitz, stenzel, magerkurth}@ipsi.fraunhofer.de

Daniel van Alphen
Hufelandstr. 32, D-10407 Berlin, Germany
[email protected]

Daniela Plewe
Franz-Künstler-Str 2, D-10969 Berlin, Germany
[email protected]

5th International Conference on Ubiquitous Computing (Ubicomp'03), October 12–15, 2003, Seattle, WA, USA. Copyright by the authors of this publication.
ABSTRACT
We present a ubiquitous computing environment that consists of the Hello.Wall in combination with ViewPorts. Hello.Wall is a new wall-sized ambient display [4,2] that emits information via light patterns and is considered informative art. As an integral part of the physical environment, Hello.Wall constitutes a seeding element of a social architectural space conveying awareness information and atmospheres in organizations or at specific places. The display is context-dependent by reflecting identity and distance of people passing by. Hello.Wall can "borrow" other artefacts in order to communicate more detailed information. These mobile devices are called ViewPorts. People can also further interact with the Hello.Wall using ViewPorts via integrated WaveLAN and RFID technology.

Keywords
Ambient display, informative art, social architectural space, context-dependent, sensor-based interaction, interactive wall, interaction design, mobile devices, smart artefacts, ubiquitous computing environment, calm technology

HELLO.WALL AND VIEWPORT
Hello.Wall is a piece of unobtrusive, calm technology [3] exploiting humans' ability to perceive information via codes that do not require the same level of explicit coding as with words. It can stay in the background, only perceived at the periphery of attention, while one is being concerned with another activity, e.g., a face-to-face conversation.

Borrowing another Artefact
We propose a mechanism where the Hello.Wall can "borrow" other artefacts in order to communicate more detailed information. These mobile devices are called ViewPorts and can be personalized using short-range transponders. Due to the nature of the ViewPort's display, the information shown can be more explicit and it can also be more personal. Depending on their access rights and the current situation (e.g., distance to the wall; see below), people can use ViewPorts to decode visual codes (here, light patterns), to download ("freeze") or just browse information, to paint signs on the wall, or to access a message announced by a light pattern. See figure 1.

Figure 1. Interaction at Hello.Wall using a ViewPort as "borrowed display"

INTERACTION DESIGN
Interactions among the different components are supported by two independent RFID systems and a wireless LAN network to enable a coherent and engaging interaction experience. The RFID systems cover two ranges and thereby define three "zones of interaction": ambient zone, notification zone, and cell interaction zone (see figure 2). They can be adapted, e.g., according to the surrounding spatial conditions.

Figure 2. Three zones of interaction (cell interaction zone, notification zone, ambient zone)

The zones were introduced to define "distance-dependent semantics", meaning that the distance of an individual from the wall defines the interactions offered and the kind of information shown on the Hello.Wall and the ViewPort. It should be noted that multiple people can be sensed at once in the notification and cell interaction zones.
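The distance-dependent behaviour can be read as a mapping from which RFID reader currently senses a person's transponder to a zone and an allowed set of interactions; the following sketch uses invented names and is not the authors' software:

```python
# Sketch: derive the "zone of interaction" from the two RFID ranges and select
# the wall's behaviour. Names and the selection logic are assumptions.

def zone(seen_by_long_range, seen_by_short_range):
    if seen_by_short_range:
        return "cell interaction zone"   # very close: per-cell interaction via ViewPort
    if seen_by_long_range:
        return "notification zone"       # nearby: personal/group light patterns
    return "ambient zone"                # out of range: general ambient patterns

def wall_behaviour(z):
    return {
        "ambient zone":          "show general, person-independent patterns",
        "notification zone":     "show notification patterns; may push data to the ViewPort",
        "cell interaction zone": "allow reading/writing of individual cells",
    }[z]

for flags in [(False, False), (True, False), (True, True)]:
    z = zone(*flags)
    print(z, "->", wall_behaviour(z))
```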
277
Interactions
When people are outside the range of the wall's sensors (in the ambient zone), they experience the ambient mode, i.e. the display shows general information that is defined to be shown independent of the presence of a particular person.

Figure 3. Communication and sensing infrastructure of Hello.Wall and ViewPort (information cells with short-range transponders, long-range readers and antennas, long-range transponders, short-range reader, WLAN access point and adapter, controlling PC with driver interface)

People within the notification zone are detected via two long-range readers installed in the lower part of the Hello.Wall (see figure 3), and people can identify themselves to a ViewPort via the integrated short-range reader. Once a person is detected in the notification zone, depending on the kind of application, data can be transmitted to the ViewPort and/or distinctive light patterns can be displayed for notification. These can be personal patterns known only to a particular person, group patterns, or generally known patterns. Within the cell interaction zone, people that are very close to the Hello.Wall can interact with each single cell (= independent interactive "pixel") or several cells at once using a ViewPort to read the cells' IDs. Simultaneous interaction using several ViewPorts in parallel at a Hello.Wall is supported as well. These features allow playful and narrative interactions, and there is also a charming element of surprise that may be discovered via single-cell interaction.
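A toy illustration of how a detected identity might select among personal, group, and generally known patterns (identifiers, patterns and the lookup order are assumptions):

```python
# Sketch: choose a light pattern for a person detected in the notification zone.
# Personal patterns take precedence over group patterns, which take precedence
# over generally known patterns. All identifiers are invented for illustration.

PERSONAL = {"tag-017": "slow blue ripple"}
GROUPS = {"tag-017": "ambiente", "tag-101": "ambiente"}
GROUP_PATTERNS = {"ambiente": "double pulse"}
PUBLIC_PATTERN = "soft breathing glow"

def notification_pattern(tag_id):
    if tag_id in PERSONAL:
        return PERSONAL[tag_id]              # known only to this person
    group = GROUPS.get(tag_id)
    if group in GROUP_PATTERNS:
        return GROUP_PATTERNS[group]         # known to the group
    return PUBLIC_PATTERN                    # generally known pattern

print(notification_pattern("tag-017"))   # personal pattern
print(notification_pattern("tag-101"))   # group pattern
print(notification_pattern("tag-999"))   # public pattern
```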
TECHNOLOGY
Each of the 124 cells of the Hello.Wall contains an LED cluster and a short-range transponder (see figure 4). The brightness of the LED clusters is controlled by a standard PC via a special driver interface with control units using pulse width modulation. This interface, also developed by us, consists of 17 circuit boards.
The ViewPort is developed on the basis of a PocketPC with a 32-bit RISC processor, a touch-sensitive color display and 64MB RAM. Its functionality is extended through a short-range (up to 100mm) reader unit and a WaveLAN adapter. Additionally, the ViewPort is equipped with a long-range transponder. Thus, the ViewPort can be detected by stationary artefacts such as the Hello.Wall, while at the same time it can identify nearby artefacts through its own reading unit.

APPLICATIONS
Atmospheric aspects that can, e.g., be extracted from conversations [1] are mapped onto visual codes realized as light patterns which influence the atmosphere of a place and the social body around it. While the Hello.Wall serves a dedicated informative role for the initiated members of an organization or a place, visitors might consider it only an atmospheric decorative element and enjoy its aesthetic quality.
Communicating atmospheric aspects of an organization includes general and specific feedback mechanisms that allow addressing different target groups via different representation codes. Individuals as well as groups create public and private codes depending on the purpose of their intervention. The content to be communicated can cover a wide range and will be subject to modification, adjustment, and elaboration based on the experience people have.
Sample applications are presented in the video. They include radiating the general atmosphere in an organization or at a place, distributing more specific and directed information, various forms of playful close-up interactions, and support for team building and coherence through "secret" visual codes mediating, e.g., activity levels among the team's members. To learn more about the acceptance of applications, we are currently running user experiments.

ACKNOWLEDGMENTS
This work is supported by the European Commission (contract IST-2000-25134) as part of the proactive initiative "The Disappearing Computer" of "Future and Emerging Technology" (FET) (project website: www.ambient-agoras.org). Special thanks are due to our student Stefan Zink for his contributions to implementing the Hello.Wall hardware.

REFERENCES
1. Basu, S. et al. Towards measuring human interactions in conversational settings. Proc. of IEEE CUES 2001.
2. Streitz, N. et al. Situated Interaction with Ambient Information: Facilitating Awareness and Communication in Ubiquitous Work Environments. Proc. of HCII 2003, to appear.
3. Weiser, M. and Brown, J. S. Designing calm technology. PowerGrid Journal, Vol. 1, No. 1, 1996.
4. Wisneski, C. et al. Ambient displays: Turning architectural space into an interface between people and digital information. Proc. of CoBuild '98, 22-32.
278
Browsing Captured Whiteboard Sessions
Using a Handheld Display and a Jog Dial
Johan Sanneblad and Lars Erik Holmquist
Future Applications Lab
Viktoria Institute, Box 620, SE 405 30 Göteborg, SWEDEN
{johans, leh}@viktoria.se
www.viktoria.se/fal
ABSTRACT
In previous work we introduced Total Recall, a system for
in-place viewing of captured whiteboard annotations using
a handheld display. To improve on our system we now
introduce a method for navigating through time-based
whiteboard annotations using a jog dial. By turning the
dial, the user can navigate back and forth in time to reach a
desired point in the captured session, which is then
displayed on the handheld device at the correct location.
The tracking system supports drawing as well as erasing,
which are both immediately reflected on the handheld
display. We argue that our system introduces new
application possibilities, e.g. in education.
Keywords
Whiteboard capture systems, ubiquitous computing
INTRODUCTION
During the past few years, several systems have been introduced to augment whiteboards and make them "smart". The reasons are many: being able to keep digital copies of what has been written on the whiteboard, incorporating computer idioms such as "cut" and "paste" into a non-computer environment, and simplifying drawing in general. Several strategies have been tested to enhance whiteboards. One type of system replaces the entire whiteboard with a digital touch-sensitive display. Examples of such systems include the LiveBoard [4], which was part of Xerox PARC's original ubiquitous computing experiment, and current commercial products such as the SmartBoard (www.smarttech.com). Replacing the drawing area with a digital replica provides many possibilities for enhancing the whiteboard, but it is an expensive option that limits its use to specific environments.

An alternative approach is to use a system with pens equipped with built-in positioning systems, such as the commercially available Mimio system (www.mimio.com). To the end-user, using the Mimio system is perceived as using an ordinary whiteboard – the difference is that the coordinates of each pen stroke are captured on a PC, making it possible to create a snapshot of the whiteboard at a specific moment in time. While systems such as the Mimio are portable and can be used in any environment, they do require a separate PC to view the annotations. However, considering the size of an ordinary whiteboard, viewing the notes on a PC usually requires a significant amount of zooming and panning of the captured image.

Total Recall [3] was introduced to provide in-place viewing of captured whiteboard annotations using a handheld display. Using a handheld computer equipped with an ultrasonic positioning system, Total Recall makes it possible to view annotations where they were created – even if they are partially erased! Total Recall can be seen as a physical instantiation of a Magic Lens, an operator that is positioned over an onscreen area to change the view of objects in that region [1]; other similar approaches include Peephole Displays [5]. We have now extended the Total Recall system by introducing a jog dial (as seen in Figure 1) that can be used to view how the whiteboard state looked at a specific moment in time. By turning the dial, the user can navigate to different points in time during the digitization. The effect is similar to Time-machine Computing [2], since the user can always go back in time and, for instance, easily retrieve previously erased content.

Figure 1. (a) An annotation is created, (b) a jog dial is used to navigate back to a previous position in time, and (c) the session is recalled in its original location.
279
Figure 2. The improved Total Recall architecture.

ARCHITECTURE
The Total Recall architecture comprises two parts: a server software installed on a stationary PC to capture whiteboard annotations, and a client software installed on a handheld computer to view the annotations. The system has two "modes": in one mode, coordinates are received as paint or erase strokes when pens or the eraser are used to draw on the whiteboard. In the other mode, coordinates are received from a handheld computer equipped with an ultrasonic and infrared transmitter in the form of continuous XY-coordinates. The user switches between these two modes by pressing the top of the jog dial shown in Figure 1b. Using our server software, the XY-coordinates received from the pens are sent as drawing coordinates to the handheld computer when the user changes modes. The coordinates received from the handheld computer are sent back to the device in a compressed form over a wireless connection, where the client software periodically redraws the image to reflect the whiteboard using the current XY position.

The jog dial shown in Figure 1b was added to support time-based browsing of the drawing session. In normal use, the stationary PC will continuously send the XY-coordinates it receives back to the handheld device. The handheld display will then draw a brush or erase stroke depending on the current stroke type. When the jog dial is turned counter-clockwise, the display of the handheld device is cleared and the entire canvas is redrawn from the beginning up to the coordinate at the current position in time. When the jog dial is turned clockwise, the server software will simply send out the brush or erase strokes that were drawn since the last update.

IMPLEMENTATION
We used the Mimio sensor and pens to get positioning information for both the PDA and the stationary PC. To get the handheld computer to send XY-coordinates to the PC, we extracted the interior of a Mimio pen shell and attached it to the back of a handheld computer. Using a switch attached to the back of the handheld computer, it is possible to manually enable/disable sending of coordinates.

APPLICATION SCENARIO: DRAWING CLASS
The new system could be used as support in a learning situation, such as a drawing class. Looking at a finished image does not tell the student very much about how it was drawn, nor does it show how much time was spent on each specific detail. Through the ability of time-based browsing, Total Recall could make it possible for students to first watch a tutor complete a drawing, and then go back in time using the jog dial to study in detail how a specific section was achieved. Unlike on a stationary PC, the student could study the drawing process in the position where it actually happened, using the finished drawing as a frame of reference.

DISCUSSION AND FUTURE WORK
In the current implementation of time-based browsing we experienced issues with captured sessions containing large amounts of data. When the jog dial is used to move backwards in time, the stationary PC needs to resend the entire coordinate list up to an exact moment for the handheld computer to redraw the canvas. We are currently working on optimizing the system so that the stationary PC is responsible for creating "bitmap snapshots" when a specific number of coordinates have been received since the last snapshot. Using this approach, it would only be necessary to transfer the bitmap snapshot together with the coordinates received since the last snapshot to the handheld device.
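The snapshot idea amounts to checkpointing the stroke history every N coordinates so that only a short tail has to be replayed; a rough illustration (names, the snapshot representation and the interval are assumptions, not the Total Recall code):

```python
# Sketch: periodic "bitmap snapshots" of the stroke history so that jumping to
# an earlier time only requires the nearest snapshot plus the strokes after it.
# Illustrative only; the real system would keep actual bitmaps.

SNAPSHOT_EVERY = 500            # take a checkpoint every 500 coordinates
strokes = []                    # full ordered stroke/coordinate history
snapshots = {0: "empty canvas"} # stroke index -> rendered canvas (stand-in string)

def add_stroke(stroke):
    strokes.append(stroke)
    if len(strokes) % SNAPSHOT_EVERY == 0:
        snapshots[len(strokes)] = "bitmap after %d strokes" % len(strokes)

def state_at(time_index):
    """Return what must be sent to the handheld to show the canvas at time_index."""
    base = max(i for i in snapshots if i <= time_index)
    return snapshots[base], strokes[base:time_index]   # snapshot + short replay tail

for n in range(1200):
    add_stroke(("draw", n))
bitmap, tail = state_at(1100)
print(bitmap, "+", len(tail), "strokes to replay")   # bitmap after 1000 strokes + 100
```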
ACKNOWLEDGMENTS
This project was supported by SSF and Vinnova.

REFERENCES
1. Bier, E.A., et al. (1993). Toolglass and Magic Lenses: The See-Through Interface. Proceedings of SIGGRAPH 1993.
2. Rekimoto, J. (1999). Time-machine Computing: a Time-centric Approach for the Information Environment. Proceedings of UIST 1999.
3. Sanneblad, J. and Holmquist, L. E. (2003). Total Recall: In-place Viewing of Captured Whiteboard Annotations. Extended Abstracts of CHI 2003.
4. Weiser, M. (1991). The Computer for the 21st Century. Scientific American, 1991, 265 (3), 94-104.
5. Yee, K.P. (2003). Peephole Displays: Handheld Computers as Virtual Windows. Proceedings of CHI 2003.
280
eyeCOOK: A Gaze and Speech Enabled
Attentive Cookbook
Jeffrey S. Shell, Jeremy S. Bradbury, Craig B. Knowles, Connor Dickie, Roel Vertegaal
Human Media Lab, Queen’s University
Kingston, ON, Canada, K7L 3N6
{ shell, bradbury, knowles, connor, roel }@cs.queensu.ca
281
Figure 2. eyeCOOK in Page Display Mode

NATURAL INPUTS
eyeCOOK is designed to use natural input modalities, or those that humans use in human-to-human, non-mediated communication [5]. Observing and interpreting this implicit behavior reduces the need for users to provide explicit input. Using these cues, interfaces can be designed such that the difficulty lies in the intended task, not the technological tool.

Gaze and Speech
When the user is in range of the eye tracker and looking at the display, eyeCOOK substitutes the target of the user's gaze for the word 'this' in a speech command. For example, eyeCOOK responds to the spoken command 'Define this' by defining the word the user is currently looking at. However, because eye trackers are spatially fixed and have a limited range, the user will not always be in a position where eye tracker input is available. Thus, our speech grammar is designed such that system functionality is maintained when users are not in front of the eye tracker. Instead of saying "define this" while looking at the word sauté, the user simply states "define sauté." The active vocabulary is dynamically generated using context-sensitive, localized speech grammars, allowing more synonyms to be included for a given word. Real-world performance may be improved by adding partial terms and colloquialisms that may only be relevant in specific circumstances.
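A minimal sketch of the 'this' substitution and its fallback when no gaze target is available (function and variable names are assumptions, not eyeCOOK's API):

```python
# Sketch: resolve the deictic word "this" in a spoken command to the word the
# user is currently looking at, falling back to plain speech when no gaze is
# available. Purely illustrative; not the eyeCOOK implementation.
from typing import Optional

def resolve_command(spoken: str, gazed_word: Optional[str]) -> Optional[str]:
    """Replace 'this' with the gazed-at word; reject 'this' if gaze is unknown."""
    words = spoken.lower().split()
    if "this" in words:
        if gazed_word is None:
            return None               # user must speak the word itself instead
        words = [gazed_word if w == "this" else w for w in words]
    return " ".join(words)

print(resolve_command("define this", "saute"))   # -> "define saute"
print(resolve_command("define saute", None))     # -> "define saute"
print(resolve_command("define this", None))      # -> None (ask the user to repeat)
```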
TOWARDS AN ATTENTIVE KITCHEN
Interfaces that recognize and respond to user attention, and understand how it relates to the overall activity, can help the user efficiently engage in tasks. To achieve this, we must augment the kitchen with attentive sensors that monitor human behavior [5,6], augment appliances with functional sensors [3,6], improve coordination among appliances [5], and allow appliances to affect the environment [3,5].

Attentive and Environmental Sensors
Increasing the knowledge of users' activities may allow interfaces to engage in less interruptive, and more respectful, interactions with users. Visual attention, a prime indicator of human interest, can be deduced by adding eye contact sensors [5,6] to items in the environment. This information can be used to determine the appropriate volume and timing of notifications to the user. Additionally, temperature sensors can be used to keep track of the status of the oven and the elements of the stove, and could be synchronized with electronic timers to increase the system's ability to guide the user's cooking experience.

Appliance Coordination
Integrating knowledge of the environment can result in improved functionality, taking up less of the user's time and effort. For example, user recipe preferences, timing constraints, as determined by the user's electronic schedule, and currently available ingredients, communicated by food storage areas, can be combined to suggest recipes. Once selected, the ingredients from the recipe can be added to an electronic shopping list stored on the user's PDA.
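The kind of recipe filtering described here could be approximated as an intersection of constraints; the recipes, time budget and preference scores below are invented for illustration:

```python
# Sketch: suggest recipes from (assumed) preference scores, a time budget taken
# from the user's schedule, and ingredients reported by food storage areas.

RECIPES = [
    {"name": "omelette",    "minutes": 15, "ingredients": {"eggs", "butter"},         "preference": 0.6},
    {"name": "ratatouille", "minutes": 60, "ingredients": {"eggplant", "tomatoes"},   "preference": 0.9},
]

def suggest(available, minutes_free):
    # Keep recipes that fit the time budget and whose ingredients are all on hand,
    # then order by how much the user likes them.
    ok = [r for r in RECIPES
          if r["minutes"] <= minutes_free and r["ingredients"] <= available]
    return sorted(ok, key=lambda r: r["preference"], reverse=True)

pantry = {"eggs", "butter", "tomatoes"}
for r in suggest(pantry, 30):
    print(r["name"])   # only the omelette fits the time budget and the pantry
```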
Active Environmental Actions
The kitchen should not only be aware of its environment, but it should also be able to affect it. Thus, it should be able to take actions which increase efficiency and reduce the user's action load, like automatically preheating an oven.

CONCLUSIONS
We have presented eyeCOOK, a gaze and speech enabled multimodal Attentive User Interface. We have also presented our vision of an Attentive Kitchen in which appliances, informed by sensors, coordinate their behavior and have the capability to affect the environment. This can reduce the user's workload and permit rationalizing requests for user attention.

REFERENCES
1. Ju, W. et al. CounterActive: An Interactive Cookbook for the Kitchen Counter. Extended Abstracts of CHI 2001 (Seattle, April 2001), pp. 269-270.
2. Norman, D. A. The Invisible Computer. MIT Press, 1999.
3. Schmidt, A. et al. How to Build Smart Appliances. IEEE Personal Communications 8(4), August 2001, pp. 66-71.
4. Selker, T., et al. Context-Aware Design and Interaction in Computer Systems. IBM Systems Journal (39) 3&4, pp. 880-891.
5. Shell, J. et al. Interacting with Groups of Computers. Commun. ACM 46(3), March 2003.
6. Shell, J. et al. EyePliances: Attention Seeking Devices that Respond to Visual Attention. Extended Abstracts of CHI 2003 (Ft. Lauderdale, April 2003), pp. 770-771.
7. Tran, Q., et al. (2002). Cook's Collage: Two Exploratory Designs. Position paper for the Families Workshop at CHI 2002 (Minneapolis, April 2002).
282
Virtual Rear Projection
Jay Summet, Ramswaroop G. Somani, James M. Rehg, Gregory D. Abowd
College of Computing
801 Atlantic Drive
Atlanta, GA 30332-0280
{summetj,somani,rehg,abowd}@cc.gatech.edu
ABSTRACT
Rear projection of large-scale upright displays is often pre-
ferred over front projection because of the elimination of
shadows that occlude the projected image. However, rear
projection is not always a feasible option for space and cost
reasons. Recent research suggests that many of the desir-
able features of rear projection, in particular shadow elim-
ination, can be reproduced using new front projection tech-
niques. This video demonstrates various front projection
techniques and shows examples of coping behavior users ex-
hibit when interacting with front projected displays.
1. PASSIVE TECHNOLOGIES
Researchers have been working to resolve the occlusion problem by filling in the technological space between standard front projection and true rear projection. We have performed an empirical study comparing the following projection technologies (the first three are demonstrated in the video):

Front Projection (FP) - A single front projector is mounted along the normal axis of the screen. Users standing between the projector and the screen will produce shadows on the screen. This setup is similar to most ceiling-mounted projectors in conference rooms.

Figure 2: Warped front projection attempts to shift the user's shadow so that they can interact with the graphics in front of them.

The output is warped to provide a corrected display on the screen. Examples are new projectors with on-board warping functions, such as those used by the 3M IdeaBoard [1], or the Everywhere Displays Projector [3]. Additionally, the latest version of the nVidia video card drivers includes a "keystoning" function which allows any Windows computer to project a warped display.
285
A: Excel Home Center offers virtual installation services for the products sold in its stores free of charge.
K: Connect me to Excel Home Center.
A: Please wait while I'm binding to Excel Home Center…

When Krishna's request arrives at Excel Home Center, the provider is alerted through a pager-like device. He then walks over to a store kiosk or an in-vehicle PC. By simply entering the service ID, he immediately gets connected to the user. Figure 2 shows the view from the Excel Home Center. The screen displays the recent history with the customer and a live view of his task environment.

to the provider, the workbench can afford a much richer experience. For example, if needed, the provider can direct the user to an instructional video or a Web page about a task, which is shown on the LCD display.
286
Part VI
Workshops
Ubicomp Education:
Current Status and Future Directions
Gregory D. Abowd
College of Computing, Georgia Institute of Technology
Atlanta, Georgia 30332-0280, USA
+1 404 894 7512
[email protected]

Gaetano Borriello
Department of Computer Science & Engineering, University of Washington
Seattle, WA 98195-2350, USA
+1 206 685 9432
[email protected]

Gerd Kortuem
Computing Department, Lancaster University
Lancaster LA1 4YR, UK
+44 1524 593116
[email protected]
289
require now and in the future?

Expected Outcome
The workshop is intended as a forum for the exchange of educational experiences and is focused on the joint development of a common vision statement derived from the experience of each of the participants.
The expected results include:
• A survey of the current state: a (preliminary and possibly incomplete) compilation of currently offered ubicomp-related courses and available educational resources.
• The scope of ubicomp education: a compilation of important ubicomp topics (research questions, theories, methods, models and technologies). These topics could lay the foundation for the later generation of ubicomp teaching modules.
• Future agenda: identification of concrete steps that should be taken to improve the quality of ubicomp education.

Dissemination
The results of the workshop will be summarized in a report to be published in an appropriate journal or newsletter. A web site will serve as a permanent record of the event. Most importantly, we envision this web site functioning as a continuously updated public collection of educational materials (lecture notes, assignments, reference lists, software toolkits, pointers to web sites, etc.).

WORKSHOP FORMAT
Activities
To maximize information and idea exchange and foster collaboration, we plan to spend most of the time on discussions rather than presentations. The primary activities at the workshop will take place in small working groups made up of 3-4 people.
The broad outline of the workshop is:
• Early morning: participants present their background in ubicomp education and compare educational experiences.
• Mid morning: discussion of the scope of ubicomp education; identification of important ubicomp topics.
• Late morning: forming of break-out groups.
• Early afternoon: break-out groups identify research questions, theories, methods, models and technologies related to individual topics.
• Late afternoon: groups report back, drafting of a future agenda to improve the quality of ubicomp education.

Expected Audience
The workshop is directed towards educators, researchers and students from a variety of disciplines (computer science, computer engineering, human-computer interaction, psychology, sociology, …).

Participation
Participants will be selected on the basis of their ubicomp teaching experience. In lieu of a traditional position paper, participants are asked to complete a ubicomp education questionnaire.

Workshop Web Site
The workshop web site can be found at http://ubicomp.lancs.ac.uk/workshops/education03.

ORGANIZERS
Gregory D. Abowd is an Associate Professor in the College of Computing and GVU Center at the Georgia Institute of Technology. His research interests include software engineering for interactive systems, with particular focus on mobile and ubiquitous computing applications. He leads a research group in the College of Computing focused on the development of prototype future computing environments which emphasize mobile and ubiquitous computing technology for everyday uses. He currently serves as Director of the Aware Home Research Initiative. He was General Chair for Ubicomp 2001, held in Atlanta, and is an Associate Editor for IEEE Pervasive Computing magazine. Dr. Abowd has published over 90 scientific articles and is co-author of one of the leading textbooks on Human-Computer Interaction.

Gaetano Borriello is a faculty member in the University of Washington's Department of Computer Science and Engineering. He is currently near the end of a two-year leave to serve as Director of Intel Research Seattle, where the focus of research is on new devices, systems, and usage models for ubiquitous computing. His research interests span the categories of embedded system design, development environments, user interfaces, and networking infrastructure. They are unified by a single goal: to create new computing and communication devices that make life simpler for users by being invisible, highly efficient, and able to exploit their networking capabilities. Prior to receiving his Ph.D. in Computer Science at UC Berkeley, Dr. Borriello spent four years as a member of the research staff at Xerox PARC. In 1995, he received the UW Distinguished Teaching Award. He currently serves on the Editorial Board of IEEE Pervasive Computing magazine. He is a member of the IEEE Computer Society and the ACM.

Gerd Kortuem is a Lecturer in the Computing Department at Lancaster University, UK. His research interests include engineering and usability aspects of interactive and collaborative technologies, with particular focus on mobile, wearable and ubiquitous computing applications. Dr. Kortuem received his Ph.D. in Computer Science at the University of Oregon for his work on Wearable Communities. In the past, he worked as a researcher at Apple Computer's Advanced Technology Group in California, the Technical University of Berlin, and the IBM Science Centre in Germany. He is a member of the IEEE Computer Society and the ACM.
290
2003 Workshop on Location-Aware Computing
Mike Hazas, Lancaster University
James Scott, Intel Research Cambridge
John Krumm, Microsoft Research, Redmond
http://www.ubicomp.org/ubicomp2003/workshops/locationaware/
293
UbiHealth 2003: The 2nd International Workshop on
Ubiquitous Computing for Pervasive Healthcare Applications
Jakob E. Bardram
Center for Pervasive Healthcare, Computer Science Department, University of Aarhus
Aabogade 34, DK-8200 Århus N, DENMARK
+45 8942 3200
[email protected]

Ilkka Korhonen
VTT Information Technology
P.O. Box 1206 (Sinitaival 6), FIN-33101 Tampere, FINLAND
+358 3 316 3352
[email protected]
294
as well as technologies for communication and collaboration among healthcare professionals.

Another central theme related to ubicomp and healthcare is the intersection of ubiquitous computing, cognitive aids, and/or artificial intelligence, as applied to helping people with cognitive disabilities perform daily activities. A primary motivation for developing intelligent "caregiving" systems is the increase in the number of people who have some type of cognitive disability, such as young adults who have learning disabilities, or elderly people who demonstrate (a form of) dementia, such as Alzheimer's disease (AD). Recently, researchers and industrial partners in ubicomp and artificial intelligence have come together to envision systems that can act as proactive partners in assisting these special populations, such as [7,8,9,4]. Furthermore, this community has started to facilitate the exchange of ideas and sharing of information through the organization of seminars and workshops, such as the Cognitive Aids Workshop held last year at UbiComp 2002.

The use of pervasive computing for the delivery of health care raises numerous challenges. When dealing with personal, sensitive, health-related aspects of a person's life, this puts forth strong demands for systems that are reliable, scalable, secure, privacy-enhancing, usable, configurable, and many other things. At the same time, one has to consider that the average user of such systems is not the typical early adopter of new technology. This puts special focus on creating technologies that are usable, and can adapt to, and seamlessly melt into, heterogeneous computing environments, like the home of the future.

Pervasive healthcare also contains a fundamental methodological challenge. Typical research into pervasive computing uses methods of experimental computer science, where researchers design, develop, program, and evaluate prototypes of new technology. The 'proof-of-concept' is a term often used to denote a prototype which illustrates and implements the important aspects of a computer system that one wants to demonstrate. Such an 'experimental' approach becomes highly problematic when dealing with health-related research – what if the experiment falls out wrong? Modern evidence based medicine, in contrast, is based on statistical significance – one has to demonstrate, with significance, that a treatment or cure actually works, and with limited side effects. Such clinical trials often involve a great number of subjects (i.e. persons) divided into various test groups, including a control group. To set up such a clinical trial running over several months or years clearly takes much more than a 'proof-of-concept' prototype. One has to have the resources to design, develop, implement, and maintain a full-fledged computer system, used by thousands of users. Hence, there is a fundamental methodological contradiction embedded in pervasive healthcare, and we would like to discuss this at the workshop as one theme.

The purpose of this workshop is to collect original and innovative contributions in the area of pervasive healthcare and its applications, and to identify and discuss research themes and methods in this area.

TOPICS
The overall topic of this workshop is: the development and application of ubicomp infrastructures and devices for pervasive healthcare, including assisting people with cognitive, mobility, and sensory impairments.
This includes sub-topics such as (but not limited to):
• How to infer a person's behaviors, intentions, needs, and mistakes from sensory data.
• Deciding on appropriate intervention strategies.
• Wearable computing for health.
• Wireless, wearable and implantable sensors.
• Personal medical devices.
• Applications for personal management of acute and chronic diseases, wellness, and health, including self-treatment (e.g. medication), self-care, self-diagnosis and self-rehabilitation.
• Smart home technology to support independent living and to support peer-to-peer support in health management.
• Pervasive computing in hospitals and care institutions.
• Pervasive applications for caregivers and nurses.
• New services that health care providers, such as pharmacies and physicians, can potentially provide by taking advantage of these emerging technologies.
• Personal, social, cultural and ethical implications of developing and using such technology.
• Methodological issues related to evidence based medicine (EBM) and pervasive healthcare – how to create innovative new healthcare technology which can be clinically tested?

The goal of this workshop is to bring together individuals who are actively pursuing research directions related to pervasive healthcare. This workshop will allow individuals with complementary research experiences to build a collective understanding of the issues surrounding the use of pervasive technologies in healthcare settings. This workshop is timely given the existence of diverse research in the area, and will help to bring together the work of emerging research groups, all working in the area of pervasive healthcare.

WORKSHOP ACTIVITIES AND GOALS
This workshop will be run over a full day and will be structured to provide maximum time for group discussion and brainstorming. Prior to the workshop, each participant will be required to read the other participants' position statements to ensure that he/she is familiar with their research
295
in the area and their visions for pervasive healthcare. All of this will be available from the workshop homepage.
Participants will briefly present their research and vision for future directions in this area. Presentations will be organized according to various themes within pervasive healthcare. These themes will be decided upon after the review of all submissions. Discussion periods will follow the presentation of each theme. Upon completion of the presentations, participants will be divided into working groups based on the themes identified beforehand, moderated by the workshop organizers. In a plenary session following the groups' work, each group will present their ideas and discuss them with all the workshop participants. In a brief concluding plenary session, all the participants will have the opportunity to provide feedback on the workshop and decide on any continuation at another venue.
The goals of this workshop are the following:
1. To build a network and community of researchers and practitioners working within pervasive healthcare;
2. To identify common research themes in pervasive healthcare;
3. To discuss methodological challenges to pervasive healthcare, especially how to conduct evidence based medicine;
4. To foster collaborative efforts among participants, who might work together in research projects;
5. To create awareness about research within pervasive computing and healthcare.

PARTICIPATION
We welcome participants from industry, academia, and government. We encourage people who are designing and implementing pervasive computing technologies in medical settings, such as hospitals, patients' homes, elderly residences, and the offices of general practitioners. We encourage a broad range of researchers and practitioners to participate due to the multi-disciplinary nature of using pervasive computing in healthcare. We encourage teams of medical and technological researchers to submit work, as well as individuals representing governmental bodies who are enacting related policies, such as those dealing with privacy and the ethics of such technologies.
We will explicitly encourage the participation of a number of central research groups working in the area of pervasive healthcare and future health. This includes research groups in Scandinavia, central Europe, and the USA and Canada.
After review of all submissions, 10 to 15 participants will be invited based on the quality and relevance of their position paper. Each position paper should be two to five pages in length and consist of the author's vision of the use of pervasive computing in healthcare, current work, expectations towards the workshop, and the author's research activities, including a short bio of the author(s).
Position papers should be formatted according to the standard Springer Verlag format and submitted in PDF format, according to the template submission file, which can be found at http://www.springer.de/comp/lncs/authors.html. Papers should be submitted by email to Ilkka Korhonen ([email protected]).
There will be a workshop website accessible from http://www.pervasivehealthcare.dk/ubicomp2003 where the accepted submissions, workshop details (e.g. the program), and the results of the workshop will be posted.
Paper deadline: August 8
Notification of acceptance: August 22
Acceptance of submissions will be decided based on the review of the program committee.

PROGRAM COMMITTEE
The workshop's program committee includes leading academic researchers as well as representatives from industry. The committee has international members from outside North America, reflecting the fact that UbiComp is an international conference. To date, the following committee members have been confirmed:
Henry Kautz (Past Chair, U of Washington, US)
Eric Dishman (Past Chair, Intel Labs, US)
Irfan Essa (Georgia Tech., US)
Ken Fishkin (Intel Labs, US)
Peter Gregor (U of Dundee, Scotland)
Edmund LoPresti (AT Sciences, US)
Misha Pavel (Oregon Health & Science University, US)
Martha Pollack (U of Michigan, US)
Rich Simpson (U of Pittsburgh, US)
Ad van Berlo (Smart Home Foundation, The Netherlands)
Liam Bannon (University of Limerick, Ireland)
Morten Kyng (University of Aarhus, Denmark)
Nicos Maglaveras (Aristotle University, Greece)
Niilo Saranummi (VTT Information Technology, Finland)

PUBLICATION
The IEEE Transactions on Information Technology in Biomedicine (IEEE T-ITB) has a call for a special issue on pervasive healthcare (see http://www.vtt.fi/tte/samba/projects/titb). This issue will be edited by the workshop organizers, and participants in this workshop are encouraged to submit their research to this special issue. The deadline for the journal papers is November 30, 2003, which fits well with the timing of the UbiComp 2003 conference.
296
ABOUT THE ORGANIZERS
Dr. Jakob E. Bardram's main research areas are pervasive and ubiquitous computing, distributed component-based systems, computer supported cooperative work (CSCW), human-computer interaction (HCI), and medical informatics. His main focus currently is 'Pervasive Healthcare', and he is conducting research into technologies of future health – both at hospitals and in the patient's home. Currently, he is managing a large project investigating technologies for "The Future Hospital", which includes (among other things) embedding 'intelligence' in everyday artifacts within a hospital, such as in the walls of the radiology conference room, in the patient's bed, in the pill containers, and even into the pills. He received his Ph.D. in computer science in 1998 from the University of Aarhus, Denmark. He currently directs the Centre for Pervasive Healthcare at Aarhus University [www.pervasivehealthcare.dk].

Prof. Ilkka Korhonen's main research interests are the application of pervasive computing to healthcare and wellness, biosignal interpretation, home monitoring and critical care patient monitoring. He received his PhD ('97) in signal processing from Tampere University of Technology, Finland. He is a docent in Medical Informatics at Tampere University of Technology, and a Research Professor in Intuitive Information Technology at VTT Information Technology, Tampere, Finland. He has more than 70 scientific publications in international scientific journals and conferences.

Dr. Mihailidis has been conducting research in the area of cognitive devices for older adults with dementia for the past eight years. While at Sunnybrook & Women's College Health Sciences Centre in Toronto, Canada, he was one of the first researchers in this field to develop and clinically test a prototype of an intelligent cognitive device that assisted older adults with Alzheimer's disease during a washroom task. He has presented this area of work at many international conferences and has published in key journals related to rehabilitation engineering, assistive devices, and dementia care.

Dr. Wan has been investigating for the past seven years how emerging technologies, such as ubiquitous computing, can be used to help create new consumer experiences and business opportunities. Five years ago, he developed the Magic Medicine Cabinet, a popular prototype that integrates biometrics, RFID, and health monitoring devices to provide consumers with compliance support, vital monitoring, and personalized health information. Currently, he is focusing on remote health monitoring using sensors, wireless networks, and Web Services. Dr. Wan has presented his research at various academic conferences. His work has also been widely covered by the media, including the Wall Street Journal, Financial Times, BBC, CNN, ABC News, and TechTV.

REFERENCES
1. Weiser, M. "The Computer for the 21st Century". In Scientific American, vol. 265, no. 3, Sept. 1991, pp. 66-75.
2. Stanford, V. "Using Pervasive Computing to Deliver Elder Care". In IEEE Pervasive Computing: Mobile and Ubiquitous Systems, vol. 1, no. 1, Jan.-Mar. 2002, pp. 10-13.
3. Kidd, C. et al. "The Aware Home: A Living Laboratory for Ubiquitous Computing Research". In Proceedings of the 2nd International Workshop on Cooperative Buildings (CoBuild99). Lecture Notes in Computer Science, vol. 1670, Springer-Verlag, Berlin, 1999, pp. 191-198.
4. Mynatt, E.D., Essa, I., and Rogers, W. Increasing the opportunities for aging in place. In CUU 2000 Conference (New York, NY, 2000), ACM Press, 1-7.
5. Sachpazidis, I. "@HOME: A modular telemedicine system – Mobile Computing in Medicine". In Proceedings of the 2nd Workshop on Mobile Computing, Heidelberg, Germany, 11 April 2002.
6. Sachpazidis, I. "@Home Telemedicine". In Proceedings of the Telemed 2001 Conference: Telematik im Gesundheitswesen, 9th-10th November 2001, Berlin, Germany.
7. Kautz, H., Fox, D., Etzioni, O., Borriello, G., and Arnstein, L. An overview of the assisted cognition project. In AAAI-2002 Workshop on Automation as a Caregiver: The Role of Intelligent Technology in Elder Care (2002), AAAI Press.
8. Adlam, T., Gibbs, C., and Orpwood, R. The Gloucester smart house bath monitor for people with dementia. Physica Medica, 17, 3 (2001), 189.
9. Mihailidis, A., Fernie, G.R., and Barbenel, J.C. The use of artificial intelligence in the design of an intelligent cognitive orthosis for people with dementia. Assistive Technology, 13 (2001), 23-39.
297
2nd Workshop on Security in Ubiquitous Computing
entity recognition. Can the private remain private even without a priori trust?

Topic 2: The Absence of A Priori Trust
Through the use of X.509 certificates and the presence of an online CA (certification authority), PKI (Public Key Infrastructure) has become an adopted model for trust in the Internet. Other models include Kerberos, which is characterized by a centralized KDC (key distribution center), as well as PGP (Pretty Good Privacy), which creates a web-of-trust that does not require one single signing authority to whom decisions of trust can be delegated. In any event, all of these models are built on the assumption that some prior registration has been made, and hence a registrar (CA, KDC, web-of-trust) can be referenced when faced with making the ultimate decision: to trust or not. In some ubiquitous computing applications these infrastructures may or may not exist; often the user has to explicitly authorize transactions. Can we still make decisions of trust without assuming that such authorities exist? Do all principals and subjects still require strict identity-based authentication? Are the advantages of separating the identity (secret key) from the permissions, as suggested by SPKI [2], well positioned for ubiquitous computing? Once trust is established, how does it evolve over time, and how can collaboration amongst devices contribute to higher assurance levels?

Topic 3: Beyond Traditional Credential-based Authentication
It has often been said that authentication is the fundamental basis for security and trust. Applications require authentication to determine authorizations, establish constrained channels, infer data integrity, enforce non-repudiation, and complete some form of billing and auditing. Authentication is essentially the proof of identity through a secret, a unique characteristic, a permission, or a possession. Traditional credential-based authentication includes passwords, biometrics, and signatures.

Topic 4: Adaptive Infrastructures
The origins of ubiquitous computing were really concerned with the user. Writers like Weiser [3] and Norman [4] urged that the comprehensive use of computer systems and machines was at times beyond the grasp of the everyday person. Complicated user interfaces and rules become more of an obtrusion than an aid for completing mental and physical tasks. HCI (Human-Computer Interaction) and distributed systems offer research into adaptive interfaces and middleware, respectively, to support the goals of seamless integration of technology into everyday life and enterprise systems. How do we fit security into these mechanisms without undoing the flexibility and usability aspects? How do we exploit context information in supporting security infrastructures that are adaptive and complementary?

Topic 5: The Use of Context Information
Context is any set of information that can be used to characterize the situation of an entity. It ranges from persisted logs and records of an entity to the observation and sensing of the physical environment. Context information is a foundation for making security-relevant decisions. A certain context might convey an uneasy feeling about the security of an interaction, while in another situation unknown devices are trusted to carry out a critical transaction. How are such environments characterized, and how do entities adapt when the situation changes?
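The following sketch is purely illustrative of the questions raised in Topics 2 and 5: it shows one way a device might fall back from conventional credential checking to context-derived confidence when no registrar is reachable. The context signals, scoring thresholds, and helper names (Context, Peer, decide_trust, verify_chain) are assumptions made for the example, not mechanisms proposed by the workshop.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Context:
    location_is_known: bool        # e.g., the device is in its owner's office
    prior_encounters: int          # how often this peer has been observed before
    transaction_is_critical: bool  # e.g., a payment rather than a light switch

@dataclass
class Peer:
    name: str
    certificate_chain: Optional[list] = None   # may simply not exist in ad-hoc settings

def verify_chain(chain: list) -> bool:
    """Placeholder for real X.509 path validation against a known CA."""
    return bool(chain)

def decide_trust(peer: Peer, ctx: Context, ca_reachable: bool) -> str:
    """Return 'allow', 'ask-user', or 'deny' for an interaction with a peer."""
    # With a reachable registrar and presented credentials, use conventional
    # credential-based authentication (the PKI case from Topic 2).
    if ca_reachable and peer.certificate_chain:
        return "allow" if verify_chain(peer.certificate_chain) else "deny"

    # Without prior registration, let observed context stand in for a registrar
    # (Topic 5): a familiar location and repeated encounters raise confidence.
    score = (2 if ctx.location_is_known else 0) + min(ctx.prior_encounters, 3)
    needed = 4 if ctx.transaction_is_critical else 2
    if score >= needed:
        return "allow"

    # The user remains the authority of last resort.
    return "ask-user"
```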
Organizational Information
Papers will be selected based on their contribution to the understanding of security in ubiquitous computing, originality, and novelty.

Based on the success of last year's program, we will aim to keep a similar format, but are open to change based on the quality and quantity of accepted submissions.

Results of the workshop will be published and exploited initially through the UbiComp conference, the mailing list, and a prepared website. We will consider publication of a special issue, following the evaluation of the workshop.

Contact person: Philip Robinson, [email protected]

The Organizers
Joachim Posegga, Pervasive Security Research, SAP AG, Germany
Philip Robinson, Tele-cooperation Office (TecO), University of Karlsruhe, Germany
Narendar Shankar, University of Maryland, USA
Harald Vogt, Institute for Pervasive Computing, ETH Zurich

REFERENCES
1. 1st Workshop on Security in Ubiquitous Computing, UBICOMP 2002, Göteborg, Sweden, September 2002. http://www.teco.edu/~philip/ubicomp2002ws/proceedings.htm
2. http://www.ietf.org/html.charters/spki-charter.html
3. Weiser, M. The Computer for the 21st Century. Scientific American, September 1991.
4. Norman, D.A. The Design of Everyday Things. Doubleday, 1988.
Multi-Device Interfaces for
Ubiquitous Peripheral Interaction
Loren Terveen, Computer Science & Engineering, University of Minnesota, Minneapolis, MN 55455 USA, [email protected]
Charles Isbell, College of Computing, Georgia Tech, Atlanta, GA 30332 USA, [email protected]
Brian Amento, Speech Interfaces Department, AT&T Labs, Florham Park, NJ 07932 USA, [email protected]
specific goals, including identifying requirements for ubiquitous peripheral awareness devices and considering specific devices that can meet these requirements, exploring software techniques and architectures that drive the interaction, and examining designs for interfaces that divide their functionality across several wearable devices. We consider each of these goals in some detail.

Devices for ubiquitous peripheral awareness: requirements and candidates.
A wearable peripheral awareness device must be always on, always accessible (that is, a user must always be able to receive information from it), and must not demand explicit user attention (that is, it is accessible without a user explicitly "taking it out to use", as one must with a PDA or cell-phone). It would be very helpful if some version of the device already enjoys widespread use (as do wristwatches, for example). And, since in our vision this device is intended to work together with a PDA, it must be networked. We believe that wireless devices are much more acceptable to users, which means that the networking should be wireless.

There are various promising devices that meet some or all of these requirements. One is a computationally augmented wristwatch. There has been a lot of activity in this area recently, both in commercial and research contexts. Companies such as Suunto make sophisticated wrist computers tailored for sports like golf and skiing. Fossil and onHand make wrist PDAs that run Palm OS. Swedish researchers prototyped a simple 'Reminder Bracelet' (CHI 2001) – LEDs were added to a wrist band and lighted to show several types of information of varying levels of importance. Most notably, IBM and Citizen are prototyping a general-purpose wrist computer that runs Linux.

Another interesting possibility is the use of audio for peripheral awareness. For example, Sawhney and Schmandt's Nomadic Radio used small, neck-mounted, directional speakers to deliver audio. Headsets that are relatively unobtrusive and still allow users to engage in normal interaction are another possibility; Grinter & Woodruff (CHI 2002) did a pilot user study that showed some preference for single-ear headsets, and ear buds also are worthy of more exploration.

Other wearable devices also are worth investigating, e.g., computational jewelry or eyeglass displays.

When it comes to the networking requirements for a multi-device interface, Bluetooth seems well-suited. It works over short distances, but the peripheral awareness device and the PDA will be well within the working range. While Bluetooth is appropriate for networking the components of the multi-device interface, the PDA will need to connect to a server (more on this below). This will require a long-distance wireless networking technology such as WiFi or GSM/GPRS.

Another important issue we will explore is how much can be done with off-the-shelf devices. The IBM/Citizen 'WatchPad' is not yet available; if it were, it would fulfill most or all of the requirements mentioned above. However, it currently still may be necessary to do some simple hardware prototyping along the lines of the Reminder Bracelet.

In general, the workshop will address this goal by trying to reach agreement on appropriate requirements, developing a comprehensive list of candidate devices, and evaluating the extent to which each candidate satisfies the requirements.

Enabling software techniques and architectures.
For any information delivery system, ensuring that information is relevant to an individual user is a key goal. This is even more important in our context. Networking may be relatively slow, unreliable, and expensive. Most of the time, users' attention will not be focused on their peripheral awareness device, so the system should be careful not to distract users from their current tasks. These considerations pose challenges for the information management software that runs on the server. This software is responsible for monitoring events in a user's computational environment (e.g., email, voicemail, calendar), deciding what information to send to a user's wearable system, prioritizing that information, dealing with user responses, and learning from those responses so its decisions can improve over time.

One challenge for the design of this software is to identify the factors it can use to make these types of decisions. One such factor is timeliness. New email or voicemail, upcoming appointments, and changes to web-based information services such as stock tickers or sports scoreboards are some examples of timely information. For mobile use, location also is important in determining relevance. GPS is one technology that enables devices to be location-aware. Location-aware devices could notify people of nearby stores with items they need to purchase, or of nearby friends whom they'd like to contact.

A system also should provide users with information that is important. For a simple example, it should avoid streaming spam email to its users. Ideally, it should be able to prioritize all the types of information (messages, reminders, news items, etc.) and use these priorities to guide its presentation of the information. Initial priorities may be based on the type of the information: for example, getting notified that one of your friends is in the same mall or library as you is likely to be more important than most news stories.

Finally, an effective system will be adaptable. No two users will have the same notion of importance. A system should learn from user responses what that particular user considers important. For example, if a user frequently reads email messages from particular addresses, messages from these addresses should gradually receive higher priority.

Thus, the key enabling software techniques come from areas such as information filtering and artificial intelligence, particularly machine learning. The workshop will explore the range of techniques that are relevant, discuss lessons to be learned from existing research prototypes, and identify open problems where new research is needed.
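As a concrete illustration of the kind of server-side logic just described (the item fields, weights, and update rule below are assumptions made for the example, not a description of an existing prototype), a priority scorer might combine information type, timeliness, location, and a per-sender weight learned from user responses:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Item:
    kind: str        # "email", "voicemail", "appointment", "news", "friend-nearby"
    sender: str      # person or feed the item came from
    created: float   # Unix timestamp
    nearby: bool     # tied to the user's current location?

# Illustrative base priorities by information type.
BASE = {"friend-nearby": 0.9, "appointment": 0.8, "voicemail": 0.6,
        "email": 0.5, "news": 0.2}

@dataclass
class Prioritizer:
    sender_weight: dict = field(default_factory=dict)   # learned per-sender boost

    def score(self, item: Item) -> float:
        s = BASE.get(item.kind, 0.3)
        age_hours = (time.time() - item.created) / 3600.0
        s += max(0.0, 0.3 - 0.05 * age_hours)          # timeliness decays with age
        s += 0.3 if item.nearby else 0.0               # location-based relevance
        s += self.sender_weight.get(item.sender, 0.0)  # learned importance
        return min(s, 1.0)

    def record_response(self, item: Item, attended: bool) -> None:
        # Adapt: senders whose items the user attends to drift upward, others down.
        w = self.sender_weight.get(item.sender, 0.0)
        w += 0.05 if attended else -0.02
        self.sender_weight[item.sender] = max(-0.3, min(0.3, w))
```

A score above some device- and user-specific threshold could then mark an item for the kind of 'promotion' beyond the periphery discussed under interface design below.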
Design of multi-device interfaces.
There are several challenges here. One is the general problem of designing for devices with I/O capabilities that are quite different from, and relatively impoverished compared to, desktop user interfaces. Devices may have very small displays – or, in the case of audio devices, no displays at all. And their input capabilities may be very limited, in the case of a wrist computer, or inherently error-prone, as with a speech interface. Designing an interface for a PDA or cell-phone is hard; designing for a wristwatch will be even harder.

Another issue is when it is appropriate to 'promote' information beyond the periphery, that is, when the system should attempt to notify the user about some information at once. When this is necessary, a wristwatch computer might be able to flash its display, vibrate, or even sound a tone. Of course, the decision whether to seek the user's attention must be guided by the prioritization techniques mentioned above.

It's also important to consider how users might respond to items on the peripheral awareness device that catch their attention. One way would be to support brief "canned" responses. For a message, in particular, it should be possible to send a response like "OK" or "I got your message – will reply in detail when I'm back at my desk" without having to take out one's PDA. One research question worth exploring is how to define a small set of generally useful canned responses. Another issue is how to design an interface for a device with very limited input capabilities (like a wristwatch) that makes it simple for users to select a response.

Of course, the most interesting and novel interface design challenge is how to divide functionality across multiple devices. The simplest case to envision is one where a user notices something on the peripheral awareness device, gives a single command (e.g., a button press) to indicate interest, then takes out a PDA to explore the information in detail. This model assumes that one first uses the peripheral awareness device alone, then the PDA alone. It also requires very simple coupling between the two devices. We think this model is well worth exploring; however, we also will attempt to identify situations when a more complicated model for combining the devices is necessary. For example, when a user has his PDA out and is using it, should information and notifications still be going through the peripheral awareness device? Or should they now go through the PDA? This suggests that the division of functionality across the two devices may have to be dynamic.

Workshop activities.
We will attempt to achieve these goals through a combination of activities, both prior to and at the workshop. Would-be participants will submit 2-4 page position statements, identifying one or more of the workshop goals that they want to address, at least one specific research issue associated with the goal, and their current or planned work that addresses the issue. The organizers will select participants based on the interest and clarity of the research questions and the novelty and interest of the proposed or completed solutions. Position papers of all accepted participants will be posted on a website for the workshop.

Well in advance of the workshop, we will group participants into sessions and assign a discussant for each session. The discussant's job will be to bring additional relevant knowledge to bear on the work to be presented in the session, e.g., to compare and contrast the different approaches, compare them to previous work, or suggest alternative approaches. This process will begin through email exchanges prior to the workshop.

At the workshop, presentations will be organized around the workshop goals and kept short, no more than 15 minutes. This will allow plenty of time for discussion. The discussion will be initiated by the assigned discussant; we expect discussants to present their perspective for about 15 minutes before opening up to general discussion.
Ubicomp Communities: Privacy as Boundary Negotiation
John Canny1, Paul Dourish2, Jens Grossklags3, Xiaodong Jiang1, and Scott Mainwaring4
1 Computer Science Division, University of California, Berkeley, Berkeley, CA 94720 USA, {jfc; xdjiang}@cs.berkeley.edu
2 School of Information and Computer Science, University of California, Irvine, Irvine, CA 92697 USA, [email protected]
3 School of Information Mgmt. and Systems, University of California, Berkeley, Berkeley, CA 94720 USA, [email protected]
4 Intel Research, 2111 NE 25th Ave., MS JF3-377, Hillsboro, OR 97214 USA, [email protected]
communities? How are reputations, reliabilities, and risks established, measured, and represented? What forms of information or other exchange occur in the community? How might ubicomp systems handle differences in power, access, and expertise within a community?

Format of the Workshop and Timetable
This workshop will last for 1 full day and will be limited to 20 participants (not including the workshop organizers) to enable lively and productive discussions. Participants will be invited on the basis of position papers. Such position papers should be no longer than 4 pages excluding references, and they will be selected based on their originality, technical merit and topical relevance.

The workshop will be organized into panels and breakout sessions. Depending on the submitted position papers, the workshop will consist of 3 to 4 panels. Each panel lasts about an hour and includes presentation of 5 or 6 position papers that share a similar topic, followed by organizer-moderated discussions. The morning panels are devoted to community-oriented ubicomp systems, while the afternoon panels are devoted to trust issues manifested in those systems. Also in the afternoon, there will be breakout sessions lasting about 1.5 to 2 hours, followed by reports to a plenary session. In addition, coffee breaks and lunch will serve as opportunities for informal discussion. To the extent possible, participants will have lunch together within short walking distance of the workshop location.

09:00 – 09:30  Registration & welcome
09:30 – 10:00  Introductions
10:00 – 11:00  Panel 1 and discussion
11:00 – 11:30  Coffee
11:30 – 12:30  Panel 2 and discussion
12:30 – 02:00  Lunch
02:00 – 03:00  Panel 3 and discussion
03:00 – 04:30  Breakouts
04:30 – 05:30  Breakout presentations
05:30 – 06:00  Wrap-up & coffee

Desired Outcome
We expect that this workshop will have concrete results that will advance the development of trust-sensitive community-oriented ubicomp systems. We will put together a poster summarizing the activities of the workshop, and report back to the conference.

Submission, Selection Process and Publication
The workshop organizers will select participants based on review of submitted position papers, taking into account scientific quality and relation to the workshop topics. We will also invite selected prominent ubicomp researchers, security experts, social scientists and privacy advocates to submit to the workshop.

The workshop proceedings will be published as a technical report available online. We will also seek cooperation with ACM to archive the workshop outcomes in the ACM Digital Library.

Update on Selected Workshop Papers
Michael Boyle, "A Shared Vocabulary for Privacy"
James Fogarty, "Sensor Redundancy and Certain Privacy Concerns"
Jonathan Grudin and Eric Horvitz, "Presenting Choices in Context: Approaches to Information Sharing"
Jason Hong, Gaetano Borriello, James A. Landay, David W. McDonald, Bill N. Schilit, and J. D. Tygar, "Privacy and Security in the Location-enhanced World Wide Web"
Charis Kaskiris, "Socially-Informed Privacy-Enhancing Solutions: Economic Privacy and the Negotiated Privacy Boundary"
Saadi Lahlou, "Constructing European Design Guidelines for Privacy in Ubiquitous Computing"
Marc Langheinrich, "When Trust Does Not Compute – The Role of Trust in Ubiquitous Computing"
Scott Lederer, Jen Mankoff, and Anind Dey, "Towards a Deconstruction of the Privacy Space"
Carman Neustaedter and Saul Greenberg, "Balancing Privacy and Awareness in Home Media Spaces"
Chris Nodder, "Say versus Do: Building a Trust Framework through Users' Actions, Not Their Words"
David J. Phillips, "The Information Environment as Text Interpreted by Communities of Meaning: Implications for Design of Ubicomp Systems"

Organizers
John Canny, Paul and Jacobs Distinguished Professor of Engineering, University of California, Berkeley
John Canny is a Professor of Computer Science at UC Berkeley with a background in robotics, AI and algorithms. His recent work is on privacy-preserving collaborative algorithms, including collaborative filtering and location-based services. He founded the group on Human-Centered Computing at UC Berkeley in 1998, a group of technical and social scientists interested broadly in the impacts of IT on society. In 2001, he co-founded the Berkeley Institute of Design, a new interdisciplinary program in socially-informed design of informational environments.
Scott Mainwaring, Senior Researcher, People and Practices Research Lab, Intel Research
Scott Mainwaring is a senior researcher in Intel's ethnographic research and design group in Hillsboro, Oregon, conducting fieldwork in settings of technology use ranging from U.S. and Korean households to Chinese businesses. Current interests include social, cultural, and place-based constraints and opportunities for ubicomp. Prior to joining Intel in 2000, Scott was a member of research staff at Interval Research Corp., working on media spaces, lightweight communication, interactive television, and video ethnography.

Paul Dourish, Associate Professor, School of Information and Computer Science, University of California, Irvine
Paul Dourish is an Associate Professor in the School of Information and Computer Science at UC Irvine. He has held research positions at Xerox PARC, Apple Computer, and Rank Xerox EuroPARC. His principal research interests are in the areas of Human-Computer Interaction and Computer-Supported Cooperative Work. In particular, he has a long-term interest in the relationship between information system design and social analysis. Most recently, he has been exploring the foundations of embodied interaction, which seeks to apply phenomenological approaches to understand encounters between people and technology.

Jens Grossklags, Ph.D. Student, School of Information Management and Systems, University of California, Berkeley
Jens Grossklags is a Ph.D. student in Information Management and Systems at UC Berkeley, and has been a visiting research student at NASA. Jens has also been a research guest at the Max-Planck Institute for Research into Economic Systems in Germany. His current work spans the domains of computer networks, economics, and human factors/psychology research. He has been honored with two conference best paper awards: for research on behavioral studies in privacy, awarded by the German Informatics Society, and for architectural design of large-scale ad-hoc sensor networks, conferred at ACM/ACS MDM 2003.

Xiaodong Jiang, Ph.D. Student in Computer Science, University of California, Berkeley
Xiaodong is a Ph.D. student in Computer Science at UC Berkeley, where he is currently a Mayfield Fellow and Hitachi Fellow. Xiaodong's research focuses on ubiquitous and context-aware computing. He has worked on the information space model and infrastructure for context-aware computing, and applications that assist firefighters in their emergency response practice.
At the Crossroads: The Interaction of HCI and Systems
Issues in UbiComp
Brad Johanson, Stanford University, [email protected]
Jan Borchers and Bernt Schiele, ETH Zurich, {borchers,schiele}@inf.ethz.ch
Peter Tandler, IPSI Darmstadt, [email protected]
Keith Edwards, Palo Alto Research Center, [email protected]
Finally, application designers create applications which present user interfaces to their end users. In doing so, designers must understand the underlying system framework they are using to ensure that the user interface is sufficiently responsive, flexible, and intuitive. It is common practice to run applications through cycles of design and testing to optimize the user interface, but Edwards et al. [5] point out that it is probably also valuable to feed what is learned in application design back into the design process of the underlying infrastructure, so that it will evolve to better support the types of applications being built on the system infrastructure.

Just as the underlying system constrains the types of interfaces that can be built, the fundamentals of HCI place bounds on any system that wishes to support human-computer interaction. Some of these fundamentals are cognitive and involve how quickly humans can respond to their environment and how fast their environment must react to them. For example, humans cannot see any detail in motion faster than around 60 Hz, and users expect a system to react to a trigger within 1 s before they feel they are waiting. Many of these fundamentals are documented in classic HCI literature such as [3].

More recently, HCI has begun to develop "post-cognitive" theories for human-computer interaction. These broaden the previous focus on one user interacting with one device to accomplish a single task, to the larger context of multiple people, tasks, and devices. Examples of such theories include Activity Theory [8], Distributed Cognition [7], and Speech Acts [11]. While work on these theories is ongoing, their results will help define the characteristics and performance needed for systems infrastructures supporting coordination and collaboration between multiple users and machines in device-rich UbiComp environments.

Of course, HCI and systems issues have been intertwined before on desktop computers, and some of the lessons learned in that domain may be applicable or extensible to the UbiComp domain. In particular, in the days of slower desktop computers many coping mechanisms (some still in use today) had to be developed to create a more pleasant user experience. For example, hardware cursors were used to ensure the direct-feedback loop between user input and cursor display occurred without noticeable delay. Other mechanisms include hourglasses and progress bars, window drags as outlines, and jump scrolling. At the same time, many of the well-established metaphors that today's desktop user interfaces deploy simply do not work in UbiComp environments – for example, what are concepts such as "pointer focus" and "selection" supposed to mean in a space augmented with multiple machines, input and output devices, and used by many people simultaneously?
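Purely as an illustration of how these perceptual bounds shape system code (the 1 s figure comes from the text above, while the function and parameter names are assumptions made for the example), an interaction handler in a UbiComp runtime might budget its feedback as follows:

```python
import threading

WAIT_CUE_S = 1.0   # after roughly 1 s without a response, users feel they are waiting

def handle_input(event, render_echo, do_remote_work, show_progress):
    """Acknowledge input immediately; add explicit progress feedback if slow.

    render_echo must be cheap (well under one 60 Hz frame) so the direct
    feedback loop between input and display stays imperceptible.
    """
    render_echo(event)                      # fast local echo, like a hardware cursor

    done = threading.Event()
    result = {}

    def worker():
        result["value"] = do_remote_work(event)    # slow path: network, devices, ...
        done.set()

    threading.Thread(target=worker, daemon=True).start()

    if not done.wait(timeout=WAIT_CUE_S):
        show_progress(event)                # the modern hourglass / progress bar moment
        done.wait()
    return result.get("value")
```

The same structure generalizes the old desktop coping mechanisms: immediate local echo keeps the feedback loop tight, and explicit progress indication covers the cases where the distributed infrastructure cannot respond within the expectation threshold.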
Proposed Workshop Format
We would ask members of the UbiComp field to submit papers either on their personal experiences with the interaction of HCI and systems constraints in UbiComp, or reasoned analyses of how the two interact and guidelines for choosing appropriate design points. Ten to twelve papers would be selected from those submitted, and those authors would be invited to participate in the workshop. The papers would be made available as a booklet at the workshop, and permanently on a web site associated with the workshop.

We propose a one-day workshop with the following schedule:
• Anchor Talks (9-10:30am). Initial suggestions, to be fixed by workshop:
  o HCI Fundamentals
  o Systems Fundamentals
  o Overview of performance constraints of existing UbiComp systems infrastructures
• Break (10:30-10:45am)
• First Paper Session [15 min presentations] (10:45am-12:15pm)
• Lunch (12:15-1:30pm)
• Second Paper Session [15 min presentations] (1:30pm-2:45pm)
• Break (2:45pm-3pm)
• Discussion, Debriefing and Attempt to Draw Conclusions from Presented Materials (3pm-4:30pm)

After the workshop, the organizers and interested participants would try to move from the ideas generated in the final debriefing towards a set of principles or design patterns [2] for designing user interfaces, systems, and infrastructure in the UbiComp world that take into account both systems and HCI issues. Participants would be offered the opportunity to continue with this synthesis work after the workshop, with the goal of presenting it back to the community in the form of one or more future papers, possibly collected in a special issue of an appropriate journal.

Resources/References
1. Becker, C., Schiele, G., and Gubbels, H. BASE – A Micro-kernel-based Middleware for Pervasive Computing. In 1st IEEE International Conference on Pervasive Computing and Communications (PerCom 2003), Dallas-Fort Worth, Texas, USA: IEEE, 2003.
2. Borchers, J. A Pattern Approach to Interaction Design. Chichester, UK: Wiley, 2001.
3. Card, S., Moran, T., and Newell, A. The Psychology of Human-Computer Interaction. Hillsdale, NJ: Erlbaum, 1983.
4. Cerqueira, R., et al. Gaia: A Development Infrastructure for Active Spaces. In Ubitools Workshop at Ubicomp 2001, Atlanta, GA, 2001.
5. Edwards, W.K., et al. Stuck in the Middle: The Challenges of User-Centered Design and Evaluation for Middleware. In CHI 2003: Human Factors in Computing Systems, Fort Lauderdale, FL: Association for Computing Machinery, 2003.
6. Grimm, R., et al. A System Architecture for Pervasive Computing. In 9th ACM SIGOPS European Workshop, Kolding, Denmark, 2000, pp. 177–182.
10. Tandler, P. The BEACH Application Model and Software Framework for Synchronous Collaboration in Ubiquitous Computing Environments. To appear in Journal of Systems and Software, 2003 (Special Issue on Application Models and Programming Tools for Ubiquitous Computing).
11. Winograd, T., and Flores, F. Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ: Ablex, 1986.
System Support for Ubiquitous Computing – UbiSys
Roy Campbell1, Armando Fox2, Paul Chou3,
Manuel Roman4, Christian Becker5, Adrian Friday6
1
University of Illinois at Urbana-Champaign, USA
[email protected]
2
Stanford University, USA
[email protected]
3
IBM T. J. Watson Research Center, USA
[email protected]
4
DoCoMo Labs, USA
[email protected]
5
University of Stuttgart, Germany
[email protected]
6
Lancaster University, UK
[email protected]
Ubiquitous computing environments are envisioned as being populated with large numbers of computing devices and sensors, to the extent that the physical and computational infrastructures become fully integrated, creating a dynamic programmable environment. To realize this vision, several projects have developed prototype environments, typically focused on a particular ubiquitous computing scenario or application. System support is often pragmatic, problem-oriented and difficult to generalize to other domains. To fully realize programmable ubiquitous computing environments it is essential to provide services that coordinate software entities and heterogeneous networked devices and provide the low-level functionality needed to enable ubiquitous computing in the general case. Systems software provides a homogeneous computing environment where applications are supported with resource management (i.e., resource and service discovery) and common abstractions that leverage the implicit heterogeneity in such environments. The work of independent researchers has revealed patterns of service usage, indicating that systems software for ubiquitous computing may converge to a set of necessary core services. If such a set of requirements could be identified, applications could then be more easily ported across different implementations and interoperability would be simplified.
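As one purely illustrative way to picture what such a converged set of core services could look like, the interface sketch below names a handful of operations (registration and discovery, context queries, and event distribution) that recur across the prototypes discussed here; the method names and types are assumptions made for the example, not an agreed standard.

```python
from typing import Callable, Iterable, Protocol

class ServiceDescription(dict):
    """Free-form attributes, e.g. {'type': 'display', 'location': 'room-3115'}."""

class CoreServices(Protocol):
    # Resource and service discovery
    def register(self, service: ServiceDescription) -> str: ...
    def discover(self, query: dict) -> Iterable[ServiceDescription]: ...

    # Context queries about entities in the physical environment
    def context(self, entity: str, attribute: str) -> object: ...

    # Event distribution between loosely coupled components
    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None: ...
    def publish(self, topic: str, event: dict) -> None: ...
```

An application written against such an interface would not need to know whether the implementation behind it is a centralized environment infrastructure or a cooperating federation of devices, which is precisely the architectural question raised next.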
One of the key issues for debate is the underlying structure of ubiquitous computing middleware. Current prototypes are characterized by three different architectural organizations. In the first case, the environment provides an infrastructure that coordinates the resources present in a specific geographical location. Applications can discover and access such resources only via the infrastructure. Furthermore, all communication between devices is mediated by the infrastructure. Additional information, such as large amounts of application data which, for instance, cannot be stored on small devices, can be maintained in the infrastructure. Direct interaction between devices is not considered, and the infrastructure typically provides services localized to a specific geographical area, such as a room or a building, covered by one or more network types. The second architectural organization relies on spontaneous interaction among the devices present in the environment as a federation of peers. There is no common infrastructure per se – applications have to store and maintain application data cooperatively. Such an organization is typically based on an underlying ad-hoc configuration. The third model is a hybrid that relies on a centralized system support infrastructure but uses peer-to-peer communication among entities. So far, the three approaches seem to offer benefits in distinct application domains and it is likely that they will continue to co-exist and complement each other. These three approaches lead to a variety of research questions concerning interoperability, architecture, service organization, application models, and application support in general.
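To make the contrast between the first two organizations concrete, here is a deliberately simplified sketch of the same discovery operation realized first against a central per-space registry and then over a federation of peers; the class names, one-hop query, and matching rule are illustrative assumptions rather than descriptions of any particular prototype.

```python
from typing import Dict, Iterable, List

def matches(service: Dict, query: Dict) -> bool:
    return all(service.get(k) == v for k, v in query.items())

class InfrastructureDiscovery:
    """First organization: a per-space infrastructure mediates every lookup."""
    def __init__(self) -> None:
        self.registry: List[Dict] = []
    def register(self, service: Dict) -> None:
        self.registry.append(service)
    def discover(self, query: Dict) -> Iterable[Dict]:
        return [s for s in self.registry if matches(s, query)]

class PeerDiscovery:
    """Second organization: each device answers from its own services and asks
    whatever peers the ad-hoc network currently provides."""
    def __init__(self, local_services: List[Dict]) -> None:
        self.local_services = local_services
        self.peers: List["PeerDiscovery"] = []   # populated by spontaneous networking
    def discover(self, query: Dict) -> Iterable[Dict]:
        found = [s for s in self.local_services if matches(s, query)]
        for peer in self.peers:   # one hop only; a real system would bound flooding
            found.extend(s for s in peer.local_services if matches(s, query))
        return found
```

The hybrid organization described above would keep a registry of this kind for coordination while letting devices exchange application data along the peer-to-peer path.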
We strongly believe that the proposed workshop will provide a unique opportunity to foster discussion and interaction among researchers working in this new area. We expect the workshop to be the first step towards the creation of a discussion group and future workshops and conferences working on the formalization and standardization of basic building blocks for ubiquitous computing.

ORGANIZERS
• Roy Campbell, Ph.D., Department of Computer Science, University of Illinois at Urbana-Champaign. Roy Campbell is a professor of computer science at the University of Illinois at Urbana-Champaign. His research interests include operating systems, distributed multimedia, network security, and ubiquitous computing. Prof. Campbell is the head of the Gaia project, a ubiquitous computing infrastructure under development that is investigating systems support and applications for ubiquitous computing environments. He received his B.Sc. in mathematics from the University of Sussex, and his M.Sc. and Ph.D. in computing from the University of Newcastle upon Tyne.
• Armando Fox, Ph.D., Department of Computer Science, Stanford University. Armando Fox is an assistant professor at Stanford University, and is the faculty leader of the Software Infrastructures Group (SWIG). He received a BSEE from M.I.T., an M.S.E.E. from the University of Illinois at Urbana-Champaign, and a Ph.D. in Computer Science from UC Berkeley. His primary research interests are systems approaches to improving dependability (the Recovery-Oriented Computing project) and system software support for ubiquitous computing (the Interactive Workspaces project).
• Paul Chou, Ph.D., IBM T.J. Watson Research Center. Paul is a Research Staff Member and the manager of the Emerging Interactive Spaces department at IBM Research. His present research focus is on pervasive computing, in particular the technology and usability challenges in bringing physical objects and digital infrastructure together to address social and business issues. His recent projects include intelligent vehicles, telematics data privacy protection, and the office of the future. Paul received his Ph.D. in Computer Science in 1988 from the University of Rochester.
• Manuel Roman, Ph.D., DoCoMo Labs, USA. Manuel Roman recently completed a Ph.D. in Computer Science at the University of Illinois at Urbana-Champaign. His research interests include ubiquitous computing, middleware, and operating systems, and how to combine ideas from all these areas to create interactive/programmable active spaces. His research interests also include handheld devices and how to integrate them into active spaces. He received his BS and MS degrees in Computer Science from La Salle School of Engineering (Ramon Llull University) in Barcelona, Spain.
• Christian Becker, Ph.D., Institute for Distributed and Parallel Systems, University of Stuttgart, Germany. Christian Becker is a senior researcher and lecturer at the University of Stuttgart. He received a PhD in Computer Science from the University of Frankfurt and a Diploma in Computer Science from the University of Kaiserslautern. His primary research interests are ubiquitous computing systems. Currently he is involved in middleware support for spontaneous networking (the BASE platform) and support of context-aware applications through global augmented world models (the Nexus project).
• Adrian Friday, Ph.D., Computing Department, Faculty of Applied Sciences, Lancaster University, UK. Adrian Friday graduated from the University of London in 1991. In 1992 he helped establish the mobile computing group at Lancaster University, completing formative work in the area of mobile distributed systems, leading to the award of his PhD in 1996. In 1998 he was appointed as a Lecturer in the Department of Computer Science and is an active member of the Distributed Multimedia Research Group. During his research career, Adrian has been involved with over 9 research projects in the areas of mobile computing, context-aware systems and advanced distributed systems, including the Equator IRC, GUIDE II and CORTEX projects. He has a consistent track record of publishing in high-quality peer-reviewed international conferences and journals, having authored and co-authored over 50 publications to date. His current research interests include distributed system support for mobile, context-sensitive and ubiquitous computing.

WORKSHOP TITLE
"System Support for Ubiquitous Computing – UbiSys"

PARTICIPATION SOLICITATION
We plan to invite key researchers and practitioners who work in this area to submit papers and give oral presentations on their research. We also plan to put out a call for papers that solicits research papers describing current and on-going research and experience reports related to the three aspects described in this proposal. All submissions will be reviewed and selected based on their originality, merit, and relevance to the workshop. All accepted papers should be presented orally during the workshop.

MAXIMUM PARTICIPANTS
We expect to have up to 15 participants giving short presentations about their submissions. We are also expecting 30-50 attendees to participate in the discussion session.
First International Workshop on Ubiquitous Systems for
Supporting Social Interaction and Face-to-Face
Communication in Public Spaces
Organizers
Rick Borovoy, nTAG, LLC, [email protected]
Volodymyr Kindratenko, NCSA, UIUC, [email protected]

Program Committee
Donna Cox, NCSA, UIUC, [email protected]
David Pointer, NCSA, UIUC, [email protected]
embedded in the space. One of the earliest experiments, the Meme Tags project, used electronic nametags capable of exchanging short messages (memes) via IR. The tags also stored information about the interaction between tag wearers and shared it with a centralized database. The cumulative data was shown on large displays (Community Mirrors).

The Digital Assistant developed at the ATR Media Information Science Lab aimed to enhance communication among conference participants by tracking them as they attend various locations and providing access to a content-rich personalized environment either via web kiosks or interactive displays. Users were required to wear IR badges that could be detected at some locations within the conference space. The resulting data was used to create the user's touring diary and to provide personalized real-time services.

Georgia Tech's Social Net system required attendees to carry a portable device (Cybiko) that uses RF to help mutual friends connect strangers (who were co-located for a considerable amount of time). In order for these mutual friends to identify who among their friends are not connected (but should be, because they tend to be co-located), the system requires each user to provide a list of all their friends – a task that turned out to be challenging for some in a field test of 10 users at a 3-day conference.

MIT Media Lab's Sociometer prototype captures social interaction among individuals who use wearable computers with microphones. While the system tracks and subsequently analyzes communication patterns, it does not use the data to provide any real-time value-added services to the users.

NCSA's IntelliBadge™ project implements location tracking by proximity to RFID location markers installed at points of interest. All the user services are built around tracked location information and prior knowledge about the attendees and the conference events. These services include the ability to locate other people, view event attendance statistics, and interact with visualization applications.

nTag by nTAG Interactive, LLC (www.ntag.com) uses a semi-passive Radio Frequency IDentification (RFID) tag operating in the UHF band, which enables a conference organizer to use it for security, to record how many people attended certain sessions, or to track how many people visited certain areas of an exposition floor. When people meet, their tags exchange information about their interests and preferences, thus facilitating social interaction among the attendees. Tags also store and provide convenient access to the conference program.

CharmBadge by Charmed Technology, Inc. (www.charmed.com) uses IR-based tags programmed with attendees' individual business card information. This information is exchanged between attendees as they interact with each other, and the interaction is logged and subsequently uploaded to a private website accessible by each user.

The SpotMe system by Shockfish SA (www.spotme.ch) requires participants to carry a cell-phone-size device through which they can find out who is standing within a 30 meter radius of them. Participants can be notified if a person with shared interests comes within 10 meters, and they can send messages to each other or exchange electronic business cards.

More recently, research also investigates the combined use of public displays integrated in the architectural environment with mobile devices carried by visitors of public spaces or semi-public spaces in campus-like environments of large organizations (e.g., companies, universities). The AMBIENTE team at Fraunhofer IPSI developed smart artifacts such as the GossipWall (a large wall-sized ambient display) and the ViewPort (a mobile device) in order to support informal communication between people and convey atmospheric information (www.ambient-agoras.org).

Justifications for the workshop
Although the above-described projects attempt to solve the same class of problems and use a relatively similar set of basic techniques, the developers do not have a common venue for sharing the results of their work. For example, papers describing this type of system are scattered across several barely related journals, which makes it difficult for a novice developer to assess the state of the art in the field. The proposed workshop therefore is a first attempt to bring together like-minded researchers and practitioners who focus on the area of ubiquitous systems for supporting social activities and social interaction in public spaces and at public events.

An interdisciplinary approach is required in order to develop a successful application for supporting social interactions. Without an interdisciplinary team, the application is likely to overstate some issues and completely miss other important topics. However, this is not an obvious detail for most novice developers. This workshop therefore will bring together researchers from different disciplines, such as sociology, psychology, art, computer engineering, computer graphics, and interface design, with the goal of pointing out the place and importance of each of these disciplines in the overall scope of a successful system design.

The final goal of the workshop is to identify and clarify the research challenges and directions that researchers involved with this type of work are likely to face.

WORKSHOP TOPICS
The main subject of the proposed workshop is the development and use of ubiquitous systems to support social interaction in public spaces and at public events, such as museums, conferences, trade shows, etc. Topics relevant to this subject include:
• Applications: existing commercial and experimental applications, e.g., ubiquitous systems in museums, at public gatherings, etc.
• User interface: how to provide a simple and intuitive user interface for novice users to a complex system.
• Presentation: how various types of information acquired by the ubiquitous system can be effectively presented to the end users.
• Scalability: how to accommodate a large number of simultaneous users at a potentially unlimited number of locations.
• Deployment: how to package the system so that it can be easily deployed in an environment that is not prepared for this type of application.
• Reliability: how to build robust and reliable systems that can guarantee at least some minimal set of services.
• Privacy: if the system "knows" everything about everybody currently present in the tracked ubiquitous environment, what are the privacy concerns and how best to address them?
• Security: what happens if the system is defeated and intruders gain access to all the accumulated knowledge? How can this be prevented?
• Social aspects: how the technology can be used to help form social networks and how it can be used to study them.

WORKSHOP GOALS
The goals of this workshop are
• to examine the state of the art of an emerging area of ubiquitous computing research,
• to identify relevant projects and technologies and build up a taxonomy,
• to identify challenges and issues that need to be resolved in order for this technology to proliferate in the future, and
• to provide a venue for like-minded researchers to meet and exchange ideas.

WORKSHOP ORGANIZERS
Rick Borovoy has a Bachelors Degree (1989) from Harvard in Computer Science, and a Masters (1995) and Ph.D. (2001) from the MIT Media Lab. In 1995, he co-created the Thinking Tags: interactive name tags that gave people talking to each other at a conference a simple measure of how much they had in common. This led to several more prototypes, including the Meme Tags and i-balls, and a Ph.D. thesis on "Folk Computing: Using Technology to Support Face-to-face Community Building". He recently started a company – nTAG Interactive – to deploy advanced versions of this technology in commercial settings.

Harry Brignull is a research fellow at Sussex University's Interact Lab, where he is also finishing his Ph.D. His specialist area is user-experience design for public situated displays. Harry currently works on the Dynamo project, on which they have developed a system which provides a communal surface for the sharing and exchange of information in face-to-face scenarios. On another thread of the Dynamo project, Harry has looked at the use of large displays to encourage socialising, and the issues involved in enticing users to progress from passer-by to participant.

Shahram Izadi is a research fellow in the Communications Research Group at the University of Nottingham. He is actively involved in a number of research projects at Nottingham, including Equator, a six-year Interdisciplinary Research Collaboration (IRC) supported by the EPSRC that focuses on the integration of physical and digital interaction, and Dynamo, a public multi-user interactive surface that supports the cooperative sharing and exchange of a wide range of media. Shahram has also had the opportunity to work on the Speakeasy project at PARC, where he helped engineer an interconnection technology for enabling digital devices and services to easily interoperate over both wired and wireless networks. He has published at international conferences on ubiquitous computing, CSCW and mobile computing, and has written several journal articles. He is currently working on a book chapter for the Handbook of Mobile Computing.

Volodymyr Kindratenko graduated in mathematics and computer science from the State Pedagogical University, Kirovograd, Ukraine, in 1993. He received a Ph.D. degree from the University of Antwerp, Belgium, in 1997. His research involved the development and application of image analysis techniques for Scanning Electron Microscopy imagery. From 1997 to 1998, Volodymyr Kindratenko was employed by the National Center for Supercomputing Applications (NCSA) at the University of Illinois as a Postdoctoral Research Associate in the Virtual Environments group, where he worked on the development of a distributed virtual reality system for collaborative product design. From 1998 to 2002, Volodymyr Kindratenko was employed by NCSA as a Research Scientist in the Visualization and Virtual Environments group, where he worked on the development of industrial virtual reality applications and high-end visualization systems. He is now a Research Scientist at NCSA in the Experimental Technologies Division, where he is involved with ubiquitous systems, interactive spaces, and sensors research and is the Technical Lead on the IntelliBadge™ project.

Alex Lightman is a leading writer and speaker on the future of technology and communications. He is the author of the first book on 4G wireless, Brave New Unwired World: The Digital Big Bang and the Infinite Internet, published by John Wiley in 2002, and has published nearly 100 articles for technology, business, and political publications including Red Herring, Chief Executive, and Internet World.

Lightman is the CEO of Charmed Technology (www.charmed.com), which makes wearable computers and achieved world-wide acclaim for producing 100 wearable technology fashion shows in 20 countries. He is the founding director of The 4G Society and the first Cal-(IT)2 Scholar at the California Institute for Telecommunications and Information Technology, a joint program of UCSD and UCI (http://www.calit2.net). Lightman has nearly 20 years of high technology management experience and, in addition, has experience in politics (including work for a US Senator), construction, consulting, the oil drilling industry, and the renewable energy industry. He also created, managed and received accreditation for the Nizhoni Institute, a small school and college, and produced the 100 Brave New Unwired World fashion shows featuring wearable and pervasive computing, which included many of Lightman's own inventions and designs, such as the patented Charmed Viewer display and the first Internet jewelry. Harvard Business School featured Lightman and Charmed in a case study that recognized Lightman's pioneering innovation of presenting computers as fashion. Both the show and Lightman's designs are now widely copied worldwide.

Norbert Streitz holds a Ph.D. in physics and a Ph.D. in psychology. He is the head of the research division "AMBIENTE – Workspaces of the Future" at the Fraunhofer institute IPSI in Darmstadt, Germany, where he also teaches at the Department of Computer Science of the Technical University Darmstadt. He studied mathematics, physics, chemistry, and psychology at the University of Kiel, Germany, and psychology, education, and philosophy of science at the Technical University (RWTH) of Aachen, Germany. He was a post-doc research fellow at the University of California, Berkeley, and a visiting scholar at Xerox PARC and at the Intelligent Systems Lab of MITI, Tsukuba Science City, Japan.

He is the Chair of the Steering Group of the EU-funded proactive initiative "The Disappearing Computer", a cluster of 17 projects, and, more recently, the co-chair of CONVIVIO, the EU-funded Network of Excellence on People-Centred Design of Interactive Systems. He was and still is active in various special interest groups in different scientific organizations, e.g., GI (Gesellschaft für Informatik), DGP (Deutsche Gesellschaft für Psychologie), ACM (Association for Computing Machinery), and EACE (European Association for Cognitive Ergonomics).

His research interests range from Cognitive Science, Human-Computer Interaction, Hypertext/Hypermedia and Computer-Supported Cooperative Work to Interaction Design for Ambient/Pervasive/Ubiquitous Computing in the context of an integrated design of real and virtual worlds. He and his team are known, e.g., for the development of Roomware®, the integration of walls and furniture with information technology, and the design of Smart Artefacts. The roomware components, developed in close cooperation with an office furniture manufacturer, won several design prizes. In the EU-funded project "Ambient Agoras: Dynamic Information Clouds in a Hybrid World", he is now working on situated interaction and services employing wall-sized ambient displays and handheld mobile devices in the context of Cooperative Buildings.

He has published/edited 15 books and more than 90 papers presented at the relevant national and international conferences or in journals in his areas of interest. He serves regularly on the program committees of these conferences and on several editorial boards. In the context of his interest in design and architecture, he was also appointed as a design competition jury member. He is often invited to present keynote speeches at scientific as well as commercial events in Europe, the USA, South America, and Japan.
Intimate (Ubiquitous) Computing
Kay used 'intimate' as a modifier to computing in an essay reflecting on the relationship between education, computers and networks [10]. He wrote, "In the near future, all the representations that human beings have invented will be instantly accessible anywhere in the world on intimate, notebook-size computers." This conjoining of intimate computers and ubiquitous computing within an issue of Scientific American dedicated to Communications, Computers and Networks is perhaps not a coincidence – both represent complementary parts of a future vision.

How has this conjunction been expressed more recently? Broadly, there are three manifestations in the (predominantly) technology literature: 1. intimacy as cognitive and emotional closeness with technology, where the technology (typically unidirectionally) may be aware of, and responsive to, our intentions, actions and feelings. Here our technologies know us intimately; we may or may not know them intimately. 2. intimacy as physical closeness with technology, both on the body and/or within the body. 3. intimacy through technology: technology that can express our intentions, actions and feelings toward others.

In the first category, Lamming and Flynn at Rank Xerox Research Center in the UK in the mid-1990s invoked 'intimate computing' as a broader paradigm within which to situate their 'forget-me-not' memory aid. They wrote, "The more the intimate computer knows about you, the greater its potential value to you. While personal computing provides you with access to its own working context – often a virtual desktop – intimate computing provides your computer with access to your real context." [12]. Here 'intimate computing' (or the 'intimate computer') refers to the depth of knowledge a technology has of its user.

'Intimate computing' has also occasionally been used to describe a different kind of intimacy – that of closeness to the physical body. In 2002, the term appears in the International Journal of Medical Informatics along with grid computing and micro-laboratory computing to produce "The fusion of above technologies with smart clothes, wearable sensors, and distributed computing components over the person will introduce the age of intimate computing" [20]. Here 'intimate computing' is conflated with wearable computing; elsewhere intimate computing is even subsumed under the label of wearable computing [2].

Crossing the boundary of skin, Kurzweil paints a vision of the future that centralizes a communication network of nanobots in the body and brain. He states, "We are growing more intimate with our technology. Computers started out as large remote machines in air-conditioned rooms tended by white-coated technicians. Subsequently, they moved onto our desks, then under our arms, and now in our pockets. Soon, we'll routinely put them inside our bodies and brains. Ultimately we will become more nonbiological than biological." [11]

Finally, intimate computing has also referred to technologies that enhance or make possible forms of intimacy between remote people that would normally only be possible if they were proximate. Examples include explicit actions (e.g. erotically directed exoskeletons [19]), non-verbal expressions of affection or "missing" [22], and computationally enhanced objects, like beds, that offer "a shared virtual space for bridging the distance between two remotely located individuals through aural, visual, and tactile manifestations of subtle emotional qualities." [5]. These computationally enhanced objects are all the more effective because they themselves are rich (culturally specific) signifiers. Dodge states of the bed that it is "very 'loaded' with meaning, as we have strong emotional associations towards such intimate and personal experiences" [5].

INTIMATE COMPUTING TODAY AND TOMORROW
So where are we to go with intimate computing in the age of ubiquitous and proactive computing and the tentative realities of pervasive computing [23]? Clearly, as we move to the possibility of computing beyond the desktop and home office, to wireless hubs and hotspots, and from fixed devices to a stunning array of mobile and miniature form factors, the need to account for the diversities of people's embodied, daily life starts to impose itself into the debate. We already worry about issues of privacy, surveillance, security, risk and trust – the first accountings of what it might mean for individual users to exist within a world of seamless computing. And then there are issues of scale – ubiquitous computing is a far easier vision to build toward. It promises a sense of scale and scalability, of being able to design a general tool and customize it where a local solution is needed. But intimate computing implies a sense of detail; it is about supporting a diversity of people, bodies, desires, ecologies and niches.

THE WORKSHOP: Outlining a Research Agenda for Intimate Computing
In this workshop, we address the relationship of people to ubiquitous computing, using notions of 'intimacy' as a lens through which to envisage future computing landscapes, but also future design practices. We consider the ways ubiquitous computing might support the small-scale realities of daily life, interpersonal relations, and sociality, bearing in mind the diversity of cultural practices and values that arise as we move beyond an American context.

We perceive four interrelated perspectives and strategies for achieving these goals: (1) deriving understandings of people's nuanced, day-to-day practices; (2) elaborating cultural sensitivities; (3) revisioning notions of mediated intimacy, through explorations of play and playfulness; and (4) exploring new concepts and methods for design. Below we elaborate on these perspectives:

1. Nuanced practices
A sense of intimacy made its way into Weiser's thinking about ubiquitous computing. In collaboration with PARC's anthropologists, he and his team became aware of ways in which people's daily social practices impacted their consumption and understanding of computing. They looked at the routine, finely grained, and socially ordered ways in which people use their bodies in the world to see, hear, move, interact, express and manage emotion, and pondered "how were computers embedded within the complex social framework of daily activity, and how did they interplay with the rest of our densely woven physical environment (also known as the 'real world')?" [27]. This consideration of social frameworks and physical environments led Weiser's team to propose "calm computing" as a way of managing the consequences of a ubiquitous computing environment. Calm computing is concerned with people in their day-to-day world, with affective response (beyond psycho-physiological measures of arousal), with the body, with a

As ubiquitous computing researchers, we must be aware of this human tendency to play, and use it to our advantage. When does play occur? How does it begin and end? When is it appropriate or inappropriate? What elements give rise to play? The understanding of play may affect our views about the origin and experience of human intimacy.

4. New paradigms for design
It is hard to imagine that the computer, an icon of modernity, high technology and the cutting edge, could in some ways be behind the times. However, its association with modernity marks it as old-fashioned; as a product of modernity the computer is highly functional with a minimalist aesthetic. It approaches the modernist ideal of pure functionality with little necessity for physical presence.
sense of the body in the world, and with the inner workings Computer chips become smaller and smaller black boxes
and state of that body. This notion of calmness and calm offering more and more functionality, but not necessarily
technology thus echoes the sense, if not sensibility, of more intimacy.
intimate computing. [26] Bergman states modernity has been admired for its “high
2. Culture Matters seriousness, the moral purity and integrity, the strength of
its will to change”, but he also goes on to note “At the same
Weiser also credits anthropologists with helping him see the time, it is impossible to miss some ominous undertows: a
slippage between cultural ideals and cultural praxis as it lack of empathy, an emotional aridity, a narrowness of
related to the use of computing technology in the work imaginative range.”[4]. Modernity in art, design,
place. One of the issues that is very clear when we engage architecture and fashion are associated with aesthetics and
in a close reading of ubiquitous computing is how very design principles from the first half of the twentieth century
grounded it is in Western practices, which makes sense [7]. Since then, movements in pop art, deconstructivism,
given its points of origin and the realities of resource and and postmodernism have invited us beyond functionalism to
infrastructure development. However, there have been new ways of thinking about how to make the impersonal
several significant, unanticipated changes in the last decade, computer more intimate. There are lessons in consumer
in particular the leapfrogging of developing countries into product design; the founder of Swatch focused on the
wireless networks and whole-sale adoption of mobile emotional impact of the watch to start his business,
phones. It is important then to explore some of the ways in designing the watch as a fashion accessory and invoking the
which intimacy is culturally constructed, and as such might ideals of pop art “fun, change, variety, irreverence, wit and
play out differently in different geographies and cultural disposability” [21]. What might it mean to apply such
blocks [3;9]. We also need to explore cultural differences lessons to the design of ubiquitous computing systems?
in the emotional significance and resonance of different
objects. Goals of the workshop
3. Can Ubiquitous Computing come out and Play? Taking the above perspectives as a springboard for
discussion, this workshop has the following aims:
“You can discover more about a person in an hour of play
than in a year of conversation” (Plato 427-347 BC). Play x To bring together a multi-disciplinary group of
provides a mechanism to experiment with, enter into, and practitioners to discus what it might mean to account
share intimacy. The correlation of play and intimacy is so for intimacy in ubiquitous computing and to consider
strong that elements of one rarely occur without the other. It issues like: How do notions of intimacy change over
is during play that we make use of learning devices, treat time and place? How do notions of intimacy differ as
toys, people and objects in novel ways, experiment with we engage in different social groups and social
new skills, and adopt different social roles [16, 17, 18]. We activities? When does intimacy lead to or become
make two important observations about play: (1) humans intrusion? Invasion? Stalking?
seamlessly move in and out of the context of play and (2) x To elaborate new methods and models in design
when at play, humans are more exploratory and more practice that can accommodate designing for intimacy.
willing to entertain ambiguity in their expectations about
x To develop an agenda for future collaborations,
people, artifacts, interfaces, and tools. Such conditions may
research and design in the area of intimate computing
more easily give rise to intimacy. Such a scenario
and identify critical opportunities in this space.
represents a different design scenario from designing for
usability and utility [6].
Workshop Activities
We will balance presentations and discussion with collaborative, hands-on creative activities. These activities will include:
• Cluster analysis, including questions like: what does intimacy cluster with semantically (i.e., identity, uniqueness, personalization, friendship, connection)?
• Designing intimacy within, upon and beyond the skin: build your own membrane/skin; designing supra-skin technological auras; designing for a reflective ethics.

Workshop Organizers
The organizers of this workshop come from a wide range of backgrounds, including cultural anthropology, computer science, psychology and design. Together they have considerable experience in workshop organization across several disciplines.

REFERENCES
1. Adelman, C. What will I become? Play helps with the answer. Play and Culture, Vol. 3, pp. 193-205, 1990.
2. Baber, C., Haniff, D.J., Woolley, S.I. Contrasting paradigms for the development of wearable computers. IBM Systems Journal, 38, 2, pp. 551-565, 1999.
3. Bell, G., Blythe, M., Gaver, B., Sengers, P., Wright, P. Designing Culturally Situated Technologies for the Home. Proceedings of CHI '03. ACM Press, 2003.
4. Bergman, M. The Experience of Modernity. In Thackara, J. (Ed.) Design after Modernism. Thames and Hudson, London, 1988.
5. Dodge, C. The Bed: A Medium for Intimate Communication. Extended Abstracts of CHI '97. ACM Press, pp. 371-372, 1997.
6. Gaver, W., Beaver, J. and Benford, S. Ambiguity as a resource for design. Proc. CHI 2003. ACM Press, 2003.
7. Glancey, J. The Story of Architecture. Dorling Kindersley, New York, 2000.
8. Katagiri, Y., Nass, C. and Takeuchi, Y. Cross-cultural studies of the computers are social actors paradigm: The case of reciprocity. In M.J. Smith, G. Salvendy, D. Harris, and R. Koubek (Eds.), Usability Evaluation and Interface Design. Lawrence Erlbaum Associates, 2001.
9. Kato, M. Cute Culture: The Japanese obsession with cute icons is rooted in cultural tradition. Eye: The International Review of Graphic Design, Vol. 11, #44, London, pp. 58-64, 2002.
10. Kay, A. Computers, networks and education. Scientific American, 265, pp. 100-107, September 1991.
11. Kurzweil, R. We Are Becoming Cyborgs, March 15, 2002, http://www.kurzweilai.net/
12. Lamming, M., Flynn, M. Forget-me-not: Intimate Computing in Support of Human Memory. In Proc. FRIEND21: Int. Symposium on Next Generation Human Interfaces, pp. 125-128, 1994.
13. Lupton, E. Skin: Surface, Substance and Design. New York: Princeton Architectural Press, 2002.
14. McDonough, W. and Braungart, M. Cradle to Cradle: Remaking the Way We Make Things. NY: North Star Press, 2002.
15. Maines, R. The Technology of Orgasm: "Hysteria," the Vibrator, and Women's Sexual Satisfaction. Johns Hopkins University Press, 2001.
16. Newman, L.S. Intentional and unintentional memory in young children: Remembering vs. playing. J. of Exp. Child Psychology, 50, pp. 243-258, 1990.
17. O'Leary, J. Toy selection that can shape a child's future. The Times, 1990.
18. Piaget, J. and Inhelder, B. The Psychology of the Child. Basic Books, 1969.
19. Project Paradise, SIGGRAPH 1998. eserver.org/cultronix/pparadise/happinessflows.html
20. Silva, J.S. and Ball, M.J. Prognosis for year 2013. International Journal of Medical Informatics, 66, pp. 45-49, 2002.
21. Sparke, P. A Century of Design. London: Reed Consumer Books, 1998.
22. Strong, R. and Gaver, W. Feather, Scent and Shaker: Supporting Simple Intimacy in Videos. CSCW '96, pp. 29-30, 1996.
23. Tennenhouse, D. Proactive Computing. CACM, 43, 5, pp. 43-50, May 2000.
24. Thackara, J. (Ed.) Design after Modernism. London: Thames and Hudson, 1988.
25. Weiser, M. The Computer for the Twenty-first Century. Scientific American, 265, 3, pp. 94-104, 1991.
26. Weiser, M. and Brown, J.S. The Coming Age of Calm Technology. In Denning, P. and Metcalfe, R. (Eds.), Beyond Calculation: The Next Fifty Years of Computing. NY: Springer-Verlag, 1997.
27. Weiser, M., Gold, R., Brown, J.S. The origins of ubiquitous computing research at PARC in the late 1980s. IBM Systems Journal, 38, 4, pp. 693-696, 1999.
Ubicomp 2003 Workshop Proposal
on Ubiquitous Commerce
• Health- and home-care
• Industrial applications
• Automotive telematics

WORKSHOP GOALS
Ubiquitous computing has been recognized as an inherently interdisciplinary research field, requiring collaboration between several technical disciplines including, but not restricted to, computing, telecommunications, human-computer interfaces and industrial design. In addition to these, ubiquitous commerce requires contributions from the product development, finance, business process management, standardization, law, consumer experience design and social science points of view to produce useful results. However, researchers with the required expertise do not have a forum to exchange ideas and concerns and develop common agendas and roadmaps for research.

The proposed ubiquitous commerce workshop will aim to bring together researchers with diverse backgrounds to:
• Share understandings and experiences as well as recognize each other's concerns.
• Foster collaboration across research communities.
• Create effective channels of communication to transfer lessons learnt from one community to the other.
• Co-develop a roadmap for future research directions.

WORKSHOP ORGANISERS
The workshop organizers have worked previously on the design and development of ubiquitous commerce systems and recognize the importance of interdisciplinary research and development teams. They have found interaction between their respective disciplines to be critical in their previous work, and propose this workshop to promote this type of interdisciplinary cross-fertilization. The three organizers have a broad range of international experience and complementary expertise: GR is conducting research in the various technical aspects of ubiquitous commerce with emphasis on retail and identity management; AG is developing prototype ubiquitous commerce systems aiming to transfer the technologies to practitioners; and PK is concerned with the implications of ubiquitous commerce for businesses.

George Roussos
George Roussos is a Lecturer in IT Applications at the School of Computer Science and Information Systems, Birkbeck College, University of London, UK. Before joining Birkbeck he was the Research and Development Manager of Pouliadis Associates Corporation, Athens, Greece, where he was responsible for the strategic development plan for new IT products, primarily in the areas of knowledge management and the mobile Internet, as well as for international collaborations in new technology fields. He has also held positions as an Internet Security Officer at the Ministry of Defence, Athens, Greece, where he designed the Hellenic armed forces Internet exchange and domain name systems, and as a research fellow at Imperial College, London, UK, where he conducted research in distributed systems and algorithms. His current research interests include ubiquitous and pervasive computing and commerce, with particular emphasis on ubiquitous narratives, trailblazing and retail, as well as active rules for sensor networks. George is also a director of Netsmat Technologies Ltd, a start-up providing home care applications over Digital TV infrastructures to the NHS. He is a member of the ACM and an associate of the IEEE and the IEEE Computer Society. He holds a B.Sc. in (Pure) Mathematics from the University of Athens, Greece, an M.Sc. in Numerical Analysis and Computing from the University of Manchester Institute of Science and Technology, UK, and a Ph.D. in multi-resolution computer-aided geometric design from Imperial College, University of London, UK.

Anatole Gershman
Anatole Gershman is a partner at Accenture, one of the world's largest consulting companies, and Director of Research at Accenture Technology Labs. Under his direction, the laboratory has been conducting extensive applied research in ubiquitous commerce across many industries. Anatole Gershman holds a Ph.D. in Computer Science from Yale University and has been conducting or directing applied research for over 25 years.

Panos Kourouthanassis
Panos Kourouthanassis is a Research Officer at ELTRUN, the eBusiness Center hosted at the Athens University of Economics and Business (AUEB), Athens, Greece. His research interests include information systems design, ubiquitous computing and mobile business. Panos holds a B.Sc. in Information Systems and an M.Sc. in Decision Sciences, both from AUEB, and is preparing his Ph.D. thesis in pervasive retail information systems at the Department of Management Science and Technology, AUEB, Athens, Greece.

WORKSHOP ACTIVITIES
This workshop will attempt to attract participants with technical, business, legal and economics backgrounds, as well as with experience in consumer culture research and the social implications of changes brought about by new methods of conducting commerce. The workshop will be organized around position statements and panel discussions. Participation will be invited on the basis of relevance and originality of contributions, and so as to represent the multidisciplinary nature of the workshop.
AIMS2003: Artificial Intelligence In Mobile Systems
SUMMARY
Today's information technology is rapidly moving small computerised consumer devices and hi-tech personal appliances from the desks of research labs into sales shelves and our daily life. Various platforms, from low-performance PDAs, through embedded computers in cameras, cars, or mobile phones, up to high-performance wearable computers, will become essential tools in many situations for private and professional use. These systems require new interaction metaphors and methods of control. Well-known interaction devices, such as mouse and keyboard, are not necessarily available, rendering user interfaces that rely on them inappropriate. Other resources such as power or networking bandwidth may be limited or unreliable depending on time and location. Moreover, the physical environment and context are changing rapidly and must be taken into account appropriately. In the future the focus will shift from single users, using single services on single artefacts, towards groups of users collaborating using a combination of different services in physical spaces equipped with personal as well as public, dynamically configured artefacts (ubiquitous computing or ambient technology).

Therefore, the main challenge for the success of mobile systems is the design of smart user interfaces and software that allows ubiquitous and easy access to personal information and that is flexible enough to handle changes in user context and availability of resources. Artificial intelligence has investigated the problems of making user interfaces smart and cooperative for many years, and has lately been attacking the challenges of explicitly dealing with limited resources. AI methods provide a range of solutions for those problems and currently seem to be among the most promising tools for building location- and situation-aware mobile systems that support users at their best and behave cooperatively in unobtrusive ways.

AIMS 2003 will be the fourth workshop in a row, as a successor of AIMS 2000 (with ECAI 2000, Berlin), AIMS 2001 (with IJCAI '01, Seattle), and AIMS 2002 (with ECAI '02), all organised by the same persons and institutions. In order to foster the investigation of AI methods in ubiquitous computing scenarios, AIMS 2003 will be held in conjunction with Ubicomp 2003, a combination that we believe will be very fruitful for both research areas.

SCOPE
In the AIMS 2003 workshop we intend to bring together researchers working in the sub-fields of AI described above and those working on the design of mobile applications and devices (wearable as well as environmental). The scope of interest includes, but is not limited to:
• Location awareness
• Context awareness
• Interaction metaphors and interaction devices for mobile/ubiquitous systems
• Smart user interfaces for mobile/ubiquitous systems
• Multi-modal interfaces for mobile/ubiquitous systems
• Situation-adapted user interfaces
• Adaptation to limited resources
• Fault tolerance
• Service discovery, service description languages and standards

We encourage submissions from researchers and practitioners in academia, industry, government, and consulting. Students, researchers and practitioners are invited to submit papers (max. 8 pages) describing original, novel, and inspirational work. The submissions will be reviewed by an international group of researchers and practitioners.
PROGRAM COMMITTEE:
http://w5.cs.uni-sb.de/~krueger/aims2003/
Author Index
Paulos, Eric 3, 316
Pering, Trevor 97
Petzold, Jan 191
Pham, Thanh 77
Phelps, Ted 100
Picard, Rosalind 271
Pinhanez, Claudio 265
Plewe, Daniela 277
Pointer, David 312
Policroniades, Calicrates 195
Posegga, Joachim 298
Poupart, Pascal 219
Prante, Thorsten 267, 277
Röcker, Carsten 277
Raghunathan, Vijay 97
Rajani, Rakhi 69
Randell, Cliff 100
Rao, Srinivas G. 205
Rashid, Al Mamunur 84
Rea, Adam 21
Rebula, John 273
Rehg, James 283
Reitberger, Wolfgang 131
Rhee, Sokwoo 211
Robinson, James G. 108, 111
Robinson, Peter 169
Robinson, Philip 9, 298
Rode, Jennifer A. 123
Rogers, Yvonne 100
Roman, Manuel 309
Ron, Ruth 104
Roussos, George 320
Russell, Daniel M. 149
Rydenhag, Tobias 179
Sanneblad, Johan 279
Sato, Yoichi 153
Schiele, Bernt 155, 161, 207, 269, 306
Schilit, Bill N. 74
Schmidt, Albrecht 9, 155
Scott, James 291
Seetharam, Deva 211
Semper, Robert J. 223
Serita, Yoichiro 131
Shankar, Narendar 298
Shapiro, R. Benjamin 44
Shell, Jeffrey 281
Shell, Jeffrey S. 77
Shen, Jia 257
Shimada, Yoshihiro 157
Singer, Eric 13
Smith, Brian K. 38
Smith, Marc 115
Sohn, Changuk 77
Somani, Ramswaroop 283
Soroczak, Suzanne 84
Spasojevic, Mirjana 69, 223
Steinbach, Leonard 119
Stenzel, Richard 267, 277
Stoddard, Steve 273
Streitz, Norbert 277, 312
Stringer, Mark 123
Subramanian, Anand Prabhu 203
Sue, Alison 149
Sumi, Yasuyuki 193
Summet, Jay 283
Sundar, Murali 97
Tabert, Jason 74
Tallyn, Ella 69
Tanaka, Yu 157
Tandler, Peter 306
Terada, Tsutomu 213
Terveen, Loren 300
Toye, Eleanor F. 123
Trumler, Wolfgang 191
Truong, Khai N. 137
Tsukamoto, Masahiko 213
Ungerer, Theo 191
Ushida, Keita 157
Valgårda, Anna 165
VanArsdale, David 227
van Alphen, Daniel 277
van Berkel, Kees 199
Van Kleek, Max 181
van Loenen, Evert 199
Vekaria, Pooja C. 137
Vertegaal, Roel 77, 281
Vidales, Pablo 195
Vildjiounaite, Elena 215
Vina, Victor 127
Vogt, Harald 298
Wan, Dadong 285, 294
Wang, Ningya 211
Want, Roy 97
Wei, Sha Xin 131
Weller, Michael 229
White, David Randall 137
Wilson, Daniel 141
Winograd, Terry 35
Witchey, Holly R. 119
Wolf, Ahmi 56
Woo, Woontack 177
Wren, Christopher R. 205
Xiao, Jason 211
Yamaguchi, Akira 189
Yoshihisa, Tomoki 213
Zimmer, Tobias 9, 217