
Preface

UbiComp 2003, the Fifth International Conference on Ubiquitous Computing, is the premier forum for
presentation of research in all areas relating to the design, implementation, deployment and evaluation of
ubiquitous computing technologies. The conference brings together leading researchers from a variety of
disciplines, perspectives and geographical areas, who are exploring the implications of computing as it moves
beyond the desktop and becomes increasingly interwoven into the fabric of our lives.

The full papers and technical notes for UbiComp 2003 are published in the Springer-Verlag Lecture Notes in
Computer Science (LNCS) series, volume 2864. In addition to papers and technotes, UbiComp 2003 is hosting a
wide variety of other presentation forums, including demonstrations, interactive posters, a doctoral colloquium,
a video program, twelve workshops, and a panel. This broad selection of venues and media within the conference
is one of the great strengths of the UbiComp series, and this Adjunct Proceedings volume includes extended
abstracts from each of these forums. While the acceptance rates in these categories were higher than for the full
papers and technical notes, all submissions were subjected to a peer review process designed to ensure high quality.

UbiComp 2003 includes a Panel track, chaired by Gerd Kortuem. The panel itself, chaired by Eric Paulos
on the final day of the conference, features participants sharing their views on the prospects for new forms of
“mobile play” engendered by ubiquitous computing technologies.

The Demonstrations program, co-chaired by Eric Paulos and Allison Woodruff, with assistance from
Elizabeth Goodman, includes approximately forty examples of ubiquitous computing technology, applications
and art, many of which provide opportunities for attendees to directly experience the impacts of ubiquitous
computing. The large collection of demonstrations includes a living sculpture, a WiFi game in the streets of
Seattle, cardboard boxes for configuring an information network, and new location-tracking systems and
extended-sensor ‘motes’.

Our Interactive Posters track, co-chaired by Marc Langheinrich and Yasuto Nakanishi, offers a venue for the
presentation of late-breaking and/or controversial results in an informal and interactive setting. This year, over
forty posters were accepted, representing a variety of scientific backgrounds, and including many researchers who
are new to the field of ubiquitous computing.

The participants in the Doctoral Colloquium, chaired by Tom Rodden, are given the opportunity to present
their thesis research plans to a panel of senior researchers representing several areas within ubiquitous computing,
and receive focused, constructive feedback. These students may also choose to present their work to the larger
conference community as posters.

The Videos program, chaired by Jason Brotherton and Peter Ljungstrand, offers another format in which
researchers in ubiquitous computing can present their work. Videos give authors an opportunity to show their
work in a scenario of use, conveying aspects of the context that may be difficult to replicate on-site at the
conference venue. In addition to the extended abstracts in this volume, the twelve videos this year are also
distributed in DVD format.

Finally, twelve Workshops precede the main conference this year, offering the chance for small groups of
participants to share understandings and experiences, to foster research communities, to learn from each other
and to envision future directions. This year’s workshop program, chaired by Michael Beigl, covers many
emerging topics in ubiquitous computing, including healthcare, commerce, privacy and intimacy.

We are very grateful to the chairs and authors in all of these participation categories for providing attendees
with new perspectives on and experiences of ubiquitous computing. We also want to express our immense
gratitude to Khai Truong, our webmaster and design guru, for his outstanding work on the conference web pages
and on the visual identity for the conference, which appears on the web site, the student volunteer tee-shirts, the
conference program, the conference DVD cover, and, of course, the cover of this volume.

Several organizations helped provide financial and logistical assistance for the conference, and we gratefully
acknowledge their support. The donations by our corporate benefactor, Intel, and by our corporate sponsors, Fuji
Xerox Palo Alto Laboratory, Hewlett-Packard Laboratories, IBM Research, Microsoft Research, Nokia Research
and Smart Technologies, help us provide a world-class conference experience for all attendees.

Finally, we wish to thank all the people attending the conference, as it is the opportunities to meet and interact
with all of you interesting people that make the planning of such a momentous event a worthwhile endeavor for
all involved!

October 2003 Joe McCarthy, Conference Chair


James Scott, Publications Chair

N.B. Copyright © 2003 is retained by the respective authors of each of the works contained herein.

Conference Organization

Conference Chair
Joe McCarthy Intel Research Seattle (USA)
Program Chairs
Anind K. Dey Intel Research Berkeley (USA)
Albrecht Schmidt University of Munich (Germany)
Technical Notes Chairs
Tim Kindberg Hewlett-Packard Labs (USA)
Bernt Schiele ETH Zurich (Switzerland)
Demonstrations Chairs
Eric Paulos Intel Research Berkeley (USA)
Allison Woodruff Palo Alto Research Center (USA)
Interactive Posters Chairs
Marc Langheinrich ETH Zurich (Switzerland)
Yasuto Nakanishi University of Electro-Communications (Japan)
Videos Chairs
Peter Ljungstrand PLAY, Interactive Institute (Sweden)
Jason Brotherton Ball State University (USA), and
University College London (UK)
Doctoral Colloquium Chair
Tom Rodden Nottingham University (UK)
Workshops Chairs
Michael Beigl TecO, University of Karlsruhe (Germany)
Christian Decker TecO, University of Karlsruhe (Germany)
Panels Chair
Gerd Kortuem Lancaster University (UK)
Student Volunteers Chair
Stephen Voida Georgia Institute of Technology (USA)
A/V & Computing Chair
James Gurganus Intel Research (USA)
Treasurer
David McDonald University of Washington (USA)
Publications Chair
James Scott Intel Research Cambridge (UK)
Publicity Chair
Mike Hazas Lancaster University (UK)
Webmaster
Khai Truong Georgia Institute of Technology (USA)
Local Arrangements
Ellen Do University of Washington (USA)
Conference Manager
Debra Bryant University of Washington (USA)
Demonstrations: Program Committee
Jeff Burke University of California, Los Angeles (USA)
Elizabeth Churchill FX Palo Alto Laboratory (USA)
Mike Fraser University of Nottingham (UK)
Bill Gaver Royal College of Art (UK)
Lars Erik Holmquist Viktoria Institute (Sweden)
Sherry Hsi The Exploratorium (USA)
Mark Newman Palo Alto Research Center (USA)
Kenton O’Hara Appliance Studio (UK)
Dan O’Sullivan New York University (USA)
James Patten MIT Media Lab (USA)
Marc Smith Microsoft Research (USA)
Mark Smith HP Labs (USA)
John Stasko Georgia Institute of Technology (USA)
Lyndsay Williams Microsoft Research Cambridge (UK)
Ken Wood Microsoft Research Cambridge (UK)

Interactive Posters: Reviewers
Karl-Petter Åkesson Swedish Institute of Computer Science (Sweden)
Stavros Antifakos ETH Zurich (Switzerland)
Michael Beigl TecO, Karlsruhe University (Germany)
Jan Beutel ETH Zurich (Switzerland)
Sonja Buchegger EPF Lausanne (Switzerland)
Thomas Buchholz University of Munich (Germany)
Eleni Christopoulou University of Patras (Greece)
Esko Dijk Technical University Eindhoven (Netherlands)
Hannes Frey University of Trier (Germany)
Masaaki Fukumoto NTT DoCoMo (Japan)
Daniel Goergen University of Trier (Germany)
Tero Häkkinen Tampere University of Technology (Finland)
Kasper Hallenborg University of Southern Denmark (Denmark)
Robert Harle University of Cambridge (UK)
Dominik Heckmann Saarland University (Germany)
Jan Humble University of Nottingham (UK), and
Swedish Institute of Computer Science (Sweden)
Sozo Inoue Kyushu University (Japan)
Matthias Joest European Media Laboratory (Germany)
Alexandros Karypidis University of Thessaly (Greece)
Oliver Kasten ETH Zurich (Switzerland)
Hiromitsu Kato Hitachi Systems Development Laboratory (Japan)
Yoshihiro Kawahara Tokyo University (Japan)
Nicky Kern ETH Zurich (Switzerland)
Tatiana Lashina Philips Research Laboratories (Netherlands)
Peter Lönnqvist Stockholm University (Sweden)
Filipe Meneses University of Minho (Portugal)
Florian Michahelles ETH Zurich (Switzerland)
Martin Mühlenbrock Xerox Research Centre Europe (France)
Tatsuo Nakajima Waseda University (Japan)
Yoshiyuki Nakamura AIST (Japan)
Andronikos Nedos Trinity College Dublin (Ireland)
Chaki Ng Harvard University (USA)
Stina Nylander Swedish Institute of Computer Science (SICS) (Sweden)

Kenji Oka Tokyo University (Japan)
Mario Pichler Software Competence Center Hagenberg (Austria)
Jaana Rantanen Tampere University of Technology (Finland)
Steffen Reymann Philips Research Laboratories (UK)
Dimitris Riggas Computer Technology Institute (Greece)
Matthias Ringwald ETH Zurich (Switzerland)
Michael Rohs ETH Zurich (Switzerland)
Tobias Rydenhag PLAY, Interactive Institute (Sweden)
Yutaka Sakane Shizuoka University (Japan)
Ichiro Siio Tamagawa University (Japan)
Martin Strohbach Lancaster University (UK)
Yasuyuki Sumi Kyoto University (Japan)
Tsutomu Terada Osaka University (Japan)
Tore Urnes Telenor Research and Development (Norway)
Julien Vayssiere INRIA (France)
Kousuke Yamazaki Tokyo University (Japan)
Tobias Zimmer TecO, Karlsruhe University (Germany)

Videos: Reviewers
Harold Thimbleby University College London (UK)
Matt Jones University of Waikato (New Zealand)
Armando Fox Stanford University (USA)
Brad Johanson Stanford University (USA)
Trevor Pering Intel Research (USA)
Chris Long Carnegie Mellon University (USA)
Khai Truong Georgia Institute of Technology (USA)
Marco Gruteser University of Colorado, Boulder (USA)
Merrie Ringel Stanford University (USA)
James Fogarty Carnegie Mellon University (USA)
Desney Tan Carnegie Mellon University (USA)

Sponsors
Corporate Benefactor Intel

Corporate Sponsors Fuji Xerox Palo Alto Laboratory
Hewlett-Packard Laboratories
IBM Research
Microsoft Research
Nokia Research
Smart Technologies

Supporting Societies

UbiComp 2003 enjoys in-cooperation status with the following special interest groups of the Association for
Computing Machinery (ACM):
SIGCHI (Computer-Human Interaction)
SIGSOFT (Software Engineering)

Table of Contents

I Panel
Mobile Play: Blogging, Tagging, and Messaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Eric Paulos

II Demonstrations
Context Nuggets: A Smart-Its Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Michael Beigl, Albert Krohn, Christian Decker, Philip Robinson, Tobias Zimmer, Hans Gellersen, and
Albrecht Schmidt

Eos Pods: Wireless Devices for Interactive Musical Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
David Bianciardi, Tom Igoe, and Eric Singer

Wishing Well Demonstration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Tim Brooke and Margaret Morris

Extended Sensor Mote Interfaces for Ubiquitous Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Waylon Brunette, Adam Rea, and Gaetano Borriello

Palimpsests on Public View: Annotating Community Content with Personal Devices . . . . . . . . . . . . . . . . . . . 24
Scott Carter, Elizabeth Churchill, Laurent Denoue, Jonathan Helfman, Paul Murphy, and Les Nelson

Platypus Amoeba . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Ariel Churi and Vivian Lin

M-Views: A System for Location-Based Storytelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
David Crow, Pengkai Pan, Lilly Kam, and Glorianna Davenport

Stanford Interactive Workspaces Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Armando Fox and Terry Winograd

Picture of Health: Photography Use in Diabetes Self-Care . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Jeana Frost and Brian K. Smith

Noderunner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Yury Gitman and Carlos J. Gomez de Llarena

UCSD ActiveCampus — Mobile Wireless Technology for Community-Oriented Ubiquitous Computing . . . . 44
William G. Griswold, Neil G. Alldrin, Robert Boyer, Steven W. Brown, Timothy J. Foley, Charles P.
Lucas, Neil J. McCurdy, and R. Benjamin Shapiro

The Location Stack: Multi-sensor Fusion in Action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Jeffrey Hightower and Gaetano Borriello

A Novel Interaction Style for Handheld Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
James Hudson and Alan Parkes

WiFisense: The Wearable Wireless Network Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Milena Iossifova and Ahmi Wolf

Tejp: Ubiquitous Computing as Expressive Means of Personalising Public Space . . . . . . . . . . . . . . . . . . . . . . . 58
Margot Jacobs, Lalya Gaye, and Lars Erik Holmquist
Telemurals: Catalytic Connections for Remote Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Karrie Karahalios and Judith Donath

Fluidtime: Developing an Ubiquitous Time Information System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Michael Kieslinger

Pulp Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Tim Kindberg, Rakhi Rajani, Mirjana Spasojevic, and Ella Tallyn

Living Sculpture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Yves Amu Klein and Michael Hudson

Place Lab’s First Step: A Location-Enhanced Conference Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Anthony LaMarca, David McDonald, Bill N. Schilit, William G. Griswold, Gaetano Borriello, Eithon
Cadag, and Jason Tabert

AuraLamp: Contextual Speech Recognition in an Eye Contact Sensing Light Appliance . . . . . . . . . . . . . . . . . 77
Aadil Mamuji, Roel Vertegaal, Jeffrey S. Shell, Thanh Pham, and Changuk Sohn

The Ubiquitous Computing Resource Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Joseph F. McCarthy, J. R. Jenkins, and David G. Hendry

Proactive Displays & The Experience UbiComp Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Joseph F. McCarthy, David H. Nguyen, Al Mamunur Rashid, and Suzanne Soroczak

Networking Pets and People . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Dan Mikesell

Responsive Doors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Greg Niemeyer

Squeeze Me: A Portable Biofeedback Device for Children . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Amy Parness, Ed Guttman, and Christine Brumback

The Personal Server: Personal Content for Situated Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Trevor Pering, John Light, Murali Sundar, Gillian Hayes, Vijay Raghunathan, Eric Pattison, and Roy
Want

Ambient Wood: Demonstration of a Digitally Enhanced Field Trip for Schoolchildren . . . . . . . . . . . . . . . . . . . 100
Cliff Randell, Ted Phelps, and Yvonne Rogers

Wall Fold: The Space between 0 and 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Ruth Ron

Digital Poetry Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
James G. Robinson

The Verse-O-Matic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
James G. Robinson

AURA: A Mobile Platform for Object and Location Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Marc Smith, Duncan Davenport, and Howard Hwa

Anatomy of a Museum Interactive: “Exploring Picasso’s ‘La Vie’ ” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Leonard Steinbach and Holly R. Witchey

Facilitating Argument in Physical Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Mark Stringer, Jennifer A. Rode, Alan F. Blackwell, and Eleanor F. Toye

Box: Open System to Design your own Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Victor Vina

Demonstrations of Expressive Softwear and Ambient Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Sha Xin Wei, Yoichiro Serita, Jill Fantauzza, Steven Dow, Giovanni Iachello, Vincent Fiano, Joey
Berzowska, Yvonne Caravia, Delphine Nain, Wolfgang Reitberger, and Julien Fistre

Mobile Capture and Access for Assessing Language and Social Development in Children with Autism . . . . . 137
David Randall White, José Antonio Camacho-Guerrero, Khai N. Truong, Gregory D. Abowd, Michael
J. Morrier, Pooja C. Vekaria, and Diane Gromala

The Narrator: A Daily Activity Summarizer Using Simple Sensors in an Instrumented Environment . . . . . 141
Daniel Wilson and Christopher Atkeson

III Interactive Posters

Interfaces
Device-Spanning Multimodal User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Elmar Braun and Andreas Hartl

On the Adoption of Groupware for Large Displays: Factors for Design and Deployment . . . . . . . . . . . . . . . . . 149
Elaine M. Huang, Alison Sue, and Daniel M. Russell

Super-Compact Keypad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Roman Ilinski

EnhancedMovie: Movie Editing on an Augmented Desk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Yoko Ishii, Yasuto Nakanishi, Hideki Koike, Kenji Oka, and Yoichi Sato

Instructions Immersed into the Real World — How Your Furniture Can Teach You . . . . . . . . . . . . . . . . . . . . . 155
Florian Michahelles, Stavros Antifakos, Jani Boutellier, Albrecht Schmidt, and Bernt Schiele

i-wall: Personalizing a Wall as an Information Environment with a Cellular Phone Device . . . . . . . . . . . . . . . 157
Yu Tanaka, Keita Ushida, Takeshi Naemura, Hiroshi Harashima, and Yoshihiro Shimada

Ambient Displays
Healthy Cities Ambient Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Morgan Ames, Chinmayi Bettadapur, Anind K. Dey, and Jennifer Mankoff

LaughingLily: Using a Flower as a Real World Information Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Stavros Antifakos and Bernt Schiele

Habitat: Awareness of Life Rhythms over a Distance Using Networked Furniture . . . . . . . . . . . . . . . . . . . . . . . 163
Dipak Patel and Stefan Agamanolis

End-User Programming of Smart Objects
Smart Home in Your Pocket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Louise Barkhuus and Anna Valgårda

SiteView: Tangibly Programming Active Environments with Predictive Visualization . . . . . . . . . . . . . . . . . . . 167
Chris Beckmann and Anind K. Dey

Towards Ubiquitous End-User Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Rob Hague, Peter Robinson, and Alan F. Blackwell

Interaction, Collaboration, and Information Sharing
TunA: A Mobile Music Experience to Foster Local Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Arianna Bassoli, Cian Cullinan, Julian Moore, and Stefan Agamanolis

AudioBored: a Publicly Accessible Networked Answering Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Jonah Brucker-Cohen and Stefan Agamanolis

Dimensions of Identity in Open Educational Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Alastair Iles, Matthew Kam, and Daniel Glaser

Digital Message Sharing System in Public Places . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Seiie Jang, Woontack Woo, and Sanggoog Lee

The Spookies: A Computational Free Play Toy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Tobias Rydenhag, Jesper Bernson, Sara Backlund, and Lena Berglin

k:info: A Smart Billboard for Informal Public Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Max Van Kleek

Context Detection and Modeling
An Intelligent Broker for Context-Aware Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Harry Chen, Tim Finin, and Anupam Joshi

Containment: Knowing Your Ubiquitous System’s Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Boris Dragovic and Jon Crowcroft

ContextMap: Modeling Scenes of the Real World for Context-Aware Computing . . . . . . . . . . . . . . . . . . . . . . . 187
Yang Li, Jason I. Hong, and James A. Landay

Service Platform for Exchanging Context Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Daisuke Morikawa, Masaru Honjo, Akira Yamaguchi, and Masayoshi Ohashi

The State Predictor Method for Context Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Jan Petzold, Faruk Bagci, Wolfgang Trumler, and Theo Ungerer

Collaborative Capturing of Interactions by Multiple Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Yasuyuki Sumi, Tetsuya Matsuguchi, Sadanori Ito, Sidney Fels, and Kenji Mase

Sensors and Networks
Ubiquity in Diversity — A Network-Centric Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Rajiv Chakravorty, Pablo Vidales, Boris Dragovic, Calicrates Policroniades, and Leo Patanapongpibul

A Peer-To-Peer Approach for Resolving RFIDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Christian Decker, Michael Leuchtner, and Michael Beigl

Single Basestation 3D Positioning Method using Ultrasonic Reflections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Esko Dijk, Kees van Berkel, Ronald Aarts, and Evert van Loenen

Prototyping a Fully Distributed Indoor Positioning System for Location-aware Ubiquitous Computing
Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Masateru Minami, Hiroyuki Morikawa, and Tomonori Aoyama

Connectivity Based Equivalence Partitioning of Nodes to Conserve Energy in Mobile Ad Hoc Networks . . . 203
Anand Prabhu Subramanian

Selfconfiguring, Lightweight Sensor Networks for Ubiquitous Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Christopher R. Wren and Srinivas G. Rao

Smart Objects: Artifacts and Architectures
Grouping Mechanisms for Smart Objects Based On Implicit Interaction and Context Proximity . . . . . . . . . . 207
Stavros Antifakos, Bernt Schiele, and Lars Erik Holmquist

Inside/Outside: an Everyday Object for Personally Invested Environmental Monitoring . . . . . . . . . . . . . . . . . 209
Katherine Moriwaki, Linda Doyle, and Margaret O’Mahoney

iBeans: An Ultralow Power Wireless Sensor Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Sokwoo Rhee, Deva Seetharam, Sheng Liu, Ningya Wang, and Jason Xiao

A Rule-based I/O Control Device for Ubiquitous Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Tsutomu Terada, Masahiko Tsukamoto, Tomoki Yoshihisa, Yasue Kishino, Shojiro Nishio, Keisuke
Hayakawa, and Atsushi Kashitani

Smart Things in a Smart Home . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Elena Vildjiounaite, Esko-Juhani Malm, Jouni Kaartinen, and Petteri Alahuhta

Resource Management for Particle-Computers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Tobias Zimmer, Frank Binder, Michael Beigl, Christian Decker, and Albert Krohn

Applications
Using a POMDP Controller to Guide Persons With Dementia Through Activities of Daily Living . . . . . . . . 219
Jennifer Boger, Geoff Fernie, Pascal Poupart, and Alex Mihailidis

The Chatty Environment — A World Explorer for the Visually Impaired . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Vlad Coroama

Support for Nomadic Science Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Sherry Hsi, Robert J. Semper, and Mirjana Spasojevic

Development of an Augmented Ring Binder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Magnus Ingmarsson, Mikael Isaksson, and Mats Ekberg

Meaningful Traces: Augmenting Children’s Drawings with Digital Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Nassim Jafarinaimi, Diane Gromala, Jay David Bolter, and David VanArsdale

The Junk Mail to Spam Converter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Michael Weller, Mark D. Gross, Jim Nicholls, and Ellen Yi-Luen Do

IV Doctoral Colloquium

Communication from Machines to People with Dementia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
T. D. Adlam

Context Information Distribution and Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Mark Assad

Publish/Subscribe Messaging: An Active Networking Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Michael Avery

Workspace Orchestration to Support Intense Collaboration in Ubiquitous Workspaces . . . . . . . . . . . . . . . . . . 239
Terence Blackburn

Visualisations of Digital Items in a Physical Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
David Carmichael

Identity Management in Context-Aware Intelligent Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Daniel Cutting

Towards a Software Architecture for Device Management in Instrumented Environments . . . . . . . . . . . . . . . . 245
Christoph Endres

Ubiquitous Support for Knowledge and Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Michael A. Evans

Anonymous Usage of Location-Based Services over Wireless Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Marco Gruteser

Service Advertisement Mechanisms for Portable Devices within an Intelligent Environment . . . . . . . . . . . . . . 251
Adam Hudson

ME: Mobile E-Personality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Pekka Jäppinen

User Location and Mobility for Distributed Intelligent Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Teddy Mantoro

Towards a Rich Boundary Object Model for the Design of Mobile Knowledge Management Systems . . . . . . 257
Jia Shen

V Videos
DigiScope: An Invisible Worlds Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Alois Ferscha and Markus Keller

Bumping Objects Together as a Semantically Rich Way of Forming Connections between Ubiquitous
Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Ken Hinckley

Ubiquitous Computing in the Living Room: Concept Sketches and an Implementation of a Persistent
User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Stephen Intille, Vivienne Lee, and Claudio Pinhanez

STARS — A Ubiquitous Computing Platform for Computer Augmented Tabletop Games . . . . . . . . . . . . . . . 267
Carsten Magerkurth, Richard Stenzel, and Thorsten Prante

A-Life: Saving Lives in Avalanches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Florian Michahelles and Bernt Schiele

Breakout for Two: An Example of an Exertion Interface for Sports over a Distance . . . . . . . . . . . . . . . . . . . . . 271
Florian Mueller, Stefan Agamanolis, and Rosalind Picard

Concept and Partial Prototype Video: Ubiquitous Video Communication with the Perception of Eye
Contact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Emmanuel Munguia Tapia, Stephen Intille, John Rebula, and Steve Stoddard

The Design of a Context-Aware Home Media Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Carman Neustaedter and Saul Greenberg

Hello.Wall — Beyond Ambient Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Thorsten Prante, Carsten Röcker, Norbert Streitz, Richard Stenzel, Carsten Magerkurth, Daniel
van Alphen, and Daniela Plewe

Total Recall: In-place Viewing of Captured Whiteboard Annotations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Johan Sanneblad and Lars Erik Holmquist

eyeCOOK: A Gaze and Speech Enabled Attentive Cookbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Jeffrey Shell, Jeremy Bradbury, Craig Knowles, Connor Dickie, and Roel Vertegaal

Virtual Rear Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Jay Summet, Ramswaroop Somani, James Rehg, and Gregory D. Abowd

Virtual Handyman: Supporting Micro Services on Tab through Situated Sensing & Web Services . . . . . . . . . 285
Dadong Wan

VI Workshops
Ubicomp Education: Current Status and Future Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Gregory D. Abowd, Gaetano Borriello, and Gerd Kortuem
2003 Workshop on Location-Aware Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Mike Hazas, James Scott, and John Krumm
UbiHealth 2003: The 2nd International Workshop on Ubiquitous Computing for Pervasive Healthcare
Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Jakob E. Bardram, Ilkka Korhonen, Alex Mihailidis, and Dadong Wan
2nd Workshop on Security in Ubiquitous Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Joachim Posegga, Philip Robinson, Narendar Shankar, and Harald Vogt
Multi-Device Interfaces for Ubiquitous Peripheral Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Loren Terveen, Charles Isbell, and Brian Amento
Ubicomp Communities: Privacy as Boundary Negotiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
John Canny, Paul Dourish, Jens Grossklags, Xiaodong Jiang, and Scott Mainwaring
At the Crossroads: The Interaction of HCI and Systems Issues in UbiComp . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Brad Johanson, Jan Borchers, Bernt Schiele, Peter Tandler, and Keith Edwards

System Support for Ubiquitous Computing — UbiSys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Roy Campbell, Armando Fox, Paul Chou, Manuel Roman, Christian Becker, and Adrian Friday
Ubiquitous Systems to Support Social Interaction and Face-to-Face Communication in Public Spaces . . . . . 312
Rick Borovoy, Harry Brignull, Donna Cox, Shahram Izadi, Volodymyr Kindratenko, Alex Lightman,
David Pointer, and Norbert Streitz
Intimate (Ubiquitous) Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Genevieve Bell, Tim Brooke, Elizabeth Churchill, and Eric Paulos
Ubiquitous Commerce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
George Roussos, Anatole Gershman, and Panos Kourouthanassis
AIMS2003: Artificial Intelligence In Mobile Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Antonio Krüger and Rainer Malaka

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325

Part I

Panel
Mobile Play: Blogging, Tagging, and Messaging
Eric Paulos
Intel Research
2150 Shattuck Avenue #1300
Berkeley, CA 94704
[email protected]
PANELISTS
Barry Brown, University of Glasgow, [email protected]
Bill Gaver, Royal College of Art, [email protected]
Marc Smith, Microsoft Research, [email protected]
Nina Wakeford, University of Surrey, [email protected]

You can discover more about a person in an hour of play
than in a year of conversation. – Plato, 427–347 BC

ABSTRACT
Ubiquitous computing, by its very definition, aspires to weave computing technologies across the fabric of our everyday lives. Many of the successes and failures encountered during the pursuit of ubiquitous computing will be dictated by the manifest integration of play. It is play that helps us cope with the past, understand the present, and prepare for the future. This panel of experts is passionately interested in engaging in a critical dialogue around the applicability, adoption, and consequences of such elements of play in ubiquitous computing research. As motivation, several tremendously popular ubiquitous computing themes with playful elements will be examined: blogging, tagging, and message play.

Keywords
Play, blogging, tagging, messaging, digital graffiti, SMS, IM, ambiguity, toys, GameBoy, mobile computing, context-aware play.

INTRODUCTION
It is during play that we make use of learning devices, treat toys, people, and objects in novel ways, experiment with new skills, and adopt different social roles [1]. As children we clearly don’t play to learn, but we certainly learn from play [2, 3]. Play helps us as children (and adults) to answer the questions: What can I do in this world? What am I good at? What might I become [4]? Many of us attribute our abilities, interests, and even our careers, to childhood toys, games, and play [5-7]. Play unquestionably resonates with the very essence of human behavior and our role in society, and will play a vital role in the adoption of ubiquitous computing.

While gaming is a popular and important part of human play, this panel is focused more specifically on the fundamental activity of mobile, situated human play and its role in ubiquitous computing.

CAN UBICOMP COME OUT AND PLAY?
Current ubiquitous computing research has provided marked milestones of systems, tools, and techniques along the path of situated, focused problem solving. While crediting the achievements of this area, we explicitly draw emphasis to the portion of everyday life made up of non-goal-directed activities and play.

We make two important observations about play: (1) humans seamlessly move in and out of the context of play (sometimes on a minute-by-minute basis) and (2) when at play, humans employ a separate mental cognition. The scope of their current activity is more ambiguous [8], and their expectations about people, artifacts, interfaces, tools, etc. are increasingly relaxed. The mind is opened up to wildly fanciful interpretations, connections, and metaphors. The rules of human engagement are completely altered. It is often during this unique “play time” that we serendipitously establish important intellectual connections, leap to improved views of our world and society at large, and resolve conflicting paradigms. In essence, it is often through play that we advance our own substantial, novel contributions in life.

This fundamentally important human phenomenon clearly deserves a forum as a legitimate theme within the context of ubiquitous computing. In fact, as ubiquitous computing researchers, we must not only be aware of this human tendency to play, but perhaps more importantly use it to our advantage. When does play occur? How does it begin and end? When is it appropriate or inappropriate? What elements give rise to play? Quell play?

MOBILITY
Play by its very nature is an active event, promoting coordination, flexibility, and fine motor skills [9]. Often toys, the tools of play, respond to movement and hold our attention. From an early age toys encourage physical play: activity centers for babies, push-pull toys for toddlers, and blocks, balls and climbing frames for older children.
Throughout our lifetime, we draw upon these innate skills and experiences to provide a safe and comfortable means of interfacing with others and the world around us through play.

There is no doubt that the current commercial adoption of wireless, mobile ubiquitous computing devices is indirectly spawning novel practices of social, mobile play. The research buzzwords of context awareness, always on, body worn, multi-medial, community awareness, and social networks are in fluid use across diverse non-research communities. Today’s personal mobile devices have already been repurposed by independent, passionate users and groups for various forms of mobile play. As ubiquitous computing researchers, we have a primary interest in understanding the methods of such adoption and, more importantly, the evolution of its re-appropriation.

While we are interested in exploring new trends in mobile play, there are numerous currently deployed systems that have been re-appropriated from the context of work to play. The documented evolution of these systems and their current usage models helps drive many of the research questions for future mobile play. We use these systems as a starting point for the debate of mobile play.

BLOGGING
A blog (derived from “web-log”) is a web page made up of usually short, frequently updated posts that are arranged chronologically – similar to a “what’s new” page or journal. There is no limit to the content or topic of available blogs: links and commentary about other web sites, political issues, news about companies/people/ideas, diaries, photos, poetry, mini-essays, project updates, fiction, journalism, and even personal messages by embedded reporters on today’s modern battlefield [10]. Blogs are almost always personal, imbued with the temper of their writers. Perhaps more importantly, to invoke Marx, blogs seize the means of production, bypassing the ancient rituals of traditional publication houses. In some sense blog posts are instant messages to the web.

The technologies to support blogging have been in place since the dawn of the web, yet it has not been until recently that this technique has self-organized into a playful social pursuit. With modern wireless mobile PDAs and phones, the urge to share and play with text, images, and sound in real time across vast distances and within a social network of friends (and enemies) is overwhelmingly compelling.

Several of the panelists have extensive experience playing in such worlds as well as building and evaluating tools that use and extend the blogging metaphor of social empowerment.

TAGGING
Tagging is often used within groups and communities to mark ownership or control over an object or territory. Tagging and graffiti are typically viewed as an anathema by the community. However, graffiti is simply defined as an inscription or drawing made on some public surface. Graffiti is an extremely important medium through which we engage in dialog across and within our community. Not just “gang tags” but political stickers, city-produced marks indicating gas lines, discarded receipts, cigarette butts, broken benches, covered parking meters, and scrawled messages are all examples of public-place community message play.

How will ubiquitous computing contribute to play within the space of tagging? What motivates the human passion of marking objects? How do we communicate by, through and with objects and artifacts? Why and how do objects exhibit an aura [11]?

Not surprisingly, nearly every manufactured item already contains a unique “tag”. Better recognized as a barcode, this form of tagging has been socially re-purposed by digital, wireless tools to generate independent dialogs about these objects, empowering communities. Similarly, where will radio frequency identification (RFID) tags situate themselves within this space of social community dialogue? How will we tag wireless 802.11 access points? Where will such technologies and techniques give rise to play?

MESSAGE PLAY
From childhood note passing to adult flirtations couched in amusing metaphors, we find humans engaged in message play. We elucidate this continuing motivation for message play by example: the wireless pager. The initial usage model for pagers was that a person would send their phone number to another individual’s pager; the recipient would dial the received number on a phone and establish a voice connection. What evolved was an entirely different usage model. In fact, a new cultural vocabulary of numerical messages arose. For example, users defined new encodings such as, “When I send ‘1-2-3’, that means ‘thinking of you’, ‘4-5-6’ means ‘feed the dog’.”

Similar playful re-appropriation occurs with our current personal messaging tools such as cell phones and SMS text messaging. One teen expressed, “I carry my mobile phone around all the time, even in the house.…It's like my little baby, I couldn't live without my mobile, I bring it into the bathroom with me.” Similarly, another couple on separate continents (and hence time zones) used SMS to send playful awareness messages to each other with no intention of engaging in dialogue. “When I get up in the morning I send her an SMS message that I’m ‘Now making coffee’ just to let her know what I’m doing….I guess I want her to be able to imagine me in the kitchen making coffee.”

This urge to send playful messages is evident in almost every personal messaging tool in current use: instant messaging (IM), SMS text messaging, mobile phones, and wireless PDAs. For example, corporations created the service of “Caller ID”, but its appropriation as an awareness messaging tool through “one ring calls” became
a preferred form of message play between users. Fundamentally, humans engage in play and will certainly continue to socially repurpose mobile technology to satisfy this necessary human urge.

This leaves numerous open questions for debate: How will other forms of ubiquitous mobile message play be created? Engaged? Designed for? Encouraged? Diverted? How will mobile play affect human relationships in terms of trust, persuasion, and conflict? How will we map current messaging techniques onto and across such systems? What direct and side effects will result?

PANELISTS (Alphabetically)
The following is an alphabetical listing of the panelists participating in this panel, along with their position statements on this topic and brief biographies.

Barry Brown
Biography
Barry Brown is a research fellow and ethnographer at Glasgow University, where he explores social issues surrounding human leisure and technology. Recently his focus has been on various leisure-enabling technologies such as music listening, museum visiting, and tourism. He has edited a highly respected book that deconstructs many aspects of mobile phone usage [12]. Barry has also investigated the parallels between video game interfaces and ubiquitous computing [13].

Position Statement
Designing technologies for leisure presents a number of challenges for technology designers. It is not just that the goals in leisure are more diffuse, or that there is a more diverse set of requirements. In leisure the aim is enjoyment, rather than productivity. How something is done is often more important than the end result. For the tourists we have studied, using a guidebook was enjoyable in itself as well as contributing to their visit [14]. For music enthusiasts, finding new music is not just a goal but an enjoyable process in itself [15].

The importance of enjoyment as part of the experience of using a technology is something we, as ubiquitous computing researchers, can learn from gaming software. For example, gaming software often develops a user’s skills in a particular technique, and when that technique is perfected discards it to encourage the development of new forms of competency. In this way games maintain an interest in learning new and more advanced skills.

Games are also very much social activities (both co-present and online), and much can be learned from how these social experiences are pleasurable and shared. In our current work we are studying groups at play – in situations such as go-kart racing. We are interested in observing how discussion and socializing around an event becomes a powerful component of the enjoyment of the event itself. By designing social support for reflection and follow-up discussion directly into the interface of such systems, the overall experience of the technology can become a more enjoyable one.

Bill Gaver
Biography
Bill Gaver is a Senior Research Fellow at the Royal College of Art. He has pursued research on innovative technologies for over 15 years, working with and for companies such as Apple, Hewlett Packard, IBM and Xerox. Recent projects have included electronic furniture for public areas, information appliances that emphasize the emotions and spirituality, and the creation of compelling public experiences from urban pollution sensing and data from Antarctic lakes. He is a principal investigator on the Equator IRC, in which his group is exploring digital devices that offer ludic opportunities for the home.

Position Statement: Designing Ubiquitous Play
Play is ubiquitous. Not only do we play when we’re supposed to play – when we’re gaming, or blogging, or flirting – but we play when we’re doing other things as well. We play with ideas, with interpretations, with our own identities. We’re curious, we explore, we fiddle and doodle. From this point of view, play is not an activity so much as an attitude, one in which we’re relatively free from external constraints and defined tasks.

In my research I am trying to understand how to support playful attitudes without defining systems as being ‘for play.’ For instance, in the ongoing Equator IRC, we are looking at technologies for the home that encourage people to reflect on their own activities, to try on new roles, to day-dream and speculate. None of the things we are designing could be considered ‘for play,’ yet they all depend on a playful frame of mind. They are intended to sit in a middle ground between work, consumption and entertainment, encouraging people to wander and wonder, rather than focus on clear tasks.

How do we design to allow play without dictating it? A couple of factors seem important. First, we need to embrace subjectivity – our own and others’ – in our designs. Rather than seeking to create experiences based on our knowledge about typical desires and activities, it is often more compelling to design for the idiosyncratic and unusual. Second, ambiguity and openness are important factors in creating systems that people can appropriate into their own lives. Rather than dictating what a system is for, or even what it means, it is often more effective to design systems that are suggestive and open to interpretation. For it is in the act of making meaning from ambiguous situations that we are often at our most playful.

Marc Smith
Biography
Marc Smith is a research sociologist leading the Community Technologies Group at Microsoft Research.
The focus of the group is to explore and build tools to support association and collective action through networked media.

Position Statement
Play, in the form of exploration, direct manipulation, and collaborative interaction, is a critical component of social life. Information technologies, despite their extensive uses in the form of “games”, often lack a playful quality and impose instrumental usage patterns. This often leads to significant underutilization of technical capacities, as users avoid exploration for fear of stepping beyond the scope of their instrumental skills. The emerging capacities of ubiquitous computing suggest new opportunities for encouraging playful exploration of technical systems by supporting the primary sensory channels of feedback, direct manipulation, inscription, and mutual awareness. At question is how the playful uses of information technologies will be domesticated or will potentially rupture existing social institutions.

Nina Wakeford
Biography
Nina Wakeford is Director of the INCITE research centre at the University of Surrey, UK. Trained in anthropology and sociology, she studied for her PhD at Oxford University, where her thesis focused on the sociology of risk. For the past ten years she has been working on sociological approaches to new technology production and consumption, including studies of email discussion lists, web pages, mobile phone use, web logs and public internet access points, including wireless. One of her current projects uses the route of the number 73 bus in London as a way to sample usage of digital content in the city, including web pages, text messaging and blogging. She is also studying the way in which ethnographers work with interface designers, artists and engineers, and what they learn from each other.

Position Statement
A sociology of ubiquitous computing necessarily involves thinking about the linkages between space and social practice. One way of engaging with digital content in the city of London, for example, is to create mundane light content which might be characterized as playful in nature. Teasing, joking, shaming, and pranking are all routine activities of the set of young people in the UK who characterize themselves as heavy users of mobile phones. Creating a sociological framework around the concept of mobile play involves thinking about the many wider social and structural processes in which these activities are embedded. For example, to characterize an activity as ‘playful’ draws on wider cultural assumptions of risk, trust and blame. It may also involve notions of intimacy and power. The contemporary sociology of childhood can aid here: young people are no longer seen as invisible and inconsequential subjects, but as active actors with agency. This explains the kinds of digital play which we have observed amongst young people both on the 73 bus route study and which they have reported in in-depth interviews.

PANEL PLAYTIME
Clearly, the focus of this panel is to use the synergy of the panelists and audience participation to elucidate the grand research challenges in the area of mobile play. As expected, individual panelists will present positions and relevant work to support their arguments at the panel. The inevitable ensuing discussions across panelists and audience will hopefully reveal the foremost research questions associated with mobile play.

However, we are also interested in consciously creating scenarios during the course of the panel that allow the audience to freely enter into a playful state of mind. Not literal game play, but play as a vital part of brainstorming, self-discovery, identity, and creativity.

Come out and play!

REFERENCES
[1] L. S. Newman, "Intentional and unintentional memory in young children: Remembering vs. playing," Journal of Experimental Child Psychology, vol. 50, pp. 243-258, 1990.
[2] G. G. Fein, "Skill and intelligence: The functions of play," Behavioral and Brain Sciences, vol. 5, pp. 163-164, 1982.
[3] J. S. Bruner, "The nature and uses of immaturity," American Psychologist, vol. 27, pp. 687-708, 1972.
[4] C. Adelman, "What will I become? Play helps with the answer," Play and Culture, vol. 3, pp. 193-205, 1990.
[5] D. M. Tracy, "Toy-playing behaviour, sex-role orientation, spatial ability, and science achievement," Journal of Research in Science Teaching, vol. 27, pp. 637-649, 1990.
[6] J. Piaget and B. Inhelder, The Psychology of the Child. New York: Basic Books, 1969.
[7] J. O'Leary, "Toy selection that can shape a child's future," in The Times, 1990.
[8] W. Gaver, J. Beaver, and S. Benford, "Ambiguity as a resource for design," presented at ACM CHI, 2003.
[9] J. A. Byers and C. Walker, "Refining the motor training hypothesis for the evolution of play," American Naturalist, vol. 146, pp. 25-40, 1995.
[10] A. Harmon, "Improved Tools Turn Journalists Into a Quick Strike Force," in New York Times, Late Edition - Final ed. New York, 2003, pp. 1.
[11] W. Benjamin, Illuminations. New York: Schocken Books, 1969.
[12] B. Brown, N. Green, and R. Harper, Wireless World: Social, Cultural and Interactional Aspects of Wireless Technology. Springer Verlag, 2001.
[13] J. Dyck, D. Pinelle, B. Brown, and C. Gutwin, "Learning from Games: HCI Design Innovations in Entertainment Software," in Proceedings of Graphics Interface 2003, 2003.
[14] B. Brown and M. Chalmers, "Tourism and mobile technology," in Proceedings of ECSCW 2003, 2003.
[15] B. Brown, E. Geelhoed, and A. J. Sellen, "The Use of Conventional and New Music Media: Implications for Future Technologies," in Proceedings of Interact 2001, M. Hirose, Ed. Tokyo, Japan: IOS Press, 2001, pp. 67-75.
Part II

Demonstrations
Context Nuggets: A Smart-Its Game
Michael Beigl*, Albert Krohn*, Christian Decker*, Philip Robinson*, Tobias
Zimmer*, Hans Gellersen+, Albrecht Schmidt-
*TecO, University Karlsruhe + Lancaster University - Universität München
*{michael, krohn, cdecker, philip, zimmer}@teco.edu
+ [email protected]
- [email protected]

ABSTRACT
Small, embedded, sensing and communicating computer systems continue to show their applicability in various settings. The Smart-Its platform, which we present, testifies to this. We have developed a game called "Context Nuggets" in order to test and demonstrate the extremities of this platform when subjected to a setting with multiple, ad-hoc users discovering each other and exchanging context data. Attendees simply attach a Smart-It to their body and they can join in. The gaming strategy entails collecting as much "context" as possible, through altering interactive behavior with other players. Context sources include light, audio and movement sensors. Context is traded via short-range wireless communications. A tool that manages the on-site gaming statistics is also used for analyzing run-time behavior and system status of the Smart-Its.

Keywords
Ubicomp Platform, Games, (usability, technology) tests

INTRODUCTION
In-situ context generation, processing and communication has advantages in many application areas. This demo shows a platform for application scenarios, consisting of tiny computing devices that are embedded into everyday objects, on people or clothing, or in the environment. Further demonstrated are development libraries and tools for building applications, and services for supervising applications and experiments. The demonstration presents this technology platform through one example application.

The central component of such a platform is a tiny device called the Smart-It, which is used to retrieve context information from the environment, run applications and communicate via a wireless network. The first part of the demonstration is a Ubicomp game application; in the second part we give more details on the technology involved. Attendees are invited to take part in or observe the game, and subsequently have a closer look at the enabling software and hardware design of Smart-Its.

Ubicomp games [1] stimulate use of Ubicomp technology, as has been shown at previous Ubicomp conferences - e.g. Pirates [2]. In our game, "Context Nuggets", attendees of the conference are invited to be players by configuring and using a small device. The device can be worn on the waist or adhered to the shirt (figure 1). Analysis tools can be used to actively retrieve the status of the game and hardware, such that the attributes displayed are either game related or retrieved from monitoring the technology.

THE PLATFORM: SMART-ITS
The basic idea behind the Smart-Its platform is a move towards simplifying the embedding of computing, perception and wireless communication into the physical world, along with an integrated toolset to make use of the information collected. The Smart-Its platform enables the development of Ubicomp environments, applications and tests.

Figure 1: Small Smart-It with sensors attached to body or clothing

A major part of the Smart-Its platform [3] is the hardware device, which comes in various forms (e.g. figure 1). It builds the embedded hardware toolset and contains RF-based wireless communication, on-board processing, memory, sensors and actuators. It can produce sensor information from up to 12 sensors, process context information within a local processor, provide adequate storage of context and general information, and host applications such as the game described below. It works independently of any external infrastructure and allows spontaneous, short-range, peer-to-peer and ad-hoc exchange of processed data. Smart-Its are tiny, lightweight and have low energy consumption, such that the objects to which they can be attached range from the very small to the human body (figure 1). Smart-Its software can be rapidly developed based on a simple-to-use library providing high-level access to communication, sensing and actuating functionality. Furthermore, generic programs are available for certain application areas such as usability tests.
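The platform description above says a Smart-It processes raw readings from its on-board sensors into higher-level context on the device itself. As a rough illustration of one such derivation rule (a threshold held over a sustained window), the following sketch uses invented names and values throughout; it is not the Smart-Its library API:

```python
# Illustrative only: derive a higher-level context event from a
# stream of raw sensor readings, as the on-device processing
# described above might. The event name, threshold and window
# size are all invented for this sketch.

def context_events(readings, threshold, window):
    """Yield one event each time `threshold` is met for `window`
    consecutive readings (then reset, so events do not overlap)."""
    run = 0
    for value in readings:
        run = run + 1 if value >= threshold else 0
        if run == window:
            yield "bright"  # the derived, higher-level context
            run = 0

light = [10, 80, 85, 90, 88, 20, 91, 92, 95]  # raw light-sensor samples
print(list(context_events(light, threshold=75, window=3)))  # → ['bright', 'bright']
```

A debounce of this kind only matches the real firmware in spirit; the device presumably integrates sensor activity over time rather than counting discrete samples.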
While infrastructure is optional, as Smart-Its communicate nuggets through a secret formula known only by you. But
ad-hoc, PC based services such as wireless development unfortunately, you cannot use the ingredients you produce
and maintenance of Smart-Its applications require such yourself. Instead, you have to trade ingredients with other
integration. Infrastructure equipment enables access to alchemists - one of your lux for one of their magical
Smart-Its over the Internet and vice versa. Infrastructure – motions, one of your spells for one of their spells etc.
based services may also be a source of additional context Based on your formula, "context nuggets" are created and
information, such as location or a history database. Figure at the end of the day the alchemist with the most nuggets is
2 shows a setting with several Smart-Its distributed in a flat the winner. Players influence the progress of the game by
or office environment. entering the secret formula to make context nuggets. The
secret formula describes how many of the ingredients are
needed for creating a nugget and therefore determines the
strategy for the player. A total of 10 units from the 3
ingredients are required, and at least one lux, one magic
motion and one spell. The 7 remaining units can be
allocated arbitrarily by the user, based on a calculated
guess of the most available ingredients (figure 4).

Figure 2: Smart-Its environment


The development toolset is a collection of programs that
support programming, configuration and debugging of
Smart-Its. With these software tools programming can be
done on any PC based computer and over the Internet.

Figure 3: Graphical visualization via the Particle Analyzer

The text-based Smartspy tool and the graphical Particle Analyzer tool allow the supervision and interpretation of output data from a running Smart-It application (figure 3). The graphical representation of data via the Particle Analyzer provides a quick overview of fast-changing context, sensor or network data. This feature is often used for informal verification of Smart-Its behavior, or for analyzing performance parameters of the network or application. The tool's ability to display raw sensor data is also useful in the application design and debugging phases. For maintenance, and for recording data for test runs, a history database was implemented. This history, stored in the Particle database (ParticleDB), is accessible through a Web front-end. The tool allows us to export selected data to standard formats for further statistical analysis.

THE GAME: CONTEXT NUGGETS
Imagine being an alchemist in the Middle Ages trying to produce gold out of mystic ingredients: lux, spells and magic motions. These ingredients are then used to produce nuggets.

Figure 4: Entry page for the Context Nugget game

Producing ingredients, trading and processing the nuggets appear to be "magical", as no explicit user intervention is required. After entering the formula, Smart-It devices are distributed to the players and the game starts. At this point, players can influence the game by generating ingredients and meeting other players for trading. Ingredients are produced through sensor perception on the Smart-Its device. Sensing the light level results - after a certain time - in producing a lux unit; likewise, sensing movement (acceleration sensor) and sound (microphone) result in magic motions and spells respectively. Following the rules of the game and the secret formula, the device also calculates and stores the nuggets for the player. It is not the intention of the game to encourage players to adapt their behavior to win by creating more ingredients - which is also very difficult to do during the game. In spite of this, the creation of nuggets, trading and the generation of ingredients can be perceived by the users through short flashes of light in different colors on the Smart-Its device.

The game ends after some hours, at a fixed time when all devices have to be handed back for a final download of
buffered data. The "alchemist" whose device contains the most nuggets is the winner.

Technical Setting
To run the game, players are required to wear a tiny electronic Smart-It device attached to their clothing (figure 1). The device works independently of other computers and networks, and holds the game rules and all information concerning the device's player. The device constantly detects physical attributes such as light, sound and movement.

The device is able to communicate wirelessly and spontaneously in order to exchange ingredient information between participants of the game. Game communication uses the Smart-Its ad-hoc, infrastructure-less networking. The collection of physical data, semantic aggregation and ad-hoc communication are done implicitly, without any user interaction. Any internal processing, such as trading and the generation of ingredients or nuggets, is stored together with a time stamp in the Smart-Its memory buffer. When the device enters a specially marked area with a connection to the Internet, buffered data is transferred to the Particle database for immediate or later use. During the game, conference attendees are able to monitor the current game status using either their own WiFi-enabled device or a Game Terminal.

Game rules
1) The goal of the game is to create more nuggets than the other players.
2) You create nuggets by collecting ingredients and processing them into nuggets according to the formula you entered at the start of the game. Converting ingredients to nuggets happens automatically when the necessary type and amount of ingredients are available. Only traded materials can be used to produce nuggets, so you can't use your own ingredients.
3) You are able to generate three ingredients: lux (created from a light-level sensor), magic motions (created from a movement sensor) and spells (created from an audio sensor), simply by wearing the tiny magic device. Ingredients are produced automatically, without any user interaction.
4) Ingredients that are not traded are perishable - their maximum usability period is about 2 minutes.
5) To create nuggets, you consume 10 ingredients from other alchemists. The ingredients you need are part of the secret formula you enter at the start of the game.
6) You can trade with other participants on a 1-to-1 basis by standing within 5 meters of them, or even just by passing by. The longer you stand next to other wizards, the more you trade.
7) Trading and creating can be done everywhere, as no extra infrastructure is needed. Simply wear your device correctly at the belt or shirt, not inside a bag etc. Otherwise your ingredient production stops and you are not able to trade.

What it Demonstrates
The game shows some specific strengths of the Smart-Its platform. First of all, it shows that Smart-Its are completely self-contained and independent devices. They do not need infrastructure and are able to generate higher-level information - represented here by the context (nuggets) and the ingredients - from physical sensors.

The second strength is the ability of a Smart-It to work unobtrusively, owing to its small dimensions and long operating time. Furthermore, the device does not require any administration, maintenance or other explicit interaction to fulfill its task.

The third strength is the decentralized communication. No master device or access point is necessary. Nevertheless, when connected to a backbone network, additional data analysis and statistics functionality is enabled.

GAME EVALUATION
The evaluation part of the demonstration uses information generated by the "Context Nuggets" game and shows one application of the infrastructure toolset and services. This toolset and these services can be used for a variety of application areas, including the supervision of field tests or living-lab tests [4]. In our demo setting, it monitors the progress of the game application, computes game data (e.g. the score) and observes technical parameters.

Observation with Smart-Its
In the Smart-Its set-up, simple sensors are the basis for the supervision, in contrast to the complex video surveillance often used in controlled laboratory user studies. An advantage of directly attached simple sensor systems is that they can collect data automatically, with fine granularity and independent of location, e.g. while on the move. Additionally, they are able to measure data constantly, without being disturbed by occlusion. For many situations, ad-hoc embedding of the proposed technology is easy to handle and cheap, making Smart-Its-based evaluation suitable for small ad-hoc tests. Since there are no video surveillance cameras, the entire user behavior is not supervised; this suggests a move towards privacy-sensitive user monitoring.

There are also some disadvantages. Firstly, only specific parameters can be supervised, and they have to be known beforehand. Secondly, in the absence of additional surveillance of the user, users may fool the system by not using the devices, or by using them inappropriately, and so adulterate the collected data.

Example scenario: Game supervision
The "Context Nuggets" game provides the application scenario for collecting user-related data. The behavior of the players gives hints as to how well the game performs in a given environment. From this data, valuable analyses of the overall performance of the game can be carried out, and individual player performance data can also be shown.

For the proposed game, several parameters are of interest:

• How often and when do players generate ingredients?
• How often do players meet in general?

• How often do players exchange ingredients (combining generation and meeting)?

Additionally, for the progress of the game, the average "nugget" production per unit of time is of interest.

Technically, the above parameters are retrieved by querying the Particle DB through a Web-server-based script. The ParticleDB holds all events that took place during the game, with the corresponding timestamps. Using this detailed data, the statistics applications generate graphs and reports. These can then be accessed with a Web browser, either from the Terminal at the demonstration site or from any other computer connected to the network.

Figure 5: Example Statistic

Example scenario: Technical (network and sensor) set-up evaluation
For the technical evaluation, the ad-hoc and statistical analyses are also of interest. For the "Context Nuggets" game, network coverage or general network problems could be critical. The percentage and the average time of backbone access of every player, and the load distribution among access points, are of interest. Low access to one of the access points may indicate the need for a relocation of this access point.

Additionally, the context aggregation behavior of the Smart-Its is noteworthy, in order to optimize the rule-set of the game. The thresholds and generating algorithms for units of ingredients can be adjusted according to the measured movements, noise and light.

All these measured parameters can be graphically displayed through the Particle Analyzer program from one of the terminal PCs on the demonstration site.

ACKNOWLEDGMENTS
The Smart-Its project is funded by the Commission of the European Union as part of the research initiative "The Disappearing Computer" (contract IST-2000-25428).

REFERENCES
1. Björk, S., Holopainen, J., Ljungstrand, P. & Åkesson, K-P. (2002). Designing Ubiquitous Computing Games - A Report from a Workshop Exploring Ubiquitous Computing Entertainment. Personal and Ubiquitous Computing, Volume 6, Issue 5-6, pp. 443-458.
2. Björk, S., Falk, J., Hansson, R., & Ljungstrand, P. Pirates! - Using the Physical World as a Game Board. Paper at Interact 2001, IFIP TC.13 Conference on Human-Computer Interaction, July 9-13, Tokyo, Japan.
3. Beigl, M., Zimmer, T., Krohn, A., Decker, C., Robinson, P.: Smart-Its - Communication and Sensing Technology for UbiComp Environments. Technical Report ISSN 1432-7864 (2003).
4. Kidd, Cory D., Robert J. Orr, Gregory D. Abowd, Christopher G. Atkeson, Irfan A. Essa, Blair MacIntyre, Elizabeth Mynatt, Thad E. Starner and Wendy Newstetter. The Aware Home: A Living Laboratory for Ubiquitous Computing Research. Proceedings of the Second International Workshop on Cooperative Buildings, October 1999.

Eos Pods: Wireless Devices for Interactive Musical Performance

David Bianciardi
Audio, Video & Controls, Inc.
415 Lafayette St., 2nd floor
New York, NY 10003
+1 212 353 9087
[email protected]

Tom Igoe
Interactive Telecommunications Program
Tisch School of the Arts, NYU
721 Broadway, 4th floor
New York, NY 10003
+1 212 998 1896
[email protected]

Eric Singer
LEMUR
Brooklyn, NY
[email protected]

ABSTRACT
In this paper, we describe a hardware and software system for an orchestral performance in which the audience members are the performers, led by the orchestra's conductor. The system is designed such that audience members are not required to have previous musical knowledge. Audience members are seated at banquet tables throughout the performance space. By tapping on a lighted dome at the center of the table when prompted by the conductor, the audience members trigger musical sequences in the composition. Coordination is managed through a central software controller. Like all good centerpieces, the domes are not tethered in any way, are easily moved, and are unobtrusive when the performance is not in progress.

Keywords
New Interfaces for Musical Expression, ubiquitous computing, interaction design, embedded networking, wireless sensor networks.

INTRODUCTION
New York's Eos Orchestra works to test the limits of orchestral performance, through a regular repertoire of experimental works and through the commissioning of new works for orchestra. For the orchestra's 2003 benefit banquet, Artistic Director Jonathan Sheffer sought to conduct a performance in which the audience at the banquet performed the piece. Terry Riley's minimalist work "In C" was selected for this experiment, due to its significance as a groundbreaking work in non-traditional composition and its algorithmic applicability [1]. The specifics of the performance were kept to a minimum: the audience was to "play" the work, the result should be a musically acceptable performance of Riley's composition, and the audience members should have an engaging experience in playing the piece. All other details were left to the implementation team's discretion.

The design team members have extensive interest in and experience with performance augmented and mediated by digital technology. Eric and David are both central members of LEMUR (League of Electronic Musical Urban Robots), a group of musicians, robotics experts, artists and designers dedicated to creating musical robots. Both have a background in musical composition and entertainment system design, and an interest in electronically mediated musical performance. Tom's background is in theatre lighting and design, and he is currently the area head for physical computing at the Interactive Telecommunications Program at NYU. His research focuses on the design and use of networked embedded systems.

The parameters that attracted us to the project were as follows. First, it involved a large number of participants and a need for tight coordination and performance-level response times, drawing from and expanding on team members' prior experience in multi-participant interactive musical environments [2]. Second, as in any musical performance, the physical operation of the instruments must not take the performer's whole focus, so that the performers can focus on the music, not on the operation of the instrument; a derivative of several of Perry Cook's observations and principles relating to musical controllers [3]. Any user feedback from the device would have to be in the periphery of the audience/performer's attention, not at the center of it. Third, though a significant amount of technology was involved, it would have to be transparent. The interfaces would have to appear aesthetically to be part of the decor, not an alien piece of control technology. In this sense, the project let us put into practice many ideas from Weiser & Brown's manifesto of ubiquitous computing [4], specifically their notions of the synergistic effects of networks of microcontrollers and of technology at the periphery of an experience rather than at the center. Though heavily reliant on computing and networking power, this project would center on a shared musical environment rather than on a computing environment.

SYSTEM DESCRIPTION
The performance was held in a banquet hall at which the Eos Orchestra would be performing for invited supporters. The tables in the hall were arranged according to an instrument layout for an orchestra, with each table representing one instrument. Each of the thirty tables at the banquet featured an interactive centerpiece consisting of a 12"-diameter translucent acrylic dome approximately 8" tall. Electronics housed within the dome allowed the guests to tap the dome and trigger a phrase of music, as long as they had received a cue from the conductor. In a manner similar to the MIT Media Lab's "Tribbles" [5], LED lights within the dome were used to signal status to the guest players. Three states were communicated: Inactive, Enabled (cued by the conductor), and Playing. Domes were "served" to the tables by the technical staff to replace the banquet centerpieces after dinner was over, so no wires could be permitted for power or communications. User input sensors in the domes communicated via a microcontroller back to a master computer, sending data on how hard the dome was tapped. The master computer used this data to trigger a musical phrase played by that table's instrument, with a MIDI velocity relative to the force applied to the dome.

Two Akai samplers with 16 audio outputs each were fed the MIDI generated by a master computer running a Max patch, and output the 30 player parts and a click track. Audio of the triggered chunk would be fed to the main mix and routed wirelessly to the table that triggered it, where it would be fed to a local, powered monitor. A click track would also be output from the master computer and mixed into the main audio feed.

In the final implementation, the wireless audio return path was not implemented. Instead, clusters of local speakers were positioned overhead around the room. Three tables were served by each cluster of speakers. Audio for each of the tables served by a local cluster was routed to that cluster, so that the sound appeared to be coming from the table itself.

SOFTWARE
Interaction, playback and communication software was implemented on Macintosh computers in Cycling '74's Max software. A master Max patch was developed for this application with the following features:

• graphical user interface for operator control of performance parameters
• bidirectional UDP communication with pods
• interactive player subpatch for each pod
• algorithms for MIDI playback, interaction arbitration and
Figure 1. Operator interface in Max
progression through the piece
• control of pod LEDs for user feedback

The piece "In C" is composed of 53 individual melodic patterns of varying length, to be played in sequence. In the performance instructions, the composer states that each performer must progress through the patterns in order, but may make entrances and repeat each pattern numerous times at his or her choosing.

Our patch was designed to simulate these instructions, with entrances determined by the players and progression through the patterns controlled by the patch. A "horse race" algorithm was implemented, with a counter progressing through pattern numbers 1 through 53 and each pod racing towards or past this master number, but remaining within +/- 4 of the master's current value. This was done by periodically incrementing each pod's pattern number, with the odds of incrementing being greater if further behind, and less if further ahead of, the master number.

Players were able to trigger playback of a sequence by tapping on the pod dome. Tap signals were sent via UDP to the host, with a message indicating the velocity of the tap and the pod number. If the pod was enabled, a tap triggered playback of the pod's current pattern, with playback volume scaled by both the tap velocity and the pod's level as set in the operator's interface.

Playback of all parts was synchronized to a master eighth-note clock running at a fixed tempo. Entrances occurred on the next eighth note following receipt of a trigger, playing the triggering pod's current pattern synchronized to the master clock.

The player for each pod was implemented as a parameterized subpatch. Parameters to the subpatch allowed setting of the IP number, the MIDI port and channel for the output voice, the trigger threshold and voice scaling. The subpatch handled pod communication, triggering arbitration and synchronization, MIDI playback and scaling, LED feedback and UI feedback for each pod.

To give users feedback about playback, the host controlled RGB LEDs inside each pod. A red color indicated that the pod was currently disabled, and green indicated enabled. Tapping the pod caused a white flash (all RGB colors on). During pattern playback, blue LEDs were flashed on every eighth-note beat and on every note in the pattern; this was done to ensure ample player feedback, even during rests.

An operator's interface was implemented using the lcd object. The interface presented an analog of the layout of the banquet room, with each table represented as a circle with its corresponding pod/table number inside. A thin red circle indicated a disabled pod; a green circle indicated an enabled pod, with the circle's line thickness representing the relative volume level. When a pod was triggered, a blue circle within the green one flashed on and off to indicate active playback.

The interface enabled the Max operator to follow the conductor's movement among the tables and assist the conductor in controlling the performance. Using a pen and graphics tablet, the operator could move a circular marquee around the interface, with pen pressure controlling the diameter of the marquee and thus the area of influence. The operator could enable, disable and scale the volume of each of the pods using keystrokes, with the keystrokes' effects applying only to the pods enclosed by or touched by the marquee.

Bidirectional network communication was accomplished using the otudp object [6] to send UDP messages wirelessly between the Max host computer and the pods. Outgoing messages from the host were addressed to each pod on a local IP address and port, with the last byte of the IP number corresponding to the pod's table number. Messages from the pods to the host were sent on a broadcast IP address, enabling both the host and a backup host computer to receive them.

HARDWARE
Hardware for the pods was implemented on a PIC microcontroller, using NetMedia's Siteplayer web coprocessor to manage TCP and UDP connectivity. The entire module was connected wirelessly using an Ethernet-to-WiFi bridge from D-Link. The pods were powered by 12-volt rechargeable motorcycle batteries.

Figure 3. The pods in testing

Sensing was accomplished using a set of four force-sensitive resistors under the rim of the dome. Velocity for the dome was taken as the highest sensor reading on any of the four sensors, provided the value was above a given threshold of sensitivity. Each dome's sensitivity threshold could be set either by sending it via UDP, or by using an HTML forms-based interface on the Siteplayer.

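The "horse race" progression described above lends itself to a compact sketch. The following Python rendering is illustrative only: the original logic lived in the Max patch, and the paper does not state the exact odds used, so the probability values and the `step_pod` helper below are assumptions.

```python
import random

NUM_PATTERNS = 53  # "In C" consists of 53 melodic patterns


def step_pod(pod_pattern: int, master: int) -> int:
    """Advance one pod's pattern number by at most one step.

    Pods "race" towards (or past) the master counter: the further a
    pod lags behind, the better its odds of incrementing, and pods
    ahead of the master are unlikely to advance. Hard limits keep
    every pod within +/- 4 of the master number. The 0.5/0.1 odds
    below are illustrative assumptions, not values from the paper.
    """
    if pod_pattern >= NUM_PATTERNS:
        return NUM_PATTERNS                   # pod has finished the piece
    lag = master - pod_pattern                # > 0 means the pod is behind
    if lag >= 4:
        return pod_pattern + 1                # too far behind: force a step
    if lag <= -4:
        return pod_pattern                    # too far ahead: hold
    odds = 0.5 + 0.1 * lag                    # behind -> likelier to advance
    return pod_pattern + 1 if random.random() < odds else pod_pattern
```

Called once per clock period for every pod, a rule of this shape keeps each table within four patterns of the master counter while still letting entrances and progress vary from table to table.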
RGB LED output was controlled by the PIC, based on UDP messages received from the master control software. The white LED flash on each tap was generated locally by the PIC. All other LED control was received from the master control software. The Siteplayer HTML interface could also be used to set LED intensity levels, and was used during installation for diagnostic purposes, when the master control software was not online.

The message protocol between the pod units and the master control software was kept as light as possible to minimize traffic. Two bytes were sent from each pod to the master control software on each tap: the last octet of the pod's address, and the velocity value. Three bytes were sent from the master control software to each pod: the red, green, and blue intensity values for the LEDs. Optional fourth and fifth bytes could be sent to adjust the sensitivity of the pod's FSRs, but this was used only during installation.

Because the pods were to be laid out in a specific order in the performance space, fixed IP addresses were used for each pod, to simplify contact with them during installation. Likewise, the wireless bridges were also given fixed addresses in advance. Although it was not strictly necessary to associate IP addresses with instrument numbers, it was convenient, given the time scale of the project.

IMPLEMENTATION FOR UBICOMP 2003
For UbiComp 2003, our implementation of the Eos Pods will be somewhat less extensive than for the initial performance. Between 6 and 12 pods will be used, and only one sampler or synthesizer will be used. Audio will not be routed to local clusters, but will instead be designed around a central PA system feeding the entire space of the demonstration.

CONCLUSION
The initial performance of the Eos Pods at the orchestra's annual supporters' banquet was a successful test run for the system. All technical components performed as specified, with remarkably few technical problems. Communication between conductor, support staff, and audience was somewhat confused, as instructions were rushed. For example, the pods were designed to trigger once a player's hand lifted from the pod; not all audience members understood this, and some kept their hand on the pod constantly, wondering why it didn't play. In future performances, we will look for a better solution, allowing for a wider range of playing styles. Overall, however, the audience had an enjoyable experience, and all parties were pleased with the result. Eos has hopes for future and more ambitious performances with the pod system, and we look forward to further refinement of the system as the design team and the orchestra gain more experience with it.

ACKNOWLEDGMENTS
We thank Jonathan Sheffer and the Eos Orchestra for their support and for their openness to experimentation, without which this project would not have been possible. We also thank the staff and students of the Interactive Telecommunications Program at ITP, many of whom were drafted into service in the fabrication of the system; and the staff of Audio, Video, & Controls, whose expertise and enthusiasm made for a more pleasant working experience throughout the process.

REFERENCES
[1] T. Riley, "In C", musical score and performance instructions (1964).
[2] R. Ulyate and D. Bianciardi, "The Interactive Dance Club: Avoiding Chaos in a Multi-Participant Environment", Computer Music Journal, Volume 26, Number 3, MIT Press (2002).
[3] P. Cook, "Principles for Designing Computer Music Controllers", ACM CHI Workshop on New Interfaces for Musical Expression (NIME), Seattle, April 2001.
[4] M. Weiser and J.S. Brown, "The Coming Age of Calm Technology", http://www.ubiq.com/hypertext/weiser/acmfuture2endnote.htm (October 1996).
[5] J.A. Paradiso, "Dual-Use Technologies for Electronic Music Controllers: A Personal Perspective", Proceedings of the 2003 Conference on New Interfaces for Musical Expression (NIME-03), Montreal, Canada.
[6] M. Wright, "otudp 2.4", CNMAT, UC Berkeley. http://cnmat.cnmat.berkeley.edu/OpenSoundControl/clients/max-objs.html

Wishing Well Demonstration

Tim Brooke
Intel Corporation
JF3-377, 2111 N.E. 25th Ave.
Hillsboro, OR 97124 USA
+1 503 264 8512
[email protected]

Margaret Morris
Intel Corporation
JF3-377, 2111 N.E. 25th Ave.
Hillsboro, OR 97124 USA
+1 503 264 8512
[email protected]

ABSTRACT
The technology concept described in this paper addresses needs that emerged from ethnographic research on the health needs of tomorrow's elders, specifically the need to envision and plan for the later phase of life. The difficulties of old age are often experienced as sudden shifts, which are managed in a reactive, crisis-driven style. The gap between this need for future planning and existing tools motivated the concept of "lifespan mapping" - engaging interfaces that invite reflection about desirable ways to live out the later phases of life. The "wishing well" concept described in this paper is one component of a larger lifespan-mapping concept that encourages ideation about the future. Users select an image that they find appealing by moving a stone onto that image. Through a sequence of selections, the user develops a collection of images that represent the mood or spirit of particular desires and aspirations.

Keywords
Tangible interface, ubiquitous computing, calendar, elderly, lifespan, prospective cognition

INTRODUCTION
As has been well publicized, the increasing lifespan is sparking a huge demographic shift in the U.S. and many other countries, principally in Europe and Asia [1][2]. The needs of the growing elderly population far exceed current medical and social resources. Health issues, from wellness to chronic disease management to support for cognitive decline, will need to be addressed through a variety of innovative approaches that supplement medical offerings. Ubiquitous computing technologies for the home are one such approach, and may be particularly well suited to the needs of tomorrow's elders [3][4].

To inform the development of home health computing systems for tomorrow's elders, Intel has conducted extensive ethnographic studies on the lifestyles and concerns of boomers and elders. The initial focus of the study has been cognitive impairment, a health issue that is estimated to be the strongest threat to independent living [5]. From analysis of focus groups, household interviews and shadowing, a set of key themes emerged as needs and corresponding opportunities for ubiquitous computing.

This paper will address one of these themes - the need for temporal orientation, specifically orientation to the distant future - and the corresponding opportunity area of "lifespan mapping". By lifespan mapping we mean tools that will allow people to reflect on time in very personal and expansive terms: from recollecting the past, to focusing on the present and near future, to envisioning the distant future. We are exploring a host of interfaces intended to make all these activities more engaging, more textured, and less daunting than they currently are.

The particular concept explored in this paper, the wishing well, is a probe to invite reflection and envisioning about the future. Probes are research tools - malleable, low-fidelity prototypes - designed to elicit feedback from users [6]. Our intent is to bring probes into the homes of users, and to base our iterations on the way users shape them to meet their needs and desires.

Figure 1. Link between needs and solutions

RESEARCH FINDINGS
Orientation to current time is well recognized as a sign of cognitive lucidity. This immediate temporal orientation is assessed in mental status exams and is relatively well supported through calendaring tools. However, our research indicated the importance of broadening the consideration of temporal orientation to include not only the present, but also the distant past and distant future - realms not addressed in most calendaring tools.

Figure 2. Calendars help orient people to the present, and they are often saved to ease recollection of the past. But they are not so helpful for envisioning and goal setting for the distant future.

It is true that orienting to the present is more challenging in old age: retirement can involve a disconnection from the rhythms, rituals and communities associated with workdays and weekends. Cognitive impairment certainly adds to this challenge. Equally if not more consequential, though, are the struggles of orienting to the distant future. We found that many people avoid thinking about their own and their loved ones' old age until forced to do so as a result of health crises. In some of these cases, more planful, proactive decision making may well have pre-empted a number of crises and consequently prolonged periods of independence. In other, less dire situations, envisioning the future may have influenced households' choices about where to live and what social relationships to build, in ways that would have improved quality of life later on.

Following are some examples from ethnographic fieldwork that illustrate the tendency to avoid thinking about the future:

• Joe and Lucinda cared for Lucinda's mother when she developed dementia. This was a rocky couple of years. They moved her mother to a new facility every three months until finding a sufficiently assistive environment. Each move was precipitated by a crisis: a fall, an incident of aggression, wandering. Lucinda's mother was miserably unhappy everyplace but the last one. They now wish they had known her mother's probable trajectory: they think this knowledge would have allowed them to avoid some of the crises by moving to a place that offered graduated levels of assistance.

• Paul and Jenna, who love their urban third-floor flat, expressed determination to live there forever. Arthritis in Jenna's knee already makes the stairs a challenge, though. When asked about their plans for the future, her response was "I suppose we'll cross that bridge when we get to it."

• Sue, a former teacher and successful real estate broker, now suffers from severe vision deterioration that prevents her from driving and a host of other activities. She recently moved to an upscale assisted care environment that she finds stifling. Even though she hasn't driven for ten years, she keeps her car as a symbol of the freedom that she misses.

• "I didn't want to believe this was happening to her," said a young woman about her grandmother, who has Alzheimer's. In retrospect, she sees that she and her parents overlooked signs of deterioration for years. She feels that they may have missed an opportunity where medication could have made a big difference in slowing the course of the disease.

So why don't people think about old age?
There are a variety of obstacles to envisioning the future that emerged from our ethnographic research. First is the very daunting prospect of losing health, freedom, and independence. Imagining these changes for oneself or a loved one is so painful that many simply avoid thinking about them. Another is optimism and the accompanying denial about the prospect of negative future events. This is a delicate issue, since an optimistic explanatory style has been associated with better mental and physical health [7][8]. So to some degree, denial about the prospect of illness may actually help ward it off. Denying evidence of existing illness, however, is certainly problematic. Almost every household in our study reported overlooking early signs of dementia, and subsequent regret about missing opportunities for treatment, education, and lifestyle planning. Another obstacle is uncertainty about the future: in particular, the resources one will have and the health issues one will have to contend with in old age. Even if these uncertainties weren't there, goal setting and planning can be intimidating. Some worry about not living up to goals: they would prefer to have low or no expectations than to

18
disappoint themselves. Most, however, lack the preliminary ideas and vision to start concrete planning. They sometimes have only an inkling of what they want. They lack tools to explore these preliminary desires and wishes in a way that is speculative and even playful. Existing planning tools, which tend to be business oriented, are overly specified for loose ideation about the future.

The needs and obstacles that we observed suggest a host of requirements for future envisioning tools that are not part of current calendaring and planning tools. Specifically, the tools should allow people to:

§ Carve out periods of time that are personally salient, while remaining oriented to universally accepted metrics of time
§ Ponder difficult decisions about old age in a nonstressful way
§ Conjecture, "feel out", play with, and imagine possibilities
§ Examine values and let those guide life decisions
§ Work through obstacles that may impede wishes or goals
§ Plan the way one wants to live, not just milestones
§ Evaluate the kind of community and relationships that are important for one's late phase of life, and how to achieve the desired quality of social connectedness
§ Plant wishes and goals without worrying about whether or not they are achievable
§ Reflect and build on previously set goals and wishes

Next is a description of a research probe that is designed to invite ideation about the future. As mentioned above, this is one component of a set of tools to help with what we are calling "life compassing."

CONCEPT: WISHING WELL

A wishing well scenario:

Bob is nearing retirement and contemplating how he will live in the future. He's been putting off thinking about it. He's not sure how to begin. Recently he bought a "Wishing Well" toolkit. It seems more like a board game than a retirement planning tool, and a fun way to consider the future. The pieces of the toolkit lie in front of him on his desk. He starts to play around with a stone that forms part of the toolkit. Using the stone, Bob starts to select some images that are displayed on a tabletop display panel. Later, when Bob is with Sue (his wife), they look at the pictures of new houses, neighbourhoods and activities Bob has selected. Discussing his selections, they add new images and remove some from the stone. Sue removes photographs of houses with stairs; her arthritic knee is giving her trouble. Several months later, Bob takes his stone to a realtor. The images stored on the stone help the realtor select houses that Bob and Sue would be interested in buying. The images also form a journal of Bob and Sue's past thoughts and imaginings, and aid them in making decisions and planning out how they might live their retirement.

The inspiration for a tool to aid future ideation comes from the experiences of making wishes, such as blowing out candles on a birthday cake or throwing a coin into a wishing well. These experiences are generative, imaginative, playful and hopeful regarding the future. Wishes are fun to make, and can be ambiguous, romantic and emotionally driven. This whimsical spirit of wishing is in contrast to many existing computerized tools; for instance, a travel website might demand an airport (preferably a three-letter code) and departure and return dates when the user has only a vague idea of when and where travel would be desirable. The Wishing Well interface would invite self-reflection and projection of ambiguous future plans (e.g. "I want to feel like I've been far away" or "I want to travel to Europe") rather than demanding specifics (e.g. date, time and airports). The intention is to use ambiguity as a resource for design, enabling intriguing and delightful user experiences [9].

The Wishing Well helps users reflect on their futures by presenting sets of images that represent a sense of mood, spirit, or atmosphere of a future ideation. These images work like swatches of fabric, samples of wallpaper, or the mood boards used in design studios. They express and communicate the direction, feel, and style of a wish rather than the specifics of a plan. Since these attributes are sometimes harder to articulate and less accessible to people, this tool facilitates communication with family members or with someone supplying a service (be it a spiritual counselor or a realtor). The images serve as a record of dreams, desires, and wishes that users can reflect back upon, as well as inspiration or kick-off points for more ideation about the future.
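To make the stone-and-image interaction concrete, here is a minimal sketch in Python of how a stone's "absorbed" mood images might be modeled. The class and method names are our own invention for illustration; this is not part of the probe's actual software.

```python
from dataclasses import dataclass, field

@dataclass
class Stone:
    """A tangible token that "absorbs" mood images the user selects."""
    stone_id: str
    images: list = field(default_factory=list)  # e.g. image file names

    def absorb(self, image: str) -> None:
        # Placing the stone over an on-screen image adds it to the stone.
        if image not in self.images:
            self.images.append(image)

    def remove(self, image: str) -> None:
        # A partner reviewing the collection can discard images again.
        if image in self.images:
            self.images.remove(image)

# Bob collects images of houses; Sue later removes the one with stairs.
stone = Stone("bob-retirement")
for img in ["ranch_house.jpg", "two_story.jpg", "lakeside.jpg"]:
    stone.absorb(img)
stone.remove("two_story.jpg")
print(stone.images)  # ['ranch_house.jpg', 'lakeside.jpg']
```

The point of the sketch is that the stone carries only a loose collection of evocative images, not structured plan data, so it can be handed to a family member or a realtor as a mood board rather than a specification.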
The hardware consists of a number of stones and a flat, horizontal touch screen onto which the main interface is displayed. The main digital interface is an image browser that allows the user to navigate through a series of images related to a future ideation. For instance, if the user is planning a new home, then images related to homes, architecture, community and neighbourhood will be displayed. The stones are used to hold moods, which are defined by a collection of images. Images become associated with a stone by placing the stone over an image. The image is then "absorbed" into the stone.

The Wishing Well is intended to be operated by one user at a time. However, by orienting the screen to a horizontal position, the interface becomes like a table or board game around which many people can sit, take turns, observe and share ideas. Through our probe research, we will learn how important this social element is to this concept. The reference to board games is reinforced by the use of stone "pieces" to collect images.

Tangibility is an important aspect of the Wishing Well. The stones' physical characteristics of texture, colour, shape, and weight may emotionally engage the users and help them overcome some of their fears and inhibitions about envisioning the future.

Figure 3. Table and user interface components

Next/Future Steps

The "Wishing Well" is intentionally unfinished as a concept. We are planning to iteratively shape the interface through cycles of user review and development. Our probe research will address an array of questions, including:

§ Are photographs a suitable means of capturing the mood and spirit underlying wishes for the future?
§ Do people want to wish alone or with others?
§ Is it more helpful to use ambiguous or literal stimuli to help with preliminary planning?
§ Do people want their wishes recorded?

Eventually, the Wishing Well and other Lifespan Mapping interfaces will become integrated with the array of proactive health technologies that we are currently prototyping. Our goal is to test these technologies as a home system through clinical trials in 2004.

REFERENCES

1. Coughlin, J.F.: Technology and the Future of Aging. Proceedings, Technologies for Successful Aging. Supplement to Journal of Rehabilitation Research and Development, Vol. 38 (2001), pp. 40-42.
2. Greenspan, A.: Aging Population. Testimony before the Special Committee on Aging, U.S. Senate, February 27, 2003.
3. Dishman, E., Matthews, J.T., Dunbar-Jacob, J.: Everyday Health: Technology for Adaptive Aging. National Research Council Workshop on Adaptive Aging, January 23-24, 2003.
4. Minsky, M.: Wired Magazine, August 2003. Available at: http://www.wired.com/wired/archive/11.08/view.html?pg=3
5. Agüero-Torres, H., Thomas, V.S., Winblad, B., Fratiglioni, L.: The Impact of Somatic and Cognitive Disorders on the Functional Status of the Elderly. Manuscript.
6. Hutchinson et al.: Technology Probes: Inspiring Design for and with Families. Proceedings of CHI '03, ACM Press, New York (2003).
7. Seligman, M.E.P.: Explanatory Style: Predicting Depression, Achievement, and Health. In M.D. Yapko (Ed.), Brief Therapy Approaches to Treating Anxiety and Depression. Brunner/Mazel, New York (1989), pp. 5-32.
8. Peterson, C., Seligman, M.E.P., Vaillant, G.: Pessimistic Explanatory Style as a Risk Factor for Physical Illness: A Thirty-five Year Longitudinal Study. Journal of Personality and Social Psychology, Vol. 55 (1988), pp. 23-27.
9. Gaver, B.: Ambiguity as a Resource for Design. Proceedings of CHI '03, ACM Press, New York (2003).
Extended Sensor Mote Interfaces
for Ubiquitous Computing

Waylon Brunette¹, Adam Rea¹, Gaetano Borriello¹,²

¹ Dep't of Computer Science & Engineering, University of Washington, Box 352350, Seattle, WA 98195 USA
² Intel Research Seattle, 1100 NE 45th Street, Suite 600, Seattle, WA 98105 USA

{wrb,area,gaetano}@cs.washington.edu, [email protected]
ABSTRACT
Although traditionally used for routing and ad hoc networking research, wireless sensor nodes can also be used to create personal area networks of interactive devices. Our work focuses on leveraging these low-power wireless sensor nodes as a communication and control mechanism to create a variety of programmable I/O devices. In this demonstration we showcase a handheld RFID reader, a DisplayMote with an integrated graphical LCD and accelerometer, and motes with USB and PCMCIA based interfaces.

Keywords
RFID, I/O devices, PAN, wireless sensors, mote

INTRODUCTION
Personal I/O devices connected through a wireless personal area network enable new interaction methods [1]. We have developed a suite of task-specific interface devices that extend the UC Berkeley motes [2] (now commercialized by Crossbow) to leverage coordination between various I/O devices. It is only through combinations of heterogeneous devices that the ubiquitous computing applications envisioned by Weiser [3] can be fully realized.

All too often, the research community reinvents the wheel by implementing the communication link between units on a per-application basis in an attempt to avoid the complexity and power overhead of 802.11 and/or Bluetooth. This leads to heterogeneous networks that require complex gateways for passing data from one device to another. By leveraging a maturing platform like the UCB motes, the time and effort required to create new platforms is significantly reduced while the homogeneity of the PAN is preserved. Devices are able to communicate directly with other PAN nodes, and the need for a single centralized control point is greatly diminished.

At this conference, we will demonstrate a collection of devices that were built to exploit the motes' low-power communication capability. They include a display mote (DisplayMote) that adds a 64x128 LCD screen and an accelerometer, an RFID reader mote (Mite) that adds an RFID reader antenna and logic, and versions of the mote that use USB and PCMCIA interfaces (USBmote and PCmote) to connect to the laptops, PDAs, and servers that populate ubiquitous computing environments.

DISPLAY MOTE

FIGURE 1: Functional Diagram of DisplayMote

The driving motivation of the DisplayMote (shown in Figure 1) was to create a fully programmable device that would allow a user to provide input to and receive output from a sensor network. The DisplayMote is a mote with a graphical LCD, an accelerometer, a buzzer and four buttons. We extended the Mica2dot mote design to maintain as much hardware and TinyOS compatibility as possible and to lower the barriers for others looking to create a specialty I/O device. Minimizing the size of the DisplayMote was a key aspect of the design; the final form factor is a single PCB roughly the size of a wrist watch. The integrated 64x128 graphical LCD ensures that real-time, pertinent data from the environment around a user can be displayed without requiring the user to carry anything more than a wrist-watch. The buttons and accelerometer can be used in many different ways depending on the application. We have used the buttons and accelerometer to create a tilt-and-click input device that can be used as a remote mouse, keyboard, or menuing system. This input method is derivative of TiltType, an accelerometer-based text entry method for very small devices [4]. Outside of the I/O uses already discussed, the DisplayMote is also designed for use as a remote control, a
reminding device, and as an input device for kiosks and digital public displays. It can also be used as a remote terminal to connect to specialized devices, such as the Intel Personal Server [5], that provide personal storage but have no integrated display.

FIGURE 2: "Mite" Handheld RFID Reader

HANDHELD RFID READER
To leverage the growing field of passive RFID technology, we created a small handheld RFID reader that can be used as a personal actuation device. The low-power Mite shown in Figure 2 has a small read range of only a few inches and is based on the SkyeTek multi-protocol RFID reader [6]. We enhanced the reader as a sensor node with buttons to create a small, mobile reader with communication and control capabilities. The Mite can read and write information into an assortment of passive RFID tags that have a globally unique ID and storage space for writing additional information. The Mite also has a small rechargeable lithium polymer power source with an accompanying USB charger. Our goal was to create a small, portable handheld reader to allow passive RFID tags to be leveraged in many ubiquitous computing applications.

An envisioned usage model for RFID tags is in smart spaces and location sensing. Tagged objects can contain part history, schematics, or even pointers to product manuals. In addition, tags can moderate physical access control or can contain code that is executed upon a tag read, allowing the Mite to act as an actuator. Tags can also easily be associated with auxiliary data contained in the infrastructure, making the possibilities of configuration almost infinite.

By allowing the user to have a personal reader, the privacy model changes so that the user is in control of his or her data and whereabouts. This is in contrast to fixed readers, where the environment is tracking the tag (on a person or object); in that model, a user is tracked by the infrastructure. With the reader under his control, the user is able to collect his own data without having to worry about who has access to potentially sensitive location information. Additionally, in this scenario, information always flows to the person using the handheld and his devices, eliminating the intervention of outside infrastructure entirely.

MOTE INTERFACES
There has been a general lack of convenient methods to connect motes to standard PCs and handheld devices. We have developed two prototypes to interface motes via common communication ports. The goal is to lower the barriers to entry for new mote users and to provide a means to utilize motes with computing devices that are not equipped with traditional serial ports. We have developed prototypes of a USB and a PCMCIA based mote (shown in Figure 3) that exhibit near plug-and-play functionality. This makes connecting to existing infrastructure more streamlined and less prone to error. In addition, we also use a compact flash mote called a Canby to fill out our toolkit of mote interfaces [7].

FIGURE 3: Prototypes of Mote Interfaces (PCMCIA Based Mote, USB Based Mote)

APPLICATIONS
While the goal was to develop a set of highly flexible sensors and PAN building blocks, the individual components have been designed around a few core application sets. These generalized application sets helped guide the design process and provided a checklist of functionalities that we wanted to maintain throughout the development process. The Mite was designed to be a personal actuation mechanism that would allow the augmentation of objects and spaces with data. The DisplayMote had a simple goal of maximizing input/output capabilities.

An important application of the Mite is its ability to be used as an actuation mechanism. The Mite can be used to cause actions in the environment based on the information that the reader finds within RFID tags. For example, the Mite can be used as an out-of-band connection mechanism. It can send a laptop, PDA, or any other device enough information to bootstrap itself into a wireless network using the information contained in an RFID tag placed in the environment. A Bluetooth capable device can be augmented with an RFID tag containing its MAC address, allowing the discovery process to take less time and giving the user direct control of which devices he chooses to
communicate with. Conference rooms can have RFID tags that contain the necessary data to configure a laptop for that location, such as the SSID and WEP key of the wireless network and the name of the printer and/or projector available in the room. Not only does this allow a convenient method of configuration, it also limits the ability to obtain the information to those with room access. These actuation events don't have to be limited to computer interactions. RFID tags can be placed anywhere and be used as widgets to trigger events, such as virtual switches to turn lights on and off. This allows for extremely dynamic environments where widgets can be reprogrammed and reconfigured.

The Mite was also designed for applications that augment objects and spaces with information. These applications allow users to access and control data that has been associated with a particular item. A prime example of this individual control of data is associating repair histories with a specific device. Past repairs and scheduled maintenance can be annotated at an elevator itself as well as in a centralized database. This means that the information needed on the worksite would already be at the worksite, without relying on a network connection. In addition, individual parts can now store their individual history locally, giving more precise and accurate information. Another strong advantage of having inexpensive, lightweight RFID reader/writers is the ability for people to create personalized content. Business cards are imprinted with a variety of static information (e.g. name, title, email address, etc.) and are given to a variety of people. With writable RFID tags, business cards can contain active content that allows them to become malleable documents full of additional information that can be varied depending on who the intended recipient is. For example, it would be appropriate to embed the URL of a work homepage within the card given to a work colleague, but nice to be able to point a friend to a site of pictures from last week's golf outing using the same business cards. With the reprogrammable memory available with RFID, business cards can now contain sounds, product descriptions, or any other data that can fit on an RFID tag.

The DisplayMote's general application set is squarely focused on the lightweight I/O capabilities that the platform offers. The DisplayMote was designed to provide access to systems which have no integrated display, like the Intel Personal Server [5]. Our goal was to make a platform the size of a wristwatch to enable short messages from these devices to be displayed to the user and for the user to give feedback to these devices. Messages might be reminders or lists of surrounding devices that the DisplayMote is able to communicate with. The DisplayMote can be used as a low-cost notification system which can be placed outside common spaces (like conference rooms) to dynamically show reservation schedules and the current status of the room. Another application that the DisplayMote was developed to address was the ability to make a "human sensor" a part of the sensor network. By showing messages and giving the user a means of input, a human can interact in real time with a UCB sensor mote on a lightweight platform. This could be useful in the deployment of sensor networks to ensure that each node is properly configured and working.

CONCLUSION
Our goal is to create a toolkit for development of specialized personal-area network devices that utilize a standard wireless communication platform. We leveraged the low-power radio and sensor network protocol work already in progress at UC Berkeley and other research institutions to create general-purpose I/O devices. By creating a system of programmable I/O devices that share a common programming language and low-power communication protocol, developers should be able to utilize these building blocks for application development. Hopefully this approach enables the research community at large to focus on implementing the desired functionality of the device without having to divert their energies to developing the base hardware components.

ACKNOWLEDGMENTS
We would like to thank Intel Research for their support of the project. Roy Want's team took our design and produced a prototype of the DisplayMote. Additionally, we would like to thank Ken Smith of Intel Research Seattle for his help with packaging solutions. Finally, we would like to thank Kurt Partridge and Saurav Chatterjee for advice in building the DisplayMote.

REFERENCES
1. K. Fishkin, K. Partridge, and S. Chatterjee, "Wireless User Interface Components for Personal Area Networks," Pervasive Computing, Oct. 2002, vol. 1, no. 4, pp. 49-55.
2. J. Hill, et al., "System architecture directions for networked sensors," Proc. 9th Int'l Conf. on Architectural Support for Programming Languages and Operating Systems, 2000, pp. 93-104.
3. M. Weiser, "The Computer for the 21st Century," Scientific American, Sept. 1991, vol. 265, no. 3, pp. 94-104.
4. K. Partridge, et al., "TiltType: Accelerometer-Supported Text-Entry for Very Small Devices," Proc. International Conference on User Interface Software and Technology, 2002, pp. 201-204.
5. R. Want, et al., "The Personal Server: Changing the Way We Think about Ubiquitous Computing," Proc. International Conference on Ubiquitous Computing, 2002, pp. 194-209.
6. SkyeTek M1, http://www.skyetek.com/products/SkyeRead%20M1.pdf
7. Lakshman Krishnamurthy, Intel Corporation. Personal contact.
Palimpsests on Public View:
Annotating Community Content with Personal Devices
Scott Carter, Elizabeth Churchill, Laurent Denoue, Jonathan Helfman, Paul Murphy, Les Nelson
FX Palo Alto Laboratory
340 Hillview Avenue, Building 4,
Palo Alto, CA 94304, USA
+1 650 813 7700
{carter, churchill, denoue, helfman, murphy, nelson}@fxpal.com

ABSTRACT
This demonstration introduces UbiComp attendees to a system for content annotation and open-air, social blogging on interactive, publicly situated, digital poster boards using public and personal devices. We describe our motivation, a scenario of use, our prototype, and an outline of the demonstration.

Keywords
Annotation; comment; public bulletin boards; community content; social blogging

INTRODUCTION
palimpsest (n). "A manuscript, typically of papyrus or parchment, that has been written on more than once, with the earlier writing incompletely erased and often legible."

The system we propose to demonstrate allows people to annotate content on interactive, digital bulletin boards located in public places (Plasma Posters, Figure 1) using PDAs. We envisage this to be a mechanism by which community members can exchange and explore interests and ideas. By publishing such annotations in public places, linked to the content to which they refer, we create a visible "buzz" of "interest clusters".

In this demonstration description, we first describe our digital community poster boards and present user opinions related to commenting on and annotating content published on those boards. We then describe our approach to enabling personal and public annotation of digital community content using public and personal devices. We present a scenario, outline our current prototype, and describe our demonstration at UbiComp 2003.

COMMUNITY CONTENT ON PUBLIC DISPLAY
Plasma Posters are large screen, interactive, digital, community bulletin boards that are located in public spaces [1]. Underlying the Plasma Posters is an information storage and distribution infrastructure called the Plasma Poster Network. We have had three Plasma Posters running in our lab for over a year, and two running in sister labs in Japan for 4 months.

Figure 1: Annotating a Plasma Poster posting using a PDA

Unlike digital advertisement boards (e.g. Adspace Network's CoolSign boards), content that is posted to the Plasma Posters is either generated by community members and sent by email, or automatically selected from the company intranet. Content typically consists of URLs, text, images and short movies. A touch-screen overlay on the plasma displays enables interaction with content, including navigation and browsing of posted content and of hyperlinks within that content.

Usage logs, user surveys and interviews have revealed considerable interaction with content at the Plasma Posters, including printing and forwarding of content to oneself and to others from the Plasma Posters themselves [1]. Content authors have also been emailed with comments regarding their postings (e.g. Figure 2). These comments are persistent, conversational threads [3] between readers and authors of posted content. We have also observed the existence of threaded posts (an item that is sent in response to something previously posted). These threads and comments demonstrate the ways by which posted content becomes the nexus of conversation.
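The threaded posts and reader comments described above can be sketched as a simple data model. This is an illustrative sketch in Python, with invented names and an invented example posting; it is not the Plasma Poster Network's actual implementation.

```python
# Hypothetical sketch of threaded community content: a posting collects
# reader comments, and a posting may itself be a reply to an earlier one.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Posting:
    author: str
    title: str
    url: str = ""
    in_reply_to: Optional["Posting"] = None       # threaded posts
    comments: list = field(default_factory=list)  # persistent reader comments

    def add_comment(self, reader: str, text: str) -> None:
        # Comments stay attached to the posting, forming a conversation thread.
        self.comments.append((reader, text))

original = Posting("jane", "Is rover going robo?", "http://example.com/rfid-dogs")
original.add_comment("jason", "not my dog!")
followup = Posting("jeffrey", "Ambient dog display", in_reply_to=original)
print(len(original.comments), followup.in_reply_to.title)
# → 1 Is rover going robo?
```

Keeping the reply link and the comment list on the posting itself is what lets posted content act as the "nexus of conversation": every later reaction remains reachable from the original item.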
Figure 2: A comment on a community posting emailed to the author of that content. The email contains the comment and a URL to the original posting.

Given people's propensity to interact through and around content in this way, we are developing methods that make content annotation a more prominent feature of the Plasma Poster Network and the Plasma Posters themselves. Inspired by instances of PDAs used for sharing comments in focused collaboration, meeting and educational situations (e.g. [5,10]), we have extended the Plasma Poster Network to support capture of posted content to personal devices such as PDAs, creation of annotations for that content on the PDA (with text, graphics, and audio), and reposting of the annotated content to the system, and thus to the Plasma Posters. There are precedents for assuming people will post personal content on situated, public displays from personal devices. Examples include the Progress Bar's Meshboard in London, UK, where patrons can send images from cell phones [11], and the Appliance Studio's TxTBoard, where SMS text messages can be sent to public displays [13]. To date, however, these technologies do not support inline annotation of existing content. Further, these technologies have so far focused on what has been called "person-to-place" publishing. We wish to extend this notion to "person-to-place-to-people-to-person" content annotation, augmentation and publication.

ANNOTATION
Annotation involves marking of content where the original remains unchanged. Most examples of digital annotation deal with annotating textual documents, but some do include annotation of audio or video content. Most annotations are text-based or ink-based, although some are audio and pictorial.

We characterize annotation systems as falling broadly into 3 categories: 1. annotations for personal use; 2. collaborative annotations; and 3. public/social annotations. In the first category, the goals are typically to support active reading (e.g. [2,4,9]), to help with content retrieval (including summarization, search and classification), for new document retrieval, and for content reuse in composition of new documents. In the second category, collaborative annotation, the goal is usually to point someone else to interesting parts of a document (e.g. including text, video, voicemail, text-chat), as a method of activity coordination, as a method of ongoing note-sharing in a working situation, and for serendipitous sharing. Finally, social or public annotation is less team-directed than collaborative annotation, allowing people to leave comments for others to happen across. In the last case, most are Web-based (e.g. [5,7,10,14]).

Examples of current uses of public annotation can be found in several applications on the World Wide Web. The most common forms include newsgroups and Web-based discussion forums, bulletin boards, or "blogs". Most are designed to be accessed, contributed to, and read by lone individuals from PCs. Our design challenges have been to design easy-to-use and appealing methods for such annotation from mobile devices, and to produce interfaces that effectively display those annotations in public fora.

Figure 3: The Plasma Poster Interface; the posting represented on a PDA with the commenting facility visible; and the Plasma Poster display with the created annotation visible. The notes along the right edge of the Plasma Poster are all annotations that have been created by community members from their PCs, PDAs, or the "scribble" interface at the Plasma Poster itself.

ANNOTATING COMMUNITY CONTENT: A Scenario
Before detailing the technical aspects of our demonstration, we present a scenario of the system in use.

While listening to a talk on a new shared note-taking application, Jane, a conference attendee, overhears someone near her talking about how they have just implanted a tracking device in their dog. She opens her laptop, does a quick Google search on "rfid dogs", and e-mails the first link she finds to the address of a nearby Plasma Poster, giving the posting the title "Is rover going robo?" Another attendee, Jason, passing by the Plasma Poster in the lobby
nearby, notices the post and wants to add that such tracking devices are highly controversial, as their safety has not been fully proven. He presses the "comment" button on the display and uses the scribble pad to attach an annotation ("not my dog!") to the display, adding a pointer to a URL to a Web site where the tags are discussed more critically.

Later, another attendee, Jeffrey, who has just been to a talk on ambient displays, sees the same posting. He approaches the display with his PDA, presses the "grab posting" button, and downloads the current posting to his PDA using the wifi connection. After he sees that his PDA has opened a web page showing the content from the posting and the comment left by Jason, he wanders off to another talk, sketching a response along the way.

Later, other attendees gather around the display and begin talking about the post. They read the comment left by Jason and look through the site he recommended, and conversation begins to focus on where exactly they implant the tags. After scouring the article, they locate the paragraph that describes where the implants are positioned ("usually in the fleshy area of the neck…"). One of the folks near the display uses a gesture to highlight that paragraph and attaches an annotation to that region ("where they implant it…"). When they are about to leave the lobby area, they notice another annotation posted to the display, added remotely by Jeffrey. They open the annotation and find that it is an animated sketch of an orb getting darker and lighter. They then press the audio play icon next to the sketch and hear the author's description of the animation ("an ambient display that gets brighter as your dog gets farther away"). This spawns a whole new discussion amongst those present, with some arguing that the proposed design is ridiculous while others make the case that, while simplistic, it may have merit. Later, Jane is passing by the Plasma Poster and sees all the annotations that have been posted over her original content. She is amused to discover her post has caused so much response and debate, and forwards the recommended URL to her home email so she can read it later.

System implementation and architecture
The Plasma Poster Network is a client-server system that has been designed to make it easy for content creators to distribute information to their community (Figure 4). Server components provide the collection and hosting infrastructure. The Plasma Poster server consists of a relational database (e.g., MySQL from MySQL AB) and Java servlets and Java Server Pages (JSPs) that run in a standard Web server (e.g., Tomcat from the Apache Software Foundation). Client components provide a variety

client platform (e.g., large plasma display or personal computer).

Figure 4. The Plasma Poster Network Architecture, with Annotation components shown in white. (The diagram groups the components into Reading/Listening Interfaces, Writing/Recording Interfaces, the Posting Infrastructure, and the Hosting and Distribution Infrastructure.)

Additions to the Plasma Poster Network make it easy for readers of previously posted content to create and distribute annotations on that content. The arrangement of technology we demonstrate here extends previous systems for access to publicly shared content through personal devices [5] by bringing together an infrastructure and a range of client applications that support a collage of devices (public and personal), working across multiple media types, and focusing on associating annotations with community-posted content, where annotations may be immediately introduced into the system or where sufficient contextual information is stored on a personal device to allow offline annotation to be made and later uploaded into the system. On the server side, the Annotation Servlet accepts annotations on posted content from both sketch-based and audio annotation clients. Upon receiving annotation data, the servlet interacts with system databases to associate annotations to content. A link to the posted content is stored along with the annotation's media type, a link to the posting author (defaulting to "anonymous" when user information is not available), and the onscreen location of the annotation interface at the time the annotation was authored. Furthermore, to support conversation across multiple annotations, the servlet can specify that a particular annotation is a reply to a previously posted one and that a set of annotations are related and should be shown
of content displays and interaction mechanisms. For simultaneously, allowing multimodal annotations. The
example, posting of information to the Plasma Poster Annotation servlet also interacts with a personal repository
Network is primarily through e-mail. A PosterShow Visual database to store postings of interest to individual users for
Basic application provides a cyclic view of posted content their personal perusal at a later time. Stored content can
suitable for display and navigation on a Plasma Poster include a complete posting, parts of a posting, annotations
or any combination of these. Also on the server side, the

Annotate JSP allows client applications access to the data in the annotation and personal repository databases. Client-side support for annotations on personal devices includes a sketching tool and an audio-recording tool. The sketching tool is implemented as a Java applet and allows users to draw responses to comments. Users first specify a posting to annotate using an interface served by the Annotation JSP. Once a user has selected a posting, the sketch applet allows use of the PDA stylus to input simple annotations. The audio annotation tool is implemented as an embedded Visual Basic application and allows users to record a brief comment using the device’s built-in microphone. Comments are uploaded to the Annotation Servlet from the wifi-enabled PDA.

The Annotate JSP provides client-side interfaces for annotations on public displays. The JSP dynamically displays annotation icons next to their associated postings. In this way users may scroll through and open annotations using simple gestures. Users may also sketch annotations on the public display using a version of the sketching tool for that device. Also on the client side, a Web-based interface allows users to manage their personal content repository. Users can review postings and associated annotations that they have collected from public displays, or store new content to post at a later time. This interface thus gives users working away from the display a way to see annotations to postings in which they have expressed interest.

DEMONSTRATION FOR UBICOMP
Before the conference, select members of the UbiComp community will be asked to register with our system and to post some content for public display. Four PDAs will be available at the conference itself to enable attendees to interact with posted content.

We will support content posting, capture, and annotation from laptops, PDAs, and PCs. We will support, for example, a laptop user in the conference’s internet connection area who wishes to post the web site of a nearby restaurant that he enjoys, as well as plate suggestions and other comments. Similarly, we will support users who, for example, take and upload photos of a demonstration in progress. We will support viewing and annotating content both at the public display itself and via personal devices. For example, a person using the public display can leave a sketch or audio response to the posted restaurant suggestion. A PDA user, meanwhile, can press a button on her display that captures the content of the posting and all of its annotations to her PDA. She could then walk over to the demo to witness it herself and attach her comments to the posting. We will also support targeted annotation of specific parts of content. For example, a user of a public display may use a gesture to select and attach annotations to a specific region of text.

Instead of posting the annotation directly to the display, the user might want to take another picture and perhaps make comments about both pictures collectively. In this way she has appropriated publicly posted, social content into a novel piece of content that is once again personal. She could then repost this new content to the display to again transfer the content to another domain of ownership. In future work we intend to explore how users conceptualize such transfers of ownership.

REFERENCES
1. Churchill, E.F., Nelson, L. and Denoue, L. Multimedia Fliers: Information Sharing With Digital Community Bulletin Boards. Proc. Communities and Technologies 2003, September 2003, Kluwer Academic Publishers.
2. Denoue, L. and Vignollet, L. An Annotation Tool for Web Browsers and its Applications to Information Retrieval. RIAO 2000, Paris, France, 2000, pp. 180-195.
3. Erickson, T. Persistent Conversation. Introduction to Special Issue of JCMC 4 (4), June 1999.
4. Golovchinsky, G. Emphasis on the Relevant: Free-form Digital Ink as a Mechanism for Relevance Feedback. Proceedings of ACM SIGIR '98, Melbourne, Australia, 1998.
5. Greenberg, S. and Boyle, M. Moving Between Personal Devices and Public Displays. Workshop on Handheld CSCW, CSCW '98, November 14, 1998.
6. Gronbaek, K., Sloth, L. and Orbaek, P. WebWise: Browser and Proxy Support for Open Hypermedia Structuring Mechanisms on the WWW. International World Wide Web Conference, Toronto, Canada, 1999, pp. 253-267.
7. Hanna, R. Annotation as Social Practice. In S. Barney (Ed.) Annotation and Its Texts. New York, Oxford: Oxford University Press, 1991.
8. iMarkup, http://www.imarkup.com, 1999.
9. Marshall, C.C., Price, M.N., Golovchinsky, G. and Schilit, B. Collaborating over Portable Reading Appliances. Personal Technologies, vol. 3, no. 1, 1999.
10. Myers, B.A., Stiel, H., and Gargiulo, R. Collaboration Using Multiple PDAs Connected to a PC. In Proc. CSCW '98, ACM Press, pp. 285-294, 1998.
11. Progress Bar Meshboard, http://news.bbc.co.uk/2/hi/technology/2861749.stm.
12. ThirdVoice, http://www.thirdvoice.com, 1999.
13. Appliance Studio's TxtBoard, http://www.appliancestudio.com/sectors/smartsigns/txtboard.htm.
14. Yee, K.P. The CritLink Mediator, http://www.crit.org/critlink.html.

Platypus Amoeba

Ariel Churi
319 Manhattan Ave. #3
Brooklyn, NY 11211 USA
+1 646 382 6522
[email protected]

Vivian Lin
135 Washington Ave.
Brooklyn, NY 11205 USA
+1 718 398 0081
[email protected]

ABSTRACT
Platypus Amoeba (Platy) is a reactive sculpture. It knows when someone is petting it and it can indicate how it feels. By petting Platy the user speaks to it. Platy uses lights and sound to speak to the user. This feedback can indicate happiness or sadness or other emotions. Users begin by trying only to initiate a response from the Platy, but then quickly change to trying to get a happy response. The user is trying to control Platy by petting it in certain ways, but Platy is controlling the user by indicating which way it would like to be petted.

Keywords
virtual pet, interactive sculpture, responsive technology, zoomorphism, human-robot interaction

INTRODUCTION
Technology is continually being devised to satisfy people’s needs. But how does technology change our needs? How willing are we to change our actions and desires based on technology? Platypus Amoeba is an experiment in human/computer interaction. It asks us: what is our relationship to our technology? It is not technology masquerading as a creature but rather a creature born of technology. Platypus Amoeba entices with the desire for power as it allows us to cause exciting light patterns and strange noises. But quickly we see the limitations of that power as certain interactions cause negative or unsatisfactory responses. We then change our behavior to get the desired response. Is the user controlling Platy, or is Platy controlling the user?
INTERACTION
Interaction with the Platypus Amoeba is most effective with one person at a time. The user must pet from front to back to get a vocal response. If the user fails to be consistent with their patterns of petting, Platy may stop glowing or emit a harsh squeal. Platy can react with different light formations. For example, Platy can follow your hand with lights that mirror your action. Afterwards, Platy can get tired and its lights start to trail the action of your hand over its body. Like the Public Anemone Robot (2), Platy can choose not to interact with the user.
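The front-to-back petting rule described above can be sketched in platform-neutral form. The actual piece runs on a PIC microcontroller (see the Architecture section); this Java fragment is purely our illustration, and all names in it are invented. Sixteen light readings stand in for the phototransistors, and a stroke counts as a proper front-to-back pet only if successive shadowed sensors move toward the back.

```java
// Illustrative sketch only; Platy's real firmware runs on a PIC16F877.
// All names are hypothetical. Sixteen readings model the phototransistors.
public class PettingSketch {
    static final int SENSORS = 16;

    // A sensor is "shadowed" when its reading drops well below ambient light.
    static int shadowedIndex(int[] readings, int ambient) {
        for (int i = 0; i < SENSORS; i++) {
            if (readings[i] < ambient / 2) return i;
        }
        return -1; // no hand detected
    }

    // A stroke is a front-to-back pet if each shadowed index is >= the last.
    static boolean isFrontToBackPet(int[] strokeIndices) {
        for (int i = 1; i < strokeIndices.length; i++) {
            if (strokeIndices[i] < strokeIndices[i - 1]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] goodStroke = {0, 3, 7, 12, 15}; // hand sweeps front to back
        int[] badStroke  = {5, 2, 9, 1};      // erratic petting
        System.out.println(isFrontToBackPet(goodStroke)); // true -> purr
        System.out.println(isFrontToBackPet(badStroke));  // false -> squeal
    }
}
```

In this reading, a consistent stroke earns a purr while an erratic one earns the harsh squeal described above.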
PHYSICAL FORM
The physical form of the Platypus Amoeba morphed from its original concept, a giant caterpillar, to an organic and zoomorphic shape, unidentifiable but familiar. The shape of Platy is round and bulbous. The nubby legs of Platy can be attributed to the original giant caterpillar concept. When one actually touches Platy, its texture is flexible and resilient. There is resistance against your hand when Platy is touched, due to the thickness of the silicone material. Platy’s exterior is made from soft, translucent silicone rubber. Platy’s mass is derived from the resilience of the Dragon Skin Q silicone rubber, which provides the density and resistance felt when touched. Based on human interaction with Platy, the natural response is to squeeze Platy’s body and hold one of its legs. Users are fascinated with its texture and tactility, usually stroking Platy until the point where they feel comfortable enough to squeeze its body.

Figure 2: Final Platypus Amoeba.

Figure 1: Original concept of Platypus Amoeba.

ZOOMORPHISM
The zoomorphic shape of Platypus Amoeba is attributed to the giant caterpillar, but also to an inconceivable shape not found in our natural environment. The definition of zoomorphic is having the form of an animal: of, relating to, or being conceived of in animal form or with animal attributes. Platy’s shape lends itself to no particular creature, but its two eyes (see Figure 3) and many feet/nubs make it seem to be some sort of creature. Noises of an unknown creature emanate from it. Users are able to decipher a beckoning purr or sometimes a less friendly or ambivalent response. What social cues determine emotion through sound? How does the user determine whether a certain coo or purr emitted from the Platy is a positive or negative response?

Figure 3: Platypus Amoeba wakes up.

EMPATHETIC RESPONSE
Platypus Amoeba looks like a small, helpless creature. By touching its soft skin and having it react, you are sharing an experience with it. Soon, it is clueing you in to what it wants, and you make it sparkling happy or you are depriving it and it is truculent and sad. According to pet behavior studies, cats exhibit signals indicating “Don’t Pet Me Anymore” aggression, explaining why cats that seem to enjoy being petted suddenly bite (3). In contrast, cats can emit noises that express a desire for attention, which gives humans the desire to pet. With Platy, the user will continue to pet it and receive positive feedback. If the feedback is negative, the user will question where in their actions the Platy signaled “Don’t Pet Me Anymore” aggression.

INTERFACE
Platy experiences the outside world through sixteen phototransistors. These detect the shadow of the user’s hand as he/she pets the Platy. Phototransistors were chosen primarily for aesthetic reasons (see Figure 4). Many other sensor options were discarded because they would not be pleasing to the eye. Photoresistors would look bad; force-sensing resistors would be expensive and unattractive; QPROX sensors would have been nice, as they could be almost completely hidden, but we were unable to get consistent results with them. Platy provides feedback through the sixty-four LEDs, which shine through its back, through the color of its eyes, and through various purring and cooing noises from a hidden speaker. The lights can show red, green, and blue like a TV screen, as well as white. This gives us a full range of color options to work with. Platy’s sounds were created by a human voice. They were based on years of living with pets, while trying to keep away from identification with any particular animal.

Figure 4: Shell of the Platypus Amoeba.

DESIGN AND FABRICATION
Platypus Amoeba is designed to look like a living creature but not like any particular living creature. Users may think of the Platy as alive without identifying its personality with a person or a cat or some other actual creature. Platy is designed to look cute. People should not feel threatened by Platy or have too much difficulty thinking of Platy as alive. Most electronics are hidden and the overall shape seems harmless. Eyes are low and set far apart in a large head resembling the Japanese super-deformed style of
character design (1). The leg/nubs appear underdeveloped. Overall it looks like the baby of a strange alien.

ARCHITECTURE
Platy is self-contained except for the power source. The exterior shape was first sculpted in oil-based plasticine. From this shape we made a three-part plaster mold, into which we poured the uncured silicone. The electronic components sit inside this silicone shell, with only a wire, for power, protruding. The software resides on a PIC16F877 microcontroller, which controls four MAX7219 light controllers and an ISD1416 ChipCorder sound chip (see Figure 5). Also inside are a small speaker and four arrays of sixteen lights, one array for each color (red, blue, green and white) for a total of sixty-four lights. Economy was a consideration in the design, and the total cost of the internal components was under US$200.

Figure 5: Platypus Amoeba block diagram

RESULTS
User testing with the general public took place at the ITP Spring Show 2003. In general, users were pleased with the tactility and interactivity of Platy. Many initial responses were to try to pick up Platy and/or squeeze the main body, but Platy reacts only to petting. For video of user responses and interactions with the Platy, please visit: http://stage.itp.tsoa.nyu.edu/~vl336/Spring_2003/SD/platypus.html

CONCLUSION
Perhaps Platypus Amoeba can be networked with others to create a small army of responsive creatures. With different personalities formed by how each user treats their Platy, perhaps different Platys could interact with each other and with information from a variety of sources. Ideally, Platy would become completely portable and run on batteries. With that accessibility and mobility, Platy could become more ubiquitous and part of the household.

ACKNOWLEDGMENTS
Cindy Yang, who made the Platypus Amoeba silicone body in a three-part plaster mold without ever having attempted such a thing before; Mallory Whitelaw, who helped with dynamic coding; Cindy Jeffers, for all her help; Greg Shakar, for his invaluable technical support; and Tom Igoe, our professor.

REFERENCES
Pictures, video, schematic and instructions on building your own are available at: http://stage.itp.tsoa.nyu.edu/~ac1065/sculptwdatabody.htm
1. AIC/Yoyogi Animation Gakuin, How to Draw Manga: Making Anime, Graphics-Sha, 1996 (Japanese) and 2003 (English).
2. Breazeal, C., The Public Anemone Robot, SIGGRAPH 2002 Conference Abstracts and Applications, MIT Media Lab, Cambridge, MA, 2002.
3. Hetts, S., Ph.D., Certified Animal Behaviorist, Explaining Cat Aggression Towards People. Available at http://www.catcaresociety.org/aggression.htm

M-Views: A System for Location-Based Storytelling

David Crow, Pengkai Pan, Lilly Kam, Glorianna Davenport


Interactive Cinema Group
E15-368 Media Laboratory
Massachusetts Institute of Technology
{ crow | ppk | lillykam | gid }@media.mit.edu

ABSTRACT
M-Views is a system for creating and participating in context-sensitive, mobile cinematic narratives. A Map Agent detects participant location in 802.11-enabled space and triggers a location-appropriate video message, which is sent from the server to the participant's "in" box.

Keywords
context-aware systems, participatory media, wireless indoor location awareness, mobile cinema, storytelling

INTRODUCTION
As handheld computing becomes more popular, it will gradually incorporate context-aware features into everyday usage [1] [2]. Information selection will become easier because devices will infer what their users want, even before they pick up a stylus. While location-based marketing and instant messaging seem certain, less attention has been paid to the creative possibilities of context-aware, ubiquitous computing until recently.

Every person creates and receives stories. People talk to others, write in diaries, and send messages by phone, fax or mail. Over the Internet, this flow of information is enhanced by the speed, capacity and flexibility of computer technology. Email and webpages record our “personal narratives.” Weblogs can be regarded as an evolving tour through the author’s life [3]. This form of storytelling will inevitably migrate to the handheld, context-aware platform. Thus, there will be new possibilities for art and entertainment: interactive media that relates to the user’s current environment. We are interested in exploring these possibilities.

Our goal is to build a system for the development and deployment of mobile, context-aware applications, specifically location-based, cinematic stories. This platform, M-Views, consists of the following components:

• Client-server architecture, allowing multiple clients to connect to a story server, which analyzes their context/location data and sends each client the next piece of its personalized experience
• Scripting language and authoring software [4], giving authors the tools they need to create and test location-based narratives
• Location awareness engine, which uses wireless network signal strength analysis to estimate the location of each handheld client

Figure 1: A mobile cinematic story

The resulting M-Views experience takes the user on a journey through the physical world, and pieces of the story, in the form of media clips, appear on the handheld at different locations. The selection, order, and timing of these clips are all unique; each person will experience the story in a different way, because with every movement, s/he affects its outcome. We call this interactive experience Mobile Cinema.

Mobile Cinema is augmented by physical surroundings and social engagement. As the participant navigates physical space, s/he triggers distinct media elements that often depict events at the location where they appear. The individual media segments are acquired at discrete times and places, with allowances for the serendipitous augmentation of the whole experience through instant messaging (done with the M-Views client). Since any system is only as good as its content, our research has also included the production of three mobile “movies” of this kind, which range from a mystery, to a college drama, to our latest story: an action thriller called 15 Minutes.

M-Views was designed for Mobile Cinema, but its robust features give it other capabilities as well. The platform can be used to support many types of applications.

TECHNOLOGY
The M-Views client-server architecture consists of multiple handheld clients connecting to a centralized server over a wireless (802.11) network. It makes use of an account/subscription service model, allowing users to subscribe to multiple stories at the same time. The server contains modules and information used for specific behavior, such as particular types of context monitoring or scripting operations, and the location awareness engine. Story scripts are also maintained on the server and dictate the content and media to be returned. M-Views applications are defined by these story scripts.

M-Views Client
The M-Views client operates on the Windows CE operating system (Pocket PC). Each new event is dropped into a message queue, which is visibly represented as the user's inbox. In addition to the message manager interface shown in Figure 3, the client also features a map viewer/editor tool. This permits users to see their server-calculated positions and those of others. It also allows administrators to calibrate map coordinates using only the standard client. The software is modular and can be augmented for new functionality and sensors. It uses third-party programs (such as Windows Media Player) to play streaming media over the network. When a message arrives with an associated media URL, the streaming media player is launched. The information flow is diagrammed in Figure 4.

Figure 3: Client Interface on Pocket PC
Figure 4: System Information Flow
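The client behavior described above, where each incoming event is dropped into a message queue that backs the inbox and an external player is launched when a message carries a media URL, can be sketched as follows. This is our simplified illustration, not the shipped Windows CE client; all class and method names are invented.

```java
// Simplified illustration of the M-Views client's inbox behavior; not the
// actual Windows CE code. All class and method names are invented.
import java.util.ArrayDeque;
import java.util.Queue;

public class ClientInboxSketch {
    // A story event or instant message, optionally carrying streaming media.
    static class Message {
        final String title;
        final String mediaUrl; // null when the message has no media attached
        Message(String title, String mediaUrl) { this.title = title; this.mediaUrl = mediaUrl; }
    }

    private final Queue<Message> inbox = new ArrayDeque<>();
    private String lastLaunchedUrl;

    // Each new event from the server is dropped into the message queue,
    // which is what the on-screen inbox displays.
    void onEventReceived(Message m) { inbox.add(m); }

    // Opening a message with an associated media URL hands the URL to an
    // external player (the real client delegates to Windows Media Player).
    Message openNext() {
        Message m = inbox.poll();
        if (m != null && m.mediaUrl != null) {
            lastLaunchedUrl = m.mediaUrl; // stand-in for launching the player
        }
        return m;
    }

    String lastLaunchedUrl() { return lastLaunchedUrl; }
    int pending() { return inbox.size(); }

    public static void main(String[] args) {
        ClientInboxSketch client = new ClientInboxSketch();
        client.onEventReceived(new Message("Scene 3", "http://example.org/scene3.asf"));
        client.onEventReceived(new Message("Instant message", null));
        client.openNext(); // first message carries media, so the player fires
        System.out.println(client.lastLaunchedUrl());
        System.out.println(client.pending()); // one message still queued
    }
}
```

Delegating playback to an existing third-party player, as the section notes, keeps the client itself small and modular.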

Communication
Communication between the client and server is carried out via HTTP POST requests. Using this protocol provides both stability and portability. Every update cycle (approximately once per second), the client transmits authentication information, communication settings, and sensor data to the server, which then validates the information and sends back messages, story events, and location estimates. This communication scheme eliminates the need for a logon/logoff mechanism, and it is very fault-tolerant. If the connection is interrupted (perhaps due to losing wireless network coverage), the client will keep trying to send the last request until a connection is made or the program is terminated. To allow for roaming between wireless networks, the client attempts to reinitialize its wireless network card and DHCP address after any connection timeout or interruption. In practice, it takes about 10-30 seconds to reacquire a new network connection after the previous one has been lost.

M-Views Server
The M-Views server is written in Java and runs as a servlet with the appropriate container software, such as Apache Tomcat. After initialization, the server maintains all story, message, and user information as memory-resident XML data. XML management is done using the Apache Xerces 2 package.

The server features a messaging framework that is specifically designed to support narrative structures but flexible enough to be used for a full range of applications. Under this framework, all messages, whether they are client-to-client instant messages or events encountered in a location-based story, are processed using the same mechanism. All messages and events are stored in either a story script or the general message forum (to which all users are subscribed and where client-to-client messages are created). Additionally, all messages, even those sent by clients, can be made context-dependent and can have associated media URLs. These features, coupled with familiar functionality (i.e., message forwarding and group mailing), allow for an intuitive, robust, context-aware messaging experience.

Scripting
Story scripts contain a collection of messages (events). These XML elements include event information, requirements for the client's context and state variables, state change information (applied to a user's profile when he or she receives the message), heuristics that describe the content, and an associated media URL. The scripting system is used to specify story behavior based on user activity, and each event element contains user variable requirements and results. If current variable values (maintained in the account data of each user) meet event requirements, the event is considered encountered, and the user's variables are changed according to any update rules that may also be defined for that event.

Location Awareness
MapAgent is the default location awareness engine written for M-Views. M-Views clients monitor the Received Signal Strength Indicator (RSSI) for all 802.11 wireless access points in range. These measurements are averaged over a small time window and transmitted to instances of MapAgent running on the server. For each subscribed map, the associated MapAgent compares the RSSI averages to measurements recorded previously by an administrator at known locations, which are called hotspots. Hotspots have a threshold, and they are represented on the map with translucent circles, as in Figure 6. The MapAgent algorithm uses a combination of nearest-neighbor matching, triangulation, and trajectory estimation to determine client locations. The average accuracy is between 1 and 5 meters, depending on the environment, map resolution and calibration layout. It functions both indoors and outdoors. MapAgent also keeps track of all clients currently appearing on the map, allowing applications to incorporate a location-based social component.

Figure 6: Map Monitor on the Client
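The event mechanics described under Scripting can be illustrated with a small sketch: an event is considered encountered when the user's variable values meet its requirements, and its update rules are then applied to the user's profile. This is our reconstruction, not the actual script schema or server code; all names are invented.

```java
// Illustrative reconstruction of the event-matching rule described under
// "Scripting". Names are invented; the real server keeps this state as
// memory-resident XML rather than Java maps.
import java.util.HashMap;
import java.util.Map;

public class ScriptEventSketch {
    final Map<String, Integer> requirements; // variable -> minimum value
    final Map<String, Integer> updates;      // variable -> new value on encounter

    ScriptEventSketch(Map<String, Integer> requirements, Map<String, Integer> updates) {
        this.requirements = requirements;
        this.updates = updates;
    }

    // True if every required variable meets its threshold in the user's profile.
    boolean meetsRequirements(Map<String, Integer> userVars) {
        for (Map.Entry<String, Integer> req : requirements.entrySet()) {
            if (userVars.getOrDefault(req.getKey(), 0) < req.getValue()) return false;
        }
        return true;
    }

    // If the event is encountered, apply its update rules to the user's variables.
    boolean encounter(Map<String, Integer> userVars) {
        if (!meetsRequirements(userVars)) return false;
        userVars.putAll(updates);
        return true;
    }

    public static void main(String[] args) {
        // Event that fires only once the user has seen scene 2, then marks scene 3.
        ScriptEventSketch scene3 = new ScriptEventSketch(
                Map.of("scenesSeen", 2), Map.of("scenesSeen", 3));
        Map<String, Integer> user = new HashMap<>();
        user.put("scenesSeen", 1);
        System.out.println(scene3.encounter(user)); // false: requirement not met
        user.put("scenesSeen", 2);
        System.out.println(scene3.encounter(user)); // true: event encountered
        System.out.println(user.get("scenesSeen")); // updated to 3
    }
}
```

Because requirements and updates are plain data, the same matching loop can serve story events and context-dependent instant messages alike, which mirrors the unified messaging framework described above.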

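The hotspot comparison at the heart of MapAgent can be illustrated with a minimal nearest-neighbor fingerprint match, one of the techniques the Location Awareness section names. This sketch is ours, not the MapAgent source, and it omits the triangulation and trajectory-estimation components.

```java
// Minimal sketch of the nearest-neighbor part of the MapAgent approach:
// compare a client's averaged RSSI readings against fingerprints recorded
// at known hotspots and report the closest one. Our own illustration, not
// the MapAgent source; triangulation and trajectory estimation are omitted.
import java.util.Map;

public class RssiNearestNeighbor {
    // Euclidean distance between two RSSI fingerprints, keyed by access point.
    static double distance(Map<String, Double> a, Map<String, Double> b) {
        double sum = 0;
        for (String ap : a.keySet()) {
            double d = a.get(ap) - b.getOrDefault(ap, -100.0); // -100 dBm if unheard
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Return the name of the hotspot whose recorded fingerprint is closest
    // to the observed readings.
    static String nearestHotspot(Map<String, Double> observed,
                                 Map<String, Map<String, Double>> hotspots) {
        String best = null;
        double bestDist = Double.POSITIVE_INFINITY;
        for (Map.Entry<String, Map<String, Double>> h : hotspots.entrySet()) {
            double d = distance(observed, h.getValue());
            if (d < bestDist) { bestDist = d; best = h.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        // Fingerprints an administrator might record at two hotspots (dBm).
        Map<String, Map<String, Double>> hotspots = Map.of(
                "lobby",   Map.of("ap1", -40.0, "ap2", -70.0),
                "hallway", Map.of("ap1", -75.0, "ap2", -45.0));
        Map<String, Double> observed = Map.of("ap1", -42.0, "ap2", -68.0);
        System.out.println(nearestHotspot(observed, hotspots)); // lobby
    }
}
```

Averaging readings over a short window before matching, as the section describes, smooths out the momentary RSSI fluctuations that would otherwise make the nearest hotspot jump around.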
STORY DESIGN
The need for good content has prompted the creation of numerous M-Views stories. These have included two large productions by students at the MIT Media Lab: a campus-wide mystery (designed as a time-dependent scavenger hunt) and a dramatic tour through the lives of students at MIT. Each production stressed different aspects of Mobile Cinema, in particular nonlinearity and the connection with space.

Nonlinearity refers to the modularity of story clips. Authors must accept the possibility that clips will be seen at odd times or in strange orders. Therefore, the story and each clip that composes it must be able to withstand these uncertainties. M-Views authors have discovered that every clip should be entertaining independent of the other story material; each scene must have its own miniature “story arc.”

Connecting with space is essential to the mobile experience. The small screen of a handheld device is a disadvantage in this regard. Therefore, it is up to the author to anticipate the interest and curiosity of the user. Carefully planned cinematography is the key here. Authors of Mobile Cinema have learned to give their audience spatial awareness and dramatic focus through the use of motion, extreme close-ups, and wide establishing shots.

SIGNIFICANCE
Previous context-aware mobile media systems, such as the Cyberguide system [5], the Guide system [6] and the Hippie project [7], are all aimed at providing location-based experiences for visitors, city travelers, or museum tourists. All these systems adopt client-server architectures similar to M-Views, but differ in that they do not focus on the narrative aspect. In addition, few systems are full development platforms for mobile applications. None of this past research has focused on the development of cinematic narrative, and little effort has been made to purposely support multiple kinds of mobile applications using these architectures.

M-Views breaks new ground by giving people the chance to author and experience Mobile Cinema with unlimited freedom: use it to create your desired type of mobile movie or game, or build your own context-aware application. Or simply write about your own life, using the space around you as your medium.

ACKNOWLEDGEMENTS
The authors wish to acknowledge the contribution of all project team members: Carly Kastner, Lilly Kam, Debora Lui, Chris Toepel, and Dan Bersak. Special thanks goes to our Interactive Cinema colleagues: Barbara Barry, Paul Nemirovsky, Aisling Kelliher, and Ali Mazalek. We also thank Prof. Alan Brody, Prof. Bill Mitchell, Prof. Donald Sadoway, and Prof. Ted Selker for excellent acting, advice, and support. We gratefully acknowledge Thomas Gardos from Intel, Taka Sueyoshi from Sony, Steve Whittaker from BT, and Franklin Reynolds from Nokia for their kindness and support. This work is supported in part by grants from the MIT Media Lab's Digital Life Consortia and the Council for the Arts at MIT.

REFERENCES
[1]. G. Chen and D. Kotz, "A Survey of Context-Aware Mobile Computing Research," Technical Report TR2000-381, Dept. of Computer Science, Dartmouth College, November 2000.
[2]. J. Hightower and G. Borriello, "Location Systems for Ubiquitous Computing," Computer, special issue on location-aware computing, vol. 34, no. 8, pp. 57-66, 2001.
[3]. http://newhome.weblogs.com/historyOfWeblogs
[4]. Pengkai Pan, Carly Kastner, David Crow, and Glorianna Davenport, "M-Studio: an Authoring Application for Context-Aware Multimedia," ACM Multimedia 2002, Juan-les-Pins, France, 2002.
[5]. Gregory D. Abowd, Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper, and Mike Pinkerton, "Cyberguide: a Mobile Context-Aware Tour Guide," Wireless Networks, 3(5):421-433, October 1997.
[6]. Cheverst, K., N. Davies, K. Mitchell, A. Friday, and C. Efstratiou, "Developing a Context-Aware Electronic Tourist Guide: Some Issues and Experiences," Proc. of CHI 2000, Netherlands, pp. 17-22, April 2000.
[7]. Reinhard, M. Specht, and I. Jaceniak, "Hippie: A Nomadic Information System," In Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing (HUC '99), pp. 330-333.

Stanford Interactive Workspaces Project
Armando Fox, Terry Winograd, and the Stanford Interactive Workspaces group
Computer Science Department, Stanford University
Stanford, CA 94305
+1 650 723 9558
{fox,winograd}@cs.stanford.edu

ABSTRACT
“iRooms” (Interactive Workspace rooms) are characterized by the presence of one or more large shared displays plus several users’ laptops or handhelds. (See figure.) Key philosophical underpinnings of the iRoom project are fluid interaction and incremental integration. Fluid interaction refers to the ability of users to focus on the tasks they are doing, not on the technology being used to do them. Incremental integration refers to the ease with which new behaviors or devices can be incorporated into an existing ensemble of components forming a ubicomp environment, in order to augment or modify the behaviors of existing applications in that environment. In a “miniature” iRoom, we will demonstrate how the software design decisions that support incremental integration enable the capabilities of fluid interaction. Attendees will be able to directly interact with the technology themselves, instantly download client software, and view prototypes of “portable” iRooms that can be deployed in environments with little or no existing ubicomp infrastructure.

Keywords
Interactive, large display, collaboration, system software, integration, capture and access

INTERACTIVE WORKSPACES
The Stanford Interactive Workspaces project is exploring new possibilities for people to work together in technology-rich spaces with computing and interaction devices on many different scales. We have chosen to focus on augmenting a dedicated meeting space with technology such as large displays, wireless/multimodal I/O devices, and seamless integration of mobile and wireless “appliances” including handheld PCs. We concentrate on task-oriented work such as brainstorming meetings and design reviews (rather than entertainment, personal communication, or ambient information), and on the ability to rapidly prototype new software as well as integrate and augment legacy software.

This cross-disciplinary project is staffed by faculty and students from the Interactivity Lab, Software Infrastructures Group, and Graphics Lab; our experimental facilities are also used as applications testbeds by the Stanford Learning Lab, Wallenberg Global Learning Network (WGLN), Stanford Center for Integrated Facility Engineering (CIFE), and the Program in Writing and Rhetoric.

Our main experimental research facility, the iRoom, is located in the Gates Information Sciences Building at Stanford. We believe the iRoom is representative of "Weiserian" ubiquitous computing spaces within the task domains we are addressing. We are actively pursuing research on the intersection of HCI and systems problems that arise in deploying, operating and developing applications and human interfaces for an iRoom, including:

• Multi-device, multi-user applications
• Multimodal and fluid interaction
• Reusable, robust, and extensible system software for deploying COTS-based (commercial, off-the-shelf) ubiquitous computing environments like our own
• Integration of large (wall-sized) displays with advanced visualization capabilities into an iRoom
• Integration of computing “appliances” including PDAs, scanners, digital cameras, etc. into an iRoom

We explicitly focus on reusable system software and the ability to integrate "legacy" off-the-shelf applications and systems, and encourage others to build on our work. iROS, the middleware that powers the iRoom, has been deployed in other research and classroom settings around the world, including KTH (Swedish Royal Institute of Technology) in Kista, Sweden, ETH-Zurich, Hewlett-Packard Laboratories, Wallenberg Hall classrooms at Stanford University, and the Center for Design Research at Stanford University.
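At its lowest layer, iROS coordinates devices through a shared space of events (the EventHeap, described later in this abstract): clients post self-describing events and retrieve them by matching on fields, which is what lets a new device be integrated without modifying the components already in the room. A minimal sketch of the idea, using invented names rather than the real iROS API:

```python
# Toy tuplespace-style event heap, in the spirit of the iROS EventHeap.
# Class and method names are invented for illustration; this is NOT the
# actual iROS API.

class EventHeap:
    """Clients coordinate by posting self-describing events and
    retrieving them by field matching, so a new device can be added
    without modifying existing components."""

    def __init__(self):
        self.events = []

    def post(self, **fields):
        # Producers need not know who (if anyone) will consume the event.
        self.events.append(dict(fields))

    def take(self, **pattern):
        # Remove and return the first event whose fields match the pattern.
        for ev in self.events:
            if all(ev.get(k) == v for k, v in pattern.items()):
                self.events.remove(ev)
                return ev
        return None

heap = EventHeap()
# A hypothetical podium button posts an event; an independently written
# display client picks it up by event type alone.
heap.post(type="ButtonPress", source="podium-button")
ev = heap.take(type="ButtonPress")
```

The decoupling is the point: the button and the display client never reference each other, only the shapes of the events they exchange.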
iROS: Interactive Workspace Middleware
The dynamism and heterogeneity in ubiquitous computing environments on both short and long time scales imply that middleware platforms for these environments need to be designed from the ground up for portability, extensibility and robustness. We have developed the iROS (iRoom Operating System) middleware platform for augmented room-sized ubicomp environments through the use of three guiding principles: economy of mechanism, client simplicity, and use of levels of indirection. Apart from theoretical arguments and experimental results, our experience through several deployments with a variety of applications, in most cases not written by the original designers of the system, provides some validation in practice that the design decisions have in fact resulted in the intended portability, extensibility and robustness [4,6]. An important lesson drawn from our experience so far is that a logically-centralized design and physically-centralized implementation enables the best behavior in terms of extensibility and portability along with ease of administration, and sufficient behavior in terms of scalability and robustness.

The fundamental design stance of iROS is that a major challenge of ubicomp middleware is design for integration. We will inevitably continue to encounter situations in which the goal is to “integrate” a new behavior, controller, or service into an existing environment not designed to accommodate it; therefore the design goal of all our middleware is to make the integration task as easy as possible. This is reflected at the lowest layer in the iROS EventHeap [7], at the application/UI integration layer by iStuff [1] and the Patch Panel [2], and at the UI generation layer by Interface Crafter [10].

Interactive Workspace Applications and Technologies
iROS has been the basis of numerous technologies and applications, many in regular use in the Stanford iRoom and other interactive workspaces. Although each is the subject of one or more refereed publications, we try to give a sense of the breadth of work that has been enabled by this platform.

The Workspace Navigator is a suite of tools designed to facilitate capture, recall and reuse of information in an interactive environment.

iStuff [1] is a framework for prototyping physical UIs by building inexpensive physical devices and integrating them rapidly with existing iRoom behaviors and applications. The Patch Panel [2] provides a generic and easy-to-use software mechanism to intermediate between iStuff and existing applications, suitable for a range of sophistication from non-programmers to power users.

The AmbienTable explores issues involving the use of tables as ambient displays. The table displays several visualizations relevant to iRoom users, including the status of the room’s EventHeap, recent activity in the room, and the contents of the iClipboard. For example, the EventHeap Visualization provides awareness of the flow of information between machines within our environment, and has also been used to identify bugs and breakdowns in the system.

iWall (Interactive Wall) is a software framework for easing the development and deployment of multi-display post-desktop applications for ubiquitous computing environments. Multiple general-purpose graphical “views” run on several devices of varying capabilities and platforms and are controlled by applications through a simple but powerful EventHeap-based protocol. iWall interacts smoothly with other iROS-based technologies such as iStuff: for example, a user can play iPong (a table-tennis game spanning multiple displays, using iWall as a substrate) using iStuff and the Patch Panel to select among multiple physical “paddle controllers” as they play.

The iClipboard, PointRight [8], and Multibrowsing [9] together provide the essential mechanisms necessary to easily move data (Web pages and documents) back and forth between users’ personal devices and large shared displays. PointRight allows a single mouse and keyboard to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen, and keyboard control is simultaneously redirected to that machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. Multibrowsing allows any pointer to direct the movement of content from any display to any other display. The iClipboard allows cutting-and-pasting of content across machines (shared or personal, Windows or Mac), and integrates with PointRight to do the right thing (a user can cut something on one screen, use PointRight to have the same mouse move the pointer onto another screen of a different type of machine, then paste).

Although most iRooms deployed to date are fixed facilities built into infrastructure-rich rooms, we have also explored encapsulating much of the functionality of the iRoom in an “appliance” that also addresses the needs of roaming/nomadic users. The MeetingMachine [5] provides a substantial amount of the functionality of an iRoom in a projector-like appliance, and can be immediately deployed in a facility with no other infrastructure. The MeetingMachine’s design decisions reflect important differences that arise when trying to accommodate nomadic as well as “fixed” users in interactive workspaces.
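PointRight's screen-edge redirection can be pictured with simple geometry: given a known arrangement of displays, a cursor that crosses the edge of one screen re-enters on the adjacent one, and input follows it. The sketch below is a toy model assuming a single left-to-right row of uniform-width screens; it is an illustration of the idea, not the actual PointRight implementation:

```python
# Toy model of PointRight-style screen-edge input redirection: displays in
# an assumed left-to-right row of uniform 1024-pixel width. Illustrative
# only; not the actual PointRight code.

SCREENS = ["laptop", "left wall", "right wall"]  # hypothetical layout
WIDTH = 1024  # assumed uniform horizontal resolution

def move_cursor(screen_idx, x, dx):
    """Move the cursor dx pixels horizontally; when it crosses a screen
    edge, redirect it to the adjacent screen (as PointRight redirects
    mouse and keyboard input), clamping at the ends of the row."""
    x += dx
    while x >= WIDTH and screen_idx < len(SCREENS) - 1:
        x -= WIDTH            # re-enter at the left edge of the next screen
        screen_idx += 1
    while x < 0 and screen_idx > 0:
        x += WIDTH            # re-enter at the right edge of the previous screen
        screen_idx -= 1
    return screen_idx, max(0, min(WIDTH - 1, x))

# Dragging 100 px right from near the laptop's right edge lands the
# cursor on the adjacent wall display.
screen, x = move_cursor(0, 1000, 100)
```

The real system also handles two-dimensional layouts, heterogeneous resolutions, and keyboard redirection; the edge-crossing rule above is only the core of the idea.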
DEMONSTRATION SCHEDULE
We will have three kinds of demonstrations at Ubicomp ’03:

“Let us show you” demos will consist of guided demonstrations highlighting Interactive Workspace applications and technologies, including the Workspace Navigator, iStuff for physical UI prototyping, the Patch Panel for incremental integration and reconfiguration, and the AmbienTable for visualizing interactive workspace activity. Not all applications will be demonstrated during the entire demo session; please visit our booth for the detailed schedule.

“Exploratorium” style demos¹ will provide conference attendees the opportunity to freely interact with iRoom demos as they wish; researchers will be available to explain “what’s going on” in each demo, and posters adjacent to the demo booth will give further details.

“Try it, you’ll like it” will allow conference attendees with Windows XP laptops to try the MeetingMachine for themselves. Whereas the demo iRoom is intended to simulate the permanent iRoom at Stanford, the MeetingMachine provides a substantial amount of the functionality of an iRoom in a projector-like appliance, a kind of “iRoom-in-a-Box” that can be immediately deployed at a meeting or brainstorming session even if there is no network infrastructure in place. The client-side Windows software for the MeetingMachine will be available on CD-ROMs and USB Flash/CompactFlash drives for attendees to download immediately to their laptops. In addition, the MeetingMachine will support media transfer via USB, CompactFlash, and RFID tags.

SUMMARY
iROS and its associated applications have been successfully used in a number of experimental and production scenarios, including design brainstorming sessions by professional designers, construction of class projects built on the iROS system, training sessions for secondary school principals, construction management, collaborative writing in a Stanford English course, and, of course, our own weekly group meetings. iROS technology has also been deployed in two classrooms in Stanford’s new Wallenberg Hall, and Multibrowse and PointRight have been readily adopted by instructors of courses ranging from engineering to Classics. Overall results have been positive, with many suggestions for further development and improvement; public deployments of iRooms for student use in libraries and dormitories are in the planning stage, using the MeetingMachine appliance [5] for rapid deployment. We have been encouraged by comments from programmers who have appreciated how easy it is to develop applications with our framework. Finally, the adoption and spread of our technology to other research groups also suggests that our system is meeting the needs of the growing community of developers for interactive workspaces.

For more information and publications on the Interactive Workspaces project, to see photos and videos of the iRoom, or to download the iROS software (including easy installers for Windows NT/2000/XP), visit http://iwork.stanford.edu.

REFERENCES
1. Ballagas, R., Ringel, M., Stone, M., and Borchers, J. iStuff: A Physical User Interface Toolkit for Ubiquitous Computing Environments. In Proc. Intl. Conf. on Computer/Human Interaction (CHI) 2003 (to appear).
2. Ballagas, R., Szybalski, A., and Fox, A. The Patch Panel: Enabling Control-Flow Interoperability in Ubicomp Environments. Submitted for publication.
3. Johanson, B., Fox, A., and Winograd, T. The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. IEEE Pervasive Computing Magazine 1(2), April-June 2002.
4. Ponnekanti, S., Johanson, B., Kiciman, E., and Fox, A. Portability, Extensibility and Robustness in iROS. Proc. IEEE International Conference on Pervasive Computing and Communications (PerCom 2003), Dallas-Fort Worth, TX, March 2003.
5. Barton, J., Hsieh, T., Vikram, V., Shimizu, T., Johanson, B., and Fox, A. The MeetingMachine: Interactive Workspace Support for Nomadic Users. Proc. Fifth IEEE Workshop on Mobile Comp. Sys. and Apps. (WMCSA) 2003, Monterey, CA, October 2003.
6. Johanson, B., and Fox, A. Tuplespace-based Coordination Infrastructures for Interactive Workspaces. Journal of Software and Systems (JSS), to appear in 2003.
7. Johanson, B., and Fox, A. The Event Heap: A Coordination Infrastructure for Interactive Workspaces. In Proc. Fourth IEEE Workshop on Mobile Computing Systems and Applications (WMCSA 2002), Callicoon, NY, June 2002.
8. Johanson, B., Hutchings, G., Winograd, T., and Stone, M. PointRight: Experience with Flexible Input Redirection in Interactive Workspaces. In Proc. Symp. on User Interface Sys. and Tech. (UIST) 2002.
9. Johanson, B., Ponnekanti, R., Sengupta, C., and Fox, A. Multibrowsing: Moving Web Content across Multiple Displays. Proc. Intl. Conf. on Ubiquitous Computing (UBICOMP) 2001.
10. Ponnekanti, S., Lee, B., Fox, A., Hanrahan, P., and Winograd, T. ICrafter: A Service Framework for Ubiquitous Computing Environments. In Proc. Intl. Conf. on Ubiquitous Computing (UBICOMP) 2001.

¹ Inspired by the “do, then read” hands-on exhibits at the Exploratorium museum in San Francisco, for those readers who have visited it.

Picture of Health: Photography Use in Diabetes Self-Care
Jeana Frost
The Media Laboratory, MIT
Cambridge, MA 02139, USA
[email protected]

Brian K. Smith
School of Information Sciences and Technology, College of Education
The Pennsylvania State University
University Park, PA 16802, USA
[email protected]

ABSTRACT
Physicians and nutritionists often prescribe behavioral change such as healthy eating and regular exercise to patients with diabetes without an understanding of the reality of the individual’s life. As a window into the patient’s experience, we propose a new type of health record, photography. Diabetics shoot pictures of their daily actions such as eating, exercising, and socializing. Software tools synchronize the images with concurrent blood sugar data. Diabetics review and critique their health practices by examining these records in diabetes education settings or with health care providers. We present results from a study and demonstrate the use of this method.

Keywords
Explanation, health education, imagery as data, reflection, visual systems

INTRODUCTION
Approximately sixteen million people in the United States live with diabetes. The disease is a major risk factor for heart disease, stroke, and birth defects, and it is the leading cause of kidney failure, amputations, and adult blindness [7]. Very generally, diabetes is a condition in which the individual lacks enough effective insulin to transfer sugar from the blood stream to the cells. Without proper care, sugar remains in the bloodstream, leaving the cells starved of energy. Over extended periods, elevated blood sugars damage small blood vessels and nerves, leading to the conditions described above.

Physicians treat diabetes in the doctor’s office, hospital and laboratory, but people live with this disease in homes, offices and public spaces. Treatment often takes the form of a general prescription of healthy behavior. Diabetes cannot be cured, but it can be controlled with insulin supplements, oral medicines, and, most importantly for this work, by modifying one’s health practices. Physicians and dieticians tell their patients to increase exercise, decrease fat and carbohydrate consumption, and monitor blood sugar levels with commercial glucose meters. These behaviors greatly reduce the likelihood of diabetic complications [8,12]. Yet, people find it difficult to adhere to new regimented behaviors. Anyone who has attempted a diet or exercise plan is familiar with that frustration.

There are two major difficulties in maintaining such a routine. Firstly, it is often difficult for diabetics to understand that the actions taken now have implications for health status later. They are not motivated to shift behavior. Unlike conditions with immediate biofeedback from unhealthy behaviors, such as asthma, in diabetes the damage caused by continued high blood sugar may not manifest for many years. At that point, it is often too late to save deteriorating vision or blood flow in fingers or feet. Without such biofeedback, many diabetics falsely assume they are taking care of themselves adequately. Secondly, physicians and nutritionists generally suggest self-management procedures without knowing about the person’s extant routine. These treatments do not necessarily fit into the individual’s reality. It is difficult to recommend specific foods to eat and exercises to do without knowing, for example, if the person likes to cook or how much free time he or she has to exercise. General solutions cannot help each specific individual.

However, it may be possible to help diabetics develop better understandings of how daily activities impact fluctuations in their blood sugar, and to use this understanding to design personalized interventions. Most diabetics routinely carry portable monitoring devices (glucose meters) to test their physiological condition (e.g., their blood sugars) multiple times a day. These meters sample the amount of sugar contained in a small drop of blood (usually drawn from fingertips or the forearm) and display the result as a number¹. Glucose meters dramatically increased the lifespan of diabetics by giving them immediate feedback about how they are managing their blood sugar [13]. Knowing one’s blood sugar and how it fluctuates can assist the development of routines to normalize one’s health.

The work we describe in this paper is concerned with augmenting this meter to help diabetics and their physicians understand why blood sugar levels are low, elevated, or normal. In a sense, we want to create the behavioral equivalent of the glucose meter, a way to capture and examine diet, exercise, and other routines to understand the connection between behavior and blood sugar measurements.

¹ Blood sugar is measured in milligrams per deciliter (mg/dl) in the United States, and in millimoles per liter (mmol/L) in other countries.
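The two units in footnote 1 differ by a constant factor: for glucose, 1 mmol/L corresponds to roughly 18 mg/dl (glucose's molar mass is about 180 g/mol). A small conversion helper, with low/high categorization cut-offs that are common clinical conventions rather than values taken from this paper:

```python
# Glucose unit conversion: 1 mmol/L of glucose ~= 18 mg/dl. The 70 and
# 180 mg/dl cut-offs below are common clinical conventions, chosen here
# only for illustration; they are NOT from this paper.

MG_DL_PER_MMOL_L = 18.0

def mgdl_to_mmoll(mgdl):
    """Convert a US-style reading (mg/dl) to mmol/L."""
    return mgdl / MG_DL_PER_MMOL_L

def categorize(mgdl, low=70, high=180):
    """Bucket a reading the way a color-coded display might."""
    if mgdl < low:
        return "low"
    if mgdl > high:
        return "high"
    return "acceptable"
```

For example, a reading of 276 mg/dl converts to about 15.3 mmol/L and falls in the "high" bucket.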
In this project, we outline and test the concept of a behavior meter. This meter includes a novel data format to supplement the blood glucose readings: photographs. These photographs offer literal portraits of events in a person’s life. By juxtaposing these images with blood sugar values, a diabetic and health care providers can begin to understand the intersection of behavior and blood sugar control, both for creating a treatment plan and for critiquing its efficacy. Images in this capacity function as data for reflection and review. Educational researchers have studied how learners can analyze behaviors depicted in still and/or moving images, generating and testing hypotheses about how and why these behaviors occur, to develop deeper understandings of various concepts [1-3,5,6,10,11,14,15]. In the area of healthcare, asthma patients were asked to videotape their daily routines [9]. These results showed inconsistencies between the amount of allergens patients reported exposure to and those captured on video. This work suggests that people cast their activities in a “healthy” light even while enacting unhealthy practices. We think diabetics may act similarly, explaining a healthy lifestyle to medical professionals while a closer examination of their daily activities might reveal examples of unhealthy routines.

The remainder of the paper describes a series of design studies that introduced photography into diabetes self-monitoring practices. We report on a project done in a class on diabetes self-care, where newly diagnosed diabetics often go to learn how to cope with their disease. We introduced photography into those courses to help students connect lecture materials to their daily lives as captured on film. And we describe a computer-based visualization tool designed to help diabetics see relationships between their blood sugar measurements and photographs of their daily routines.

CLASSROOM STUDY
Background
We began our inquiry by observing a course on diabetes self-management held in a local hospital, to understand class procedure and curricula requirements. The class ran for ten weeks and met once a week for an hour. In this class, diabetics learn about self-care practices such as eating the right number of portions of a particular food group and caring for feet and eyes, which often suffer complications. A different specialist teaches each session in his or her own teaching style. Generally, these are hour-long lectures to which attendees listen passively. About 10 people came to each session, with 3 or 4 missing any particular meeting.

Intervention
For the next course, we worked with the nurse practitioner in charge to introduce photography into appropriate sections of the class. We prototyped the project using disposable cameras distributed to the class attendees. We asked people to take pictures of meals, exercising and social events, anything that people thought might impact blood sugar. People took the cameras home between sessions to take pictures and returned them for processing at a subsequent class.

Results
Diabetics shared these images with other people in the class. These pictures served as focal points of health discussions. Instead of talking about subjects in the abstract, the students discussed specific situations from people’s experience. For example, people used the language of the classroom, such as ideal portion size, to discuss meals pictured like the one in Figure 1. In addition, the nutritionists and nurse practitioners were allowed “into” the homes of these patients through these pictures. They saw what their patients regularly ate, what is in the refrigerator, where they walk and generally what their lives are like. This information changed the level of specificity with which health care providers discussed problem solving with patients.

figure 1: A photograph of one student’s dinner. This image prompted discussions about portion sizes, food preparation, and balanced nutrition.

Consistent with Rich et al.’s work with asthmatics, diabetics seemed to report their behaviors in a positive light even while engaging in unhealthy behaviors. In Figure 2, the photographer reported taking the picture to show the syringes and healthy foods. Other students asked about the soda with sugar and the beer. Through the social interaction of the class, inconsistencies between the patient’s view of a situation and the health-conscious outsider’s view came to light. These discrepancies fueled discussion.
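The abstract notes that software tools synchronize the images with concurrent blood sugar data, but the paper does not spell out the matching step. A plausible minimal approach, sketched here with invented names and data, pairs each photo with the glucose reading nearest in time:

```python
# Hypothetical sketch of synchronizing photos with glucose readings by
# timestamp; the paper's actual tool is not described at this level of
# detail, and all names and values here are invented.

def nearest_reading(photo_time, readings):
    """Return the (minutes-since-midnight, mg/dl) reading closest in
    time to a photo taken at photo_time."""
    return min(readings, key=lambda r: abs(r[0] - photo_time))

def pair_photos(photos, readings):
    """Attach the nearest glucose reading to each (time, filename) photo."""
    return [(fname, nearest_reading(t, readings)) for t, fname in photos]

# Three readings and two photos from one hypothetical day.
readings = [(8 * 60, 110), (12 * 60 + 30, 145), (19 * 60, 276)]
photos = [(12 * 60 + 45, "lunch.jpg"), (19 * 60 + 10, "pizza.jpg")]
paired = pair_photos(photos, readings)
```

Placing each paired photo on the glucose timeline is then enough to build views like the ones the next section describes.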
figure 2: Two photographs from a student’s collection. While the student claimed to be photographing his refrigerator, others focused on objects that contribute to poor health (e.g., the soft drinks and beer).

In general, photography inspired a new level of discussion in the classroom. Students became more vocal with both their questions and their critiques. In interviews, patients said they became more aware of the options they have on a day-to-day basis and of the choices they made. Whether this type of awareness could lead to long-term behavioral change has not been established.

SOFTWARE DESIGN
With images we began to capture a record of behaviors over time, but these records did not include how behavior impacts blood sugar changes. To do so, we experimented with data visualizations of blood sugar, and with juxtaposing images with these visualizations to create a more complete record of health. Figure 3 shows the top-level visualization of blood sugar values for a one-month period. Figure 4 is a detailed view of the images placed on a timeline that depicts blood sugar fluctuations for one day.

figure 3: A visualization of blood glucose levels over time. Each column represents a day of monitoring; each row is an hour of time. The colored boxes indicate blood sugar measurements, with dark, light, and off-white symbolizing low, high, and acceptable glucose levels respectively. These data showed that Dan had high readings dispersed throughout the day. The use of color helps patients and physicians get a global overview of glucose levels that could not be seen with simple line graphs.

figure 4: A closer look at one person’s blood sugar levels and photographs. The upper image shows a line graph of his glucose for the day, with an icon representing the time that the lower photo was taken. His blood sugar elevated to 276 mg/dl after eating the pizza in the photograph.

We prototyped this project with the help of two diabetics and have used our experience with them to plan subsequent work. One diabetic gave us his logbooks and records and shot pictures informally to give us his reactions to the process. The other person, Dan, uploaded his blood sugar readings, shot pictures for a six-week period and met with us regularly to discuss his experience.

Results
Data visualizations as well as the pictures provided evidence that patients’ theories about health do not always capture the full story about self-care practice. Dan’s blood sugar visualization, for example, revealed very high readings more often than he thought they occurred. He explained these results as a consequence of not taking enough insulin. Increasing the insulin dosage would be one solution to this predicament, but changing the particular meals and behaviors that seemed to drive up blood sugar would also facilitate good blood sugar control and be healthier. Closer inspection of the data set revealed instances where very high readings co-occurred with images such as the one in Figure 4. Such examples from the data set
motivated discussion. Additionally, Dan did experiments using this tool. For example, he “tested” whether dancing would lower his blood sugar by taking pictures during a party. Yet, without social support such as was available in the diabetes classroom, Dan did not seem to question his previously held beliefs about his health. The data visualization with blood sugar data and images did allow for exploration and reflection on personal health. But while Dan generated explanations for events, he did not change his underlying theories about his own self-care. Meeting with a health care provider or with other diabetics may be critical in utilizing these image technologies for improving self-care.

GENERAL DISCUSSION
Imaging technology is becoming cheaper and increasingly ubiquitous. Generally, health care records are composed of physiological data that reflect the state of an individual’s health rather than the causes of a particular condition. Image technology allows for new types of health records for personal reflection and for sharing with health care providers. In these projects, we have explored how such a visual record of behavior could be made, and the utility of such a record both in a classroom setting and on an individual basis.

FUTURE WORK
We are currently testing the value of data collection in implementing behavioral changes. To do so, we have enlisted a group of college-aged diabetics at Pennsylvania State University. They have used our software to synchronize blood sugar and image data and have discussed these data with researchers. Currently, we are analyzing results from this study.

Acknowledgements
We would like to thank Shelley Leaf, R.N., for allowing us to observe and introduce our photographic experience into her diabetes education courses. We also thank Dr. Hector Sobrino for medical consulting and assistance with experimental design. This research was sponsored by an NSF CAREER award (NSF REC-9984773) granted to the second author and by the MIT Media Laboratory’s Information: Organized consortium.

References
1. Bransford, J.D., Sherwood, R.D., Hasselbring, T.S., Kinzer, C.K., & Williams, S.M. (1990). Anchored instruction: Why we need it and how technology can help. In D. Nix & R. Spiro (Eds.), Cognition, Education, and Multimedia: Exploring Ideas in High Technology (pp. 115-141). Hillsdale, NJ: Lawrence Erlbaum Associates.
2. Cappo, M. & Darling, K. (1996). Measurement in Motion. Communications of the ACM, 39(8): 91-93.
3. Collins, A. & Brown, J.S. (1988). The computer as a tool for learning through reflection. In H. Mandl & A. Lesgold (Eds.), Learning Issues for Intelligent Tutoring Systems (pp. 1-18). New York: Springer-Verlag.
4. Committee on Health and Behavior. (2001). Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences. Washington, DC: National Academy Press.
5. Goldman-Segall, R. (1997). Points of Viewing Children's Thinking: A Digital Ethnographer's Journey. Mahwah, NJ: Lawrence Erlbaum Associates.
6. Gross, M.M. (1998). Analysis of human movement using digital video. Journal of Educational Multimedia and Hypermedia, 7(4): 375-395.
7. National Institute of Diabetes & Digestive & Kidney Diseases. (1998). Diabetes Overview. Bethesda, MD: National Institutes of Health.
8. Norris, S.L., Engelgau, M.M., & Narayan, K.M.V. (2001). Effectiveness of self-management training in Type 2 diabetes. Diabetes Care, 24: 561-587.
9. Rich, M., Lamola, S., Amorty, C., & Schneider, L. (2000). Asthma in life context: Video intervention/prevention assessment (VIA). Pediatrics, 105(3): 469-477.
10. Rubin, A., Bresnahan, S., & Ducas, T. (1996). Cartwheeling through CamMotion. Communications of the ACM, 39(8): 84-85.
11. Rubin, A. & Win, D. (1994). Studying motion with KidVid: A data collection and analysis tool for digitized video. In Conference Companion to CHI '94 (pp. 13-14). New York: ACM Press.
12. Schiffrin, A. & Belmonte, M. (1982). Multiple daily self-glucose monitoring: Its essential role in long term glucose control in insulin-dependent diabetic patients treated with pump and multiple subcutaneous injections. Diabetes Care, 5(5): 479-484.
13. Smith, B.K. (2002). You prick your finger, we do the rest: Glucose meter evolution. User Experience: The Magazine of the Usability Professionals' Association, 3: 31-34.
14. Smith, B.K., Blankinship, E., Ashford III, A., Baker, M., & Hirzel, T. (1999). Inquiry with imagery: Historical archive retrieval with digital cameras. In ACM Multimedia 99 Proceedings (pp. 405-408). New York: ACM Press.
15. Smith, B.K. & Reiser, B.J. (1998). National Geographic unplugged: Classroom-centered design of interactive nature films. In Proceedings of the CHI 98 Conference on Human Factors in Computing Systems (pp. 424-431). New York: ACM Press.

Noderunner
Yury Gitman
Wireless Artist
250 E. Houston St., Apt. PHB
New York, NY 10008, USA
+1 646 263 5554
[email protected]

Carlos J. Gomez de Llarena
Media Architect
17 W 54th St., Apt. 8-D
New York, NY 10019, USA
+1 212 765 4364
[email protected]

ABSTRACT
In this paper, we describe Noderunner, a game designed for cities with a thriving wireless telecommunication infrastructure. We explain the game and how to play it. Additionally, this paper comments on how Noderunner offers the general public an entrance point into the many conflicting social, political, and economic forces that compete to deploy WiFi hotspots in public spaces.

Keywords
Wireless, City, Game Design, Augmented Reality, Art

THE GAME
Noderunner is a game that transforms a city into a playing field. Two teams race against time to access as many wireless Internet nodes as possible. To prove that they have successfully connected to an open node, each team must submit photographic proof to the Noderunner weblog. During game play, the weblog becomes a busy scoreboard tracking the competing teams in real time. After the game, the photos provide visual documentation of the path taken by each team and of the public spaces that have free wireless connectivity.

Each four-person team was given a WiFi-enabled laptop, a digital camera, taxi fare, and two hours to get from Bryant Park in midtown to Bowling Green in Lower Manhattan, both free wireless parks. Teams earned points by taking their portraits in the exact spots where they were able to connect to wireless access points. They also earned points by using scanning software to sniff all the nodes along the way, even those that were password protected or too weak to transfer pictures. The teams collected logs recording hundreds of closed or weak nodes, but scored more points when they were actually able to use a node to upload a picture.

The simple rule set forced players to develop strategies for planning the most rewarding routes within the city. For example, the East Village was a popular route destination because it offered a large number of open nodes. Participants also needed technical savvy to troubleshoot connection problems and upload pictures despite fragile connections, using a combination of running on foot and riding by taxi. An additional link was established between the teams and a central ‘headquarters’ location where the progress of the competitors was plotted on an 80-foot map of Manhattan and where photographs taken by the teams were projected on a large screen. Urban photography gave spectators a new appreciation of the city’s open nodes, and the winning team popped Champagne in celebration.

THE FIELD OF PLAY
Noderunner’s playing field is the available WiFi spillover in a densely populated area. The density of this spillover is so great that it can be used as a legitimate wireless network. For example, in New York City it is now easier to find an open and free 802.11 hotspot than it is to find a public restroom. New Yorkers with WiFi-enabled laptops are becoming accustomed to stumbling on open access points made available by their neighbors, public parks, cafes, bars, and not just their work places. These well-traveled cultural spots have never before experienced Internet connectivity, nor has the Internet ever enjoyed such seamless integration into a city’s actual architecture and social fabric [1].

Noderunner sessions highlight overlaps between information and the urban environment, encouraging the use of public spaces for creative endeavors. As wireless access becomes more prevalent in our cities, this paradigm offers new opportunities for applications that treat public space as an interface. This work draws on spatially based games like tag, scavenger hunts, and hide-and-go-seek, as well as the graffiti art, skateboarding, and urban bicycling that characterize cities like New York. Recently, new technologies have expanded the scope of these activities, spawning a diverse community of artists, entrepreneurs and activists developing location-based models for social movements, advertising, urban services and pervasive gaming [2]. Instead of making our video games look more realistic, we now have the ability to turn our reality into a video game, a city’s infrastructure into a play space. Our cities are becoming game engines and software, as citizens collectively program, code, or update the place where they live.
connections. Spending too much time on a weak node
could have been the difference between winning and losing
so teams moved quickly through the city with a
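A minimal sketch in Python of the scoring just described: every sniffed node counts, and a node actually used to upload a picture counts more. The point values, field names, and example data here are illustrative assumptions; the paper does not give an exact scoring table.

```python
from dataclasses import dataclass

@dataclass
class NodeLog:
    ssid: str             # network name recorded by the scanning software
    is_open: bool         # False if the node was password protected
    photo_uploaded: bool  # True if a portrait was uploaded through this node

# Hypothetical point values; the paper does not publish the exact scoring table.
POINTS_SNIFFED = 1  # any node logged along the route, even closed or weak ones
POINTS_PHOTO = 5    # bonus for a node actually used to upload a picture

def team_score(logs):
    """Every sniffed node counts; photo-verified open nodes count more."""
    score = 0
    for node in logs:
        score += POINTS_SNIFFED
        if node.is_open and node.photo_uploaded:
            score += POINTS_PHOTO
    return score

run = [
    NodeLog("linksys", True, True),
    NodeLog("eastvillage-cafe", True, False),
    NodeLog("wep-locked", False, False),
]
print(team_score(run))  # 3 sniffed + 5 photo bonus = 8
```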

This diverse collective action means that even in the same city, like New York, Noderunner's playing field is in constant flux as WiFi continues to proliferate. At first glance this would appear to make Noderunner easier, but as WiFi spreads, new legislation, use patterns, and technologies emerge. Will new security measures limit open access despite an increase in nodes and improvements in transmission distance? Played over time, Noderunner games help answer these questions by providing empirical data about our culture's adoption of wireless technology. Noderunner is in itself an exemplar of an emerging culture: a culture where smart and wireless environments are as much an object of play as is a grass field or an open lake.

THE ART AND CULTURE OF OPEN WIRELESS
The open wireless movement is being built by the end-users, one node at a time. Drawing on the original spirit of the Internet, WiFi enthusiasts embrace open standards, peer-to-peer dynamics, and user-centered innovation. As artists, we combined game design with the existing culture of the open wireless movement. Instead of creating an artificial game environment, we tapped into the revolution that was already happening around us. Our goal was not just to contribute to a new genre of public art, but also to actively engage the general public in a vital cultural and technological transformation. Noderunner is continually re-invented by the citizens who build the network and run the streets. The game is an entrance point to the political and social movements behind wireless. We offer Noderunner as a celebration of free and open wireless connectivity and as a symbol of the city's cultural flexibility and potency.

ACKNOWLEDGMENTS
Noderunner was created by Yury Gitman and Carlos J. Gomez de Llarena for an exhibition called We Love NY: Mapping Manhattan with Artists and Activists (www.eyebeam.org/ny). The exhibition was produced by Eyebeam, a new media arts organization, and curated by Jonah Peretti and Cat Mazza. Like other Eyebeam R&D projects, Noderunner is a form of empirical research and political engagement as well as an art project. Noderunner was developed in collaboration with New York City Wireless (www.nycwireless.net), a non-profit organization dedicated to providing free wireless Internet, and supported by a grant from the New York State Council of the Arts. Thanks to Jonah Peretti for helping with the editing of this paper.

REFERENCES
1. Wired/Unwired: The Urban Geography of Digital Network. Available at http://www.mit.edu/~amt/.
2. SmartMobs Website. Available at http://www.smartmobs.com.

UCSD ActiveCampus - Mobile Wireless Technology for
Community-Oriented Ubiquitous Computing
William G. Griswold†, Neil G. Alldrin†, Robert Boyer†, Steven W. Brown†,
Timothy J. Foley†, Charles P. Lucas†, Neil J. McCurdy†, R. Benjamin Shapiro‡
† Department of Computer Science & Engineering, University of California San Diego, La Jolla, CA 92093-0114
{wgg,nalldrin,rboyer,tfoley,cplucas,nemccurd,[email protected]
‡ Department of Learning Sciences, Northwestern University, [email protected]

ABSTRACT
A university campus is designed to foster a thriving community of learners, but modernity has introduced many stresses. Mobile computing holds the potential to strengthen a campus's traditional institutions of community by creating serendipitous learning opportunities through a process of indirect mediation. This demonstration introduces ActiveCampus Explorer, a suite of personal services for sustaining an educational community based on this idea.

INTRODUCTION
With the arrival of the baby boomers' children, UC San Diego (UCSD) is quickly growing from an intimate small town into a bustling city full of unfamiliar faces. Building proceeds apace, with dozens of departments and hundreds of labs and institutes finding homes in odd corners of undistinguished buildings, old and new. This rapid growth has brought numerous "big city" stresses. It is hard to keep up with the building on campus and who occupies what building. Unfamiliar faces are everywhere, even obscuring those that you know. With growing diversity and building construction lagging, more students work and live off campus, making them visitors to their own campus and education. One third of undergraduates transfer to UCSD after two years at another college, abbreviating their campus experience.

These changes strike at the heart of the campus's mission of learning—research and education. With this mission comes a culture that believes in the power of knowledge to transform people and the world in the most positive way. The university campus, as originally conceived, was a place to nurture those values and pass them on to the next generation, a kind of perpetually rejuvenating cocoon. In becoming a city of towns, and perhaps a city of visitors, UCSD could lose its transformative powers—or it could magnify them. The latter requires new ways for people to stay in touch with old colleagues, meet new ones, and become aware of the exciting opportunities around them.

While the campus administration pursues new policies to sustain our community, the UCSD ActiveCampus project is exploring the use of technology to meet this challenge. With assistance from Hewlett-Packard, we have given HP Jornada PDAs with 802.11b wireless to 800 undergraduates studying computing at UCSD. With UCSD's wide deployment of 802.11b access, we are able to explore research questions in sustaining educational communities via mobile computing.

Sustaining dispersed communities through virtual spaces is well known [12]. Direct support of physical communities is seen in the discourse enabled by E-Graffiti [1] and GeoNotes [4], where users can leave their electronic thoughts in physical space for those who follow (see DISCUSSION). These projects provide a compelling application and warn of the need for a large community and sufficient content to be successful.

Our approach is a variant on a familiar theme [3, 7, 8, 10, 11]: if you and every person on campus carried a mobile, wirelessly connected device, it could be used as a kind of "x-ray glasses" onto your immediate vicinity to let you see through the crowds and buildings to reveal nearby friends, potential colleagues, departments, labs, and interesting events. By making the clutter transparent and highlighting otherwise invisible things, once-unnoticed opportunities are now apparent, creating an environment for serendipitous learning.

A simple realization of this idea for a small wireless device like a PDA is shown in Figure 1. In the top image, the large area is a map of a person's immediate vicinity, as detected through some geolocation method. Overlaid are links showing the location of nearby departments and friends. Department links and the like can be clicked to bring up their web page. A nearby colleague, formerly no more available for lunch than a hundred others, is seen to be in the vicinity and can be instantly messaged or found on foot. Any place or entity can be tagged with digital graffiti, supporting contextual, asynchronous discourse.

This work is supported in part by gifts from HP and Microsoft Research, support from the California Institute for Telecommunications and Information Technology (Cal-(IT)2), NSF Grant IIS-9873156, and the ActiveWeb project, funded by NSF Research Infrastructure Grant 9802219. Intel's Networking donated network processors and Symbol Technologies provided technical support.

ActiveCampus Explorer is a working system available for use by everyone at UCSD [5]. It is in active use by our group and several others (about 50 in number) and was deployed

in stages to our base of Jornada users for beta testing and structured events such as games. As part of a broader project using the campus as a living laboratory, researchers in the department of Communication are conducting ethnographies to understand ActiveCampus's impact on campus life.

Figure 1: The Map and Buddies services. The Map service shows a map of the user's vicinity, with buddies, sites, and activities overlaid as links at their location. The Buddies service shows colleagues and their locations, organized by their proximity. Icons to the left of a buddy's name are buttons that show the buddy on the map, send the buddy a message, and look at graffiti tagged on the buddy. Other services are reached by the navigation bar or clicking items embedded in the views.

In the following we first identify a set of sociological issues and place them in a conceptual framework that clarifies how technology can contribute. Second, we define a base set of services necessary to sustain a community through mobile computing. Third, we demonstrate these services with a particular design and implementation suitable for small form-factor wireless devices.

Theory and Requirements
Learning activities, spontaneous and otherwise, are heavily mediated by a university campus through its structural configuration and its institutions.¹ First, the campus organization itself brings people with complementary interests into close proximity, easing communication and increasing the chances of serendipitous interactions. The campus not only brings learners and teachers together, but also concentrates area specialists by organizing the campus into schools and departments of expertise (such as schools of Engineering and departments of Computer Science). A department is not just an aggregation of interest, but is a full-blown institution providing services for its aggregate of people, including working spaces, meeting spaces, seminars, opportunities for chance interaction, equipment, curricula, degree programs, funding, etc., to enable and encourage the processes of learning.

Because these institutions operate through proximity, they function less well when people are not "there" on a full-time or full-attention basis. Moreover, it can take considerable time for someone to internalize the workings—the culture—of an institution. If someone does not know the internal workings of an institution (for example, how talks are scheduled and where they normally occur), its mediating power is lost on them, and indeed possibilities are disguised (when it is possible to drop in for a talk). When such obfuscation is combined with a busy schedule, conflicting priorities, distractions, and interruptions (most UCSD undergraduates possess cell phones), it is not surprising that many opportunities are missed. Further complicating matters is that many campus institutional structures crosscut each other, creating ambiguity, but also richness. For example, UCSD is divided into residential college neighborhoods. Each department sits in a college neighborhood and is nuanced by it, yet it actually belongs to a school, not the college. Each faculty member belongs to a college, however, and of course a department.

We hypothesize that mobile computing applications, by mediating the institutional mediation of learning, can accelerate one's on-going acclimation process, thereby mitigating time and attention deficit. In such a role, ActiveCampus is not a replacement or proxy for extant institutions, but rather a facilitator. Such a role befits mobile devices, given (on the negative side) their limited form factor, interface, and computing power, as well as (on the positive side) their mobility and relative unobtrusiveness.

Building on the idea that a campus organizes institutions for mediating learning, it is natural to consider reifying (displaying) contextual information about (a) you (the learner), (b) mediating institutions, and (c) the sources of learning enabled by those institutions such as a professor, friend, book, event, or another institution like a lab. Since a campus institution is typically a physically aggregated entity, displaying an institution in a transparent form and showing its mediated sources of learning "inside" it (or even next to it) is a natural way to convey mediating relationships. Depending on the

¹ Here, we interpret institution broadly, including entities like departments, libraries, seminar series, and even people. The notions of mediated learning described herein are informed by the work of Michael Cole [2].
possible relationships between the learner and the learning source (including role reversal), participants may need the ability to talk—as well as see—through walls. Gradually, then, through experience, a participant learns to parametrically associate the institution with learning sources, imbuing the institution with its full power.

There are many research efforts on augmenting the physical world with information from virtual spaces, albeit without an explicit focus on communities, culture, and learning. At AT&T Research Cambridge, users wear goggles which overlay information to enhance their knowledge of what they are already seeing [9]. GUIDE [3], CyberGuide [7], Hippie [10], and a host of other electronic tour guides provide information for the user about the local surroundings, using a mapping metaphor to abstract the world, making physical boundaries transparent and thereby expanding the horizons of the user. These interfaces typically include links to allow the user to drill down for more information. HP's Cooltown creates a web presence for people, places, and things to support users as they go about their everyday tasks [11]. IR beacons, RFID tags, and bar codes identify entities in the environment.

ACTIVECAMPUS SCENARIO
Sarah, a UCSD computer engineering sophomore who transferred from Mesa Community College last quarter, walks out of her morning Engineering 53 lecture, introduction to electrical engineering. This isn't what I signed up for, she's thinking, wondering where was the engineering her Dad had told her about—building things that improved people's lives? Flipping open her PDA, ActiveCampus shows a map of her vicinity, and she sees a link to a talk with "human" in the title (Figure 1, left).² Clicking through, she sees there's a talk just starting in the engineering building on the human-machine interface. Curious, she decides to go. Although the talk gets technical quickly, the introduction has shown her a link between people and computer engineering.

² ActiveCampus uses the PDA's report of all sensed 802.11b access points and their relative signal strengths to infer a location [6].

Realizing she's hungry, Sarah heads to the Price Center for lunch. Her usual table of friends is probably gone by now. Really wanting to talk to someone about adjusting to UCSD, she checks ActiveCampus (Figure 1, right) and sees that her "buddy" Brad is nearby and active (both location and message icons highlighted in blue), clicks on him, and sends him a "Wanna go eat?" with a couple of clicks. Brad notices the "dome" on his PDA flashing,³ and flips it open to see that Sarah has sent him a message and is nearby. Now both looking for each other, they see each other through the lines of people and sit down to talk about their day.

³ The flashing dome feature has been prototyped but is not yet deployed. ActiveCampus also uses the second line of each page to convey events like a new message arrival.

After lunch, Sarah decides to go to the library to get a head start on her Engineering 53 homework. Later, leaving the library, she notices that the tree outside the library is not dead as she'd thought—it's made out of metal and talking quietly. That's so weird. Flipping open her PDA, she clicks over to the digital graffiti page of ActiveCampus, since a friend told her there was lots of arts stuff in there (by default graffiti is not shown on the map since it can clutter). There is a list of graffiti that's been "tagged" in the area, including a "living dead tree" link near the top. Clicking on different parts of the tree leads to different parts of an interactive artwork. Clicking on the tree's roots leads to a story about the tree, pointing her to other talking trees on campus, and gives the lowdown on UCSD's Stuart art collection. Now she begins to understand all the weird stuff she'd been seeing on campus! Clicking on the spray can to the left of the graffiti's subject line, she is taken to a page where she "tags" the interactive tree with a "Thanks tree!" note to be seen by others who view the living dead tree via ActiveCampus. Walking off, she thinks, huh, I wonder if there is a role for art in engineering? She'd have to ask Mark about that.

DISCUSSION
Sarah's day reveals several crucial properties of ActiveCampus. The most notable is that it helps campus denizens see through the unintended barriers created by institutions. Sarah can see that there is a talk starting nearby, even though it was only officially disseminated to the campus via posters in engineering building hallways. Even if she had seen these posters earlier, it would not have been in the context of her frustrating day and would probably be long forgotten. Seeing a talk with "human" in the title, and in an engineering building, was her cue that this talk might be especially relevant to her. This is a function of mediation—the particulars in the scope of the general establish a context for interpretation.

ActiveCampus has similar benefits at the Price Center food court and library. In the Price Center, the mere concentration of people is the barrier created by the institution, but the context is eating, which implies relatively unstructured time—a friend of hers at the Price Center is probably free to chat. ActiveCampus merely provided the "final mile" solution: timely and contextualized information about her surroundings.

Colleague Interactions. Sarah's use of the buddy and instant messaging features is indicative of ActiveCampus's facilitator role. After helping her notice that her friend Brad was nearby, she used messaging and his displayed location to purposefully find him. One-click messaging short-cuts are available for typical meeting-directed communications, for example, "Are you free?" If many friends were nearby, she could have messaged all nearby or active buddies in one action to speed the process. In this way, Sarah is using ActiveCampus to maintain and even develop her social network in a chaotic context. Sarah could meet new people by modifying her privacy settings from the default "visible to buddies only" to "visible to buddies and others". Her location can be suppressed independently from her on-line status.

Revealing one's location on ActiveCampus could lead to unwanted interactions. Thus, before Sarah could see Mark on

her PDA (or vice versa), both she and Mark had to add each other as buddies—a mutual acceptance policy. In an ad hoc community, it might be hard to buddy-up spontaneously with such a method. At UCSD Sarah can use Mark's campus e-mail name to add him to her list. She didn't have to ask him for his ActiveCampus ID or exchange contacts with someone else. In fact, UCSD has a "finger" service that maps names to e-mail names.

Digital Graffiti. Sarah used digital graffiti to answer the question "What is this tree?" because there was no official link for the tree. Consequently, she found out not only what the tree was, but what other people thought about it. This is beneficial to Sarah because she is discovering that this is not just a campus of busy, stuffy professors lecturing to quietly listening undergraduates, but a place where people just a bit "ahead" of her are participating in the campus's academic life. Thus, as with discovering the talk, Sarah has—conceptually—seen through the walls of an art studio to see the campus in action. In actually posting her own graffiti, Sarah has taken an important step from being a passive visitor to a campus citizen involved in community discourse.

Not all of digital graffiti's potential is revealed in Sarah's day. Any ActiveCampus entity can be tagged: a static object such as a restaurant (e.g., "Get the ham sandwich, it's great!"), a physical location (e.g., someone's favorite sunset locale), a transient object (a buddy), or other graffiti. Through artistic expressions, political debates, and the like, graffiti can become a valued record of campus life. For example, a student might learn what others thought about recent concerts held at a campus venue, find links to band web sites, etc., helping people choose amongst opportunities.

Early Experience. Our own use of ActiveCampus has not been unlike that of our character Sarah. The following are a few typical examples of serendipitous interactions assisted by ActiveCampus.

• Ben drops by Bill's office, but he's not there. Ben checks his PDA and sees that Bill is at the cafeteria across the quad. Ben heads over to the cafeteria and joins Bill and Jens for lunch.
• Bill is stuck in a late meeting and sees that Pat is still in his office. A quick message confirms that Pat will still be there in a half hour for a much-needed meeting.
• Bill is late for a meeting, but has to pick up lunch first. The group waiting for him sees that he's in the "line area" at the food court, and concludes that he'll arrive shortly.
• Bob is waiting for Bill to return to his office, while continuing to work in the lab. When Bill shows up on his buddy list as being in "Griswold's at APM", he walks over.
• While at his favorite cafe, Bill sees a graffiti claiming that the croissants are the best on campus, and he makes a note to try one sometime.

Acknowledgements. We thank Jim Hollan for his philosophical and technical guidance on technology-sustained communities. We thank UCSD's Facilities office, in particular Roger Andersen, Robert Clossin, and Kirk Belles, for their time, expertise, and resources. Thanks to Ed Lazowska for reading a draft of this paper. We thank Jeremy Weir, Jolene Truong, David Harbottle, Andrew Emmett, David Hutches, Jadine Yee, Justin Lee, Daniel Wittmer, Antje Petzold, Jean Aw, Linchi Tang, Adriene Jenik, and Jason Chen for their assistance on the project. Finally, we thank Intel's Network Equipment Division for donating network processors and Symbol Technologies for their software technical support.

REFERENCES
1. J. Burrell and G. K. Gay. E-graffiti: Evaluating real-world use of a context-aware system. Interacting with Computers, 14:301–312, 2002.
2. M. Cole. Cultural Psychology: A Once and Future Discipline. Harvard University Press, Cambridge, MA, 1996.
3. N. Davies, H. Cheverst, K. Mitchell, and A. Efrat. Using and determining location in a context-sensitive tour guide. IEEE Computer, 34(8):35–41, 2001.
4. F. Espinoza, P. Persson, A. Sandin, H. Nystrom, E. Cacciatore, and M. Bylund. GeoNotes: Social and navigational aspects of location-based information systems. In Ubicomp 2001, pages 2–17, Berlin, 2001. Springer.
5. W. G. Griswold, R. Boyer, S. W. Brown, and T. M. Truong. A component architecture for an extensible, highly integrated context-aware computing infrastructure. In 2003 International Conference on Software Engineering (ICSE 2003), 2003.
6. W. G. Griswold, R. Boyer, S. W. Brown, T. M. Truong, E. Bhasker, G. R. Jay, and R. B. Shapiro. ActiveCampus - sustaining educational communities through mobile technology. Technical Report CS2002-0714, UC San Diego, Department of CSE, July 2002.
7. S. Long, R. Kooper, G. D. Abowd, and C. G. Atkeson. Rapid prototyping of mobile context-aware applications: The Cyberguide case study. In Proceedings of the 2nd ACM International Conference on Mobile Computing and Networking (MobiCom'96), November 1996.
8. J. F. McCarthy and E. S. Meidel. ACTIVEMAP: A visualization tool for location awareness to support informal interactions. In Intl. Symposium on Handheld and Ubiquitous Computing (HUC'99), pages 158–170, 1999.
9. J. Newman, D. Ingram, and A. Hopper. Augmented reality in a wide area sentient environment. In Proceedings of the 2nd IEEE and ACM International Symposium on Augmented Reality (ISAR 2001), New York, 2001.
10. R. Oppermann and M. Specht. Context-sensitive nomadic exhibition guide. In Ubicomp 2000, pages 127–142, Berlin, 2000. Springer.
11. S. Pradhan, C. Brignone, J. H. Cui, A. McReynolds, and M. T. Smith. Websigns: Hyperlinking physical locations to the web. IEEE Computer, 34(8):42–48, 2001.
12. H. Rheingold. The Virtual Community. MIT Press, Cambridge, revised edition, 2000.
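The digital graffiti model described above (any ActiveCampus entity can be tagged, including other graffiti) can be sketched as a small recursive data structure. Class and field names here are invented for illustration; the system's real component architecture is described in [5].

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Anything that can carry graffiti: a place, a buddy, or other graffiti."""
    name: str
    graffiti: list["Graffiti"] = field(default_factory=list)

@dataclass
class Graffiti(Entity):
    author: str = "anonymous"
    text: str = ""

def tag(target: Entity, author: str, text: str) -> Graffiti:
    """Attach a graffiti note to any entity, including another graffiti."""
    note = Graffiti(name=f"graffiti on {target.name}", author=author, text=text)
    target.graffiti.append(note)
    return note

tree = Entity("living dead tree")
note = tag(tree, "sarah", "Thanks tree!")
reply = tag(note, "mark", "It talks to me too.")  # graffiti tagged on graffiti
print(len(tree.graffiti), len(note.graffiti))  # prints "1 1"
```

Because Graffiti is itself an Entity, threads of commentary (the "valued record of campus life" above) fall out of the model for free.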

The Location Stack: Multi-sensor Fusion in Action
Jeffrey Hightower and Gaetano Borriello
Department of Computer Science & Engineering, University of Washington, Box 352350, Seattle, WA 98195, +1 206 543 1695, {jeffro,gaetano}@cs.washington.edu
Intel Research Seattle, 1100 NE 45th Street, Suite 600, Seattle, WA 98105, +1 206 633 6555, {jeffrey.r.hightower,gaetano.borriello}@intel.com

ABSTRACT
The Location Stack is a set of design abstractions and sensor fusion techniques for location systems. It employs novel probabilistic techniques such as particle filters to fuse readings from multiple sensor technologies while providing a uniform programming interface to applications. Our implementation is publicly available and supports many location sensor technologies. Specifically, our live demonstration tracks multiple people using statistical sensor fusion of RFID proximity tags and ultrasonic distance measurement badges. Participants are invited to don tracking badges and watch a projected visualization of the real-time probabilistic estimates of all participants' locations.

Keywords
Location sensing, sensor fusion, particle filters

INTRODUCTION
Location is essential information for many ubiquitous computing systems: We want our home to learn and respond to its inhabitants' movements. We want to capture and optimize workflow in a factory. We need directions from one place to another. We want to interact naturally with input-output devices casually encountered in the environment. Yet, to meet these goals, existing location-aware ubicomp systems can be improved in two areas:
1. Solid design abstractions can provide a common vocabulary for comparative evaluation of location systems.
2. Fusing readings from multiple different sensor technologies can exploit the advantages of each technology while presenting a single application programming interface that probabilistically represents location information.

Our contribution is in both of these areas. Based on lessons from a previous survey of location systems [1], we created the Location Stack, a common vocabulary and general framework for multi-sensor location-aware ubiquitous computing. In this demonstration, we highlight our Fusion layer's use of Bayesian filter techniques, more specifically, particle filters and multi-hypothesis tracking, to estimate people's locations in real multi-sensor environments. Our implementation supports sensor fusion of many location sensor technologies including infrared proximity badges, passive RFID tags, ultrasonic ranging badges, active radio proximity tags, global positioning system receivers, infrared laser range-finders, 802.11b wireless clients, and, more importantly, any combination of these. Our architecture consists of scalable distributed services communicating with asynchronous XML messages and remote procedure calls, similar to many modern ubiquitous computing systems.

LOCATION STACK ABSTRACTIONS
The Location Stack codifies a set of layered abstractions based on properties identified in a previous survey of location systems [1] and the design experiences of several projects [2]. Figure 1 shows the Location Stack.

Figure 1: The Location Stack abstractions are a general framework and common vocabulary for location-aware ubiquitous computing systems.

We briefly discuss the layers and the interfaces they provide, with particular emphasis on the fusion layer – the thrust of this demonstration.

Sensors
The Sensors layer consists of the sensing hardware for detecting a variety of physical phenomena. Our implementation has drivers for many common location technologies including infrared proximity badges, passive

RFID tags, ultrasonic ranging badges, active radio proximity tags, global positioning system receivers, infrared laser range-finders, and 802.11b wireless clients. Information is pushed up the stack as sensors generate new information about the changing state of the physical world. This demonstration uses passive RFID tags and ultrasonic ranging badges.

Measurements
Each sensor driver discretizes and classifies the data produced into measurements of type Distance, Angle, Proximity, or Position, as well as several aggregate types such as Scan (a distance-angle combination). For example, infrared badges and RFID sensors both produce proximity measurements with likelihood models based on the power of the infrared emitters and the range and antenna characteristics of the radio. These measurement likelihood models describe the probability of observing a measurement given a location of the person or object. Such a model consists of two types of information: first, the sensor noise and, second, a map of the environment. The problem of constructing maps of indoor environments receives substantial attention in the robotics research community and is not our focus in this work.

Figure 3: Measurement likelihood model for infrared proximity badges. Darker areas represent higher likelihood.

Figure 2 shows the likelihood model at all locations in our
lab for a specific 4.5 meter ultrasound distance
measurement. The likelihood function is a ring around the
location of the sensor where the width of the ring is the
uncertainty in the measured distance. Such noise may be
represented by a Gaussian distribution centered at the
measured distance. Furthermore, since ultrasound sensors
frequently produce measurements that are far from the true
distance due to reflections, all locations in the environment
have some likelihood, as indicated by the gray areas in the
map. White areas are blocked by obstacles. Figure 3
illustrates the sensor model for the infrared badge systems.
Infrared sensors provide only proximity information, so
likelihood is a circular region around the receiver. RFID
tags are also a proximity technology and behave similarly.
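The ultrasound model just described (a Gaussian ring around the sensor plus a small uniform floor for reflections) can be written down directly. The sketch below is illustrative only: the noise width, outlier weight, and free-space area are assumed values rather than parameters of our implementation, and it omits the occupancy map that zeroes out blocked (white) areas.

```python
import math

def ultrasound_likelihood(location, sensor, measured_dist,
                          sigma=0.3, outlier_weight=0.1, free_area=100.0):
    """p(measurement | location) for an ultrasound distance reading:
    a Gaussian ring centered on the measured distance, mixed with a
    uniform term so that reflections leave every free-space location
    with some nonzero likelihood."""
    true_dist = math.dist(location, sensor)
    ring = math.exp(-0.5 * ((true_dist - measured_dist) / sigma) ** 2) \
           / (sigma * math.sqrt(2 * math.pi))
    uniform = 1.0 / free_area   # the gray "reflection" floor on the map
    return (1 - outlier_weight) * ring + outlier_weight * uniform

# Locations on the 4.5 m ring are far more likely than locations off it:
on_ring = ultrasound_likelihood((4.5, 0.0), (0.0, 0.0), 4.5)
off_ring = ultrasound_likelihood((1.0, 0.0), (0.0, 0.0), 4.5)
```

The infrared and RFID proximity models are analogous, with the ring replaced by a filled disc around the receiver.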
Fusion
The Fusion layer continually merges measurements into a
probabilistic representation of objects' locations and
presents a uniform programming interface to this
representation. In this demonstration we illustrate
estimating the location of multiple people where each person wears an RFID tag and an ultrasonic ranging badge.

Figure 2: Measurement likelihood model for ultrasound tags. Darker areas represent higher likelihood.
Due to these sensors' low accuracy (relative to robotics and
motion capture sensors like precision scanning laser range
finders), the belief over each person's location is typically
very uncertain and often multi-modal; hence we apply a Bayesian filtering technique called the particle filter, which is commonly used in robot localization and is optimized for this type of scenario. Particle filters can naturally integrate
information from different sensors. Refer to [3] for a
general survey of Bayesian filtering techniques for location
estimation or [4] for an in depth treatment of particle filters
and Monte Carlo statistical techniques.
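The particle filter update that fuses such measurements can be sketched in a few lines. This is a simplification: the real fusion service also handles time stamps, per-object motion models, and resampling schedules, and all parameters and likelihood shapes below are illustrative assumptions.

```python
import math, random

def predict(particles, step=0.2):
    # Motion model: diffuse each particle by a small random step.
    return [(x + random.gauss(0, step), y + random.gauss(0, step))
            for (x, y) in particles]

def update(particles, likelihood_fns):
    # Weight each particle by the product of all sensor likelihoods,
    # then resample in proportion to the weights.
    weights = [math.prod(f(p) for f in likelihood_fns) for p in particles]
    total = sum(weights)
    if total == 0:            # all measurements ruled everything out
        return particles      # fall back to the prior
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

# One filter step: predict motion, then fuse an ultrasound distance
# reading with an RFID proximity reading (toy likelihoods):
particles = [(random.uniform(0, 10), random.uniform(0, 10))
             for _ in range(500)]
ultra = lambda p: math.exp(-0.5 * ((math.dist(p, (0, 0)) - 4.5) / 0.3) ** 2) + 1e-6
rfid = lambda p: 1.0 if math.dist(p, (4.5, 0)) < 2.0 else 1e-6
particles = update(predict(particles), [ultra, rfid])
```

After the update, the particles concentrate where the 4.5 m ultrasound ring intersects the RFID reader's coverage disc.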

There are two pieces of additional research we have contributed but are not highlighting in this demonstration. First, we have shown how particle filters can be used more efficiently by constraining the possible locations of a person to locations on a Voronoi graph of free space, which naturally represents typical human motion along the main axes of the environment. In experiments we found that such Voronoi graph tracking results in better estimates with less computation. Furthermore, the Voronoi graph structure can be used to learn high-level motion patterns of a person. For example, the graph can capture information such as "Rebecca goes into room 22 with probability 0.67 when she walks down hallway 9." More details on using Voronoi graphs with particle filters and on applying high-level learning can be found in [5,6]. Second, although also not shown in this demonstration, other work of ours at the Fusion layer has addressed the problem of estimating objects' identities in situations where explicit identity information is not provided by all the sensors. In particular, we have introduced a technique to combine highly accurate anonymous sensors like scanning infrared laser range finders with less accurate identity-certain location technologies like infrared and ultrasonic badges [7].

Figure 4: Sensor fusion of infrared and ultrasound sensors. Density of the particles reflects the probability posterior of the person's location.

Figure 4 shows snapshots from a typical sequence projected onto a map of the environment. In this example, the person is wearing an infrared badge and ultrasound tag and starts in the upper right corner as indicated by the icon. Since the start location is unknown to the system, the particles are spread uniformly throughout the free space of the environment. The second picture (top right) shows the location probability after the person has moved out of the cubicles and into the upper hallway. At this point, the samples are spread over different locations. After an ultrasound sensor detects the person, their location can be estimated more accurately, as shown in the third (bottom left) picture in Figure 4. Later, after moving down the hallway on the left, the samples are spread over a larger area, since this area is covered only by infrared sensors, which provide only very coarse location information (bottom right).

Arrangements
We provide two operators to relate the locations of multiple objects: a test for multi-object proximity given a distance, and a test for containment within a map region. Because we operate directly on the location probability posteriors of each object, the results of these tests can also be probabilistic. For example, the proximity test produces a pairwise confidence matrix that a given group of objects are within 4 meters of one another. Taken together, these operators provide a probabilistic implementation of the "programming with space" metaphor as used with great success in the AT&T Sentient Computing project [8]. Future work in our implementation of the Arrangements layer is to provide an additional operator to test for more general geometric formations of multiple objects.
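Because each object's posterior is a particle set, a pairwise proximity confidence can be estimated by counting particle pairs. The sketch below treats the two posteriors as independent, which the actual implementation need not do; function and variable names are hypothetical.

```python
import math

def proximity_confidence(particles_a, particles_b, d=4.0):
    """Probability that objects A and B are within d meters,
    estimated from their location posteriors (particle sets)."""
    hits = sum(math.dist(pa, pb) <= d
               for pa in particles_a for pb in particles_b)
    return hits / (len(particles_a) * len(particles_b))

def pairwise_matrix(objects, d=4.0):
    """Pairwise confidence matrix over a group of tracked objects."""
    names = sorted(objects)
    return {(a, b): proximity_confidence(objects[a], objects[b], d)
            for a in names for b in names if a < b}

# Two tightly clustered posteriors 3 m apart, and one 20 m away:
objs = {"alice": [(0.0, 0.0), (0.2, 0.1)],
        "bob":   [(3.0, 0.0), (3.1, 0.2)],
        "carol": [(20.0, 0.0), (20.1, 0.1)]}
conf = pairwise_matrix(objs, d=4.0)
```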
A single sensor fusion service running on a modern PC (1.8 GHz Pentium 4 with 512 MB memory) can perform real-time multi-sensor probabilistic tracking of more than 40 objects at a sustainable rate of 2 measurements per second per object. Objects are tracked in 7 dimensions (x, y, z, pitch, roll, yaw, and linear velocity). Higher performance (more objects or a faster measurement rate) can be realized by reducing the state space to two dimensions or through more advanced techniques such as our technique of constraining the particle filters to Voronoi graphs of the environment, discussed above. Another way to increase performance is to distribute computation across multiple fusion services, although applying certain Arrangements layer operators then poses additional challenges.

Context and Activities
The Contextual Fusion layer combines location information with other contextual information such as personal data (schedules, email threads, contact lists, task lists), temperature, and light level, while the Activities layer categorizes contextual information into semantic states defining an application's interpretation of the world. Our implementation of the Context and Activities layers is in its infancy because few ubiquitous computing systems have been deployed which take sensor information all the way up to the level of human activity inference. To make inroads, we are collaborating with the Assisted Cognition research group, a group seeking to create novel computer systems that will enhance the quality of life of people suffering from Alzheimer disease and similar cognitive

disorders [9]. Our goal for this collaboration is to design general interfaces for the Context and Activities layers based on usage patterns of the existing Fusion and Arrangements layers in support of these higher-level learning tasks.

SUMMARY
Our demonstration highlights the primary capabilities of our Location Stack implementation: we show a highly flexible system which can track multiple people using statistical sensor fusion of information from multiple sensor technologies, in this case RFID proximity tags and ultrasonic distance measurement badges.

The Location Stack abstractions structure location systems into a layered architecture with a robust separation of concerns, allowing us to partition the work and research problems appropriately. Our implementation is a publicly available Java package containing a complete framework for operating a multi-sensor location system in a ubiquitous computing environment. The implementation is typical of a modern ubiquitous computing system: a set of reliable distributed services communicating using asynchronous XML messages and linked using dynamic service discovery capability in the middleware. The Location Stack is deployed in our laboratory and workspace at Intel Research Seattle, operates nearly 24x7, and is used by other research projects as a reliable source of location information.

REFERENCES
1. Hightower, J. and Borriello, G. Location systems for ubiquitous computing. Computer, 34(8):57–66, August 2001.
2. Hightower, J., Brumitt, B., and Borriello, G. The location stack: A layered model for location in ubiquitous computing. Proceedings of the 4th IEEE Workshop on Mobile Computing Systems & Applications (WMCSA 2002), pages 22–28, Callicoon, NY, June 2002. IEEE Computer Society Press.
3. Fox, D., Hightower, J., Liao, L., Schulz, D., and Borriello, G. Bayesian filtering for location estimation. IEEE Pervasive Computing, 2(3):24–33, July–September 2003. IEEE Computer Society Press.
4. Doucet, A., de Freitas, N., and Gordon, N., editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, New York, 2001.
5. Liao, L., Fox, D., Hightower, J., Kautz, H., and Schulz, D. Voronoi tracking: Location estimation using sparse and noisy sensor data. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2003.
6. Rabiner, L. R. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, February 1989.
7. Schulz, D., Fox, D., and Hightower, J. People tracking with anonymous and ID-sensors using Rao-Blackwellised particle filters. Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), 2003.
8. Addlesee, M., Curwen, R., Hodges, S., Newman, J., Steggles, P., Ward, A., and Hopper, A. Implementing a sentient computing system. Computer, 34(8):50–56, August 2001.
9. Kautz, H., Etzioni, O., Fox, D., Weld, D., and Shastri, L. Foundations of assisted cognition systems. Technical Report UW-CSE-03-AC-01, University of Washington, Department of Computer Science and Engineering, Seattle, WA, March 2003.


A Novel Interaction Style for Handheld Devices
James Hudson and Alan Parkes
Computing Department, Lancaster University, Bailrigg, Lancaster LA1 4YR, UK
{j.a.hudson@, app@comp.}lancs.ac.uk

ABSTRACT
Handheld devices are not as usable as they could be. The multitude of attempted solutions to the text input problem for mobile devices addresses such usability issues. Our novel approach to the handheld interaction problem makes use of animated transparencies and mouse-gesture optimized menu interaction. Our research explores techniques to create interface tools that permit the liberal and intensive population of a display with adequately sized controls, without compromise to the visibility of the underlying user interface. Additionally, we strive to realize highly functional control elements that support manual interaction, even for handheld displays.

Keywords
Handheld, PDA, mouse stroke, transparency.

INTRODUCTION
This paper addresses problems associated with handheld device interaction. It proposes the application of control elements composed of integrated superimposed animated graphical layering [1,3,8,15] or, more specifically, image multiplexing or visual overloading [7], combined with mouse gesture interaction [10,13] similar to marking menus.

The paper then goes on to identify the individual features and difficulties of the PDA interaction problem [9,12]. To demonstrate the benefit of our approach we have implemented a mobile phone interface that involves the application of visual overloading, standard gesture interaction, and gesture-optimized list interaction [10]. This application is used throughout the paper to provide supporting examples of our approach.

Constraints of Handheld Interaction
All proposed solutions to the handheld interaction problem fail to acknowledge the constraints of portability and compactness, ease and convenience of interaction, and the deft conservation of screen real estate. This is perhaps further confounded by the lack of a suitable solution, giving rise to multiple versions of essentially the same approaches, varying only in the design compromises made, such as proposing the use of larger control elements at the expense of screen real estate. Consider the following influences on interface design for small handheld devices.

An important factor in the design of user interfaces for small devices is ease of use. In order to free up as much screen display as possible, input dialogues are reduced in size. To minimize the display area used, designers resort to using menus. Seldom-used commands inevitably feature in hierarchical submenus, leading to an awkward, slow, and cumbersome interaction style [9]. Unnecessary interaction aids can be an obstacle to the user. Pointers (e.g., styli), clip-on keyboards, and data gloves impede device usability. To interact with the device the user must either don the interaction accessory or, say, pick up a stylus, which in the case of many portable devices means that both hands are needed [5]. A number of approaches also incur a learning and skill acquisition overhead. Many small-device interface mechanisms, such as optimized soft keyboards for text input, are not easy to learn to use [12]. The use of 2D alphanumeric gestures is another such example, with a significant learning overhead [13].

Consideration of the contributing factors in the design of interaction models for handheld and mobile devices leads to the following design requirements:
• We should not rely on additional interaction aids, e.g. styli, as these are detrimental to the portability and ergonomic effectiveness of the device.
• A suitable balance between redundancy in input device features and availability of display area should be sought.
• The device should reflect an effective trade-off between display area, size of elements in the input panel, and usability.
• Interface skills should be easily learned [12].

IMAGE MULTIPLEXING WITH GESTURE INTERACTION
In view of the above requirements, we now introduce our interaction model for small devices. Transparency is commonly used to optimize screen area, which can often be consumed by menus or status dialogues [2,6,9]. The conventional approach of using a layer of transparency to display a menu comes at the cost of obscuring whatever is in the background. This is not the image multiplexing effect we are after, but rather a compromise between two images competing for limited display area. Overloading, or image multiplexing, is the application of techniques such as dynamic signatures and animation [1,3,7,8,15] to permit the layering of multiple transparent images whilst reducing the effects of visual rivalry between these competing layers, a sort of "intensive farming" of screen real estate rather than a compromise between background and foreground images.

The incorporation of simple gestures or mouse strokes [10,13] is an elegant solution offering the additional context required beyond that of the restricted point & click approach. This permits a larger population of control elements with a greater redundancy of related commands without compromise to their size, thus facilitating manual

interaction. By using a mouse, pen, or touchpad, the user simply draws a 2D symbol to execute an action; we will refer to this as stroke or gesture interaction. However, gestural input is partly a consequence of implementing visual overloading, since it is necessary to resolve issues of layer interaction. To avoid the overhead of manipulating layers, such as moving them about to address, for example, elements or widgets which are beneath a layer, gestural interaction is used to provide the necessary context.

IMPLEMENTATION
Our implementation takes the form of a mock-up of a mobile phone interface with a standard menu- or list-driven interface on a 12x5 cm display. This approach was taken to assist in rapid prototyping and to avoid any difficulties with device-specific limitations. We chose to use simple animated black-and-white transparent GIFs. This we did to show that processor-intensive alpha blending was not essential and that adequate results could be achieved with simple, well-chosen animations.

Overview
Commands can be executed with the standard "point & click" approach, or the user can circumvent intrusive hierarchical menu interaction by drawing a symbol that starts over the relevant list item or button, which takes the user directly to the required dialogue or executes the desired command. Note that a stroke is not restricted in size.

Figure 1. The initial screen contains a list of frequently dialed numbers and two animated overloaded controls. The darker traces show the execution of a stroke.

In addition, two overloaded control elements, depicted in Figure 1, are superimposed over the menu items: one of an envelope, to access messaging functions, and the other of the word 'register', to access the call register, which demonstrates the overloading of text.

We now discuss the interface components and consider some interaction scenarios to help explain the use and benefits of this interface design.

Gesture Activated Buttons and List Elements
In Figure 2 we see the use of the gesture activated "Name" button to search for a given phone number. By drawing a 'T' over it (left), the interface lists all telephone number entries that begin with the letter 'T', and by drawing a 'P' (middle), the list is further narrowed to all elements that begin with the letter 'T' and contain the letter 'P'. This approach drastically cuts down on the executions needed to select an entry, whilst possessing a greater cognitive salience.

Figure 2. A gesture activated 'Name' button is used to make a search for a telephone number.

To further optimize the interface, drawing a symbol or tapping on the left of the list executes a command, such as a double-click to call a number or drawing a 'd' to access the 'list details' dialogue, whereas a symbol drawn on the right side of the list will further refine the search to any remaining items that contain the desired letter.

Redundancy of Interaction Styles
This form of interaction model is not restricted to gestural interaction alone; it can be used in the same way as a conventional mobile phone or via the gesture optimizations. This allows the user to learn the gesture optimizations as they become familiar, thus avoiding any significant learning overhead. To access a list element the user can either tap over it or gesture over it. For example, as depicted in Figure 1 (middle), the user can simply draw a 'd' starting over the list element to go straight to the desired 'list details' dialogue, in this case from the item marked 'sport centre'. Alternatively, looking at the list of frequently called numbers (Figure 1), to access the details of a telephone number the user can click on the menu button and navigate a series of submenus to reach the 'get details' option. Similarly, in the example from Figure 2, the user can dispense with the gesture interaction and use a series of hierarchical menus by simply tapping on the option button and accessing a number in the conventional fashion.

A necessary example is that of dialing a number (see Figure 3): the use of gestures would be a less than adequate means of carrying out this task, so the approach resorts to a more conventional one where necessary.
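The progressive narrowing performed by the gesture activated "Name" button (the first recognized letter filters by initial, each further letter by containment) amounts to a simple filter chain. The sketch below is illustrative, with a hypothetical phonebook:

```python
def refine(entries, strokes):
    """First recognized letter filters by initial character; each
    further letter keeps only entries that also contain it."""
    if not strokes:
        return entries
    first = strokes[0].lower()
    result = [e for e in entries if e.lower().startswith(first)]
    for letter in strokes[1:]:
        result = [e for e in result if letter.lower() in e.lower()]
    return result

phonebook = ["Tapio", "Thomas", "Tim", "Sport Centre", "Taxi"]
by_t = refine(phonebook, ["T"])         # entries beginning with 'T'
by_tp = refine(phonebook, ["T", "P"])   # begin with 'T' and contain 'P'
```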

Figure 3. How a number can be dialed. The appropriate gesture is executed over the 'Menu' button to access a dialogue to dial a number.

To dial a number, the user either clicks on the 'Menu' button, giving access to a hierarchical menu containing an option to 'Dial a number' (as in conventional interfaces), or executes the appropriate gesture (an upward line) over the menu button to go directly to a conventional dialogue more suited to the desired task. This approach demonstrates the practical integration of the two models of interaction.

Overloaded Icons
The initial screen has two overloaded icons (Figure 1). As expected, executing the appropriate gesture over a list item will execute a command. However, if the gesture starts within a region of an overloaded control and the gesture relates to that overloaded control element, the appropriate command is executed, thus disambiguating between competing overloaded controls and menu items.

For example, drawing an 'M' stroke over the 'register' overloaded icon, demonstrated in Figure 1 (left), accesses the 'Missed calls' dialogue, whereas executing an 'r' accesses the 'Received calls' dialogue.

Text Input
Figure 4. A text input dialogue that embodies the same approach for the overloaded text input panel as used for the 'Register' overloaded icon (Figure 1).

Referring back to Figure 1 (top left), drawing a 'C' for "compose" over the animated envelope would open a text input dialogue (Figure 4), whereas an 'I' or 'O' would invoke the 'Inbox' and 'Outbox', respectively. The text input or "Compose" dialogue makes use of an overloaded text input panel.

A letter is selected by starting a simple gradient gesture over a group of letters, as shown in Figure 4 (middle, left). The direction of the line determines the letter selected. In this example the 'L' has been selected, whereas an upward stroke would select 'K' and a left-up stroke would select the letter 'J'.

This approach to text input enables the user to enter text easily, without a complex combination of keystrokes, via an adequately sized soft keyboard.

With respect to the design requirements discussed earlier, the benefits of our proposed design of a mobile phone interface can be summarised as follows:
• Practical one-handed manual touch screen interaction
• Maintain adequately sized control elements
• The optimization of limited screen real estate
• Avoid the use of memory-intensive hierarchical menus
• Reduction in the cognitive overhead of a visual search schema, e.g., scanning for a list element
• A greater cognitive purchase afforded by the gesture interaction
• Greater redundancy in the functionality of controls
• More efficient number look-up, e.g., the selection of a phone number within 1-3 gesture executions rather than 3-8+ button presses
• The incorporation of standard point & click with the overloaded gesture interaction exploits a redundancy of interaction styles, thus optimizing learnability.

PRELIMINARY EVALUATION
Ten subjects used our interface design to carry out a range of tasks such as those discussed above. The tasks were first carried out in the conventional way (through hierarchical menus), and then by the stroke-optimized route. After spending a short time learning to use the interface, the users readily completed the tasks unaided, and expressed a preference for the gesture-optimized shortcuts and overloaded icons over conventional interaction styles.

The subjects reported that they did not favor devices that relied on additional interaction aids, such as a stylus, and preferred our model, which supports manual operation. Subjects also commented that our interface is less awkward to use than systems without gesture interaction.

Moreover, we discovered that, with appropriate training, a user can input a text message without using a stylus at rates comparable to those of standard single-finger soft keyboards (i.e., around 40 wpm). This is achieved without the cumbersome interaction associated with common mobile devices. This represents a significant improvement over conventional text input for handheld devices with small display screens.
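The gradient gesture described earlier maps the direction of a stroke to a letter within the group it starts over. A sketch of that mapping for the 'J'/'K'/'L' example follows; the angle bins are illustrative guesses rather than the exact design:

```python
import math

def select_letter(group, dx, dy):
    """Map a stroke direction (screen coordinates, y grows downward)
    to one of the group's three letters: up-left selects the first,
    straight up the second, up-right the third."""
    angle = math.degrees(math.atan2(-dy, dx))  # 90 degrees = straight up
    if angle >= 120:       # up-left stroke
        return group[0]
    elif angle >= 60:      # upward stroke
        return group[1]
    else:                  # up-right stroke
        return group[2]

# Strokes starting over the "JKL" group:
up_right = select_letter("JKL", dx=1.0, dy=-1.0)
up = select_letter("JKL", dx=0.0, dy=-1.0)
up_left = select_letter("JKL", dx=-1.0, dy=-1.0)
```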

It has to be said that there is a slight overhead in learning the appropriate gestures, although a user can always resort to the conventional form of interaction if difficulties are encountered. Users commented that they found the gesture approach to be both novel and useful, and many reported that they felt motivated to learn the necessary gestures. We intend to reduce the learning overhead by using the more familiar 'Graffiti' handwriting recognition alphabet found in many handheld devices.

CONCLUSION
This paper has proposed a solution to the problems and shortcomings of existing text input schemes, particularly for small devices. A prototype system, making use of gestures and visual overloading, was also described. It was demonstrated that this prototype makes effective use of screen area and preserves the portability of the device, while providing a rich set of easily accessible features.

Our current work involves investigating the application of our techniques to support interaction for large-screen devices such as Databoards, for public information displays, for the desktop, and for very small, e.g., wearable, devices [12]. We are exploring the effectiveness of visual overloading itself, and seeking to improve touch screen interaction, among other things. We also intend to explore the use of our techniques in a predictive text application.

FURTHER RESEARCH
In continuation of our work we intend to explore ways of providing better affordance, since poor affordance can be a major drawback of gesture interaction. We will explore the use of visually overloaded help-prompts to provide for goal navigation and goal exploration, such as gestures being used to call up an overloaded layer of commands related to a control.

We are currently designing experiments to support our theory that gesture interaction and animated icons are suitable for creating highly usable small devices, and to examine the acceptability of animated transparencies with respect to distractibility.

Finally, we recognize that our future research will benefit from an investigation into theories of perception. Such work may help us to minimize, and govern the effects of, visual rivalry, perhaps by introducing 3D elements and dynamic shading [4,14].

REFERENCES
1. Baecker, R., Small, I., and Mander, R. Bringing icons to life. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (April 1991), ACM Press, 1-6.
2. Bartlett, J. Transparent Controls for Interactive Graphics. WRL Technical Note TN-30, Digital Equipment Corp., Palo Alto, CA (July 1992).
3. Belge, M., Lokuge, I., and Rivers, D. Back to the future: a graphical layering system inspired by transparent paper. InterCHI '93 Adjunct Proceedings (April 1993), ACM Press, 129-130.
4. Bier, E., Stone, M., Pier, K., Buxton, W., and DeRose, T. Toolglass and Magic Lenses: the see-through interface. Proceedings of SIGGRAPH '93 (August 1993), ACM Press, 73-80.
5. Goldstein, M. and Chincholle, D. The Finger-Joint Gesture Wearable Keypad. Workshop on Mobile Devices, INTERACT '99 (Edinburgh, UK, August 1999).
6. Harrison, B., Ishii, H., Vicente, K., and Buxton, W. Transparent layered user interfaces. Proceedings of CHI '95 (May 1995), ACM Press, 317-324.
7. Hudson, J. and Parkes, A. Visual overloading. Adjunct Proceedings, HCI International 2002 (June 2003), 67-68.
8. Lokuge, I. and Ishizaki, S. GeoSpace: an interactive visualization system for exploring complex information spaces. Proceedings of CHI '95 (May 1995), ACM Press, 409-414.
9. Kamba, T., Elson, S., Harpold, T., Stamper, T., and Sukaviriya, P. Using small screen space more efficiently. Proceedings of CHI '96 (April 1996), ACM Press, 383-390.
10. Kurtenbach, G. and Buxton, W. User learning and performance with marking menus. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (April 1994), ACM Press, 258-264.
11. Masui, T. An efficient text input method for handheld and ubiquitous computers. Lecture Notes in Computer Science 1707, Handheld and Ubiquitous Computing, Springer-Verlag, 1999, 289-300.
12. MacKenzie, I. S., Zhang, S., and Soukoreff, R. W. Text entry using soft keyboards. Behaviour & Information Technology 18 (1999), 235-244.
13. Meyer, A. Pen computing: a technology overview and a vision. ACM SIGCHI Bulletin 27, 3 (July 1995), 46-90.
14. McGuffin, M. and Balakrishnan, R. Acquisition of expanding targets. Proceedings of CHI 2002 (April 2002), ACM Press, 57-64.
15. Silvers, R. Livemap: a system for viewing multiple transparent and time-varying planes in three-dimensional space. Conference Companion, CHI '95 (May 1995).

WiFisense™: The Wearable Wireless Network Detector
Milena Iossifova and Ahmi Wolf
Interactive Telecommunications Program
New York University
[email protected], [email protected]

ABSTRACT
WiFisense is a wearable scanner for 802.11 wireless networks (WiFi) embedded in a handbag. Using emerging technology in an everyday functional object, we create a device that helps people discover and qualify the wireless networks through which they pass.

Keywords
Wireless networking, WiFi/802.11, wearable computing, ubiquitous computing, network detector

INTRODUCTION
Wireless networks have recently arrived in many cities. Their bottom-up spread, while somewhat unexpected, is steadily expanding the Internet's communication territory. Multiple wireless beams radiating out of the wired backbone in homes and business offices are extending our connectivity to public spaces of the city such as streets, parks, cafeterias and plazas.

On a daily walk through a city we unknowingly pass through many wireless networks that exist in both public and private spaces. Intentionally or not, some of these networks are open and unencrypted. They are a useful resource for anyone with a mobile lifestyle and wireless connectivity.

CONTEXT
An early inspiration was to explore the border between the tangible and the intangible in our perception of space. The idea that we are passing through electromagnetic, radio, and WiFi waves, which are out there but invisible, triggers a lot of imagination about how best to sense them and then portray them.

There have been projects in the past that attempted to look at network activity in a physical manner. Natalie Jeremijenko's Dangling String, a piece of plastic string attached to a motor that vibrates based on the amount of Ethernet activity, was an effort to visualize intangible data in a calm, non-invasive manner [1].

Recent work on ambient displays [2] has demonstrated that they are successful as interfaces for more indirect, less intrusive communication. WiFisense extends that concept and uses the increasing pervasiveness of wireless networks as an ambient display in a mobile setting.

Anthony Dunne and Fiona Raby, in their project Fields and Thresholds, explore creating a sense of another space versus explicit representation [3]. One of their foci is the merging of the physical body of objects with an appropriate technical behavior: the design of objects to present their technological affordances in a proper physical form [4].

WiFisense addresses both sensing the intangible and integrating technology into everyday objects. It aims at a practical solution to the problem of detecting the increasing coverage of wireless networks in our lives. There is interest in the awareness of a network's presence not for the mere sake of awareness but for the desire to use it.

The practical need for discovering WiFi radio waves has existed since the very emergence of this standard in the late 1990s. From 2000 to 2002 the practice of warchalking [5] became popular as an underground movement for the detection of wireless networks at street level. WiFi hackers pervaded the streets, equipped with network scanning software on their laptops and chalk in their hands, marking signs denoting the availability of WiFi access points, their SSIDs, signal strength, and whether they are public or private. Later, with the increasing popularity of WiFi and a proliferation of access points over larger areas, the movement evolved into the practice of wardriving, in which a person drives a car through the city and uses software to detect the hot spots.

The impracticality of our current methods of network detection (walking with open laptops, driving with computer equipment in cars, searching for cryptic sidewalk drawings), combined with the daily need to stay connected while moving about, demands that this process be less intrusive and more integrated into our actions.

Several computer equipment manufacturers [6, 7] have realized the need for WiFi detectors and are producing equipment toward that end. However, these new devices demand that you carry yet another object in your already crowded pockets and handbags. WiFisense differs in that it seeks to integrate the needed technology into objects that are already a part of our everyday lives.

EXPOSITION
The first iteration of WiFisense is a handbag: a functional object that is easily carried through any space you wish to explore for connectivity. WiFisense currently gives the most advantage to people with a laptop, since they can use the information it provides to get on the net. However, the device can be embedded in other objects such as keychains, belts, and jackets, and we imagine the physical shape of WiFisense evolving with people's needs.

The actual Mylar design of the prototype was on some level a statement that technology can also be fashionable. We could make raw designs with loose cables, screws and bolts, but such obvious circuitry might intimidate rather than attract. The intent is to augment a useful object with relevant embedded technology, which also goes along with

56
a fashion image. Creating various designs might make position in space relative to it, other activity on the same
WiFisense appealing to more people. access point, and the building materials of the space.
The bag can display up to 8 networks at a time, creating a
beautiful collage of multiple networks with various
strengths overlapping in space.
RESULTS & FUTURE WORK
Seeing us wear the WiFisense bag in the streets of New
York City, people have started conversations, amazed to
find out that there is such a thing as wireless Internet
access. A potential implication of this is that WiFisense
can be successful as a means to increasing public awareness
for this new technology.
However, we have already thought about what it means for
possible subsequent functionality such as moving away
from the passive act of scanning to joining and actively
communicating on available networks.
Lastly, we are currently exploring various physical forms
for future iterations of WiFisense.
CONCLUSION
WiFi is still an emerging standard for wireless Internet
communication. By broadcasting information about the
Experimenting with light as a gentle means of networks found, in the form of dynamic light patterns, we
communication, the bag uses LEDs for the physical display intend to increase public awareness for this new
of information. When WiFisense does not see a wireless technology.
network, the LEDs look like simple beads. When a WiFisense explores the boundaries between the tangible
network is discovered the LEDs light up in patterns and perceptible world and the rest that surrounds us – the
displaying the number of networks at a certain physical intangible yet present. It turns a person’s movement
location and their corresponding signal strength. through space into a display of the unseen yet increasingly
Attempting to capture one’s attention only when the device ubiquitous world of connectivity.
finds relevant information, lights from the bag color the
REFERENCES
environment as a means of ambient feedback.
1. Weiser, M. and Brown, J. Designing Calm
PROCESS Technology, PowerGrid Journal, v1.01, 1996.
WiFisense scans for the presence of 802.11b networks. 2. Wisneski, C., Ishii, H. Dahley, A., Gorbet, M.. et al.
When it discovers a network, it uses the signal strength and Ambient Displays: Turning Architectural Space Into an
encryption status to create patterns of light announcing the Interface between People and Digital Information.
network’s availability, quality and accessibility. Lecture Notes in Computer Science, Springer Verlag,
WiFisense goes beyond the currently marketed 2.4Ghz Vol 1370, p 22, 1998.
detectors [6, 7]. It uses an embedded controller and an 3 A. nthony Dunne and Fiona Raby
802.11 wireless card to implement its functionality. In a http://www.mediamatic.nl/Doors/Doors2/DunRab/Dun
passive manner it scans for the presence of 802.11 Rab-Doors2-E4.html
management frames broadcast by all access points. The
presence of these frames and the information that lies 4 . Norman, D.A., The Design of Everyday Things,
within them informs the device of an existing wireless Basic Books, New York, USA, 2002.
node and its various features. Characteristics such as signal 5. Warchalking website
strength and sometimes the encryption status are available
within these frames. http://www.warchalking.org
The controller uses eight rows of eight LEDs to announce a 6 . Kensington Technology Group. ………
network’s features with each row representing a single http://www.kensignton.com/html/3720.html
network . The strength of the signal is mapped to eight 7. SmartID Technology, Pte. Ltd.
LEDs – the higher the signal strength, the more LEDs that
http://www.smartid.com.sg/prod01.htm
light up. The LEDs flicker due to fluctuating signal
strength – based on the proximity to the base station, the

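As a concrete illustration of the Process section above, the eight-by-eight LED mapping can be sketched in code. The function names, the percentage-based signal scale, and the scan-result format below are illustrative assumptions, not details of the actual WiFisense firmware, which runs on an embedded controller:

```python
# Hypothetical sketch of the WiFisense LED mapping: each of the 8 rows
# represents one detected network, and signal strength sets how many of
# that row's 8 LEDs are lit. Value ranges and dict keys are assumed.

def leds_for_network(signal_percent: int) -> list[bool]:
    """Map a 0-100 signal strength to the number of lit LEDs in a row."""
    lit = round(signal_percent / 100 * 8)
    return [i < lit for i in range(8)]

def render_display(scan_results: list[dict]) -> list[list[bool]]:
    """Build the 8x8 on/off pattern from up to 8 scanned networks."""
    display = [[False] * 8 for _ in range(8)]
    for row, net in enumerate(scan_results[:8]):  # the bag shows at most 8 networks
        display[row] = leds_for_network(net["signal"])
    return display

# Example: two overlapping networks of different strength.
pattern = render_display([{"ssid": "cafe", "signal": 75},
                          {"ssid": "home", "signal": 30}])
print(sum(pattern[0]))  # prints 6: LEDs lit in the first row
```

Flickering then amounts to re-running this mapping as the measured signal strength fluctuates between scans.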
Tejp: Ubiquitous Computing as Expressive Means of
Personalising Public Space
Margot Jacobs
Play Studio, Interactive Institute
Hugo Grauers gata 3, 412 96 Göteborg, Sweden
www.playresearch.com
+46-(0)734055867
[email protected]

Lalya Gaye, Lars Erik Holmquist
Future Applications Lab, Viktoria Institute
Box 620, 405 30 Göteborg, Sweden
www.viktoria.se/fal
+46-(0)31-7735562
{lalya, leh}@viktoria.se

ABSTRACT
We present the project Tejp, which aims at exploring the potential of ubiquitous computing as an expressive means of personalising public space. The project consists of a series of experiments in which users deploy open low-tech prototypes in urban settings to create layers of personal information and meaning in public space through the parasiting of physical environments. Focusing the experiments on the aspect of physical interaction, we observe how emerging information content and user behaviours are influenced by the characteristics of the prototypes. This will result in design implications that will allow for a heightened degree of poetry and personal expression in ubiquitous computing.

Keywords
expressive ubiquitous computing, personalisation of public space, parasiting of physical environment, détournement

INTRODUCTION
How can people create their own ubiquitous computing infrastructures to deploy in the everyday environment, in order to make it more personal, meaningful and expressive? The project Tejp addresses this question by experimenting with technology-enabled layering of personal, location-based information on public space, enabling people to overlay as well as uncover personal meaning in their physical environment.

The project is a testing platform geared towards providing opportunities for open customisation and creation of ubicomp environments invested with personal meaning. It focuses on exploring the actual physical interaction between the users and the information space. Attention is also directed to the resulting content given to a physical place through this interaction. Avoiding imposing formulated content or fixed interaction procedures, we aim at allowing personal aesthetics, possible strangeness and poetry to emerge.

In order to achieve this, we develop a series of low-tech prototypes to deploy in urban environments, simply to see what will naturally occur. These prototypes are not meant to become end-products; instead we use them as concept illustrators and props for experimenting with people in real-life settings, in order to observe emerging content, behaviour, narratives and meaning.

More specifically, these prototypes allow us to explore how their characteristics and physical attributes influence the way people appropriate and personalise space through interacting with them. As a result, each of the prototypes consists of a different combination of media, structure and level of abstraction, while deliberately remaining straightforward and open.

PRELIMINARY STUDY
At the start of the project, we performed a preliminary study of urban visual cultures and interviews of public artists, in order to gain insight into the aesthetics, the values and the acceptance of current alternative forms of communication and personalisation of public space, such as stickering and recent forms of graffiti. The study revealed current tendencies towards the discreet incorporation of context and ephemeral situations into the pieces, an approach that is more widely accepted by the general public than other, more classic forms of graffiti. This pointed us toward the importance of physically incorporating the information space into its local context, and led us to base parts of the design of the prototypes on the idea of parasiting physical environments.

The concept of "parasitic media" introduced by Johnson [5] is further developed and refined by Martin [6] as "adding functionality to a pre-existing system (…) [and making] use of only that which you create which in turn remains invisible" and stays "within the system margin of error." Whereas parasitic media mainly focuses on mainstream media and corporate network systems, our approach to parasiting occurs on the level of the physical environment, where the prototypes re-use existing elements of physical environments or situations in public space as an intrinsic part of their functionality, while still maintaining a discreet presence.
We were also inspired by Situationist ideas of intervention in everyday life, in particular that of détournement, defined as "deflection, diversion, misappropriation, hijacking, or otherwise turning aside from the normal course or purpose" [1], an approach used as a critique of the information content pervading public space in many of the examples we encountered during our study.
PROTOTYPES
We describe Audio Tags and Glitch as first examples of the
types of prototypes we develop and experiment with in this
project.
Audio Tags
Audio tags illustrate the notion of overlaying personal
traces on public space. An audio tag contains an audio
message that once recorded can be left at hidden places in
public spaces. When passers-by lean towards the device,
this personal message is whispered into their ears. People then have the possibility to record over the existing message with their own.

The prototypes are made from hacked low-cost gadgets and are only a few cubic centimetres in size. They consist of a small microphone, through which an audio message of up to 10 seconds can be recorded onto a small sample buffer while holding a button, and a small speaker that reveals the content of the message when an IR sensor senses the proximity of a person (Fig. 1). After recording their message, people can attach the tags to walls or other structures in public space.

Virtual annotations of space have been explored by several projects, such as GeoNotes [3], in which location-specific text annotations on public space, authored by the users themselves, are browsed with PDAs. In its approach to augmenting public space with location-based audio, the Audio Tag experiment is also related to projects like Augment-able Reality [7], in which virtual voice notes and photographs are accessible through an augmented reality wearable system, and Hear&There [8], where personal audio imprints are virtually linked to physical locations.

In the case of Audio Tags, we were interested in exploring physical rather than virtual interaction with the audio space, so that people would not need any particular device to access the information and the audio would be better integrated in the public space. The Voice Boxes [4], which record personal audio messages when opened, are in this way similar to our experiment. However, while users of the Voice Boxes trigger the messages by manipulating the devices, the Audio Tag messages are triggered by physical proximity, in an implicit way. By being fixed on physical structures in the environment as parasites, and by only making themselves discreetly heard within a certain radius, as when whispering to someone, the tags open a space of intimacy inside the public realm. This proximity triggering, combined with the small size of the tags that makes them almost disappear into the environment, helps ensure a serendipitous discovery.

Figure 1: Audio tags: adding a layer of personal audio on physical structures

Glitch
As opposed to overlaying information, Glitch is about revealing a hidden layer of personal communication in public space. Interference caused by passers-by's messages and phone calls is loudly broadcast at a public place with high traffic potential, such as bus stops or busy street corners. If the speaker array is, for example, linearly disposed along a usual pedestrian path, the glitches stalk the mobile user during the whole phase of mobile communication initiation (Fig. 2).

The Glitch prototypes are arrays of powered-on loudspeakers picking up electromagnetic interference from mobile phones. Some of them use a standard antenna and can be installed in a grid formation, while others parasite off existing metallic urban structures such as fences or trash cans in the city, re-using them as antennas in a parasitic way.

Figure 2: Glitch: revealing a layer of meaning by re-situating familiar phenomena in unusual settings.
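The audio tag interaction described above - record over the stored message while the button is held, play it back when the IR sensor detects a passer-by - can be sketched as follows. The class and the stubbed hardware callables are hypothetical illustrations; the real tags are hacked low-cost gadgets, not software:

```python
# Illustrative sketch (not the actual device logic) of an audio tag:
# record over the stored message while the button is held (up to 10 s),
# and whisper the message back whenever the IR sensor detects someone.
# Hardware interfaces are stubbed out as plain callables.

MAX_SECONDS = 10

class AudioTag:
    def __init__(self):
        self.buffer = []  # recorded audio, one chunk per second here

    def record(self, microphone, button_held):
        """Overwrite the stored message while the button is held."""
        self.buffer = []
        while button_held() and len(self.buffer) < MAX_SECONDS:
            self.buffer.append(microphone())

    def update(self, ir_proximity, speaker):
        """Play the stored message when someone leans in."""
        if ir_proximity() and self.buffer:
            for chunk in self.buffer:
                speaker(chunk)

# Toy usage: record a 3-second message, then trigger playback by proximity.
tag = AudioTag()
held = iter([True, True, True, False])
tag.record(microphone=lambda: "chunk", button_held=lambda: next(held))
played = []
tag.update(ir_proximity=lambda: True, speaker=played.append)
print(len(played))  # prints 3
```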
Earlier projects such as Live Wire [9] and Placebo [2] also make otherwise hidden communication networks visible in a way that is integrated into everyday contexts - respectively, with a wire that dangles according to the amount of activity in a computer network, and with furniture reacting to electromagnetic fields produced by mobile phones or other leaking electronic objects. A more recent example of this is WiFisense [10], a handbag covered with light diodes that light up when detecting wireless networks.

Glitch, on the other hand, follows the Situationist tactic of détournement [1], by re-situating the familiar auditory phenomenon of speakers picking up incoming calls or text messages before the mobile phone does - usually experienced at home or in offices - in the unexpected setting of outdoor urban environments. As the nature and origin of the noises are familiar to most people and easily identifiable while the speakers remain hidden, a situation of interruption is created, highlighting the virtual and pervasive layer of mobile phone communication. Moreover, Glitch differs from the previously named projects in its parasitic nature.

OUTCOME
Our hope is that through the accidental collaboration of various actors in the public realm, the project will result in physical networks of meaning, aesthetics and perhaps a critique of the everyday environment. The Tejp prototypes are tested on site through specifically crafted tactics and placement. Testing procedures and experiments range from outdoor workshops to stake-outs and video-based analysis. Once we have experimented with users in real urban settings, we will derive informed design implications based upon recurring patterns of people's (mis)use of the prototypes and emerging narratives, from the perspective of both active and accidental participants in the project. This implies observing changing content, placement, modes of initiation, and interaction behaviours. Based on these design implications, we will be able to draw conclusions for and about expressive ubiquitous computing environments.

CONCLUSION
We have presented the project Tejp, which is a step towards a more poetic, strange and personal expression in ubiquitous computing. Tejp explores how to empower people with open, pervasive means of structuring and personalising their everyday environment through overlaying and uncovering meaning on public, physical space. Besides the two examples we have described, we will be experimenting with a series of other low-tech prototypes, resulting in informed design implications for this field.

ACKNOWLEDGMENTS
We would like to thank all of the participants and interviewed individuals, specifically MABE, LADY and EMMI. We would also like to thank Tobias Skog, Ramia Mazé, Daniel Rehn, Daniel Skoglund, as well as PLAY, FAL, and the Mobile Services project members for their comments and support. This project is funded by the Swedish Foundation for Strategic Research through the Mobile Services project and the Interactive Institute.

REFERENCES
1. Debord, G.-E.: Methods of Détournement. Les Lèvres Nues #8 (1956)
2. Dunne, A., and Raby, F.: Design Noir: The Secret Life of Electronic Objects. August/Birkhäuser (London, UK and Basel, Switzerland, 2001)
3. Espinoza, F., Persson, P., Sandin, A., Nyström, H., Cacciatore, E., and Bylund, M.: GeoNotes: Social and Navigational Aspects of Location-Based Information Systems. Proc. of Ubicomp '01 (Atlanta, USA, 2001)
4. Jeremijenko, N.: Voice Boxes: 3D Sound Icons for Real Space. Ars Electronica Prix 95 (Linz, Austria, 1995)
5. Johnson, S.: Interface Culture: How New Technology Transforms the Way We Create and Communicate. HarperSanFrancisco (San Francisco, USA, 1997)
6. Martin, N. M.: Parasitic Media: Creating Invisible Slicing Parasites and Other Forms of Tactical Augmentation. http://www.carbondefense.org/cdl_writing_7.html
7. Rekimoto, J., and Ayatsuka, Y.: Augment-able Reality: Situated Communication through Physical and Digital Spaces. Proc. of ISWC '98 (Pittsburgh, USA, 1998)
8. Rozier, J., Karahalios, K., and Donath, J.: An Augmented Reality System of Linked Audio. Proc. of ICAD '00 (Atlanta, USA, 2000)
9. Weiser, M., and Brown, J.: Designing Calm Technology. PowerGrid Journal, v1.01 (1996)
10. WiFisense project, http://wifisense.com

Telemurals: Catalytic Connections for Remote Spaces

Karrie Karahalios and Judith Donath


MIT Media Lab
20 Ames St. E15-468
Cambridge, MA 02139 USA
+1 617 253 9488
{kkarahal,judith}@media.mit.edu

ABSTRACT
Mediated communication between remote social spaces is a relatively new concept. An example of this interaction is video conferencing among people within the same organization. Large-scale video conferencing walls are appearing in public or semi-public areas such as workplace lobbies and kitchens. These connections provide a link via audio and/or video to another space within the organization. When placed in these spaces, they are often designed for casual encounters among people within that community. Thus far, communicating via these systems has not met expectations. We explore a different approach to linking spaces and creating interaction through what we call social catalysts.

Keywords
Social interaction, mediated communication, mediated spaces, telepresence, remote connections

INTRODUCTION
In this work, we are creating an audio-video communication link between remote spaces for sociable and casual interaction. Some drawbacks of current systems that have been studied include lack of privacy, gaze ambiguity, spatial incongruity, and fear of appearing too social in a work environment [7]. We believe that many of these problems stem from designing interfaces that directly map to face-to-face interaction. A window of straight video appears distancing and, over time, mundane. Audio-video connections between spaces should instead be designed as an alternate form of communication that is possible over a distance.

With this work, we are diverging from the approaches of current audio-video connections and focusing on encouraging social interaction by designing a series of social catalysts. We are not creating a substitute for face-to-face interaction, but rather new modes of conversational and physical interaction within the spaces.

SOCIAL CATALYSTS
The main idea of the social catalyst is to initiate and create mutual involvement for people to engage in conversation. For example, in a public space it is not customary to initiate conversation with random strangers. However, there are events that act as catalysts and connect people who would not otherwise be communicating with each other. Such a catalyst may be an experience, a common object like a sculpture or map, or a dramatic event such as a street performer. Sociologist William Whyte terms this phenomenon triangulation: "A sign of a great place is triangulation. This is the process by which some external stimulus provides a linkage between people and prompts strangers to talk to each other as if they were not." [10]

Our hypothesis is that the creation of a social catalyst as an integral part of the social environment will aid mediated communication between spaces by providing a spark to initiate conversation and the interest to sustain it.

The social catalysts of our installation extend Whyte's triangulation principle into the display and interface of the connected space. The form of our catalyst is abstract. It alters the space and the communicative cues between the two spaces. One such catalyst is a connection where the current conversation of the users appears as graffiti in the environment. This allows the occupants to see that they are affecting the space and might encourage them to alter it. While the possibilities are infinite, the challenge is determining which agents on the interface are effective as social catalysts and why.

In our linking of two spaces with the Telemurals installation, we are augmenting the appearance of the familiar audio-video wall interface with stimuli that are initiated at either end of the connection. The wall is intended to be not only a display, but an event in itself; the system becomes both medium and catalyst.

This work further emphasizes the design of the interface as a complement to the space. We want the communication link and display to blend into the physicality and aesthetic of the space, and to make the interactions sociable and intuitive.

TELEMURALS
Telemurals is an audio-video connection that abstractly blends two remote spaces. The initial setup is straightforward. Two disjoint spaces are connected through an audio-video wall. Video and audio from each space are captured. The two images are then rendered, blended together, and projected onto the wall of their respective space. The difference between Telemurals and traditional media space connections is the image and audio transformations that evolve as people communicate through the system, and the blending of the participating spaces.

Duplex audio is transmitted between the two locations. To provide feedback and comic relief, the audio is passed to a speech recognition algorithm. The algorithm returns the text of the closest matching words in its dictionary. This text is then rendered on the shared wall of the two spaces. The

goal here is to make it clear that the users' words are affecting the space without necessarily requiring 100% accuracy of the speech recognition system.
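The speech-to-graffiti loop can be sketched as follows. The tiny dictionary, the `difflib`-based matcher, and the list standing in for the wall are stand-ins for the actual speech recognition algorithm and renderer, chosen to echo the point that a merely "closest" match still visibly affects the space:

```python
# Minimal sketch of the speech-to-graffiti loop: recognize an utterance
# as the closest dictionary word (possibly wrong) and render it onto the
# shared wall, letting the oldest words drop off.
import difflib

DICTIONARY = ["hello", "telemural", "dance", "party"]

def recognize(utterance: str) -> str:
    """Return the closest matching dictionary word (possibly wrong)."""
    matches = difflib.get_close_matches(utterance, DICTIONARY, n=1, cutoff=0.0)
    return matches[0]

def add_graffiti(wall: list[str], utterance: str, max_words: int = 20) -> None:
    """Render recognized text onto the shared wall, oldest words fading out."""
    wall.append(recognize(utterance))
    del wall[:-max_words]  # keep only the newest words on the wall

wall: list[str] = []
for heard in ["helo", "telemurl", "prty"]:  # noisy input
    add_graffiti(wall, heard)
print(wall)
```

Even with garbled input, each utterance still leaves a plausible word on the wall, which is exactly the feedback the installation relies on.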
A current implementation of Telemurals is shown in Figure 1. Silhouettes of the participants in the local space are rendered in orange. The participants at the remote end are rendered in red. When they overlap, that region becomes yellow. The aim of this cartoon-like rendering is to transmit certain cues, such as the number of participants and the activity level, without initially revealing the identity of the participants.
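The rendering rule just described can be sketched with toy one-dimensional "images". The colour names and the scalar activity value are illustrative assumptions; the real system derives activity from conversation and movement, and blends full video frames rather than boolean masks:

```python
# Toy sketch of the Telemurals rendering rule: local silhouettes render
# orange, remote ones red, and overlapping regions yellow. A single
# activity scalar (assumed here) controls how much photo-realistic
# detail is mixed in; it falls back toward 0 when conversation stops.

ORANGE, RED, YELLOW, EMPTY = "orange", "red", "yellow", "black"

def blend_pixel(local_on: bool, remote_on: bool) -> str:
    if local_on and remote_on:
        return YELLOW
    if local_on:
        return ORANGE
    if remote_on:
        return RED
    return EMPTY

def render(local_mask, remote_mask, activity: float):
    """Blend the two silhouette masks; activity in [0, 1] controls how
    much photo-realistic detail is revealed (0 = pure silhouette)."""
    colors = [blend_pixel(l, r) for l, r in zip(local_mask, remote_mask)]
    detail = max(0.0, min(1.0, activity))  # clamp; fades back when talk stops
    return colors, detail

# One local participant overlapping one remote participant.
colors, detail = render([True, True, False], [False, True, True], activity=0.4)
print(colors, detail)
```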
Participation is required for this communication space to work. To reinforce a sense of involvement, we give the system some intelligence to modify its space according to certain movements and speech patterns. That is, the more conversation and movement there is between the two spaces, the more image detail is revealed to the participants at each end. The silhouettes slightly fade to become more photo-realistic. This prompts the participants to move closer into the space to see. If conversation stops, the images fade back to their silhouette rendering. We want the participants to choose their own level of commitment in this shared space [6]. The more effort they exert, the more they see of both spaces.

Much thought has been given to the design of the renderings in Telemurals. We wanted to maintain the benefits of video in their simplest form. Adding video to a communication channel improves the capacity for showing understanding, attention, forecasting responses, and expressing attitudes [5]. A simple nod of the head can express agreement or disagreement in a conversation. Gestures can convey concepts that are not easily expressed in words; they can express non-rational emotions and non-verbal experiences.

Yet these cues are not always properly transmitted. There may be dropped frames or audio glitches. Lack of synchronicity between image and audio can influence perceptions of, and trust in, the speaker at the other end. Other challenges include equipment placement. For example, camera placement has long been a cause of ambiguous eye gaze in audio-video links. A large camera offset gives the impression that the person you are speaking to is constantly looking elsewhere.

Figure 1: Current Telemurals implementation.

With Telemurals, we are creating an environment where the rendered video maintains subtle cues of expression such as posture and hand motion, yet also enhances other cues. For example, changes in volume alter the style of the rendered video. By adding another layer of abstraction to the video stream, we can enhance cues in a manner that is not possible in straight video streams.

In this project, the abstraction of the person, the blending of participants, the graffiti conversation, and the fading from abstract to photo-realistic are the social catalysts for the experience. This new wall created by filtering provides an icebreaker, a common ground for interaction, and an object for experimentation. How will one communicate in this abstracted space? How will people's behavior affect their appearance and the appearance of the setting? How different is communication using photorealistic vs. non-photorealistic video? The goal here is to create new styles of movement and speech interaction by providing a common language across the two spaces.

Telemurals currently connects two common-area halls of MIT graduate dormitories, Ashdown and Sidney-Pacific. The Telemural in Ashdown is located to the right of the main lobby. In Sidney-Pacific, the Telemural is placed in a high-traffic crossway connecting the gym, the laundry room, and the elevators (see Figure 2). This connection came about as the under-construction Sidney-Pacific dormitory committee was looking to put public art in its public areas and create spaces to encourage students to gather. Ashdown, the oldest graduate dormitory on campus, was similarly undergoing renovations to create public spaces for social gatherings, and the two dormitories were open to the idea of creating a shared communication link. The sites within the dorms were chosen because they have traffic, are public to the community, and because a large video wall aesthetically blends into the space.

Figure 2: Telemural installation in Sidney-Pacific Dorm.

EVALUATION
This work combines the disciplines of technology, design, and communication. Evaluation of this work is therefore threefold.

Engineering
We evaluate whether the system functions. Does it work? That is, does it transmit audio and video? Is the sound quality acceptable? Are the video quality and speed acceptable? Are the interface and networks reliable?

Design
This takes the form of a studio critique. Professors from various architecture and design departments, as well as research scientists, have been invited and have volunteered to participate in a series of critiques.

Ethnography
The field for this observational study is the semi-public space within the two chosen dormitories. The participants are graduate students who live in the respective dormitory and their friends. We are primarily interested in seeing (1) how people use Telemurals, (2) whether the catalysts attract them, and (3) how we can improve the system.

DISCUSSION
As an engineering project, Telemurals works. It runs on the school network and typically uses less than 1 MB of bandwidth, with audio latency varying from 500 ms to 1 second depending on network usage. The networking, audio and image libraries are all written in C over UDP, and we use the Intel OpenCV library for image segmentation.

The video was reliable, the audio had acceptable lag, and the system ran continuously for over two months. The one technical challenge that could use improvement is the audio. Using just one microphone does not cover the intended space, and the acoustics of each space play a huge role. We are experimenting with microphone arrays and with physical objects, containing the microphone, that one interacts with.

Telemurals evolved throughout its construction and connected installation period. We experimented with several different renderings of people at each end, changed the fading algorithm, changed the hours of operation, and changed the Telemural wall site at Sidney-Pacific. These changes were made according to suggestions and critiques over a five-month period.

The Telemurals observation took place in May and June of 2003. Initially, Telemurals ran for two hours each Wednesday and Sunday night in conjunction with a coffee hour/study break. Signage was placed in the entryways of both spaces to describe what was being transmitted and the privacy concerns of the project.

We had requests from both spaces to increase the hours of the connection. Telemurals then ran every night for two hours, and later ran continuously, twenty-four hours a day.

We performed three different types of observations:
• Observation while immersed in the environment
• Observation from mounted camera video
• Observation from abstract blended video
The footage from these tapes was used to annotate patterns of use for this study and was then discarded. Initially, we were interested in observing:
• How long people speak using Telemurals
• The number of people using the system at any one time
• The number of people present but not interacting
• The number of unique users (if possible)
• The number of repeat users (if possible)
• The number of times, and the duration, that people use Telemurals in one space only
• Repeated patterns of interaction: gestures, kicks, jumps, screams
These are factors that we believe are indicative of levels of interaction. However, one must always be open to the unexpected and attempt to find other underlying patterns as well when studying the social catalysts.

Privacy
When running such a project and study, it would be irresponsible to ignore privacy concerns. The audio and video transmitted in the Telemurals interface is not saved or stored in any way. We hope to mitigate privacy concerns with proper signage.

Summary
Overall, time schedule, social events, signage, trust, site selection, and a changing environment proved to influence population mass at the Telemural sites. The motion of people, the ambient noise, and the graffiti stemming from your own words and those of your remote companions kept people at the site.

We discovered that we had a larger population of use when Telemurals was up for shorter intervals of time. We believe it became more of an event - something that should not be missed. Nevertheless, we continued getting requests to run it continuously.

Dorm events such as meetings and social hours attracted large crowds. Oftentimes it was for comic relief; other times it was because of the quantity of people. One person at the Telemural, whether at the local or remote end, tended to attract more people. A wedding party proved to be the most interactive period, with children repeatedly running back and forth across the wall. Food associated with these events also attracted people. Moving food into the field of view of the mural made it a popular spot.

There was a tremendous difference with and without instructional signage. The original signage explained that Telemurals was an audio-video connection. However, people were not convinced, either because there was no one at the other end or because it was unfamiliar. Later, detailed instructional signage was added and usage increased fivefold. With time, people also began to trust and understand the link better.

Changing the Telemural site at Sidney-Pacific from a bright room with high ceilings and glass walls to another corner that was dimmer and closer to the elevators similarly increased interaction levels. The new space provided more of a surprise, better mural visibility, and more time to interact.

Ashdown and Sidney-Pacific have an interesting history. A good number of the inhabitants of Sidney-Pacific lived in Ashdown the previous year. Some students arranged meeting times to meet at their respective Telemural.

People preferred more abstracted silhouettes to photorealistic images due to the ubiquitous nature of the connection.

The speech recognition algorithm provided positive feedback loops, with people doing their best to hit a successfully translated phrase, as well as comic relief.

Above all, having a person present at either end is a big attractor. This was surprising even during observation sessions when we sat near the Telemurals taking notes; we thought participants would be self-conscious about being watched. However, people would come just because someone was watching the wall.

Whether there was a person at the remote end or the local end, they attracted more people. William Whyte was right: "What attracts people most is other people." With Telemurals, we hope to facilitate that.

ACKNOWLEDGMENTS
We would like to thank Michael Bove and Stefan Agamanolis for their useful comments, the Sociable Media Group for our discussions, and the Things That Think Consortia for their support of this work.

REFERENCES
1. Agamanolis, S., Westner, A. and Bove, V.M. Reflection of Presence: Toward More Natural and Responsive Telecollaboration. Proc. SPIE Multimedia Networks, 3228A, 1997.
2. Bly, S. and Irwin, S. Media Spaces: Bringing People Together in a Video, Audio and Computing Environment. Comm. ACM 36, 1, 28-47.
3. Galloway, K. and Rabinowitz, S. Hole in Space. Available at http://www.ecafe.com/getty/HIS/
4. Goffman, E. Behavior in Public Places: Notes on the Social Organization of Gatherings. New York: The Free Press, 1963.
5. Isaacs, E. and Tang, J. What Video Can and Can't Do for Collaboration: A Case Study. Multimedia '93.
6. Jacobs, J. The Death and Life of Great American Cities. New York: The Modern Library, 1961.
7. Jancke, G., Venolia, G., Grudin, J., Cadiz, J., and Gupta, A. Linking Public Spaces: Technical and Social Issues. Proceedings of CHI 2001.
8. Krueger, M. Artificial Reality II. Addison-Wesley, May 1991.
9. Pedersen, E.R. and Sokoler, T. AROMA: Abstract Representation of Presence Supporting Mutual Awareness. Proceedings of CHI '97.
10. Whyte, W.H. City: Rediscovering the Center. New York: Doubleday, 1988.

Fluidtime:
Developing an Ubiquitous Time Information System

Michael Kieslinger
Interaction Design Institute Ivrea
Via Montenavale 1, 10015 Ivrea (TO), Italy
[email protected]

ABSTRACT
Increasingly, people live and work with a new set of habits regarding time, such as the increased use of the mobile phone to quickly schedule or change appointments. However, aside from the phone, few tools or services exist that support this new way of life, especially when people interact with public or private services.

With these new habits in mind, the Fluidtime prototype system was created to provide people with personalised, accurate time-based information directly from the databases of the services they are seeking.

This abstract describes the case studies that have been implemented, presents first insights from the trials, and discusses the design issues these trials raised.

Keywords
Time, mobility, ambient displays, interaction design, service design

INTRODUCTION
We have to wait when our personal time schedules do not coincide with the schedules of the people and services with whom we interact. Since both people and services are in constant flux, precise appointment times are not the most useful means of coordination. When people are provided with continuously updated time information about a service or appointment, the activity of waiting becomes more tolerable. [8]

Currently, most people are left wondering if their doctor's appointment is on time, when their bus will arrive, or when their package will be delivered. The unpredictable nature of events requires a flexible model of time that is not reflected in the static and abstract nature of traditional timing systems.

People increasingly use the mobile phone for scheduling since it allows them to make instant appointments and change them according to unforeseen personal circumstances. Instead of arranging activities in reference to the clock, people can plan in accordance with the real-time information of the people or service they are seeking. This allows them to flexibly arrange and adjust their appointments by coordinating their own schedule with the changing availability of the service or of their friends. This use of telecommunication technology reveals an increasing trend towards flexible time planning. [7] With real-time information, people can adjust their behaviour accordingly and take control over how they wish to spend their time.

A survey by Joanna Barth [3], done as part of the research project, investigates 19 working services, applications, and devices that deliver real-time information about public services and private appointments. Especially in the context of travel, people can find real-time travel information at train stations or airports. More and more city transport authorities provide people with up-to-date real-time information. This travel information is increasingly available through the Internet and recently also through SMS. For example, many airport websites provide a real-time update of the arrival and departure schedule. An example of real-time updates in the context of public transport in cities is NextBus Inc. [9], which provides information for several cities throughout the US. In Europe, in the city of Turin, Italy, travelers can use SMS to access real-time information on the arrival of buses. [1]

These few examples among many show a clear trend towards the use of communication technology for coordinating the timing of services. Providing people with up-to-date information is becoming more accessible and is appreciated by the customers. [5]

In the context of hospitals and medical examinations, timely coordination between doctors and patients is still characterized by standard procedures in which patients need to wait so that doctors have patients ready at the right time. A survey in the UK has estimated that there are about eight and a half million missed doctor's appointments a year, which adds up to 150 million British pounds of lost appointment time. [4] A department at the Homerton Hospital in east London started a trial that uses SMS messages to remind patients of their upcoming appointments as a possible solution to this problem. [6]

It is not only financial motivations that make hospitals use communication technologies to coordinate patient schedules. It is also the increased awareness of an improved doctor-patient relationship that inspires doctors and hospital authorities to look at new scheduling possibilities.

Time and money
Ever since Benjamin Franklin made his "time is money" statement, time has become an important and valuable

business factor. In some cases, time has been the only product that was sold.

Samuel Langley was one of the first ever to market time as a product, at the end of the 19th century. [8] His product was standard time, which was used to set a common time standard on which a train schedule could be based. Langley broadcast the observatory's time signal, and other cities paid him in order to receive and use this standard time.

Especially today, people are willing to pay for time since it is a highly valued commodity. A survey that was done after the implementation of the new traffic information system in Turin [5] indicates that people would be willing to give money for real-time arrival information. A report by Alpern et al. [2] shows possible revenue models for how money could be made on real-time information for public transport.

In many operations, from travel services to home services, accurate time information is a by-product of upgrading the business operations to digital systems. It is the decision of individual management whether this by-product is offered to customers for free (in order to improve the service experience), or whether it is used to generate revenue. The future will show which models will work and which ones the customers will not accept.

Fluidtime
The Fluidtime project aims to contribute to these developments by finding engaging, convenient and effective means to view and interact with real-time information. Especially through advances in wireless Internet technology, it is possible to create ubiquitous access to real-time information.

Current systems have the drawback that they are not accessible through easy-to-use interfaces or products whether the customer is at home, in the office or on the move. For instance, travelers first need to go to the train station in order to find out if the train is delayed, or, in the case of SMS-based updating systems, any timing changes are not reflected until the next SMS is sent. If, however, every fluctuation in the schedule produces an SMS message, the recipients could easily be flooded with too many messages.

With the Fluidtime project, we want to investigate the use of ambient displays that reside in the background, or wireless mobile devices that allow the user to monitor the information constantly, in order to utilize the advantages of real-time information. We hope to build a pervasive information environment that is subtle and pleasing to use.

PROTOTYPE DEVELOPMENT
The Fluidtime system
We developed a time information system and interface prototypes in order to investigate the opportunities and impact of using real-time information. The system works by tapping into already existing real-time logistical information from bus companies and laundry services and makes it available to the Fluidtime users via wired and wireless networks. Using SMS, e-mail and the mobile and stationary Internet, the service performs simple tasks, such as time monitoring and user reminding.

Prototype contexts
The two contexts that were chosen for the prototype development were the public transport system in the city of Turin [1] and the laundry service at the Interaction Design Institute Ivrea.

On average, 20,000 people use the public transport facilities in Turin every day. Turin transport authorities have already implemented a system that tracks all the buses and trams. The first service prototype makes this data visible to travellers at home, at work or on the move. They can find dynamic information on mobile screen-based devices; while at home or at the office, people can get the same information on mechanical display units.

The second service prototype is a scheduling and time monitoring system to help Interaction-Ivrea students organise their use of shared laundry facilities. The 50 students and researchers share the use of a washing machine. Having to book a time slot, remember to bring the dirty laundry, keep the appointment in mind, and check the washing machine in the basement to see when it's finished all adds up to a less than comfortable experience.

Using different interface modalities, the service performs simple tasks, such as reminding users in the morning to bring their laundry to the Institute, or letting them know when their laundry slot is ready or their washing is done. Since the system knows the users' profiles and how busy the day is, it can adjust its behaviour regarding reminders, from strict to more relaxed.

Ambient devices allow the laundry users to monitor the progress of the machine and know when it is time to collect the laundry. The system prototype also allows users to take advantage of a free laundry slot with enough advance notice if needed. It does this by both checking the schedule and getting confirmation from those who are affected.

INTERFACES
The challenge for interface design was to create a simple and effective system of interactions. The intrinsic problem with time planning systems is that they require time to be used. On the one hand, they help us free up our time or organise our activities in a better way; on the other hand, they require time to be operated, which reduces their overall effectiveness.

We developed two categories of interfaces: one that was meant to be mobile and accessed anytime and anywhere, and a second category that was stationary and designed to be used in the context of the home or office. It is worth mentioning that with the physical object interfaces we focused on exploring the quality of interaction and information representation. We do not see them as proposals for products that should be built and go to market tomorrow, but as explorations of basic functionality and quality. Using a generic mobile phone, on the other hand, allowed us to explore interfaces that are on the market now and would not require special investment from customers.
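The Fluidtime system described above, a single service that taps existing real-time feeds and pushes updates out to many kinds of interface (phone applications, ambient objects, SMS), can be read as a publish/subscribe loop. The paper gives no implementation details, so the following is only an illustrative sketch; all class, field, and message names are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Update:
    service: str        # e.g. a bus line or the laundry machine (hypothetical IDs)
    minutes_left: int   # estimated minutes until arrival / end of cycle

class FluidtimeServer:
    """Taps service back-ends and fans each update out to subscribed interfaces."""
    def __init__(self) -> None:
        self.subscribers: list[Callable[[Update], None]] = []

    def subscribe(self, interface: Callable[[Update], None]) -> None:
        self.subscribers.append(interface)

    def publish(self, update: Update) -> None:
        # Every interface receives the same update and renders it
        # in its own modality (screen, mechanical display, SMS, ...).
        for interface in self.subscribers:
            interface(update)

log: list[str] = []
server = FluidtimeServer()
server.subscribe(lambda u: log.append(f"phone: {u.service} in {u.minutes_left} min"))
server.subscribe(lambda u: log.append(f"ambient: set display for {u.minutes_left} min"))
server.publish(Update("bus-4", 7))
```

Decoupling the feed from the interfaces in this way is what lets one data source drive both the mobile and the stationary interface categories described next.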

Mobile Interfaces
The interface is based on a Java software application that runs on a standard mobile phone (we used the Nokia 6610), connects to a server to get the real-time information, and then visualises the data.

We created an optional wristband that allowed test users to wear the phone interface on the lower arm, just like a regular watch. Once the application was activated, it allowed them to check any changes of time information just by looking at the display, since the application was always on and always connected.

FI1 (Fluidtime interface 1): Perspective visualisation
The interface shows how far a certain pre-selected bus is from the chosen stop (see Figure 1). The application permanently updates the visualization with data originating from the Turin transport authorities.

FI2: Iconic representation of time
An icon on the upper part of the screen indicates the state in which the user should be in order to catch the next bus (see Figure 1). If the icon displays a tranquil character, the user can be relaxed. If the icon is a running figure, the user knows that the bus is due to arrive.

Figure 1. Fluidtime interface 1 and 2 (FI1, FI2)

FI3: Overview of three routes and stops
With this interface, the user has increased planning control (see Figure 2). It allows the user to define up to three different routes at up to three different stops. This information is necessary for travellers that need to change buses. Test users also relied on it to decide which bus stop to walk to.

FI4: Washing status indicator
This interface shows the status of the washing machine, informing the user when it is the right time to go to the facilities in order to unload the machine (see Figure 2).

Figure 2. Fluidtime interface 3 and 4 (FI3, FI4)

Physical object interfaces
FI6: Mechanical display unit with icons
This is an object for the transport context. It has the dimensions of a small hi-fi stereo (see Figure 3) and is meant to be placed in the home or office environment. Through the glass fronts of the object, the users can see small iconic representations of the buses that move from the background to the right-hand corner in the front. The position of the miniature bus tells the users how far the bus is from the bus stop. The user can configure the bus routes and stops through a web interface.

FI7: Mechanical display unit with shoes
This object looks just like a small shoe rack that people keep in their homes (see Figure 3). When the user activates the object, the movement of the shoes indicates the distance of the actual bus. If the shoes move slowly, the bus is still far away, and the user could walk slowly and still catch the bus. If the pairs of shoes start to "run", then the user would also need to run in order to catch the bus. Since the moving of the shoes creates an acoustic pattern, the user can listen to the information even when not in the same room as the object.

Figure 3. Fluidtime interface 6 and 7 (FI6, FI7)

FI8: Mechanical display with ribbons
This wall-mounted object has a discreet appearance and indicates the status of the washing machine (see Figure 4). The turning angle of the central cube indicates the progress of the washing machine. Once the washing cycle has finished, ribbons appear and clearly indicate that it is time to pick up the laundry. As soon as the washing machine door is opened, the object turns back to its initial state.

Figure 4. Fluidtime interface 8 (FI8)

TRIALS
The interfaces for mobile phones (FI1–FI3) described above were tested in Turin between May and June 2003. Four candidates in their early thirties were given a mobile phone with the applications installed, and two of them received a wristband for optional use. A fifth user received the interface FI7 to test. A small printed manual described the use of the applications. Since the applications were quite simple, the users required little learning.
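FI2's icons and FI7's shoe speeds both map the same quantity, the slack between the bus's estimated arrival and the user's walking time to the stop, onto a small set of urgency states. A minimal sketch of that mapping follows; the threshold values are assumptions, since the paper does not specify them:

```python
def traveller_state(minutes_to_bus: float, minutes_to_stop: float) -> str:
    """Map estimated bus arrival and walking time to an urgency state:
    'relaxed' (tranquil icon), 'walk', or 'run' (running icon / running shoes)."""
    slack = minutes_to_bus - minutes_to_stop
    if slack > 5:        # plenty of time (assumed threshold)
        return "relaxed"
    if slack > 0:        # leave now at a normal pace
        return "walk"
    return "run"         # the bus is due: hurry
```

For example, a bus twelve minutes away with a four-minute walk yields "relaxed", while a bus two minutes away yields "run".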

All of the candidates were commuters who used buses every day to travel to and from work. The candidates were interviewed three times during the trial period. These interviews aimed to capture their use habits regarding the prototypes, the functional value of the interfaces, the usability and aesthetic quality, and the emotional and social attitudes of the test candidates.

Looking at the use habits, we concluded that the interfaces were consulted on a daily basis. Each of the users interacted with them either on their way to the bus stop or once they had arrived. Only in cases where the apartment or office was close to the bus stop did they start the application before leaving. If people could estimate the time it would take them to reach the stop (e.g. using the elevator), they only started the application once out on the street.

Over time, the users gained experience in estimating their timings. One of the users adjusted her "leaving the office" routine over time: she would start the application and, if the bus was still distant, surf the web or chat with colleagues until the bus was due to arrive.

The release of the applications used by the trial candidates did not allow the storage of frequently used bus routes and stops. This turned out to be the biggest handicap to adoption. As mentioned in the introduction of the interfaces section, if time applications take too much time to operate, the value gained by the time information does not match the effort put into accessing the information.

A second usability handicap was the fact that the application could only be started from within the application menu of the mobile phone. Again, this effort is too big for the application to be useful on a frequent or daily basis.

All users found the interfaces aesthetically pleasing and gave them marks like "beautiful", "cute", or "entertaining".

On a social and psychological level, the team found it interesting that real-time services not only support those who like to plan ahead and want to compare different route possibilities in order to save time or be more efficient, but also give people less inclined to plan more possibilities to seize the moment. This supported our hypothesis that time information devices do not necessarily save time, since their effect depends on the person who uses them. In the case of Fluidtime, the aim is to give people more control over time; it is the user's choice how to deal with the information.

CONCLUSIONS
The aim of the project was to develop a prototype infrastructure and a set of interfaces that allowed users to access real-time information in the context of everyday life (commuting and doing the laundry).

It was important to provide the fully functional prototypes to the test users in their particular everyday life contexts in order to study the direct influence of the new technology on their daily habits and rituals.

In ubiquitous computing environments, the flow of everyday interaction has to be as smooth as possible. The value gained by new applications is often not equal to the effort put into learning and using them.

Ubiquitous solutions are difficult to test in the everyday life context, since many factors influence the results of the investigation. Nevertheless, we found it particularly helpful to spend time with the users while they employed the system on the streets in their everyday environment.

ACKNOWLEDGMENTS
We thank the Interaction Design Institute Ivrea for supporting this work and the entire team that made it possible, including Joanna Barth, Crispin Jones, Alberto Lagna, William Ngan, Laura Polazzi, Antonio Terreno, and Victor Zambrano. We also want to thank the faculty and staff at the Institute for providing helpful comments and feedback for this project.

REFERENCES
1. 5T consortium Turin: http://www.5t-torino.it/index_en.html
2. Alpern, M., Bush, J., Culah, R., Hernandez, J., Herrera, E., and Van, L. M-Commerce Opportunities and Revenue Models in Mass Public Transportation Scheduling. http://www.alpern.org/files/MobileBus%20Report.pdf
3. Barth, J. Fluidtime Survey: Competitive Research Examples. http://www.fluidtime.net/download/Fluidtime_survey.pdf, 2002.
4. Beecham, L. Missed GP Appointments Cost NHS Money. http://bmj.com/cgi/content/full/319/7209/536/c, 1999.
5. Brardinoni, M. Telematic Technologies for Traffic and Transport in Turin. www.matisse2000.org/Esempi.nsf/0/45bb8686703c9e3ac12566a40036100e/$FILE/5T.doc
6. Dyer, O. Patients Will Be Reminded of Appointments by Text Messages. http://bmj.com/cgi/content/full/326/7402/1281-a, 2003.
7. Kreitzman, L. The 24 Hour Society. Profile Books, London, 1999.
8. Levine, R. A Geography of Time. Basic Books, New York, 1997.
9. NextBus Inc.: http://www.nextbus.com

Pulp Computing
Tim Kindberg, Rakhi Rajani, Mirjana Spasojevic, Ella Tallyn
Mobile and Media Systems Lab
Hewlett-Packard Labs
Palo Alto, CA 94304 USA
{timothy, rarajani, mirjana, etallyn}@hpl.hp.com

ABSTRACT
We are investigating the integration of paper artifacts into personal and inter-personal computing environments. We will demonstrate portable paper interfaces to media collections, and active photos that allow users to access annotations in both web-based and printed representations.

Keywords
Paper interfaces, hypermedia, physical hyperlinks, mobile computing, World Wide Web

INTRODUCTION
The Pulp Computing project at HP Labs [7] is investigating how to integrate paper artifacts into personal and inter-personal computing environments. Our basic research question is: given that paper plays such an important role in our lives [9], how can we best enable people to work between their collections of paper-based artifacts and the digital resources stored on the web and in their PCs?

Our approach to paper is to view it as a compelling type of "UbiMedia": the intersection of ubiquitous computing and hypermedia [1]. What we mean by that is, first, that we can print physical hyperlinks [4] to digital resources on paper, in the form of optical tags such as barcodes and glyphs, or in the form of electronically readable tags using conductive inks, such as those developed by the Paper++ project [6]. Those physical hyperlinks can reference any content or services available on the Web. Not only can users insert tags into their documents (providing electronic functionality in the printed versions), but virtual objects can print references and even interfaces to themselves on paper, for access by users with various types of "smart pen" as a reader.

Second, paper is very portable, and so it is suitable for exchange between people, and also for transport into and out of ubiquitous computing environments. Ideally, users should be able to take advantage of their smart paper artifacts and acquire useful smart paper artifacts wherever they go. For example, a user can take a print-out of a meeting back to the office to find links to the attendees and the content that was presented. Smart paper should just "work" in any given environment.

Many have investigated augmenting paper with electronic functionality, including notebooks [11], post-its [5] and photographs [3]. Our distinct emphasis is (a) on interaction models for working between paper and electronic resources that apply even when the user is nomadic; and (b) on empowering non-technical users to author hyperlinks between paper and electronic resources. We aim to support spontaneous usage of smart paper, as opposed to relatively long-term form-filling and augmented-book applications [10].

Figure 1: Fil-o-media

DEMONSTRATIONS
To illustrate the first steps in our research, we intend to demonstrate both of the following prototypes.

Demonstration 1. Fil-o-media: Portable Paper Interfaces to Media Collections
The demonstration will be of (a) a "fil-o-media" (cf. the Filofax, a portable ring-binder for diary entries etc. [2]), which contains paper links to a user's music, graphics, video and document collections; and (b) a model for users to exchange content via personal cards. The demonstration will involve the paper artifacts plus off-the-shelf handheld wireless devices for dereferencing the hyperlinks. We will demonstrate how to transfer content from one user's fil-o-media to another (Figure 1). One user selects content from their fil-o-media and "binds" it to a personal card: that is, they place a reference to the content in a hyperlink on the card. Then they give the card to the other user, who can later bind the content into their own fil-o-media, or simply keep the card, to play on output devices of their choice.

Demonstration 2. Active Photos
An "active photo" is one with both web-based and printed representations, each of which has links to annotations in the form of text, audio, video or hyperlinks to web pages. The annotations may apply to the entire image or to parts of the image. The photos can be taken and annotated on the fly, e.g. during a meeting or at a conference (Figure 2). The annotations can be viewed from prints of the photos as well as on the web.

Figure 2: Dynamic annotations of web-based images (the screenshot shows an annotation form with the URL http://mirjanashomepage.com, the text "I work on the Pulp Computing project at HP Labs", and a submit button)

We shall demonstrate how a conference attendee can annotate themselves in a group photo with text such as their home page, or with a spoken message. Those annotations can be experienced on a public display at the conference, or at any time from the printed "informal proceedings" of the conference (Figure 3). Printed images have certain advantages, such as being of higher resolution and easy to pass around and share. Our current implementation uses the Seiko InkLink technology [8] for identifying "hot spots" in a printed image.

Figure 3: Accessing annotations from a printed image (the diagram shows a locatable stylus and its position detector: the user taps the corners of the printed image, e.g. a photo or map, to locate it; the stylus position (x', y') is sent over a connection to the image hyperlink service, and the hyperlinked resource is rendered on an external device or on the screen/speakers built into the browsing appliance)

REFERENCES
1. Barton, J., Goddi, P., and Spasojevic, M. Creating and Experiencing Ubimedia. HP Labs Tech Report HPL-2003-38, 2003.
2. Filofax home page: http://www.filofax.com/
3. Frohlich, D., Kuchinsky, A., Pering, C., Don, A., and Ariss, S. Requirements for Photoware. Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work, November 16-20, 2002, New Orleans, Louisiana, USA.
4. Kindberg, T. Implementing Physical Hyperlinks Using Ubiquitous Identifier Resolution. In Proc. 11th International World Wide Web Conference.
5. Ljungstrand, P., Redström, J. and Holmquist, L.E. WebStickers: Using Physical Tokens to Access, Manage and Share Bookmarks to the Web. In Proceedings of Designing Augmented Reality Environments (DARE) 2000, ACM Press, 2000.
6. Paper++ site, ETH Zurich: http://www.globis.ethz.ch/research/paperpp/index.html
7. The Pulp Computing home page: http://purl.org/net/PulpComputing/home
8. Seiko InkLink: http://www.siibusinessproducts.com/
9. Sellen, A., and Harper, R. The Myth of the Paperless Office.
10. Siio, I., Masui, T., and Fukuchi, K. Real-World Interaction Using the FieldMouse. Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology, 1999, pp. 113-119.
11. Stifelman, L. Augmenting Real-World Objects: A Paper-Based Audio Notebook. In Proceedings of CHI '96.
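The corner-tapping calibration in Figure 3 amounts to mapping raw stylus coordinates into image-relative coordinates, then looking the result up against hot-spot regions. The following is a sketch under the simplifying assumptions that the printed image is axis-aligned and two opposite corner taps suffice; the actual InkLink-based implementation is not described at this level of detail:

```python
def locate_image(top_left, bottom_right):
    """The user taps two opposite corners of the printed image; return a
    function mapping raw stylus coordinates to normalised (x', y') in [0, 1]."""
    (x0, y0), (x1, y1) = top_left, bottom_right
    def normalise(x, y):
        return ((x - x0) / (x1 - x0), (y - y0) / (y1 - y0))
    return normalise

def hit_hotspot(point, hotspots):
    """Return the annotation whose rectangular hot spot contains the point."""
    x, y = point
    for (left, top, right, bottom), annotation in hotspots.items():
        if left <= x <= right and top <= y <= bottom:
            return annotation
    return None

norm = locate_image((100, 50), (500, 350))   # corner taps, in raw stylus units
hotspots = {(0.0, 0.0, 0.5, 0.5): "spoken message by attendee A"}  # hypothetical
```

A tap at raw position (200, 125) then normalises to (0.25, 0.25), which falls inside the upper-left hot spot and retrieves its annotation.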

Living Sculpture
Yves Amu Klein, Michael Hudson
[email protected], [email protected]
Lorax Works, 12415 N. 61st Place, Scottsdale, AZ 85254
(480) 991-4470

ABSTRACT
In this paper we introduce the concept of Living Sculpture and its philosophy. Next, we present "Octofungi", an eight-eyed, eight-legged Living Sculpture. We conclude with a look at Living Sculpture's future.

Keywords
Artificial Life, Robotics, Living Sculpture, Octofungi

INTRODUCTION
Living Sculpture is the name of the current body of work by artist Yves Amu Klein. Living Sculpture represents a series of works that attempts to bring emotional intelligence and awareness to sculptured life forms. To date, computing via embedded controllers and genetic evolution servers has been an integral part of every Living Sculpture. In turn, Living Sculpture has brought computing into the traditional gallery setting and made it the brains of man-made life forms. "Octofungi" is one such life form. Octofungi is an interactive sculpture that exhibits simple reflexive autonomous behavior, learns its surroundings, and interacts with them. The future of Living Sculpture promises works on the micro-scale as well as giant environmental pieces. These works will require new computing technologies and expand the role of computing in our world.

Is this ubiquitous computing? Art and sculptures are part of our culture. They are everywhere: in our houses, parks, gardens, museums, etc. Similarly, living sculptures may be found in the same environments. Living sculptures are ubiquitous in the sense that they answer our cultural desire to have art surrounding us. They are also computing, because they are behavioral sculptures.

Fig. 1. The eight-legged Octofungi (1994-96), a 12-inch-tall sculpture of colored polyurethane, micro glass beads, and natural fibers. Driven by a neural network, Octofungi moves its legs in graceful patterns somewhat resembling the movements of a sea anemone.

LIVING SCULPTURE
We first adopted the term Living Sculpture in the early 1990s to describe the new breed of robotic sculptures Klein was creating. Living Sculptures are a natural evolutionary step in classical sculpture. In the same way Alexander Calder and Jean Tinguely brought motion to their art to create kinetic sculpture [1], Klein wanted to bring emotional intelligence and behavior to sculpture to create a new form: Living Sculpture, or Behavioral Art.

The process of creating a Living Sculpture is challenging due to the complexity involved in having that sculpture integrate multiple changing inputs and react in real time. From finding appropriate materials to developing technologies for gesture, locomotion, sensory input, and behavior, countless technological and physical obstacles have to be overcome to achieve a unified sculpture. The work is an attempt to find a symbiotic balance between classical artistic expression and contemporary technologies.

Living Sculpture also incorporates a sense of biological design. When we observe a living organism, we can see detail and complexity on any scale or from any angle. A Living Sculpture should embrace this intricate detail as much as possible. As a design progresses from the inner basic systems to the complex outer systems, surprising and elaborate aesthetic and behavioral effects may emerge that exceed our expectations. The goal is to make Living Sculptures interesting and exciting from any perspective. As with living organisms, the details of Living Sculptures are not simply for aesthetic purposes. Each line and curve should have a significant effect on the functionality and aesthetics of the piece in order to survive the design

process. In a sense, the principles of natural selection are applied to a Living Sculpture in progress.

A Living Sculpture should be interesting (appealing to our senses) not only in appearance but also in behavior. A sculpture should respond appropriately as people interact with it, and different people should provoke different responses from the sculpture. Living Sculptures are imbued with both instinctive behaviors and more complex responses. For example, an aggressive viewer may trigger a defense mechanism within the sculpture, while a gentle viewer may experience a more subtle and pleasing reaction. The more time the viewer invests in developing a constructive relationship with the sculpture, the more interesting the response should become. We attempt to make pieces that show depth of design as well as depth of behavior. Functionality and aesthetics are tightly linked within the works and create a sense of unity and homogeneity. Our audience will enjoy all these features of a Living Sculpture, which are implemented in Octofungi. However, it is sometimes difficult to construct behavior in a demo setup, since the learning process can take hours.

THE PHILOSOPHY OF LIVING SCULPTURE
Living Sculpture is also a philosophical as well as a social-political quest that asks questions and challenges our existence with our creations. Creating life has been portrayed by many writers as a dangerous and insane enterprise. The idea of another intelligent life form frightens religious institutions, and they condemn it as profanity. History has shown that we as a species are both attracted to and afraid of the unknown. We feel that caution is definitely in order. However, if taken seriously, artificial life can help us understand our own existence. It may even provide us with a means to transform into our next evolutionary form, a being of the cosmos. Living Sculpture is about bringing these ideas, with their pros and cons, into a dynamic and interactive dialogue. Here the sculptures strive to understand us as we try to grasp their significance.

Some questions that we might ask ourselves include: Should we have Artificial Intelligence and Artificial Life in our society? If we do achieve artificial life forms, what rights should those life forms have? What is life? Is life dependent on carbon-based structures found on earth, or is it information, which is independent of its basic material?

OCTOFUNGI
Octofungi has eight legs and eight eyes and lives on a pedestal in a museum's gallery or a collector's favorite spot. Viewers interact with Octofungi by waving their hands over its eyes. Octofungi reacts to viewers by moving its body based on its interpretation of the viewer's actions. Octofungi is a reactive piece. It is sensitive to changes in light and reacts upon these changes. To interact with the sculpture, a person only needs to move his hands above the eight light sensors placed around the brain frame. Depending on the interaction of the participant, Octofungi will manifest different behaviors.

When Octofungi is at rest or in an emotionally neutral state, it is an inanimate object, like a vase; Octofungi is a symmetrical sculpture in a classical way. To stimulate Octofungi, people cast shadows of varying degrees of darkness, speed, and direction relative to its eight photocell "eyes" by moving their hands above the eight light sensors placed around the brain frame.

The photocells use an analog-to-digital converter to convert the analog stimulus to a digital signal that is sent to a Kohonen neural network. The neural network knows its environment; it can distinguish between an intruder and a periodic activity such as a fan or a slight variation in light from a window. In the Kohonen neural network, the neurons compete to decide the winner, or next position. The network considers Octofungi's current state, including the positions of its eight legs, each of which may be in 1 of 256 positions. Since Octofungi has eight legs and each leg can only accept a single winning position, our neural network can have from zero to eight winners.

The winning positions are transmitted from the neural network processors to the shape memory alloy (SMA) driver processor via a serial line, which sends a pulse width modulated (PWM) signal to each of the eight legs. As the legs move, a digital encoder gives feedback on the position of each leg. The feedback is also sent to the neural network, where it is used to learn the discrepancies of the leg mechanisms and environment, such as friction and obstacles. The neural network acts as the brain by controlling the sculpture's form, giving it a life-like behavior.

Is Octofungi Alive?
Octofungi is quite a complex system, but it lacks elements that are indispensable to any life form. First, it does not look for food. If we consider the intake of energy as eating, Octofungi is fed intravenously, but it does not "know" that it needs to eat and it does not know how to get food. However, plants are also "wired" in a sense to the nutrients in the ground and to the sun's energy. Plants, however, know how to search for these nutrients and energy by moving their roots or leaves closer to the sources.

Octofungi's awareness at present is purely instinctual. It has no higher thought processes. It is probably comparable to a non-social insect such as a moth, or to a mollusk such as a snail. Although Octofungi presents some of the same elements as simple life forms, it still lacks full autonomy.
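The competitive selection loop described above can be sketched in code. This is a loose illustration, not the sculpture's actual firmware: the dimensions (eight photocells, eight legs, 256 candidate positions per leg) come from the text, while the reference-vector layout, the distance rule, and all function names are our assumptions.

```python
import random

N_SENSORS, N_LEGS, N_POSITIONS = 8, 8, 256

# One reference vector per (leg, candidate position); random initialization.
rng = random.Random(0)
weights = [[[rng.random() for _ in range(N_SENSORS)]
            for _ in range(N_POSITIONS)]
           for _ in range(N_LEGS)]

def distance(a, b):
    """Squared Euclidean distance between a reference vector and a stimulus."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def select_winners(light_levels):
    """For each leg, the candidate position whose reference vector best
    matches the eight photocell readings wins (winner-take-all), so the
    network yields between zero and eight distinct winning positions."""
    winners = []
    for leg in range(N_LEGS):
        best = min(range(N_POSITIONS),
                   key=lambda p: distance(weights[leg][p], light_levels))
        winners.append(best)  # a single winning position per leg
    return winners

def learn(light_levels, winners, rate=0.1):
    """Nudge each winning unit toward the observed stimulus, so the map
    adapts to its environment (in the real piece the encoder feedback on
    friction and obstacles also enters here; omitted in this sketch)."""
    for leg, pos in enumerate(winners):
        w = weights[leg][pos]
        weights[leg][pos] = [wi + rate * (x - wi)
                             for wi, x in zip(w, light_levels)]
```

In the real piece, the winning positions would then be serialized to the SMA driver processor as PWM commands for the legs.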
Only Octofungi's behaviors are presently controlled by its will. Nonetheless, we believe that the line that separates the living from the inert is fuzzier than we think.

THE FUTURE OF LIVING SCULPTURE
Living Sculpture started by making simple reactive sculptures. Today we are making evolvable behavioral art forms, and tomorrow we hope to bring more technologies to our palette in order to show how biology and technology could merge into our future forms. Here we describe four of the projects to come:

Lumedusa
Lumedusa is an intelligent micro-robotic wearable Living Sculpture. Lumedusa is a hydra/squid-like creature with six tentacles that lives in a pendant containing its aqueous solution. Lumedusa will be approximately 4-5 mm across when fully assembled. Lumedusa is designed so that it can be micro-machined as a flat device and then fold up into a 3-dimensional robot using its EAP actuators. Lumedusa's body will be composed of silicon plates connected by polypyrrole (PPy) micro-actuators and illuminated by polypyrrole (PPy) LEDs. Lumedusa will be tethered to its pendant via a flexible umbilical cord used to communicate with its brain, a micro-controller built into the back of the pendant.

Cello
Our cells are a tight swarm of microorganisms all working for the colony... us. As we eat, we are essentially recycling organic matter to repair our damaged cells and give birth to new ones. When the swarm matures, it will spend a great deal of time and energy for the sole purpose of creating a new colony. And then the cycle continues… Cello embraces this idea. Cello is a sculpture that is made of hundreds of cells that can combine together according to their genetic code. During its lifespan, Cello will add cells to the colony to grow, or replace cells where there's damage. After a genetically dictated period of time, Cello will die so that other Cellos will live by reusing its cells. Cello's cells are identical in shape but not in behavior. As the body is being formed, cells become specialized to serve the colony. A Cello begins life as one cell, but for Cello, life never ends.

Arius
Some Living Sculptures are wild creatures going about their business, eluding us, like a pack of lions or a flock of migratory birds. This is what Arius and many of its descendants will be: a flock of hydrogen balloons that no one wants to mess with due to their explosive contents. The flock of Arius will float away along the Pacific coast like strange birds from a different time. As creatures of the sky and sea, they use the sun as their source of energy, and water as their food. They can fly at great speeds with their hydrogen engines or glide peacefully at sunset. They can communicate with each other using sounds and light signals, or use GPS to provide their positioning via radio signals. However, most of the time, they will use their intelligence and senses to navigate. Arius is one step closer to a fully independent creature.

Space Ribbon
Space Ribbon is a sculpture that will travel in orbit around our planet to remind us of the beauty of Mother Nature. Our transformation begins in our mind. We hope to be able to create a colony of space ribbons so that we can admire their dance at sunset and sunrise. For a few minutes each day, they will remind us that we should all play and smile. They will then disappear, re-introducing us to our beautiful night sky. They will remind us not to pollute our atmosphere with light and chemicals, as we can already see our night vanish in the glow of our cities. Space Ribbon is composed of a body and two long ribbons made of electro-active polymers that can bend like the tentacles of the nautilus. Each side of the ribbon is covered with gold-plated electrodes that reflect the sun's light while at the same time moving its tentacles.

ACKNOWLEDGEMENTS
We would like to thank Martin Kirk and Jason Harris for their work on Octofungi. Thanks to Keith Causey for his work on the FORTH Board, our current embedded controller. Thanks to John Kariuki for editing this paper. We would also like to thank Dr. Elizabeth Smela and the Small Smart Systems Center (S.S.S.C.) at the University of Maryland for their contribution to the Lumedusa Project.

REFERENCES
1. De La Croix, H., Tansey, R., and Kirkpatrick, D. Gardner's Art Through the Ages, vol. 2: Renaissance and Modern Art. Harcourt Brace Jovanovich, New York, NY, 1991.
Place Lab’s First Step:
A Location-Enhanced Conference Guide
Anthony LaMarca1, David McDonald3, Bill N. Schilit1,
William G. Griswold4, Gaetano Borriello1,2, Eithon Cadag3, Jason Tabert3
1 Intel Research Seattle, 1100 NE 45th Street, 6th Floor, Seattle, WA 98105
2 Dept. of Computer Science and Engineering, University of Washington, Seattle, WA 98195
3 Information School, University of Washington, Seattle, WA 98195
4 Dept. of Computer Science and Engineering, UC San Diego, La Jolla, CA 92093

ABSTRACT
This demonstration explores how people's existing notebook computers, the WiFi access points in a city, a carefully selected cache of web pages, and some software glue can be combined to provide a location-enhanced conference guide. Although location-aware applications, tourist guides, museum tours, and the like are well known to the research community, they have yet to see widespread availability. Part of the reason for this is that exotic infrastructure or additional mobile hardware is generally required. In contrast, we are pursuing a low-overhead, client-based, software-only approach. Users load a software package on their WiFi notebook computers that adds WiFi cell-site positioning to the browser and lets them easily find location-relevant content about the conference hotel and surrounding neighborhood. An unusual aspect of our approach is that it requires no network connectivity: a web page cache provides content, and beacons from existing WiFi access points provide location.

Keywords
Location-aware, tourist guides, WiFi, wardriving, context-aware, ubiquitous computing.

INTRODUCTION
The goal of Place Lab is to hasten the broad adoption of location-aware computing. Our approach is to develop an open-source software base and foster community building in a cross-organizational initiative involving universities and research labs. Place Lab will address both the technological and the social barriers to truly ubiquitous deployment of location-aware computing. In this demonstration, we are taking a small first step that demonstrates how to provide an entirely client-based location-enhanced conference guide for the Ubicomp conference venue. Place Lab leverages the fact that many cities and towns around the world (e.g., Manhattan, downtown Seattle, the business district of Athens, Georgia) have wireless hotspot coverage so dense that cells overlap. WiFi is now in the mainstream: 300 U.S. McDonald's restaurants are offering one-hour connections to customers buying a value meal [4]. As this trend continues, it is likely that wherever you open your notebook computer or take out your PDA, you will have a good chance of finding that you are within range of a hotspot. Wireless hotspots can offer a kind of ubiquitous information access, if you have a usage subscription with the carrier. What is missing, and what Place Lab will enable, is a way for your computer to know its location – to map the hotspots around you to a physical location worldwide. We use the term WiFi positioning to denote this capability of client-computed location using wireless hotspots.

Location-based computing is one of those technologies that, like the web, increase in worth the more ubiquitous they become. More users will motivate creative developers to produce more applications, services and content, which drives investment in infrastructure, and greater usage. To bootstrap this cycle, Place Lab lowers the cost of entry.

Figure 1: Seattle and other cities have WiFi coverage so dense that cells overlap. Each WiFi access point beacons a unique identifier that can be used as a lookup key for coarse-grain location. In this image, each dot is the position of a WiFi AP mapped in a single "wardrive" of downtown Seattle.
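The lookup that the Figure 1 caption describes can be sketched as a small table from beacon identifiers to coordinates. This is an illustrative sketch rather than Place Lab's code: the MAC addresses and coordinates below are invented, and averaging the positions of all heard access points is just one simple way to form a coarse-grain estimate.

```python
# Hypothetical wardrive directory: AP MAC address -> (latitude, longitude).
AP_DIRECTORY = {
    "00:02:2d:0a:11:22": (47.6097, -122.3331),
    "00:02:2d:0b:33:44": (47.6101, -122.3340),
    "00:02:2d:0c:55:66": (47.6090, -122.3318),
}

def estimate_position(heard_macs):
    """Map the beacons currently heard to a coarse-grain position by
    averaging the mapped coordinates of every known access point."""
    known = [AP_DIRECTORY[m] for m in heard_macs if m in AP_DIRECTORY]
    if not known:
        return None  # no previously mapped hotspot in range
    lat = sum(p[0] for p in known) / len(known)
    lon = sum(p[1] for p in known) / len(known)
    return (lat, lon)
```

The same keys could also index the cached web pages, which is how a demo like this can tie content to places without any network connectivity.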
It only requires a mobile computer to have standard WiFi capability. Whenever a user's computer is in the presence of a beaconing access point, it looks up the MAC addresses of nearby hotspots in a cached directory and determines the user's location. This location can then be made available to both local applications and enterprise and web services. In the case of our demo, the user's location is used to present relevant tourist information such as historical nuggets, nearby restaurants, hotels and points of interest. Achieving the full Place Lab vision [6] requires that a number of technical and social issues be addressed, including: how to build, maintain, and distribute the hotspot database; privacy mechanisms to help users control who can see their location data; and how to code web content for its relevance to different locations.

THE CONFERENCE GUIDE APPLICATION
At UbiComp 2003 we are demonstrating a proof-of-concept system to launch our community development effort. We have developed a stand-alone system that conference participants can download and install onto their laptops that will give them a location-aware conference guide for the neighborhood that surrounds the Ubicomp '03 venue, similar to GUIDE [3] and Cyberguide [5]. We now describe what we hope the user experience will be, how the demo system will be architected, and finally what we expect to gain from the experience.

In our demo, users will interact with the conference guide via a standard web browser accessing HTML pages. Sample pages are shown in Figures 1 and 2. All of the pages generated by the system will be coded for location relevance. The map view on each page will place the user on a map of downtown Seattle (or a detailed map of the conference hotel). The page will also present images of nearby locales. Users can drill down from the basic view to find interesting images, facts and opinions.

Figure 2: The main page of the Place-Enhanced Conference Guide presents images of interesting "sights" from around the conference venue (the conference hotel). The user's location is detected through WiFi hotspots that have been previously mapped. The content (images, factoids, opinions, and links) is both manually created and culled from the Web prior to the conference, then categorized, geo-coded, and placed in an install package. When a particular sight is selected, more detailed information is displayed. The entire web site runs without network connectivity and uses beacons from the last-seen WiFi hotspot to approximate location.

One of our concerns in designing the conference guide was that the location algorithms we are using provide rough-grained information. Although we expect that in time other researchers will apply better algorithms to improve this aspect of Place Lab, we knew that it was possible that position reports could be off by a city block or more! Our first interface had a text-based style and included specific descriptions of computed position. We decided to generalize the interface with imagery, including a map covering a few blocks, in order to avoid confusion if the positioning broke down.

The user should find the tool to be responsive to changes in their location and have a fairly high density of places of interest within a few blocks of the conference venue. To simplify the design of the system and to make it available to as many users as possible, the prototype will not require that the user have network connectivity. Rather, the content will be bundled with the software, allowing the Place Lab pages, as well as the pages for the places of interest, to be cached locally. The demo system the users install will contain four main components. The first is a WiFi spotter that can identify locally available base stations. The second component is a cache of web pages for the places of interest around the conference. The third is a customized database that maps both base station MAC addresses and
web pages to geographic locations. Lastly, the system contains a small web server to make web pages available to a web browser.

When a user points their browser at the server port on their local machine, the web server runs a PHP script that takes the user's location as provided by the spotter and selects and renders an appropriate set of nearby places of interest. Using a local web server limits the types of services that we can provide, but eliminates a number of hard technical problems such as how to ensure users' privacy (not a problem since they are running locally), how to handle periods of disconnection (not a problem since we don't rely on connectivity), how to deal with a large database of MAC addresses (our database will be very small), and how to tie legacy content to physical locations (we plan to do this by hand for our sample corpus).

OFF SITE VERSUS ON SITE
One of the risks we see with our demo is that people may not take their notebook computers when they leave the conference center or hotel. In a way we are presuming a usage model of mobile workers who pull out their notebook computers in coffee shops, airports, copy stores, internet cafes, and hotel lobbies, and this may not match the conference attendees. In an ideal world the conference guide would be provided on smaller devices that people do carry. In the future, as WiFi PDAs are more widely adopted, we will likely follow that avenue. In the meantime, we are very interested in learning how and why people travel with and open their notebook computers, and this demonstration will be an opportunity, through informal interviews, to gather data.

In order to give the conference guide experience to people who never take notebooks offsite, we will provide smaller-grain information and services keyed to the hotspots in the conference center and hotel. Much of this content will have to be created by hand, but we see a nice opportunity to create color and fun. For example, our team might photograph and interview the bartender and include a hotel bar page, as well as scan the hotel restaurant menus.

For location inside the conference center we will use the existing conference APs and may add "warmspot" access points with a smaller range. These additional beacons may be necessary to provide complete coverage of interesting areas.

CONCLUSIONS
Our demonstration will be continuous and will be available to volunteer users from the start of the conference. During the demonstration session, we will show the location-enhanced browser to all attendees and seek more volunteers. The feedback provided by these users will be invaluable for our research and development team. We will ask volunteers to release their usage (location and click) data to us for anonymous and aggregate analysis. (The release will be handled through a clear and concise form they can optionally sign when they download their demo. If they do release their usage data, we'll ask them to email us the data when they return home. No one will be required to divulge any data without their explicit permission.) We hope to learn about the coverage hotspots currently provide for the paths taken by users, and about the breadth and depth of web navigation at the different locations.

More importantly, however, our hope is to use this conference venue as a springboard for the start of our community-building effort and approach to collaborative ubiquitous computing research. Our goal is to build awareness of our activities and enlist collaborators and users. This demonstration is a first step in an iterative design process that we expect will gather momentum and attract a wide range of applications.

REFERENCES
1. William G. Griswold, Patricia Shanahan, Steve W. Brown, Robert Boyer, Matt Ratto, R. Benjamin Shapiro, and Tan Minh Truong, "ActiveCampus – Experiments in Community-Oriented Ubiquitous Computing."
2. Simon Byers and Dave Kormann, "802.11b access point mapping," Communications of the ACM, 46(5), 2003, pp. 41-46.
3. Cheverst, K., Davies, N., Mitchell, K., and Friday, A., "Experiences of Developing and Deploying a Context-Aware Tourist Guide: The GUIDE Project," Proceedings of MOBICOM 2000, Boston, ACM Press, August 2000, pp. 20-31.
4. Jim Krane, "Burgers, Fries, And Wi-Fi," Information Week, March 11, 2003.
5. Gregory D. Abowd, Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper, and Mike Pinkerton, "Cyberguide: a mobile context-aware tour guide," ACM Wireless Networks, 3(5), 1997, pp. 421-433, Kluwer Academic Publishers.
6. Bill N. Schilit, et al., "Challenge: Ubiquitous Location-Aware Computing – The Place Lab Initiative," to appear in The First ACM International Workshop on Wireless Mobile Applications and Services on WLAN Hotspots (WMASH), 2003, San Diego, CA.
AuraLamp: Contextual Speech Recognition in an
Eye Contact Sensing Light Appliance
Aadil Mamuji, Roel Vertegaal, Jeffrey S. Shell,
Thanh Pham and Changuk Sohn
Human Media Lab
Queen’s University
Kingston, ON, Canada K7L 3N6
{mamuji, roel, shell, pham, csohn}@cs.queensu.ca

ABSTRACT
In this paper we present AuraLamp, a lava lamp augmented with an eye contact sensor and speech recognition capabilities. The lamp listens to simple voice commands such as "On" or "Off", but only when the user looks at the lamp. It demonstrates how we may coordinate communications between a user and many ubiquitous appliances by sensing when the user pays attention to a particular device. Rather than competing for the user's attention, devices enter a turn taking process similar to that used in human group conversation. When the user is speaking to the lamp, the speech recognition lexicon is automatically limited to the vocabulary of the lamp, thus increasing recognition accuracy.

Keywords
Attentive User Interfaces, EyePliances, Context-Aware Computing, Ubiquitous Computing, Notification Systems.

INTRODUCTION
With the advent of ubiquitous computers, we have seen a considerable increase in the number of digital appliances at the disposal of each user. However, most ubiquitous appliances are still designed to act in isolation, as if they were the user's only computer. Each appliance may notify the user of incoming communications or computer activity independently, without any consideration for the user's engagement with other devices. Devices now relay volumes of email, instant messages, phone calls and appointment notifications, producing an intricate web of annoying 'attention grabbers' within which a user can easily become entangled. We believe that by coordinating communications on the basis of user activity, or more generally, user attention, devices may engage in more polite and respectful interactions with users – in ways that do not fragment their limited attention [8].

More and more frequently, users are also engaged in remote interactions with their digital appliances. Many ubiquitous appliances are either worn, or embedded in everyday objects without a significant visual or manual computer interface. As the accuracy of speech recognition interfaces increases, we believe users will come to rely more on voice commands in their interactions with such appliances. However, without specific naming conventions, speech recognition engines cannot determine which device, among many, the user is speaking to.

In this paper, we discuss the design of ubiquitous speech recognition appliances that use the eye gaze of the user to determine when to communicate. By augmenting ubiquitous devices with "eye contact" sensors that determine when the user looks at them, appliances obtain knowledge about the current engagement of a user with the device. Such information not only aids in the use of deictic references in speech interfaces, it also provides a significant source of information for determining when devices should avoid communications with their user [8]. We focus our discussion on a prototype called AuraLamp, a light fixture that listens to voice commands only when the user looks at it.

Visual Attention and Human Group Communication
We were in part motivated by work performed in the area of social psychology towards understanding the regulation of human multiparty communication. In human group conversation, attention is inherently a limited resource. Humans can only listen to, and absorb the message of, one person at a time [1]. In group conversations, humans have resolved this conflict by allowing only one person to speak at any given time. By using nonverbal cues that convey attention, we achieve a remarkably efficient process of speaker exchange, or turn taking [1]. Turn taking provides a powerful metaphor for the regulation of communication with ubiquitous devices. According to Short et al. [9], as many as eight cues may be used to indicate an upcoming exchange of turns: completion of a grammatical clause; a socio-centric expression such as 'you know'; a drawl on the final syllable; a shift in pitch at the end of the clause; a drop in loudness; termination of gestures; relaxation of body position; and the resumption of eye contact with a listener. In group conversations, only one of these cues indicates to whom the speaker may be yielding the floor: eye contact [12]. Eye contact indicates with about 82% accuracy whether a person is being spoken to or listened to in four-person conversations [12]. When a speaker falls silent and looks at a listener, this is perceived as an invitation to take the floor. According to a recent study, 49% of the
amount of eye contact made with an interlocutor [10]. Humans use eye contact in the turn taking process for two reasons:

1) Eye fixations provide the most reliable indication of the target of a person's attention, including their conversational attention [12].

2) Eye contact is a nonverbal visual signal, one that can be used to negotiate turns without interrupting the verbal auditory channel.

In many cases, the eye gaze of the user, as an extra channel of input, provides an ideal candidate for ubiquitous devices to sense when their user is paying attention to them, or to another device or person. By tracking whether a user ignores or accepts requests for attention, interruptions by ubiquitous appliances can be made more subtle and sociable. As demonstrated by Maglio et al. in a Wizard of Oz experiment, when users interact with devices using a speech interface, they do indeed tend to look at the device at which the command is directed [4]. This principle is known as Look-to-Talk [6], and it allows devices to deduce when to listen to the user.

AuraLamp
AuraLamp (Figure 1) illustrates an attentive gaze- and speech-enabled appliance, or EyePliance [7]. It is a lava lamp augmented with an eye contact sensor and speech recognition capability. By looking at the lamp, a person indicates attention to the device, thereby activating its speech engine. When the user does not look, its speech engine deactivates and does not listen to the user. This avoids problems of multiple appliances listening at the same time, removing ambiguity in the interpretation of user speech commands. Since only one appliance is the active listener, users can use deictic references when referring to the device. Having only one of several appliances be the active listener allows the use of a single centralized speech recognition engine, as it greatly reduces the speech processing load for the total set of appliances. AuraLamp responds only to the two actions it is capable of – turning on and turning off. By switching the active speech recognition lexicon on the server to that of the EyePliance currently in focus, the accuracy of speech recognition is increased, while at the same time presenting the user with a small reusable vocabulary. AuraLamp is a model for how we may use visual attention with speech to interact with any household appliance. Each speech command in the lexicon is associated with an X10 home automation command. A serial interface routes these commands from the speech processing server to the electricity grid [13]. Over standard electrical wiring, the commands reach a simple controller unit capable of turning the appliance on or off. The X10 interface makes it easy to extend our interaction model to any appliance in the household.

Figure 1. AuraLamp light fixture with embedded eye contact sensor.

Sensing Eye Contact
AuraLamp senses the user's looking behavior through an embedded eye contact sensor mounted on top of the device. Eye contact sensors are cheap eye tracking input devices especially designed for the purpose of implementing Look-to-Talk with ubiquitous appliances. Unlike traditional eye trackers, their only requirement is to detect the user looking straight at the device. We designed a sensor that can be built for less than $500, consisting of a camera that finds pupils within its field of view using a simple computer vision algorithm [11] (see Figure 2). A set of infrared LEDs is mounted around the camera lens. When flashed, these produce a bright pupil reflection (red-eye effect) in eyes within range. Another set of LEDs is mounted off-axis. Flashing these produces a similar image, with black pupils. By syncing the LEDs with the camera clock, a bright and dark pupil effect is produced in alternate fields of each video frame. A simple algorithm finds any eyes in front of the camera by subtracting the even and odd fields of each frame [5]. The LEDs also produce a reflection from the surface of the eyes. These appear near the center of the detected pupils when the onlooker is looking at the camera, allowing the detection of eye contact without any calibration. Eye contact sensors obtain information about the number and location of pupils, and whether these pupils are looking at the device. When mounted on a ubiquitous device, the current prototypes can sense eye contact at up to a distance of 2 meters.
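The bright/dark pupil differencing just described can be sketched in a few lines. This is an illustrative sketch, not the published algorithm [5, 11]: frames are modeled as flat lists of pixel intensities, and the threshold and offset values are our assumptions.

```python
# Sketch of bright/dark pupil differencing for an eye contact sensor.
# even_field: field captured with the on-axis IR LEDs lit (bright pupils);
# odd_field: field captured with the off-axis LEDs lit (dark pupils).
# Pixels where the two fields differ strongly are pupil candidates.

def pupil_candidates(even_field, odd_field, width, threshold=60):
    """Subtract the two fields and return the (x, y) pixels whose
    intensity changes by more than the threshold."""
    candidates = []
    for i, (bright, dark) in enumerate(zip(even_field, odd_field)):
        if bright - dark > threshold:
            candidates.append((i % width, i // width))
    return candidates

def looking_at_camera(candidates, glints, max_offset=2):
    """Report eye contact if a corneal glint lies near the center of a
    detected pupil, so no per-user calibration is needed."""
    for px, py in candidates:
        for gx, gy in glints:
            if abs(px - gx) <= max_offset and abs(py - gy) <= max_offset:
                return True
    return False
```

A real sensor would first cluster the candidate pixels into pupil regions and take each region's centroid; the sketch keeps only the subtraction and glint-proximity test.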

Figure 2. Standalone eye contact sensor with camera lens


and on-axis and off-axis illumination circuitry.
By mounting multiple eye contact sensors on a single ubiquitous device, and by networking all eye contact sensors in a room, eye fixations can be tracked with great accuracy throughout the user's environment. When there are many people in the room, the number of devices actively listening is bounded by the number of people looking at devices. However, if a person looks at the device while another person is speaking, AuraLamp may incorrectly process the speech. By adding increased sensing capabilities to the room to determine who is looking, for example iris detection, proximity sensing, directional microphones and RFID tags, AuraLamp may be able to more accurately establish who is speaking, allowing it to determine whether it is the intended target of the spoken commands.

Our current prototype eye contact sensor is light and small enough to be attached to any household appliance (see Figure 1). In AuraLamp, the eye contact sensor is positioned on top of the light fixture, and has a visual range of approximately 40 degrees. Sensor data is sent over a TCP/IP connection to a system that synchronizes communications between a user and all EyePliances. This system, called EyeReason, also processes all speech from the user, interpreting it according to the lexicon of the currently attended device.

EyeReason
The EyeReason system coordinates communications among many EyePliances and the user by keeping track of user activity with each device. It operates as a centralized server that EyePliance clients such as AuraLamp may connect to. Devices report to the server whether a user is working with them by tracking manual interactions and eye contact. When the EyeReason system determines a device is in the focus of user attention, it raises the priority of communications between that device and the user. Typically, EyeReason allows the device with the highest priority to take the floor. When a speech recognition EyePliance such as AuraLamp takes the floor, EyeReason turns on its speech engine and switches the lexicon to that of the focus device. Figure 3 shows the EyeReason architecture. For each user, EyeReason maintains a list of connected devices.

Figure 3. EyeReason architecture.

When a user interacts with a particular device for a prolonged period of time, the server determines that it is the focus device. Requests from competing devices to deliver information may be suppressed by the server, or routed to the focus device depending on the content of that information. In the case of incoming email, the server determines the priority of the message using a Bayesian model, similar to that employed by Horvitz in the Priorities System [2]. In the case of speech interaction, devices need to be in the focus of user attention before the system allows the user and device to converse. By opening and closing communication channels on the basis of Bayesian statistics of user-device interaction, EyeReason acts as a gatekeeper determining which device is allowed to take the floor.

EyeReason thus provides a facility to coordinate communications among EyePliances by modeling user attention for devices and their communications. With AuraLamp, the EyeReason architecture simplifies the process of augmenting a standard appliance with gaze and speech capability. By embedding an eye contact sensor in an appliance and specifying an appropriate XML speech grammar, a device instantly becomes an EyePliance. If the appliance receives eye contact, a wireless headset processes speech commands using the XML lexicon specified in EyeReason to perform tasks which can either be processed through an X10 device, or directly interfaced into the appliance. If neither is possible, EyeReason still recognizes that a user is engaged with the device.

Gaze Activated Speech Lexicons
Because speech commands are processed through a centralized server, new forms of attentive interactivity are permitted without increasing the complexity of each appliance. With the Look-to-Talk paradigm as a foundation, EyeReason acts as more than just a gatekeeper for interactions with ubiquitous appliances. It integrates a speech recognition system that dynamically activates the control context of the device as the user shifts focus. The gaze actuated speech recognition encapsulated in EyeReason eliminates contextual ambiguity when interacting with a device via a voice channel.

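A minimal sketch of this style of arbitration, assuming a dwell-time focus rule and one lexicon per device, is shown below. The class names, the 1.5-second threshold and the "most sustained eye contact wins" heuristic are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of EyeReason-style focus arbitration and
# gaze-activated lexicon switching. All names are illustrative.

class EyePliance:
    def __init__(self, name, lexicon):
        self.name = name          # e.g. "AuraLamp"
        self.lexicon = lexicon    # speech commands this device accepts
        self.eye_contact = 0.0    # seconds of sustained eye contact

class EyeReasonServer:
    """Central server that tracks user attention across devices."""

    FOCUS_THRESHOLD = 1.5  # assumed dwell time, in seconds

    def __init__(self):
        self.devices = []
        self.focus = None

    def report_eye_contact(self, device, seconds):
        device.eye_contact = seconds
        self._update_focus()

    def _update_focus(self):
        # The device with the most sustained eye contact takes the
        # floor, provided it exceeds the dwell threshold.
        candidate = max(self.devices, key=lambda d: d.eye_contact,
                        default=None)
        if candidate and candidate.eye_contact >= self.FOCUS_THRESHOLD:
            self.focus = candidate

    def recognize(self, utterance):
        # Only the focus device's lexicon is active, so identical
        # commands ("on", "off") can be reused across devices.
        if self.focus and utterance in self.focus.lexicon:
            return (self.focus.name, utterance)
        return None
```

Because recognition only ever consults the focus device's lexicon, two appliances can safely define the same command words, which is the property the paper attributes to gaze-activated lexicons.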
Since EyeReason allocates voice control only to the EyePliances currently in focus, it allows duplicate voice grammar definitions across devices. EyeReason uses the Microsoft Speech API 5.1 SDK to implement these context-sensitive grammars through XML-based lexicons. Processing speech by AuraLamp through EyeReason involves two steps. First, the AuraLamp device driver detects activity information representing the attention of the user by polling the associated eye contact sensor over a TCP/IP connection. When a sufficient level of eye contact is detected, the driver loads the EyePliance's context-specific grammar. When an EyePliance driver activates its grammar, EyeReason automatically deactivates grammars for EyePliances not in the focus of user attention.

OTHER EYEPLIANCE PROTOTYPES
We have developed a number of other EyePliance prototypes that form part of the EyeReason architecture. Apart from Look-To-Talk interfaces, these include appliances that use eye contact sensing in novel ways to streamline interactions with the user with a minimum of interruptive requests for attention. The Attentive TV uses an eye contact sensor to determine whether someone is watching it [7]. If nobody is watching, the TV pauses its feed. When the viewer returns, the program resumes. This concept can generalize to other devices that are fitted to use visual cues of attention to perform meaningful actions.

EyeProxy [3] is an attentive desk-phone that consists of a pair of actuated eyeballs augmented with an eye contact sensor. The proxy acts as a surrogate for a remote person's eyes. It demonstrates how a device like a phone may request attention from its user by simulating eye contact, rather than by producing a disruptive auditory notification. When a remote person wishes to engage in a phone conversation with the user, EyeProxy conveys that person's interest by orienting its eyeballs towards the user's eyes. The user can pick up the phone by producing a prolonged fixation at the EyeProxy. If the user does not wish to answer the call, he simply looks away.

We are currently in the process of evaluating the principle of turn-taking EyePliances. Initial results are encouraging, suggesting that the use of eye contact sensing to regulate communications with ubiquitous appliances may in fact improve the efficiency of verbal interactions.

CONCLUSIONS
We presented AuraLamp, an attentive gaze and speech enabled appliance, or EyePliance. AuraLamp is a lava lamp augmented with an eye contact sensor and speech recognition capability. The lamp listens to simple voice commands such as "On" or "Off", but only when the user focuses his attention on the lamp. AuraLamp demonstrates how ubiquitous speech-enabled appliances may enter into a turn taking process with the user, allowing the use of deictic references to refer to any appliance. Focusing the active speech grammar to that of the currently active EyePliance increases speech recognition accuracy, while at the same time presenting the user with a small reusable vocabulary.

REFERENCES
1. Duncan, S. Some Signals and Rules for Taking Speaking Turns in Conversations. Journal of Personality and Social Psychology 23, 1972.
2. Horvitz, E., Jacobs, A., and Hovel, D. Attention-Sensitive Alerting. In Proceedings of UAI '99. Stockholm: Morgan Kaufmann, 1999, pp. 305-313.
3. Jabarin, B. et al. Establishing Remote Conversations Through Eye Contact With Physical Awareness Proxies. In Extended Abstracts of CHI 2003. Ft. Lauderdale: ACM Press, 2003, pp. 948-949.
4. Maglio, P. et al. Gaze and Speech in Attentive User Interfaces. In Proceedings of the Third International Conference on Multimodal Interfaces (2000). Beijing, China.
5. Morimoto, C. et al. Pupil Detection and Tracking Using Multiple Light Sources. Image and Vision Computing 18, 2000.
6. Oh, A. et al. Evaluating Look-to-Talk. In Extended Abstracts of CHI 2002. Minneapolis: ACM Press, 2002, pp. 650-651.
7. Shell, J. S. et al. EyePliances: Attention-Seeking Devices that Respond to Visual Attention. In Extended Abstracts of CHI 2003. Ft. Lauderdale: ACM Press, 2003, pp. 770-771.
8. Shell, J. S. et al. Interacting with Groups of Computers. Communications of the ACM 46(3), March 2003, pp. 40-46.
9. Short, J., Williams, E., and Christie, B. The Social Psychology of Telecommunications. London: Wiley, 1976.
10. Vertegaal, R. and Ding, Y. Explaining Effects of Eye Gaze on Mediated Group Conversations: Amount or Synchronization? In Proceedings of CSCW 2002. New Orleans: ACM Press, 2002, pp. 41-48.
11. Vertegaal, R. et al. Designing Attentive Cell Phones Using Wearable Eyecontact Sensors. In Extended Abstracts of CHI 2002. Minneapolis: ACM Press, 2002, pp. 646-647.
12. Vertegaal, R., Slagter, R., Van der Veer, G., and Nijholt, A. Eye Gaze Patterns in Conversations: There Is More to Conversational Agents Than Meets the Eyes. In Proceedings of CHI 2001. Seattle: ACM Press, 2001, pp. 301-308.
13. X10 Home Solutions. http://www.x10.com, 2003.

The Ubiquitous Computing Resource Page
(www.ucrp.org)

Joseph F. McCarthy
Intel Research
1100 NE 45th Street, 6th Floor
Seattle, WA 98105 USA
[email protected]

J. R. Jenkins, David G. Hendry
The Information School
University of Washington
Mary Gates Hall
Seattle, WA 98195-2840
{jrj4,dhendry}@u.washington.edu
ABSTRACT
Ubiquitous Computing is a field of research that is attracting increasing attention in academia, business and the general population. The International Conference on Ubiquitous Computing is an annual event that brings together members of the ubiquitous computing community to exchange ideas and share recent results. However, aside from the conference and its archival proceedings, there are few resources available for learning more about the history, current state and future prospects of this exciting new field. We are building the Ubiquitous Computing Resource Page, which contains a collection of content from a wide variety of online sources, and organizes it along the dimensions of people, projects and organizations involved in this field. The web site will also provide mechanisms for the community to share information about news, events and other activities related to ubiquitous computing.

Keywords
Ubiquitous computing, information architecture, online resources.

INTRODUCTION
Ubiquitous Computing is a multidisciplinary field of research that explores computing technology as it moves beyond the desktop environment and becomes increasingly interwoven into the fabrics of our lives. The increasing power and decreasing size and cost of a variety of technologies enables computers to be carried, handheld, worn, or embedded in things, places and even people. The research issues in this field include the design, implementation and deployment of these technologies, as well as the impact these technologies have on people and society.

Since the early articulation of this new paradigm of computing by Mark Weiser [Weiser, 1991], and the pioneering work he led at Xerox PARC [Weiser & Brown, 1997], a large number of other people and organizations have engaged in research into ubiquitous computing. Over the years, a few workshops were held on ubiquitous computing and related areas [Abowd & Schilit, 1997; Coen, 1998]; the field started reaching critical mass with the first Symposium on Handheld and Ubiquitous Computing [Gellersen, 1999], which has evolved into an annual, top-tier, international conference [Abowd, et al., 2001; Borriello & Holmquist, 2002].

The working papers and proceedings from the collection of events that have been convened for researchers in ubiquitous computing, as well as surveys of the field [Abowd & Mynatt, 2000], represent important resources for anyone wishing to learn more about the field. However, given the increasing availability and utility of online resources, we are creating an online repository of information about ubiquitous computing that augments these archival publications, focusing on the people, projects and organizations involved in the field.

RELATED WORK
Other fields have created online resources that provide considerable value to others in the communities exploring those fields. The field of human-computer interaction (HCI), for example, has the HCI Bibliography: Human-Computer Interaction Resources (www.hcibib.org [Perlman, 1999]), which is organized into four major areas:
• Learn about HCI
• The Bibliography
• HCI Columns and News
• Developer Resources
These areas are then further subdivided into a multitude of subareas. According to information on the main page, the site has been accessed over 600,000 times since April 1998 (when the site was revised, having been created in 1988), and has performed over 1 million searches for its visitors.

A more specialized resource exists for researchers in the fields of Machine Learning and Case-Based Reasoning (www.aic.nrl.navy.mil/~aha/people.html). This page contains an alphabetized list of researchers, and their corresponding homepages, who are (or were) involved in machine learning and/or case-based reasoning. Although this resource is not as extensive as the HCI Bibliography, the listing of researchers – and their affiliations – provides a quick overview of who's who in the field.
An online resource for intelligent environments (www.research.microsoft.com/ierp) was created shortly after a symposium on that topic in 1998 [Coen, 1998]. This collection of material on projects, organizations, hardware and events relating to intelligent environments is clearly relevant to ubiquitous computing, but as it has not been maintained, it now represents a snapshot of the state of research in this area circa 1998-2000.

THE UBIQUITOUS COMPUTING RESOURCE PAGE
We are creating the ubiquitous computing resource page (UCRP), an online resource that can be used by researchers, educators, students, the press and the general public to learn more about the field of ubiquitous computing, to allow members of the community to share information easily outside of conferences, workshops and other gatherings, and to do so within a framework that allows easy maintenance via contributions from members of the community. The resource page is available at www.ucrp.org.

The development work is divided into two phases. The aim of phase I, collection development, is to assemble a collection of resources of sufficient size to attract members of the Ubiquitous Computing community. At present, the collection emphasizes breadth of coverage rather than depth, and primarily focuses on people, projects, and organizations. The aim of phase II, community development, is to enable the collection of resources to be self-sustaining through peer review and discussion. At present, people may submit resources to a moderator for inclusion. In the future, we would like to move to an open model with rich opportunities for open-ended collaboration and collection development.

The UCRP has been implemented in ASP.NET using the Community/Portal starter kit¹. The starter kit provides a variety of features for content management and online communities.

Resources are organized into the following categories: People, Projects, Organizations, UbiSites, News, and Events. These categories are now described.

People
Each person listed in the UCRP is represented by their name, affiliation and publications. The person's name is linked to their homepage, the affiliation is listed, and publications are represented by links to other resources (currently the ACM Digital Library, www.acm.org/dl, and the DBLP Bibliography Server, www.informatik.uni-trier.de/~ley/db/index.html).

We have seeded the resource with members of the UbiComp 2003 Conference & Program Committees. The resource page is now open to any person who wishes to add themselves to the listing or submit resources. The site is moderated to prevent abuse.

Projects
Each project listed in the UCRP is labeled with its name, which is linked to the project homepage, and a brief description of the project, copied or adapted from text on the project home page or from a publication associated with the project. The space limitation is intended to ease scrolling through the list of all projects, which are listed in alphabetical order.

Although we have considered adding links to both people and organizations associated with the projects (unidirectional or bidirectional), our initial version does not have such links, in order to simplify the initial organization and subsequent maintenance of the collection.

We have further simplified the ontology represented by this collection by including large-scale initiatives – which incorporate many projects, sometimes from many organizations – in the list of "projects". Thus our notion of projects spans a broad range of endeavors by researchers in ubiquitous computing.

Organizations
Organizations represent another category that has varying levels of granularity (as with projects and initiatives). For example, academic researchers are associated with their university, their school, their department, and any number of centers, groups or labs. Academic researchers with joint appointments have even more options, as do researchers whose appointments span industry and academia.

While representing a hierarchical organization structure in the UCRP that reflects the organizations in the ubiquitous computing community would be ideal, we believe the maintenance of such a structure would be burdensome, and so we are currently implementing a flat organizational structure.

UbiSites
UbiSites is a generic term used for resources that fall outside projects and organizations but have potential relevance to those interested in UbiComp. Example UbiSites include previous resource lists, online publications, and recommended reading lists. UbiSites are open to submission and are moderated.

News
News items can be posted on the UCRP, with a title, brief textual description, and link to the source of the news item. Currently, only administrators may post news items, but we hope to soon allow anyone to submit a proposed news item to a moderator, and eventually to have the system be more self-regulated (for news, and in general).

¹ The Community/Portal starter kit is available at http://asp.net/Default.aspx?tabindex=9&tabid=47
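The flat, moderated collection described above could be modeled minimally as follows. The field names and the moderation workflow are paraphrased from the description in this paper; they are not the schema of the actual ASP.NET site.

```python
from dataclasses import dataclass

# Minimal sketch of the UCRP's flat resource model with moderation.
# Categories are taken from the paper; everything else is illustrative.

CATEGORIES = {"People", "Projects", "Organizations",
              "UbiSites", "News", "Events"}

@dataclass
class Resource:
    category: str
    name: str
    url: str
    description: str = ""
    approved: bool = False  # submissions wait for a moderator

class Collection:
    def __init__(self):
        self.pending = []     # awaiting moderation
        self.resources = []   # approved, publicly listed

    def submit(self, resource):
        if resource.category not in CATEGORIES:
            raise ValueError(f"unknown category: {resource.category}")
        self.pending.append(resource)

    def moderate(self, resource, accept=True):
        self.pending.remove(resource)
        if accept:
            resource.approved = True
            self.resources.append(resource)

    def listing(self, category):
        # Projects (and the other categories) are listed alphabetically.
        return sorted((r for r in self.resources if r.category == category),
                      key=lambda r: r.name.lower())
```

The flat structure keeps maintenance cheap, at the cost of the cross-links between people, projects and organizations that the paper explicitly defers.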

Events
Events of interest are currently associated with a calendar on the UCRP. Events include conferences, workshops and other gatherings of interest to the community. As with news items, events may be submitted to a moderator for consideration.

Other Resources
While we hope the UCRP becomes the primary online resource for the field of ubiquitous computing, there are a number of other resources developed by other people and organizations that will continue to be useful to the community. We will link to these resources from the UCRP (and hope the UCRP, in turn, is linked to from them).

DISCUSSION
Methods used to gather resources include such popular search engines as Google, Teoma, and Vivisimo, as well as online publication databases such as DBLP and the ACM Bibliography. Initial search results have revealed previous attempts to create a UbiComp Webpage bibliography or resource list, but most sites represent a small area or time-frame in the last five years, rather than a living collection.

The ASP.NET framework provides a variety of community features which, for example, allow discussions, newsletter generation and e-mail updates, and enable content to be syndicated with RSS and exposed via Web Services. As we gain experience with the UCRP, we hope to explore such features with the aim of developing a self-sustaining community.

Ubicomp.org, the home page for the annual conference on Ubiquitous Computing, has many features that are intended to foster online community awareness and discussion, through its Community Directory and Discussion Forums sections. We hope to work with the webmaster so that the UCRP and ubicomp.org can complement each other effectively. Ubicomp.org has traditionally provided support for the conference, whereas we hope ucrp.org will provide greater continuity and persistence, and prove to be a valuable resource for a broader population.

One of our most basic challenges has been determining whether a resource is indeed related to ubiquitous computing. With so many definitions and terms – ubiquitous, pervasive, disappearing, sentient, ambient – it is difficult to characterize what is not an example of ubiquitous computing. We hope the UCRP will serve as a forum for discussing the very definition of ubiquitous computing.

ACKNOWLEDGMENTS
The authors gratefully acknowledge the contributions of the following people in helping us formulate and refine the ideas for the ubiquitous computing resource page: Gaetano Borriello, Sunny Consolvo, Anthony LaMarca, James Landay and Mike Perkowitz.

REFERENCES
1. Abowd, Gregory D., and Elizabeth D. Mynatt. 2000. Charting Past, Present and Future Research in Ubiquitous Computing. ACM Transactions on Computer-Human Interaction (Special Issue on HCI Research in the New Millennium), Vol. 7, No. 1, pp. 29-58.
2. Abowd, Gregory D., and Bill N. Schilit (organizers). 1997. Workshop on Ubiquitous Computing: The Impact on Future Interaction Paradigms and HCI Research, at the 1997 ACM Conference on Human Factors in Computing Systems (CHI '97).
3. Abowd, Gregory D., Barry Brumitt and Steven A. Shafer (Eds.). 2001. Proceedings of the International Conference on Ubiquitous Computing (UbiComp 2001), Atlanta, Georgia, September 2001. Lecture Notes in Computer Science, Vol. 2201, Springer-Verlag.
4. Borriello, Gaetano, and Lars Erik Holmquist (Eds.). 2002. Proceedings of the International Conference on Ubiquitous Computing (UbiComp 2002), Gothenburg, Sweden, October 2002. Lecture Notes in Computer Science, Vol. 2498, Springer-Verlag.
5. Coen, Michael (Ed.). 1998. AAAI Spring Symposium on Intelligent Environments. AAAI Tech Report SS-98-02.
6. Gellersen, Hans W. (Ed.). 1999. Handheld and Ubiquitous Computing: Proceedings of the First International Symposium (HUC '99), Karlsruhe, Germany, September 1999. Lecture Notes in Computer Science, Vol. 1707, Springer-Verlag.
7. Perlman, Gary. 1999. The HCI Bibliography: Ten Years Old, But What's It Done for Me Lately? ACM interactions, Vol. 6, No. 2, pp. 32-35.
8. Weiser, Mark. 1991. The Computer for the 21st Century. Scientific American, 265(3):94-104.
9. Weiser, Mark, and John Seely Brown. 1997. The Coming Age of Calm Technology. In Peter J. Denning & Robert M. Metcalfe (Eds.), Beyond Calculation: The Next Fifty Years of Computing. Springer-Verlag, pp. 75-85.

Proactive Displays &
The Experience UbiComp Project
Joseph F. McCarthy, David H. Nguyen, Al Mamunur Rashid, Suzanne Soroczak
Intel Research
1100 NE 45th Street, 6th Floor
Seattle, WA 98105 USA
{mccarthy,dnguyen,arashid,ssoroczak}@intel-research.net

ABSTRACT
The proliferation of sensing and display technologies creates opportunities for proactive displays that can sense and respond appropriately to the people and activities taking place in their vicinity. A conference provides an ideal context in which to explore the use of proactive displays, as attendees come together for the purpose of mutual revelation, eager both to learn more about others and what others are doing, and to tell others about themselves and what they are doing. We will deploy a suite of proactive display applications that can aid and abet this desire for mutual revelation in the context of a paper presentation session, a demonstration and poster session, and informal break areas at the conference.

Keywords
Ubiquitous computing, proactive computing, human-computer interaction, computer-supported cooperative work, social computing, community computing, RFID, public displays, ambient displays.

INTRODUCTION
Computer displays are proliferating, as the technology advances and the costs decrease, showing up in an increasing variety of physical contexts, such as airports and train stations, retail stores and even billboards along the roads [Barrows, 2002]. At the same time, sensing technologies are also proliferating, from sophisticated multi-purpose sensors [Kahn, et al., 1999; Gellersen, et al., 2003] to rather simple radio frequency identification (RFID) tags and associated readers. We have begun to explore how these two trends may converge to create opportunities for proactive displays that can sense their context – nearby objects, people and/or activities – and respond with appropriate content.

Any proactive display application must address a number of research challenges:
• What contexts are most amenable to the successful deployment of a proactive display?
• What kinds of content are best suited to the context(s) in which the displays are situated?
• What levels of interaction are most appropriate to the content and context of use?

People are increasingly concerned about the privacy of their digital information, and their concerns are being magnified by the proliferation of sensing technologies [cf. Chai & Shim, 2003]. Thus, proactive display applications must represent a compelling value proposition in order to succeed, providing enough benefit to overcome concerns about the use of digital information in physical contexts beyond the desktop. We believe that a conference provides a setting in which such value propositions can be articulated and demonstrated.

Conference attendees typically share the goal of mutual revelation: seeking to learn more about others and their work, as well as being open to opportunities to tell others about themselves and their own work. Attendees also routinely reveal some information about themselves – such as their names and the institutions with which they are affiliated – through conference registration forms before the conference and badges they wear at the conference. We seek to facilitate the process of mutual revelation using technology, while minimizing disruption or deviation from common practices of conference attendees.

The International Conference on Ubiquitous Computing is particularly well-suited for a deployment of proactive display applications, as attendees' familiarity with sensing technologies is likely to reduce the fear of the unknown and increase their openness to participating in experiments with this kind of technology.

COMMON INFRASTRUCTURE
All of the applications we will deploy at UbiComp 2003 share a common infrastructure.

RFID Tags & Readers
Conference attendees will receive, as part of the registration packets they receive onsite, passive RFID tags that they can insert into their regular conference badges. Printed information about the data associated with the tag, the applications deployed at the conference, and the privacy policy regarding any information collected throughout the conference will be included with the packets (and available via all online registration pages). Each proactive display installation will have at least one RFID tag reader associated with it, to allow it to sense the tags worn by the conference attendees nearby.
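The glue between a reader and a display can be sketched as a simple event handler: a tag sighting is resolved against the central profile server, the opt-in flag is checked, and the resulting content is pushed to the screen. The tag IDs, profile fields and `show` interface below are invented for illustration, not the deployed system's API.

```python
# Hypothetical glue between an RFID tag sighting and a proactive
# display. PROFILES stands in for the central profile server.

PROFILES = {
    "tag-0042": {"name": "A. Attendee",
                 "affiliation": "Example University",
                 "opt_in": True},
}

def on_tag_sighted(tag_id, display):
    """Resolve a sighted tag to a profile and show it, if permitted."""
    profile = PROFILES.get(tag_id)
    # Explicit opt-in is checked before any information is displayed.
    if profile is None or not profile["opt_in"]:
        return False
    display.show(f"{profile['name']} ({profile['affiliation']})")
    return True
```

Keeping profiles on one server, as the paper describes, means an edit made at a registration kiosk is visible to every such handler on the next sighting.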

the Alien Technology 915 MHz readers and tags. We may Since conference attendees ought to be prepared to state
make provision for the inclusion of other sensing their name and affiliations, verbally, anytime they rise to
technology and/or communication protocols, such as ask a question during a paper (or panel) presentation, we
Bluetooth [cf. Want, et al., 2002]. propose to augment this common practice by using a
proactive display as a visual aid. An RFID reader at the
Application Clients & Servers
microphone stand will identify the RFID tag worn by the
The RFID reader for each application will be connected to
person approaching the microphone, and communicate this
a local computer, which will run the application and access
to the AutoSpeakerID application which will, in turn,
a server containing both profile information about the
display a picture of the person, along with his or her name
attendees as well as other sources of content that might be
and affiliation, on a display near the front of the room.
shown on the proactive display. The profiles will reside on
a central server so that any updates made during the Those who do not wish to have their profile information
conference can be propagated immediately to the different displayed when they approach a microphone stand can
client applications. Each application client will provide the either opt out of participating at registration time or at any
capability for an administrator to stop the application, in point during the conference using a kiosk in the registration
case of unexpected and unwanted behavior. area, or may simply either remove the RFID tag from their
badge or leave their badge at their seat when they go to ask
Profile Creation & Maintenance
the question. They may also, of course, choose to “game”
Conference attendees will be given the option to opt-in to
the system by wearing another person’s tag.
any / all of the proactive display applications by creating
profiles during the registration process. No information We are, with this application and the others, very eager to
will be used in proactive display applications unless an learn whether, how and why people participate in the
attendee provides explicit consent to use that information. system.
Attendees will be also be given the option of creating or Ticket2Talk
modifying their profiles during the conference at a A paper / panel presentation session is a rather formal
computer adjacent to the conference registration table, and context in which to deploy a proactive display. We also
at one or more kiosks in the Demonstration & Posters area have applications we plan to deploy in more informal
of the conference. contexts, such as a break area or a demo or poster session.
PROACTIVE DISPLAY APPLICATIONS FOR UBICOMP One such application is Ticket2Talk, which will run on a
We plan to deploy three applications at the conference: large plasma display – in a portrait mode orientation [cf.
AutoSpeakerID, which displays the picture, name and Churchill, et al., 2003] – and cycle through visual content
affiliation of a person asking a question at the microphone explicitly contributed by attendees that represent “tickets to
during a question & answer period following a paper or talk”: some visual marker for a topic about which the
panel presentation; Ticket2Talk, which displays explicitly attendee would be happy to talk with someone. This may
specified content (a “ticket to talk” [Sacks, 1992]) for any single person as he or she approaches a proactive display in the coffee break area; and Neighborhood Window, which displays a visualization of implicit or “discovered” content (from explicitly provided homepage information) for a group of people who are in the neighborhood of a proactive display in an informal, open area at the conference. These applications are described in more detail in the sections below.

AutoSpeakerID
After a paper presentation during UbiComp (and other conferences), people often approach a microphone stand in the audience to ask questions about the work described in the presentation. Everyone in the audience knows who the presenter is, but people don’t always know much about the person asking the question. A diligent session chair may remind the questioner to state his or her name & affiliation, but this is often not the case, and even when encouraged to identify themselves, questioners’ names or affiliations may not be heard clearly by others in the audience (especially if the questioner is hurrying to get to his or her question).

…be a research poster the attendee is presenting at this, or another, conference, the cover of a recently published book, a picture of a favorite pet, vacation spot or piece of art.

The ticket to talk will be displayed in the central region of the screen, with a picture and name of the attendee who posted the ticket to talk appearing at the top, and a collection of thumbnail pictures & names of other people whose RFID tags have been detected near the display appearing in a row at the bottom. Each image will be selected for display based on a priority determined by both the recency of the attendee’s badge being detected (higher priority for more recently sighted badges) and the recency of the attendee’s ticket having been shown (higher priority for less recently displayed tickets). Images will be displayed for a preset interval, probably in the range of 5 to 10 seconds. There will also be a time limit on the duration for which a ticket might be in the queue of potential content to display: although we want to focus on content for those currently gathered nearby, we also might maintain a small amount of “history” about people who have passed by recently.
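The ticket-selection policy described above (recently sighted badges raised, recently shown tickets lowered, a preset display interval, and a short history window) can be sketched as follows. The class names, weights and the specific priority formula are illustrative assumptions, not the authors’ implementation:

```python
import time

# Sketch of the Ticket2Talk selection policy; all names and weights are
# assumptions for illustration, not the deployed system's code.

DISPLAY_INTERVAL = 8.0   # seconds each ticket stays on screen (paper: 5-10 s)
HISTORY_LIMIT = 120.0    # drop tickets for people not sighted recently

class Ticket:
    def __init__(self, attendee):
        self.attendee = attendee
        self.last_sighted = time.time()   # RFID badge last detected near display
        self.last_shown = 0.0             # ticket last displayed on screen

def priority(t, now):
    # Higher for more recently sighted badges,
    # higher for less recently displayed tickets.
    recency_of_sighting = 1.0 / (1.0 + (now - t.last_sighted))
    staleness_on_screen = now - t.last_shown
    return recency_of_sighting * staleness_on_screen

def next_ticket(queue, now=None):
    # A display loop would call this every DISPLAY_INTERVAL seconds.
    now = time.time() if now is None else now
    candidates = [t for t in queue if now - t.last_sighted <= HISTORY_LIMIT]
    if not candidates:
        return None
    best = max(candidates, key=lambda t: priority(t, now))
    best.last_shown = now
    return best
```

A display loop would simply call `next_ticket` once per interval; anyone whose badge has not been sighted within the history window drops out of rotation automatically.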

We will deploy this proactive display next to a table used for a coffee urn during a break. The serial nature of the movement of people through the line will correspond to the sequencing of tickets, providing each person who comes through the line – who has chosen to participate – an opportunity to both learn more about those nearby in the line and allow those same people to learn more about him or her.

The goal of this application (and Neighborhood Window) is to provide opportunities for conversation for attendees who do not already know each other. However, we also want to ensure plausible ignoreability, i.e., no one should feel compelled to strike up conversation with a fellow attendee who happens to be nearby. By cycling through content, one can simply notice the stream of tickets, without acting on any particular one. Even if the opportunity for direct conversation is not taken, we expect that the displays will contribute to raising the level of awareness about other attendees’ interests – helping people learn things about their colleagues that they may later choose to act on (e.g., at a demonstration or poster session, or the conference reception).

Neighborhood Window
Another context in which we plan to explore the utility of proactive displays in a conference setting is the demonstration and poster session. Attendees often mill about such a session, forming ad-hoc groups as they cluster around a demonstration or poster of interest. The Neighborhood Window application will display a visualization of interests of those in its vicinity, based on the collection of words found on their respective homepages.

Although we could simply run the Ticket2Talk application on a display in the demonstration and poster session, we want to take advantage of this context to explore other dimensions of proactive display applications (and people’s experience with them). Neighborhood Window utilizes implicit or latent profile information that can be attained through attendees’ explicit profiles, and generates visualizations of this content based on the group that is nearby.

In addition to offering attendees the capability of providing their pictures, names, affiliations and/or tickets to talk, we also offer them the option of providing a link to their homepages in the registration process. An offline application then analyzes the content of their homepages, collecting words and phrases, and constructing a profile vector that can be used to select content that is likely to represent interests shared by those near the display, but not widely shared among the more general population.

For example, two UbiComp attendees approaching the urn may have references to “motes” or “ambient displays” on their homepages, and these phrases may be highlighted in the visualization that depicts people’s names, associated words and phrases, and the links between them. Our goal is to provide opportunities for attendees to start topical conversations, or at least become more aware of the interests they share with others in the community.

EVALUATION
Our goal is to introduce technology to bridge the gap between people’s digital profiles and their presence in the physical world to enhance the conference experience for all. We are assuming that the applications we have designed will have a positive impact, but we will be carefully assessing the experience at the conference, to see how these applications impact attendees’ experience – and why.

We want to allow others to learn from our experience, so that the community as a whole may be able to better design future proactive display applications, and other types of applications that seek to enhance the experience of groups of people using information from digital profiles.

Our plan is to collect data using both qualitative and quantitative methods. Observations and on-site interviews will be conducted throughout the conference. This data will then be coded and evaluated for trends and themes in interaction. A follow-up questionnaire will also be conducted to gauge the impact of the proactive displays on the attendees’ overall conference experience, and to identify areas for further research and development.

RELATED WORK
Previous work [Woodruff, et al., 2001] has explored the use of technologies to encourage conversations among small groups during museum visits; we are seeking to broaden the context and scope of people who might engage in conversation, and to use situated, peripheral displays rather than handheld devices. Other researchers have explored the use of ambient displays [Mankoff, et al., 2003; Weiser & Brown, 1997] and other forms of public displays [O’Hara, et al., 2003]. We seek to extend this work through the use of sensing technologies (in this case, RFID) that enable public displays to be more proactive – responding to the people nearby, as well as other elements of the local context.

GROUPCAST [McCarthy, et al., 2001] is an earlier application that runs on a large display that responds to the people nearby. However, GROUPCAST ran in a corporate environment where all the passersby were members of the same company (indeed, most were members of the same research group within the organization), and had profiles for approximately 20 people. We seek to extend this work by deploying applications in a less restricted context, with a much larger number of people from multiple organizations.

There has also been some other, promising, research into the use of technology to enhance the conference experience for attendees. The Intellibadge system [Kindratenko, et al., 2003] included a suite of visualization applications based on aggregate information collected through active radio frequency (RF) tags worn by approximately 20% of the attendees of the SC 2002 conference. As an example, one application showed the distribution of interests among the people attending each parallel session (e.g., the number of compiler people vs. middleware people, etc.). Our work explores applications that directly react to the small number of people in the vicinity of the displays, rather than showing more general, aggregate data regarding the overall conference population.

nTAGs (http://www.ntag.com; see also Borovoy, et al. [1998]) are devices that include infrared and radio frequency communication capabilities, as well as a small display and buttons for interaction. These devices have also been deployed at a conference, with a similar goal as our work (creating conversation opportunities and raising mutual awareness among the people attending the conference). We believe that the use of large, situated displays that react to RFID tags embedded in ordinary conference badges worn by attendees fits more closely into existing practices at conferences. Also, showing content that may spark conversations on a peripheral display leaves more room for plausible ignoreability – it is easier to glance at (and ignore) a display on the periphery than to ignore content shown on a display worn by a person in front of you – and thus will engender different types of interactions (and reactions) among the conference attendees.

Yet another approach to enhancing the conference experience is being explored by the SpotMe Conference Navigator (http://www.spotme.ch), a handheld device that people can use to detect other devices used by attendees with similar interests. The profiles used by SpotMe contain many of the same elements as the profiles we have designed, but as with the nTags, we believe that using a handheld device is less proactive, and deviates further from existing conference practices, than the use of displays that may show content on the periphery of attention.

One of the reasons we are planning on extensive evaluations during and after the conference is to facilitate our ability to compare experiences with Proactive Displays with experiences with other technologies and approaches at other conferences.

CONCLUSION
We have designed a suite of proactive display applications intended to enhance the conference experience for attendees by providing conversation opportunities and fostering greater awareness among the community. UbiComp 2003, as a community that is exploring the use and implications of new display and sensing technologies, will provide an ideal venue in which to deploy these applications, assess their impact, and further the research agenda in this area.

ACKNOWLEDGMENTS
The authors gratefully acknowledge the contributions of the following people in helping us formulate and refine the ideas for the proactive display applications we propose to deploy at UbiComp 2003: Gaetano Borriello, Sunny Consolvo, Anind Dey, Anthony LaMarca, Sean Lanksbury, David McDonald, Eric Paulos, Trevor Pering and Bill Schilit.

REFERENCES
1. Barrows, Matthew. 2002. The Signs have Ears: Two Billboards will Scan Car Radios and Tailor Pitches to Match Listening Preferences. Sacramento Bee, November 24, 2002.
2. Borovoy, Richard, Fred Martin, Sunil Vemuri, Mitchel Resnick, Brian Silverman and Chris Hancock. 1998. Meme Tags and Community Mirrors: Moving from Conferences to Collaboration. In Proc. of the ACM 1998 Conf. on Computer Supported Cooperative Work (CSCW ’98), pp. 159-168.
3. Chai, Winston, and Richard Shim. 2003. Benetton Takes Stock of Chip Plan. CNET (news.com), April 7, 2003.
4. Churchill, Elizabeth F., Les Nelson and Laurent Denoue. 2003. Multimedia Fliers: Information Sharing with Digital Community Bulletin Boards. To appear in Proc. of the Int’l. Conf. on Communities and Technologies (C&T 2003).
5. Gellersen, Hans-W., Albrecht Schmidt and Michael Beigl. 2003 (to appear). Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts. Mobile Networks and Applications.
6. Kahn, J. M., R. H. Katz and K. S. J. Pister. 1999. Next Century Challenges: Mobile Networking for “Smart Dust”. In Proc. of the Fifth Annual ACM/IEEE Int’l. Conf. on Mobile Computing and Networking, pp. 271-278.
7. Kindratenko, Volodymyr, Donna Cox and David Pointer. 2003. IntelliBadge: Towards Providing Location-Aware Value-Added Services at Academic Conferences. To appear in Proc. of the Fifth Int’l. Conf. on Ubiquitous Computing (UbiComp 2003).
8. Mankoff, Jennifer, Anind K. Dey, Gary Hsieh, Julie Kientz, Scott Lederer and Morgan Ames. 2003. Heuristic Evaluation of Ambient Displays. In Proc. of the 2003 ACM Conf. on Human Factors in Computing Systems (CHI 2003), pp. 169-176.
9. McCarthy, Joseph F., Tony J. Costa and Edy S. Liongosari. 2001. UNICAST, OUTCAST & GROUPCAST: Three Steps toward Ubiquitous Peripheral Displays. In Proc. of the Int’l. Conf. on Ubiquitous Computing (UbiComp 2001), Lecture Notes in Computer Science, Vol. 2201, Springer-Verlag, pp. 332-345.
10. O’Hara, Kenton, Mark Perry, Elizabeth Churchill and Daniel Russell. 2003 (to appear). Public and Situated Displays: Social and Interactional Aspects of Shared Display Technologies. Kluwer Academic Publishers.
11. Sacks, Harvey. 1992. Lectures on Conversation. Basil Blackwell, Oxford.
12. Want, Roy, Trevor Pering, Gunner Danneels, Muthu Kumar, Murali Sundar, and John Light. 2002. The Personal Server: Changing the Way We Think About Ubiquitous Computing. In Proc. of UbiComp 2002: 4th Int’l. Conf. on Ubiquitous Computing, Springer LNCS 2498, pp. 194-209.
13. Weiser, Mark, and John Seely Brown. 1997. The Coming Age of Calm Technology. In Peter J. Denning & Robert M. Metcalfe (Eds), Beyond Calculation: The Next Fifty Years of Computing. Springer-Verlag, pp. 75-85.
14. Woodruff, Allison, Margaret H. Szymanski, Paul M. Aoki and Amy Hurst. 2001. The Conversational Role of Electronic Guidebooks. In Proc. of the Int’l. Conf. on Ubiquitous Computing (UbiComp 2001), Lecture Notes in Computer Science, Vol. 2201, Springer-Verlag, pp. 332-345.

Networking Pets and People

Dan Mikesell
Interactive Telecommunications Program
254 E. 7th St. Apt. 24
New York, NY 10009 USA
+1 212 673 0696
[email protected]

ABSTRACT
I believe that ubiquitous computing can offer more than just an interface for humans and digital information. I propose that we can now introduce the other beings that share our houses to our technology. In this paper, I will describe a mechanism for networking an interactive cat toy to be accessed from anywhere on the internet. The World-Wide Web is primarily perceived as information space but can also be considered as activity space. With this device a pet can be interacted with from anywhere on the globe through a browser window.

Keywords
network, cat toy, pet, interactive, remote, shelter, feeding

INTRODUCTION
When we think of ubiquitous computing we often think of humans linking to computers, humans to humans and/or computers to computers. What about the millions of other living things sharing our living space that we call pets? This project is an attempt to provide a network node for pets and pet owners to interact over the internet. This may seem like a bizarre proposition, but ask any pet lover if they wish they could play with their pet from work or check up on them while traveling. A human being’s devotion to their pet often borders on a parent-child relationship. Besides allowing pet owners to play with pets from home, I also see the network being deployed in animal shelters as an online marketing tool to get publicity for animals. Hopefully, by luring people to play with dispossessed animals over the internet, animal-human connections will be made and pet adoption rates will increase.

Consider that the toy will have to be designed for optimal animal interaction. It’s not just a toy but an animal interface center, a pet equivalent to a monitor and keyboard. Which brings us to another point: if all the nodes are linked for two-way communication on the network, then stay-at-home cats could conceivably play with each other via the network.

Humanitarian Considerations
It’s fun to think of playing with a pet while traveling, and keeping an eye on your cat while you are away is a longing for most dedicated cat owners. The parental guilt felt when leaving Fluffy at home is considerable, but relieving the stress of a pet owner is only a fraction of the intended purpose for this device. The initial goal of the Networked Cat Toy is to provide a conduit for interacting with animals stuck in shelters. The device is not meant to be a surrogate for actual human contact but rather a first-contact mechanism. I can imagine people talking about the adorable kitten they played with on the internet the other day and perhaps developing an initial attachment that way that would lead to adoption, volunteering or a web-based cash donation.

What it does
This prototype allows the user to play with a house- or shelter-bound animal while at school or at work. The live webcam feed provides visual feedback while playing with the cat, and the feeder can also be used to feed the cat while the owner is on vacation.

System
The system (fig. 1) is centered around an embedded network device that takes messages from internet browsers and transmits them to events in the real world. Clicking on a link in the web page can make the toy move or the feeder feed. The web cam simply sends a live video image so the user can see that the cat is being fed or playing with the toy.

The prototype as it is now requires a microprocessor and embedded network device, a PC and a webcam. A microcontroller is basically a very small, simple computer that can be programmed to control simple tasks. An embedded network device, depending on manufacturer, allows a microcontroller to be accessed and controlled over a broadband internet connection, like DSL or cable. A PC is used in this prototype solely as a means to network a video camera. In future versions the camera and microcontroller will be integrated into the micro web server.
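The browser-to-microcontroller path described in the System section can be sketched with Python’s standard `http.server`. The URL paths, command bytes and `send_to_microcontroller()` below are hypothetical stand-ins for whatever protocol the embedded network device actually speaks:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical command bytes; the real protocol depends on the
# manufacturer of the embedded network device.
COMMANDS = {
    "/toy": b"T",   # pulse the servo that moves the cat toy
    "/feed": b"F",  # run the feeder mechanism
}

sent_commands = []  # stands in for the serial/TCP link to the microcontroller

def send_to_microcontroller(cmd):
    # A real bridge would write cmd to the embedded network device here;
    # this sketch just records what would be sent.
    sent_commands.append(cmd)

class ToyHandler(BaseHTTPRequestHandler):
    # Each link on the web page maps to one real-world action.
    def do_GET(self):
        cmd = COMMANDS.get(self.path)
        if cmd is None:
            self.send_error(404, "unknown action")
            return
        send_to_microcontroller(cmd)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

# To serve the toy on the local network:
#   HTTPServer(("", 8080), ToyHandler).serve_forever()
```

Clicking the “play” or “feed” link in the browser issues a GET request, and the handler translates it into a single command for the microcontroller; the webcam stream runs independently alongside.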

Technology Prototype (Fig. 2)

Physical Prototype (Fig. 2)

Physical Design
The device (fig. 2) is designed with a flexible camera mounting (A) which allows the user to configure the view of the feeding mechanism (C) or toy (B). The user moves the cat toy through a servo motor (D) mounted on a moveable platform attached to a central post. A network device functioning as a micro webserver and interfaced to a microcontroller is contained in the weighted base (E).

ACKNOWLEDGMENTS
Special thanks to Tom Igoe, Dan O’Sullivan, Red Burns and the entire NYU Interactive Telecommunications Program.

REFERENCES
1. Wilson, Stephen. Information Arts: Intersections of Art, Science and Technology. MIT Press, Cambridge, Massachusetts, 2002.
2. Wallich, Paul. Popular Science, December 2002.

Responsive Doors
Greg Niemeyer
Dept. of Art Practice
345 Kroeber Hall
Berkeley, CA 94720 USA
+1 510 642 5376
[email protected]

ABSTRACT
Responsive Doors is an ambient computer display system embedded in common doors. It is designed to optimize the behavior of people in relation to air quality. The project is situated at boundaries between climates, specifically doors. There, air quality information is well contextualized because users are about to enter a place with a potentially better or worse climate. Applications for the door display include places which generate hazardous materials or pollution, but also places with poor air quality due to high human occupancy and poor ventilation. The Responsive Door displays massive sets of CO2 data collected on both sides of a door. The display itself is embedded in the door. On the display, color-coded (red and blue) concentric wedges grow and shrink from 0 to 180 degrees to reflect CO2 levels of each correspondingly color-coded (red and blue) side of the door. The constant changes of the CO2 ratio drive animations which rotate the wedges around the center of the diagram constantly like arms of a scale. The wider wedges (higher CO2) rotate to the bottom half of the display and narrower wedges (lower CO2) rotate to the top half of the diagram. The resulting motions draw attention to the diagram only when a change in air quality occurs.

Keywords
Media Art, Visualization, Information Display, Air Quality, Contextual Computing.

INTRODUCTION
My original research project proposed to investigate two key questions:

• Are computers limited to delivering quantitative information, or can they deliver values such as ambiance, community, poetry, reflection, luxury and comfort?

• Which types of interfaces and which types of content can motivate the distribution of digital content beyond the gray box of the desktop computer?

Methods
I approached these questions in three creative research projects, PING (2000-2001), Oxygen Flute (2001-2002), and Responsive Doors (2001-2003). These projects were based on previous dynamic art projects, especially “Homage to New York”, 1969, by Jean Tinguely and Billy Kluver, and “Mori”, 2000, by Ken Goldberg et al.

Coming from art practice, my research method is to build an instance of an idea, to present that instance to visitors in a public context such as a museum, and to collect feedback from visitors and from personal observation. This feedback informs the questions and ideas which lead to the next idea. I conducted the research “genetically”, in the sense that each project created a subsequent child project, which answered my research question more successfully than its parent. While this method lacks scientific evaluation rigor, it affords much flexibility and responsiveness: it allows for a reframing of initial research questions, and also for the integration of positive results and useful observations on subjects we did not intend to study.

Responsive Doors is an ambient computer display system designed to optimize the behavior of people in relation to air quality. To inspire behavior optimization without being didactic, we developed a very light, almost playful system which gently organizes people in two team situations (inside the door and outside the door) and measures and compares their air consumption. The consumption ratio is displayed in an animated graph. The graph is calm, but animates vividly when better air quality switches from one side of the door to the other. This animation dramatizes the ambient display by emphasizing the ratio of two comparable sources of data rather than displaying the progression of only one source of data.

The Responsive Door consists of five elements: a door to a room; a transparent, passive-light LCD display¹ mounted in the door like a “data window”; a pair of CO2 sensors attached inside and outside the door; and a networked CPU. The CO2 sensors continuously transmit CO2 data from inside and outside the room to the CPU. The CPU collects this data and averages it over several time spans, such as 1 second, 30 seconds, 1 minute, 5 minutes, 1 hour, 1 day, 1 week and 1 year. The average data generates an innovative graphic which displays the CO2 values inside and outside as segments of a ring. The display is animated; changes in the levels of CO2 generate an action surplus, an animation which dramatizes the change of levels and displays which climate, the indoor or the outdoor climate, features better air.

¹ This display was developed specifically for this project by Greg Niemeyer. We described it in earlier grant papers as a digital stained glass window. This display is also a part of the Responsive Door Patent Application.

The subtle drama of the graphics is designed to reflect the subtle, but increasingly vital question of air quality. In office settings, air quality is a significant factor in productivity, and the Responsive Door device can inform people about this question in an ambient, intuitive and aesthetic fashion. The Responsive Door device could also improve monitoring of air quality conditions in heavy industry settings, where CO2 deprivation or other air quality factors can lead to fatigue and fatal accidents. Even in a home setting, a front door with a Responsive Door device could enhance behavior: the device could inform occupants of the need to let in fresh air, or it could inspire them to bike to work on a day where air quality is poor outdoors.

Discussion
The relevant advantage of the Responsive Door device over normal air quality meters is the dramatized comparison between two sets of data. Traditional sensors answer the question: what is the air quality in parts per million of CO2? This device answers the question: which side of the door has better air, or will “win the air quality battle”? That question is much more accessible for most audiences, and also leads to more effective modifications of behaviors. Nobody wants to be on the losing side of the battle.

The basic concept is to use game and entertainment principles as well as narrative strategies to engage viewers in the consideration of fairly dry data. The main purpose of narratives is to make information interesting, engaging and memorable, but most narratives deal with static information. Information technology can make stories out of real-time information in real time. Thereby, our highly developed sense of understanding stories can be invested in understanding difficult and abstract sets of data very directly.

This observation requires further studies, as it provides a connection between traditional media, such as television, and information technology. In combining the two, dramatic renderings of real-time data could become the news of the future, and new media technology would be much more invested in the authoring of media content. In games, and particularly in pervasive games, players can generate narratives for information they acquire as the game unfolds. The narrative itself is an emerging history which makes information relevant and memorable. Immediate applications of this concept are also conceivable for financial markets, where large sets of data often confuse observers, and the dramatic representation of such sets of data in real time would provide observers with rapid cognition and therefore competitive advantages.

Conclusion
This project has not been extensively tested at this time. First observations confirm that information can generate non-quantitative values such as ambiance, community, poetry, reflection, luxury and comfort if the interfaces for input and output are carefully tailored to non-quantitative interpretations of the information in question.

The question of developing such interfaces touches on three traditional areas of the arts in addition to the technology: Drama, Architecture, and the Visual Arts.

Qualities of such interfaces include the action surplus, coupling, and responsiveness. Action surplus is the amplification a system provides for a user action. Action surplus can exceed, match or disappoint the user. Coupling is the presentation of intangible information with several tangible means, such as image and sound. Coupling can be too explicit and feel didactic, too vague and feel confusing, or “just right” and feel self-evident (no manual or wall label needed). Responsiveness is the speed of the interaction between a human body and a system. Responsiveness can be too slow: then it makes users think the system is dull. It can be too fast: then users feel the system is too hard to control. It can be “just right” and in sync with human response rates and other patterns relevant to the human sensory system. Then, users feel the system is an extension of their own bodies, or can even feel the system as being “alive”.

Resulting “just right” interfaces are specific to the types of information provided. One general problem with information technology is that the standard interface is not tailored to specific types of content. The resulting interface is not particularly well matched with any type of content; it is usually bland. Often, interfaces are also not thoughtfully contextualized within the environment of the interface. Artists and designers can help solve this problem by matching interfaces to content more deliberately, with greater aesthetic variation and with more consideration for the changing relation between a device, its interface and its context. For example, windows for different programs on a PC could look distinct; they need not all have the same fonts, frames and borders. Windows could also look different depending on the location of a computer: why does an interface look the same in Toronto as it looks in Tijuana?

The Responsive Door is a possible candidate for a successful match between content and interface. A door regulates indoor and outdoor relations. It is therefore a good site to place a device (coupling) which describes air quality inside and outside. The display itself is well matched to the door, with blue elements describing the blue side of the door and red elements describing the red side of the door. The responsiveness of the system to changes matches that of the expected dissipation of air in a room. Neither irritating nor dull, the display elegantly draws the user’s attention only if there is a dramatic change in air quality.

In conclusion, I think that the viability of using information technology for non-quantitative purposes depends on the degree to which the interface connects humans to information on human terms.

ACKNOWLEDGMENTS
I thank Intel Corp. and Dana Plautz for supporting this media art project, and I thank the following collaborators for sharing their expertise in realizing Responsive Doors: Julie Daley, Ben Dean, Richard Mortimer Humphrey, Scott Snibbe, and Preetam Mukherjee.

REFERENCES
1. Madhu C. Reddy and Paul Dourish. 2002. A Finger on the Pulse: Temporal Rhythms and Information Seeking in Medical Work. In Proceedings of the ACM Conference on Computer-Supported Cooperative Work CSCW 2002 (New Orleans, LA). New York: ACM.
2. Goldberg, Ken, Packer, Randall, Matusik, Wojciech and Kuhn, Gregory. Mori: An Internet-Based Earthwork. Leonardo Journal, 35(3), Spring 2002.
3. Alexander, Christopher. 1977. A Pattern Language. Oxford University Press, Oxford, UK.
4. Shanken, Edward. 1998. Gemini Rising, Moon in Apollo: Attitudes on the Relationship Between Art and Technology in the US, 1966-71. Anders Nereim, ed., ISEA97: Proceedings of the Eighth International Symposium on Electronic Art. Chicago: ISEA97, 1998.
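The display logic described in the Responsive Doors paper above (sliding-window CO2 averages, and two wedges of up to 180 degrees whose wider, higher-CO2 side the animation swings to the bottom) can be sketched as follows. The proportional mapping and the class and function names are assumptions for illustration, not the installed system’s code:

```python
from collections import deque

class RunningAverage:
    """Average CO2 samples (timestamp, ppm) over one sliding time span,
    as the CPU does for spans from 1 second up to 1 year."""
    def __init__(self, span_seconds):
        self.span = span_seconds
        self.samples = deque()

    def add(self, t, ppm):
        self.samples.append((t, ppm))
        # Drop samples that have aged out of this span's window.
        while self.samples and t - self.samples[0][0] > self.span:
            self.samples.popleft()

    def value(self):
        if not self.samples:
            return 0.0
        return sum(ppm for _, ppm in self.samples) / len(self.samples)

def wedge_angles(co2_inside, co2_outside):
    """Map the two averaged CO2 readings to wedge widths in degrees.

    Each wedge can grow from 0 to 180 degrees; the wider wedge (worse
    air) is the one the animation rotates to the bottom of the ring.
    """
    total = co2_inside + co2_outside
    if total == 0:
        return 90.0, 90.0, "balanced"
    inside = 180.0 * co2_inside / total
    outside = 180.0 - inside
    if inside > outside:
        bottom = "inside"
    elif outside > inside:
        bottom = "outside"
    else:
        bottom = "balanced"
    return inside, outside, bottom
```

The animation the paper describes would then interpolate the ring toward these target angles, so the display only moves (and draws attention) when the inside/outside ratio actually changes.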

Squeeze Me: A Portable Biofeedback Device for Children

Amy Parness, Ed Guttman and Christine Brumback

New York University, Interactive Telecommunications Program
250 Elizabeth St #4, NY, NY 10012 USA
+1 646 220 7498
[email protected]

ABSTRACT learning applications, which seemed to act mainly as


Squeeze Me is a squishy portable biofeedback device for distractions. We saw the opportunity to empower these
children ages 8 to 11 that are experiencing stress related to children, and wanted to give them a personal device that
a medical condition or treatment. It takes physiological would aid them in recognizing their stresses and working to
input from hand temperature and pressure (sensors), and relieve them.
outputs a light pattern that helps the child initiate a relaxing After some brief research into non-invasive stress relief
and entertaining activity. By helping a child understand options for children, we chose to pursue a biofeedback
and manage his or her medical related stress symptoms, experience that would offer both information and a self-
Squeeze Me empowers a child who is faced with a care option to the patients.
potentially intimidating experience. Squeeze Me can be
used in a range of environments (home, school and the Current State of Bio-Feedback
hospital) because it is portable. The human organism maintains itself through homeostatic
mechanisms. The major means utilized to maintain balance
Keywords among these mechanisms is feedback control [1].
Children, biofeedback, physiology, emotions, personal Biofeedback is a process in which a subject receives
care, stress information about his or her physiological state. This
PROJECT STATEMENT
The idea for Squeeze Me originated during a course using technology to help patients in the Montefiore Children's Hospital in the Bronx, New York. Montefiore is a high-tech children's hospital that aims to provide patients with high-quality medical care in an environment that encourages learning and exploration. There are no traditional waiting rooms, no infamous hospital smells, and no sparse, sterile-looking areas at Montefiore. Each floor is designed and decorated for specific age groups, and each patient bed is equipped with a plasma screen and wireless keyboard so that patients may browse the web and watch TV and movies on demand.

Our team became interested in developing a playful and therapeutic experience for children who are chronically or temporarily ill. As we observed the hospital environment and heard about the typical experiences of patients, we were told about the stresses involved with hospital stays and treatments. For example, the hospital serves a number of dialysis patients who visit on a regular basis and often have long, boring waits during the course of treatment. Despite Montefiore's unusually humane approach to the hospital experience, such waiting periods were unavoidable, and the wait time and boredom only add to the children's existing anxiety and stress arising from their condition. Existing forms of entertainment and play for the children were limited to TV (Jerry Springer being the most popular show), toys and a few online games.

Biofeedback information can be used by the subject to understand, monitor and manage his or her emotional and physical states. The most common readings for biofeedback are EMG (muscle tension) and hand or foot temperature. Biofeedback procedures are often cumbersome, may be somewhat invasive, and typically require a desktop computer. Probes are attached to various areas of the body (neck, sphincter, wrist or frontal lobes). Current biofeedback devices are typically used in a doctor's office and are administered by a doctor or biofeedback practitioner. Patients are taught exercises to help manage their state while receiving visual and/or auditory feedback. Exercises learned in sessions can be continued anywhere, but without the equipment there is usually no feedback device that lets the user know if their work is making a difference.

Divergence: Squeeze Me
Our intent was to create a personalized, portable "buddy" that helps a child understand and monitor his or her condition, and potentially guide them through exercises if needed. We did not want our device to resemble the systems typically found in biofeedback programs, such as linear narrative or reward-based feedback. Squeeze Me offers children a direct and immediate connection between the feelings they are experiencing and the feedback displayed to them. With Squeeze Me, feedback about the user's current physiology is given using varying colored LEDs that light in groups or patterned sequences. A skin temperature reading is triggered when the starfish is held
and contact is made with the surface-mounted thermistor. Based on that reading (which is continuous), LEDs of a specific color appear (blue for cool, green for normal, yellow for slightly warm and red for warm). If the child squeezes a leg of the starfish, the color temperature feedback display shuts off and is replaced with multi-colored light patterns reflecting the child's hand pressure. The patterns can then guide a child through breathing exercises. The continuous readings and feedback allow the child to see if the exercises are working. Through repetition of the exercises, focused concentration on the activity, and the visual and tactile appeal of the device, the child may be calmed and/or distracted, with the likely result of reduced stress.

The portable device does not require a child to be tethered to a computer, giving him or her freedom to play in a relaxing environment. It would also allow children to share what they have learned and compare their bio-readings.

PROCESS
Audience selection
After researching the characteristics of various age groups, we focused on ages 8-11. Children of this age are able to connect cause and effect, and are capable of logical and organized thought [2]. For this reason, we believed this age to be ready to learn more about their bodies, and able to conduct self-care activities. Research into the current toy market reflected that this age group seems to be maturing away from babyish toys, but is still playful and curious and interested in natural forms.

Biofeedback consultation & research
In parallel to the audience research, we investigated the current state of non-invasive biofeedback applications for children. We consulted with a pediatrician to understand the medical perspective on biofeedback. Her guidance led us to focus on a soothing and entertaining application and steer away from a traditional therapeutic application. She felt most of what exists was dumbed down and not engaging for children. She also advised considering more general feedback, rather than quantitative feedback (e.g. temperature). The staff at Montefiore agreed with this sentiment, as they felt high temperature readings in particular might cause heightened stress in a child who was already ill and anxious. We then consulted with a child psychologist who uses biofeedback, to understand her typical practices and patient needs. Subsequent web-based research provided us with examples of screen-based applications that were narrative or generally game-like.

After evaluating what we learned about current practices, we critiqued the existing methods and held several brainstorming sessions about characteristics we thought could improve upon existing solutions. We developed a few general concepts and then discussed those with peers, classmates, and instructors. We then presented one to the Montefiore hospital staff, including teen-aged volunteers who worked closely with patients in our age range, and who in some cases had previously been patients themselves. We focused on the handheld, light-based feedback features, to allow children portability and also to offer clear but non-distracting feedback (sound and music were considered as feedback but identified as potentially disruptive in a hospital setting). Other concepts included a screen projector or installation for visual feedback, pulsing sound-based feedback, and a networked application to allow the devices to communicate with each other.

Design of form and interaction
Shape, materials and interaction design happened in parallel and simultaneous phases. Our team began materials research and quickly identified rubber for the object body. We'd evaluated popular toys among our age group earlier in our process and recognized that squishy matter was popular among our audience. Thus we wanted to offer an inviting, touchable surface that would allow for squeezing.

We developed several prototype shapes, including both abstract and representative shapes in different silicone hardnesses. We tested the shapes with our classmates initially to determine if certain shapes were more hand-friendly than others. We focused on an apple and several shells, as those seemed to be the easiest to hold. With some basic functionality -- light-based feedback responding to hand temperature -- implanted into the molds, we then tested more extensively with several children in our target age range.

The children responded well to the sticky and squishy traits of the Dragon Skin and favored the shells for shape. They also enjoyed seeing the lights respond to their touch. Many children tested the limits of our prototypes, squeezing as hard as they possibly could. A few commented that they wanted to see a response to their squeeze in addition to their hand temperature. (This made sense in the context of children without stress symptoms using the device.) With these observations, we decided to pursue both temperature and hand pressure feedback for our project. We also discovered that this age group liked a range of colored lights. One child did remark that he wanted to see his exact temperature. Since Squeeze Me is not intended to be a thermometer, but instead a general indicator, we noted the feedback but did not consider it enough to make it a development priority.

The shape was still in question, as we'd not received much detailed feedback in that direction from the testing. We then asked several children in our target age group to play with some Sculpey clay. Their assignment was to create as many shapes as they wanted. The only requirement was that the shape be something they would like to hold, carry around with them, and play with whether they felt well or sick. All of the shapes produced were animal related -- from dog bones to clam shells. The starfish model that came out of this 'test' provided our best option to date.
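The temperature-to-color and squeeze-to-pattern behaviour described above can be sketched in Python. The band thresholds and the 0-1 pressure normalisation below are illustrative assumptions, not values from the actual device firmware:

```python
# Sketch of the Squeeze Me feedback logic: a continuous skin-temperature
# reading selects a color band, and squeezing a leg overrides the color
# display with a pressure-driven multi-colored light pattern.
# The thresholds (degrees C) are hypothetical, chosen only to illustrate
# the blue/green/yellow/red bands described in the text.

TEMP_BANDS = [        # (upper bound in degC, color) -- assumed cut-offs
    (28.0, "blue"),   # cool
    (31.0, "green"),  # normal
    (33.0, "yellow"), # slightly warm
]
WARM_COLOR = "red"    # anything above the last bound

def temperature_color(temp_c: float) -> str:
    for bound, color in TEMP_BANDS:
        if temp_c < bound:
            return color
    return WARM_COLOR

def feedback(temp_c: float, squeeze_pressure: float) -> dict:
    """Return what the LEDs should show for one sensor sample."""
    if squeeze_pressure > 0:
        # Squeezing replaces the color display with a pattern whose
        # intensity tracks hand pressure (normalized 0..1).
        return {"mode": "pattern", "intensity": min(squeeze_pressure, 1.0)}
    return {"mode": "color", "color": temperature_color(temp_c)}
```

Keeping the mapping in a small threshold table makes the "general indicator, not a thermometer" choice explicit: the child sees a coarse band rather than an exact reading.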
It was recognizable, hand-friendly for a variety of hand sizes, and offered a good surface for visual feedback. The shape then led us to our current functionality, and we refined the interaction. The thermistor could be mounted on the starfish body, where we observed most people would touch when picking up the object. The legs, which could be squeezable, each had the potential to trigger a different type of feedback.

Sensors and circuit construction
Our sensor evaluation included thermistors and galvanic skin response (GSR) for the hand touch feedback, and force sensing resistors (FSRs) and flex sensors for the squeeze feedback. The first two attempts with thermistors brought failure -- both types were too delicate to be touched repeatedly, too slow in capturing and transferring the data, and too sensitive for heat mounting to other parts of the circuit. In consultation with YSI Temperature we selected a high-precision thermistor with a sizeable surface area that would withstand repeated touch and capture the temperature data quickly. Due to deadline limitations we only briefly evaluated GSR and were not able to produce reliable results. With the thermistor functioning, we decided to postpone GSR evaluation to later in the project lifecycle.

For squeeze feedback, we looked at flex sensors first and quickly dismissed them as an option. The ones we had access to were too delicate for repeated squeezing and bending. We tested various sizes of FSRs and found them to be reliable and responsive to the squeeze.

[Figure: Colored LEDs responding to 'normal' hand temperature]
[Figure: Pattern feedback responding to hand pressure]

Additional Testing
With a working prototype (see photos above right) we demonstrated Squeeze Me for the hospital staff, and then participated in a semiannual group show at NYU. Approximately 250 people interacted with Squeeze Me during the show, from grandparents to children. Feedback was positive, and observing usage was invaluable. Most people expressed interest in the device, and felt it would result in stress reduction. We were surprised at how roughly some children interacted with it, which led us to consider a more protected environment for the light circuit. Generally it performed well, with some issues with drift in thermistor readings due to heat that we're currently addressing.
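One way the thermistor drift noted above could be corrected -- the revised design described later under Sensors cross-references an internal thermistor against the externally mounted one to calculate drift -- can be sketched as follows. The power-on baseline and the 1:1 subtraction are illustrative assumptions, not the authors' calibration:

```python
# Sketch of drift correction via a second, internal thermistor: heat
# building up inside the device raises both readings, so subtracting
# the internal temperature rise (relative to a power-on baseline)
# isolates the contribution of the child's hand. The baseline capture
# and the simple 1:1 correction are assumptions for illustration.

class DriftCorrector:
    def __init__(self, internal_baseline_c: float):
        # Internal temperature sampled at power-on, before handling.
        self.baseline = internal_baseline_c

    def corrected_skin_temp(self, skin_c: float, internal_c: float) -> float:
        drift = internal_c - self.baseline   # self-heating of the circuit
        return skin_c - drift

corr = DriftCorrector(internal_baseline_c=24.0)
# Device has warmed itself by 2 degrees; the raw skin reading is inflated.
print(corr.corrected_skin_temp(skin_c=33.0, internal_c=26.0))  # 31.0
```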
SOLUTION DETAIL
Body
The body of the Squeeze Me was molded in Dragon Skin silicone. We made clay positives in the shapes that we wanted and used them to create a negative mold.

Sensors
The temperature is read by a high-precision thermistor mounted on the top surface of the device. It is insulated to protect it from extraneous readings. The prototype previously tested was made with an FSR embedded in one of the device's five legs, and the system was built around a BX-24 chip. Currently we are incorporating an FSR in each of the five legs. In order to better accommodate the increase in sensors, we are now developing with the PIC16F876 from Microchip. The chip contains the software, reads the sensors and tells the other chips what to do. Each sensor is attached to a separate pin, while the rest are multiplexed to allow for 24 LEDs. There will be a thermistor on the inside to monitor internal temperature and one mounted on the outside. These will be cross-referenced to calculate drift. (See diagram below for working schematic.)

PROJECT PARTICIPANTS
Development Team: Christine Brumback, Ed Guttman and Amy Parness collaborated on concept, design and development of this project.
Advisors: Dr. Jan Leupold (child psychologist), Marianne Petit (ITP instructor), Dr. Kim Putalik (pediatrician), and Jeb Weisman and staff (Montefiore Children's Hospital) provided insight and feedback on the conceptual, design and behavioral aspects of this project. Ken Allen (YSI Temperature), Tom Igoe, Greg Shakar, and Jeff Feddersen (ITP instructors) provided technical advice and assistance in sensor selection and circuit design.

DATES AND DURATIONS
The project began in January of 2003, and we expect to continue research and development into 2004.

FUTURE ENHANCEMENTS
In the next iteration, we are making Squeeze Me waterproof. Not only would this allow for cleaning and sanitation of the device, but it would also allow for more portability. We also plan to evaluate other shapes for different kinds of users. Modular devices that children can put together themselves would engage the child in a deeper learning environment [3]. We would also like to add optional auditory feedback to the device, to aid in feedback of exercises.

REFERENCES
1. Ashby, W. R. An Introduction to Cybernetics. New York: John Wiley & Sons, Inc., 1963.
2. Mooney, Carol Garhart. Theories of Childhood: An Introduction to Dewey, Montessori, Erikson, Piaget & Vygotsky. Redleaf Press, 2000.
3. Kafai, Yasmin B., and Mitchel Resnick. Constructionism in Practice: Designing, Thinking, and Learning in a Digital World. Lawrence Erlbaum Associates, 1996.
4. Project: http://itp.nyu.edu/~amp318/spring/assttech/
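The multiplexed LED drive mentioned under Sensors can be illustrated with a small simulation: a grid of LEDs is scanned one row at a time, fast enough that all requested LEDs appear lit. The 4x6 grid and pin arrangement below are assumptions for illustration, since the paper does not give the actual schematic:

```python
# Sketch of time-multiplexed LED scanning: 24 LEDs arranged as a 4x6
# grid need only 4 row pins + 6 column pins. At each scan step only one
# row is driven; cycling through rows quickly enough relies on
# persistence of vision to make every requested LED appear continuously
# lit. Grid dimensions here are an illustrative assumption.

ROWS, COLS = 4, 6          # 4 + 6 pins drive 4 * 6 = 24 LEDs

def scan_step(frame, step):
    """LEDs physically lit during one scan step.

    `frame` is a set of (row, col) LEDs that should appear on; only
    those in row `step % ROWS` are actually driven at that instant.
    """
    active_row = step % ROWS
    return {(r, c) for (r, c) in frame if r == active_row}

def full_cycle(frame):
    """Union over one complete scan: every requested LED gets its turn."""
    lit = set()
    for step in range(ROWS):
        lit |= scan_step(frame, step)
    return lit

frame = {(0, 0), (1, 3), (3, 5)}
assert full_cycle(frame) == frame  # all requested LEDs appear lit
```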
The Personal Server:
Personal Content for Situated Displays
Trevor Pering, John Light, Murali Sundar, Gillian Hayes,
Vijay Raghunathan, Eric Pattison, and Roy Want
Intel Research
[email protected]

ABSTRACT
The Personal Server is a small, lightweight, and easy-to-use device that supports personal mobile applications. Instead of relying on a small mobile display, the Personal Server enables seamless interaction with situated displays in the nearby environment. The current prototype is supported by emerging storage, processing, and communication technologies. Because it is carried by the user and does not require data to be either hosted in the local infrastructure or retrieved from a remote web-site, it provides a platform that increases users' control over their personal data. Furthermore, it enables additional novel applications, such as a personal location history, that would not be appropriate for the computing infrastructure.

[Figure 1: Personal Server Prototype]

OVERVIEW
The Personal Server (PS) [1] is a system designed to provide access to a user's personal applications and data, stored on their mobile device, through large-screen displays in the infrastructure. The device itself does not have a built-in display, allowing it to exist as a small, yet powerful, mobile device. By providing a flexible platform for personal information access, the PS concept explores issues in personal information control, trade-offs between mobility and situated displays, and environmental customization.

The Personal Server is designed to overcome several shortcomings of current mobile systems, some of which are listed below:

• Usability – most mobile devices have a small screen that makes it very difficult and inconvenient to access content. By enabling access through displays located in the nearby environment, the Personal Server allows the use of large-screen displays to access one's data without having to carry a bulky laptop around.
• Accessibility – the Personal Server enables quick and easy access from multiple potential access points, not requiring access through the device itself, which may be conveniently and safely located in the user's bag or pocket.
• Attention – the Personal Server platform is capable of automatically interacting with the local environment on the user's behalf, not requiring them to immediately respond to location-triggered events or notifications.

The underlying concept behind the Personal Server is creating and presenting an individualized digital presence surrounding the user, making it easier to access personal content and also allowing the environment to adapt to personal preferences. A crucial metric in evaluating mobile systems is often ease of use and the user's attention level. By allowing easy access through any nearby convenient display, and not restricting access through a phone or laptop, the Personal Server enables streamlined
ubiquitous interaction and thus ranks very highly with respect to the aforementioned metrics.

The current operational prototype of the Personal Server is an instantiation of the overall concept, and is designed to demonstrate the novel characteristics of the device. Although currently a stand-alone device, in the future the Personal Server may be integrated with other mobile devices such as a cell-phone, laptop, or wristwatch – providing the same functionality without burdening the user with an additional device. Rapid advances in three technology areas directly enable the Personal Server concept:

• High-density storage – high-density storage technologies, both solid state and magnetic, are increasing at an extremely high rate, doubling approximately every 12 months.
• Power-efficient processing – both the power efficiency and computational capability of embedded processors are rapidly increasing, enabling smarter and more powerful devices that also have longer battery lifetimes.
• Short-range communication – emerging short-range wireless standards afford easy, low-power, ubiquitous point-to-point wireless connectivity.

Specifically, the current prototype has an Intel® XScale™ family processor, a Bluetooth™ wireless radio, and a compact flash slot for permanent storage. The resulting device is about the size of a deck of cards, and supports a full Linux distribution with up to 4GB of removable storage. As a baseline, it supports web-browser and file-share access, but it is also capable of running any compatible client- or server-side application.

Three applications demonstrate the unique capabilities of the Personal Server:

• Personal data access – personalized content, such as a photograph collection, music collection, or working documents, can be stored on the Personal Server platform and easily accessed from nearby situated displays.
• Location collection – information from short-range beacons in the environment is collected and managed by the device, allowing for location-based services that do not constantly require the user's attention.
• Environmental customization – personal preferences, such as music selections or immersive game profiles, can be automatically transferred to the environment, allowing proactive customization of the immediate vicinity without direct user involvement.

These applications highlight how the Personal Server overcomes the difficulties with current mobile platforms by exploiting three important emerging technology trends. It provides a small, powerful, and non-obtrusive platform for supporting mobile interactions. As technology becomes more ubiquitous, the connection between mobile users and the environment around them will become more important, strengthening the need for personalized mobile systems, such as the Personal Server.

DEMO APPLICATION HIGHLIGHTS
For the conference demonstrations, the three applications mentioned above highlight the Personal Server's core capabilities: personal data access, location collection, and environmental customization. Multiple devices, each carried by, and associated with, a particular individual, provide the personalized content for each of these applications. By exposing the unique data contained on each device, these applications highlight how advances in mobile storage, processing, and communication can be used to enable new types of personal interactions.

For example, Fred's Personal Server may contain pictures from his recent vacation to Japan, a web page describing him and his general interests, and his personal collection of rare bluegrass music. Additionally, the device could contain detailed research data describing his power and latency measurements of emerging wireless networking protocols. Also, his personal profile may indicate that he loves Thai food, hates coffee, and likes to browse through antique shops.

The personal data stored on Fred's mobile device can be easily accessed through any number of nearby situated displays, allowing convenient access to data without relying on a small-screen display. For example, Fred could walk up to an available display and show his friend a collection of photographs from Japan. Similarly, he could show his other colleague his latest research results. Streamlining this basic interaction through a simple web and file-sharing interface supports a mobile lifestyle without requiring a bulky mobile platform, such as a laptop.

The second application, termed the Ubiquitous Walkabout, receives information from nearby information beacons and other
devices to form a picture of where users travel and who/what they have been around. Data is collected in real time as the user passes nearby points of interest, and can be viewed later on a situated display. Because the Personal Server gathers and records the data, users maintain control over their personal information: it allows them to track themselves, but does not require the trust of any third party or the use of infrastructure such as GPS. Additionally, since the system knows that Fred is partial towards Thai food and antique shops, it will highlight any Thai restaurants or antique stores he regularly walks by, but doesn't notify him about coffee shops.

Finally, the Personal Server provides a platform for customizing the music or audio present in communal spaces. Because of the significant storage capacity, Fred can store a considerable collection of bluegrass music on his device, creating, in essence, a "ubiquitous MP3 warehouse" that makes his music available through music players in the environment. Although his tastes in music are rare, he can listen to his music when he likes, even though he is not likely to find his favorite bluegrass playing on the radio. Furthermore, the environment can combine music from other nearby users to automatically mediate the music played in a particular space, customizing the local experience. This concept is similar to MusicFX [3], except that music is sourced from users' personal devices, instead of being provided through a centralized agency.

As an alternative to playing entire songs, the system can play a short sound chirp or show a representative graphic associated with each participant in the immediate vicinity, served from their mobile devices. For example, one person might choose the sound of a chirping bird, while another, a snare drum hit. This conglomeration of personal media signatures automatically constructs a dynamic environment based on the identity of nearby participants, creating an immediate and dynamic demonstration of environmental adaptation as individual participants come and go.

Current mobile devices already possess many of the technologies necessary to implement a Personal Server, such as processing, storage, and communication. However, accessing stored content through situated displays and other devices has yet to be fully explored. The Personal Server concept provides a platform that will spur many of these explorations and discussions.

SUMMARY
The Personal Server demo environment consists of several demonstration stations that detect and respond to devices representing individuals. The display stations, either in the form of large public displays or smaller touch-screen displays, will show content served from nearby users' Personal Server devices. At any given time, only a few devices will be in the vicinity of the display station, adapting the local environment to the preferences of nearby individuals.

The individual demonstrations have been selected to highlight personal control over information. Although it relies on public infrastructure to access content stored on the user's mobile device, the Personal Server controls access to personal data, providing a balance between mobile and ubiquitous computing. These demonstrations provide a concrete discussion point for conference attendees to explore ideas surrounding personal information control and access.

ACKNOWLEDGEMENTS
Brian Landry, Lamar Jordan (sp?) – Mtunes; Adam Rea (RFID); David Nguyen (Proactive Displays); Robbie Adler (iMotes).

REFERENCES
[1] R. Want, T. Pering, J. Light, M. Sundar, "The Personal Server – Changing the Way We Think about Ubiquitous Computing", Proceedings of Ubicomp 2002: 4th International Conference on Ubiquitous Computing, Springer LNCS 2498, Goteborg, Sweden, Sept 30-Oct 2, 2002, pp. 194-209.
[2] T. Pering, R. Want, J. Light, M. Sundar, "Photographic Authentication for Un-trusted Terminals", Intel Research; accepted for IEEE Pervasive Computing, Issue #5, March 2003.
[3] J. F. McCarthy and T. D. Anagnost, "MusicFX: An arbiter of group preferences for computer supported collaborative workouts," in Proceedings of the ACM 1998 Conference on Computer Supported Cooperative Work, pp. 363-372, ACM Press, New York, 1998.
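The environmental-customization idea described above -- combining nearby users' preferences to mediate communal music, in the spirit of MusicFX [3] -- can be sketched as follows. The -2..+2 preference scale and the veto rule are illustrative assumptions, not the arbitration MusicFX actually uses:

```python
# Toy sketch of mediating communal music from nearby users' devices:
# each Personal Server exposes genre preferences (assumed scale -2..+2);
# the space ranks genres by summed preference and skips any genre that
# someone strongly dislikes. Scale and veto threshold are illustrative
# assumptions, loosely inspired by MusicFX-style group arbitration.

def mediate(preferences, veto_below=-2):
    """preferences: {user: {genre: score}} -> list of genres, best first."""
    totals = {}
    vetoed = set()
    for prefs in preferences.values():
        for genre, score in prefs.items():
            totals[genre] = totals.get(genre, 0) + score
            if score <= veto_below:
                vetoed.add(genre)      # one strong dislike excludes it
    playable = {g: s for g, s in totals.items() if g not in vetoed}
    return sorted(playable, key=lambda g: -playable[g])

nearby = {
    "fred":  {"bluegrass": 2, "techno": -1},
    "alice": {"techno": 2, "bluegrass": 1},
    "bob":   {"techno": 1, "polka": -2},
}
print(mediate(nearby))  # ['bluegrass', 'techno'] -- polka is vetoed
```

As devices come and go, re-running the mediation over the currently detected set of Personal Servers gives the adapting-environment behaviour described in the demo.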
Ambient Wood: Demonstration of a Digitally Enhanced
Field Trip for Schoolchildren
Cliff Randell (Department of Computer Science, University of Bristol), Ted Phelps and Yvonne Rogers (School of Cognitive and Computer Sciences, University of Sussex)
[email protected]

ABSTRACT
This demonstration shows parts of the Ambient Wood experience project, which has taken place in an English woodland setting during the past year. The project provides a playful learning experience for schoolchildren on a digitally enhanced field trip. A WiFi network was installed in the woods to enable communication with PDAs, and a collection of innovative devices was designed to aid interactive exploration of the woods. Most of the devices that were employed are available for conference attendees to use, along with a facilitator's terminal. A video of the schoolchildren using the devices in the woodland is also shown.

(Funding for this work is received from the U.K. Engineering and Physical Sciences Research Council, Grant No. 15986, as part of the Equator IRC. Further support is provided by Hewlett Packard's Art and Science programme.)

[Figure 1: Using the probing device to find (i) moisture and (ii) light levels, and (iii) reading the resultant visualisation on a PDA screen]

Introduction
The Ambient Wood project is a playful learning experience which takes the form of an augmented field trip in English woodlands. Pairs of children equipped with a number of devices explore and reflect upon a physical environment that has been prepared with a WiFi network and RF location beacons. The intention is to provoke the children to stop, wonder and learn when moving through and interacting with aspects of the physical environment (see Figure 1). The children are able to communicate with a remote facilitator using walkie-talkies, and are sent questions and information by the facilitator using the network and handheld PDAs.

A variety of devices and multi-modal displays were used to trigger and present the added digital information, sometimes caused by the children's automatic exploratory movements, and at other times determined by their intentional actions. A field trip with a difference was thus created, where children discover, hypothesize about, and experiment with biological processes taking place within a physical environment.

Two spaces were designed for the initial trial run, and each activity space offered its own aims, with focus on the different kinds of technologies and activities that have an overall link into habitat distributions and dependencies. These aims are: Exploring, Consolidating, Hypothesising, Experimenting, Reflecting. Pairs of children around the age of 10 years collaboratively discover a number of aspects about plants and animals living in the various habitats in the wood during a visit lasting around one hour. Their experiences are later reflected upon in a 'den' area, where both pairs of children share their findings with each other and the facilitators. The children hypothesise about what will happen to the wood in the long term under various conditions, e.g. drought or lack of light through the trees.

Following on from a successful run late in 2002, the experience was enhanced for children visiting the wood in June 2003. Building on the experiences of the previous year, we continued exploring our theme of augmenting the experience with digital tools. An 'Ambient Horn' was added to enable the children to have more control over when digital sounds within the wood were heard. The Horn provided a way to access sounds representing processes invisible to the eye, and to events that had happened at a different time.

The Demonstration
The demonstration consists of most of the devices which were used as part of this project; a simplified wireless network which enables a remote facilitator's application to be shown in conjunction with handheld Jornada PDAs; and a display showing a video of the children using the devices in the woodlands. The devices, laptop and Jornadas are all interconnected and functioning as designed and used.

The Network Infrastructure
The project required that data be collected by the children, that their positions in the woods be monitored, and that location-based information could be triggered. This was achieved by the use of 418MHz license-exempt transmitters with
limited ranges broadcasting to receivers attached to handheld Jornada PDAs. We call these devices 'Pingers', based on the simple design proposed by Hull et al [1]. A wireless Personal Area Network (PAN) based on this Pinger technology was provided for each of the pairs of children, as well as an 802.11b WiFi local area network accessed through WiFi CF cards in the Jornadas. The WiFi network assisted communication from remote facilitators, and enabled real-time monitoring of the children's activities. In the woods we experimented with three WiFi access points strategically located with extension antennas in the trees. An area of approximately 4 acres had good coverage, though this varied according to season and climate. In our demonstration we are using a single access point.

[Figure 2: Ambient Wood Device Architecture]

As well as a Jornada and a small Pinger receiver, the pairs of children carry with them various pinging devices, including a combined moisture and light 'Pinging Probe', an 'Ambient Horn', a Dead Reckoning Pinger, and a GPS Pinger (see descriptions below). In addition, the receiver was able to detect proximity to Location Pingers situated at interesting places in the environment. The contextual information was processed locally to create notifications of events to a network server as they happened. For the original trials, wireless loudspeakers and an unusual interactive display, the Periscope [2], were deployed in the woodland. The system architecture employed Elvin (a content-based notification and messaging service [3]), originally connected to a MUD environment [4, 5], and later to a bespoke application. This architecture is illustrated in Figure 2.

The Remote Facilitator
Each of the pairs of children had a remotely located facilitator to whom they could relay information using a walkie-talkie. The facilitator in turn could send the children information in the form of 'cards', which were displayed on the PDA, and sounds, also played by the PDA. These were sent from the facilitator's laptop PC using the WiFi network. The cards showed images of plants and wildlife; illustrations of natural processes such as photosynthesis; or alternatively could pose questions to stimulate the children's thought processes. The facilitators were also able to monitor the progress of the children through the woods by using a GPS tracking system.

The Pinger Devices
The design issues for a 'Pinger' are size, cost, power consumption, range, transfer capacity and error rate. In its simplest form, our Pinger design consists of a single PIC microcontroller connected to an FM transmitter module operating in the 418MHz license-exempt band. Its footprint is 3cm x 3cm; it costs less than $20 in small quantities; it has a six-month battery life when powered by two AA batteries; it has an adjustable range between 2m and 100m; it sends an 8-byte packet at 1Hz; and it is 95% reliable, i.e. approximately one packet in twenty is corrupted or lost. The pinging devices were all designed to be stateless, with varying degrees of redundancy based on the level of interaction required with each device. Five types of Pinger were employed in this project:

Location Pinger: This is the basic design, providing a location beacon. A datapacket is constructed containing a location identifier, and is Manchester encoded and transmitted at 2,400 baud. The range of the transmitter is governed by the antenna configuration, extending from 2m with no antenna to over 100m with a quarter-wavelength whip antenna. For our applications, a helical antenna with a range of around 10m is normally used. The Location Pingers were set to transmit at slightly greater than 1Hz to avoid periods when contention might occur with the GPS Pinger (see below). This guaranteed a ping being received within two seconds of the user entering the 10m-radius location. These Location Pingers were deployed at points of interest in the environment, such as in thistle patches and reed beds.

GPS Pinger: The GPS Pinger uses a Garmin GPS25 OEM board with an antenna on a short cable. The output of the GPS receiver is decoded using a PIC, and a minimal datapacket containing the local position data is constructed whenever a valid fix is obtained, usually at 1Hz. This too is encoded and transmitted in the same way as the location beacon. A GPS Pinger is carried by each pair of children in a small backpack. The data provides a timed record of the children's movements and is further augmented by the Dead Reckoning Pinger.

Dead Reckoning (DR) Pinger: The GPS positioning signal was frequently degraded by the tree canopy. To compensate for this, a dead reckoning system was devised which used an accelerometer to detect movement, and a two-axis electronic compass to sense heading. Whenever movement above a threshold value was detected, a ping datapacket was transmitted containing heading, amplitude and sequential identifier bytes. The sequential bytes helped to identify when pings had been lost. This enabled a simple form of dead reckoning to be implemented to augment the
GPS data [6].
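The dead reckoning computation implied above can be sketched in simulation. This is a hypothetical reconstruction, not the authors' PIC code: it assumes each ping carries a one-byte wrapping sequence counter, a compass heading in degrees, and an amplitude interpreted as a step-length estimate in metres.

```python
import math

def integrate_pings(pings, start=(0.0, 0.0)):
    """Accumulate DR pings into an (x, y) track.

    Each ping is (seq, heading_deg, amplitude_m); seq is a one-byte
    counter that wraps at 256, used here only to count lost pings.
    """
    x, y = start
    track = [(x, y)]
    lost = 0
    prev_seq = None
    for seq, heading_deg, amplitude_m in pings:
        if prev_seq is not None:
            gap = (seq - prev_seq) % 256
            lost += gap - 1  # skipped sequence numbers = lost pings
        prev_seq = seq
        # Compass convention: 0 deg = north (+y), 90 deg = east (+x)
        rad = math.radians(heading_deg)
        x += amplitude_m * math.sin(rad)
        y += amplitude_m * math.cos(rad)
        track.append((x, y))
    return track, lost
```

For example, three pings of one metre each (north, then east twice) with sequence numbers 1, 2, 4 yield a track ending near (2, 1) and one detected lost ping.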
Pinging Probe A Pinging Probe was designed to provide interaction between the physical world, by sensing moisture and light levels, and the digital world, by graphically displaying the results on the PDA. Again a simple datapacket is constructed with bytes representing the values measured and which type of measurement the children were interested in, as indicated by a rotary switch. The Pinging Probe was set to transmit at 10Hz to ensure that there was no detectable latency in the interaction.
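The paper does not give the Probe's packet layout; as an illustrative sketch only (the field order and channel codes are assumptions), a minimal two-byte encoding of the rotary-switch channel and an 8-bit reading might look like this:

```python
# Hypothetical probe packet: one byte selects the measurement type
# set on the rotary switch, one byte carries the 8-bit sensor reading.
CHANNELS = {0: "moisture", 1: "light"}

def encode_probe_packet(channel: int, reading: int) -> bytes:
    if channel not in CHANNELS:
        raise ValueError("unknown channel")
    if not 0 <= reading <= 255:
        raise ValueError("reading must fit in one byte")
    return bytes([channel, reading])

def decode_probe_packet(packet: bytes):
    channel, reading = packet[0], packet[1]
    return CHANNELS[channel], reading
```

Keeping the packet this small is what makes a 10Hz transmit rate cheap enough for a simple PAN with no protocol stack.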
Ambient Horn A novel audio player, the Ambient Horn, was designed to play tracks cued by Location Pingers, and to transmit ping notifications each time a sound is played. During the first run of Ambient Wood, experiments with hidden loudspeakers failed to generate consistent interaction with the children - the sounds were too ambient. This device was subsequently designed with the intention of providing the children with a greater level of stimulus using the prerecorded audio effects. The audio tracks were stored on a sound chip and then cued when a location trigger was received. The Horn produced a 'honking' sound and LEDs flashed when the new track was cued; the track played when a push button was activated. A physical horn extension provided both an organic metaphor for the device and encouraged the children to listen to, and to probe for, sounds (see Figure 3).

Figure 3: Children using the Horn, PDA and Walkie-Talkie.

Device Performance
The Pinging Probe device - used for both collecting and subsequent viewing of the data - provided a thoroughly engrossing experience. The pairs of children made frequent probes for both moisture and light, usually with one child doing the probing and the other holding the PDA, reading off the visualisation. Sometimes both children would look at the PDA screen together, and at other times the one holding it would tell the other what they had seen on the screen. The probe design was particularly successful, as the digital information resulting from the children's activities was tightly coupled with the activity, and the children readily understood the connection between the two.

Initially the Location Pingers were less successful. While the technology performed as intended, we had engineered the digital information to be presented to the children in a more pervasive way, i.e. where their bodily presence in an area triggered the digital information to appear on the PDA, or sounds to be played through nearby wireless loudspeakers. In these contexts, the children did not have control, but relied on the serendipity of their movements as to whether they passed in the vicinity of the Location Pinger. The children were never quite certain when this would happen and were often surprised when they heard a sound or saw an image on the PDA screen. Part of our intention in using this pervasive technique was indeed to introduce an element of surprise and the unexpected. Another reason was to augment their physical experience, by drawing their attention to certain aspects of the habitat they might not have noticed otherwise, and providing relevant contextual knowledge that they could integrate with what they saw. Sometimes this approach worked, and the children related the digital information that was being sent to them on the PDA with what they saw in the wood in front of them (e.g. a real thistle). However, at other times the children were too engrossed in another activity and so would miss the beginning part of a voice-over or not even notice a sound. In these moments, the children were often reluctant to switch their attention to what was happening on the PDA from what they were already doing.

The audio playing Horn device was designed to address this problem and was successful in giving control of the sound playing to the children. While this was less 'ambient', it still gave the opportunity for the serendipitous triggering of sounds and also enabled the children to replay particular sounds on request. The similar physical design of the Horn and Probe encouraged the children to seek sounds associated with locations by probing with the Horn. We repeatedly observed the children associating sounds with locations.

The GPS Pinger performed well, enabling positions to be recorded for all the children's activities. The need for the Dead Reckoning Pinger was largely obviated by the use of a high gain active patch antenna with the GPS receiver. Nevertheless, initial results from the DR Pinger indicated that this approach could be useful in situations where poor GPS reception is experienced. Figure 4 illustrates the combined positioning performance of the GPS and DR Pingers. We also experimented with virtual location beacons created using the GPS data; however, these were found to be unsatisfactory due to inaccuracy, drift and occasional spurious readings.

The PAN, though simple with no protocol stack or handshaking, worked well partly due to the redundancy inherent in
the design. By setting the transmission rate of the Pinging Probes to be significantly higher than for the GPS and Location Pingers, it was ensured that the Probes appeared to function with no latency and took priority over the other Pingers. Any delay in receiving a location ping was not critical, as the user interaction appeared to be serendipitous in any case. The GPS pings provided a monitoring function and were not critical to the progress of the trials. While we estimate that around 5% of the pings were lost, in practice the users of the system were not aware of any latency or data loss in the PAN.

Figure 4: Aerial photograph showing position sensing using GPS and dead reckoning. The white pixels represent the readings from a GPS receiver; the black pixels show the positions estimated by dead reckoning.

Contribution
This project is notable for its location away from any infrastructure whatsoever. It required careful consideration of power requirements and of the effects of woodland on RF propagation under differing climatic conditions. It also benefited from a lack of any possible external RF interference. The range of uses of the Pinger technology is unusual, and its integration to form a PAN for collecting minimal data packets extends the concept of using devices such as Smart-Its [7] and the Berkeley Motes [8] for the collection of pervasive data. The Probe and Horn devices both had great appeal to the children, who enjoyed using them constructively to learn about the environment. We believe that these inventions may inspire others to develop further interesting ways of interacting with ubiquitous computing systems.

Acknowledgements
The Ambient Wood is an Equator IRC project and we thank our collaborators, especially Eric Harris, Sara Price, Paul Marshall, Hilary Smith, Mia Underwood and Rowanne Fleck of the School of Cognitive Sciences (COGS), University of Sussex; Mark Thompson, Mark Weal and Danius Michaelides of the Intelligence, Agents, Multimedia Group (IAM) at the University of Southampton; Henk Muller of the Department of Computer Science at the University of Bristol; Danae Stanton of the School of Computer Science at the University of Nottingham; and Danielle Wilde of the Royal College of Art. Thanks also go to the children and teachers of Varndean School, who approached this project with such enthusiasm and without whom it would not have been possible.

REFERENCES
1. R. Hull, P. Neaves, and J. Bedford-Roberts. Towards situated computing. In Proceedings of The First International Symposium on Wearable Computers, pages 146-153, October 1997.
2. D. Wilde, E. Harris, Y. Rogers, and C. Randell. The Periscope: Supporting a computer enhanced field trip for children. In Proceedings of The First International Conference on Appliance Design, May 2003.
3. B. Segall and D. Arnold. Elvin has left the building: a publish/subscribe notification service with quenching. In Proceedings of AUUG97, September 1997.
4. R. Bartle. Interactive multi-user computer games. Technical report, BT Martlesham Research Laboratories, December 1990.
5. E.F. Churchill and S. Bly. Virtual environments at work: on-going use of MUDs in the workplace. In Proceedings of the International Joint Conference on Work Activities Coordination and Collaboration (WACC99), pages 99-108, 1999.
6. C. Randell, C. Djiallis, and H. Muller. Personal position measurement using dead reckoning. In Proceedings of The Seventh International Symposium on Wearable Computers, October 2003.
7. L.E. Holmquist, F. Mattern, B. Schiele, P. Alahuhta, M. Beigl, and H-W. Gellersen. Smart-Its Friends: a technique for users to easily establish connections between smart artefacts. In UbiComp 2001: International Conference on Ubiquitous Computing, pages 116-122, September 2001.
8. J. M. Kahn, R. H. Katz, and K. S. J. Pister. Next century challenges: mobile networking for "Smart Dust". In International Conference on Mobile Computing and Networking (MOBICOM), pages 271-278, 1999.
‘Wall_Fold’: The Space between 0 and 1
Ruth Ron [1]
Archi-TECH-ture
www.ruthron.com
+1 312 753 5064
[email protected]

ABSTRACT
The Wall_Fold installation analyzes personal space in the
contemporary reality of portable computing and wireless
communication. It conveys a more sensitive and complex
environment than the typical Modernist white cube. The
physical architectural element generates an ambiguous
spatial condition: smooth and flexible folds between the
inside and the outside, open and closed. The space thus
becomes continuous and dynamic.
Six pairs of servomotors, connected by flexible bands,
create a smooth surface. The motors alternate between two
positions (0° & 180°), stretching the binary ON/OFF into a
continuous transition, a whole grayscale or gradient
between 1 and 0.
Keywords
Personal space, Smooth space, interactivity, installation
INTRODUCTION
Wall_Fold is a theoretical prototype for a 'smart' architectural partition with programmed behavior and changing patterns. It may suggest a domestic or public interior wall partition, or an interactive opening. It can be developed further into a full three-dimensional spatial version. The installation generates a subjective, hybrid, flexible, immersive and dynamic personal space. It leaves the existing Modern space intact and undermines it with digital media.

CONTEXT AND BACKGROUND
Modern architecture
Modern architecture, which includes many of the spaces we inhabit today, emerged out of the industrial revolution. It is based on standard, industrialized, rational, functional, efficient and orthogonal spaces. It evolved from Le Corbusier's 'Radiant City' and the C.I.A.M (Congrès Internationaux d'Architecture Moderne, founded in Switzerland in 1928) proposals for 'The Functional City'. In contrast to the traditional city patterns, Le Corbusier envisioned hygienic, regimented, large-scale high-rise towers, set far apart in a park-like landscape. His rational city would be separated into discrete zones for working, living, transportation and leisure [2]. Consequently, C.I.A.M committed itself to standardized functional cities with a similar scheme at its 1933 congress [3]. These ideas had a profound influence on public authorities in post-war Europe. The rigid coding scheme was adopted in many urban reconstructions after World War II. However, the functional planning strategy was later criticized for being inhuman, inhospitable, socially destructive and damaging to the urban fabric.

Zoning
In the practice of urban planning, the preparation of zoning maps and strict coding documents is still the standard and most common approach to planning. In response to the increasing criticism of the crudeness and rigidity of modernism, the four C.I.A.M categories - dwelling, work, recreation and transportation - were extended to include more groups, such as industry, commercial district, retail, natural resorts, public services and more. 'Mixed-use' areas started to appear on the planning maps, breaking the zoning blocks into finer grains. For example, the same building was divided into commercial areas on the lower floors and residential areas above them.

Zoning in 'my scale'
At present, technological and communication developments, such as the Internet, wireless phones, modems and hand-held computers, have a major impact on our lives. The work environment has been tremendously
influenced; a large part of the work is done with computers, and Internet connectivity has altered communication with clients and co-workers. Time and place are now much more flexible (24/7). Our social lives and leisure time are changing as well.

The modernist zoning (the assignment of human activities to separate areas) has become obsolete. In the same manner, the functional Modern apartment design, which 'zones' family activities of leisure, work, eating, rest and bathing, must be adjusted. With increased possibilities to stay at home (for work, education, communication and more), the design of the personal space needs to change. Technology is getting closer to the personal scale and at the same time allows the individual to connect to 'everyone' from 'everywhere', as a node in the global network. Our customized and intimate relationship with technology should challenge architecture to evolve from the 'standard' and 'universal' values of modernism to support these new needs and living patterns.

Modern architecture was characterized by its reference to new building materials, such as steel, concrete and glass, and by the industrialized production process that became available due to the technological inventions of that era. In the same manner, contemporary architecture should respond to the current technological developments of computation and communication, which affect our everyday lives. This project employs micro computing and sensors to explore new ways of architectural expression.

Alternative contemporary theories
Looking for alternative theories for complex and sensitive spaces, I turned to the French philosopher Gilles Deleuze and the contemporary architect and theorist Greg Lynn.

Gilles Deleuze
In '1440: The Smooth and the Striated' [4] Deleuze and Guattari define a 'smooth space', in contrast to a 'striated space', as amorphous, heterogeneous, nomadic, intensive, rhizomatic and haptic. They point out that in reality the 'smooth space' co-exists in a mixture with the 'striated space'.

Greg Lynn
In 'The folded, the pliant and the supple' Greg Lynn recounts the advantages that architecture can gain by introducing 'smooth' systems: "Pliancy allows architecture to become involved in complexity through flexibility. It may be possible neither to repress the complex relations of differences with fixed points of resolution nor arrest them in contradiction, but sustain them through flexible, unpredicted, local connections" [5]. The fold encourages architecture to become more sensitive to the complex changing needs of the contemporary person, and the 'smooth mixture' allows continuous co-existence of different conditions, while maintaining their identity.

Context in Contemporary Art
The work of some contemporary artists can serve as precedents for the formal approach, the space transformations and the use of new technology in this artwork.

Contemporary sculptures
James Turrell's installations are powerful examples of space deformation with immaterial assets. They succeed in altering the viewers' perceptions of air, light and shape. He creates conditions that are neither 'object' nor 'image' and manipulates space using light and form.

Gordon Matta-Clark explored architecture's inextricable relationship to private and public space, urban development and decay. His provocative approach to conventional building and his social criticism undermined the rationale and function of buildings, using 'negative' actions like subtracting material from walls and floors. His site-specific installations, the "building cuts," in which he cut into and dismantled abandoned buildings, created unexpected aesthetic qualities, views and accessibility in an unconventional spatial way.

Anish Kapoor creates curved biomorphic shapes that exist as an indeterminate form between object and space. Many of his pieces have been incorporated into the walls and floors of exhibition areas. He intends to provoke the audience into a permanent doubt about the way it comprehends reality. The theme of duality reappears in many of his pieces: positive and negative, physical and mental, present and absent, form and non-form, light and dark, male and female, place and non-place, solid and intangible.

Kapoor has often incorporated into large-scale works more literal versions of interiority, being drawn repeatedly towards the use of concave and convex shapes to create areas of emptiness, pockets of absence within dense material. His work challenges our sense of natural boundaries, interior and exterior, and undermines the conventional space with new geometry. He establishes physical precedents of 'smooth' space deformations.

Kinetic and Electronic Art
Kinetic art explores how things look when they move. It is about processes of motion and evolution. It creatively employs inert materials as carriers of forces, so as to extend three-dimensional works beyond the static occupation of space into time and motion. Some kinetic sculptures engaged the viewers' interaction with moving forces, and kinetic art is generally regarded as a precursor to the digital, computer, and laser art of today.

The artist Alan Rath [6] manipulates electronics as both formal and metaphorical elements. He creates inventive sculptures that comment on the symbiotic relationship between humans and machines. Unlike the mobility in many kinetic works, which depends on chance elements such as air movement and temperature, Rath programmed his machines to 'understand' and respond to
their environment. Some of his sculptures are programmed to move in response to the presence of people around them. Some robots interact with each other and some have an algorithm of randomness. His work focuses not only on the movement of the sculpture but on its behavior and movement patterns -- how it reacts and actively responds to the dynamic environment and the viewers.

Rath's work is an example of robotic aesthetics that embodies human gestures and organic qualities. His work choreographs form, movement and interaction to create new meaning. The Wall_Fold installation is interested in continuing Rath's investigation, using new media and virtual space to convey doubt in, and deform, real space. It explores the alteration of the physical space by the use of digital media. This allows me to add new attributes to architecture, such as interaction with the viewer, dynamic changes over time, sound, movement and immateriality, while preserving the physical nature of the space itself.

Previous investigation
In my previous work [7] I investigated the relationship between architecture and media, while criticizing modernism's rigidity and reductiveness. I have experimented with two main strategies (or platforms):

Web Art: bringing space into media
Extension of screen-based applications by exploring three-dimensional (3D) space and navigation (using 3D modeling software, animation and interactive programs, such as Maya, Flash and QTVR).

Example: VOLUME 1.0 - 2002 [with Inbar Barak]
The term volume refers to the intensity of sound and to the dimensions of a space. In this work, the volume is interpreted in the same duality - SOUND and SPACE are defining, and evolving around, each other. The position of the sound-object deforms the space by changing its perspective and depth. In return, the transformation of the space influences the sound level and panning. This project simulates reality, by positioning sound in space, and at the same time extends the real into the potential of the virtual, by allowing the user to move the usually static space around the sound-objects.

Installations: bringing media into space
Merging and overlapping real and virtual, in an attempt to deform the architectural space by using images and 3D models (Maya, VRML, Director, C, sensors). These installations took advantage of the efficiency and availability of the Modern space and undermined it, while leaving it intact and trying to activate Deleuze's 'retroactive smoothing' [8]. Modern space and media were blended together to create smooth space and extend their dimensions beyond the traditional perception (i.e., media was materialized into a three-dimensional space and the Modern 'white cube' was stretched beyond its limited orthogonal rigid characteristics).

Example: FluxSpace, Ross Gallery, New York, 1999 (Maya/VRML 3D animation, projectors and speakers) [9]
Using a 3D virtual model of the gallery, and projecting it back into the same space, real and virtual spaces overlapped. The superimposition of sound, light, text and color reconstructed, distorted and deformed the virtual model, and thus influenced our perception of the real space. The gallery functioned as a filter of data and media. The project allowed the viewer to be simultaneously in real and virtual spaces and to perceive these spaces from the inside, as an immersive environment, rather than as a detached spectator. The gallery was projected with a rendered reality and was in a constant state of flux.

INSTALLATION
Concept
The goal of 'Wall_Fold' is to create a 'smart' physical architectonic element with programmed behavior and changing patterns, in order to generate visual and tactile qualities. Computation and media are used in a physical way, trying to achieve a subjective, temporary, hybrid, flexible, immersive and dynamic personal space. This installation takes advantage of the availability, efficiency and rationality of Modern design. At the same time, it criticizes the rigidity and stiffness of Modern architecture. I propose a strategy which opposes the basic approach of Le Corbusier and the modernists ('destroy and rebuild'): leave Modern space intact and 'undermine' it with digital media. This is the act of smoothing out ("retroactively") using embedded computers (micro controllers).

Development
First prototype: one-dimensional LED sequence
An experiment with a simple system of two micro controllers (i.e. microprocessors operating as embedded systems; in this case I used the PIC 16F877) connected by wires to each other, and light-emitting diodes (LEDs). The micro controllers were programmed with a simple logic code, which consisted of 'IF' statements, and sent 0 or 1 signals between them. Every time a signal was sent (0 or 1), the program turned a correlated LED ON or OFF. An adjustable delay period was set by the viewer's input (in this case, using a potentiometer: a component with variable resistance). This experiment created a closed, linear, binary system: the LEDs turn ON or OFF in a sequence over time. It was a one-dimensional situation: LED = point (0 dimensions) turning ON/OFF on the axis of time, while the state of each LED was determined by the state of its adjacent LEDs.

Second prototype: Servo sculpture
In this phase I challenge the setup of the one-dimensional LED sequence and transform it into a two-dimensional surface. I translate the logic of the code into spatial architectural qualities. The surface is made out of pairs of servomotors connected by flexible vinyl strips (see image
in the first page). Instead of switching LEDs ON and OFF, it turns and folds the surface inside out. Like a 'Moebius strip' (a single topological surface with only one side and one edge), which continues from the inside to the outside, this experiment creates a pliant system that dynamically evolves through different variations, and flips the space from inside to outside and from closed to open. My intention is to create a continuous transition, a whole grayscale between 1 and 0. The experiment is generated by simple code, but results in a much richer spatial condition. Similarly to the users' input in the first prototype, this version may in future development react to the viewers' proximity by changing the speed of the motors and the rotation patterns.

The limited static conditions of 'open', 'closed', 'inside' or 'outside' are now only a single option in this multiple and variable set of complex positions, which are dynamically changing to adjust to individual needs and wishes. For example, the partition can be 10% closed at the top while 90% is open, or 40% inside and 60% outside. This way, I materialize the 'smooth mixture' concept, described by Greg Lynn as: "intensive integration of differences within a continuous yet heterogeneous system. Smooth mixtures are made up of disparate elements which maintain their integrity while being blended within a continuous field of other free elements" [10].

In the 'mixing' and 'folding' process I experiment with the following dualities: input/output, on/off, front/back, single/plural, light/dark, sedentary/dynamic, shiny/matte, 0°/180°, open/closed, and inside/outside.

Prototype Technical Description
Pairs of servomotors are mounted to a 2' x 2' Plexiglas frame and controlled by micro controllers (PIC16F877), with future input from proximity sensors. The micro controllers' code consists of 'IF' statements, sending signals to rotate the motors in relation to the positions of adjacent servomotors, programmed patterns and input from the sensors. Between each pair of servos a horizontal vinyl fabric band is stretched, creating a surface that follows the logic of the program. The motors alternate between two positions, from 0° to 180° and back, translating the binary ON/OFF signals into a continuous transition. The fabric strips have two distinct sides: a silver, smooth and shiny front and a white, interwoven and matte back. I read this configuration as a two-dimensional condition, as a surface which is dynamically twisting between inside and outside, open and closed.

ACKNOWLEDGMENTS
I would like to thank Tirtza Even and Tom Igoe for their guidance and support.

REFERENCES
1. Architect and New Media Artist; M.S.A.A.D (Advanced Architectural Design), Columbia University, 2000; M.P.S. (Interactive Telecommunication), New York University, 2003; B.Arch., Israel Institute of Technology (Technion), 1998.
2. Le Corbusier (Etchells, F., translation), The City of To-Morrow and Its Planning (1929), Dover Publications, 1987.
3. C.I.A.M - an avant-garde association of architects intended to advance Modernism and internationalism in architecture. The 1933 congress had the theme "The Functional City"; its conclusions were published in the controversial document "The Athens Charter".
4. Deleuze, G., and Guattari, F. (Massumi, B., translation), A Thousand Plateaus, University of Minnesota Press, 1987.
5. Lynn, G., Folds, Bodies & Blobs: Collected Essays, La Lettre Volée, 1998, p. 111.
6. Rath, A., ROBOTiCS, RAM Publications, 1999.
7. See: http://www.ruthron.com
8. Deleuze, G., and Guattari, F. (Massumi, B., translation), A Thousand Plateaus, University of Minnesota Press, 1987.
9. In collaboration with Renate Weissenboeck, Atsunobu Maeda and Gernot Riether.
10. Lynn, G., Folds, Bodies & Blobs: Collected Essays, La Lettre Volée, 1998, p. 112.
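The neighbour-driven update logic of the Wall_Fold prototypes, where each LED or servo takes its state from an adjacent element, can be sketched in simulation. This is a hypothetical reconstruction, not the installation's firmware; the original ran as 'IF' statements on PIC micro controllers.

```python
def step(states, seed):
    """One update of the one-dimensional chain: each element copies
    its left neighbour, so a 0/1 signal travels down the line over
    time, as in the LED prototype. `seed` is the value fed in at the
    left end of the chain on this step."""
    return [seed] + states[:-1]

def run(length, seeds):
    """Feed a sequence of 0/1 seeds into a chain of `length`
    elements and return the chain's state after each step."""
    states = [0] * length
    history = []
    for s in seeds:
        states = step(states, s)
        history.append(states)
    return history
```

For example, run(4, [1, 0, 0]) shows a single 1 propagating along the chain: [1, 0, 0, 0], then [0, 1, 0, 0], then [0, 0, 1, 0]. For the servo sculpture, the same states map to motor positions (0 → 0°, 1 → 180°), with the vinyl bands interpolating a continuous fold between the two.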
Digital Poetry Modules
James G. Robinson
Interactive Telecommunications Program
Tisch School of the Arts / NYU
c/o 142 Nelson Street, #3
Brooklyn, NY 11231 USA
+1 347 613 6239
[email protected]

ABSTRACT (as in Muzak), glass (views of the outside) and screens (to
This article details a system of digital word modules, based display news, weather, etc).
on the popular phenomenon of refrigerator magnet poetry,
that alleviate the tedium of public "in-between" places by What all of these strategies had in common was that they
providing a means of interactive play. relied on distraction, rather than interaction. We felt that
this was a limiting view of how to improve the elevator
Keywords experience, especially with the opportunities for interaction
Social awkwardness, digital word modules, magnetic provided by digital technology. Our challenge was to build
poetry, digital text. an installation that could solve the same problems in a
more interactive way -- not only between people and the
CONTEXT / MOTIVATION outside world, but between each other.
This project was originally conceived as a digital solution
to the social awkwardness endemic to elevators. Thus, its Theoretical Parameters
design parameters reflect the limitations of its original The first step in this project was to list a set of general
location. However, since many public spaces share the design parameters that this project would have to follow to
psychic and physical characteristics of elevators, it has the be successful. In our view, any elevator installation would
potential to be useful in spaces far beyond its original have to be:
context.

The Elevator Space
Muzak is regarded by many as a lite-pop monstrosity that is to elevators what bubonic plague was to medieval Europe. But muzak originally served a purpose, "piped into elevators to help people feel safe in this new form of technology." [1]

Nowadays, of course, elevators are considered a very old form of technology, and most people feel comfortable enough in them for muzak to serve as more of an irritant than a comfort. But for many people, anxiety remains, even if it is more of a social fear than a physical one. A number of emotions can be felt between various combinations of people, such as boredom, shyness, flirting, or awkwardness -- few of them comfortable. The goal of this project was to eliminate, or at least minimize, these emotions.

DESIGN PARAMETERS
As noted, this was not the first attempt at making the elevator experience more meaningful. We decided that there have historically been three broad strategies used to engage elevator riders -- that is to say, music ...

• Immediately understandable, since one's stay in an elevator is ultimately a brief one;

• Unobtrusively engaging, because people should feel at ease when interacting with the technology, yet still absorbed in the experience;

• Easily ignorable, as riders sometimes do not want to be disturbed, whether they are already interacting with a friend in the elevator or simply want to be left alone; and

• Warmly inclusive, to encourage riders to interact with each other, not just with the technology.

Practical Considerations
Of course, elevators have their own specific, practical demands. Electricity is often difficult to access, and an installation cannot be too large or obtrusive in the cramped space due to fire codes, building regulations, and the comfort of its passengers. Thus an installation should ideally be small and self-powered. Since the solitude of an elevator can also invite larceny or vandalism, the installation would also ideally be self-contained and inexpensive.

THE SOLUTION
The best inspiration for a digital installation that could satisfy each of these parameters was found in the now-famous magnetic poetry sets stuck on refrigerators across the country. After all, an elevator is like a refrigerator -- a cramped gateway to a more interesting destination.

Magnetic poetry sets consist of hundreds of tiny magnets, each imprinted with a different word in various parts of speech. They can be arranged on a refrigerator in combinations from the ridiculous to the sublime, allowing for the entertainment of the "author" while providing a means for indirect communication of jokes, ideas, and various degrees of poetic thought to others visiting the space.

Refrigerator poetry is often not "poetry" per se, but the limited wordset provided by the magnets does force approximate compositions that reflect many of the ambiguities of verse. Although it takes some thought to build a sentence that rises above nonsense, building meaningful sentences is possible. Many of the fragments created reside in the realm of the cheerfully cryptic, much like most of the proverbs found in fortune cookies or badly translated philosophical texts. This does not detract from the magnetic poetry set; the strangeness of the sentences that arise does not make them any less self-expressive. Best of all, each "poem" can be modified by future visitors, allowing for a uniquely indirect, asynchronous means of communication.

The idea of poetry as a "calming snapshot" is at the heart of the New York City subway's "Poetry In Motion," refrigerator poetry magnets, and this digital poetry installation. "Poetry encourages us to slow down and focus on what's meaningful in life," Andrew Carroll, director of the American Poetry and Literacy Project, told a newspaper in 1999. "It's like a little break. It doesn't take long to read a poem. When you're on the road, sometimes it's hard to sit down and open up a whole novel. You want just a little snapshot of an emotion or an experience." [2]

INSTALLATION DESIGN
With this in mind, we designed a series of digital modules to be installed in elevators, with the idea that these modules could be manipulated to create sentence fragments in much the same way as magnetic poetry words are used on refrigerators. Each module contains a one-row LCD screen and a potentiometer knob used to select words on the display. The modules are currently mounted using strong suction cups.

These modules serve the same purpose as the magnetic words in a refrigerator poetry set, although the words encapsulated in each are not static but dynamically selected. Each module contains around 200 words in a given part of speech; there are noun modules, verb modules, adjective modules, and so forth. Arranged together, they form phrases that range from the cryptic to the profound to the entertaining to the baffling. Since the words are pre-selected, what results is not necessarily poetry per se, but rather more like one-sentence proverbs or brief unfinished haikus.

PROTOTYPE
Each module is based on a simple battery-driven circuit. A microcontroller (in this case, a Microchip PIC16F876) prints different words from a given array to the LCD depending on the potentiometer's value. Around 200 words can be stored on each. Future prototypes would use either advanced microcontrollers or flash memory to store more words. We believe about 1,000 words on each module would be ideal.

Software
A Perl program on the chip development computer currently builds word collections automatically from a given web source. They are classified with the Moby part-of-speech wordlist compiled by Grady Ward ([email protected]) and hard-coded onto each chip. An example of a module's dynamically generated PIC Basic Pro-coded wordlist can be found at http://stage.itp.tsoa.nyu.edu/~jgr225/nouns.txt

Word Selection
To provide a subtle introduction to the user's location or destination, each set of words is culled from a digital source that directly relates to the place being visited. For instance, the initial prototype of this installation used the most popular words from the ITP students' electronic mailing list. Similarly, an installation at a corporate headquarters could use words from the company's website. The prototype modules presented at the 2003 Ubiquitous Computing conference will feature the most-used words from contributors' presentations.
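The PROTOTYPE section's selection mechanism -- a potentiometer value steering which word the microcontroller prints to the LCD -- can be modeled in a few lines. This is an illustrative Python sketch, not the actual PIC firmware; the 10-bit ADC range and the sample words are assumptions:

```python
# Illustrative model of a module's word-selection logic: a potentiometer
# reading (assumed 10-bit, 0-1023) indexes into a fixed word array, the way
# the PIC16F876 firmware picks which word to print to the LCD.

def select_word(adc_value, words, adc_max=1023):
    """Map an ADC reading onto an index into the module's word array."""
    if not words:
        raise ValueError("module has no words loaded")
    # Clamp the reading, then scale it into [0, len(words) - 1].
    adc_value = max(0, min(adc_value, adc_max))
    index = adc_value * len(words) // (adc_max + 1)
    return words[index]

# Hypothetical noun module contents, for demonstration only.
noun_module = ["elevator", "snapshot", "proverb", "haiku"]

print(select_word(0, noun_module))     # knob turned fully one way
print(select_word(1023, noun_module))  # knob turned fully the other way
```

Turning the knob sweeps linearly through the stored array, so a 200-word module divides the potentiometer's travel into 200 equal bands.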

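The Software section's pipeline (gather words from a web source, classify them against the Moby part-of-speech wordlist, hard-code the most popular onto a chip) could look roughly like this. The sketch below is a Python stand-in for the Perl program described above, not the actual code; the tiny word set is an invented placeholder for the Moby data:

```python
from collections import Counter

# Sketch of the wordlist-building pipeline: count word frequency in a
# source text, keep only words of one part of speech (the set below is a
# stand-in for the Moby part-of-speech wordlist), and take the most common.

def build_module_wordlist(text, allowed_words, size=200):
    counts = Counter(w.strip(".,;:!?\"'").lower() for w in text.split())
    ranked = [w for w, _ in counts.most_common() if w in allowed_words]
    return ranked[:size]

nouns = {"elevator", "poetry", "module", "word"}   # placeholder "Moby" nouns
source = "Poetry in the elevator: each module shows a word, and the word changes."

print(build_module_wordlist(source, nouns, size=3))
```

In the real system the source text would be a scraped web page or mailing-list archive, and the resulting list would be emitted as a PIC Basic Pro array for the chip.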
Networked Modules
Because of the lack of a network connection, the wordlists in the prototype had to be hard-coded onto the chip before installation. In network-enabled environments a connection could be set up to dynamically update the lists in real time. Additionally, the words selected could be broadcast to a website to reflect the compositions presented in a given place to the world at large.

BEYOND THE ELEVATOR
It became clear to us during the development and installation of these modules that they have broader applications beyond their original context. They can be installed virtually anywhere indoors, and, with slight modifications, outdoors as well.

Thus, they are ideal for "transitory spaces," that is, public areas where people often pass by, allowing for virtual self-expression and subtle, anonymous communication between strangers. In this sense they are best thought of as ephemeral graffiti, although whereas graffiti is used mainly by gangs to mark territory, these modules are used by everyday people to communicate, however cryptically. In that sense they are not merely objects of distraction but true artifacts of interaction that can hopefully serve to relieve the social awkwardness of public spaces.

ACKNOWLEDGMENTS
Thanks to Eric Liftin, adjunct professor at NYU's Interactive Telecommunications Program, for his useful feedback throughout the design process. Thanks also to the anonymous reviewers who provided helpful feedback on this project proposal after it was submitted to the 2003 Ubiquitous Computing conference.

REFERENCES
1. Devereaux, J. Scores For Stores. Metropolis (March 2003).
2. Quoted in O'Briant, D. Poetry In Motion. Atlanta Journal-Constitution (April 15, 1999).

The Verse-O-Matic
James G. Robinson
Interactive Telecommunications Program
Tisch School of the Arts / NYU
142 Nelson Street, #3
Brooklyn, NY 11231 USA
+1 347 613 6239
[email protected]

ABSTRACT
This paper details the "Verse-O-Matic", an otherwise ordinary printing calculator re-imagined as a playful way to introduce and distribute verse into everyday life. Instead of a numbered keypad, the device's keys represent poetic themes, which can be combined to select and print snippets of great poetry. Just as the invention of the electronic calculator made relatively complex mathematics accessible to the masses, a poetry calculator elevates ordinary discourse by making verse more easily accessible to all.

Keywords
Verse-O-Matic, calculators, poetry, digital publishing, handheld devices, literary databases.

CONTEXT
Before the introduction of the portable electronic calculator in the latter half of the twentieth century, solving mathematical equations was a time-consuming activity. Indeed, it was a rare mark of genius to be able to calculate complicated sums quickly.

The same situation exists today with poetry, a discipline that can be as relevant and meaningful to our lives as mathematics. Just as the proliferation of numbers has helped to revolutionize science and finance, a democratization of beautiful verse can add greater meaning and context to our relationships with each other, our lives and our environment by elevating our communication beyond clichés.

Why is poetry largely ignored in today's society? In large part, it is because it is perceived as inaccessible. One must be highly motivated to begin an exploration of poetry without any previous exposure to verse; as a result, verse is seen as the domain of highly educated and/or pretentious types. (Of course, as many academics know, the two are by no means mutually exclusive.)

Secondly, poetry is sometimes viewed as irrelevant. That may well be a perception borne of ignorance, since, in today's society, poetry garners widespread attention only when it offends us, not when it enlightens us. Witness the recent furor over a controversial poem written by New Jersey poet laureate Amiri Baraka, which led some to call for the abolition of his $10,000-a-year post. [1]

The Verse-O-Matic attempts to remedy this situation by providing a convenient, accessible interface to classic and modern poetry, requiring only a curiosity about life and rudimentary knowledge of a mathematical calculator.

OTHER POETRY PROJECTS
This project is not the only effort to seep poetry into unexpected corners of everyday life. Robert Pinsky's Favorite Poem Project featured dozens of Americans reading their favorite poems on PBS' NewsHour With Jim Lehrer. The selections were chosen from over 18,000 submissions. [2] Recently, the Poetry Society of America's Poetry in Motion campaign has enriched the public transportation systems of 11 United States cities with snippets of verse. "We want to surprise people with it, to put it in the very space where it's not supposed to be," executive director Alice Quinn said in 2001. "Everything else on the subway is trying to sell you something. This offers instead a metaphysical moment in the subway." [3] It is in this spirit that the Verse-O-Matic was designed.

CONCEPTION
This project was originally designed to address the question of how to introduce relevant verse into everyday discourse within a simple, usable interface. In other words, we hoped to create a device capable of a range of expressive output with only a few simple inputs. The printing calculator presented itself as an interesting model because of its simple interface and the flexibility of its printed output. In addition, repurposing an ordinary device to provide unexpected results invites an irresistible spirit of playfulness among its users, making the discovery of new poems an exciting, rather than tedious, endeavor.

DEVICE DESIGN
The Verse-O-Matic is designed to look exactly like a regular printing calculator, with one exception: the usual digits are replaced by nine words, each representing a different poetic theme or emotion: LOVE, HAPPINESS, BEAUTY, HUMOR, AGE, NATURE, SEPARATION, SADNESS, and DESPAIR. Despite the transformed key meanings, the universally recognized calculator format allows new users to easily grasp how the device is meant to be used without special instructions.

INTERACTION
When a key is pressed, the calculator searches its memory to select all of the poems that refer to that theme. Additional themes can be added ("+" = AND) or subtracted ("-" = AND NOT) from the "poetic equation" simply by pressing the appropriate keys. When the user presses "=", the equation is completed and the calculator prints a randomly selected poem that fulfills all of the thematic boundaries that the user has set.

For instance:

"LOVE"
+
"SEPARATION"
+
"SADNESS"
=
"This bud of love, by summer's ripening breath,
May prove a beauteous flower when next we meet."
WILLIAM SHAKESPEARE [4]

If no poems are found, the device emits a warning. The equation resets and the user is prompted accordingly.

OUTPUT
In the prototype, the poem is printed on a mailing label, rather than a supermarket receipt (as originally conceived). This allows the poem to be easily shared once read; it can be used to seal an envelope or be affixed to a personal calendar. The printout also affords a tactile intimacy with the words that cannot be matched in the hulking glare of a computer monitor.

Thus, the project's original purpose of distributing verse is achieved on two levels. First, the user is introduced to a snippet of poetry that touches on the themes he/she has selected. Secondly, the sticker invites the user to share that verse, either personally or anonymously, by forwarding it along to a friend or displaying it in a public place.

Thus the shared verse travels far beyond the original digital database, appearing in a multitude of non-digital spaces. It is no longer a static resident of a digital database but a dynamic, living object to be experienced in our everyday lives.

When this project was first demonstrated at NYU's Interactive Telecommunications Program, students found a host of novel uses for the poetry stickers. Snippets of verse can now be found in the most unexpected places, from trash can lids in the student lounge to microchip programmers in the physical computing lab. This proliferation of verse in unexpected places represents the best expression of the usefulness of the poetry calculator.

PROTOTYPE

Hardware
The original prototype for this project was built using a custom-made keypad and a standard commercial label printer interfaced serially with a Toshiba 335CDS laptop running Linux. The laptop and label printer, while bulky, were used so that a preliminary prototype could be easily constructed and tweaked according to user feedback. In later iterations of this project the laptop will be replaced by a microcontroller and the label printer by a custom serial printer, each embedded in the calculator itself, so that the entire device is completely portable for use anywhere. As noted below, future prototypes will also incorporate networked elements.

Software
Poems are stored in a simple database on the host computer, mediated by a Perl program that monitors the input from the keypad and distributes text to the serial printer accordingly. New verse is entered into the database via a web-based CGI form on the local machine's Apache server, accessible either locally or through a connected LAN. In future prototypes the Verse-O-Matic will be networked to the Internet via an embedded Ethernet controller so that poems can be collected from around the world.

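The "poetic equation" described under INTERACTION is simple set algebra: "+" intersects theme sets (AND), "-" subtracts (AND NOT), and "=" prints a random survivor. A minimal sketch of that logic, with invented poem data standing in for the device's database:

```python
import random

# Each poem is tagged with themes; "+" intersects (AND), "-" subtracts
# (AND NOT), and "=" picks one surviving poem at random.
POEMS = {
    "This bud of love...": {"LOVE", "SEPARATION", "SADNESS"},
    "Some cheerful couplet...": {"LOVE", "HAPPINESS"},
}

def poetic_equation(added, subtracted=()):
    matches = [text for text, themes in POEMS.items()
               if set(added) <= themes and not (set(subtracted) & themes)]
    if not matches:
        return None  # device emits a warning and resets the equation
    return random.choice(matches)

# LOVE + SEPARATION + SADNESS: only the first entry survives.
print(poetic_equation({"LOVE", "SEPARATION", "SADNESS"}))
# LOVE - SADNESS: the sad poem is excluded.
print(poetic_equation({"LOVE"}, {"SADNESS"}))
```

Returning `None` models the warning-and-reset path for an equation that matches nothing.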
POETRY AND TECHNOLOGY
Despite a generally positive response to this project, several individuals have raised concerns about the implications of trying to represent the intricate, emotional art of poetry through a mechanical, "logical" device. Others have questioned the use of a typically mass-produced device to distribute verse, arguing that it may evoke "the commodification of expression and aesthetics". Another typical response is that the presentation of excerpts, rather than complete poems, abandons the depth and complexity of the author's original intent in favor of a less meaningful soundbite.

These questions are all valid, provocative responses to the project. However, we believe that they are not unique to this device, but will be asked of any effort to distribute verse to a wider audience. Since any attempt to tackle these questions is to tempt participation in a host of broader, more contentious intellectual debates about poetry and literature in general, we think that within the context of this paper it is best to address them through the original intent of the piece.

"Emotional Art" vs. "Rational Calculator"
One of the largest challenges in designing this project was selecting the nine themes for the calculator's keypad. Of course, the reason this was so difficult is that any attempt to reduce all verse to nine themes is patently ridiculous. How irrational to argue that the calculus of thematic interpretation is in base ten! Rather, the decision to select nine themes was not to make any grand statement about the structure of poetry but simply to mirror the familiar structure of a calculator keypad so as to make the device as simple as possible for anyone to use.

Just as difficult, for the same reasons, is the classification of submitted verse within the confines of those nine themes. To say a poem is about "love" or "sadness" is an almost meaningless analysis. But, again, some minor form of classification is demanded by the context of the device.

That classification need not be perfect. If we remember that the point is not to create a mathematical structure that allows for the perfect recovery of classified poetry but rather to introduce meaningful verse into everyday life, the question of whether the verse recovered exactly mirrors the user's emotions becomes almost moot. In fact, if the goal is to stimulate the intellect of the user, a snippet of verse that echoes yet challenges the user's original emotions and assumptions is in many ways far preferable to one that exactly reflects them.

"Commodification of Aesthetics"
Because the calculator can be loaded with any snippet of verse, classified by the contributor, we would argue that it represents the sharing, rather than the commodification, of aesthetics. In this sense, the calculator can be seen less as a static reference and more like a highly structured, asynchronous instant messenger device. If these devices were to proliferate, our Verse-O-Matic would not be the same as yours; it would contain different poems from a different circle of contributors: myself, my friends, family, and classmates. Thus, each calculator would be like a literary iPod -- a highly individualized representation of a circle of aesthetic expression.

Poetry vs. Soundbite
The small format of the printed sticker means that in most cases each verse in the Verse-O-Matic is an excerpt of a larger poem, rather than a complete poem. Does an excerpt fully represent the depth and complexity of a poet's complete piece? Of course not. But wonderful ideas can be found in the simplest of sentences, even if they are merely small components of the artist's broader concept.

For this same reason, people often feel comfortable using snippets of poetry in other contexts. If pieces of verse can be used to introduce and enliven essays, prose and speech, why can they not serve as epigraphs for our daily lives?

FUTURE DEVELOPMENT
As noted, the poems on each device need not be standard to each calculator. Just as poetry is a deeply individual and personal means of expression, each calculator could be loaded with diverse collections of verse. A wireless connection could permit users to share their poetry with others anywhere -- on the street, in the subway, or in a workspace. This could be enabled through an infrared connection, as used on handheld devices, or via an 802.11b Wi-Fi connection to more remote devices. A user could eventually use his or her cellphone's number keys to "dial up" a poem on a loved one's calculator while away.

CONCLUSION
There is a Chassidic tradition that insists that everything in the world contains a joy that we must continually discover and unlock. The Verse-O-Matic was inspired by that philosophy. Even a humble calculator can be a gateway to revelation; to happiness; to thought and introspection. If anything, it is a challenge not to poetry or literature but rather to the idea that the joy of beautiful verse can only be discovered in the musty halls of libraries. Rather, poetry should surround, envelop and inspire us wherever possible, freed from the typical boundaries that sequester it in the realm of academia.

ACKNOWLEDGMENTS
We are indebted to Dr. Natalie Friedman, Director of the Writing Center at Marymount College of Fordham University, for her useful perspectives on poetic themes, and to Camille Norment, adjunct professor at ITP, whose patient encouragement inspired us to pursue this idea to completion. Thanks also to the anonymous reviewers who provided invaluable feedback after the first submission of this project to the 2003 Ubiquitous Computing conference committee.

REFERENCES
1. Pearce, J. When Poetry Seems to Matter. The New York Times (February 9, 2003).
2. Rosenberg, H. 'NewsHour' Finds Poetry in the Soul of America. The Los Angeles Times (May 1, 2000), p. F1.
3. Coeyman, M. To Her, Every Spot Needs A Touch Of Poetry. The Christian Science Monitor (April 3, 2001), p. 17.
4. Shakespeare, William. Romeo and Juliet, act 2, sc. 2, l. 121-2.

AURA: A Mobile Platform for Object and Location
Annotation
Marc Smith, Duncan Davenport, Howard Hwa
Microsoft Research
One Microsoft Way
Redmond, WA 98052 USA
+1 425 706 6896
{masmith, duncand, a-hhwa}@microsoft.com

ABSTRACT
In this paper, we describe a system used to link online content to physical objects, implemented with commercially available pocket computers using integrated bar code scanners, wireless networks, and web services. We discuss our design goals and technical architecture and describe applications that have been constructed on this architecture. We also describe the role of the related web site in creating communities around scans collected by the handhelds.

Keywords
Laminated reality, mobile object annotation, communities, mobile devices, bar codes, machine readable object tags, wireless networks

INTRODUCTION
Every object has a story to tell. However, labels and signs can only tell part of this story; there is always an enormous amount more to learn than will fit on a label. Mobile devices are changing this, allowing physical objects to be linked to associated online content. This dramatically expands the space for commentary and services related to the places, products, and objects that physically surround us.

The technical process of linking physical objects to online content has become increasingly straightforward. Adding a tag-reading device to a network-connected portable computer shortens the gap between physical objects and places and the digital information related to them. This enables wirelessly networked devices to cheaply and accurately recognize a wide range of objects and places, and to offer access to information and services pertaining to those objects. It seems reasonable that some form or forms of tag detectors will eventually be common features of most networked information devices. Currently cameras and bar code readers are widely available for cell phones and pocket computers.

We created just such a system, combining widely available wirelessly networked Pocket PC handheld computers with a laser scanner for reading bar codes. Client software was created to integrate these components and connect them with servers available over the public Internet.

The resulting system has applications in many settings. Meta-data about objects with UPC codes, found on almost all consumer products in the United States, can be drawn from publicly accessible online data services. These services often provide the name of the object or product, its size (if it has one) and the name of its manufacturer in exchange for the object's bar coded identifier. Our system uses such a data service to retrieve meta-data that is then used to construct queries for search engines that yield useful and highly relevant results. Scanned objects quickly link back to the web sites of their manufacturers or to online commerce sites that offer those objects for sale. Similarly, books often bear an ISBN number in the form of a bar code. These numbers can be used in queries to online booksellers, making the services offered there, like book reviews, lists of related books, and, of course, purchasing, available with just one scan and a tap.

Figure 1. Mobile device hardware platforms composed of a Toshiba e740 and a Socket Compact Flash Bar Code Scanner.

Figure 2. AURA architecture diagram (symbology mapping, payload cache, and input data components).

RELATED WORK
Several projects have explored the ways objects and places can be linked to online content and services. Ljungstrand et al. (2000) built the WebStickers system to link barcodes to web pages; this was a predominantly desktop-bound system. There is a large body of work on "context-aware" computing (Schilit et al., 1994). Context-awareness refers to the identification of a user's proximate environment for the delivery of computing content or services. Xerox's PARCTAB system uses custom-built infrared transceivers to help palm-sized computers identify their physical environments (Want et al., 1995). The Cyberguide uses Palm PDAs to provide map guides to tourists (Abowd et al., 1997). Positioning in Cyberguide is provided by a combination of custom applications based on infrared sensing (indoors) and GPS (outdoors). MIT LCS' Cricket system deploys custom-built RF and ultrasound beacons for indoor navigation (Priyantha et al., 2000).

The CoolTown project at HP is building context-awareness technologies to provide web presences for people, places and things (Kindberg et al., 2000). Similar to the MIT Project Oxygen (MIT, 2002), CoolTown's main goal is to enable future "nomadic computing" such that computing resources follow the human user and customize the human-computer interaction based on the local human environment.

Our approach is more modest and potentially more broadly deployable in the short term. Our goal is to enable a lightweight way both to access information about physical objects and places and to add annotations to them. This focus is different from, but complementary to, efforts to link physical devices, like printers or projectors, to device-based user interfaces.

HARDWARE PLATFORM
The mobile component of our system integrates three core hardware features: a laser bar code scanner, a wireless network connection, and a PDA. There are a number of alternative sensors that could be usefully integrated into this system, including GPS and wireless network signal strength detection for location information, and readers for the emerging technology of RFID tags. To date we have only made use of bar code readers, but the system architecture is extensible, allowing these or other emerging sensor technologies to generate information that can be used to identify objects or places.

SERVER
The server comprises three components: a web service, a runtime, and local and remote data stores. The web service is the channel the client uses to communicate with the backend server; this is accomplished entirely using remote method invocation over HTTP ("web services"). The web service is the interface to the backend runtime for the clients. The runtime provides the business logic, handling event tracing, retrieval, storage, rating

calculations, and other tasks. The local data stores contain user profiles, barcodes, ratings, and written and speech annotations, which are stored in a SQL 2000 database. Information on books and UPCs is provided by multiple remote data stores, including the Amazon Web Service for books and music and the ServiceObjects Web Service for UPC lookup.

MOBILE CLIENT SOFTWARE
The client is a standalone application on the Pocket PC (as opposed to a web front-end) to support improved user interactivity. Network connectivity is not assumed to be continuous for the mobile client. The client application provides queuing and retry services for the storage and retrieval of data to and from the backend servers. These services are not possible for a thin web-based client. Caches or local stores on the client can dramatically reduce the demand on network access for content. In addition, a client-side application allows for a richer user interface. This is especially true when considering delays and intermittent network connectivity.

Figure 3. User scenario for grocery and related retail environments. Query highlighted the recall of the breakfast cereal by the FDA.
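The queuing and retry behavior described above can be sketched as a pending queue that is drained whenever the network cooperates. This Python model is illustrative only; the actual client's interfaces are not documented in the paper:

```python
from collections import deque

# Sketch of the mobile client's store-and-forward behavior: scans are
# queued locally and the queue is drained when connectivity returns.
class ScanUploader:
    def __init__(self, send):
        self.send = send          # callable that may raise ConnectionError
        self.pending = deque()

    def submit(self, scan):
        self.pending.append(scan)
        self.flush()

    def flush(self):
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                return            # offline: keep the scan queued, retry later
            self.pending.popleft()

sent = []
online = False
def send(scan):
    if not online:
        raise ConnectionError
    sent.append(scan)

uploader = ScanUploader(send)
uploader.submit("9780140449136")  # offline: the scan stays queued
online = True
uploader.flush()                  # connectivity returns: queue drains
print(sent)
```

Keeping the unsent scan at the head of the queue until the upload succeeds preserves scan order across outages.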
CLIENT INTERFACE COMPONENTS
Users can log in to the system by creating a unique username and password combination, either from the mobile device or through the web portal interface. Without an account the device can still be used to scan objects, but it creates an Anonymous User account and all comments created in that context are public by default.

When a user sees an object that interests them and finds a bar code printed on or affixed to it, they point the head of the device at the bar code from a distance of about 6-12 inches and press the scan trigger button, which we mapped to the thumb button normally used to invoke the voice recorder feature of the Pocket PC. If the device acquires the tag's data, the application gives the user feedback and, based on some properties of the bar code data, sends a series of network queries out to appropriate web services.

We have initially created or linked to services to support three types of bar codes: tags created for a local art gallery, UPC (Universal Product Code) codes commonly used to tag consumer products and foods, and ISBN (International Standard Book Number) codes for books. Any number of additional or alternate payloads are possible within this framework to provide services for these or other forms of object identifiers.

These payloads are linked to the resolution service registry, which contains pairs of pattern matches and pointers to related web resources. When a tag is scanned it is matched to an appropriate payload on the basis of the structure of the identifier string. For example, ISBN codes start with "978" and have a total of 13 digits. All bar codes starting with that series of numbers and with that number of digits are assumed to be ISBNs and are submitted to the web services in the client's directory of resolution services that are registered as resolving such codes. We made use of a web service offered by Amazon.com that returns metadata about books and music when passed an ISBN number.

Figure 4. UPC Item Display Screen.

When objects with UPC codes are scanned, the system recognizes that the code does not fall into the other classes of codes and submits the identifier to a UPC mapping service. We made use of a UPC metadata service provided to the public by
ServiceObjects.Net, a commercial web service provider. CONCLUSION
This service returns a set of meta-data about the object and A wave of annotation systems for physical objects is likely
the client presents this data and creates hyperlinks to search to be about to break. Cell phones are already integrating
engines based on the results. For example, when a box of digital cameras and have the processing power needed to
breakfast cereal is scanned the resulting display provides natively decode bar codes. As pocket computers merge
two tap access to search results, the first of which notes that with cell phones the resulting hybrids will no doubt
the product has been recalled due to food safety issues combine a vision system with network connectivity and
related to undocumented ingredients that might cause fatal computation. The widespread distribution of such devices
allergic reactions for some people (figure 4 and 5). is likely to have dislocating effects in many sectors of life.
Retail environments seem the most likely to change as
consumers bring the power of the Internet to bear at the
point of sale.

Figure 5. Search results linked from UPC meta data

WEB PORTAL
Users can access the system through a web portal as well as the mobile device. Users can log into the web site and view their scan history sorted by various properties of the items. Scans can be sorted by time, by product category (books, food stuffs, etc.), or by the ratings or comments of other users or data found in other systems. This creates a simple way to assemble inventories of tagged objects, for example a collection of books, videos or music CDs. Alternatively, it creates a diary-like history of the series of objects scanned while, for example, browsing through a shopping mall or museum gallery.
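The portal's sorting behaviour described above can be sketched as follows. The `ScanRecord` fields and the sample data are illustrative assumptions, not the system's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record of one scan in the portal's history.
@dataclass
class ScanRecord:
    upc: str
    title: str
    category: str
    scanned_at: datetime
    rating: float  # e.g. an average of other users' ratings, 0-5

def sort_history(history, key="time"):
    """Return the scan history sorted by one of the portal's properties."""
    keys = {
        "time": lambda r: r.scanned_at,
        "category": lambda r: (r.category, r.title),
        "rating": lambda r: -r.rating,  # highest-rated first
    }
    return sorted(history, key=keys[key])

history = [
    ScanRecord("036000291452", "Corn Flakes", "food",
               datetime(2003, 6, 2), 2.5),
    ScanRecord("043396042032", "A Day at the Museum", "video",
               datetime(2003, 6, 1), 4.5),
]
by_rating = sort_history(history, "rating")      # best-rated item first
by_category = sort_history(history, "category")  # "food" before "video"
```

Grouping by category yields the inventory view; sorting by time yields the diary-like history.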

Anatomy of a Museum Interactive:
"Exploring Picasso's 'La Vie' "
Leonard Steinbach
Chief Information Officer
Cleveland Museum of Art
11150 East Boulevard
Cleveland Ohio 44106
216 707 2642
[email protected]

Holly R. Witchey, Ph.D.
Manager, New Media Initiatives
Cleveland Museum of Art
11150 East Boulevard
Cleveland Ohio 44106
216 707 2653
[email protected]

ABSTRACT
"Exploring Picasso's 'La Vie,'" a gallery installation as part of a major exhibition, demonstrates how an interactive display can address various learner styles, foster both social and individual interaction, and seamlessly command a fundamental understanding of the rather complex relationship of artists' methods, artists' life stories and the scientific methods that enable their discovery. The interactive demonstrates the roles of x-radiography and infra-red reflectography as important tools in understanding the artist's processes. The museum found that the interactive gave visitors the information and insight they needed to embrace new ways of looking at art. Its effectiveness may have been enhanced by the use of nearby, static, complementary material. Additionally, by conforming the installation of the interactive to the aesthetic of the exhibition, it seemed to be more readily accepted by both museum visitors and staff, which may have added to its effectiveness. Various aspects of intent, design, user experience, and lessons learned are also discussed.

Keywords
interactive, constructivist, learning, art, museum

INTRODUCTION
In the fall of 2001 the Cleveland Museum of Art presented the exhibition Picasso: The Artist's Studio. For Pablo Picasso (1881-1973), the studio was the crossroads of all that occurred in his life and contemporary society. Approximately 36 paintings and 9 drawings demonstrated the central place of this theme in Picasso's work and presented the remarkable variety of ways in which he explored the artist's studio through portraiture, still lifes, interiors, landscapes, and allegories of artists at work. Picasso developed distinctive methods of creating, destroying, and revising images. Because he derived meaning from the very act of creation, studying his process can be crucial to unlocking the meaning of his art. This understanding is revealed to conservators and art historians in great part through x-radiography, infrared reflectography, and other forms of scientific analysis. Therein lies a tale of art, artist, science, and discovery that the Museum wanted to tell. And Picasso's La Vie would help tell it.

This presentation demonstrates and explores how the Cleveland Museum of Art developed and exhibited a large scale interactive display which addresses various learner styles, fosters both social and individual interaction, and seamlessly commands a fundamental understanding of the rather complex relationship of artists' methods, artists' life stories and the scientific methods that enable their discovery. At the same time, this interactive strove to inspire users to return to the real object of delight, the nearby painting itself. As such, it served to augment and enhance the personal experience of the painting, rather than distract from it. The interactive also had to meet the aesthetic rigors of a major art museum exhibition, as well as be easily used by a large number of visitors of diverse ages, aggregations, and cultural and technological experience.

THE INTERACTIVE: "EXPLORING PICASSO'S 'LA VIE'"
Exploring Picasso's 'La Vie' was presented on a 50" diagonal plasma screen, mounted on a wall in vertical orientation, thereby suggesting the size, orientation and gallery context of the painting as well as echoing the proportions and scale of the actual work. The aim is to give the visitor the sense of 'seeing through' layers of the work and personally uncovering the secrets revealed by the investigative techniques of the conservation department. Forward facing speakers were mounted beneath the screen. A wireless mouse was placed on a small pedestal approximately 10' from the screen. (See Figure 1.)

Interface Design
The interface design would only be successful if it were immediately intuitive, if content could be reached in a minimum of steps, if it fostered both group and individual experiences, and if the overall design respected differences in learning styles, remaining responsive to a broad range of visitors. For example, it would have to accommodate constructivist learning methods for the self-directed learner. These would

be the visitors who would want to create their own learning experiences from non-linear encounters with various types of rich media, in this case graphics, video, audio, narratives, and interactive tools for exploring the painting. The interactive would also have to respect the needs of the more traditional learner who requires that material be presented in a more sequential, less demanding didactic form. Both of these responses would have to use the same media objects and interface. To achieve this, the following design features were employed (see Figure 2):

Figure 1. "Exploring Picasso's 'La Vie'" as installed. Other gallery walls (not shown) displayed static, back-lit x-ray and infra-red images of the painting.

• Instantly expanding navigation bars along the themes of "Introduction," "Stories," "Explore," and "Examination Techniques" burst to the left when their iconic representations were rolled over at any time during the interactive's use.

• The Introduction bar allowed the user to view an Introduction, Picasso biographical information (Quicktime movies) or Credits.

• The Stories bar allowed users to experience illustrated main themes of discoveries about the artist's process through an animated, narrated, detailed look at specific areas of the painting. A section of the painting was panned either in normal, x-ray or infrared view, as appropriate. Iconic cues and story names helped users choose stories. At any time during the narrative they could hit an "Interact" button which would switch the image to the area of the painting being discussed. They could then use a slide bar to morph the image between x-ray, infrared or normal states, as pertinent. A time bar helped users easily decide whether they wanted to view the whole narrative, proceed to "Interact," or move to another section entirely. We believed that this information helped the visitor make the most efficient use of his or her time and eliminated the frustration of not knowing how long a narrative would take. Finally, a small representation of the entire painting highlighted the area being discussed. In all of these ways, the visitor experience could range from a sequential and rather passive playing of a series of interesting stories to a non-linear discovery of stories (or parts of stories) and personal explorations.

• The Explore bar provided a choice of "magnifying" glasses with which users could examine either a magnified view of the painting, the infra-red image, or the x-ray image. If the user passed over a significant area, a pop-up text box would tell its story. A "Reveal Clues" button caused the painting to be overlaid with white circles where the stories could be found. This section of the interactive served two important functions. First, it reinforced the Story narratives (or vice versa) through more of a discovery approach. Second, it familiarized the museum public with how to read infra-red and x-ray images, much as a conservator would do. This newly acquired skill could be put to good use as visitors looked at the large static infra-red and x-ray images hung on the walls nearby.

• The Examination Techniques bar brought users to six scientific tools of conservation discovery: x-radiography, infrared reflectography, optical microscopy, ultraviolet light analysis, sampling and cross-sections, and scanning microscopy. These features included animations (e.g. x-ray penetration) and behind-the-scenes videos showing conservators applying these techniques in the Museum's conservation lab on real works of art. We believe that the understanding of process results in a better understanding of result. Also, museum visitors are often intrigued by "behind the scenes" activities, and some visitors, more interested in science and technology than art, might use this section as an entry point and be intrigued enough to explore the rest.

• Overall, this interactive provided visitors with ample opportunity to pursue their own approach and interests. The traditional learner could literally "start at the top" and work down through the introductions to stories to explorations to the techniques, with very little demand for interaction --- no content ever requires more than one click. [A second click moved from a story to

"interact," or changed modes of magnifying glass.] On the other hand, more discovery-oriented users could explore all the options, carefully choosing those items that seemed of interest at any moment and in any order, allowing knowledge to be built in a more personalized way.

Upon opening the exhibition, staff believed that it would be helpful for volunteers to assist visitors with the interactive. This proved counter-productive, as will be described below.

In addition to assuring the interactive's ease of use, the museum recognized that many visitors were unlikely to wait long to use it or spend a lot of time experiencing each feature. These concerns were accommodated in two ways. First, the use of the large screen and distant pedestal with mouse made group viewing feasible and comfortable. Visitors could easily benefit from the stories or other activities that the user was initiating either while waiting their turn, or in lieu of it. Second, large, static, rear-illuminated x-radiography and infra-red reflectography images of La Vie were in the same room with the interactive, providing analogous insights from the conservation research. These served as a preparatory resource for those who were waiting to use the interactive, bolstered the information gleaned from the interactive, or provided information in lieu of using the interactive.

Finally, regardless of visitors' learning style or comfort with technology, the goal of this project was to inspire them to return to and look more closely at the art. We also hoped that visitors would internalize this experience and apply their new insights to the way they viewed Picasso's other paintings. We believe we were successful.

FINDINGS AND LESSONS LEARNED
Rather than pursue a formal evaluation, the Museum chose to rely on periodic observation by staff and anecdotal feedback for its overall assessment. Findings and lessons learned follow:

• Because some staff believed that computer experience among museum visitors would be very low, a volunteer was initially present to help use the interactive. However, rather than foster the visitors' personal exploration, the volunteer often became a guide through the content and visitors remained passive and complacent about this. This defeated the purpose of self-directed learning and exploration. We believe this situation occurred for several reasons. Both volunteers (or docents) and museum visitors are accustomed to, and comfortable with, the traditional museum education/lecture/tour model whereby visitors, for the most part, are rather passive receivers of structured information. Therefore it was easy for both groups to fall back into these roles. It is also likely that the seeming appreciation of visitors for the assistance (even among those who might have liked to just give it a go without being observed by a staff member) reinforced the situation. This pointed out the need to reorient the volunteers to the objective of their assistance: the comfort of visitors with a new means of self-directed discovery and education, rather than use of the device as a teaching or demonstration tool. It should also be noted that this type of experience did not preclude visitor use: many visitors did try, and usually had little problem with the interactive. Nonetheless, between the slight dissuasion of some visitors from use of the interactive, and the apparently higher than expected level of user computer proficiency, the use of volunteers was abandoned.

Figure 2. Full screen view of "Exploring Picasso's 'La Vie,'" showing all menu bars "burst" to the left, for illustrative purposes only.

In the absence of volunteers, we observed that users who did not immediately grasp how the interactive worked seemed to work it through and often received help from other members of their party (such as their children) or even other visitors (simply as a polite gesture or because they

were waiting to use it themselves). This help was an interesting phenomenon, and we can surmise that it was at least in part borne of the open and shared experience it presented (more on this below). Conversely, if this interactive were constructed as a single-person or small group device, we don't think such unsolicited help would have been forthcoming; if it did occur, it might have been perceived as more of a means of hurrying up the hesitant user rather than pure benevolent assistance. If we decide in the future that visitors should receive assistance, then those providing assistance would have to be trained to focus on the visitors' independent use of the device.

• Clusters of visitors, both users of the interactive and observers, appeared to simultaneously find it engaging. We attribute this to both the quality of content and the comfort and ease with which the experience could be shared.

• In family groups, parents seemed pleased to see their children enthusiastically engaged with the interactive; they sometimes had to drag the kids away.

• A gender difference with respect to how the interactive was used was observed. It seemed that women most often engaged in a random hunt and peck through the menu system and sampled content, while men seemed to engage sections in more depth, were more likely to interact with the content, and would go through more stories sequentially and completely. However, this may be biased by placement of this portion of the exhibition near the exhibition gift shop; we suspect that the interactive proved a good diversion for men waiting for wives to finish shopping.

• We were surprised at the importance of the static images in the gallery. For many parties, a sort of "teamwork" occurred. While one member was using the interactive, the other(s) would study complementary information in the wall-mounted images; then they sometimes switched roles. In sum, the combination of the two activities appeared to lengthen the overall duration of their experience with this section of the exhibition. The static images also provided more opportunity for visitors to focus on a single aspect of the painting. Additionally, we observed that while some visitors did not use the interactive and only referred to the static images, virtually everyone who used the interactive also referred to the static images; virtually no one relied on the interactive alone. This suggests that the effectiveness of interactives which portray rich information and complex concepts might benefit from accompanying complementary and reinforcing material. However, we do not know how effective the interactive alone would have been. It is also possible that the existence of the static images mainly allowed interactive users to pursue an interest that had been piqued, while allowing someone else to try the device. Perhaps in part the interactive acted as a dynamic sampler of the static exhibit. In the future we will give more consideration to the use of supplemental material.

• We believe the resemblance of the installation to a painting hung on a wall, and its accord with the exhibition's overall aesthetic, helped engender its broad acceptance and success. Although the vertical orientation of the interactive's image required resolution of some interesting programming issues, it was well worth the effort. The images provided an excellent proxy for the actual painting. The ready association of the interactive's screen image with the art made the "technology" more transparent and brought greater focus to the content. Yet, for the visitor, it did not at all replace the experience of the original object. Rather, viewers went back in search of the actual painting, which was several rooms away. Having learned that there were hidden images within the painting, some of which were indeed somewhat perceptible if one knew where to look, many visitors sought out the actual painting and took ownership of the ability to discern the heretofore undiscernible. This has significant implications for the potential of museum interactives to teach visitors tools and techniques for their understanding and appreciation of art, beyond factual or contextual information.

• Curators and exhibition designers were not accustomed to technological augmentation of traditional art exhibitions, and were concerned that an interactive device near an object would distract from the original. Yet the effort visitors made to compare the information from the interactive to the actual art demonstrates how interactive media can stimulate interest in, rather than supplant, the art experience.

• Feedback about the interactive from staff, visitors, the Trustees, the press and others has been overwhelmingly positive, and has helped engender support for continued use of interactives in permanent and temporary exhibitions. In 2002, the project received an American Association of Museums' Muse Award.

CONCLUSION
Far-reaching and complex goals were established for the production of Exploring Picasso's "La Vie," and we believe that our goals were substantially met. The number of users who returned to the actual work of art to take a closer look is especially noteworthy. In the future more consideration will be given to the role of supplementary materials with interactives as reinforcements or adjuncts, and to the way in which tools and techniques may be taught to visitors, as compared with facts and context.

ACKNOWLEDGEMENT
The authors wish to acknowledge the contribution of Cognitive Applications, Inc., Brighton, England, and Washington, D.C., who created Exploring Picasso's 'La Vie' with the Cleveland Museum of Art.
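The slide-bar "morph" between the painting's normal, x-ray and infra-red states described above is, at heart, a per-pixel cross-fade. The sketch below is our illustration of that blending, under the assumption of simple greyscale images; it is not the installation's actual code:

```python
# Per-pixel linear blend between the normal-light image and the x-ray (or
# infra-red) image; slider position t runs from 0 (painting) to 1 (x-ray).

def morph_pixel(normal, xray, t):
    """Blend two greyscale pixel values for slider position t in [0, 1]."""
    return round((1 - t) * normal + t * xray)

def morph_image(normal_img, xray_img, t):
    """Blend two images (lists of rows of greyscale values) pixel by pixel."""
    return [[morph_pixel(n, x, t) for n, x in zip(nrow, xrow)]
            for nrow, xrow in zip(normal_img, xray_img)]

normal = [[200, 180], [160, 140]]  # tiny stand-in for the painting
xray = [[0, 100], [50, 20]]        # tiny stand-in for the x-ray plate
halfway = morph_image(normal, xray, 0.5)
```

Dragging the slider simply re-renders the blend at the new value of t, so intermediate positions reveal the hidden image gradually.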

Facilitating Argument in Physical Space

Mark Stringer, Jennifer A. Rode, Alan F. Blackwell and Eleanor F. Toye


Computer Laboratory, University of Cambridge
+44 1223 763500
{ms508, jar46, afb21, eft20}@cl.cam.ac.uk

ABSTRACT
We have created a ubiquitous computing application which will facilitate discussion. The system applies radio frequency identification (RFID) and tangible user interfaces (TUIs) to the World Wide Web. It uses TUIs to permit users to explore and construct both sides of a debate. While our initial evaluation focuses on school children, during our demo conference attendees use our interface to participate in the debate of the pertinent Ubicomp topic: "Will ubiquitous computers replace paper?". Our interface moves beyond WIMP to bring argumentation and debate into the tangible realm.

Keywords
Computer Supported Collaborative Argumentation (CSCA), Tangible User Interfaces (TUI)

INTRODUCTION
A regular theme in Human-Computer Interaction research has been the development of systems that help people impose structure on complex interpersonal communication. Systems for Computer Supported Collaborative Argumentation (CSCA) [2], requirements capture [3], discussion thread management [15], and others provide interactive visualisations of human communication, in a way that assists users to address complex topics (such as system design) [5] or work through contentious issues (such as industrial relations disputes) [9]. Typically these systems work by helping users to focus on the structure of discussion, for example noting when new contributions are intended to clarify, support or rebut earlier statements.

One of the challenges in building systems like these is that they are typically implemented to run in a conventional computer environment, as an application under a WIMP operating system [2,3,4,5,9,10,15]. Although many such systems are designed for use by multiple users, each user sits in front of his or her own screen, contributing to the discussion by operating the keyboard and mouse at that screen. It is possible to augment the discussion via video or tele-conferencing (especially if some participants are remotely located), but this introduces many obstacles to effective collaboration. In particular, the introduction of a shared representation is only of value if it then supports deixis – semantic reference to a specific component of the discussion (e.g. pointing) [1]. The whole purpose of CSCA systems is to help structure argument through the provision of a shared representation that enables participants to make deictic reference to specific structural components of the argument.

Video and tele-conferencing systems are particularly poor at supporting deixis. Although many research attempts have been made, video-conference systems do not yet support gaze inference such that one participant can tell what another participant is looking at. Pointing is a key element of deixis, but it is very hard to create multi-user systems that allow participants to communicate by pointing at their screens. If the whole structure fits in a single screen with no zoom or pan, then it is possible to implement multi-cursor pointing systems. Alternatively, one user can be in control of a display that is broadcast to many screens. If each user is allowed to control their own view (i.e. true collaboration), and if the visualisation does not fit within one screen (i.e. truly complex argument rather than toy examples), then it is practically impossible to establish socially appropriate interfaces for collaborative argumentation.

These factors have motivated us to take a ubiquitous computing approach to the support of collaborative argumentation [16,17]. Rather than using conventional screen and keyboard interfaces, we have created a large scale physical interface that can be distributed across a room or over a board table. Participants in an argument can move freely about the room, pointing to, picking up or moving physical objects that represent elements of the argument structure.

While our departure from WIMP interfaces for computer support for argumentation is novel, so is our approach to argumentation. In ancient times the study of rhetoric began with simple forms such as fables and storytelling and progressed through more complex forms to the sophistication of parliamentary debate and legislation [6]. In the field of ubiquitous computing a considerable amount of work has been done on support for narrative creation by
children – the beginnings of rhetorical education [7,13]. Meanwhile, work in the field of computer-supported cooperative argument has focused on rhetoric in its most accomplished forms of industrial negotiation and legal argument [4,9]. There has however been very little work which has focused on the first exercises in persuasive rhetoric that are used to lead the student step by step to the heights of rhetorical complexity. We have chosen to bridge this gap and focus on the classical rhetorical exercises of encomium and vituperation, where a student praises or criticises a topic or an individual. These exercises break down the construction of an argument into a series of manageable steps that ensure the participants cover all of the necessary ground and organise their knowledge and the fruits of their research as effectively as possible.

Both the rhetorical focus of our system and our approach to ubiquitous computing were designed for use in schools, to facilitate part of the English national curriculum [11] that teaches argumentation and discussion skills to students (see Figure 1). It is particularly useful to see visualisations of argument structure in the classroom. Teaching argument demands that the teacher be able to refer explicitly to the argument structures being developed by the children, in order to provide a relevant critique of a malformed argument, or explain ways the argument could be made more persuasive. In addition to this natural fit to the classroom context, we also believe that it is especially valuable to design ubiquitous computing systems that are constrained by a specific application domain. Many ubiquitous computing research projects have created products, middleware or technical architectures that have no clear application. To avoid this trap, we voluntarily accepted the strict design constraints of the school environment, and of the highly prescriptive English National Curriculum, in order to focus our activities on the creation of a system that addressed a genuine need. We have called this research strategy Curriculum Focused Design [12].

Figure 1. Argument Formation Cycle

We are however convinced that the classroom is not the only forum that will benefit from computer support for the learning of skills in argument and persuasion. We intend to explore further the possibilities of using our system with older children and adults.

TECHNICAL APPROACH
One of the constraints imposed by the classroom context is that the technology base for the ubiquitous computing system must be extremely robust. The classroom is a physically demanding environment, with little tolerance for equipment failure. We therefore selected a well-established communications and sensing infrastructure, based on radio frequency ID tags and readers (RFID). The physical tokens of argument contributions are augmented with RFID tags, and the argument structure is represented by a series of RFID readers. The RFID readers are networked to a central server, which generates a real-time visualisation of the developing argument for projection onto the wall of the classroom.

Users interact with the application by placing statements which are augmented with RFID tags on the readers. Each reader has a prompt, and together they form a trail which takes the user through an argument – either for a position, against it, or showing understanding of both sides – in small and easily managed steps. Every time the user places a statement on a reader, this change in state in the TUI is reflected in the GUI. The aim is to use the GUI and TUI in combination to allow the user to do two things: firstly, to organise the statements relevant to an argument according to the loose structure provided by the prompts on the readers; and secondly, to deliver a speech for the point of view she has set out, using both the TUI and GUI as visual aids.

EVALUATION
We have evaluated our design approach over a period of six months, with a range of prototypes exploring the technical approach above. Our iterative prototyping design method commenced with "low fidelity" prototypes that explored the use of the spatial interface within an actual classroom lesson, but only provided limited automated functionality through the use of RFID. Some automated functionality was simulated during these experiments via the "Wizard of Oz" technique [8], where a researcher controlled the computer interface to test alternative designs with minimal development effort. After ten generations of prototypes, we have developed an effective and technically operational system that has been evaluated under lesson conditions [14].

Evidence Selection
Our early prototypes focused on the collection and labelling phases of the Argument Formation Cycle. We had observed that children read source web pages; they then evaluated and selectively highlighted, and then grouped together relevant pieces of evidence. These groups were then named (e.g. 'trust', 'sightings', 'evidence and backing up') and claims or statements on each theme used to structure the argument. We gave children small stands incorporating a whiteboard on which to write a statement, and a set of clips

to attach collected evidence supporting that statement (Figure 2). We intended that RFID tags in the documents would be recognised by an RFID reader in the stand, so that the logical relationships between statements and collections of documents would be recognised by the system.

Figure 2. Iteration #3 prototypes for grouping selections

Argument Construction
For each stage in the argument an "Activity Square" was produced – a large card stating what the student should do at that stage of the argument – e.g. "Say something good about graffiti" (see Figure 3). Then we used statements produced in earlier stages of the Argument Formation Cycle as counters in a rhetorical board game (see Figure 4). Children placed labels with these statements on suitable activity squares in order to structure their argument using evidence found in their research.

Figure 3. Prototype providing rhetorical structure

Figure 4. Selection tags, each with an RFID tag & LED

Argument Presentation
The final stage of argument formation is linearization: the argument structure is turned into a linear form which can be delivered as a speech. By arranging the TUIs, the users construct an argument which is also represented on a projected display. Users can use the TUI to trigger the GUI to display content relating to the specific section of the argument they are verbally presenting.

Figure 5. Triggering a transition in the GUI by placing the 'section viewer' on the activity square.

Observations
During the course of our evaluations we have spent upwards of twenty hours observing children in schools debating three different topics. All of these children succeeded in using the TUI to construct an argument which satisfied their teacher. The teacher found several of these arguments surprisingly articulate. Students who used the TUI were able to interact better with their audiences while presenting. We have formed a number of preliminary gender-related observations: our female students seem to lead in the coordination and structure of the argument, whereas our male students have focused much of their energies on understanding the relationship between the GUI and the TUI.

We learned that making many small changes through multiple iterations allowed us to isolate the effect of the physical affordances vs. the effect of technology. We were forced to switch from low to high frequency RFID tags to allow for clear grouping of multiple statements on an activity square. These trials have dictated our plans for further technological development. We observed how children enjoyed stacking the early box-shaped statements, as well as how it helped with argument presentation. This resulted in our plan for a future box-shaped prototype that will permit stacking by containing both an RFID reader and a tag.
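The reader-to-projection flow and the linearization step described above can be sketched as follows. This is a speculative illustration of the described architecture, not the authors' code; all class, prompt and tag names are our assumptions:

```python
# Sketch: each RFID reader placement updates the server's argument state,
# which is re-rendered for projection, and the finished structure is
# "linearised" into the order in which it will be delivered as a speech.

class ArgumentServer:
    def __init__(self, prompts):
        self.prompts = prompts   # ordered list of (reader_id, prompt text)
        self.placements = {}     # reader_id -> list of statement tag ids

    def on_tag_read(self, reader_id, tag_id):
        """Handle a reader event: record the placement, return the new view."""
        self.placements.setdefault(reader_id, []).append(tag_id)
        return self.render()

    def render(self):
        """Projected GUI view: every prompt with the statements placed on it."""
        return "\n".join(
            f"{prompt}: {', '.join(self.placements.get(rid, ['(empty)']))}"
            for rid, prompt in self.prompts
        )

    def linearise(self):
        """Flatten the structure into speech order (the trail of readers)."""
        outline = []
        for rid, prompt in self.prompts:
            outline.append(prompt)
            outline.extend(self.placements.get(rid, []))
        return outline

server = ArgumentServer([("r1", "Say something good about graffiti"),
                         ("r2", "Say something bad about graffiti")])
server.on_tag_read("r1", "statement-07")
view = server.on_tag_read("r2", "statement-12")
speech = server.linearise()
```

Because the projection is regenerated on every tag event, the TUI and GUI stay in step, and the same state yields the linear outline for the spoken presentation.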
CONCLUSION
We have developed an approach to locating conversational
argument processes within a physical space through the use

of ubiquitous computing. This provides a far richer outcome from ubiquitous computing than previous attempts to integrate video and screen-based visualisation. This argumentation system, while a novel use of technology, also provides a good example of user-centered design. Careful iteration and attention to the needs of users will help ensure a socially appropriate interface for collaborative argumentation which is more likely to be adopted by potential users. Our system will promote natural interaction with evidence, presented via TUIs and paper documents. The demonstration should be of interest both as an illustration of this type of application in use, and as a novel way for delegates to engage with an important question for the future of ubiquitous computing.
ACKNOWLEDGMENTS
This research is funded by European Union grant IST-2001-34171. This paper does not represent the opinion of the EC, which is not responsible for any use of this data. The industrial design of the prototypes is thanks to Chris Vernall. We would like to thank Philip Wise and Gordon Williams for their assistance in preparing the demo photography.

Box. Open System to Design your own Network
Victor Vina
Researcher
Interaction Design Institute Ivrea
Ivrea, TO 10015 Italy
+39 0125 422 11
[email protected]

ABSTRACT
Box is a modular architecture that supports distributed, self-regulated networks of information products. The system combines a server application, an on-line visual language and a collection of wireless devices to provide an environment where networks combining these physical objects and digital information can be easily created and maintained. The system allows real-time, collaborative construction of networks of information products across remote locations.
The Box system aims to offer an insight into the basic elements that configure information networks, analysing the implications of using ubiquitous wireless devices as nodes of these networks.
Keywords
Connected Communities, Wireless Appliances, Visual Languages, Network Visualization, Information Flow.
INTRODUCTION
The architectural spaces we inhabit will become an interface between humans and on-line digital information. Wireless networks are becoming widely available, and an increasing number of devices and information appliances are starting to communicate through these networks.
Exploration is needed on computational environments that mix digital media and the physical environment. Tangible interfaces are becoming an increasingly popular design strategy as computational elements hybridize and become smaller and more ubiquitous [1]. The Box system provides a tool-kit to explore physical computation that places information in private or shared social spaces and renders information that can be grasped, literally.
Proposal
A modular system —Box— is proposed to physically couple virtual and actual space, and to network an unlimited number of entities, creating a participatory environment for communication and information exchange. The system adapts the notions and principles of software architectures to the world of tangible artifacts.
The context of the proposed system will be a connected community: a group of people in local and/or remote locations who often communicate through information networks. The ultimate goal is to create an open platform to facilitate the exchange of knowledge and the interplay of ideas, to create immersive information experiences, and to integrate users into the design process, transforming them from atomised, passive consumers into active interpreters of information [2].
From Content to Structures
Computer networks allow people to dynamically interact with a collection of media, an internal structure, and a diversity of interfaces. The Box system aims to offer an insight into the elements that are part of this internal structure.

Fig 1: Box focuses on the internal structure of information networks.

This approach will allow the development of new tools and methods for embedding computation in everyday things so as to create information-containing objects, researching how new functionality and new uses can emerge from collections of interacting artifacts, and ensuring that people's experience of these computational environments is both coherent and engaging in space and time.
SYSTEM ARCHITECTURE
The Box system integrates a server application, an on-line interface and a collection of modular, wireless physical devices, called Boxes.

Fig 2: Box system architecture.

Boxes can be distributed around a building or public space. They communicate wirelessly with a PC which routes the data to the multiuser server that holds the internal structure. Every PC acts as a network hub that can communicate with up to 255 boxes.

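The internal structure held by the multiuser server — addressed boxes joined by channels, with transformations applied in transit — can be sketched as a small data model. This is a hypothetical reconstruction in Python for illustration only; the actual system was written in Lingo, and all class and field names here are invented:

```python
# Hypothetical sketch of the server's internal structure: addressed boxes
# joined by channels, with a transformer applied to data in transit.
# Names are invented for illustration; the real system was written in Lingo.

class Box:
    def __init__(self, address, kind):
        self.address = address      # ID embedded in the micro-controller
        self.kind = kind            # "input" (sensor) or "output" (actuator)
        self.last_value = None

class Channel:
    """Directed link carrying data from one box to another,
    optionally through a transformer function."""
    def __init__(self, source, target, transform=None):
        self.source, self.target = source, target
        self.transform = transform or (lambda v: v)

class Network:
    def __init__(self):
        self.boxes, self.channels = {}, []

    def add_box(self, box):
        self.boxes[box.address] = box

    def connect(self, src_addr, dst_addr, transform=None):
        self.channels.append(
            Channel(self.boxes[src_addr], self.boxes[dst_addr], transform))

    def on_sensor_data(self, address, value):
        """Route a reading from an input box to every connected output box."""
        for ch in self.channels:
            if ch.source.address == address:
                ch.target.last_value = ch.transform(value)

# A motion-sensor box (address 1) driving a sound box (address 2),
# with a transformer that scales the reading.
net = Network()
net.add_box(Box(1, "input"))
net.add_box(Box(2, "output"))
net.connect(1, 2, transform=lambda v: v * 2)
net.on_sensor_data(1, 10)
print(net.boxes[2].last_value)  # -> 20
```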
Server Application
The server application, developed in Lingo (Macromedia Director and Macromedia Shockwave Multiuser Server), maintains the networks and visualises them through a visual language: the type and location of the boxes, channels for the flow of data, objects that collect information from web databases, and other constructs which dictate how the information is transformed and transmitted between each one of the physical devices.
Visual Language
A simple visual language has been integrated with the system to allow creation and visualization of dynamic information structures. This visual language, based on a model that represents the flow of information, allows the visualization of a variety of information networks: from a web log to an e-mail list, from an ATM machine to a cell-phone voice messaging system.

Fig 3: Representation of an e-mail list with the Box visual language.

The visual language is based on 5 different basic types of constructs: Boxes, Containers, Transformers, Transceivers, and Channels.
Physical Devices
The system provides a collection of information devices, embedded into the simple shape of a cardboard box. These objects have just one function and limited affordances: one of these objects presents an antenna that goes up and down, another one detects movement nearby, another one emits sounds, etc. Some of them are able to display information through an embedded screen or a small printer; others are able to gather data through sensors or switches.

Fig 4: The Box system can combine an unlimited number of information-containing objects.

Ignoring the shape of the objects and their affordances, the user can focus on what the boxes do, and not on the way they look. Their size responds to the need for objects that are portable: objects that you can place in any location but do not carry around with you. They present a small antenna to indicate that they can communicate wirelessly with other entities.

Fig 5: Boxes illustrate information products: wireless devices that can communicate with the network. The first row depicts output boxes, while the second row depicts input boxes.

The modular nature of these objects allows configuration of more complex artifacts by combining them with the virtual structure supported by the on-line interface.
Hardware Kits
Massimo Banzi, Technology Professor at Interaction-Ivrea, has developed a custom PCB (Printed Circuit Board) to allow simple and cost-effective production of wireless physical devices. The kit is based on the PIC series microcontrollers from Microchip and uses the BIM2 transceiver from Radiometrix to provide RF (Radio Frequency) communication with the hub computer. Wireless communication is controlled with S.N.A.P. (Scaleable Node Address Protocol), an open and free protocol developed by High Tech Horizons.

Fig 6: Electronic kits allow easy construction of wireless devices.

The kits allow interaction design students and non-technical people to create their own networked devices by plugging a sensor or actuator into these pre-made kits. Each device has an identification address embedded into the software of the micro-controller. This address is visible both on the physical device and on its virtual representation on the online interface.
Online Interface
Combining the elements of the visual language and the physical devices in different configurations, users can create an unlimited number of information networks. Any

number of users with an internet connection can collaboratively view and modify this internal structure. As the structure is stored on-line, boxes can be placed in remote locations, allowing platforms for communication and information exchange that combine boxes far away from each other.

Fig 7: The on-line interface allows collaborative construction of distributed networks of wireless devices.

This separation between tangible objects and virtual structure provides an environment with unlimited potential for expandability, based on the recombination of simple modules.
This concept of information appliances [3] is closely related to the idea of replacing the computer with a number of highly interconnected specialised devices in the ubiquitous computing scenario above. Norman argues that what makes the personal computer so complex and difficult to use is that it aims to do too many things for too many different users. By replacing the universal computer with objects optimised for a single task or activity, we can overcome many (if not most) of the usability problems associated with computers. To get more complex functionality, users should be able to combine the functionality of several objects, hence the need for communication between them.
This solution might not be as simple to implement as it might first seem; nevertheless, if we move beyond usability considerations, the concept of information appliances can be an interesting basis for reconsidering what information-containing objects might be like.
INTERACTION
Creating Networks
Every Box has a unique ID number and a visual representation on the online interface. In this on-line environment, participants can combine the elements of the visual language with the boxes, interconnecting modules, gathering data from sensors and switches, and transforming and routing data from external sources like web databases to the physical objects.
This provides a collaborative environment where several participants can remotely engage in real time in the construction of platforms for communication and information exchange. As soon as members of the community are able to define a code and agree on the role each box undertakes, they can actively participate in the construction of information networks.

Fig 8: Printer Box archives subject lines of messages sent to the internal e-mail list of Interaction-Ivrea.

A number of different applications have been prototyped at Interaction-Ivrea: networks to create awareness of the activity in the building, or to archive discussions of e-mail lists; networks for personal communication, or for continuous visualization of dynamic data such as stock market values, weather forecasts or newspaper headlines; networks to foster social interaction, to provoke debate about academic issues, etc. The applications prototyped do not try to be exhaustive, but to open up a new design space.

Fig 9: Debate network. Users could rate a particular issue displayed on a Box with an embedded LCD display by moving the slider of another Box. Average results over time are shown on a meter Box.

Distributed Systems
Many small, simple, independent elements can interact with each other to perform useful outcomes. By examining distributed systems we will change the way we think about design problems. But there is a trade-off between efficiency and robust adaptability. Simple machines can be efficient, but complex distributed systems are often not. Looking at complete systems changes the problems for design, often in favorable ways. The emergent complexity of decentralized systems is achieved through the interaction dynamics of multiple simple components all acting in parallel, each with their own set of simple rules.
Decentralized —distributed— models can integrate better with the social dynamics and learning processes found in connected communities. Until recently there have been few alternatives which would allow people to experiment with decentralized systems. Resnick has been building a number of new tools for kids at the MIT Media Lab which allow novices, scientists and designers to explore decentralized thinking [4]. His hope is that these conceptual tools will help people move beyond the centralized mindset.

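The debate network of Fig 9 — slider ratings averaged over time on a meter Box — suggests one simple piece of arithmetic worth making explicit: the meter can show a running mean without the hub storing the full rating history. An illustrative sketch (not from the paper; names are invented):

```python
# Illustrative sketch (not from the paper): a meter Box showing the
# average of slider ratings over time, using an incremental mean so the
# hub never needs to store the full rating history.

class MeterBox:
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def add_rating(self, rating):
        # Incremental update: new_mean = old_mean + (x - old_mean) / n
        self.count += 1
        self.mean += (rating - self.mean) / self.count
        return self.mean

meter = MeterBox()
for rating in [3, 5, 4]:          # three slider readings over time
    level = meter.add_rating(rating)
print(level)  # -> 4.0
```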
He contends that the best way to develop better intuitions about decentralized systems is to construct and play with such systems. The Box system follows this line of enquiry, proposing an open platform to encourage research and experimentation, allowing new devices to be incorporated into the system and providing an environment where self-regulated networks combining these objects can be created.
PHYSICAL COMPUTING
The Box system has been integrated with the academic program of Interaction Design Institute Ivrea, in order to teach the fundamentals of physical computing and networked appliances. Visiting professor Bill Verplank directed the course. Students were asked to create a network of two devices: one input and one output box.

Fig 10: Luther Thie and Belmer Negrillo's Whispering to Birds, an exploration based on the Box system for the Physical Computing course at Interaction-Ivrea.

Outcomes covered a broad range of interactions, from exploration of physical behaviors representing emotions to a network where the fall of a leaf on the input Box would trigger the sound of a bird on the output Box located in a far-away tree.
CONCLUSIONS
When users are allowed to set up and configure their own personal networks based on the recombination of simple modules, emergent platforms will appear that best reflect the social networks that maintain them. Thus, we, as designers, can create open systems, open for interpretation, integrating participants into the design process, encouraging creativity and turning them from passive consumers into active interpreters of information.
With the proliferation of ubiquitous technological devices, the development of a semantic web that will be integrated with these devices, and the extensive use of computer networks to play, work and communicate, we, as designers, need to consider the issues, values and opportunities offered by these new technologies.
By abstracting the basic elements of these networks and experimenting with them free from commercial constraints, this program expects to raise issues about current trends in the information society: which values are being imposed on information consumers, or simply whether they are desirable.
ACKNOWLEDGMENTS
Thanks to students, professors and administration staff of Interaction Design Institute Ivrea for their support and contributions during the development of this project. In particular to Gillian Crampton-Smith, Dag Svanaes, Casey Reas, Massimo Banzi and Bill Verplank for their valuable insights.
LINKS
http://projects.interaction-ivrea.it/box
REFERENCES
1. Ishii, H., Ullmer, B. Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. Proceedings of CHI '97. ACM Press.
2. Jeremijenko, N. Delusions of Immateriality. Doors of Perception 6: Lightness. (2000). Available online at: http://museum.doorsofperception.com/doos/doors6/transcripts/jeremijenko.html
3. Norman, D. A. (1998). The Invisible Computer. Basic Books, 232-239.
4. Resnick, M. Behavior Construction Kits. Commun. ACM 36(7), 1993.

Demonstrations of
Expressive Softwear and Ambient Media
Sha Xin Wei1, Yoichiro Serita2, Jill Fantauzza1, Steven Dow2, Giovanni Iachello2, Vincent Fiano2,
Joey Berzowska3, Yvonne Caravia1, Delphine Nain2, Wolfgang Reitberger1, Julien Fistre4
1 School of Literature, Communication, and Culture / GVU Center, Georgia Institute of Technology, [email protected], {gtg760j, gtg937i, gtg711j}@mail.gatech.edu
2 College of Computing / GVU Center, Georgia Institute of Technology, {seri, steven, giac, ynniv, delfin}@cc.gatech.edu
3 Faculty of Fine Arts, Concordia University, Montreal, Canada, [email protected]
4 [email protected]

ABSTRACT
We set the context for three demonstrations by describing the Topological Media Lab's research agenda. We next describe three concrete applications that bundle together some of our responsive ambient media and augmented clothing instruments in illustrative scenarios.
The first set of scenarios involves performers wearing expressive clothing instruments walking through a conference or exhibition hall. They act according to heuristics drawn from a phenomenological study of greeting dynamics, the social dynamics of engagement and disengagement in public spaces. We use our study of these dynamics to guide our design of expressive clothing using wireless sensors, conductive fabrics and on-the-body circuit logic.
By walking into different spaces prepared with ambient responsive media, we see how some gestures and instruments take on new expressive and social value. These scenarios are studies toward next-generation TGarden responsive play spaces [25] based on gesturally parameterized media and body-based or fabric-based expressive technologies.
Keywords
Softwear, augmented clothing, media choreography, real-time media, responsive environments, TGarden, phenomenology of performance.
CONTEXT
The Topological Media Lab is established to study gesture, agency and materiality from both phenomenological and computational perspectives. This motivates an investigation of human embodied experience in solo and social situations, and technologies that can be developed for enlivening or playful applications.
The focus on clothing is part of a general approach to wearable computing that pays attention to the naturalized affordances and the social conditioning that fabrics, furniture and physical architecture already provide to our everyday interaction. We exploit the fusion of physical material and computational media and rely on expert craft from music, fashion, and industrial design in order to make a new class of personal and collective expressive media.
TML'S RESEARCH HEURISTICS
Perhaps the most salient notion and leitmotiv for our research is continuity. Continuous physics in time and media space provides natural affordances which sustain intuitive learning and development of virtuosity in the form of tacit "muscle memory." Continuous models allow nuance, which provides different expressive opportunities than those selected from a relatively small, discrete set of options. Continuous models also sustain improvisation. Rather than disallow or halt on unanticipated user input, our dynamical sound models will always work. However, we leave the quality and the musical meaning of the sound to the user. We use semantically shallow machine models.
We do "materials science" as opposed to object-centered industrial design. Our work is oriented to the design and prototyping not of new devices but of new species of augmented physical media and gestural topologies. We distribute computational processes into the environment as an augmented physics rather than information tasks located in files, applications and "personal devices."
APPLICATIONS AND DEMONSTRATIONS
We are pursuing these ideas in several lines of work: (1) softwear: clothing augmented with conductive fabrics, wireless sensing and image-bearing materials or lights for expressive purposes; (2) gesture-tracking and mathematical mapping of gesture data to time-based media; (3) physics-based real-time synthesis of video; (4) analogous

sound synthesis; (5) media choreography based on statistical physics.
We demonstrate new applications that showcase elements of recent work. Although we describe them as separate elements, the point is that by walking from an unprepared place to a space prepared with our responsive media systems, the same performers in the same instrumented clothing acquire new social valence. Their interactions with co-located less-instrumented or non-instrumented people also take on different effects as we vary the locus of their interaction.
Softwear: Augmented Clothing
Most of the applications for embedding digital devices in clothing have utilitarian design goals such as managing information, or locating or orienting the wearer. Entertainment applications are often oriented around controlling media devices or PDAs, and high-level semantics such as user identity [1, 7] or gesture recognition [28]. Our approach to softwear as clothing is informed by earlier work of Berzowska [2] and Orth [19].
We study the expressive uses of augmented clothing, but at a more basic level of non-verbal body language, as indicated in the provisional diagram (Fig. 1). The key point is that we are not encoding classes of gesture into our response logic; instead we are using such diagrams as necessarily incomplete heuristics to guide human performers.
Performers, i.e. experienced users of our "softwear" instrumented garments, will walk through the floor of the public space performing in two modes: (1) as human social probes into the social dynamics of greetings, and (2) as performers generating sound textures based on gestural interactions with their environment. We follow the performance research approach of Grotowski and Sponge [10, 25] that identifies the actor with the spectator. Therefore we evaluate our technology from the first-person point of view. To emphasize this perspective, we call the users of our technologies "players" or "performers". (However, our players do not play games, nor do they act in a theatrical manner.) We exhibit fabric-based controllers for expressive gestural control of light and sound on the body. Our softwear instruments must first and foremost be comfortable and aesthetically plausible as clothing or jewelry. Instead of starting with devices, we start with social practices of body ornamentation and corporeal play: solo, parallel, or collective play.
Using switching logic from movements of the body itself and integrating circuits of conductive fiber with light-emitting or image-bearing material, we push toward the limit of minimal on-the-body processing logic but maximal expressivity and response. In our approach, every contact closure can be thought of and exploited as a sensor. (Fig. 1)

Fig. 1. Solo, group and environmental contact circuits.

Demonstration A: Greeting Dynamics (Fantauzza, Berzowska, Dow, Iachello, Sha)
Performers wearing expressive clothing instruments walk through a conference or exhibition hall. They act according to heuristics drawn from a provisional phenomenological schema of greeting dynamics, the social dynamics of engagement and disengagement in public spaces, built from a glance, nod, handshake, embrace, parting wave, backward glance. Our demonstration explores how people express themselves to one another as they approach friends, acquaintances and strangers via the medium of their modes of greeting. In particular, we are interested in how people might use their augmented clothing as expressive, gestural instruments in such social dynamics. (Fig. 2)

Fig. 2. Instrumented, augmented greeting.

In addition to instrumented clothing, we are making gestural play objects as conversation totems that can be shared as people greet and interact. The shared object shown in the accompanying video is a small pillow fitted with a TinyOS mote transmitting a stream of accelerometer data. The small pillow is a placeholder for the real-time sound synthesis instruments that we have built in Max/MSP. It suggests how a physics-based synthesis model allows the performer to intuitively develop and nuance her personal continuous sound signature without any buttons, menus, commands or scripts. Our study of these embedded dynamical physics systems guides our design of expressive clothing using wireless sensors, conductive fabrics and on-the-body circuit logic.
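A physics-based mapping of this general kind can be sketched minimally: a stream of accelerometer readings drives a damped spring whose position becomes a continuous synthesis parameter, so the response has inertia and nuance rather than a button-like threshold. This is an illustrative reconstruction in Python, not the authors' Max/MSP instrument; all names and constants are invented:

```python
import math

# Illustrative sketch, not the authors' Max/MSP instrument: accelerometer
# magnitude pulls on a damped spring whose position becomes a continuous
# synthesis parameter, giving the response inertia instead of thresholds.

class SpringParameter:
    def __init__(self, stiffness=40.0, damping=8.0, dt=0.01):
        self.k, self.c, self.dt = stiffness, damping, dt
        self.pos = 0.0   # the value sent to the synthesizer
        self.vel = 0.0

    def update(self, ax, ay, az):
        target = math.sqrt(ax * ax + ay * ay + az * az)  # gesture "energy"
        # Semi-implicit Euler step of a spring pulled toward the reading
        accel = self.k * (target - self.pos) - self.c * self.vel
        self.vel += accel * self.dt
        self.pos += self.vel * self.dt
        return self.pos

param = SpringParameter()
for _ in range(500):            # a steady 1 g reading settles the spring
    value = param.update(0.0, 0.0, 1.0)
print(round(value, 2))          # converges toward 1.0
```

The spring's stiffness and damping set how quickly the sound "catches up" with the body, which is the kind of tuning knob a continuous model exposes in place of discrete gesture classes.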

Whereas this first demonstration studies the uses of softwear as intersubjective technology, of course we can also make softwear more explicitly designed for solo expressive performance.
Demonstration B: Expressive Softwear Instruments Using Gestural Sound (Sha, Serita, Dow, Iachello, Fistre, Fantauzza)
Many of the experimental gestural electronic instruments cited directly or indirectly in the Introduction have been built for the unique habits and expertises of individual professional performers. A more theatrical example is Die Audio Gruppe [16]. Our approach is to make gestural instruments whose response characteristics support the long-term evolution of everyday and accidental gestures into progressively more virtuosic or symbolically charged gesture.
In the engineering domain, many well-known examples are mimetic of conventional, classical music performance [15]. Informed by work, for example, at IRCAM but especially associated with STEIM, we are designing sound instruments as idiomatically matched sets of fabric substrates, sensors, statistics and synthesis methods that lie in the intersection between everyday gestures in clothing and musical gesture.
We exhibit prototype instruments that mix composed and natural sound based on ambient movement or ordinary gesture. As one moves, one is surrounded by a corona of physical sounds "generated" immediately at the speed of matter. We fuse such physical sounds with synthetically generated sound parameterized by the swing and movement of the body so that ordinary movements are imbued with extraordinary effect. (Fig. 3)
The performative goal is to study how to bootstrap the performer's consciousness of the sounds by such estranging techniques, to scaffold the improvisation of intentional, symbolic, even theatrical gesture from unintentional gesture. This is a performance research question rather than an engineering question, whose study yields insights for designing sound interaction.
Gesturally controlled electronic musical instruments date back to the beginning of the electronics era (see extensive histories such as [13]). Our preliminary steps are informed by extensive and expert experience with the community of electronic music performance [25, 31, 32].

Fig. 3. Gesture mapping to sound and video.

The motto for our approach is "gesture tracking, not gesture recognition." In other words, we do not attempt to build models based on a discrete, finite and parsimonious taxonomy of gesture. Instead of deep analysis, our goal is to perform real-time reduction of sensor data and map it with the lowest possible latency to media texture synthesis, to provide rich, tangible, and causal feedback to the human.
Other gesture research is mainly predicated on linguistic categories, such as lexicon, syntax and grammar. McNeill [17] explicitly scopes gesture to those movements that are correlated with speech utterances. However, given the increasing power of portable processors, sophisticated sub-semantic, non-classifying analysis has begun to be exploited (e.g. [30]). We take this approach systematically.
Interaction Scenario
In all cases, performers wearing softwear instruments will interact with other humans in a public common space. But when they pass through a space that has been sensitized with tracking cameras or receivers for the sensors tracking their gesture, then we see that their actions made in response to their social context take on other qualities due to the media that is generated in response to their movement. This prompts us to build responsive media spaces using our media choreography system.
Ambient Media
After Krueger's pioneering work [14] with video, classical VR systems glue inhabitants' attention to a screen or a display device and leave the body behind. Augmented reality games like Blast Theory's Can You See Me Now put some players into the physical city environment, but still pin players' attention to (mobile) screens [4].
Re-projection onto the surrounding walls and bodies of the inhabitants themselves marks an important return to embodied social play, but mediated by distributed and tangible computation.
The Influencing Machine [12] is a useful contrasting example of a responsive system. The Influencing Machine sketches doodles apparently in loose reaction to slips of colored paper that participants feed it. Like our work, their installation is also not based on explicit language. In

fact it is designed ostensibly along "affective" lines. It is interesting to note how published interviews with the participants reveal that they objectify the Influencing Machine as an independent affective agency. They spend more effort puzzling out this machine's behavior than in playing with one another.

In our design, we aim to sustain environments where the inhabitants attend to one another rather than a display. How can we build play environments that reward repeated visits and ad hoc social activity? How can we build environments whose appeal does not become exhausted as soon as the player figures out a set of tasks or facts? We are building responsive media spaces that are not predicated on rule-based game logic, puzzle solving or exchange economies [3], but rather on improvisatory yet disciplined behavior. We are interested in building play environments that offer the sort of embodied challenge and pleasure afforded by swimming or by working clay. This motivates a key technical goal: the construction of responsive systems based on gesture-tracking rather than gesture-recognition. This radically shortens the computational path between human gesture and media response. But if we allow a continuous open set of possible gestures as input, however reduced, the question remains how to provide aesthetically interesting, experientially rich, yet legible media responses.

The TGarden environment [25] that inspired our work is designed with rich physicalistic response models that sustain embodied, non-verbal intuition and progressively more virtuosic performance. The field-based models sustain collective as well as solo input and response with equal ease.

By shifting the focus of our design from devices to processes, we demonstrate how ambient responsive media can enhance both decoupled and coordinated forms of playful social interaction in semi-public spaces.

Our design philosophy has two roots: experimental theater transplanted to everyday social space, and theories of public space ranging from urban planners [20, 33] to playground designers [11]. R. Oldenburg calls for a class of so-called "third spaces," occupying a social region between the private, domestic spaces and the vanished informal public spaces of classical socio-political theory. These are spaces within which an easier version of friendship and congeniality results from casual and informal affiliation in "temporary worlds dedicated to the performance of an act apart." [18]

Demonstration C: Social Membrane (Serita, Fiano, Reitberger, Varma, Smoak)

How can we induce a bit more of a socially playful ambience in a dead space such as a conference hotel lobby? Although it is practically impossible in an exhibition setting to avoid spectacle with projected sound or light, we can insert our responsive video into non-standard geometry or materials.

We suspend (pace T. Erickson [8]) a translucent ribbon onto which we project processed live video that transforms the fabric into a magic membrane. The membrane is suspended in the middle of public space where people will naturally walk on either side of it. People will see transformations of the people on the other side of the membrane that vary smoothly in time and space (Fig. 4). The effects will depend on movement, but will react additionally to passersby who happen to be wearing our softwear augmented clothing.

The challenge will be to tune the dynamic effects so that they remain legible and interesting over the characteristic time that a passerby is likely to be near the membrane, so that the affect induces play but not puzzle-solving. Sculpturally, the membrane should appear to have a continuous gradient across its width between zero effect (transparency) and full effect. Also, it should take about 3-4 seconds for a person walking at normal speed in that public setting to clear the width of the inserted membrane.

Fig. 4. Two players tracked in video, tugging at a spring projected onto common fabric.

Above all, the membrane should have a social Bernoulli effect that will tend to draw people on the opposite sides to one another. The same effects that transform the other person's image should also make people feel some of the safety of a playful mask. The goal is to allow people to gently and playfully transform their view of the other in a common space with partially re-synthesized graphics.

Artistic Interest and Craft
We do not try to project the spectator's attention into an avatar as in most virtual or some augmented reality systems. Instead, we focus the performer-spectator's attention in the same space as all the co-located inhabitants. Moreover, rather than mesmerizing the user with media "objects" projected onto billboards, we try to sustain human-human play, using responsive media such as calligraphic, gesture/location-driven video as the medium of shared expression. In this way, we keep the attention of the human inhabitants on one another rather than having them forget each other, distracted by a "spectacular" object [6].

By calligraphic video we mean video synthesized by physicalistic models that can be continuously transformed by continuous gesture, much as a calligrapher brushes ink onto silk. Calligraphic video as a particular species of
time-based media is part of our research into the preconditions for sense-making in live performance [10, 5].

ARCHITECTURE
For high-quality real-time media synthesis we need to track gesture with sufficiently high data resolution, a high sample rate, and low end-to-end latency between the gesture and the media effect. We summarize our architecture, which is partly based on TinyOS and Max / Macintosh OS X, and refer to [24, 25] for details.

Our current strategy is to do the minimum on-the-body processing needed to beam sensor data out to fixed computers on which aesthetically and socially plausible and rich effects can be synthesized. We have modified the TinyOS environment on Crossbow Technologies Mica and Rene boards to provide time series data of sufficient resolution and sample frequency to measure continuous gesture using a wide variety of sensory modalities. This platform allows us to piggy-back on the miniaturization curve of the Smart Dust initiative [13], and preserves the possibility of relatively easily migrating some low-level statistical filtering and processing to the body. Practically, this frees us to design augmented clothing whose form factors compare favorably with jewelry and body ornaments, while at the same time retaining the power of the TGarden media choreography and synthesis apparatus. (Some details of our custom work are reported in [24].)

We have now built a wireless sensor platform based on Crossbow's TinyOS boards. This allows us to explore shifting the locus of computation in a graded and principled way between the body, multiple bodies, and the room.

Currently, our TinyOS platform is smaller but more general than our LINUX platform, since it can read and transmit data from photocell, accelerometer, magnetometer and custom sensors such as, in our case, customized bend and pressure sensors. However, its sample frequency is limited to about 30 Hz per channel.

Our customized TinyOS platform gives us an interesting domain of intermediate-data-rate time series to analyze. We cannot directly apply many of the DSP techniques for speech and audio feature extraction, because the time window needed to accumulate enough sensor samples becomes too long, yielding a sluggish response. But we can rely on some basic principles to do interesting analysis. For example, we can usefully track steps and beats for onsets and energy. (This contrasts with musical input analysis methods that require much more data at higher, audio rates [21].)

The rest of the system is based on the Max real-time media control system, with instruments written in MSP sound synthesis and Jitter video graphics synthesis, communicating via OSC on Ethernet. (Fig. 5)

Fig. 5. Architecture comprises clothing; sensing: TinyOS, IR camera; logic and physical synthesis in OSC network: Max, MSP, Jitter; projectors, speakers.

Technical Comment on Lattice Computation
Our research aims to achieve a much greater degree of expressivity and tangibility in time-based visual, audio, and now fabric media. In the video domain, we use lattice methods as a powerful way to harness models that already simulate tangible natural phenomena. Such models possess the shallow semantics we desire, based on our heuristics for technologies of performance. A significant technical consequence is that such methods allow us to scale efficiently (nearly constant time and space) to accommodate multiple players.

ACKNOWLEDGEMENTS
We thank members of the Topological Media Lab, and in particular Harry Smoak, Ravi Varma and Kevin Stamper for assisting with the experimental construction, and Junko Tsumuji and Shridhar Reddy for documentation. Tazama St. Julien helped adapt the TinyOS platform. Erik Conrad and Jehan Moghazy worked on the prior version of the TGarden. Pegah Zamani contributed to the design seminar.

We thank Intel Research Berkeley and the Graphics, Visualization and Usability Center for providing the initial set of TinyOS wireless computers. And we thank the Rockefeller Foundation and the Daniel Langlois Foundation for Art, Science and Technology for supporting part of this research.

This work is inspired by creative collaborations with Sponge, FoAM, STEIM, and alumni of the Banff Centre for the Arts.

REFERENCES
1. Aoki, H., and Matsushita, S. Balloon tag: (in)visible marker which tells who's who. Fourth International Symposium on Wearable Computers (ISWC'00), 77-86.
2. Berzowska, J. Electronic Fashion: the Future of Wearable Technology. http://www.berzowska.com/lectures/e-fashion.html
3. Bjork, S., Holopainen, J., Ljungstrand, P., and Akesson, K.P. Designing ubiquitous computing games -- a report from a workshop exploring ubiquitous computing entertainment. Personal and Ubiquitous Computing, 6, 5-6 (2002), 443-458. Springer-Verlag.
4. Blast Theory. Can You See Me Now? http://www.blasttheory.co.uk/v2/game.html
5. Brooks, P. The Empty Space. Touchstone Books, reprint edition, 1995.
6. Debord, G. Society of the Spectacle. Zone Books, 1995.
7. Eaves, D. et al. NEW NOMADS, an exploration of Wearable Electronics by Philips, 2000.
8. Erickson, T. and Kellogg, W.A. Social Translucence: An Approach to Designing Systems that Support Social Processes. ACM Transactions on Computer-Human Interaction, 7(1):59-83, March 2000.
9. f0.am. txOom Responsive Space. 2002. http://f0.am/txoom/
10. Grotowski, J. Towards a Poor Theater. Simon & Schuster, 1970.
11. Hendricks, B. Designing for Play. Aldershot, UK and Burlington, VT: Ashgate, 2001.
12. Hook, K., Sengers, P., and Andersson, G. Sense and sensibility: evaluation and interactive art. Proceedings of CHI 2003, Computer Human Interaction, 2003.
13. Kahn, J.M., Katz, R.H., and Pister, K.S.J. Emerging Challenges: Mobile Networking for "Smart Dust". Journal of Communications and Networks, Vol. 2, No. 3, September 2000.
14. Krueger, M. Artificial Reality 2 (2nd edition). Addison-Wesley, 1991.
15. Machover, T. Hyperinstruments project, MIT Media Lab. http://www.media.mit.edu/hyperins/projects.html
16. Maubrey, B. Die Audio Gruppe. http://home.snafu.de/maubrey/
17. McNeill, D. Hand and Mind: What Gestures Reveal About Thought. University of Chicago Press, 1995.
18. Oldenburg, R. The Great Good Place. Marlowe & Company, 1999.
19. Orth, M. Ph.D. Thesis, MIT Media Lab, 2001.
20. PPS, Project for Public Spaces. http://pps.org
21. Puckette, M.S., Apel, T., and Zicarelli, D.D. Real-time audio analysis tools for Pd and MSP. ICMC 1998.
22. Reddy, M.J. The conduit metaphor. In Metaphor and Thought, ed. A. Ortony, Cambridge University Press, 2nd edition, 1993, pp. 164-201.
23. Richards, T. At Work with Grotowski on Physical Actions. London: Routledge, 1995.
24. Sha, X.W., Iachello, G., Dow, S., Serita, Y., St. Julien, T., and Fistre, J. Continuous sensing of gesture for control of audio-visual media. ISWC 2003 Proceedings.
25. Sha, X.W., Visell, Y., and MacIntyre, B. Media choreography using dynamics on simplicial complexes. GVU Technical Report, Georgia Tech, 2003.
26. Sonami, L. Lady's Glove. http://www.sonami.net/lady_glove2.htm
27. Sponge. TGarden, TG2001. http://sponge.org/projects/m3_tg_intro.html
28. Starner, T., Weaver, J., and Pentland, A. A wearable computer based American Sign Language recognizer. ISWC 1997, pp. 130-137.
29. Topological Media Lab, Georgia Institute of Technology. Ubicomp video. http://www.gvu.gatech.edu/people/sha.xinwei/topologicalmedia/tgarden/video/gvu/TML_ubicomp.mov
30. Van Laerhoven, K. and Cakmakci, O. What shall we teach our pants? Fourth International Symposium on Wearable Computers (ISWC'00), 77-86.
31. Vasulka, S. and Vasulka, W. Steina and Woody Vasulka: Instrumental video. Langlois Foundation Archives. http://www.fondation-langlois.org/e/collection/vasulka/archives/intro.html
32. Wanderley, M. Trends in Gestural Control of Music. IRCAM - Centre Pompidou, 2000.
33. Whyte, W.H. The Social Life of Small Urban Spaces. Project for Public Spaces, Inc., 2001.
Mobile Capture and Access for Assessing Language and Social Development in Children with Autism

David Randall White1, José Antonio Camacho-Guerrero2, Khai N. Truong1, Gregory D. Abowd1, Michael J. Morrier3, Pooja C. Vekaria3, and Diane Gromala1

1 GVU Center, Georgia Institute of Technology, Atlanta, GA 30332 USA
{drwhite, khai, abowd}@cc.gatech.edu, [email protected]

2 Instituto de Ciencias Matematicas e de Computacao, Universidade de Sao Paulo, Sao Carlos/SP, Brazil
[email protected]

3 Emory Autism Center, Emory University School of Medicine, Atlanta, GA 30322 USA
[email protected], [email protected]

ABSTRACT
We present a mobile device that supports expert practices for assessing the development of language and social skills in children with autism (CWAs). Our Tablet PC–based system combines aspects of existing paper- and video-based data-recording activities at a preschool for CWAs. We created in Macromedia Director a prototype that supported automated capture and access of multiple data streams, addressing the information needs of researchers, teachers, and parents. Video of natural classroom behaviors is synchronized with researchers' assessments of behavioral variables. We obtained user feedback on our prototype and on the resulting Java-based system, which we will deploy and evaluate.

Keywords
Ubiquitous and mobile computing, computer-supported cooperative work, ethnography, capture and access, autism

INTRODUCTION
Early behavioral intervention — begun when children with autism (CWAs) are approximately ages 2 to 5 — is reported to improve the language and social skills of "virtually all children, and in some cases it leads to complete eradication of any sign of the disorder" [4]. At the Walden Early Childhood Center at Emory University, early intervention is administered in the context of typical preschool education activities. Treatment plans are individualized for each child, because CWAs "are often characterized by idiosyncratic learning styles" [5]. Assessments of CWAs' ongoing, naturally occurring social behaviors in the classroom help determine both the effectiveness of interventions and the appropriate goals to be targeted. Observers must be both "very sensitive to the child's needs and reactions, and scrupulously objective in the measurement and analysis of those reactions" [5].

Treatment plans are developed collaboratively by members of three stakeholder groups: researchers, teachers, and parents. Data must be collected, analyzed, and reported to meet the needs of all these groups. It is crucial that proposed technological innovations support established practices. Mackay et al. suggest that designers who follow this guideline, taking "evolutionary path[s] to … new methods," may encounter less resistance to technological change [3]. Our goal is to understand better these practices from the perspectives of the stakeholders, and to meet their needs by developing technological solutions based on automated capture and access. We studied the environment and designed a prototype, then obtained user reactions that influenced the development of a system that we will deploy and evaluate.

CASE STUDY
Walden is the early-childhood model demonstration program of the Emory Autism Center, which is a component of the Department of Psychiatry and Behavioral Sciences at the Emory University School of Medicine. Walden has three classes — toddler (ages two and three), preschool (ages three and four), and pre-kindergarten (ages four and five) — of approximately eighteen children each. One-third of the students in each class are CWAs, and two-thirds are typically developing children who serve as role models for CWAs as they develop language and social skills.

For ten weeks, we spent six hours a week observing classrooms and interviewing stakeholders (two teachers, two researchers, and three sets of parents). We interviewed many more researchers and teachers as they worked.
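The abstract's central idea, behavioral assessments time-synchronized with classroom video so that each recorded observation can later index into the recording, can be sketched minimally in Python. All names below (Annotation, Session, video_indices) are hypothetical illustrations for this sketch, not the actual Walden or INCA API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    """One observation: a behavioral variable marked at a video offset."""
    offset_s: float   # seconds from the start of the session video
    variable: str     # e.g. "engagement", one of the recorded variables
    value: bool

@dataclass
class Session:
    """Capture store: annotations share a clock with the session video."""
    annotations: List[Annotation] = field(default_factory=list)

    def record(self, offset_s: float, variable: str, value: bool) -> None:
        self.annotations.append(Annotation(offset_s, variable, value))

    def video_indices(self, variable: str) -> List[float]:
        """Access side: each positive mark is an index into the video."""
        return [a.offset_s for a in self.annotations
                if a.variable == variable and a.value]

# A session recorded in 20-second lines (10 s observe, 10 s record)
session = Session()
session.record(0.0, "engagement", True)
session.record(20.0, "engagement", False)
session.record(40.0, "engagement", True)
print(session.video_indices("engagement"))  # [0.0, 40.0]
```

A real deployment would persist such records in a database and seek the video player to each returned offset; the sketch only shows why sharing one clock between annotations and video makes every assessment a video index.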
Treatment plans for CWAs are written at the beginning of each child's tenure at Walden. The plans are reviewed quarterly and updated annually to meet each child's changing needs. Plans are divided into goals — such as improved language development, social interactions and engagement, and independent-living and school-readiness skills — which are then broken into measurable objectives set progressively over the school year. Data on these objectives are collected daily, in quantitative experiments incorporated into classroom routines. Research assistants also observe CWAs unobtrusively and capture data on video or on a paper spreadsheet known as a Pla-Chek (pronounced "PLAY-check"; Figure 1(a)), on which these variables are recorded:
• proximity to adult (within three feet)
• adult interacting with CWA
• proximity to typical child
• typical child interacting with CWA
• proximity to another CWA
• other CWA interacting with target CWA
• verbalization (words listed in dictionary)
• engagement
• focus on an adult (if the child is engaged)
• focus on another child (if the child is engaged)
• focus on a toy (if the child is engaged)
• autistic behaviors

Video data are coded later for the same variables, except proximity to other CWAs, interactions with other CWAs, and autistic behaviors. This difference exists because research assistants may not know which children in videos are CWAs. Because of this similarity, we chose the Pla-Chek for our prototype.

Pla-Cheks place cognitive burdens on research assistants. They observe children for intervals of ten seconds, which are counted mentally, then record values in a line of cells. Each line is followed by ten more seconds of observation. The next line is filled and the process repeated until twenty intervals are done. Counting time complicates the recording, which requires strict objectivity.

Pla-Cheks for each CWA are recorded on ten consecutive days each calendar quarter. Classroom coordinators tabulate the data quarterly. Because sessions are not videotaped, they cannot be reviewed for accuracy, or be used for demonstrating visually to parents that progress is being made. The assistant director uses the tabulated data to prepare reports that indicate progress on each objective and can easily be fifteen pages long.

Parents receive these reports quarterly, and discuss them with classroom coordinators. However, parents can obtain visual evidence of their children's progress only by observing classroom activities through one-way mirrors or by watching videotapes. There is no artifact that combines visual evidence with expert assessment. We believe our system will do this effectively.

RELATED WORK
Our prototype follows the principle of "voluntary, explicit, task-appropriate interaction" that Arnstein et al. support in the second version of Labscape [1]. The cell-biology lab for which Labscape was designed is similar to Walden in that data must be recorded with scientific rigor. The first version of Labscape relied on sensors that could not "provide the detail, completeness, and reliability sufficient to the task."

Steurer et al. have chosen a sensor-based approach for another education environment, the Smart Kindergarten [6]. The authors suggest that data collected by sensors in a classroom can help teachers identify and address the learning problems of individual children.

DESIGN OF PROTOTYPE
With our prototype — designed in Macromedia Director and later implemented in Java — we transferred the Pla-Chek to a Tablet PC (Figure 1(b)). The prototype captured handwritten data as well as video from a webcam worn at the research assistant's beltline. The system tabulated the data as they were collected, rather than requiring a teacher to do so later. The interface reduced the research assistants' cognitive load by providing a timer that counted two ten-second intervals for each line of data: an observation interval, then a handwriting interval.

Figure 1: The paper Pla-Chek (a) was the template for our initial capture interface (b), in which we maintained, as much as possible, the look and feel of the original. User feedback led to the second iteration of the interface (c).

The access interface (Figure 2(a)) contained the video and two visualizations of the data: a "macro" timeline of the ten sessions recorded quarterly for each child, and a
"micro" timeline of the session being viewed. Data were represented on these timelines by dots. Variable names were displayed on the Y-axis and grouped by dot colors: red for proximity to and interaction from adults, gray for proximity to and interaction from typical children, green for proximity to and interaction from other CWAs, black for verbalization, blue for engagement and focus, and pink for autistic behaviors. Graphed on the X-axis of the macro timeline were the ten quarterly sessions; on the X-axis of the micro timeline, numbers indicated the progression of time, measured in minutes, through the video.

Dots in the micro timeline were uniform in size, and represented single positive recorded occurrences of variables; dot sizes in the macro timeline varied to indicate the percentage of positive results recorded in each session. There were five sizes of dots, representing values in 20-percent increments. We considered using more sizes for finer granularity, but we believed that constraints of screen real estate would prevent clear distinctions in sizes. When the user rolled over a dot in the macro timeline, the interface displayed the percentage represented. In both timelines, the percentages and number of occurrences of each variable were displayed at the end of the line. The user selected a session for review by clicking on its column in the macro timeline. That session's micro timeline and video then appeared. A vertical line moved along the micro timeline to help viewers relate variables to the actions displayed in the video. The access interface does not necessarily have to be viewed on the Tablet PC, although doing so would allow access in many settings.

SYSTEM IMPLEMENTATION
The Walden system was developed on top of the INfrastructure for Capture and Access Applications (INCA) toolkit [7]. INCA provides abstractions and reusable components that address capture-and-access concerns and facilitate application development.

Figure 2: The access interface (a) has at the bottom a "macro" timeline that shows an overview of a child's ten quarterly Pla-Chek sessions. The micro timeline at the top right shows the results of the selected session, and the video for that session appears at the top left. A researcher performs capture during naturally occurring classroom activities, using a Tablet PC with a head-mounted camera attached (b).

The system has three INCA modules: a capture module to record annotations and video; a storage module to hold that information for later access; and an access module to provide synchronous access to multiple integrated streams of information gathered from context-based queries.

The capture interface is built on INCA's capture module, which supports the recording of video data and behavioral variables (Figure 3(a)). The video and handwritten annotations captured — with metadata describing when, what, and for which child information is being captured — are stored in a relational database using the storage module (Figure 3(b)). The access module draws on this database to compose the access interface (Figure 3(c)). In this interface, each marked behavior is an index into the video (Figure 3(d)).

The first capture interface used the Quill toolkit as a gesture recognizer, with a few changes that allowed for automatic interpretation and tabulation of the observers' data [2]. While this design supported a familiar method of data input, its deployment on a Tablet PC failed. Writing on a tablet was different from writing on paper in two important ways: calibration and resolution. Annotating boxes in the electronic form that were the same size as those on a paper version proved to be noticeably difficult, and the imperfect handwriting recognition resulted in a significant amount of time and effort being spent correcting the data. The research manager also found it difficult to keep children in the video frame while observing and annotating behaviors.

We redesigned the prototype to simplify capture. We used screen real estate more economically by replacing the spreadsheet with click boxes for "yes," "no," and "can't tell" (Figure 1(c)). The same set of boxes is used for each recording interval, with the number of the interval noted at the top. We replaced the cells for writing the names of teachers and classroom activities with drop-down menus from which the names can be selected. We added buttons
that can be used to place marks in the timeline when teachers or activities change; these marks remind the research assistants to make the changes using the drop-down menus after the session, avoiding interruptions. Handwriting and gesture recognition are no longer issues. Each ten-second interval is added to a canvas that renders a quick review of the CWA's behavior throughout the session. A head-mounted bullet camera — which ensures all data are recorded during the heads-up observation interval — replaced the beltline webcam (Figure 2(b)). A notepad was also added, allowing the research assistants to associate handwritten notes with each recorded interval.

Figure 3: The capture interface (a) is built on the capture module of INCA, which supports the recording of video data and behavioral variables. The storage module (b) saves the data for use by the access module (c) in composing the access interface (d).

FUTURE WORK
We will add a harness to support the weight of the Tablet PC, as well as a belt-worn pack to hold the battery and controller for the bullet camera. We will develop a plan for deploying the capture and access modules, recording and reviewing quarterly data for several children, and evaluating the usefulness and usability of the system.

ACKNOWLEDGMENTS
We are grateful to the staff, parents, and children of the Walden Early Childhood Center.

REFERENCES
1. Arnstein, L., Borriello, G., Consolvo, S., Franza, R., Hung, C.-Y., Su, J., and Zhou, Q.H. "Labscape: Design of a Smart Environment for the Cell Biology Laboratory." Intel Research Seattle Technical Report IRS-TR-02-008, 2002.
2. Long, A.C. Jr., Landay, J.A., Rowe, L.A., and Michiels, J. "Visual Similarity of Pen Gestures," in Proceedings of CHI 2000, The Hague, The Netherlands.
3. Mackay, W.E., Fayard, A.-L., Frobert, L., and Médini, L. "Reinventing the Familiar: Exploring an Augmented Reality Design Space for Air Traffic Control," in Proceedings of CHI 1998, Los Angeles, California.
4. Maurice, C., Green, G., and Luce, S.C. (eds.). Preface to Behavioral Intervention for Young Children with Autism: a Manual for Parents and Professionals. Austin, Texas: PRO-ED Inc., 1996.
5. Romanczyk, R.G. "Behavioral Analysis and Assessment: the Cornerstone to Effectiveness." In Maurice, C., Green, G., and Luce, S.C. (eds.), Behavioral Intervention for Young Children with Autism: a Manual for Parents and Professionals (pp. 195-217). Austin, Texas: PRO-ED Inc., 1996.
6. Steurer, P., and Srivastava, M.B. "System Design of Smart Table," in Proceedings of PerCom 2003, Dallas-Fort Worth, Texas.
7. Truong, K.N., and Abowd, G.D. "Enabling the Generation, Preservation & Use of Records and Memories of Everyday Life." Georgia Institute of Technology Technical Report GIT-GVU-02-02, January 2002.
The Narrator : A Daily Activity Summarizer Using Simple
Sensors in an Instrumented Environment
Daniel Wilson Christopher Atkeson
Robotics Institute Robotics / Human Computer Interaction
Carnegie Mellon University Carnegie Mellon University
5000 Forbes Avenue 5000 Forbes Avenue
Pittsburgh, PA 15217 USA Pittsburgh, PA 15217 USA
[email protected] [email protected]

ABSTRACT networks, RFID (Radio frequency identification) badges,


People tracking provides the basis for automatic and infrared or ultrasound badges [1, 2, 3, 6, 9, 11, 13, 14].
monitoring. This service can help people with disabilities Cost of sensors and sensor acceptance are pivotal issues,
or the elderly live independently by providing day-to-day especially in the home. Many people are uncomfortable
information to physicians and family. The Narrator system living with cameras and microphones. Laser scanning
uses information generated by a tracker to generate devices are anonymous, but costly and have limited range.
concise, scalable summaries of daily movement activity. We find that people are often unwilling, forget, change
We demonstrate output from the Narrator as well as the clothes too often, or are not sufficiently clothed when at
workings of an underlying tracker in an instrumented home home to wear a badge, beacon, set of markers, or RF tag.
environment. We show that in a system made up almost Elderly individuals are often very sensitive to small
entirely of sensors that do not report identity information, changes in environment [4], and a target population,
we can maintain identity information and recover from institutionalized Alzheimer's patients, frequently strip
identification errors. themselves of clothing, including any wearable sensors [5].
We have chosen to explore a set of sensors that are already
Keywords
present in many homes as part of security systems (motion
Ubiquitous Computing, People Tracking, Simple Sensors
detectors, contact switches, and other simple binary
INTRODUCTION sensors). These sensors are cheap, computationally
Knowledge of the identity and position of occupants in an inexpensive, and do not have to be continuously worn or
instrumented environment is a basic element of automatic carried. We aim for room level tracking, as our sensors do
monitoring. Automatically generated summaries of daily activities for people with cognitive disabilities can be used to improve the accuracy of pharmacological interventions, track illness progression, and lower caregiver stress levels [7]. Additionally, [15] has shown that movement patterns alone are an important indicator of cognitive function, depression, and social involvement among people with Alzheimer's disease.

In this paper we describe a people tracker and a derivative service -- the Narrator. The Narrator is a finite state machine that parses movement information provided by a tracker and generates a concise, readable summary. Our tracker consists of a discrete state Bayes filter and associated models that use information gathered from binary sensors to provide low-cost automatic tracking in a home environment. We demonstrate results from an off-line smoothing algorithm, although online filtering techniques are possible. We instrumented a permanently occupied home and conducted a series of experiments to validate our approach.

RELATED WORK
People tracking has been approached via a variety of sensors, including cameras, laser range finders, wireless [...] not provide the higher spatial resolution of other types of tracking systems.

Combining anonymous sensors and sensors that provide identification information for people or object tracking is an open problem. Our tracking problem is similar to object identification. The goal is to determine if a newly observed object is the same as a previously observed object. The solution offered by [12] has been applied to tracking automobile traffic using cameras, extending the technique introduced by [8] to accommodate many sensors. In a recent experiment [13], laser range finders and infrared badges were used to track six people simultaneously in an office environment for 10 minutes. The range finders provide anonymous x,y coordinates while the badge system identified occupants. Our system uses a single RFID-sensor with many anonymous sensors to provide room-level tracking. We collect data over long periods to provide an ever-improving model of the unique motion patterns of each occupant. These models can be used later for occupant identification in lieu of additional ID-sensors.

NARRATOR
The purpose of the Narrator system is to provide a summary of daily movements, using information generated
automatically by a tracker in an instrumented environment. This summary represents important daily events in a compact, readable format, although the tracker provides many thousands of second-by-second location predictions. On the most basic level, the Narrator could produce an English account of the second-by-second location predictions. In our instrumented environment there was an average of 2000 readings per day. This scheme would produce volumes of not very useful information. Instead, we make a few simplifying assumptions and provide user-scalable levels of abstraction.

We make two assumptions. First, although we track several occupants simultaneously, we choose to create summaries for one occupant at a time. We also report only movement information and do not attempt activity recognition, except for sleeping. For sleeping we use a simple rule -- if an occupant spends more than four hours in the bedroom, that time is tagged as sleeping. Second, the Narrator directly uses the maximum likelihood predictions of the tracker. Each of these predictions has an associated posterior probability, which we ignore for now. In future work we plan to incorporate this confidence measure into the Narrator's output.

We identify two areas in which reporting may be abstracted. First, we use the duration of time spent in a location to scale the amount of information reported on that movement. Second, we use sensor granularity to scale reporting from room level up to house level.

Transient Locations
Some locations are less interesting than others, because they are traversed constantly and quickly in order to reach end locations. Usually, transient locations are stairways and hallways. These locations demonstrate a marked decrease in the average amount of time spent compared to other locations. For example, in our experiments the staircases had mean durations of 5.5 seconds and hallways had mean durations of 10.3 seconds. On the other hand, the living room and study had a mean of 8.2 minutes.

The transience property of a location determines in what detail travel through that location is reported. We use a threshold on mean duration spent in a room to identify transient spaces. We fit a Gaussian to the amount of time spent in these rooms to obtain an overall measure of transience. The Narrator tags travel through any room as transient if the amount of time spent there is within the transient mean and variance. In this way we simplify the summary without restrictive rules that completely ignore certain areas. With this information the user may choose to fully or partially ignore transient locations, and focus instead upon end locations where the occupant spends the most time. The sentences below were generated by the Narrator and demonstrate the three scales.

• Daniel entered the first floor hallway and stayed for 2 seconds. Daniel entered the kitchen and stayed for 10 minutes.

• Daniel passed through the first floor hallway, entered the kitchen and stayed for 10 minutes.

• Daniel walked to the kitchen and stayed for 10 minutes.

Sensor Granularity
The tracker can predict location at the granularity of individual sensors, although the current implementation reports at room level. The Narrator allows the user to scale the granularity from room level to floor level and to the entire house. The sentences below demonstrate room level, floor level, and house level granularity, respectively.

• Daniel woke at 8am. He walked to the bathroom and stayed for 15 minutes. He walked downstairs to the kitchen and stayed for 10 minutes. He passed through the foyer to the front porch and left the house.

• Daniel woke at 8am. He stayed on the second floor for 15 minutes. He went to the first floor and stayed for 10 minutes. He left the house.

• Daniel woke at 8am. He stayed home for 25 minutes. He left the house.

Algorithm
The Narrator algorithm is a conceptually simple deterministic finite state machine. It is composed of a set of states, an input alphabet, and a transition function that maps symbols and states to the next state. The states represent English words and phrases, while the input alphabet is composed of sensor readings and times. To add some variety to the language, some states have more than one transition for a given symbol. A lookup table maps the room and occupant ids reported by the tracker to room and occupant names.

TRACKER
We wish to estimate the state of a dynamic system from sensor measurements. In our case, the dynamical system is one or more occupants and the instrumented environment. For this paper we track people at the room level, so a person's state, x, indicates which of N rooms they are in. Measurements include data from motion detectors, pressure mats, drawer and door switches, and radio frequency identification (RFID) systems. We solve the tracking problem off-line with a technique commonly known as smoothing, which uses information from both past and future time steps, providing higher accuracy for off-line purposes such as a daily summary of movement activity.

Technological Infrastructure
We instrumented a house in order to conduct experiments using real data. The three-story house is home to two males, one female, a dog, and a cat. Our environment contains forty-nine sensors and twenty different rooms.

• Radio Frequency Identification (RFID): We use low frequency RFID to identify occupants entering and leaving the environment. Each occupant and guest is given a unique transponder, or 'tag'. When the credit-card-sized tag nears the RFID antenna it emits a unique identification number. Upon recognition of a tag the
tracker places a high initial belief that the occupant is at the antenna location. Note that using this tag is no different than using a house key; it is not necessary to carry the tag throughout the environment.

• Motion detectors: We use wireless X10 Hawkeye™ motion detectors. Upon sensing motion a radio signal is sent to a receiver, which transmits a unique signal over the power line. This signal is collected by a CM11A device attached to a computer. The detectors are pet-resistant, require both heat and movement to trigger, and run on battery power for over one year. There are twenty-four motion detectors installed.

• Contact switches: Inexpensive magnetic contact switches indicate a closed or open status. They are installed on every interior and exterior door, selected cabinet drawers, and refrigerator doors. There are twenty-four contact switches.

The sensors are monitored by a single Intel Pentium IV 1.8 GHz desktop computer with 512 MB RAM. We use an expanded parallel port interface to monitor contact switches, a serial interface to a CM11A device to monitor motion detector activity, and a serial interface to the RFID reader. All activity is logged in real time to a MySQL database.

Tracking Formulation
Our goal is to estimate the probability distribution for each person's location, conditioned on sensor measurements. This probability distribution, the tracking system's "belief" or "information" state, is encoded as a length-N vector whose elements give the probability of being in the respective rooms. We use a discrete state Bayes filter to maintain the belief state Bel. Our belief that a person u is in room i at time t is:

Bel_t^u[i] = p_t^u(x = i | y_1, ..., y_t).

Here p() indicates probability and y_1, ..., y_t denotes the data from time 1 up to time t. Given a new sensor value, we can update the beliefs for all rooms. For room i:

Bel_{t+1}^u[i] = η · p_{t+1}^u(y | x = i) · Σ_{j=1..N} p^u(x_{t+1} = i | x_t = j) · Bel_t^u[j].

The variable η is a normalizing constant, so that the elements of any Bel vector sum to 1. In using a Bayes filter, we assume that our room-level states are Markov. This is an approximation, and one research question is whether we can accurately track people after making this approximation. We assume that each person u has a different motion model p^u(x_{t+1} = i | x_t = j) and sensor model p^u(y | x).

Data Association
Each sensor reading must be assigned to at least one occupant or to a noise process. This is the data association step. Our solution is to use an EM process to iteratively 1) estimate the likelihood of each occupant independently generating a given sensor sequence, and then 2) maximize by re-assigning ownership of sensor values [10]. We use the forward-backward algorithm to estimate the posterior beliefs, and then maximize the following quantity:

Σ_x p_t^u(y | x) · Bel_t^u(x).

Occupant Independence
Currently, we assume that occupants behave independently, an obvious approximation. In reality occupant movements are highly correlated. Conditioning on the presence of several other occupants increases the computational complexity of the problem, while including guests causes further growth in the number of required models. For this paper we were interested in testing the performance of a simpler model.

Motion Model
The equation p^u(x_t | x_{t-1}) represents the motion model for a specific occupant. This model takes into account where the occupant was at the previous time step and predicts how likely the current room is now. Our data is a time series of sensor measurements. All occupants are constantly generating streams of data that are combined in the database. For this reason, we learn motion models for each occupant using the entire database of sensor readings in which that occupant is home alone. We map each sensor to a state that represents a room and count transitions to generate an [N x N] table of transition probabilities.

EXPERIMENTS
We performed an uncontrolled experiment on a single occupant using 1288 sensor readings from when that occupant was home alone, collected over a two-day period. During this time one person moved through the house, visiting every sensor and moving with varying speed and direction. The occupant conducted several common tasks, such as making a sandwich and using the computer. The system was not running while the occupant slept. The tracker used a motion model trained for the occupant being tracked. Accuracy is measured as the fraction of time that the room location was predicted correctly. We performed 10 trials, training motion and sensor models on 90% of the data and testing on a rolling 10%. Using smoothing we found an accuracy of 99.6% ± 0.4%.

We also report results from five days of continuous, unplanned, everyday movement of one to three people in the house. We measured tracker performance over a continuous five-day period. The tracker used individual motion models for the three occupants. There were no guests during this period. To evaluate performance we had
to hand-label the data. To make hand labeling feasible we gathered additional information from eight wireless keypads. The keypads have one button for each of the three occupants and one for guests. During that week, when anyone entered a room with a keypad, they pushed the button corresponding to their name. This information acted as road signs to help the human labeler disambiguate the data stream and correctly label the movements and identity of each occupant.

There were approximately 2000 sensor readings each day, for a total of 10441 readings. When the house was occupied, on average there was one occupant at home 13% of the time, two occupants home 22% of the time, and all three occupants home 65% of the time. Note that each night every occupant slept in the house. On the whole, the tracker correctly classified 74.5% of sensor readings, corresponding to 84.3% of the time. There was no significant difference in accuracy between occupants. The tracker was accurate 84.2% of the time for one occupant, 81.4% for two occupants, and 87.3% for three occupants. Accuracy for three occupants drops to 74.5% when sleeping periods are removed.

CONCLUSION
We described the Narrator, a service that uses information from a tracker to provide daily movement summaries. We described algorithms that exploit information from binary sensors to perform tracking of several occupants simultaneously. We validated our algorithms using information gathered from an instrumented environment in a series of experiments and provided example output of the Narrator.

REFERENCES
1. Abowd, G., Atkeson, C., Bobick, A., Essa, I., MacIntyre, B., Mynatt, E., and Starner, T. (2000). Living Laboratories: The Future Computing Environments Group at the Georgia Institute of Technology. In Proceedings of the 2000 Conference on Human Factors in Computing Systems (CHI 2000), The Hague, Netherlands, April 1-6, 2000.

2. Addlesee, M., Curwen, R., Hodges, S., Newman, J., Steggles, P., Ward, A., and Hopper, A. (2001). Implementing a Sentient Computing System. IEEE Computer Magazine, Vol. 34, No. 8, pp. 50-56.

3. Bennewitz, M., Burgard, W., and Thrun, S. (2002). Learning motion patterns of persons for mobile service robots. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).

4. Burgio, L., Scilley, K., Hardin, M., Janosky, J., Bonino, P., Slater, S., and Engberg, R. (1994). Studying Disruptive Vocalization and Contextual Factors in the Nursing Home Using Computer-Assisted Real-Time Observation. Journal of Gerontology, Vol. 49, No. 5, pp. 230-239.

5. Burgio, L., Scilley, K., Hardin, M., and Hsu, C. (2001). Temporal patterns of disruptive vocalization in elderly nursing home residents. International Journal of Geriatric Psychiatry, 16, 378-386.

6. Clarkson, B., Sawhney, N., and Pentland, A. (1998). Auditory Context Awareness via Wearable Computing. In Proceedings of the Perceptual User Interfaces Workshop, San Francisco, CA.

7. Davis, L., Buckwalter, K., and Burgio, L. (1997). Measuring Problem Behaviors in Dementia: Developing a Methodological Agenda. Advances in Nursing Science, 20(1), 40-55.

8. Huang, T., and Russell, S. (1997). Object identification in a Bayesian context. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI-97), Nagoya, Japan. Morgan Kaufmann.

9. Kanade, T., Collins, R., Lipton, R., Burt, P., and Wixson, L. (1998). Advances in cooperative multi-sensor video surveillance. In Proceedings of the 1998 DARPA Image Understanding Workshop, Vol. 1, pp. 3-24.

10. McLachlan, G.J., and Krishnan, T. (1997). The EM Algorithm and Extensions. Wiley Series in Probability and Statistics.

11. Mozer, M.C. (1998). The neural network house: An environment that adapts to its inhabitants. In M. Coen (Ed.), Proceedings of the American Association for Artificial Intelligence Spring Symposium on Intelligent Environments (pp. 110-114). Menlo Park, CA: AAAI Press.

12. Pasula, H., Russell, S., Ostland, M., and Ritov, Y. (1999). Tracking many objects with many sensors. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden.

13. Schulz, D., Fox, D., and Hightower, J. (2003). People Tracking with Anonymous and ID-Sensors using Rao-Blackwellised Particle Filters. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI).

14. Sidenbladh, H., and Black, M.J. (2001). Learning image statistics for Bayesian tracking. In IEEE International Conference on Computer Vision (ICCV), Vol. 2, pp. 709-716.

15. VanHaitsma, K., Lawton, M.P., Kleban, M., Klapper, J., and Corn, J. (1997). Methodological Aspects of the Study of Streams of Behavior in Elders with Dementing Illness. Alzheimer Disease and Associated Disorders, Vol. 11, No. 4, pp. 228-238.
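As a rough illustration of the Narrator's transient-location handling described above, the following Python sketch folds short stays into a single "passed through" clause before emitting sentences. It is not the authors' implementation: the fixed 30-second threshold stands in for the paper's Gaussian transience test, and the room names and phrasing are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Visit:
    room: str          # room name from the tracker's lookup table
    duration_s: float  # seconds spent in the room

def narrate(visits, occupant="Daniel", threshold_s=30.0):
    """Summarize visits, folding short (transient) stays into 'passed through'."""
    sentences, passed = [], []
    for v in visits:
        if v.duration_s < threshold_s:
            passed.append(v.room)  # transient location: defer to one clause
            continue
        prefix = occupant + " "
        if passed:
            prefix += "passed through the " + ", ".join(passed) + ", then "
            passed = []
        minutes = v.duration_s / 60
        stay = (f"{minutes:.0f} minutes." if minutes >= 1
                else f"{v.duration_s:.0f} seconds.")
        sentences.append(prefix + f"entered the {v.room} and stayed for " + stay)
    return " ".join(sentences)

print(narrate([Visit("first floor hallway", 2), Visit("kitchen", 600)]))
# -> Daniel passed through the first floor hallway, then entered the kitchen
#    and stayed for 10 minutes.
```

A fuller implementation would consume the tracker's maximum likelihood room sequence and fit per-room duration statistics rather than use one fixed threshold.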
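The room-level Bayes filter update and the count-based motion model from the TRACKER section can be sketched in a few lines of Python. This is a minimal forward-filtering illustration under assumed values: three hypothetical rooms, a made-up sensor likelihood, and Laplace smoothing of the transition counts (our addition); the paper's system uses twenty rooms, forty-nine sensors, and off-line forward-backward smoothing rather than this forward-only filter.

```python
ROOMS = ["kitchen", "hallway", "study"]            # hypothetical; the house has 20 rooms
N = len(ROOMS)

def learn_transitions(room_sequence, n=N, smooth=1.0):
    """Count room-to-room transitions into an [N x N] probability table."""
    counts = [[smooth] * n for _ in range(n)]      # Laplace smoothing (our addition)
    for a, b in zip(room_sequence, room_sequence[1:]):
        counts[a][b] += 1.0
    return [[c / sum(row) for c in row] for row in counts]

def bayes_filter_step(bel, likelihood, transitions):
    """Bel'[i] = eta * p(y | x = i) * sum_j p(x' = i | x = j) * Bel[j]."""
    predicted = [sum(transitions[j][i] * bel[j] for j in range(len(bel)))
                 for i in range(len(bel))]
    unnorm = [likelihood[i] * predicted[i] for i in range(len(bel))]
    eta = 1.0 / sum(unnorm)                        # normalize so beliefs sum to 1
    return [eta * u for u in unnorm]

# Toy run: occupant mostly oscillates between kitchen and hallway.
T = learn_transitions([0, 1, 0, 1, 2, 1, 0])
bel = [1.0 / N] * N                                # uniform prior over rooms
bel = bayes_filter_step(bel, [0.9, 0.05, 0.05], T) # motion detector fired in kitchen
print(ROOMS[max(range(N), key=lambda i: bel[i])])  # -> kitchen
```

The per-occupant models p^u would simply be separate transition tables and sensor likelihoods, one filter per tracked person.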
Part III

Interactive Posters
Device-Spanning Multimodal User Interfaces
Elmar Braun, Andreas Hartl
Telecooperation Group
Department of Computer Science
Darmstadt University of Technology
Alexanderstr. 6, 64283 Darmstadt, Germany
{elmar, andreas}@tk.informatik.tu-darmstadt.de

ABSTRACT
Despite the large variety of mobile devices available today, none of them is without flaws: small devices like cell phones are very limited regarding interaction, while larger devices like laptops lack true mobility. We are investigating how both mobility and rich interaction can be achieved by federating small mobile devices with other interactive devices on demand. However, this raises the question of how user interfaces can be authored if the target device is not static and known, but rather a changing set depending on user context. We approach this using device independent widgets, which are mapped to device specific widgets at runtime.

1. INTRODUCTION
Mobile devices have continuously become more powerful and cheaper over the last few years. However, they still lack flexibility, as most of them are designed to be used stand-alone and offer only limited interaction means. We present an architecture which integrates the multitude of mobile devices with each other to augment the users' experience with them. While we expect most of such mobile devices can be shared and accessed as needed, we also think that there still will be a personally owned device in such a world, for security and privacy reasons. Such a device should be unobtrusive and as small as possible. Our group has designed a headset with voice based interaction and communication abilities. Other devices may be associated with this Talking Assistant (TA) [2] as needed, and may also bring in other means of interaction such as graphical user interfaces.

1.1 Example Scenario
What is the benefit of a voice based assistant, and of associating other interactive devices with it on the fly? We show this in a scenario which describes how a user named Alice orders a pizza online while hurrying out of her office. Alice notices that she is hungry exactly as she locks her office behind her. She considers turning back and ordering from her desktop PC, but she is wearing her TA and decides to access the pizza order application by voice menu instead. Alice instructs the TA to access this application and orders a bottle of soft drink. The next step is choosing a pizza topping. Before making a choice, Alice would like to see the menu. But how should the menu be presented to her? Speech synthesis is unsuitable for reading such a long list. Luckily, the building has public displays installed in the hallways. Rather than turning back and restarting the order on a better suited device with a larger display, Alice simply stops in front of one of the displays. The TA notes that it is able to associate with a screen, and relays this information to the order application, which decides to show its menu there. After scanning the menu, Alice simply says "sixty-five, large" into her TA, and resumes walking. The remaining steps of the order are presented to her as a pure voice menu again.

2. AUTHORING ADAPTIVE APPLICATIONS
For authoring applications that make use of the TA and its associated devices, we are going to enhance the idea of widgets as simple elements of interaction [1]. Widgets are currently considered merely as graphical elements; there is no such high-level approach for other modalities. In our broader approach we think of them as objects that represent a generalized, general purpose solution for interaction between users and applications.

The framework for programming the TA will provide basic device independent logical widgets which developers may use to create their multimodal applications. These are comparable to the atomic interactors of XWeb [3]. XWeb features a browser-like approach to data exchange, whereas we provide an event system like traditional GUI widget toolkits. Our widget system is extensible: it is possible to extend existing widgets, to create entirely new ones, or to combine widgets into more powerful ones. While further research will be done in this area, we have so far identified the following as some of the basic widget classes to be provided by the system:

• free form text input; output of simple and structured text

• yes/no input for boolean elements

• select one of several mutually exclusive elements

• input and output of date and time

• a grouping element combining several widgets

3. MAPPING WIDGETS TO A DEVICE
Application developers define the user interface of their applications by creating a tree of logical widgets. These logical widgets are purely abstract representations of the user interface with no association to a specific device. In a two-step process, these logical widgets are mapped to a device.

First, the mapping subsystem creates physical widgets out of logical widgets. Logical widgets are mere data objects representing only the kind of interaction. Their physical counterparts contain methods to render themselves onto a specific modality and/or device. As a result, physical widgets are device dependent. The mapping subsystem utilizes context metadata such as the device used, its primary interaction method (graphics based or voice based), and additional information about the capabilities of the interface (e.g. the voice recognizer used, window metrics, etc.).

Physical widgets are registered with the mapping subsystem along with the information what logical widget they map to, what modality they implement, and what constraints they have. Several physical widgets may be registered for one logical widget, e.g. for different modalities or for different implementations of one modality. At runtime, the mapping subsystem searches for the physical widget which best fits the device and chooses it to substitute for the logical widget.

In the second step, the physical widgets render themselves onto the user interface. How this is done is modality specific. For voice based interaction, this could involve using text-to-speech for the output and generating context-free grammars for specifying the input. The equivalent physical widget for GUIs may just call the appropriate element of the operating system's widget toolkit.

4. ASSOCIATION AND MULTIPLE DEVICES
A user interface can obviously not exceed the limitations of the device it runs on. When mapping an interface to a small mobile target device, it may allow basic interaction in the absence of a better terminal. However, mobile devices are not always used in isolation. Often, the surrounding infrastructure could provide additional means of interaction. We intend to dynamically associate mobile devices with devices from the infrastructure in order to overcome their limitations regarding interaction. Since the number of possible combinations of devices is rather large, hand-coding a specialized UI for each combination is infeasible. The mapping subsystem will provide a scheme to render an interface on a federation of multiple devices. This concept has so far only been considered for playback of multimedia content [4].

Before such a federation can be established, one must detect that a device is within the user's range and that the device can be associated. We currently use two methods for detecting possible associations between users and devices. One is the TA, which determines its wearer's head position (using two cameras tracking an infrared beacon on the TA) and gaze direction. The other consists of tags on each device, which transmit their ID using short range infrared, and badges on each user, which receive a tag's ID if the user is standing in front of the tagged device, and relay it to the network. The advantage of the latter solution is its low cost.

Mapping a user interface to span multiple devices introduces a number of novel problems:

• When adapting for a single device, there is no choice regarding which device to present a widget on. If several devices are available, the mapping needs to decide how to distribute widgets to devices, factoring in usability and device characteristics.

• While there are some dynamic device characteristics (e.g. battery status), most characteristics of a single device are fixed. If users move in and out of range of associated devices at runtime, the virtual target device of the mapping changes drastically at runtime, making mapping the UI for multiple devices a much more dynamic process.

• Despite constantly changing context, the mapping should not present the user with a constantly changing interface. This would inhibit usability, since the user would have no chance to become accustomed to the UI. Therefore a history of how a UI was presented to a user before needs to be considered as an additional form of context.

• In the case of a single device, each widget is rendered exactly once. When using a federation of devices, it can make sense to render an element more than once, e.g. in different modalities to achieve multimodality, or on different devices to create some form of remote control.

We are investigating several mapping methods that take these criteria into account. Currently we are building a test bed that allows us to create distributed UIs, and to automatically send components of these to different devices in the room infrastructure. This allows us to experiment with distributed UIs and to try out mapping algorithms for the distributed case. In future work it will be used for user studies.

5. CONCLUSION
We have presented a way to create multimodal applications whose user interface may span several devices. The approach is based on a generalized concept of widgets as interaction elements. We have developed a mapping subsystem that determines the appropriate mapping of a logical widget at runtime based on the target device. By mapping at runtime we can support several modalities concurrently.

We have shown how several devices may be integrated into federations in order to define the target for device-spanning user interfaces. While the mapping subsystem is designed to cope with several modalities, distributed user interfaces pose additional challenges to the mapping, which we have identified and are currently working on.

REFERENCES
[1] A. Hartl. A Widget Based Approach for Creating Voice Applications. In Proceedings of MobileHCI, Udine, Italy, 2003. To appear.

[2] M. Mühlhäuser and E. Aitenbichler. The Talking Assistant Headset: A Novel Terminal for Ubiquitous Computing. In Microsoft Summer Research Workshop, Cambridge, Sep 2002.

[3] D. R. Olsen, S. Jefferies, T. Nielsen, W. Moyes, and P. Fredrickson. Cross-modal interaction using XWeb. In Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology, pages 191-200, San Diego, USA, 2000. ACM Press.

[4] T. Pham, G. Schneider, and S. Goose. A Situated Computing Framework for Mobile and Ubiquitous Multimedia Access Using Small Screen and Composite Devices. In Proceedings of the 8th ACM International Conference on Multimedia, pages 323-331, Marina del Rey, USA, 2000. ACM Press.
On the Adoption of Groupware for Large Displays:
Factors for Design and Deployment
Elaine M. Huang
College of Computing
GVU Center, Georgia Institute of Technology
Atlanta, GA, 30332-0280 USA
+1 404 385 1102
[email protected]

Alison Sue, Daniel M. Russell
IBM Almaden Research Center
USER Group
650 Harry Road
San Jose, CA, 95120 USA
{alisue, daniel2}@us.ibm.com

ABSTRACT
Groupware systems on large displays are becoming increasingly ubiquitous in the workplace. While these applications face many of the same challenges to adoption as conventional desktop-based groupware, the public and shared nature of these systems heightens these challenges as well as presents additional difficulties that can affect adoption and success. Our field study of seven large display groupware applications (LDGAs) uncovered several factors of their design and deployment that influenced their adoption and usage within the workplace.

Keywords
Large displays, groupware, collaboration, adoption patterns

INTRODUCTION
In his seminal CSCW article, Grudin outlined a number of challenges for the successful creation of groupware applications [1]. In the realm of LDGAs, we have found that common characteristics of these systems that distinguish them from desktop applications heighten the existing challenges and present new ones. Four of these characteristics are:

• Form factor – The size and visual impact of large displays cause users to perceive and interact differently.

• Public audience and location – The location in shared space affects the amount of attention users direct at LDGAs as well as the visibility and privacy of interactions.

• Not in personal workspace – The location outside of users' personal workspaces affects the amount and type of interaction and exploration in which users engage.

• Not individually owned – The lack of personal ownership of LDGAs affects the extent to which people use them or interact with the content.

We conducted a study involving three different groups: a) researchers working on LDGAs, b) members of workgroups in which LDGAs were deployed, and c) salespeople for a corporation that produces large displays and LDGAs. Our goal was to identify common factors affecting the success of adoption of these applications. Our study entailed face-to-face interviews, telephone interviews, and observations of seven systems that had had varying success in being adopted into normal workgroup tasks.

FACTORS AFFECTING THE ADOPTION OF LDGAs
Our research uncovered five important factors that were common across many of the systems we studied. Each stemmed from the four common characteristics of LDGAs that we identified. The factors are a combination of technical and social issues that influence system design as well as techniques for deployment that affect adoption and usage.

1. Task specificity and integration
The value and usefulness must be more evident than for conventional groupware because users may spend less time exploring and experimenting with LDGAs.

In many LDGAs, the specificity of the tasks involved was crucial to the adoption of a tool that seemingly supported general collaboration practices. Systems introduced for the sake of promoting specific collaboration or information sharing tasks generally were more successfully adopted than those introduced for general collaboration purposes. Tools designed or deployed to support specific tasks were more likely to be successful if they were deployed either for a task for which their use was critical or for a task whose content itself was critical to the user. In one example, professors teaching certain classes chose to make use of a collaborative display for teaching and class discussions. The use of and interaction with the technology was critical to the tasks of taking or teaching the class; students taking the class used the display not because they were required or told to do so, but because it was deeply integrated into critical tasks involved with being a part of the class. In another case, an LDGA was introduced and adopted for space exploration planning, a critical task whose inherently collaborative nature increased scientists' ability to carry out the task efficiently.

2. Tool flexibility and generality
LDGAs that support general collaborative practices may be adopted by new user groups or for novel tasks because of their high exposure and public and shared nature.

Although LDGAs introduced for specific tasks or tightly integrated with important tasks have had good success in being adopted, we have also observed the value of broad and
flexible collaboration support in their design. Most successful systems we observed provided support for a breadth of different practices that people employ to collaborate, even though the systems were deployed to support specific tasks. In short, tools that offer a variety of interaction methods that users can select as needed have been more widely adopted than those that lock users into very specific interactions.

A flexible tool that is deployed to support a specific task may also be appropriated for other tasks as people realize the tool’s potential. A system that supports a broad set of collaborative practices may be used beyond its intended purpose. In one case, a tool designed to help visiting scientists collaborate was appropriated by teams of resident engineers because it provided them with general tools for creating shared digital artifacts as well as an easy method of distributing documents among users.

3. Visibility and exposure to others’ interactions
The interactions of others demonstrate usage and value because the form factor and public nature of these applications can make user behaviors highly visible. Although certain features existed of which users were aware, they were exposed to the potential value of the features only after observing others making use of them. In one particular instance, the item forwarding feature of an information sharing application in an LDGA existed in the interface for approximately three months before it received use. Though the feature was highly visible and people were aware of it, users did not perceive it as useful until they saw others using it. Through seeing people forwarding items, and possibly from receiving forwarded items, users began to use that feature and it became widely adopted. Because large displays are perceived as more public than desktop systems [2], the value of exposure to others’ interactions on LDGAs can influence usage and the perception of value.

4. Low barriers to use
Barriers must be low so users can quickly discover value, because LDGAs may be less amenable to exploration and have a lower frequency of use than desktop groupware. It is important that users be able to interact successfully and easily with the system early in their usage in order for the system to be adopted into normal tasks. Systems that require significant time to install or configure, have time-consuming steps to initiate use, or have functionality that is not visible tend to find small audiences or a drop in usage after the initial deployment. In one application that requires user-submitted content, users have the option of posting information via a web form or an email address. Because email is perceived as quicker and easier than going to a form and filling it out, it is often used to post, while the web form is not. Another system that requires users to install and configure an application on their desktop machines in order to use the LDGA is used by only a small portion of its workgroup, despite a steady, long-term deployment. The researchers attributed this to the lack of an easy installation process.

5. Dedicated core group of users
Advocates and a core set of users early on help others to perceive usefulness and reduce the hesitancy to use the system that stems from its form factor and location. With all groupware applications, achieving critical mass is crucial to adoption [1]. Because LDGAs are generally less amenable to exploration and experimentation than desktop groupware, they are more likely to fall into disuse soon after deployment. Researchers who developed systems that were not very task specific found that adoption was aided by having a dedicated core group of users early in the deployment. This group, which often included the researchers, used the system regularly and encouraged usage by others after the initial burst of “novelty use” died down. Continued use by the core group ensured that displays remained dynamic and content fresh rather than stale. The perception that displays were being used and viewed encouraged further adoption into everyday use by a wider audience. Additionally, the core group advocated others’ use by directly encouraging others to use the applications. For one application designed to share user-submitted items, core users encouraged coworkers to post information onto the displays that they had previously emailed to others. This encouragement was positive feedback to the senders of the information and helped lower the initial hesitancy they felt about interacting with a new system, both technically and culturally.

FUTURE WORK AND CONCLUSIONS
The shared and public nature of LDGAs poses unique challenges for their design and deployment in addition to the challenges faced by conventional groupware. By surveying several systems, we identified some common factors affecting the success of their adoption. Future work includes applying these lessons to our own LDGAs and refining our findings to better understand the dimensions, roles, and usage of these systems within workgroups.

ACKNOWLEDGMENTS
The authors would like to thank E. Churchill, A. Fass, R. Grawet, S. Greenberg, P. Keyani, S. Klemmer, L. Leifer, A. Mabogunje, A. Milne, J. Trimble, R. Wales, and T. Winograd for sharing their projects, reflections, and valuable insights with us.

REFERENCES
1. Grudin, J. Groupware and social dynamics: Eight challenges for developers. Communications of the ACM, 37, 1, 1994, 92-105.
2. Tan, D.S. and Czerwinski, M. Information Voyeurism: Social Impact of Physically Large Displays on Information Privacy. Extended Abstracts of CHI 2003, Fort Lauderdale, FL, 2003.

150
Super-Compact Keypad

Roman Ilinski
Cybernetics Council Labs, Moscow, Russia
CRS DM, 141 N 76 St, Seattle, WA 98103
http://www.geocities.com/senskeyb
[email protected]

ABSTRACT
A compact design for a sensitive keypad construction is presented here that includes a touch-sensitive keypad with a single pushbutton mechanical key. While the user's finger seeks the desired key, a few small keys could be touched, and then pressed, by that finger at the same time. The desired character can be defined and shown before pressing as the centroidal point of the sensors that are touched simultaneously. Only a modicum of accuracy is required to operate such a compact keypad. The user does not need to push a small-targeted key precisely, because all keys are pushed jointly. The keypad may be made as small as the tactile sensibility and the stable positioning of fingers allow, to fit the design of ultra-portable devices.

Keywords
Touch-sensitive keypad, haptic interface, tactile feedback, pre-typing visual feedback.

INTRODUCTION
Touch-sensitive keyboard technology is based on the use of sensors incorporated in the keypad. The identity of the touched key is tracked and monitored on a display before the key is pressed, so that the operator can see when the finger is over the correct key. Data entry is thus made simpler without looking at the keyboard [1]. A visual map is represented by a keypad layout diagram to monitor finger motion. This visual feedback assists the user in locating keys before or after the keystroke without looking at the keyboard. To combine visual feedback with tactile feedback, the common surface of the sensors must be shaped so that convex or concave sections mimic the corresponding keys. The user can keep his attention on the screen and does not need to shift focus between display content and keyboard layout. Since the system can display what will happen when a given key is touched, the user can predict the effect of the action [2].

Motion detection sensors, including any type of object sensing, field-responsive devices, or a camera, may be applied to the keyboard to provide information about the hand motion.

The electrical detection signal for identification of hand motions on top of the key caps is generated by the sensor detection circuit and provided through an accessory interface or multiplexed with the keystroke data interface to the computer.

When key <A> is touched, sensor <A> sends “A” to the application for preview; afterwards, if key <A> is pressed, the mechanical key sends the same “A” and “A” is entered. The same thing happens when <B>, <C>, or other keys are touched and pressed. A visual map is produced and displayed with the location of the operator’s finger plus a keypad layout diagram for assisting the operator to locate keys before manual actuation.

The main observation is that a sensitive keyboard and interface reduce the actions users must take to move their focus between the keypad and the screen. The possibility of easily changing from one- to two-handed operation, and the fact that there is no need for a particular hand position to find the necessary keys, reduce fatigue and allow extended use.

COMPACT TOUCH-SENSITIVE KEYPAD
If the keyboard size is very small, one problem needs to be solved. While the user’s finger seeks the desired key, a few small keys could be touched (and then pressed) by that finger at the same time (Fig. 1).

Fig. 1. Partial construction of the sensitive keyboard, when keys <A>, <B> and <C> are touched at the same time.

The desired character can be defined (and shown before pressing) as the single representative of the keys that are touched simultaneously [3]. In accordance with the present technology, a conventional mechanical pushbutton keypad, such as one for a computer, a handheld, a phone, a remote control or any other kind of data entry device, is covered by a touch-sensitive, shaped cover that includes the finger

151
position sensors. If any key is touched, its sensor sends the corresponding identity to the application for preview. If any key is pressed, then a mechanical key sends a common input signal, because the application already knows which key identity needs to be entered. Each mechanical pushbutton key does not need to send the key’s identity signal to the application – only the input command needs to be sent.

Fig. 2. Partial construction of the sensitive keyboard with a single mechanical pushbutton key.

All sensors can be placed on a common background for joint pushing (Fig. 2). Only one or a few parallel pushbutton keys need be used, because all of them send the same input command. If the user’s finger touches a few sensors simultaneously, then the central point of the touched figure represents the targeted key. The user does not need to push a small-targeted key separately, because all keys are pushed jointly.

Various predefined algorithms can provide the calculation of the representative. For example, the centroid or “center-of-mass” coordinates can be a valid representative characteristic. The use of pressure-sensitive sensors allows the centroid of the finger pattern to be calculated with more accuracy. For even more accuracy, other characteristics (extent, solidity, eccentricity, etc.) can be added to the algorithm. Each key’s sensor can also be implemented as a multi-sensor element. The use of a surface with multiple mini-sensors likewise allows the centroid of the finger pattern to be calculated with more accuracy.

THE PROTOTYPE
The sensitive linear keypad (Fig. 3) was implemented as an elastic, plastic bar with touch-sensitive stripes. The actual size of the twelve-key pad is 8 x 32 mm (3/8 x 1 1/4 in.).

The virtual keyboard image can show a full set of keys or only the part of the keyboard which is touch-activated at a given time. Before the key is pressed, the position of the operator’s fingertip is tracked and monitored on a display, so that the operator can see when his/her finger is over the correct keyboard indicia.

A simple detector topology incorporates a code that uniquely represents the keys to determine the location of the fingertip of the user. Such a thin and elastic cover surface can be used for speed typing and is versatile enough to be made in different sizes and shapes to fit the design of ultra-portable devices.

Fig. 3. Linear keypad prototype.

CONCLUSION
The following benefits could be stated after experimental use of the prototype:
• The user is not required to push a small targeted key precisely, as all keys are pushed jointly.
• The shaped, sensitive surface offers tactile feedback on the finger location, which provides comfortable control.
• The keypad could be made as small as the tactile sensibility and the stable positioning of fingers allow.
• The keyboard exterior design is not limited by the parameters of mechanical keys, but by sensors only.
• The design flexibility allows the use of a mobile device in a naturally comfortable hand and wrist posture.

REFERENCES
1. Cheung, N. Computer Data Entry Apparatus with Hand Motion Sensing and Monitoring. US Patent 5,736,976 (1998).
2. Hinckley, K. and Sinclair, M. Touch-Sensing Input Devices. In CHI'99 Proceedings, pp. 223-230 (1999).
3. Levy, D. The Fastap Keypad and Pervasive Computing. In Pervasive'02 Proceedings, Zurich, Switzerland, pp. 58-68 (2002).
4. Hiraoka, S., et al. Behind Touch. In IPSJ Interaction’03 Proceedings, Tokyo, Japan, pp. 131-138 (2003).
5. Rekimoto, J., et al. SmartPad: A Finger-Sensing Keypad for Mobile Interaction. In CHI’03 Proceedings, pp. 850-851 (2003).
6. Ilinski, R. Interface with Pre-Typing Visual Feedback for Touch-Sensitive Keyboard. In CHI’03 Proceedings, pp. 750-751 (2003).
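
The two-stage entry scheme (touch previews a character, the single pushbutton commits it) and the centroid-based key selection described in this paper can be sketched in a few lines. The sensor layout, key names, and pressure values below are hypothetical, chosen only to illustrate the technique; this is not the author's implementation.

```python
# Sketch of centroid-based key selection for a compact touch-sensitive
# keypad: while several small keys are touched at once, the pressure-
# weighted centroid of the touched sensors picks the intended key.
# Hypothetical 3-key linear layout: key name -> sensor (x, y) in mm.
KEY_POSITIONS = {"A": (4.0, 4.0), "B": (12.0, 4.0), "C": (20.0, 4.0)}

def centroid(readings):
    """Pressure-weighted centroid of all touched sensors.

    readings: dict mapping key name -> pressure (0 = untouched).
    Returns (x, y), or None if nothing is touched.
    """
    total = sum(readings.values())
    if total == 0:
        return None
    x = sum(KEY_POSITIONS[k][0] * p for k, p in readings.items()) / total
    y = sum(KEY_POSITIONS[k][1] * p for k, p in readings.items()) / total
    return (x, y)

def select_key(readings):
    """Map the centroid of the finger pattern to the nearest key."""
    c = centroid(readings)
    if c is None:
        return None
    return min(KEY_POSITIONS,
               key=lambda k: (KEY_POSITIONS[k][0] - c[0]) ** 2
                           + (KEY_POSITIONS[k][1] - c[1]) ** 2)

# Two-stage entry: touching previews the character, pressing commits it.
def on_touch(readings):
    return select_key(readings)   # shown to the user as a preview

def on_press(readings):
    return select_key(readings)   # the single pushbutton enters it

# The finger covers <A> and <B>, but presses harder near <B>:
print(on_touch({"A": 0.3, "B": 0.9, "C": 0.0}))  # -> B
```

Extra shape characteristics (extent, solidity, eccentricity) would refine `select_key` further, as the paper suggests, but the centroid alone already resolves the joint-touch ambiguity.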

152
EnhancedMovie: Movie Editing on an Augmented Desk
Yoko Ishii*, Yasuto Nakanishi*, Hideki Koike*, Kenji Oka**, Yoichi Sato**

*Graduate School of Information Systems, University of Electro-Communications
1-5-1 Chofugaoka, Chofu, Tokyo 182-8585 Japan
{ishii, naka, koike}@vogue.is.uec.ac.jp

**Institute of Industrial Science, University of Tokyo
4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 Japan
{oka, ysato}@iis.u-tokyo.ac.jp

ABSTRACT
In this paper, we describe our prototype system for movie editing on an augmented desk. We aim to enable a user to edit a movie through intuitive operations using both hands. In the current prototype system, a user can make a movie by editing a sequence of pictures. We introduce the hand gestures in the current system and then propose other hand gestures to be implemented in the future.

Keywords
Augmented reality, computer vision, gesture recognition, movie editing.

1. INTRODUCTION
With the wide spread of digital videos and digital cameras, PCs with an IEEE1394 port have become common, and it has become popular to edit and make personal movies oneself. Because movie editing requires a large workspace, it is convenient to use a large display, and the number of people who prefer large-scale displays will increase. PDH uses a large display as a round augmented table for sharing stories that contain many pictures [4]. Editing a movie requires operations such as: cutting movies by setting a starting point and an ending point; changing the sequence of movies or pictures; and inserting another movie or picture. However, performing such operations in a GUI application on a large display requires that people select a command and then move a cursor a lot, and several actions may be required to complete each operation.

We have developed an augmented desk interface system called the “EnhancedDesk.” By using an infrared camera and advanced computer vision techniques, the system provides users intuitive interaction by allowing them to use their own hands or fingers for direct manipulation of both physical and projected objects. We have developed some applications, including an X window system that can be operated by fingers [2], as well as a drawing tool that uses gesture recognition by HMM [3]. The merit of these applications is that operations which normally need a length or size to be set after a command is selected – such as making a circle or a rectangle – can be performed at once. For example, in making a rectangle, users in a GUI system have to select a command and then specify two corners (or a center point and one corner); these tasks require three actions. In our system, the user can make a rectangle by drawing a vague one with a finger, an action which is realized by gesture recognition. The user can also put both hands on EnhancedDesk while making a right angle with a thumb and a first finger, which is equivalent to selecting a command and specifying two corners. A two-handed system [1] allows users to simultaneously perform certain tasks which are usually done one by one with a single mouse.

Movie editing might be one application which can make use of this aspect, because the operations in movie editing described above require users to specify a command, a length or a location. We are developing a system with which users can edit a movie on our augmented desk interface system; this new system is called “EnhancedMovie”. This paper describes the prototype system that we have developed, and introduces gestures that will be integrated into our system.

2. EnhancedMovie
2.1 Prototype system
In this section, we introduce our prototype system, which makes a movie from a sequence of pictures. It loads pictures in the specified directory and displays them on the desk using a flow layout.

Figure 1: gestures with one hand

The system recognizes closing all fingers as the “grabbing” gesture and opening all fingers as the “releasing” gesture. When a user makes the grabbing gesture on a picture with one hand, the picture is selected and is specified as the target for subsequent operations. The selected picture is highlighted in red, which lets the user know that the picture has been selected. When the user moves the fist with the hand closed and then performs the releasing gesture, the selected picture is

153
moved to the location between the two pictures where the releasing gesture was done (Fig. 1a). When the place is not between two pictures, the picture is not moved. When the user moves the fist outside of the desk keeping the hand closed, the picture is cut (Fig. 1b).

(a) moving some pictures at once: {1, 2} grabbing pictures with both hands; {3} moving the pictures; {4} releasing the pictures. (b) cutting some pictures at once: {1} two fingers are open; {2} they are closed; {3} they are open again.
Figure 2: gestures with both hands

When the user makes the grabbing gesture on two pictures with both hands, the pictures between the two are selected, and the two are specified as the start and end points for the subsequent operation. The selected pictures are highlighted in green. When the user moves the fists and then does the releasing gesture, the pictures are moved to the location between the two pictures where the gesture was done (Fig. 2a). This insertion point is shown as a white square. While the user moves the fists with the hands closed, the square moves along, so that the user knows the inserting point. In order to cut some pictures, the system recognizes another gesture. When the user puts both hands on two pictures while opening only the index finger and the middle finger, and then gathers the two fingers, the pictures between the two specified pictures are cut (Fig. 2b). When the user puts both hands on two pictures while opening the index finger for two seconds, the system makes a movie composed of the pictures between the two and starts to play the movie in another window (Fig. 3). When the gesture is completed, the pictures are highlighted in blue. While the system is making the movie, the color changes into white gradually. In this process, we utilize JMF (Java Media Framework) 2.1.1 and QuickTime for Java.

{1} A start point and an end point are specified. {2} The resulting movie starts to play in another window.
Figure 3: making a movie

2.2 Gestures planned to integrate
The gestures implemented in the current system utilize only the number and locations of fingers. We will integrate our system’s capability for recognizing various hand gestures [3]; those gestures include: drawing a circle; drawing a rectangle; and waving a finger. We will make the following functions correspond to these gestures: forwarding and rewinding a movie; inserting a frame for texts; and undoing an operation (Fig. 4a). We will also implement gestures which utilize the directions of moving hands, those that join both hands, and sliding a hand. These gestures will correspond to the following functions: grouping pictures or movies; and adding an animation effect to a movie (Fig. 4b). When the user joins both hands on two pictures, the pictures between the two will be grouped together. An animation effect will be added according to the direction of the moving hand. For example, when the user moves a hand on a picture to the right, the system will add a slide-out effect to the right direction.

(a) gestures with finger-trace. (b) gestures with direction of moving hand: grouping; adding an effect.
Figure 4: gestures to implement

3. DISCUSSIONS
We introduced the prototype of the movie editing system on our augmented desk interface and the gestures which we will implement. The current system loads only pictures; in the next system, users will be able to load movies and edit them. We will also evaluate it by comparing it to a GUI application using a mouse.

4. REFERENCES
[1] Bier, E. et al., Toolglass and Magic Lenses: The See-Through Interface, Proc. ACM SIGGRAPH 1993, pp. 73-80, 1993.
[2] Koike, H. et al., Integrating paper and digital information on EnhancedDesk: a method for real-time finger tracking on augmented desk system, ACM Trans. on CHI, Vol. 8, Issue 4, pp. 307-322, 2001.
[3] Koike, H. et al., Two-handed drawing on augmented desk, Extended Abstracts of ACM SIGCHI 2002, pp. 760-761, 2002.
[4] Shen, C. et al., Personal Digital Historian: Story Sharing Around the Table, ACM Interactions, Vol. 10, Issue 2, pp. 15-22, 2003.
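
The one-hand editing operations described in this paper — grab to select a picture, release to drop it at an insertion point, move the closed fist off the desk to cut — amount to simple list manipulations on the picture sequence. The class and event names below are illustrative assumptions, not the EnhancedMovie implementation:

```python
# Sketch of the editing model behind the one-hand gestures: "grab"
# selects a picture, "release" drops it at an insertion point, and
# moving the closed fist off the desk cuts it. Names are hypothetical.
class MovieEditor:
    def __init__(self, pictures):
        self.pictures = list(pictures)   # current editing sequence
        self.grabbed = None              # picture held by the closed fist

    def grab(self, picture):
        """Closing all fingers over a picture selects it (highlighted red)."""
        if picture in self.pictures:
            self.grabbed = picture

    def release(self, insert_index):
        """Opening the fingers drops the picture between two others."""
        if self.grabbed is None:
            return
        self.pictures.remove(self.grabbed)
        self.pictures.insert(insert_index, self.grabbed)
        self.grabbed = None

    def release_off_desk(self):
        """Moving the fist off the desk while closed cuts the picture."""
        if self.grabbed is None:
            return
        self.pictures.remove(self.grabbed)
        self.grabbed = None

editor = MovieEditor(["p1", "p2", "p3", "p4"])
editor.grab("p4")
editor.release(0)           # p4 is moved to the front
editor.grab("p2")
editor.release_off_desk()   # p2 is cut
print(editor.pictures)      # -> ['p4', 'p1', 'p3']
```

The two-hand gestures generalize this to ranges: grabbing two pictures selects the slice between them, and the same move/cut operations then apply to the whole slice.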

154
Instructions Immersed into the Real World –
How Your Furniture Can Teach You
Florian Michahelles¹, Stavros Antifakos¹, Jani Boutellier¹, Albrecht Schmidt², Bernt Schiele¹
¹ETH Zurich, Switzerland ²University of Munich, Germany
{michahelles, antifakos, janbo, schiele}@inf.ethz.ch [email protected]
http://www.vision.ethz.ch/projects/furniture

ABSTRACT
In this paper we show a simple way to immerse instructions into the real world. In particular, we propose to enhance the static affordances of objects by using LEDs attached to the objects. Using the example of a piece of flat-pack furniture, we demonstrate how to guide and teach the user during assembly.

KEYWORDS
instructive interaction, physical interaction, dynamic affordance

INTRODUCTION
In previous work [1] we introduced the notion of proactive instructions that serve users specifically to their needs in the current situation. Using a piece of flat-pack furniture, we showed how to determine the assembly actions the user is performing: sensors attached to unassembled furniture parts can recognize the user’s actions and send the data to a separate computer. This computer holds an assembly plan, which contains all possible states of assembly, similar to a finite state machine. In the first state all items are unassembled; subsequent states are reached based on the user’s physical actions. The final built-up state becomes valid if all assembly steps have been performed properly. The underlying principle is that the system can track, by deploying sensors, the user’s assembly actions and give recommendations specific to the user’s needs in the current situation.

Presenting these recommendations to the user in a proper way is crucial. Augmented Reality (AR) is well established for visually integrating virtual knowledge into a user’s physical environment. However, AR is cumbersome and typically computationally expensive. Audible instructions offer a cheaper way of immersion, but have to tackle the problem of addressing the appropriate parts with a vocabulary the user is familiar with or has to learn beforehand. There is the possibility of presenting information on a screen, as in our prior work [1]. However, the integration of instructions with the task remains unsolved.

This paper studies how the affordances of physical objects can be exploited and enhanced by dynamic cues: LEDs attached to the parts draw the user’s attention to, and signal, the next action.

SELF-DESCRIPTION OF PHYSICAL OBJECTS
An ideal design should not require any instructions at all: by simply looking at the physical objects, the user can guess and understand the functionality. For this phenomenon Gibson coined the term affordance [2], and it was widely spread in HCI by Norman: “Affordances are perceived properties of an artifact that indicate how it can be used” [3]. Affordances are static properties of physical objects that can be perceived by a user. Furthermore, these properties can invoke a user’s mental model explaining the functionality of the object and possible actions. Depending on the properties of the object and the experience of the user, affordances cannot always be easily perceived and may require additional signs: instructions.

Instructions change over time as they depend on the current state of the task. Consequently, this paper proposes to enhance the static affordances of objects by dynamic signs that give additional hints to the user adapted to his situation: instructions should mediate the dialog between the user and the physical objects. This introduces dynamic affordances as proposed in [4].

DESIGNING LEARNING-BY-DOING
People learn to do things by hearing, being told or instructed, seeing, being shown, or by doing. Despite individual differences in preference for one mode or another, learning by doing is often very important. A user recognizes something novel (“I see two screws and two boards…”), guesses the next appropriate action (“…perhaps the screws fit in here...”), executes the action (“…let’s put in the screws…”) and is immediately rewarded by discovering whether the guess was correct (or wrong). Ideally, instructions provide hints that subtly but infallibly guide users toward correct conclusions. This requires three principles that support learning by doing [5]: explorability, predictability, and intrinsic guidance.

Explorability enables users to explore, experiment, and discover functionality without penalization of unintentional or mistaken actions. In particular, this requires infinite-level undo and redo operations in a coherent and consistent manner. Predictability builds upon intuition: a user can draw conclusions based on first impressions without extensive thought or chains of reasoning. Accordingly, familiar things must behave as expected, and novel or unfamiliar things must behave in ways that are reasonable and immediately understandable. Intrinsic guidance

155
provides help as needed without requiring any special action or initiative on the part of the user.

FURNITURE INSTRUCTIONS
For the furniture application we identified five types of feedback the user should receive:
1. direction of attention
2. positive feedback for a right action
3. negative feedback for a wrong action
4. fine-grain direction
5. notification of the finished task

This enables users to explore how the furniture has to be assembled. Users unwrap the flat-pack and their attention gets directed immediately (1) to the parts they are supposed to start with. Users’ actions, such as turning and moving boards, are sensed, and blinking green light patterns indicate which edges have to be connected in which manner. If boards are aligned in the proper way, a synchronized green light pattern (Fig. 1) indicates a well performed action (2).

Fig. 1: Flash patterns: right/wrong action

If the user takes a wrong action, a red light pattern appears (Fig. 1), reporting a mistake (3). Additionally, a green flash pattern shows the right alternative (2). After boards have been aligned together in the right way, individual green lights direct the user’s attention to the holes where the screws have to be inserted and tightened (4). Once the final assembly state is reached, synchronous flash patterns on all LEDs indicate that the task is finished (5).

These light patterns extend the parts’ static affordances and can teach the user in a learning-by-doing manner how parts fit together: as a physical notion of undo and redo, attached boards can be continuously detached and rearranged, which fosters explorability. Furthermore, the LEDs also contribute predictability to the assembly, as a red (green) light immediately indicates a wrong (right) action. Intrinsic guidance is provided by dynamic instructions that adapt to the current assembly state. This allows the user to take any sequence of actions without being constrained to a certain predefined linear sequence.

SYSTEM ARCHITECTURE & FUNCTIONALITY
We have developed a first prototype to display the instructions dynamically for a piece of flat-pack furniture [6]. For the sensing functionality, accelerometers reveal orientation, force sensors measure screw tightening, and IR sensors measure co-location of boards; see [1] for details. Sensor data processing and wireless communication are handled by the Smart-Its platform [7].

Fig. 2: Architecture Diagram

For the output functionality we have developed a custom layout board carrying eight dual green/red LEDs. These boards are attached to the connecting edges of each furniture part (Fig. 3).

Fig. 3: Prototype: Guidance through LEDs

CONCLUSIONS
This paper demonstrates how static affordances can be enhanced by dynamic cues mediating the interaction between users and physical objects. By augmenting the parts of a piece of flat-pack furniture with sensing capabilities and LEDs, we demonstrated the feasibility of this approach. This augmentation of objects makes it possible to integrate instructions directly into the objects, gives the user the flexibility to draw his own conclusions, and provides intrinsic guidance where appropriate.

ACKNOWLEDGEMENTS
The Smart-Its project is funded in part by the Commission of the European Union under contract IST-2000-25428, and by the Swiss Federal Office for Education and Science (BBW 00.0281).

REFERENCES
1. Antifakos, S., Michahelles, F., Schiele, B., Proactive Instructions for Furniture Assembly, UbiComp 2002, pp. 351-360, Göteborg, Sweden, September 2002.
2. Gibson, J.J., The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, 1979.
3. Norman, D., The Psychology of Everyday Things, New York, Basic Books, pp. 87-92, 1988.
4. Cook, S., Brown, J., Bridging Epistemologies: The Generative Dance Between Organizational Knowledge and Organizational Knowing, Organizational Sciences, 10 (4), pp. 381-400, 1999.
5. Constantine, L. and Lockwood, L., Instructive Interaction: Making Innovative Interfaces Self-Teaching, User Experience, Winter, pp. 14-19, 2003.
6. Holmquist, L.E., Antifakos, S., Schiele, B., Michahelles, F., Beigl, M., Gaye, L., Gellersen, H.W., Schmidt, A. and Strohbach, M., Building Intelligent Environments with Smart-Its, SIGGRAPH 2003, Emerging Technologies exhibition.
7. Beigl, M., Zimmer, T., Krohn, A., Decker, C., and Robinson, P., “Smart-Its – Communication and Sensing Technology for UbiComp Environments”, Technical Report ISSN 1432-7864, 2003.
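
The assembly plan described above — a finite state machine whose transitions are the user's sensed actions, with green flashes confirming a right action and red flashes (plus a green hint) marking a wrong one — can be sketched as follows. The states, action names, and feedback strings are invented for illustration; see [1] for the actual plan representation.

```python
# Sketch of an assembly plan as a finite state machine: sensed user
# actions drive state transitions; a recognized transition triggers a
# green flash pattern, an unknown one a red pattern plus a green hint.
# States, actions, and feedback strings below are hypothetical.

# (state, sensed action) -> next state
PLAN = {
    ("unassembled", "align board A to B"): "frame aligned",
    ("frame aligned", "insert screws"): "screws inserted",
    ("screws inserted", "tighten screws"): "assembled",
}
FINAL_STATE = "assembled"

def step(state, action):
    """Advance the plan by one sensed action and emit LED feedback."""
    if (state, action) in PLAN:
        state = PLAN[(state, action)]
        if state == FINAL_STATE:
            feedback = "all LEDs flash: task finished"      # feedback (5)
        else:
            feedback = "green flash: right action"          # feedback (2)
    else:
        # Wrong action: report the mistake and hint at an alternative.
        expected = [a for (s, a) in PLAN if s == state]
        feedback = ("red flash: wrong action; "             # feedback (3)
                    "green hint: " + ", ".join(expected))   # feedback (2)
    return state, feedback

state = "unassembled"
state, fb = step(state, "insert screws")       # wrong action first
state, fb = step(state, "align board A to B")  # then the right one
print(state)  # -> frame aligned
```

Because a wrong action leaves the state unchanged and detaching boards can be modeled as transitions back to earlier states, the same table supports the physical undo/redo that the paper argues fosters explorability.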

156
i-wall: Personalizing a Wall as an Information Environment
with a Cellular Phone Device
Yu Tanaka, Keita Ushida, Takeshi Naemura, Hiroshi Harashima
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
+81 3 5841 6781
{yu, ushdia, naemura, hiro}@hc.t.u-tokyo.ac.jp

Yoshihiro Shimada
NTT Cyber Space Laboratories, NTT Corporation
1-1 Hikari-no-Oka, Yokosuka-shi, Kanagawa 239-0847, Japan
+81 468 59 3114
[email protected]

ABSTRACT
The authors' aim in this paper is to attach information environments to streets in a natural way. As an attempt at this aim, we utilize walls in places where people pass by, and have implemented i-wall (an intelligent/information/interactive wall). i-wall is normally an ordinary wall, but when a user approaches, it provides an information environment to him/her. The services of i-wall include not only those of a conventional information terminal but also location-specific information services (e.g. events which occurred in front of the i-wall system). i-wall employs web-accessible cellular phone devices for its interface, aiming for an easy-to-use environment built on the familiar devices that users carry with them.
Keywords
Information environment, Wall, Cellular phone device, User interface
INTRODUCTION
Amazing progress in mobile infrastructure has enabled us to access information anytime and anywhere, and augmented reality and interface technologies have enabled us to build information environments in the real world (daily life space). We can see the stream toward the "ubiquitous computing" [4] era. Against this background, we aim to build easy-to-use information environments in public spaces where people pass by. To achieve this concept, we pick various existing objects and enhance them. In this paper, the object is a wall, since walls that could be utilized efficiently are found everywhere. We name the enhanced wall for an information environment i-wall.
Suppose that a person coming in front of the wall can occupy a part of it as an information environment for him/her. Concretely, an information window opens in front of him/her and he/she can interact with it. The interface for interaction is a cellular phone device, which is used like a remote controller. Cellular phone devices are very common and each user knows how to use his/her own; thus, they are well suited as a clear interface for systems that many people use.
The remarkable point of i-wall's services is location-specificity. For example, i-wall records events in front of the wall in its memory and users can play them back. Through these services, i-wall creates a value-added space as well as an information-accessible environment.
EXPERIMENTAL SYSTEM
Implementation
The overview of the experimental system is shown in Fig. 1. The system consists of: (1) a video projector to give the wall a display function, (2) a camera to capture the scene in front of i-wall, (3) a position sensor, and (4) PCs.
Fig. 1: Overview of the experimental system (wall, projector, camera, optional camera, position sensor, access point, user's cellular phone, PC 1, PC 2 and web server)
For ease of attaching it to an existing wall, we mount the video projector on the ceiling (video projectors are popular devices for adding displays to existing environments, e.g. [1]). A white wall is suitable for projecting images. A camera is placed on the projector to capture and record the scenes; the capturing interval is 30 seconds in the current implementation. As a position sensor, the authors used infoFloor [5], an RFID-based system.
When a user comes in front of the i-wall system, the user's position is detected (shoes with an RFID reader are needed in the current implementation) and a window appears on the wall (Fig. 2). The window follows the user, and its size becomes larger as he/she gets closer to (occupies more space on) the wall. When two users come by, an individual window appears in front of each user, and they can share the wall.
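As a rough illustration of the window behaviour just described (the window follows the user along the wall and grows as he/she approaches it), the position-to-window mapping might be sketched as follows. The linear mapping and all constants are invented for illustration; the paper does not give the actual geometry or parameter values.

```python
# Hypothetical sketch of i-wall's window placement: the window is
# centered on the user's position along the wall, and grows as the
# user approaches the wall. All constants are invented.

def window_for_user(x_along_wall, dist_to_wall,
                    min_w=0.4, max_w=1.6, max_dist=3.0):
    """Return (center_x, width) in metres for one user's window."""
    # Clamp the sensed distance, then map "closer" to "larger".
    d = min(max(dist_to_wall, 0.0), max_dist)
    width = max_w - (max_w - min_w) * (d / max_dist)
    return (x_along_wall, width)

# Two users sharing the wall each get an individual window.
users = [(1.0, 2.5), (3.2, 0.5)]   # (x along wall, distance to wall)
windows = [window_for_user(x, d) for x, d in users]
print(windows)
```

In this sketch a second user simply yields a second window, mirroring the paper's observation that two users can share the wall, each with an individual window.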

Fig. 2: An information window appears in front of the user
Windows can be operated with the users' web-accessible cellular phones. They access the web page for operation (Fig. 3) and submit data through CGI. Available operations are: window operation (size, position and on/off), application switching, and specifying the time to play back (as mentioned below). The page is designed with symbols for intuitive operation and is configured to fit the display size of cellular phones.
Fig. 3: The web page for operation
Information Services
The distinguishing service of i-wall is its ability to play back the events which took place in front of it, at the very spot where they happened. The reproduction achieved this way has a reality that comes from the situation/location.
As explained above, i-wall records scenes in front of the wall. Users can play back a scene by specifying the time they want to see, or by flipping through scenes like a slide show (the operation is done with their cellular phones). The reproduced scene is displayed in each information window. This service suggests an application as follows: as seen in Fig. 4 (left), a person was waiting in front of i-wall; another person comes later, plays back the past scene (Fig. 4, right), and finds out that his friend had waited and gone.
Or, you can stamp your activity on the place, which makes the place special for you. You can reproduce your stamped trace at the very spot anytime you want.
Of course, the i-wall system can also be used as a conventional information terminal. By switching applications, you can view information stored in the system or from the network, and watch the remote view from the optional camera (or over the network). Note that these images are basically displayed in the user's window, not on the whole wall. To see the whole image, he/she has to move along the wall, which gives the user a feeling as if he/she is seeking treasures on the wall.
Fig. 4: The past scene is reproduced on the very spot
CONCLUSION AND FUTURE WORK
The concept and a prototype system of i-wall (an enhanced wall for an information environment) have been described here. We are planning to improve the implementation (hardware and software) and install the i-wall system in a public space for further evaluation. Cellular phone devices are becoming more and more sophisticated, and exploring new interfaces with them is also an interesting topic.
RELATED WORK
i-wall is an attempt to bring information environments to streets. Ubiquitous Display [2] and ActivePoster [3] are similar, especially because they use mobile terminals.
In Ubiquitous Display, users can control large displays connected to the broadband network to obtain services that a mobile terminal cannot handle because of its limited capacity. In contrast, i-wall's services deal with the real world and people's activity there, rather than information on the network. ActivePoster obtains personal information from users' mobile terminals and actively displays personalized information (advertisements). i-wall does not necessarily act on its own, because it is part of the environment and is activated only when users want to use it.
ACKNOWLEDGMENTS
We thank Mr. Kaoru Sugita for technical advice on the implementation of the i-wall system.
REFERENCES
1. C. Pinhanez. The Everywhere Displays Projector: A Device to Create Ubiquitous Graphical Interfaces. Proc. of UbiComp 2001, 315-331, 2001.
2. K. Aizawa et al. Ubiquitous Display Controlled by Mobile Terminals. IEICE Trans. E85-B, 10, 2214-2217, 2002.
3. K. Suzuki et al. An Effective Advertisement Using Active Posters. IPSJ Tech. Rep., HI-92-11, 79-86, 2001.
4. M. Weiser. Some Computer Science Issues in Ubiquitous Computing. Commun. ACM 36, 7, 75-84, 1993.
5. Y. Shimada et al. A Position Sensing System with Contactless IC. Tech. Rep. of IEICE, PRMU2000-48, 23-28, 2000.
Healthy Cities Ambient Displays
Morgan Ames1, Chinmayi Bettadapur1, Anind Dey1,2, Jennifer Mankoff1
1 Group for User Interface Research, EECS Dept., University of California, Berkeley
2 Intel Research, Berkeley, Intel Corporation
ambient@guir.berkeley.edu
ABSTRACT
The Healthy Cities project addresses the lack of publicly available information about city health. Through interviews and surveys of Berkeley residents, we have found that city health includes a wide variety of economic, environmental, and social indicators. We are building public ambient displays that make city health more visible and encourage change by highlighting the value of individual contributions.
Keywords: ambient displays, peripheral displays, city health, sustainability indicators
INTRODUCTION
City datasets such as air quality, crime rates, energy usage, or recycling amounts can be powerful indicators of city health; however, it is often difficult for city residents to access or interpret this information. Despite the wealth of information collected about various aspects of city health, residents know little about it or about how they can make a noticeable contribution, leading to feelings of frustration or helplessness. The Healthy Cities project aims to make city health information more publicly visible by displaying easily interpretable health indicators in public places such as transit hubs, shopping districts, or public buildings. We hypothesize that this information will empower residents to improve city health by giving them a better sense of what they can do and by making them feel that their actions are visible.
Healthy Cities
We have chosen to display city health information in the form of an ambient display, which provides a continuous stream of information in a simple format that can be interpreted at a glance. Because our target locations are places where people will be passing through and will have only peripheral awareness of their surroundings, the easily readable nature of ambient displays lends itself well to these locations. We have also noted that few ambient displays have been built for the general public, and we were interested in exploring this design space.
Ambient Displays
Ambient displays are devices that peripherally provide a continuous stream of information. They show non-critical information in a simple, intuitive, and aesthetic way, reducing the cognitive load on users. Researchers at PARC, the M.I.T. Media Lab, Carnegie Mellon University, Georgia Tech, the Viktoria Institute, and elsewhere have designed various displays, including a "dangling string" that twitches with network activity [1], a water lamp that casts rippling shadows, pinwheels that provide awareness through sound and air flow [2], a pixellated ambient display [3], a "Digital Family Portrait" that gives peripheral awareness of remote family members [4], and informative art pieces [5].
METHOD
We began our investigation of city health by conducting in-depth, exploratory interviews of six East Bay residents. Participants were recruited through flyers in grocery stores and posts on Craigslist (an online community forum). We followed up the interviews with a culture probe [6], consisting of four postcards that encouraged our six participants to provide additional details on their day-to-day perceptions of city health.
Responses were categorized into broad topics, which were used to create a follow-up survey. The survey included 33 yes-no and Likert-scale questions and ten written-response questions, asking about the importance of various indicators of city health. Questions were divided into ten groups: neighbors and safety, diversity, environment and conservation, public events, city history, volunteerism, shopping and economics, schools, transportation, and individual health. Surveys were distributed to over 300 people in post offices and farmers' markets in Berkeley, and a link to an online survey was published on Craigslist.
RESULTS
The interviews and surveys showed us that city health includes myriad indicators such as public school conditions, air quality, effective minimum wage, maintenance of houses and streets, unemployment, individual health, racial diversity, pedestrians, public events, and more. Of these indicators, the ones that are quantitative and updated often are more suitable for public ambient displays.
Interviews
The interviews and culture probe postcards gave us a qualitative sense of city health. The participants were two women and four men, with ages ranging from 25 to 55 years. Three were Caucasian, and the other three were Lebanese, Asian and Latina, respectively. Although our participants had diverse definitions of city health, most or all mentioned certain indicators: the number of locally based businesses in the community (all 6 participants), the number of parks or amount of green space (5), diversity (5), uniqueness (5), safety and poverty (4), pedestrians (4), and public events (4). These gave us a sense of areas to cover in our survey.

Surveys
145 residents of Berkeley and nearby Oakland, El Cerrito, and Richmond completed the survey, 95 from in-person recruiting and 50 online. Of these, 90 were female and 52 male, and the ethnic and income distribution was very similar to Berkeley's 2000 census data, suggesting that we succeeded in getting a uniform sample by ethnicity and income, though not by gender.
In our analysis of the survey, we found that thirteen indicators received average ratings of 4.0 or above out of 5 in terms of their importance to city health (5 being "very important"). All of these had modes of 5. These indicators are summarized in Table 1.
Table 1. Indicators that received average ratings of at least 4 out of 5 in importance to city health.
Displays
While all of these indicators could be used to develop interesting displays, the two indicators we chose to focus on first are electricity usage, as part of resource management, and recycling. Although these were not brought up in our interviews, we chose them because they were important to our survey takers (a much larger sample than our interview pool) and because they are quantitative, measurable, constrained, and frequently updated, and have accessible data sources. These characteristics are important because the display should be credible and should change noticeably for people who will see it on a daily basis.
Unfortunately, we could not gain access to citywide data for either source, so we have focused on the activity in one recycling bin as a microcosm of city recycling, and on light pollution levels at night as an estimate of electricity usage.
We have designed a preliminary recycling display, which will use load cells to sense a can thrown into a particular recycling bin. A visual meter rises when the weight in the bin changes, to give users a sense of what their contribution was worth. The interface runs on a Sony Clié, and currently has been made to work in a simulated environment where the addition of a can is simulated by clicking a button on the touch-screen.
We have also designed a preliminary electricity display, which uses computer vision to sense the amount of light pollution given off by lights in the city of Berkeley at night. Multiple cameras are used to collect aerial views of the city every few minutes. These images are analyzed for brightness characteristics and aggregated across cameras. The resulting brightness information is overlaid on a map of Berkeley and presented on a screen to users.
FUTURE WORK
We plan to continue design on our two display prototypes, and possibly design more displays for other city health indicators such as air quality or public events. These displays should be evaluated for their effects on public awareness and action. If successful, Healthy Cities displays could be extended to other cities to raise awareness of city health.
ACKNOWLEDGMENTS
We would like to thank Joseph McCarthy, Greg Niemeyer, David Gibson, and Timothy Brooke for their feedback and suggestions.
REFERENCES
1. M. Weiser, J. S. Brown. Designing Calm Technology. http://www.ubiq.com/weiser/calmtech/calmtech.htm, December 1995.
2. H. Ishii, B. Ullmer. Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. Proceedings of the Conference on Human Factors in Computing Systems, pages 234-241. ACM Press, March 1997.
3. J. Heiner, S. Hudson, K. Tanaka. The Information Percolator: Ambient Information Display in a Decorative Object. ACM Symposium on User Interface Software and Technology, pages 141-148. ACM Press, November 1999.
4. E. Mynatt, J. Rowan, A. Jacobs, S. Craighill. Digital Family Portraits: Supporting Peace of Mind for Extended Family Members. Proceedings of CHI 2001, pages 333-340. ACM Press, March 2001.
5. J. Redström, T. Skog, L. Hallnäs. Informative Art: Using Amplified Artworks as Information Displays. Proceedings of DARE 2000, pages 103-114. ACM Press, April 2000.
6. W. Gaver, T. Dunne, E. Pacenti. Cultural Probes. Interactions, pages 21-29. ACM Press, Jan/Feb 1999.
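As a rough illustration of the electricity display's processing described under "Displays" (aerial views analyzed for brightness and aggregated across cameras), the aggregation step might be sketched as follows. The nested-list image representation and the simple per-camera mean are assumptions made for illustration; the abstract does not specify the actual vision algorithms.

```python
# Hypothetical sketch of the light-pollution analysis: each camera
# yields a grayscale aerial image (here a list of rows of 0-255
# values); we compute a mean brightness per camera and then
# aggregate the readings across all cameras.

def mean_brightness(image):
    """Average pixel value of one grayscale image."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def aggregate_brightness(images):
    """Combine per-camera readings into one citywide estimate."""
    return sum(mean_brightness(img) for img in images) / len(images)

cam_a = [[10, 20], [30, 40]]       # a dim view
cam_b = [[200, 220], [240, 180]]   # a bright view
print(aggregate_brightness([cam_a, cam_b]))
```

A real deployment would presumably map per-region brightness onto the Berkeley map rather than collapsing everything to a single number, but the flow — per-image analysis, then cross-camera aggregation — is the same.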

LaughingLily: Using a Flower as a Real World Information
Display
Stavros Antifakos and Bernt Schiele
ETH Zurich, Switzerland
{antifakos, schiele}@inf.ethz.ch

ABSTRACT
Ambient displays and calm technology, as termed by Weiser and Brown [5], are key techniques to help ubiquitous computing applications enter our everyday life. Here we present an ambient display in the form of a physical object that can be moved around in physical space, thus being highly adaptable to the user's needs. LaughingLily is an artificial lily enhanced with an electromechanical system that enables the flower to let its petals hang or open them up to full bloom. Using the flower as a meeting mediator, we show how well it can be integrated into the real world and how easily it is accepted by potential users.
Keywords
Calm Technology, Ambient Display, Real World Display
INTRODUCTION
Many ubiquitous computing applications exist which use elaborate sensing of their environment to adapt to local context, but they mostly still rely on classic output methods such as computer displays or projection screens. Many researchers have worked on integrating output into the physical world by using classic displays in different ways [1,2,3]. Experience, however, shows that classic displays often become the main focus of attention, which can distract the user from his activity. Conversely, displays hanging on the wall as in [1] can seemingly vanish into the periphery if someone is looking the opposite way.
Weiser and Brown [5] proposed the concept of Calm Technology. Their goal is to design technology so that it is equally encalming and informative. To do so, they propose presenting information in the user's periphery. This makes it possible for the information to move out of the user's focus when it is not important, and to move to the center as soon as an important event occurs; Calm Technology aims to manage this transition easily. In the AmbientROOM project [6], the presence of distant people is represented with different peripheral displays: as soon as someone enters the distant but connected space, light reflections appear on the wall and water ripples start moving. By presenting information in the periphery and only moving it to the user's center of attention at specific events, these ambient displays represent a form of Calm Technology.
Besides being able to move between the periphery and the center of attention, Calm Technology can also enhance the user's peripheral reach by bringing more details into the periphery. Informative Art as presented by Holmquist et al. [1] refers to a set of paintings which display different kinds of information in a subtle way. The information displayed ranges from weather forecasts for distant cities to recent earthquake activity around the globe. By slowly adapting to information changes, the art pieces do not attract the user's attention. However, the user can bring an art piece to the center of his or her attention to extract some information.
Consider the example of a meeting, where information needs to be displayed unobtrusively. In order to be visible to all participants, the ambient display should be close to, if not in the center of, the physical space of the people and their activity. We propose to integrate the display into the physical world by actually using a real physical object as the display itself. This enables the display to be omnidirectional, and hence visible to all meeting participants. It should display peripheral information without distracting from the actual task. Finally, the aim should be a visually appealing object, since such an object is more likely to be accepted and even enjoyed by people.
This paper proposes such a display, named LaughingLily, where a physical object - here an artificial flower - becomes the ambient display itself. Depending on the information to be displayed, the flower can let its petals hang or open them up to full bloom (see Figure 1). By nature a flower is omnidirectional and can be perceived by all participants. Through careful design of the movements and blooming behavior of the flower, LaughingLily can display information without distracting.
Figure 1: LaughingLily opening her petals to full bloom.
MEDIATING BETWEEN MEETING PARTICIPANTS
Team meetings with a handful of participants can sometimes be very cumbersome. Do you remember your last meeting that got out of hand? When your colleagues kept arguing about the same thing over and over again? Conversely, there can be brainstorming sessions where nobody has any ideas and a paralyzing silence fills the room. To mediate between meeting participants we built a flower

that can droop its petals or show its bud in full bloom, thus representing a sad or a happy state. The flower stands in the middle of the meeting table and changes its stature depending on the surrounding sound. If nobody is talking, the flower lets its petals droop. If a conversation at an intermediate volume is going on, the flower moves towards full bloom. If an argument breaks out, the flower starts drooping again.
To be able to react to the audio activity of the participants, the flower is connected to a microphone. Using multiple directed microphones, each connected to an individual flower, the display can show which participants in the meeting are dominating the discussions and which have not spoken for some time. A similar effect can be achieved by placing two or three flowers on the table at different positions. This way, individual meeting participants are not directly exposed as too loud or too silent; rather, one side of the table is addressed as a whole.
LaughingLily - Implementation
LaughingLily is an artificial lily extended with an electromechanical system. The microphone on a Smart-Its [4] sensor board is used to capture an audio signal. The onboard processor (a PIC microcontroller) calculates the energy level of the signal, representing the loudness of the people speaking. To make the flower move, a servo motor is controlled directly from the sensor board. A shaft connecting the servo motor to a cup-shaped plastic part actuates the flower's petals. The whole system can be powered either by batteries (working for several days on 4 AA batteries) or directly from the mains.
First Impressions and Discussion
The first afternoon LaughingLily was standing on the coffee table in our hall, office colleagues made many comments about how sad or how pretty the flower looked. They soon learnt that when someone is talking in the environment, the flower starts lifting up her petals. After the novelty wore off, everybody continued with their everyday business: LaughingLily had become integrated into the physical space and had become a peripheral display.
We believe the quick integration of LaughingLily resulted mainly from having a physical object as the display itself. An object can be moved around in space and placed amidst people. In this way the display is adapted to the situation, instead of having the people adapt to the display.
As expected, due to the natural association between LaughingLily and a real flower, first experiments showed how people's emotions can be invoked. A drooping flower is naturally associated with sadness, whereas a flower in full bloom can trigger happiness to a certain extent. Exploiting these associations that people have with physical objects could be a powerful tool in interface design.
OTHER APPLICATIONS
Beyond the meeting application presented in the previous section, many more applications are imaginable. In office environments, LaughingLily could be used to warn computer users about repetitive strain injuries by letting the petals droop if someone has not had a break for a long time. Further, LaughingLily could display to co-workers how interruptible one is, depending on approaching deadlines, calendar information or e-mail load.
In the domestic environment, LaughingLily could act as a progress bar for simple procedures. For example, the flower could show how far along the washing machine is by slowly elevating its petals, or how long the cake has been in the oven; the petals would then simply start to droop again if the cake has been in for too long.
Finally, LaughingLily can display the interaction level between conference participants and the poster presenter at a conference such as UbiComp 2003.
CONCLUSIONS AND FUTURE WORK
In this paper we have presented LaughingLily - an ambient display embodied by a flower. Although we have yet to conduct a comprehensive user study, we have shown how well such an ambient display - in the form of a flower - can integrate into the environment. We believe that displaying feedback to the user in a physical object is key to making ubiquitous computing applications calmer and more suited to human needs.
Besides exploring LaughingLily's effects on meeting participants in a larger user study, we want to continue developing further physical feedback devices.
ACKNOWLEDGEMENTS
The Smart-Its project is funded in part by the Commission of the European Union under contract IST-2000-25428, and by the Swiss Federal Office for Education and Science (BBW 00.0281).
REFERENCES
1. Holmquist, L. E., and Skog, T. Informative Art: Information Visualization in Everyday Environments. In Proceedings of Graphite 2003.
2. Johanson, B., Fox, A., and Winograd, T. The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. In IEEE Pervasive Computing 1(2), 2002.
3. Pinhanez, C. The Everywhere Displays Projector: A Device to Create Ubiquitous Graphical Interfaces. In UbiComp 2001.
4. Smart-Its Project: http://www.smart-its.org/
5. Weiser, M., and Brown, J. S. Designing Calm Technology. December 1995. http://www.ubiq.com/hypertext/weiser/calmtech/calmtech.htm
6. Wisneski, C., Ishii, H., Dahley, A., Gorbet, M., Brave, S., Ullmer, B., and Yarin, P. Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information. In Proceedings of the International Workshop on Cooperative Buildings (CoBuild '98).
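As an illustrative sketch of the loudness-to-posture mapping described for the meeting scenario (silence and loud arguments droop the petals; conversation at intermediate volume opens them), the control logic might look as follows. The thresholds and the 0-1 "bloom" scale are invented for illustration; the abstract does not give the values computed on the PIC microcontroller.

```python
# Hypothetical sketch of LaughingLily's control logic: map an audio
# energy level to a petal position (0.0 = fully drooped, 1.0 = full
# bloom). Silence and shouting both droop; intermediate volume blooms.

SILENCE, TOO_LOUD = 0.1, 0.8   # invented energy thresholds

def petal_position(energy):
    if energy < SILENCE or energy > TOO_LOUD:
        return 0.0                      # droop: nobody talking, or an argument
    # Rise toward full bloom across the comfortable conversational range.
    return (energy - SILENCE) / (TOO_LOUD - SILENCE)

print(petal_position(0.05), petal_position(0.45), petal_position(0.9))
```

The resulting value would drive the servo angle; smoothing the signal over time (so the flower moves slowly and calmly) would be an obvious refinement in the spirit of Calm Technology.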

Habitat: Awareness of Life Rhythms over a Distance Using
Networked Furniture
Dipak Patel and Stefan Agamanolis
Human Connectedness group
Media Lab Europe, Sugar House Lane, Bellevue, Dublin 8, Ireland
{dipak, stefan}@medialabeurope.org

ABSTRACT
The demands of modern working life increasingly lead people to be separated from loved ones for prolonged periods of time. Habitat is a range of connected furniture for background awareness between distant partners in just such a situation. The project particularly focuses on conveying the patterns of daily routines and biorhythms that underlie our well-being, in order to provide a sense of reassurance and a context for communication between people in relationships.
Keywords
Awareness, biorhythms, limbic regulation, connectedness, networked furniture
INTRODUCTION
Intuition leads us to believe people have an innate desire to have an up-to-date understanding of the emotional and physiological state of loved ones. When two people form a close bond, awareness of each other is essential to convey feelings and needs to one another, and it ensures that the relationship can survive and flourish.
Awareness of a partner's activities and biorhythms, such as sleeping, eating, socialising and working, is useful, as these rhythms can be indicators of well-being - providing feelings of reassurance and connectedness, and stimulating comparison and synchronisation between the pair-bond. Knowledge of any deviation from regular patterns and cycles is of equal significance.
Today our lives are enriched by pervasive technology that conquers distance to such an extent that the anxiety of being apart is minimal. But a corollary of technology-mediated relationships is that people can still feel disconnected or not attuned to their partner, especially if they happen to be in different time zones. Old-fashioned methods of keeping in touch, such as letter writing, are accepted as conveying a greater sense of intimacy but lack the instantaneity we are now used to. The majority of modern communications technologies, such as telephones, text messaging and e-mail, cause untimely interruptions, can be in-contiguous, or can require a significant amount of effort to use while doing other tasks.
Habitat explores the potential of addressing these issues by using household furniture as a network of distributed ambient display appliances that centre on the capture and visualisation of daily rhythms, to convey a sense of awareness between partners separated by distance.
Fig. 1 - Habitat being used to link two distant partners.
BACKGROUND AND RELATED WORK
Research into the physiology of the brain is now starting to unravel some of the questions of why humans have such an affinity to one another [3]. The limbic brain, which was once believed only to co-ordinate sensations from the external world to internal organs, is now thought to be responsible for regulating our emotions. The mechanism for mutual exchange and internal adaptation between two mammals, whereby they become attuned to each other's internal states, is known as limbic resonance. This theory is developed further, proposing that the human nervous system is not autonomous or self-contained but an open-loop system that is continually rewired through intimacy with nearby attachment figures - a process of interactive stabilisation known as limbic regulation.
The field of psychobiology provides a body of experimental evidence on biorhythms and their impact on our well-being [2]. Our biorhythms and internal body clocks are affected by a number of external factors, most importantly the people we are bonded to.
Habitat also draws upon ideas from previous projects in ubiquitous computing that employ furniture and architecture as display devices, such as Ambient Displays [6], Roomware [5], Peek-a-Drawer [4] and The RemoteHome (exhibition - London/Berlin 2003).
TECHNOLOGY AND DESIGN GOALS
The initial range of Habitat appliances takes the form of two geographically separate, networked coffee tables.
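A minimal sketch of the tag-reader polling at the heart of each station, as described in the following paragraphs: the computer regularly reads the set of tag IDs currently on the table, diffs it against the previous poll, and sends added/removed events to the remote table. The function names and message format here are hypothetical; the abstract specifies only that a networked Linux computer polls the RFID reader regularly.

```python
# Hypothetical sketch of one Habitat station's polling step: compare the
# set of tag IDs currently on the table with the previous poll, and emit
# "added"/"removed" events destined for the remote coffee table.

def diff_tags(previous, current):
    """Return (added, removed) tag-ID sets between two polls."""
    return current - previous, previous - current

def poll_once(previous, read_tags, send):
    """One poll cycle: read the table, notify the remote side of changes."""
    current = set(read_tags())
    added, removed = diff_tags(previous, current)
    for tag in added:
        send({"event": "added", "tag": tag})
    for tag in removed:
        send({"event": "removed", "tag": tag})
    return current   # becomes `previous` for the next poll

# Simulated reader: a coffee mug has appeared, the keys have gone.
events = []
state = poll_once({"keys"}, lambda: ["mug"], events.append)
print(state, events)
```

The remote station would consume these events to render (and, on removal, gradually fade) the corresponding visualisation.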

Each station consists of a networked Linux computer, an RFID tag reader and a video projector.
Two people in a long-distance relationship (Figure 1) use the Habitat system as follows: when objects (with RFID tags embedded inside) are placed on the coffee table, they are sensed by the tag reader, which uniquely identifies each object. The tag reader is polled regularly by the computer to check whether any items have been added or removed. Such events cause messages to be sent to the coffee table in the remote partner's living space. The remote coffee table displays a corresponding representation of the other person's activity (Figure 2), and their overall daily cycle, on the surface of the table, using an appropriately mounted video projector. When items are removed, the displaying coffee table gradually fades that representation away.
Fig. 2 - A typical sequence within a visualisation
Habitat takes into consideration several design guidelines for creating connectedness applications [1]:
• The system should behave like an appliance that is always on and connected, to foster a sense of continuity - an open link between the users.
• Participating with Habitat should require no change in the user's normal behavior and should not alter the furniture's original use.
• The visualisations should be non-distracting, so that they can be viewed across the room and in the periphery of vision. The visualisations are designed to indicate the presence of the remote partner over a duration of time, so that observers are free to move around the living space and do not have to constantly watch the display.
• The system should express the notion of a digital wake: a visual construct that allows users to ascertain the history of previous interactions. When an activity ends, its representation gradually fades out but is never completely removed from the display. This gives users who return to their living space a
Privacy and trust issues are dealt with implicitly, as the furniture only connects into the personal space of a loved one - a person with whom a high level of trust is already shared. Users are also made well aware of the specific artifacts that trigger the communication between Habitat stations. Reciprocity is important for limbic regulation; since each station is a duplicate, awareness flows in both directions in a continual feedback loop.
CURRENT STATUS AND FUTURE DIRECTION
The first phase of Habitat is complete: a proof-of-concept demonstrator system which acts as a platform for conducting experiments and extending ideas. A range of visualisations that describe remote activities have been created. A forthcoming trial will be used to determine the effectiveness and appeal of these different visualisations to potential users.
Future versions of Habitat will concentrate on the capture of more complex routines and activities. We plan to use biomedical technologies in concert with the connected furniture platform to monitor users' body temperatures, heart rates and other well-known metrics, in order to track biorhythms with additional accuracy. Humans have several bodily rhythms that affect how we feel in addition to circadian rhythms, such as ultradian (~90 minutes), infradian (many days) and circannual (~1 year) rhythms. There are also several environmental factors that alter or reset body clocks (known as zeitgebers) that could be accounted for within visualisations.
The aim of this research is to determine whether we can successfully convey awareness of rhythms over a distance, and whether doing so can provide similar levels of reassurance and intimacy as the physical proximity of partners in a domestic setting.
The eventual goal would be to install suitably evolved iterations of the technology with many groups of people outside of the laboratory environment and assess their use in a study - prime candidates being people who endure separation from family and partners for prolonged periods of time, such as off-shore workers or military personnel.
REFERENCES
1. Agamanolis, S. "Designing Displays for Human Connectedness," in Kenton O'Hara et al., eds., Public and Situated Displays, Kluwer, 2003.
2. Bentley, E. Awareness: Biorhythms, Sleep and Dreaming. Routledge, 1999.
3. Lewis, T., Amini, F. and Lannon, R. A General Theory of Love. Vintage Books USA, 2001.
4. Siio, I. et al., "Peek-a-drawer…," in CHI'02 Extended Abstracts, ACM Press, 2002.
5. Streitz, N. et al., "Roomware: The Second Generation," in CHI'02 Extended Abstracts, ACM Press, 2002.
6. Wisneski, C. et al., "Ambient Displays….," Proc.
mechanism to interpret what took place while they were CoBuild’98, Springer, 1998.
absent.
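The object-sensing scheme described above (poll the tag reader regularly, diff the set of visible tags against the previous poll, and send add/remove events to the remote station) can be sketched in a few lines. This is a language-neutral Python illustration, not the Habitat implementation; the reader and network interfaces (`read_visible_tags`, `send_event`) are assumed stand-ins.

```python
# Sketch of the Habitat coffee-table sensing loop: the RFID reader is
# polled regularly, and any change in the set of visible tags becomes an
# add/remove event sent to the partner station. The reader and network
# callables here are illustrative stand-ins, not the real Habitat APIs.

def diff_tags(previous, current):
    """Return (added, removed) tag sets between two polls."""
    return current - previous, previous - current

def poll_once(previous, read_visible_tags, send_event):
    """One polling cycle: read tags, emit events, return the new tag set."""
    current = set(read_visible_tags())
    added, removed = diff_tags(previous, current)
    for tag in added:
        send_event({"type": "added", "tag": tag})
    for tag in removed:
        send_event({"type": "removed", "tag": tag})
    return current
```

The receiving station would map these events onto its projected visualisation, fading a representation out gradually when a "removed" event arrives, as the paper describes.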

164
Smart Home in Your Pocket
Louise Barkhuus
Department for Design and Use of IT
The IT University of Copenhagen
Glentevej 67, Copenhagen 2400, Denmark
[email protected]

Anna Vallgårda
Department of Computer Science
University of Copenhagen
Universitetsparken 1, Copenhagen 2100, Denmark
[email protected]

ABSTRACT
In this poster we present HYP, an application that enables a mobile phone user to create his own context-aware services for his smart home. By setting criteria tailored for the individual user, HYP can for example warn the inhabitants of a house if the TV is on when no one is watching. We developed HYP in J2ME, making it possible to run on any Java enabled handheld device. Future work includes attaching it to a real smart home, in order to test the actual employment of the application.

Keywords
Context-aware computing, smart homes, handheld devices

INTRODUCTION
The seamless interaction between a house and its inhabitants that context-aware homes strive for is shown to be difficult to achieve [6]. The advantages of a home with functions that adapt and assist according to sensor measures seem numerous; however, people have very different habits and ways of leading everyday life, and applications that are developed for one lifestyle therefore might not work for another. For example, routing phone calls to the room where the receiver is present might work in a busy nuclear family, but for the elderly couple, who both enjoy getting calls from their grown children (meaning the call is not directed at one person, but both), the function might not be optimal or even relevant.

In this project we approach the problem by suggesting a new application, which enables people to create their own context-aware applications for a sensor saturated home. The user defines the sensor measures (criteria) that should be taken into account when performing a specific action; the action then depends on user specified criteria, making the applications more flexible and tailored, rather than having a programmer specify each application. We have developed a prototype of the system that runs on a handheld device, but it is still not attached to a real smart home or any sensors. We first present the HYP application and examples of functions the user can develop; second, we review related work. Finally, we discuss the implications, conclude and suggest future research.

THE HYP APPLICATION
The HYP application is developed in Java, using its micro-edition API [3] in order to make it run on a handheld device with limited processing power and memory. It is in its development phase and therefore still only a prototype with no back-end so far; eventually the goal is to connect it to a sensor equipped (test) home in order to make further user testing. The core concepts of HYP are the actions and the conditions, as seen in figure 1. An action is a single action, for example 'turn on coffee maker' or 'turn off TV'. The conditions are the more complex user specified criteria that make up one action. Conditions can for example be 'light on in bedroom' or 'motion detected in bathroom'. The conditions for each action are all defined in an 'and'-aggregation. The conditions can be wrapped in a timer, so the user can specify a specific time frame for the condition, e.g. 'for at least 10 minutes'; however, a condition can be a time interval in itself, meaning that all conditions are restricted to a specified time frame, for example 6-9 PM. In order to provide more insight into the use of HYP we provide three examples of sub-applications.

Figure 1: HYP model of concepts (Action, Condition, Time interval)

Examples of Tailored Functions
The first example function alerts the user when the next bus will leave from the closest bus stop. The user models this from the criteria that the fridge door opens between 7 and 9 in the morning, while the light in the bathroom has been on for more than 10 minutes. This particular user knows that he drinks a glass of milk in the morning after showering, which always takes more than ten minutes; therefore the bus schedule is relevant at this point in his schedule, as opposed to a fixed time.

In another function, the user defines the action of turning on the record button on the VCR. The user defines the five criteria as: there are no lights on in the living room; the cell phone is not in its cradle; the front door has not been opened in the last 10 minutes; and the TV is not on. If it is Sunday night, 9pm, the VCR turns on and tapes the latest episode of the user's favorite show: Sex and the City.
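The model behind these examples (conditions combined in an 'and'-aggregation, each optionally carrying a minimum-duration timer or a time-of-day restriction) can be sketched as follows. HYP itself is written in J2ME; this is a language-neutral Python sketch, and the sensor names and state format are illustrative assumptions.

```python
# Sketch of HYP's rule model: an action fires only when every condition
# in its 'and'-aggregation holds. A condition may require a minimum
# duration ("for at least N minutes") or be restricted to a time-of-day
# interval. Sensor names and the state encoding are illustrative.

def condition_holds(cond, state, now_hour):
    """state maps sensor name -> minutes it has been active (0 = inactive)."""
    lo, hi = cond.get("between", (0, 24))
    if not (lo <= now_hour < hi):
        return False
    active_minutes = state.get(cond["sensor"], 0)
    return active_minutes >= cond.get("for_at_least", 1)

def action_fires(rule, state, now_hour):
    """All conditions are combined in an 'and'-aggregation."""
    return all(condition_holds(c, state, now_hour) for c in rule["conditions"])

# The bus-alert example: fridge door opened between 7 and 9 in the
# morning while the bathroom light has been on for more than 10 minutes.
bus_alert = {
    "action": "show next bus departure",
    "conditions": [
        {"sensor": "fridge door open", "between": (7, 9)},
        {"sensor": "bathroom light on", "for_at_least": 10},
    ],
}
```

At 8 AM with the fridge open and the bathroom light on for 12 minutes this rule fires; outside the 7-9 window, or with the light on for only 5 minutes, it does not.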

165
The final example alerts the user if he is about to forget his cell phone in the morning. If he opens the front door between 7 and 9 in the morning and the cell phone is still located in its cradle, the cell phone alerts the user with a loud beep that he has not taken it with him.

Figure 2: Screen shots from HYP: actions and conditions

RELATED WORK
A fair amount of research has focused on developing smart homes; one example is The Aware Home at Georgia Institute of Technology [5]. Here, the purpose was to make the sensors learn about the users' habits to facilitate the development of human-centered applications for a rich sensor infrastructure. MIT's House_n is built with another goal: to teach and motivate the user to take control in a sensor augmented house instead of having the smart house override the user's actions with inappropriate behavior [4]. In our view, the goal of a smart home is to assist individual inhabitants with everyday tasks by tailoring functions to their habits and behavior.

Other relevant work includes context-aware applications for handheld units such as the Tour Guide and the Cyber Guide [2,1]. These applications change their content according to the surrounding context, for example location, time and identity of the user. Finally, iCAP is a system that enables users to create context-aware applications [7]. But where iCAP focuses on end-user programming on a desktop computer, HYP goes all the way and enables mobile users to define their own criteria 'on the go'.

IMPLICATIONS OF THE HYP APPROACH
While HYP is in essence still an outer layer of the prototype, it illustrates a new way of specifying a smart home. It is our goal to empower the users by giving them simple options for dynamic functions. By making it easy to revise existing sub-applications, the chance that users will reject context-aware functions is diminished, because the user can change it to better fit his needs. However, in order to keep HYP simple, the options provided are limited. When creating a timer, for example, the user is left with few selections (see figure 2), which in some cases might not satisfy the user's needs. It is likely that the user wishes to create applications that are not possible and finds that it is difficult to define the right criteria for a specific action. Most people lead irregular lives, resulting in exceptions that might initiate the action at the wrong time. However, the HYP approach makes users understand why the system acts like it does, because they specify the conditions themselves.

CONCLUSIONS AND FUTURE WORK
We have presented our prototype application HYP that illustrates how users can create their own smart home functions on a handheld device. The concept of HYP emphasizes the user's individual and unique lifestyle by letting him define his own criteria. This approach lets the user stay in control of the technology and thereby prevents the user from abandoning the application due to irrelevance. It is our belief that this type of context-aware application will, in many environments, be preferred over more autonomous ones, which, in their effort to work smoothly and transparently, leave little free choice to the users.

Since we have not performed any formal user evaluation of HYP, this is the next step. Testing how users interact with it and seeing if they are able to create desirable applications is essential for further deployment. Our second goal is to connect the HYP prototype to a sensor-equipped smart home in order to develop the application further and get real user feedback. Finally, it should be considered which other environments would likely benefit from a similar approach.

REFERENCES
1. Abowd, G. et al. (1997): Cyberguide: a mobile context-aware tour guide, Wireless Networks 3(5):421–33.
2. Cheverst, K. et al. (2000): Developing a context-aware electronic tourist guide: some issues and experiences, Proc. of CHI, pp. 17–24.
3. Java 2 Platform, Micro Edition, sun.java.com/j2me.
4. Intille, S. S. (2002): Designing a Home of the Future, Pervasive Computing, April–June, pp. 80–86.
5. Kidd, C. et al. (1999): The Aware Home: A Living Laboratory for Ubiquitous Computing Research, Proc. of the 2nd Int. Workshop on Cooperative Buildings, Integrating Information, Organization, and Architecture, pp. 191–198.
6. Meyer, S. and Rakotonirainy, A. (2003): A Survey of Research on Context-Aware Homes, Proc. of the Australasian Information Security Workshop Conference on ACSW Frontiers 2003, pp. 159–168.
7. Sohn, T. and Dey, A.K. (2003): iCAP: an informal tool for interactive prototyping of context-aware applications, Proc. of CHI '03 Extended Abstracts, pp. 974–97.

166
SiteView: Tangibly Programming Active Environments
with Predictive Visualization
Chris Beckmann
Computer Science Division
University of California at Berkeley
[email protected]

Anind K. Dey
Intel Research, Berkeley
Intel Corporation
[email protected]

ABSTRACT
Active environments – those with sensing and actuation capabilities – are often difficult for end users to control. We describe SiteView, a system for creating and viewing automation control rules. SiteView has an intuitive tangible interaction method for creating control rules and enhances user understanding of the system by appropriately exposing internal state. SiteView also supports users' visualization of the active environment through a photographic display keyed to control rule conditions.

Keywords
Tangible interaction, end-user programming

INTRODUCTION
Research in active environments – those with sensing and actuation capabilities – usually involves sophisticated technology, but usable active environments need not be complicated for end users. Much automation can be achieved with simple sensing and actuation, and, indeed, commodity toolkits and protocols such as X10 exist for this purpose [7]. However, use of everyday automation tools is generally limited to hobbyists and those comfortable with traditional programming techniques. We argue that this can be seen as chiefly a problem of user interface. Specifically, commodity automation toolkits do not correspond spatially to how users interact with their environments and do not offer feedback about the internal state of the control system [7,8]. Furthermore, these toolkits lack situated visualization – they do not show the user the future effects of her rules at the time she is programming them.

SiteView addresses these issues by lowering the learning curve for environment automation, while maintaining enough logical expressiveness to remain useful. SiteView programs consist of rules with a simple conjunctive predicate and one or more consequent actions. Users create rules by manipulating tangible interactors representing sensed conditions and automated actions within a world-in-miniature (WIM) model representing the active environment. To enhance transparency, our interface offers explicit user feedback during programming. It shows what control rules are applicable given the user's conditions and it provides an image of what the environment will look like under those conditions and actions specified by the user.

Our project draws inspiration from a broad base of previous work. While Mozer's neural network house investigated making home automation usable, the behavior and internal state of his machine learning system is essentially opaque to the user [5]. The Accord toolkit also supports home automation, and encourages explicit programming by direct manipulation, but it reverts to a simplified GUI as a programming environment [1]. To ease programming in SiteView, we built tangible interactors, drawing upon previous tangible programming work in Gorbet et al.'s Triangles and Blackwell's Media Cubes [4,3]. The world-in-miniature ties our tangible interactors to the physical environment, and originated as a technique by Stoakley et al. for navigation of virtual reality spaces [6]. Here, we describe the design and implementation of a system for intuitive end-user programming of active environments, and motivate the design with a use scenario.

SYSTEM DESIGN
Our tangible interaction interface makes end-user programming of automated environments simpler by leveraging spatial and visual correspondences between the control interface and the automated environment, and by enabling seamless visualization of the active environment and the automation rules. As a proof of concept, we implemented our interface design to provide control for a new laboratory space used by our research group. There are five main components of the interface's physical design. The first three support the act of rule creation and the last two support feedback about those rules:

Interactors are physical objects that logically correspond to rule conditions, such as afternoon, and automated actions, such as light on. Tangible interfaces allow for collaborative and two-handed interaction, require less dexterity than traditional input, and better preserve spatial relationships between virtual objects and their real-world counterparts [4].

The world-in-miniature is a small-scale representation of the environment, used as an interaction space for the user to spatially specify automation actions. The WIM is a physical artifact with a spatial and logical correspondence to the user's view of the environment, unlike GUIs and scripting languages, enhancing intuition about programming actions.

The condition composer is an area that senses and structures the user's specification of rule conditions. Conditions are represented as discrete tokens.

The environment display shows what the environment will look like when a rule is activated. The environment display shows photographs of what the active environment will look like for a given set of user-specified conditions

167
(weather, day, and time). The user can also use the environment display to simply check automation settings for a particular set of conditions, including the current ones.

The rules display shows the rule as it is created and shows other rules applicable for the given set of conditions. The rules display provides the user with explicit feedback about the internal state of the control system, which supports a more transparent user understanding of system behavior. As rules are being created, SiteView displays them as English-like sentences. SiteView also displays the relevant set of existing rules as the user specifies predicate conditions.

Figure 1: The condition composer is at front; the large screen is the environment display. The laptop is the rules display, and there is a lightbulb interactor on the WIM floorplan. The configuration shown creates a rule to turn on the north lamp on rainy Monday mornings.

USE SCENARIO
As an illustration, consider the following scenario. On a rainy morning, Dana finds her workspace too dark and too cold and wants to adjust the lighting and room temperature. She consults the SiteView rules display, which, by default, shows the rules active in the current situation. She notes that the active control rule handles weekday mornings in general, but not rainy weekday mornings in particular. Rather than manually changing the temperature and light conditions using the available thermostat and light switches, Dana uses SiteView to add a new rule, so the active environment will behave appropriately now and in the future. First, she places the interactors signifying rain, morning and weekdays in the appropriate slots on the condition composer (left, center and right, respectively, in Figure 1). The rules display (far left of Figure 1) shows all applicable control rules for those conditions, including (the currently active) one for a more general condition, weekday mornings, and the visualization display shows an image of the office similar to the office's current appearance. Next, she places the light on interactor on the portion of the WIM signifying her floor lamp. Now that Dana has specified a valid rule – both a condition and an action – the rules display shows it as an English-like sentence: if it is raining and a weekday and morning, then turn on the north lamp. She sets the thermostat interactor to a warmer temperature and places it in the WIM. The rules display now shows the new rule, which handles light and temperature on rainy mornings, along with the original set of rules. The environment display reflects her new rule, and shows her office lit by her floor lamp on rainy mornings. SiteView then turns on the floor lamp and adjusts the temperature.

EVALUATION
An initial user study of SiteView demonstrated that end users could create rules that control their environment. The tangible interface appears to be intuitive and the environmental display and rules display are useful for helping users create rules and view the effects of these rules. Overall, the system was usable for generating a variety of rules, each using one to three rule conditions, and each triggering one or both of the lights and the thermostat settings. The system also made the effects of composing multiple active rules transparent. For example, a rule that turned on the lights in the evenings was understood to be combined with another rule that set the temperature at 55 degrees on overcast weekends to turn on the lights and set the temperature to 55 on a Saturday evening. One confusion that arose during the study was the duration of time-based rule conditions. While the use of natural words for time-of-day appeared transparent, one user was unsure if a rule that specified turning down the heat at 8 PM would still be in effect at 8:15 PM or later. Our future work includes further user evaluation of SiteView to determine the types of tasks it is appropriate for, providing support for disjunctive relationships, and exploring how the tangible nature of SiteView can be used to constrain user input for novices.

REFERENCES
1. Åkesson, K-P. et al. "A toolkit for user re-configuration of ubiquitous domestic environments". Companion to UIST 2002. (2002).
2. Blackwell, A.F. and Hague, R. "AutoHAN: An architecture for programming the home". IEEE Symposia on Human-Centric Computing Languages and Environments. pp. 150-157. (2001).
3. Gorbet, M.G. et al. "Triangles: Tangible interfaces for manipulation and exploration of digital information topography". CHI '98. pp. 49-56. (1998).
4. Ishii, H., and Ullmer, B. "Tangible bits: towards seamless interfaces between people, bits and atoms". CHI '97. pp. 234-241. (1997).
5. Mozer, M.C. "The Neural Network House: An environment that adapts to its inhabitants". AAAI Symp. on Intelligent Environments. pp. 110-114. (1998).
6. Stoakley, R. et al. "Virtual reality on a WIM: Interactive worlds in miniature". CHI '95. pp. 265-272. (1995).
7. Smarthome X10 Kit. http://www.smarthome.com/
8. Home Director. http://www.homedirector.com
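SiteView's rule structure (a conjunctive predicate over condition tokens, one or more consequent actions) and its rules display, which lists every rule applicable to the current conditions, can be sketched as follows. This is an illustrative Python sketch, not the SiteView implementation; the rule set mirrors the composition example from the evaluation (lights on in the evenings, thermostat at 55 on overcast weekends).

```python
# Sketch of SiteView's rule model: each rule has a conjunctive predicate
# (a set of condition tokens) and one or more actions. A rule applies
# whenever all of its conditions are among the currently sensed ones, so
# the rules display can list every applicable rule and the combined
# effect of composing them. Token names and rules are illustrative.

def applicable_rules(rules, current_conditions):
    """Rules whose whole conjunctive predicate is currently satisfied."""
    current = set(current_conditions)
    return [r for r in rules if r["conditions"] <= current]

def combined_actions(rules, current_conditions):
    """Merged effect of all active rules, as shown on the display."""
    actions = {}
    for rule in applicable_rules(rules, current_conditions):
        actions.update(rule["actions"])
    return actions

RULES = [
    {"conditions": {"evening"}, "actions": {"lights": "on"}},
    {"conditions": {"overcast", "weekend"}, "actions": {"thermostat": 55}},
]
```

On an overcast Saturday evening both rules apply, so the combined effect is lights on and the temperature set to 55 - the composition behavior the study participants were able to predict.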

168
Towards Ubiquitous End-User Programming
Rob Hague Peter Robinson Alan Blackwell
University of Cambridge Computer Laboratory
William Gates Building
15 JJ Thomson Avenue
Cambridge CB3 0FD UK
{Rob.Hague, Peter.Robinson, Alan.Blackwell}@cl.cam.ac.uk

INTRODUCTION
We believe that end-user programming capabilities are an essential part of any flexible ubiquitous computing system. When these are well designed, and tightly integrated with the system as a whole, they allow users to add functionality that was not, and in many cases could not have been, anticipated by the system's designers. This enables users to benefit fully from the possibilities ubiquitous computing offers. However, end-user programming in a ubiquitous computing context faces several novel issues, in particular the communication channels available and the diversity of the user population.

We have taken as the domain for our research the domestic environment. There is already a range of programmable microprocessor-controlled devices routinely found in the home, ranging from alarm clocks, security systems and boiler controls to VCRs and personal video recorders such as TiVo™. Several of these devices already pose a notorious usability problem for large segments of the population [1]. As home appliances start to interact with each other, the complexities of end-user programming and customisation will become far more severe. Home networking systems are already becoming a widespread site of ubiquitous computing, both in research prototypes [2], and (in more limited form) in existing systems such as X10.

Is it possible that home-owners will ever be able to configure and customise interaction between the appliances in their homes? This is a critical question for the acceptance of ubiquitous computing. If the combined functionality of many appliances is no more powerful than that of the individual appliances purchased separately, then ubiquitous computing will not have any significant impact on regular lifestyles.

We have applied recent theoretical approaches in end-user programming [3] to the problem of domestic automation. Unlike end-user programming in the business context, where programming is done by "power users" with respectable (if incomplete) technical knowledge, programming in the home can be done by people with a very wide range of abilities. The end-user programming languages in products such as Excel already present a serious design challenge in supporting both casual users and serious developers [5]. In the domestic environment, we recommend an approach in which a range of programming paradigms are made available via a common programming architecture to support different modes of interaction with the underlying ubiquitous computing architecture. Not only will this support a range of user abilities, but also different programming tasks, as psychology of programming research has demonstrated that no language can be best for all applications - different notations give better support for different programmer activities (for example, creating a new program versus modifying an existing program [4]). Hence, the system should allow a single program to be represented in a variety of notations for different users and different tasks.

LINGUA FRANCA - SCRIPTING IN MANY LANGUAGES
In order to create a system in which a user may manipulate a single program via multiple notations, we have designed Lingua Franca, a common XML-based intermediate form for scripting languages. Using this intermediate form has several advantages. For example, automated enforcement of policies that limit the action of scripts is of particular importance for end-user programming in domestic ubiquitous computing. Both home owners and authorities are likely to be concerned that end-user programs should not inadvertently or maliciously bypass fire alarms, security systems, or payment mechanisms. In order to achieve these safety provisions, the system must be able to reason about the behaviour of new programs as they are created, in order to assess whether they conflict with existing policies. The common representation allows a common enforcement mechanism across languages.

Lingua Franca goes beyond conventional multiple-language systems in its support for translations between source languages (as opposed to simply translating multiple source languages into the same form of object code). Various source languages in Lingua Franca are supported via "language environments" that translate between the source language and Lingua Franca. Note that not all environments allow translation in both directions; some language environments only translate from the source notation to Lingua Franca (and may only be used to create script), whereas others only translate from Lingua Franca to some other notation (and may only be used to display script). The most general class of language environments perform translation in both directions; these may be used to edit a script, by first translating from Lingua Franca to a "source" notation, modifying that representation, then finally translating it back to Lingua Franca. To allow this bidirectional transformation, language environments must conserve all information in the Lingua Franca representation, regardless of whether it is meaningful in the present language or not. (Contrast this to traditional compilers, where information and structure not relevant to the

169
result is usually discarded.)

The two types of information that are most commonly discarded when translating a script from one form to another are secondary notation, such as comments, and higher-level structure, such as loops. Both of these may vary greatly from language to language. Lingua Franca allows multiple secondary notation elements to be associated with a part of a script; each such element is tagged with a notation type, to allow language environments to determine which (if any) to display. Higher-level structure is represented by grouping; again, each group is tagged with a type (such as "while loop"), which may imply a particular structure, and language environments may use this to determine how to display the group's members. Unlike secondary notation, any environment that can display Lingua Franca can display any group, as in the worst case it can simply display it as a grouped collection of primitive operations.

We have implemented an interpreter that stores the "corpus" of scripts that have been entered into the system. Language environments communicate with this interpreter via HTTP, allowing them to read, add to and update the Lingua Franca code (represented as XML) that makes up the corpus. In addition, the interpreter is responsible for executing Lingua Franca code, and interfacing the Lingua Franca environment with the rest of the ubiquitous computing system.

A MENAGERIE OF PROGRAMMING LANGUAGES
A wide variety of scripting languages are being developed in order to demonstrate the flexibility and range of the Lingua Franca architecture. These languages are designed to complement each other, in that they may be used to perform different manipulations on the same script with ease. Each language is embodied in a language environment that provides an interface via which the user can view and/or manipulate a particular notation, translates between the notation and Lingua Franca, and communicates with the Lingua Franca interpreter via HTTP.

A textual language provides an interface familiar to those with experience of conventional scripting languages. It is envisioned that this will be primarily used for editing substantial scripts, a task most likely to be undertaken by someone with at least some programming background. (It is of course possible to manipulate Lingua Franca directly in XML form, but this is needlessly difficult and carries the risk of introducing malformed code into the database, or accidentally removing or modifying data associated with another language environment.)

Two forms of visual language are in development, serving slightly different needs. The first is a purely presentational diagram that cannot be used to create or edit scripts, but only to display them. This allows it to be specialized in order to facilitate searching, navigation and comprehension of scripts. The second, a mutable diagram, allows scripts to be edited, and is likely to be the main environment for the manipulation of mid-sized scripts.

Perhaps the most unusual of the language environments being developed for use with Lingua Franca is the Media Cubes language. This is a "tactile" programming language, in other words, a language where programs are constructed by manipulating physical objects—in this case, cubes augmented such that they can determine when they are close to one another. The faces of the cube signify a variety of concepts, and the user creates a script by placing appropriate faces together; for example, to construct a simple radio alarm clock, the "Do" face of a cube representing a conditional expression would be placed against a representation of the act of switching on a radio, and the "When" face against a representation of the desired time. In an appropriately instrumented house, the representation can often be an existing, familiar item, or even the object itself. In the above example, a time could be represented using an instrumented clock face, and turning the radio on could be represented by the radio or its on switch.

The Media Cubes language is intended to be easy for those unfamiliar with programming, and as such would provide a low-impact path from direct manipulation to programming. However, the language as it stands is unusual in one very significant respect—scripts do not have any external representation. This means that it is only feasible to construct small scripts, and that, once created, scripts may not be viewed, and hence may not be modified. However, as the language exists within the Lingua Franca framework, we do not need to abandon the language, with its substantial advantages. Lingua Franca makes it feasible to include niche languages such as the Media Cubes in a system without sacrificing functionality.

REFERENCES
1. Blackwell, A.F., Hewson, R.L. and Green, T.R.G. (2003) Product design to support user abstractions. Handbook of Cognitive Task Design, E. Hollnagel (Ed.), Lawrence Erlbaum Associates.
2. Blackwell, A.F. and Hague, R. (2001). AutoHAN: An Architecture for Programming the Home. Proceedings of the IEEE Symposia on Human-Centric Computing Languages and Environments, pp. 150-157.
3. Blackwell, A.F., Robinson, P., Roast, C. and Green, T.R.G. (2002). Cognitive models of programming-like activity. Proceedings of CHI'02, 910-911.
4. Green, T.R.G., Petre, M. and Bellamy, R.K.E. Comprehensibility of visual and textual programs: A test of superlativism against the 'match-mismatch' conjecture. Empirical Studies of Programmers: Fourth Workshop, J. Koenemann-Belliveau, T.G. Moher, S.P. Robertson (Eds): Norwood, NJ: Ablex, 1991.
5. Peyton Jones, S., Blackwell, A. and Burnett, M. (in press) A user-centred approach to functions in Excel. To appear in proceedings of the International Conference on Functional Programming.

170
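The language-environment architecture described above (a notation front end that translates to and from Lingua Franca's XML form and talks to the interpreter over HTTP) can be illustrated with a small sketch. The `when … do …` notation, the XML element names, and the interpreter endpoint are all hypothetical; the paper does not specify the concrete syntax.

```python
from xml.etree import ElementTree as ET

def to_lingua_franca(script: str) -> str:
    """Translate the toy textual notation into a Lingua Franca-style XML script."""
    root = ET.Element("script")
    for line in script.strip().splitlines():
        # hypothetical notation: "when <event> do <action>"
        _, event, _, action = line.split()
        rule = ET.SubElement(root, "rule")
        ET.SubElement(rule, "event").text = event
        ET.SubElement(rule, "action").text = action
    return ET.tostring(root, encoding="unicode")

def from_lingua_franca(xml: str) -> str:
    """Recover the textual notation from the shared XML form."""
    root = ET.fromstring(xml)
    return "\n".join(
        f"when {rule.findtext('event')} do {rule.findtext('action')}"
        for rule in root.findall("rule")
    )

# A language environment would then publish the XML to the interpreter over
# HTTP, e.g. urllib.request.urlopen("http://interpreter.local/scripts",
# data=xml.encode()) -- the endpoint is hypothetical.
```

Because every environment round-trips through the same XML, a script edited in one notation remains editable in the others, which is the complementarity the section describes.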
TunA: A Mobile Music Experience
to Foster Local Interactions
Arianna Bassoli, Cian Cullinan, Julian Moore, Stefan Agamanolis
Human Connectedness Group
Media Lab Europe, Sugar House Lane, Bellevue, Dublin 8, Ireland
{arianna, cian, julian, stefan}@medialabeurope.org
ABSTRACT
Can the Walkman become a social experience? Can anyone become a mobile radio station? With the TunA project we are investigating a way to use music in order to connect people at a local scale, through the use of handheld devices and the creation of dynamic and ad hoc wireless networks. TunA gives the opportunity to listen to what other people around are listening to, synchronized to enable the feeling of a shared experience. Finally, TunA allows users to share their songs in many situations, while moving around, fostering a sense of awareness of the surrounding physical environment.

Keywords
802.11, music, synchronisation, local networks, shared experience, ad hoc networks

INTRODUCTION
R. D. Putnam claimed a few years ago: “Social networks based on computer-mediated communication can be organised by shared interests rather than by shared space” [1]. As the market for PDAs grows and new wireless technologies improve, we research instead a way to create and support social networks of people who share the same physical space. In the application we are currently developing, music constitutes the main interest around which communities, virtual and real, can be formed.

We wish, in general, to contribute to the understanding of how wireless networks, so far mainly considered for their “globalising” potential, could also make people more aware of their local reality. By connecting PDAs in an ad hoc way with 802.11b, we focus on the creation of dynamic local networks in which users are able to share information and resources with others who are in range.

In order to find a subtle and non-intrusive way to connect people who are nearby through mobile devices, we decided to explore the concept of a “shared music experience.” Music is commonly used as a form of mobile entertainment, through personal devices such as Walkmans or digital players. While so far listening to music while moving around has been mostly an individual and quite isolating process, we are here suggesting making it a fun and socialising experience.

MOTIVATION
The TunA project is about being able to access the playlists of other users who are near, and to listen synchronously to what someone else is listening to. This application has been developed following a recent social study that we conducted for a project called WAND (Wireless Ad hoc Network for Dublin) [2]. WAND is an infrastructure based on 802.11b, in the process of being installed in the city centre of Dublin. It is designed to support and run applications that exploit an ad hoc, decentralised, and peer-to-peer type of communication. An ethnographic study was organised in order to understand the socio-cultural dynamics of the area covered by the network, to involve users in the project development, and to inform and inspire content and service providers for the design of new applications. In this framework, we see TunA as targeted to some of the communities identified during this study, in particular students, skaters, and commuters. The goal of the project is not only to create new social links but also to strengthen existing ones; established communities like the skaters could in fact use TunA to reinforce their identity, and to express themselves in new creative ways.

Fig. 1: Example scenario of TunA usage—people on a bus

TECHNOLOGY
TunA is ideally meant to work on any handheld device that supports 802.11 technologies. We are now working on a prototype for iPaqs, running the GPE 0.7 version of Linux Familiar, connected in ad hoc mode through 802.11b. TunA can be used as a standard mp3 player for personal music; at the same time it visualises, in one single screen, all the other TunA users who are in range, and gives options to access their playlists, their profiles, and the songs they are listening to. The user has an option to “tune
in” and start listening to what another person is listening to. An important aspect of this work is the synchronisation of the listening experience. The “tune in” option gives in fact only access to the song another user is currently listening to, and this is what we refer to as a “shared music experience”. Finally, in order to keep track of the songs and the users encountered, TunA gives the possibility to keep a record of “favourites”.

Fig 2: TunA interface in development

SCENARIOS
TunA can accommodate a number of occasions in which people gather during the course of the day. While conducting the previously mentioned ethnographic study for WAND, we ran across some recurring situations happening in the city centre of Dublin, where TunA could play an active role in connecting people who are nearby.

Queuing for the Bank. On Thursdays most employees receive their salary. A large number of people gather in the main branch of AIB (Allied Irish Banks) to collect the money to spend over the weekend. To make the action of queuing more interesting and engaging, music enthusiasts could use TunA to feed their curiosity about what other people in the queue are listening to.

Commuters. The 123 bus is one of the main links between opposite sides of the city. Many commuters spend part of their daily routine on this bus, sometimes getting curious about each other's presence. TunA could provide a platform for light-weight interactions, in which people can discover who else commutes during the same hours, find out if they have music tastes in common, and finally listen to what others are listening to.

Skaters of the Central Bank Square. A well-established community of teenagers gathers every day in front of one of the main buildings of the city centre. They have in common their passion for skating along with a specific set of rules and behaviours. TunA could help this community to reinforce their identity through music. Instead of bringing their stereo and listening to their songs loudly, which would cause problems for the surrounding environment, they could use TunA to have a shared music experience, while still keeping their privacy and an individual listening process. At the same time they could provide a source of music, a sort of “skaters' radio station”, for other people around.

RELATED WORK
The recent success of the new version of Apple iTunes, which uses the Rendezvous technology to share music playlists over the same local network, has proven the potential of wireless peer-to-peer applications that count on the physical proximity of the users. iTunes is mostly suitable for office spaces or in general “static” settings, while TunA focuses on the mobile enjoyment of music, and on the social dynamics fostered by an ad hoc shared music experience. It is moreover based on handheld devices instead of desktop computers, and this makes it a very flexible application.

Along the same lines as TunA, the SoundPryer project [3] is about a peer-to-peer wireless exchange of music files through devices, especially designed for car travellers. TunA, targeted mainly to people moving around in an urban environment, translates the profiling process that SoundPryer uses to identify vehicles into a more personal one. With TunA the identity of each source of music is linked to the information users want to give about themselves. Moreover, the shared experience TunA wishes to provide is connected to the concept of synchronisation, which is for us at this stage one of the main technical issues to face.

FUTURE WORK
In order to make TunA progressively more flexible and engaging, we plan to implement, in future versions, ad hoc networking protocols, to allow search options. We also see TunA as ideally integrated with an Instant Messaging application; messages exchanged among users could in fact become the result of the shared music experience.

ACKNOWLEDGMENTS
This research has been supported by sponsors and partners of Media Lab Europe.

REFERENCES
1. Putnam, R., Bowling Alone, Simon & Schuster, New York, 2000, p. 172.
2. Bassoli, A. et al., Social research for WAND and new media adoption on a local scale, Proc. of dyd02.
3. Axelsson, F., Östergren, M., SoundPryer: Joint Music Listening on the Road, Adjunct Proc. of UBICOMP '02.
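The synchronised “tune in” behaviour described in the TECHNOLOGY section can be approximated with a simple clock-offset calculation: if the broadcasting peer advertises the timestamp at which its current track started, a joining listener seeks to the elapsed position instead of starting the track from the beginning. This is only one plausible mechanism, sketched here under that assumption; the paper leaves the actual synchronisation protocol open.

```python
def tune_in_position(song_started_at: float, song_length_s: float, now: float) -> float:
    """Seconds into the current track a joining listener should seek to.

    Assumes the broadcasting peer shares (song_started_at, song_length_s);
    both names are hypothetical, not part of the TunA prototype's API.
    """
    elapsed = now - song_started_at
    # Wrap around in case the broadcaster loops the same track.
    return elapsed % song_length_s
```

Any later correction for wireless latency would refine, not replace, this offset: the listener periodically recomputes the position from the advertised start time rather than trusting its own playback clock.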
AudioBored: a Publicly Accessible
Networked Answering Machine
Jonah Brucker-Cohen and Stefan Agamanolis
Human Connectedness group
Media Lab Europe
Sugar House Lane, Bellevue, Dublin 8, Ireland
{jonah, stefan}@medialabeurope.org
ABSTRACT
AudioBored is a publicly accessible networked answering machine with two components: an online audio message board and a physical device used to access voice messages and topics of discussion. The project focuses on adding networking capabilities to the familiar household interface of the answering machine, a widespread device that maintains social ties in an asynchronous manner. The project also incorporates an online voice messaging website that allows people to post messages with their telephones and listen to the posts online. Usage scenarios for AudioBored include voice-based online forums, situated voice posting from live events, and accumulated public voice message histories.

Keywords
Audio messaging, bulletin board, answering machine, telephone interface, online discussion list, community messaging

INTRODUCTION
AudioBored is a framework for a shared public audio messaging system where anyone can record a message and share it with the world. AudioBored augments the traditional telephone answering machine by adding a networked component to its everyday use and situating it in public space. Since answering machines are devices that have reached ubiquitous penetration in many areas of the world, they are already familiar interfaces to people of varying computer literacy. The project aims to extend the possibilities of public communicative spaces away from pictorial and written interaction and opens them to the potentially richer and more human-centered medium of voice-based messaging. The system allows for dynamic threading of incoming messages by sender and can store messages over time to build a personal or public archive of ongoing communications within an organization, individual relationship, or community. AudioBored adds a voice component to the previously text-only platform of online message boards by integrating network access and a flexible architecture into an easily navigable physical device. AudioBored extends existing voice-based online messaging applications by focusing on being a publicly accessible, always-connected physical answering machine that allows for numerous message threads and archiving of shared conversations collected over time.

RELATED WORK
AudioBored's focus on augmented networked appliances and shared audio messaging systems invokes references to considerable past research. In appliances, projects range from 3COM's Kerbango [1] radio, which brought Internet radio streams to a standalone radio, to Tobi Schneidler's RemoteHome [2], which features a collection of networked household appliances and furniture. VoiceMonkey [3] and AudioBlog [4] demonstrate uses of live phone-based posting to the internet, where people can add messages to their personal websites. Finally, Lakshmipathy's TalkBack [5] describes adding a networked component in the form of a screen that displays pictures of the caller along with their message.

Figure 1. Caller leaving message on AudioBored answering machine

TECHNOLOGY
The AudioBored prototype incorporates a server-side voice-based technology to allow people to call in and record messages, and a hardware component for the answering machine. Over the phone, a VoiceXML script prompts users to record a candid message. [Fig. 1] Once recorded through a PHP script, their message is saved according to topic and caller in a threaded database and posted online. The answering machine communicates serially with a PC via a microcontroller to access the database for recent messages, and updates the device when a new message is available. Users can see a display of topics and total messages on an embedded LCD display. [Fig. 2] There are two sliders – one to navigate “topics” of messages and one for selecting individual messages within threads. The slider ranges expand dynamically according to the number of incoming messages available online.
Figure 2. Close up of LCD display and slider

SCENARIOS
Below are a few specific examples of possible applications of the system.

1. Voice-based Online Forums: AudioBored allows people to contribute to a shared online public space without a computer. Since standard telephones (including mobile and fixed lines) are ubiquitous and exist in far greater numbers than computers, they provide an alternative entry point to the Internet. Using VXML as the voice input system, the project opens up the landscape for public contribution to distributed online audio forums where a greater number of people can potentially contribute to the discussion. Since most online bulletin boards exist in text format, the identity and authenticity of users can be concealed. Voice message posting can still maintain anonymity, but it potentially adds a more personal touch to messaging applications. For instance, users who communicated on a text-based web board could use AudioBored as a means of “hearing” each other's voices for the first time, which may ultimately bring their community closer together by adding a more human element to their previous interactions.

2. Situated Voice Posting: AudioBored provides a shared public outlet for people to post candid voice messages on the Internet from any phone. This becomes especially interesting in the midst of events where Internet access is not easily available. For example, people in the midst of a crowded protest march could voice their opinions from the center of the action. These candid comments might better reflect the electric atmosphere and excitement of such a live event, adding a sense of immediacy to the collected messages. Each voice message is immediately recorded, stored in a database and made public for people to listen to on the device or online. Since all messages are sorted by topic, this would allow an ongoing protest to take place through contributors experiencing the event online.

3. Public Voice Histories: With most physical answering machines and voice-mail systems, there is a limited amount of message storage and no way to sort incoming messages into separate storage mailboxes. AudioBored addresses this by storing all threaded messages on a server that can be instantly accessed through the physical interface. The device gains importance in public spaces where PC access to messages might be awkward or prohibitive, and it exists as a shared community resource. Over time, personal voice histories of messages left by community members can accumulate, while the hardware architecture can scale to adjust for the new messages. This database of public voice messages could possibly provide an invaluable historical resource for future generations.

FUTURE RESEARCH
Future versions of AudioBored will allow for more customized message information that will be catalogued along with individual clips and made into a directory searchable by contributor and subject matter. We plan additional work on interactive visualizations of information collected by the system, such as the geographic origin of messages. The device could also gain Internet access through public wireless hotspots, allowing it to be placed in a wider variety of public spaces in order to maximize its user base. A detailed study is also planned on uses of the system, along with an analysis of message content to gain inspiration for potential deployment locations and future refinements.

ACKNOWLEDGMENTS
This research has been supported by sponsors and partners of Media Lab Europe.

REFERENCES
(All web references last visited 3/03)
1. Kerbango (discontinued), 3COM Corporation, http://www.rnw.nl/realradio/features/html/kerbango010322.html
2. Schneidler, T., RemoteHome, http://www.remotehome.org/
3. VoiceMonkey, http://www.voicemonkey.com
4. AudioBlog, http://www.audblog.com
5. Lakshmipathy, V., Schmandt, C., & Marmasse, N., “Talkback: a conversational answering machine”, Proc. of UIST '03, Vancouver, Canada (to appear).
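The threaded storage and dynamically expanding slider ranges that the TECHNOLOGY section describes can be sketched as a small data model. The class and method names are hypothetical: the prototype used a VoiceXML front end, a PHP script, and a server-side database rather than this in-memory structure.

```python
class MessageBoard:
    """Threaded store of voice posts, addressed the way the two sliders are:
    first pick a topic, then a message within that topic's thread."""

    def __init__(self):
        self.threads = {}  # topic -> list of (caller, clip) in arrival order

    def post(self, topic: str, caller: str, clip: str) -> None:
        """File an incoming recording under its topic and caller."""
        self.threads.setdefault(topic, []).append((caller, clip))

    def slider_ranges(self):
        """The ranges the two physical sliders expand to as messages arrive:
        how many topics exist, and how many messages each thread holds."""
        return len(self.threads), {t: len(msgs) for t, msgs in self.threads.items()}

    def select(self, topic_index: int, message_index: int):
        """Resolve the two slider positions to a concrete message."""
        topic = sorted(self.threads)[topic_index]
        return topic, self.threads[topic][message_index]
```

The point of the sketch is the addressing scheme: because threads grow independently, the message slider's range must be recomputed per topic, which is what lets the hardware "expand dynamically" as posts accumulate online.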
Dimensions of Identity in Open Educational Settings
Alastair Iles, Energy and Resources Group, U.C. Berkeley, [email protected]
Matthew Kam, Computer Science Division, U.C. Berkeley, [email protected]
Daniel Glaser, Interdisciplinary Doctoral Program, U.C. Berkeley, [email protected]
ABSTRACT
Based on our deployments of Livenotes, a Tablet-based application for collaborative note-taking in open educational settings, we observe that communication breakdowns, potentially affecting learning, arise from imperfect knowledge about other users' identities. This leads us to argue that user identity is an under-explored topic in ubicomp. We show that the concept of identity needs to be expanded to include digital, social, and physical features. We conclude with preliminary design implications.

Keywords
identity, education, tablet computing, proximity, familiarity

INTRODUCTION
We study how people learn via distributed dialogue. Livenotes (LN) [1] is an application for collaborative note-taking and drawing in classrooms. Using LN, groups of 3-7 students are wirelessly connected to one another via their handheld tablets, such that students may exchange notes synchronously on a multi-user, multi-page whiteboard with peers from the same group. LN users are currently identified by being assigned unique ink colors and through logging in, with login names defaulting to machine names.

The most prevalent method for users to identify themselves to a computer is through logins, an explicit form of input. Nonetheless, traditional logins center heavily on the desktop model, assuming a single user who is bound to a given computer terminal for a substantial period of time. In contrast, Livenotes uses the “common pool” model, in which Tablet PCs do not have fixed users and can be easily swapped between users in a session. Pea and Roschelle [2] argued for device mobility in the education context. In this model, however, users can lose track of who is engaged in communication at a specific moment.

Ubicomp therefore becomes important as a way to address this problem. Abowd et al. [3] highlight the relevance of context, such as identity, to ubicomp, where applications accurately keep track of their users through implicit sensing, instead of relying on logins. We argue, however, that ubicomp needs not only to focus on digital identity, but also on social and physical identities where educational and collaborative work settings are concerned.

OBSERVATIONS
We made observations while analyzing five multi-session deployments of LN in educational settings (at UC Berkeley and the University of Washington). The deployments were not in controlled settings [1], but in open contexts including a graduate seminar (STS), reading group (TSD), design studio (DMG1, DMG2) (Figure 1), and undergraduate lecture (CS). The data includes transcripts of the written conversations (~500 pages) and, in some cases (~12 hours), video and audio recordings.

Figure 1. Livenotes deployed in an architectural studio review session DMG1. Graduate students and faculty swapped, picked up, and set aside tablets at will.

In the deployments, we discovered a number of disruptions to small group dialogue, and we then explored the mechanisms that people develop to resolve these problems. In each deployment, groups were confused over who was making what inputs on the whiteboard at different points because users would drop out of the dialogue, swap Tablets, or come and go from the classroom. Group dialogue improved over time with greater familiarity with the technology and user identity, provided that groups remained stable and did not swap Tablets freely. Still, breakdowns occurred from time to time because of user identity issues. In a computer science lecture in April 2003, for example, group dialogue stopped when the group realized that a member had just entered the room, and wondered “who is red?”
They asked “red” to identify himself, resuming dialogue when he did so.

To avoid such communication breakdowns, users can “challenge” one another and identify themselves throughout a session, particularly at the outset. Once, users even performed a “roll call” where people took the initiative to report who they are (e.g. red, “roll call”; red, “mark”; green, “john”; blue, “jeremy”; green, “hi”, as seen in a computer science lecture). Identities are established through a social process that everyone can witness and participate in. The group becomes more aware of each other.

Finally, we observed that group members developed a sense of user identity through non-explicit but physical means, such as associating Tablet use with screen activity, or gesturing to and looking at each other.

ANALYSIS
To explain how user identity is one important factor shaping collaborative group dialogue, and how users resolve identity problems in the absence of cues provided by LN user interfaces, logins, or social processes like roll-calls, we developed a framework that extracts four dimensions of each educational setting that LN is deployed in. These dimensions are: physical stability (did people come and go, or change groups), temporal stability (did people stay with the tablet conversation), proximity (were people sitting near each other), and social familiarity (did users know each other previously). Each dimension affects how much group members are aware of each other. The higher the level of all dimensions, the more likely it is that groups will effectively resolve identity issues and generate sustained dialogue.

We did an initial analysis to measure all five deployments in terms of the framework, and created a relative scale to compare them along each dimension: see Table 1. This scale runs from low to high, based on our joint judgments of how much of each dimension each group appeared to have.

Table 1. Dimensions of each educational setting for the five deployments (STS, TSD, DMG1, DMG2, CS), each rated low, medium, or high on physical stability, temporal stability, proximity, and familiarity.

Deployments varied greatly in their dimensions and therefore in the level of their distributed dialogue, measured by learning metrics such as the amount of dialogue, the extent of participation by everyone, or the depth of ideas generated. Two architecture studio groups differed markedly in their dialogue rate and content because one group (DMG2) sat together and could see who the users were, while the other group (DMG1) was more dispersed and swapped Tablets frequently. DMG2 had high scores on all dimensions. However, when people overcome the lack of identity knowledge through social, participative processes like a roll-call, they appear to engage in greater dialogue. Other variables such as personality and the classroom setting (lecture or studio) also affect the level of dialogue. Developing this framework leads us to conclude that the concept of identity needs further development in ubicomp. In the ubicomp literature, key distinctions between digital, social, and physical identities are usually not made.

DESIGN IMPLICATIONS
Potential design solutions for identity issues exist to aid ubicomp applications in open educational settings. These solutions can use our framework to determine how identity is being continuously influenced in conditions where people swap Tablets, drop out and re-enter dialogue, come and go from classrooms, or are mobile.

One solution has been proposed by Maniatis [4]: the introduction of a “person” layer to the network protocol stack used in wireless, mobile systems, or routing messages by recipient instead of machine names. Another solution is to change the user interface to enable a roll-call feature to help people identify each other through social, participative means. Another is that user activity can be incorporated into the group awareness display, thus augmenting user login and ink color information. Data from other sources of input (Active Badges, computer video cameras, and microphones) can also be cross-referenced to help determine identity. Hence, there are computational ways of enhancing stability and familiarity, overcoming the challenges that open classroom settings and workplaces pose to discourse. All these solutions can co-exist and target social, physical, and digital identities jointly. We plan to investigate how the solutions can be integrated in future iterations of LN interface design and deployments.

ACKNOWLEDGMENT
We gratefully thank MS Research for providing the TabletPCs in this study. We also thank John Canny and Ellen Yi-Luen Do for their support.

REFERENCES
1. A. Iles, D. Glaser, M. Kam, and J. Canny, “Learning via Distributed Dialogue: Livenotes and Handheld Wireless Technology”, in Proceedings of Computer Support for Collaborative Learning '02 (Boulder, CO, January 2002), Lawrence Erlbaum Associates, Inc., NJ, 408-417.
2. J. Roschelle and R. Pea, "A walk on the WILD side: How wireless handhelds may change computer-supported collaborative learning," International Journal of Cognition and Technology, vol. 1, pp. 145-168, 2002.
3. G. Abowd, E. Mynatt, and T. Rodden, "Human Experience," IEEE Pervasive Computing, vol. 1, pp. 48-57, 2002.
4. P. Maniatis, M. Roussopoulos, E. Swierk, K. Lai, G. Appenzeller, X. Zhao, and M. Baker, “The Mobile People Architecture,” ACM Mobile Computing and Communications Review, vol. 3, no. 3, July 1999.
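The “roll call” quoted above (red: “roll call”, red: “mark”, green: “john”, blue: “jeremy”) can be read as a tiny protocol: after someone requests a roll call, each ink color's next message is taken as that user's self-reported name. A sketch of how a client could fold such a transcript into a color-to-name map follows; the message format and function name are hypothetical, not part of the Livenotes interface.

```python
def apply_roll_call(transcript):
    """transcript: list of (ink_color, text) whiteboard events in order.

    Returns {color: name} built from self-reports made after a
    'roll call' request, taking each color's first reply as its name.
    """
    names = {}
    collecting = False
    for color, text in transcript:
        if text.strip().lower() == "roll call":
            collecting = True            # someone initiates the roll call
        elif collecting and color not in names:
            names[color] = text.strip()  # first message per color = the name
    return names
```

Such a mapping would let the group awareness display label strokes with names instead of bare colors, which is one way to realize the roll-call feature proposed under DESIGN IMPLICATIONS.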
Digital Message Sharing System in Public Places
Seiie Jang and Woontack Woo, KJIST U-VR Lab., Gwangju 500-712, S. Korea, +82-62-970-2226, {jangsei,wwoo}@kjist.ac.kr
Sanggoog Lee, SAIT Ubicomp Lab., Suwon 440-600, S. Korea, +82-31-280-6953, [email protected]
ABSTRACT
In this paper, we propose cPost-it, which allows users to share digital messages in public places by exploiting context such as the user's identity, location, and time. cPost-it, consisting of a Client, an Object, and a Server, provides location-based service (LBS) by retrieving embedded information from real-world objects. It also provides personalized information in the indoor environment according to the user's identity, location, time, etc. According to our subjective evaluations, the proposed cPost-it framework may play an important role in sharing information in the ubiquitous computing environment.

Keywords
Context-aware, Ubiquitous Computing, Personalized Service, Location-based Service

INTRODUCTION
In general, it is inconvenient for users to share information in public places through current information sharing systems such as whiteboards and post-it notes. For example, paper-based handwritten documents can be removed accidentally or be messily attached to an object. These problems have been relieved in part by NaviCam [1], CyberGuide [2], Guide [3], Cooltown [4], GeoNotes [5], comMotion [6], Stick-e Note [7], etc., which introduce digital messages (such as text, voice, picture, video, etc.) delivered to a Personal Digital Assistant (PDA) according to the user's location. However, these systems mainly exploit location information to provide users with proper information, rather than considering various types of context.

In this paper, we propose cPost-it, which allows users to access digital messages with a PDA, i.e. to augment or retrieve information about a real-world entity such as a place or object, by exploiting contexts such as the user's identity, location, and time. The main features of the proposed cPost-it are as follows. First, it provides a natural way to access augmented information related to a physical object through a short-range wireless network such as IrDA. In addition, it allows users to retrieve personalized digital messages when they approach the object of interest, e.g. an office, classroom, shopping mall, etc. It also helps users to access classified information by ordering messages based on context such as the user's profile.

CONTEXT-BASED INFORMATION SHARING SYSTEM
cPost-it, as shown in Figure 1, consists of an Object, a Client, and a Server. The cPost-it Object links information to a real-world entity by providing the cPost-it Client with the URL of the cPost-it Server through IrDA. Then, the cPost-it Client provides the user's context to the Server and presents the augmented information on the object through the PDA. The cPost-it Server manages requests from the Client and provides corresponding information according to the user's context.

Figure 1: The concept of cPost-it

As shown in Figure 2, if a user with a cPost-it Client is in the working area of a cPost-it Object, the Client receives the URL of the cPost-it Server. When the Client connects to the Server, it transmits the user's identity and the current time as the user's context. Then, the Server generates the personalized digital messages and transmits them to the Client immediately. The resulting information is ordered according to the provided context. For handling context, cPost-it is implemented using a unified context-aware application model called ubi-UCAM [8], which consists of a ubiSensor and a ubiService.

cPost-it Client
The cPost-it Client consists of the ubiSensor of ubi-UCAM [8] and a (Web-based) user interface. The ubiSensor in the PDA receives the URL of a Server from an Object and then makes a connection between the Client and Server. After

This work was supported by the University Research Program of MIC in Korea.
establishing the connection, it delivers the context to the Server. The interface transfers the user's identity to the ubiSensor. The identity specifies the right of access to the shared information, classified by the name of a user or group. Note that unspecified persons in public places belong to an “All” group. The resulting messages are provided in the form of Web pages.

Figure 2: The Architecture of cPost-it

cPost-it Object
The cPost-it Object consists of a real-world entity and a Smart Sensor. Anything can be used as the entity of the cPost-it Object which will be augmented with digital information. For example, the object can be a public place or an individual appliance such as a door, TV, furniture, etc. The Smart Sensor, a ubiSensor [8], is a device that includes a short-range wireless networking module and a simple processing module to provide the URL. It is bound to the entity. When a user triggers the IrDA signal within the working area of the cPost-it Object, the Smart Sensor senses the signal and sends the URL of the cPost-it Server to the Client.

cPost-it Server
The cPost-it Server consists of a database (DB) and the Web-based ubiService [8]. The cPost-it Server manages the DB, the saved information of a cPost-it Object which is virtually connected to the cPost-it Server. To help the ubiService generate context-based queries, it manages every digital message with additional information such as a file name, a right of accessing the file, and the frequency of usage of each message in a day. The ubiService provides Web-based services such as adding, editing, and removing the shared information according to the user's contexts from the cPost-it Client and the information in the DB.

PDA according to the user's identity. Also, cPost-it provides a user with personalized information services, such as classified messages, by exploiting the user profile about the entities of interest.

cPost-it guarantees that individual notes are kept personal and that personalized messages are shared among just group members. Because all messages are categorized into three parts, 'Personal', 'Group', and 'All', it provides users in public places with proper messages according to the access right which the user specifies. As long as the user's access right is preserved, private messages can be safely shared in public places. In addition, all services of cPost-it are protected by the security mechanism of a Web server.

Figure 3: Implemented cPost-it System

FUTURE WORK
To prove the usefulness of the proposed context-based information sharing system, we have experimented with the implemented cPost-it in ubiHome [9], a test-bed for a ubiComp-enabled home environment at the KJIST U-VR Lab. We are now improving the system based on evaluations such as users' satisfaction with the context-based services and system faults. After the usability tests, we will release the results of the improved context-based information sharing system.

REFERENCES
1. J. Rekimoto: NaviCam: A Magnifying Glass Approach to Augmented Reality. MIT Presence, Vol. 6, No. 4 (August 1997)
2. G.D. Abowd, C.G. Atkeson, J. Hong, S. Long, R. Kooper, M. Pinkerton: Cyberguide: A Mobile Context-aware Tour Guide. ACM Wireless Networks, (1997) 3:421-433
3. N. Davies, K. Mitchell, K. Cheverst, G. Blair: Developing A Context Sensitive Tourist Guide. Technical Report, Computing Department, Lancaster Univ. (March 1998)
4. Cooltown: http://www.champignon.net/TimKindberg/CooltownUserExperience1.htm
5. GeoNotes: http://geonotes.sics.se/
6. N. Marmasse, C. Schmandt: Location-aware information delivery with
PERSONALIZED INFORMATION SHARING comMotion. The HUC Proceedings (2000) 157-171
We used Compaq iPAQ H3130 and H3600 to implement 7. J. Pascoe: The Stick-e Note Architecture: Extending the Interface
the Smart Sensor of the cPost-it Object and the Client, Beyond the User. International Conference on Intelligent User
respectively. The Server is implemented with MS- Interfaces, Orlando, Florida, USA. ACM. (1997) 261-264
SQL2000 and the ubiService which is based on the Web 8. S. Jang, W. Woo: ubi-UCAM: A Unified Context-Aware Application
server. As shown in Figure 3, when a user carrying cPost-it Model for ubiHome. LNAI 2680 (2003) 178-189
Client approaches the door (the cPost-it Object), the 9. S. Jang, W. Woo: Research Activities on Smart Environment. IEEK,
Magazine. Vol.28. (2001) 85-89
augmented information (personal notes, video manuals of
appliances, public place notices, etc.) are retrieved on the

178
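The three-level access scheme that cPost-it describes ('Personal', 'Group', 'All') can be sketched in a few lines. The message structure and function name below are illustrative assumptions, not part of the cPost-it implementation.

```python
# Hedged sketch of cPost-it's three access levels; the message
# dictionaries and function name are assumptions for illustration.

def visible_messages(messages, user, user_groups):
    """Return the messages a user may see at a cPost-it Object.

    messages    -- dicts with a 'scope' of 'Personal', 'Group' or 'All',
                   plus 'owner' (Personal) or 'group' (Group) and 'text'
    user        -- identity string the Client sends via the ubiSensor
    user_groups -- set of group names the user belongs to
    """
    result = []
    for msg in messages:
        scope = msg["scope"]
        if scope == "All":                               # public notices
            result.append(msg)
        elif scope == "Group" and msg["group"] in user_groups:
            result.append(msg)                           # group members only
        elif scope == "Personal" and msg["owner"] == user:
            result.append(msg)                           # owner only
    return result

# Hypothetical sample data for illustration.
demo_messages = [
    {"scope": "All", "text": "notice"},
    {"scope": "Group", "group": "family", "text": "dinner"},
    {"scope": "Personal", "owner": "alice", "text": "memo"},
]
```

An unspecified visitor in a public place would be modelled here as a user with an empty group set, so only 'All' messages would be returned.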
The Spookies: A Computational Free Play Toy
Tobias Rydenhag¹,², Jesper Bernson¹, Sara Backlund¹,², and Lena Berglin¹
¹ToyLabs Ltd & ²PLAY, Interactive Institute
Hugo Grauers Gata 3
SE-41296 Gothenburg, Sweden
{tobias, jesper, sara, lena}@toylabs.se

ABSTRACT
We present Spookies, a computer-embedded toy to support natural Free Play activities. Free Play is defined as creative, active and spontaneous everyday play activities where several children play together. Since this kind of play behaviour finds little support in the interactive toys of today, Spookies have been designed to address this issue. Spookies present children with a flexible yet simple tool to use as they see fit in everyday play situations. The fourteen specialised units of sensors and output devices can be turned into complex functions by patterns of physical assembly, providing simple end-user programming for creative use.

Keywords
Free Play, Interactive Toys, Embedded Computing, Tangible Interfaces, Physical Programming.

INTRODUCTION
Playing is a central activity in children's lives. It is essential to their well-being but also to their cognitive, social and physical development [1, 2]. Playing allows children to learn about the world and experience life. In exploring their environment, children enjoy engaging in various kinds of play activities: activities involving toys or defined games, but also activities just involving playing with each other. Using their imagination, they can create exciting play settings and experiences out of their everyday environment. A broom might become a horse, an old log might become a pirate ship and a lit-down kitchen might become a dungeon. This kind of pretending, along with spontaneity, physical activity and social interaction, constitutes important elements of the definition of Free Play [3].

Supporting this kind of play with computer technology has previously proven very difficult. The interactive toys available today are mostly designed to assume the role of a regular friend or a pet in children's lives. By accommodating this role, the toys usually put themselves in the centre of attention, supporting a one-way interaction between the toy and its user and replacing the child's need of social interaction. Most interaction is further limited to a predestined purpose, such as a few built-in games and songs, not supporting the child in using the toy as part of other natural play situations. To sum up, few or none of today's interactive toys support Free Play any better than regular toys do, and generally far worse.

Figure 1: The Spookies

SPOOKIES
Meeting the call for an essentially new kind of interactive toy that supports Free Play activities, Spookies have been developed. Spookies are interactive, unlike most other Free Play toys, augmented by embedded computer technology. When trying to support Free Play behaviour with a toy, it is important not to limit or restrict the play by adding a predestined structure of use. Thus, the toy must be flexible enough to support creative usage.

Hence, Spookies have been designed as a collection of specialised yet flexible units of input sensors and output devices, providing children with a strong tool for creative play. A total of fourteen different units divided into seven couples have been developed to date: Audio, Tracker, Code, Light, Motion, Picture and Time, all designed with unique abilities (example in Figure 1). The communicational model of all Spookie units is set to predefined couples. These couples communicate with each other through a wireless network, transmitting the specified sensory input of the particular units, with a current range of 250 meters outdoors. This input is then displayed with a proper output medium on the receiving unit. Most Spookie couples can be understood through this simple transmitter-receiver model. The network protocol, however, also allows multiple receivers to be connected to multiple or single transmitters, enabling a distributed network structure. This enhancement is easily controlled by physical end-user programming, using a technique similar to that described in [4], letting objects be grouped together by shaking them. The shaking generates a similar pattern, perceived by accelerometer sensors, which is then compared to other nodes in the network.

Enriching the creative use of Spookies, all units can also be physically connected to each other, combining their abilities in order to create more complex functions. All units are connectable to each other in a consistent model of physical assembly, without any limitations on how many Spookies can be included in one combination. Spookies are combined by magnet connectors hidden under the texture surface on the top, bottom, left and right sides of the units. When connected, the state of a unit is important. Active units (defined by an input sensor threshold or a signal sent from the transmitter unit) can force or permit the activation of other physically connected units, depending on the pattern of assembly. This physical distributed network is controlled by IR diodes, enabling sending and receiving information through the texture.

Figure 2: The Treasure Hunt.

SCENARIO
Figure 2 describes a play situation where the children playing are searching for a hidden treasure. The treasure has previously been hidden by another child or a grown-up and is defined by three Spookies: a Photo Spookie, a Timer Spookie and a Tracker Spookie connected in a certain combination. The children trying to find the treasure are equipped with an Image Spookie and a Tracker Spookie. To initiate the play, the children searching open the Image Spookie to receive a picture. A picture is then sent from the Photo Spookie to the Image Spookie, giving a clue to what the environment surrounding the treasure looks like. Opening the Image Spookie also activates the Photo Spookie, in its turn forcing the Timer Spookie connected to it to activate. The timer then starts to count down a preset period of time and will during this time keep the Tracker Spookie connected next to it active. During this period of time, the children searching are also able to use the Tracker to see how their distance to the treasure changes. This is a good support, as the Tracker will notice if they are going in the wrong direction, giving them a good initial hint of where to search for the treasure.

DEVELOPMENT & EVALUATION
Prototyping and user participation have constituted a central part of the design process. The prototypes developed have provided a useful aid to communicate and evaluate ideas, as well as to actively involve children in the design process. By letting a user group of five children aged four to nine play with a fully functional set of six Spookies (Figure 3), a lot of answers and support could be returned to the development process. The children used Spookies as a communicational platform supporting their already ongoing play of Hide-and-Seek, enriching it with the ability to secretly perceive and communicate information about the seeker among the hiders. Used separately, Spookies proved a good tool for supporting active play events like sneaking, hiding, seeking and running, stimulating spontaneous and physically active play. By combining different Spookies as bricks or building blocks, the children could create new patterns of functionality, supporting their creativity but also stimulating their understanding of logic. Most interestingly, the children were easily able to come up with new areas or ways of usage not previously thought of. This supports our idea of Spookies as a tool for inventive Free Play behaviour.

Figure 3: User Group

CONCLUSION
We believe that Spookies meets the expectations set in this project, utilizing a broad aspect of Free Play and contributing resourceful computer-embedded support to natural play situations, old and new. It does this not by being designed in a manner that limits or restricts the play to some predestined structure or context of use, but by providing a few main abilities, given by the general concept and the individual units, that define flexible support for creative usage. Clearly important to the aspects of Free Play is the use of Spookies as a communicational platform when playing together in a group. Another strength lies in the user's possibility of creating more complex functions by physically assembling several units into a pattern, easily reprogrammed by the end-user.

REFERENCES
1. Piaget, J. (1962). Play, dreams and imitation in childhood. New York: Norton.
2. Vygotsky, L. (1978). Mind in Society: The Development of Higher Mental Processes. Cambridge, MA: Harvard University Press.
3. Rydenhag, T. (2003). Design for Free Play. Chalmers University of Technology Press, Sweden.
4. Holmquist, L. E., Mattern, F., Schiele, B., Alahuhta, P., Beigl, M., Gellersen, H. W. (2001). Smart-Its Friends. Proc. UbiComp 2001, Springer-Verlag.
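The shake-to-group pairing that the Spookies paper adopts from Smart-Its Friends [4] can be illustrated as follows. The trace format, similarity measure and threshold below are assumptions for illustration, not the actual Spookies firmware.

```python
# Illustrative sketch (not the Spookies implementation) of grouping
# units by shaking them together: two units join a group when their
# accelerometer traces are sufficiently similar. The sample traces
# and the 0.2 threshold are assumed values.

def similarity(trace_a, trace_b):
    """Mean absolute difference between two equal-length
    accelerometer-magnitude traces (lower means more similar)."""
    assert len(trace_a) == len(trace_b)
    return sum(abs(a - b) for a, b in zip(trace_a, trace_b)) / len(trace_a)

def same_shake(trace_a, trace_b, threshold=0.2):
    """Units broadcast their traces; a receiver groups with a sender
    whose trace differs from its own by less than the threshold."""
    return similarity(trace_a, trace_b) < threshold

shaken_together = [1.0, 2.1, 0.4, 1.8]   # unit A, shaken in one hand
shaken_copy     = [1.1, 2.0, 0.5, 1.7]   # unit B, shaken together with A
left_on_table   = [0.0, 0.1, 0.0, 0.1]   # unit C, not shaken
```

A real implementation would also have to align the traces in time and tolerate differing sample rates; this sketch assumes synchronised, equal-length windows.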
k:info: An Architecture for Smart Billboards for Informal
Public Spaces
Max Van Kleek
MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)
200 Technology Square
Cambridge MA, 02139 USA
[email protected]

ABSTRACT
High-traffic public spaces in the workplace are rich breeding grounds for informal collaborations among knowledge workers; yet, very little technology currently inhabits these spaces today. k:info is a context-aware information billboard that aims to inspire informal interactions in these spaces by providing a dynamic display of items that are relevant and easily visible to users nearby.

1. INTRODUCTION
Public gathering spaces such as lounges, elevator lobbies, and hallways are places where informal social encounters occur most frequently in the workplace [4]. In addition to serving as the crossroads for day-to-day activities, these spaces harbor a relaxed social atmosphere, where people feel naturally inclined to gather and talk casually about anything that may be on their minds. As a result, these spaces encourage social connections to be made, shared interests to be discovered, and, perhaps most importantly, informal collaborations to form among people who may otherwise never have realized the opportunity to work together. Despite the importance of such social encounters and informal collaborations in knowledge-driven organizations [1], these spaces still largely lack any information infrastructure. This inspired the Ki/o project [5] to design such an information infrastructure, which consists of an intelligent kiosk platform and a software architecture to be integrated into these spaces.

One of the first applications being developed for the Ki/o platform is a "smart" information bulletin/billboard called k:info that opportunistically uses available contextual clues from the environment to schedule items for display. Like the advertising billboards and dynamic newspaper information displays in Steven Spielberg's science fiction thriller Minority Report, this information billboard dynamically adapts its display to its audience and to contextual changes in the environment. But unlike the Minority Report advertising billboards, which aim to persuade, the aim of k:info billboards is to spark informal social conversations among passers-by by displaying information that coincides with their common interests.

2. DESIGN CHALLENGES
2.1 Content Selection
Realizing content personalization on billboards and other wide-audience public information displays is challenging for a number of reasons. Most contemporary personalization systems rely exclusively on statistical collaborative filtering algorithms to choose what to display. These algorithms work by logically clustering people based upon how similarly people like or dislike items they have previously seen. Statistical collaborative filtering systems usually require users to state this information explicitly, such as by having users assign scores or specify rankings for each item. Obtaining scores in such a manner, however, is impractical for high-traffic public displays, because the interaction duration between the user and such a display is typically extremely short. Furthermore, statistical collaborative filtering algorithms suffer from the "cold-start", or "ramp-up", problem, meaning that they require a large amount of initial data about each user before they can make recommendations. The problem is compounded by the potentially unbounded size of the user base of such public displays, as well as the large probability that any given user may never have used the system before. Finally, making a collaborative filtering engine situationally or environmentally context-aware (as defined by [6]) requires an exponential amount of data to train the system, because new users must be classified along each of the contextual dimensions.

These observations have led us to a new approach that combines a conventional collaborative filtering engine with a symbolic knowledge-based recommendation architecture. This architecture explicitly represents various states of the world, such as user profiles, and maintains heuristics that can make context-sensitive recommendations based on this knowledge.

2.2 Information Composition and Display
Content presentation, or the way information is conveyed to users, may be made context-aware as well. Information pertaining to the capabilities and characteristics of the physical kiosk display, as well as user presence information, such as how closely users are standing to the display, or how many users are nearby, can be used to optimize an article for readability. If the display is small relative to the distance users are standing from the display, for example, k:info should choose a display technique that is well suited and readable for users at their standing distance, such as a rapid serial visual presentation (i.e., slide-show) method instead of a bulletin-board, collage-style layout [2]. Similarly, if the system has identified a user with a disability in the vicinity, such as a user with a visual impairment, alternate channels such as text-to-speech may be activated.
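The distance-dependent choice of presentation technique sketched in Section 2.2 could be reduced to a simple rule on the visual angle that the text subtends. The function name, the threshold of roughly 0.4 degrees, and the layout labels below are illustrative assumptions, not values from the k:info implementation.

```python
import math

# Hedged sketch of a context-aware presentation rule: pick a display
# technique from the angle subtended by the text at the viewer's
# (estimated) distance. All thresholds here are assumed, not k:info's.

def choose_layout(text_height_m, viewer_distance_m, visually_impaired=False):
    """Return one of three presentation modes for the nearest viewer."""
    if visually_impaired:
        return "text-to-speech"                 # alternate output channel
    # Visual angle (radians) subtended by a line of text of the given height.
    angle = 2 * math.atan(text_height_m / (2 * viewer_distance_m))
    if angle < 0.007:                           # too small to read comfortably
        return "rapid-serial-visual-presentation"
    return "collage"                            # bulletin-board style layout
```

In practice the viewer distance would come from the perceptual knowledge sources (cameras, motion sensors) described later in the paper; here it is simply a parameter.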
3. APPROACH
3.1 Knowledge-Based Selection
The k:info system performs two functions: collection followed by selection. Specifically, k:info must collect updated display candidates (e.g., news articles and event announcements) and choose from among them what to display at any particular moment. To make this judgment, the system also needs to collect the contextual information it requires for determining the relevance of each item to the current context. This includes information directly perceived from the physical and digital "surroundings", such as the time of day, the weather outside, or user presence and identity information, as well as more static information that can be explicitly updated by the system's maintainer, such as display device characteristics and configuration. Thus, k:info requires perceptual capability, as well as a facility that allows knowledge to be inspected and updated quickly and easily.

3.2 k:info Blackboard Architecture
Knowledge-based selection requires the ability to consolidate a large assortment of heterogeneous types of information. Blackboard architectures, as popularized by the Hearsay-II speech understanding system, are well suited for this task [3]. Blackboards consist of independent modules, called Knowledge Sources (KSes), that either embody a type of expertise or represent an external data source, and which communicate new information across the blackboard, a persistent knowledge repository. A simple Java blackboard architecture called the Context Keeper was designed for k:info [5].

3.3 k:info Knowledge Sources
Knowledge sources in k:info are divided into three functional categories:

1. Perceptual Knowledge Sources. The first, perceptual KS agents, add the lowest-level information to the blackboard. This includes data gathered from hosts on the Internet, such as news feeds (currently CNN and BBC), announcement lists (currently the MIT and CSAIL Events Calendars), e-mail messages, and the current date and weather. Other perceptual KSes provide presence and identity information about users in front of the display, through the use of local sensors such as cameras and motion sensors.

2. Domain-specific expert Knowledge Sources. The second category of knowledge sources contains domain-specific experts, which contribute external wisdom about the current situation by triggering on knowledge produced by lower-level perceptual KSes as well as by other domain-expert peers. An example of a simple domain-specific expert KS would be an agent that knows all national holidays and posts them when the appropriate days arrive. These agents, in effect, classify concrete states of the world into familiar situational characterizations that are recognizable by the recommender agents.

3. Recommenders. Recommender agents form the highest-level agents in the k:info blackboard architecture. These agents associate world state with candidate items to display. Once an appropriate combination has been identified, a recommender posts a recommendation for either a single specific item or a broader class of candidate displayable information items. These recommendations include a numeric value indicating the strength of the recommendation, a reference to the item(s) being recommended, and the name of the recommender agent who made the recommendation. Case-based or collaborative recommenders use statistical collaborative filtering or classification techniques to make recommendations based upon "learned" past interactions, once sufficient data has been acquired.

3.4 Scheduling Display Items
Scheduling items for display, then, involves collecting the posted recommendations and producing a display schedule. The current simple scheduler uses an item's total recommendation level (calculated as a sum over the recommendations for that item) as the probability that the item will be displayed next. Items with negative recommendation totals are omitted from the schedule.

4. FUTURE WORK
4.1 Performance and User Evaluation
The most important work that has yet to be completed is an evaluation of the system. From a developer perspective, the blackboard architecture has provided a useful structure that has simplified system development and improved modularity. A user study is planned, which will survey users as to whether they found displayed items to be of interest, and whether they felt the display provided a useful or detrimental distraction in public spaces.

5. ACKNOWLEDGMENTS
This work is being done with the Agent-based Intelligent Reactive Environments (AIRE) group at MIT CSAIL, under the supervision of Dr. Howard Shrobe and Dr. Kimberle Koile. It is being funded by MIT Project Oxygen.

REFERENCES
[1] D. Cohen and L. Prusak. In Good Company: How Social Capital Makes Organizations Work. Harvard Business School Press, Cambridge, Massachusetts, 2001.
[2] J. Laarni. Searching for optimal methods of presenting dynamic text on different types of screens. In Proceedings of the Second Nordic Conference on Human-Computer Interaction, pages 219–222. ACM Press, 2002.
[3] R. Reddy, L. Erman, R. D. Fennell, and R. B. Neely. The HEARSAY speech understanding system: An example of the recognition process. IEEE Transactions on Computers, pages 427–431, 1976.
[4] Employees on the move. Steelcase Workplace Index Survey, Apr. 2002.
[5] M. Van Kleek. Intelligent environments for informal public spaces: the Ki/o Kiosk Platform. M.Eng. Thesis, Massachusetts Institute of Technology, Cambridge, MA, February 2003.
[6] T. Winograd. Architectures for context. Human-Computer Interaction, 16:401–409, 2001.
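The simple scheduler that k:info's Section 3.4 describes can be sketched directly. The tuple shape of a posted recommendation is an assumption, but the selection rule follows the text: an item's display probability is proportional to its summed recommendation strength, and items with negative totals are omitted.

```python
import random
from collections import defaultdict

# Sketch of the Section 3.4 scheduler (data shapes assumed):
# recommendations are (strength, item, recommender) tuples posted to
# the blackboard by recommender agents.

def next_item(recommendations, rng=random):
    """Pick the next item to display, weighted by total recommendation."""
    totals = defaultdict(float)
    for strength, item, _recommender in recommendations:
        totals[item] += strength                 # sum per item
    eligible = {item: t for item, t in totals.items() if t > 0}
    if not eligible:
        return None                              # nothing worth displaying
    items = list(eligible)
    weights = [eligible[i] for i in items]       # probability ∝ total
    return rng.choices(items, weights=weights, k=1)[0]

# Hypothetical recommendations for illustration.
demo_recs = [
    (2.0, "cnn-headline", "news-ks"),
    (1.0, "cnn-headline", "collab-filter"),
    (-5.0, "stale-item", "freshness-ks"),
    (1.0, "csail-talk", "events-ks"),
]
```

With these inputs, "cnn-headline" is selected three times as often as "csail-talk", and "stale-item" is never shown.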

An Intelligent Broker for Context-Aware Systems
Harry Chen Tim Finin Anupam Joshi
University of Maryland, Baltimore County
[email protected] [email protected] [email protected]

ABSTRACT explicit description of concepts in a domain of discourse (or


We describe Context Broker Architecture (CoBrA) – a new classes), properties of each class describing various features
architecture for supporting context-aware systems in smart and attributes of the class, and restrictions on properties [8].
spaces. Our architecture explores the use of Semantic Web In order to create computer systems that can “understand”
languages for defining and publishing a context ontology, and make full use of a context model, the contextual in-
for sharing information about a context and for reasoning formation must be explicitly represented so that they can
over such information. Central to our architecture is a bro- be processed and reasoned by the computer systems. Fur-
ker agent that maintains a shared model of the context for all thermore, shared ontologies enable independently developed
computing entities in the space and enforces the privacy poli- context-aware systems to shared their knowledge and beliefs
cies defined by the users and devices. We also describe the about context, reducing the cost of and redundancy in con-
use of CoBrA in prototyping an intelligent meeting room. text sensing.

Keywords
The need for a shared context model. CoBrA maintains a
model of the current context that can be shared by all de-
Context-aware systems, smart spaces, semantic web, agent
vices, services and agents in the same smart space. The
architecture
shared model is a repository of knowledge that describes the
1. INTRODUCTION context associated with an environment. As this repository
Context-aware systems are computing systems that provide is always accessible within an associated space, resource-
relevant services and information to users based their situ- limited devices will be able to offload the burden of main-
ational conditions [3]. Among the critical research issues taining context knowledge. When this model is coupled with
in developing context-aware systems are context modeling, a reasoning facility, it can provide additional services, such
context reasoning, knowledge sharing, and user privacy pro- as detecting and resolving inconsistent knowledge and rea-
tection. To address these issues, we are developing an agent- soning with knowledge acquired from the space.
oriented architecture called Context Broker Architecture that The need for a common policy language. CoBrA includes
aims to help devices, services and agents to become context a policy language [5] that allows users and devices to de-
aware in smart spaces such as an intelligent meeting room, a fine rules to control the use and the sharing of their private
smart vehicle, and a smart house. contextual information. Using this language, the users can
By context we mean a collection of information that char- protect their privacy by granting or denying the system per-
acterizes the situation of a person or a computing entity [3]. mission to use or share their contextual information (e.g.,
In addition to the location information [6], an understand- don’t share my location information with agents that are not
ing of context should also include information that describes in the CS building). Moreover, the system behavior can be
system capabilities, services offered and sought, the activ- partially augmented by requesting it to accept new obliga-
ities and tasks in which people and computing entities are tions or dispensations, essentially giving it new rules of be-
engaged, and their situational roles, beliefs, desires, and in- havior (e.g., you should inform my personal agent whenever
tentions. my location context has changed).

Research results show that building pervasive context-aware 2. CONTEXT BROKER ARCHITECTURE
systems is difficult and costly without adequate support from Our architecture differs from the previous systems [3, 7] in
a computing infrastructure [1]. We believe that to create such the following ways:
infrastructure requires the following: (i) a collection of on- • We use Semantic Web languages such as RDF and the
tologies for modeling context, (ii) a shared model of the cur- Web Ontology Language OWL [8] to define ontologies
rent context and (iii) a declarative policy language that users of context, which provide an explicit semantic represen-
and devices can use to define constraints on the sharing of tation of context that is suitable for reasoning and knowl-
private information and protection of resources. edge sharing. In the previous systems, context are of-
The need for common ontologies. An ontology is a formal, ten implemented as programming language objects (e.g.,
Java class objects) or informally described in documenta-
∗This work was partially supported by DARPA contract F30602-
tion.
97-1-0215, Hewlett Packard, NSF award 9875433, and NSF award
0209001. • CoBrA provides a resource-rich agent called the context
183
broker to manage and maintain a shared model of con- places, agents (both human and software agents), devices,
text1 . The context brokers can infer context knowledge events, and time. We have also prototyped a context broker
(e.g., user intentions, roles and duties) that cannot be eas- in JADE2 that can reason about the presence of a user in a
ily acquired from the physical sensors and can detect and meeting room. In our demonstration system, as a user enters
resolve inconsistent knowledge that often occurs as the the meeting room, his/her Bluetooth device (e.g., a SonyEr-
result of imperfect sensing. In the previous systems, indi- icsson T68i cellphone or a Palm TungstenT PDA) sends an
vidual entities are required to manage and maintain their URL of his/her policy to the broker in the room3 . The broker
own context knowledge. then retrieves the policy and reasons about the user’s context
using the available ontologies. Knowing the device owned
• CoBrA provides a policy language that allows users to by the user is in the room and having no evidence to the
control their contextual information. Based on the user contrary, the broker concludes the user is also in the room.
defined policies, a broker will dynamically control the
granularity of a user’s information that is to be shared and 4. FUTURE WORK AND REMARKS
select appropriate recipients to receive notifications of a We believe an infrastructure for building context-aware sys-
user’s context change. tems should provide adequate support for context modeling,
context reasoning, knowledge sharing, and user privacy pro-
tection. The development of CoBrA and the EasyMeeting
system are still at an early stage of research. Our short-term
objective is to define an ontology for expressing privacy pol-
icy and to enhance a broker’s reasoning with users and ac-
tivities by including temporal and spatial relations. A part of
our long-term objective is to deploy an intelligent meeting
room in the newly constructed Information Technology and
Engineering Building on the UMBC main campus.
REFERENCES
Figure 1: A context broker acquires contextual information from heterogeneous sources and fuses it into a coherent model that is then shared with computing entities in the space.

Figure 1 shows the architecture design of CoBrA. The context broker is a specialized server entity that runs on a resource-rich stationary computer in the space. In our preliminary work, all computing entities in a smart space are presumed to have a priori knowledge about the presence of a context broker, and the high-level agents are presumed to communicate with the broker using the standard FIPA Agent Communication Language [4].

3. EASYMEETING: AN INTELLIGENT MEETING ROOM
To demonstrate the feasibility of our architecture, we are prototyping an intelligent meeting room system called EasyMeeting, which uses CoBrA as the foundation for building context-aware systems in a meeting room. This system will provide different services to assist meeting speakers, audiences and organizers based on their situational needs.

We have created an ontology called COBRA-ONT [2] for modeling context in an intelligent meeting room. This ontology, expressed in the OWL language, defines typical concepts (classes, properties, and constraints) for describing

¹ Notice that we have a broker associated with a given space, which can be subdivided into smaller granularities with individual brokers. This hierarchical approach, with collaboration fostered by shared ontologies, helps us avoid the bottlenecks associated with a single centralized broker.
² Java Agent DEvelopment Framework: http://sharon.cselt.it/projects/jade/
³ The description of the URL is sent to the broker in a vNote via the Bluetooth OBEX object push service.

[1] Chen, G., and Kotz, D. A survey of context-aware mobile computing research. Tech. Rep. TR2000-381, Dartmouth College, Computer Science, Hanover, NH, November 2000.
[2] Chen, H., Finin, T., and Joshi, A. An ontology for context-aware pervasive computing environments. Special Issue on Ontologies for Distributed Systems, Knowledge Engineering Review (2003).
[3] Dey, A. K. Providing Architectural Support for Building Context-Aware Applications. PhD thesis, Georgia Institute of Technology, 2000.
[4] FIPA. FIPA ACL Message Structure Specification, December 2002.
[5] Kagal, L., Finin, T., and Joshi, A. A policy language for a pervasive computing environment. In Proceedings of the IEEE 4th International Workshop on Policies for Distributed Systems and Networks (2003).
[6] Priyantha, N. B., Chakraborty, A., and Balakrishnan, H. The Cricket location-support system. In Proceedings of MobiCom 2000 (2000), pp. 32–43.
[7] Schilit, B., Adams, N., and Want, R. Context-aware computing applications. In Proceedings of the 1st IEEE WMCSA (Santa Cruz, CA, US, 1994).
[8] Smith, M. K., Welty, C., and McGuinness, D. OWL Web Ontology Language Guide. http://www.w3.org/TR/owl-guide/, 2003.
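Conceptually, the broker acquires facts from heterogeneous sources, fuses them into one shared model, and notifies interested agents. A minimal publish/subscribe sketch of that idea (class and method names are hypothetical, not CoBrA's actual FIPA/OWL interface):

```python
class ContextBroker:
    """Toy sketch of a centralized context broker: it acquires facts from
    heterogeneous sources, keeps a single coherent model, and notifies
    subscribed agents of every change. Illustrative only, not CoBrA's API."""

    def __init__(self):
        self.model = {}          # shared context model: subject -> fact dict
        self.subscribers = []    # callables notified on every update

    def subscribe(self, agent):
        self.subscribers.append(agent)

    def acquire(self, source, subject, **facts):
        """Fuse a report from one source into the coherent model."""
        entry = self.model.setdefault(subject, {})
        entry.update(facts)
        entry["last_source"] = source
        for agent in self.subscribers:
            agent(subject, dict(entry))

# Hypothetical usage: a badge reader reports a person's location.
broker = ContextBroker()
seen = []
broker.subscribe(lambda subject, facts: seen.append((subject, facts["location"])))
broker.acquire("badge-reader", "alice", location="Room 338")
print(seen)   # [('alice', 'Room 338')]
```

In CoBrA proper the notification would travel as a FIPA ACL message and the facts would be COBRA-ONT individuals; the sketch only shows the fuse-then-share control flow.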
Containment: Knowing Your Ubiquitous System's Limitations

Boris Dragovic and Jon Crowcroft
University of Cambridge Computer Laboratory
{firstname.lastname}@cl.cam.ac.uk
1. Extended Abstract

1.1 Overview
This is a position paper outlining our current research in the area of ubiquitous systems security. We recognized that, from the data objects' point of view, the frequency of context changes caused by migration through the environment implies changes in the threat models to which the data objects are exposed.

We propose a novel proactive security paradigm to mitigate the varying security risks by manipulating data objects' format. Here we concentrate on modeling context for this purpose.

1.2 Motivation
Motivation for this research stems from consideration of threat models in the ubiquitous world. In general, threat models can be viewed as attributes of contexts. Entities in the ubiquitous world experience, through their mobility, frequent context changes. Thus, the security risks and threats that entities are exposed to vary as they migrate within the environment.

The notion of an entity in our research corresponds to individual data objects of arbitrary, sub-file, granularity. Data objects migrate within the environment either by being contained on a mobile device or by being transmitted through communications channels. This emphasizes the dynamicity of threat model changes for the data objects.

Due to the dynamicity and complexity present in the ubiquitous world, it is unrealistic to expect humans to be able to reason and act effectively to address security risks. We propose a new security paradigm that aims to mitigate security risks and threats present in contexts for data objects by automatic, proactive data format management.

As data objects are viewed at sub-file granularity, the format management can, in addition to bulk file operations, provide fine-grained transformations such as anonymization, partial data quality degradation and other types of selective data constraining.

We identify three main aspects of the proactive data management system: the policy definition language, reasoning engine and enforcement environment; an unambiguous and inseparable data object tagging system with application-level support; and the context model.

In this work, we concentrate on modeling context for the purpose of applying it in the proposed security paradigm. To date, ubiquitous computing projects have largely exploited location to determine context [5]. This approach, although often efficient, has major drawbacks with regard to heterogeneity, compatibility, and availability, as well as the inability to scale the proposed location infrastructures [4].

In contrast, we model context as containment, based on inter-device visibility (1.3) and inherently independent of the traditional notion of location. One of the main aims of our research is to explore the expressiveness and applicability of this approach, as envisaged in the next section.

1.3 Container-View Model
Containment is established by determining physical enclosure and locally visible devices. The visibility field of a device is defined with respect to its communication capabilities (1.3.0.1). Containments may exist in multiple instances concurrently, depending on the granularity at which they are defined. Each containment can be associated with a set of application-specific attributes. In our example, we attribute containments with respect to risks and threats for data objects.

We split the notion of containment into physical containment¹ and views. We name the resulting model the Container-View model.

¹ Referred to as containment.

1.3.0.1 View

Figure 1. A spherical and a directed view.

A view (Figure 1) represents one-hop reachability within a communications channel. Each view has a view generator
and a view type. For example, if a PDA is IrDA-equipped and is within range of an IrDA-capable mobile phone, we say that the mobile phone is within a view of the PDA; the view type is IrDA and the view generator is the PDA. Consequently, we define the visible relation, which is reflexive, antisymmetric and intransitive. By migrating through the environment, entities dynamically enter and leave views.

1.3.0.2 Container

Figure 2. Nested containment.

The notion of a container defines a physical enclosure (Figure 2). Containers may be nested. The main characteristic of a container is that any movement action and its consequences are directly reflected onto enclosed entities. For example, a data object is contained within a PDA; the PDA is contained within a car; as the car moves, so do the PDA and the data object. The contains relation is irreflexive, antisymmetric and transitive. Physical enclosure, i.e. a container, can be determined through the visibility property, in the presence of landmarks, or by using dedicated infrastructural support (1.3.0.5).

1.3.0.3 Container-View relations
A container can be within a view of another entity. A container can be either transparent or opaque to a view. The former means that the contents are in the direct view as well. The only per se inference that can be made is that if a container is not within a view then its contents are not within the view either. To support other types of inference, the model provides for constraints to be specified. Furthermore, we define inter-container and inter-view paths to denote one-hop links along which data objects can migrate among different containers and views; e.g. a door between rooms, or a bridge between IEEE 802.11 and GPRS, respectively.

1.3.0.4 Formal model
Owing to envisaged device constraints, heterogeneity of device capabilities and differing model usage, the decision was made to model physical containment and views separately. As physical containment is highly hierarchical, we are inclined to use lattices to model it. Apart from being computationally feasible, lattices aid reasoning about neighbors, ancestors and descendants, and can incorporate the notion of paths. Modeling views, on the other hand, is more demanding, as the model has to be chosen based on the nature of a view type and its propagation characteristics, e.g. directional vs. omni-directional. We intend to develop a taxonomy of views to aid appropriate model choice. Individual models can be aggregated into a bigger picture based on individual application requirements. Our approach facilitates partial, distributed model evaluation and constrained reasoning at differing levels of granularity, as required by the highly heterogeneous environment.

1.3.0.5 Collaboration model
As entities migrate through the ubiquitous environment, they will experience differences in the quality and quantity of available Container-View model data. This will be reflected in the accuracy of the model. The model should support three ways of obtaining relevant information: through environment-embedded services which provide precomputed models as required; by using hints based on an entity's sensing capabilities or obtained from other entities present locally; and through an inference process. The model will also incorporate a trust management infrastructure for the collaboration, and will support reasoning about containment information that captures confidence.

1.3.0.6 Inference
To provide entities with a certain level of independence from ubiquitous infrastructures, we are currently working on suitable inference mechanisms. There are two stages in the model operation at which inference is needed: determining, i.e. capturing, the current model state, and reasoning about the model. For the former, in cases where the model information is unobtainable from trusted third parties, we are focusing on Bayesian inference methods [2, 3]. Reasoning is to be supported by an algebra roughly based on Egenhofer's Container-Surface algebra [1], substantially extended to support: the considerable difference between physical surfaces and views, container-view relationship constraints, mobility, and information vagueness and indeterminacy.

1.4 Summary
By considering security issues in ubiquitous computing, we have identified a need to address the problem of frequently changing threat models for migrating data objects. We propose a system for proactively managing data object format. As a first step, we define context as containment with respect to the physical world and communications channels. The Container-View model represents a formalization of the notion of containment based on local inter-entity relationships and is independent of absolute location and location infrastructures. We are set to evaluate the expressiveness and applicability of the Container-View model as envisaged.

REFERENCES
[1] M. Egenhofer and A. Rodríguez. Relation algebras over containers and surfaces: An ontological study of a room space.
[2] N. Friedman, D. Geiger, and M. Goldszmidt. Bayesian network classifiers. Machine Learning, 29(2-3):131–163, 1997.
[3] P. Korpipää, J. Mäntyjärvi, J. Kela, H. Keränen, and E.-J. Malm. Bayesian approach to sensor-based context awareness. Personal and Ubiquitous Computing, 7:113–124, 2003.
[4] C. A. Patterson, R. R. Muntz, and C. M. Pancake. Challenges in location-aware computing. IEEE Pervasive Computing, 2(2):80–89, Apr. 2003.
[5] A. Schmidt, M. Beigl, and H.-W. Gellersen. There is more to context than location. Computers and Graphics, 23(6):893–901, 1999.
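The contains and views relations of the Container-View model can be illustrated with a small sketch. The class and method names below are ours, and every container is treated as transparent for simplicity; the paper itself proposes a lattice-based formal model, not this code:

```python
class Entity:
    """An entity (data object or device) that can be physically enclosed."""
    def __init__(self, name):
        self.name = name
        self.parent = None        # the container that directly encloses self
        self.children = set()

    def put_into(self, container):
        """Establish physical containment: container encloses self."""
        self.parent = container
        container.children.add(self)

    def ancestors(self):
        """All containers that transitively enclose this entity."""
        node, result = self.parent, []
        while node is not None:
            result.append(node)
            node = node.parent
        return result

    def contains(self, other):
        """The irreflexive, transitive 'contains' relation."""
        return self is not other and self in other.ancestors()


class View:
    """One-hop reachability within a communications channel (e.g. IrDA)."""
    def __init__(self, generator, view_type):
        self.generator = generator   # the device that generates the view
        self.view_type = view_type
        self.members = set()         # entities directly within the view

    def within_view(self, entity):
        # Transparent containers expose their contents to the view, so an
        # entity is in the view if it, or any enclosing container, is a member.
        return entity in self.members or any(
            c in self.members for c in entity.ancestors())


# Example: a data object inside a PDA inside a car; the PDA is in an
# IrDA view generated by a nearby phone.
car, pda, doc = Entity("car"), Entity("PDA"), Entity("document")
pda.put_into(car)
doc.put_into(pda)
phone_view = View(Entity("phone"), "IrDA")
phone_view.members.add(pda)
print(car.contains(doc), phone_view.within_view(doc))   # prints: True True
```

Note the per se inference from Section 1.3.0.3 holds here in contrapositive form: the car is not within the view, and nothing makes its non-member contents visible except the explicitly listed PDA.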
ContextMap: Modeling Scenes of the Real World for
Context-Aware Computing
Yang Li, Jason I. Hong, James A. Landay
Group for User Interface Research, Computer Science Division
University of California, Berkeley
Berkeley, CA 94720-1776 USA
{yangli, jasonh, landay}@cs.berkeley.edu
ABSTRACT
We present a scenegraph-based schema, the ContextMap, to model context information. Locations with hierarchical relations are the skeleton of the ContextMap, where nodes of people, objects and activities can be attached. Context information can be collected by traversing the ContextMap. The ContextMap provides a uniform method to represent physical and social semantics for context-aware computing. In addition, context ambiguity can be modeled as well.

Keywords
Context-aware computing, scenegraph, context ambiguity

INTRODUCTION
Context is the glue that links the real world with the virtual world. Context is "any information that can be used to characterize a situation" [4]. We call the situation a scene of the real world. The information can be the temperature of a region. It can also be the activity of a person, e.g., reading a book, or the activity of a group, e.g., having a meeting.

Both the physical and the social semantics of a situation are required by context-aware computing. Social semantics are embodied through physical activities, and physical activities can be fully understood only under certain social circumstances. For example, we can see "running" as a status of a person at a physical level. It can mean "catching a bus" at a social level. Activity theory [1] sees an activity as functionally subordinated hierarchical levels, i.e., activities, actions, and operations. Each action performed by a human being has not only intentional aspects but also operational aspects. This reveals how social activities can be performed through physical actions and objects.

Context information itself is recursively related. For example, linguistically, the context of a word is the sentence, which in turn gets its context from the paragraph. The Berkeley campus has the climate context of the City of Berkeley, which inherits it from the San Francisco Bay Area of California based on location containment.

To leverage the abundant interaction semantics of context, it is necessary to have an efficient way to model the context. We devised the ContextMap (see Figure 1) to model the situation of the real world for context-aware computing as a scenegraph-like structure. The ContextMap provides a consistent way to model context information and addresses the correlation and ambiguity of context data.

Figure 1: An example ContextMap. Rectangles indicate Place nodes. Diamonds stand for Activity nodes. People nodes are represented as ellipses and Object nodes are ellipses in gray.

RELATED WORK
The Active Map [5] provides a basic organization of context that consists of a hierarchy of locations with a containment relation. We employed the location hierarchy as the skeleton of the ContextMap, but we include relations in addition to location containment.

Crowley et al. [2] described context as a network of situations concerning a set of roles and relations. Roles may be "played" by one or more entities. Dey formulated three kinds of entities for context-aware computing: people, places and things (or objects) [4]. We model these roles and entities as nodes and edges of a ContextMap.

The scenegraph [6] has been widely used in computer graphics. Its dynamic propagation of graphical attributes greatly simplifies the representation of a scene, and it has proven an efficient way to model complicated scenes. To model scenes of the real world, we extended the scenegraph to deal with the context semantics of the real world.

INTRINSIC AND RELATIONAL CONTEXT ATTRIBUTES
The context information of an entity can be classified into intrinsic and relational attributes. Intrinsic attributes of an entity can be described without referring to others, e.g., the identity of a person can be his name. A person's status can be his age or health condition. However, relational attributes of an entity can only be specified by its relations with other entities. For example, the position of an entity can usually be described as a relative spatial relation with other entities, e.g., near or far and in or out.
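As an illustration of how intrinsic attributes (stored in nodes) and relational attributes (stored as labeled, directed edges) combine, here is a toy sketch in the spirit of the ContextMap, with names of our own choosing rather than the authors' implementation:

```python
# Toy ContextMap-like structure: nodes hold intrinsic attributes, directed
# labeled edges hold relational attributes, and the context reachable from
# a node is gathered by a depth-first traversal (illustrative only).

class Node:
    def __init__(self, name, **intrinsic):
        self.name = name
        self.intrinsic = intrinsic   # e.g. identity, age, health condition
        self.edges = []              # outgoing (relation, target) pairs

    def relate(self, relation, target):
        self.edges.append((relation, target))

def collect_context(node, visited=None):
    """Depth-first traversal yielding (source, relation, target) triples."""
    if visited is None:
        visited = set()
    if node.name in visited:
        return []
    visited.add(node.name)
    triples = []
    for relation, target in node.edges:
        triples.append((node.name, relation, target.name))
        triples.extend(collect_context(target, visited))
    return triples

# A fragment of Figure 1's location skeleton with one person attached:
campus = Node("UC Berkeley Campus")
soda = Node("Soda Hall")
room = Node("Soda 523")
alice = Node("Alice", health_condition=0.8)
campus.relate("contain", soda)
soda.relate("contain", room)
room.relate("contain", alice)
print(collect_context(campus))
```

Traversing from the campus node yields the three contain triples down to Alice; an application asking "who is on campus?" needs only this traversal, not a flat database query per room.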
NODES AND EDGES OF A CONTEXTMAP
Like a traditional scenegraph, a ContextMap is a directed acyclic graph (see Figure 1), and the context attributes are collected by a depth-first traversal. An entity, i.e., a place, a person or an object, is represented as a node of the graph. Each node maintains the intrinsic attributes of the entity that it represents. Relational attributes of an entity are represented by edges directly or indirectly linked to its node. So the context of an entity is represented not only by the attributes in its node but also by the node's position in the entire ContextMap. A ContextMap is a view of the real world that can be shared by multiple applications.

Another kind of node in a ContextMap is the Activity node, which represents the social semantics of an entity or a group of entities, e.g., reading a book or having a seminar. It can be applied to a sub-graph of a ContextMap like the dynamic propagation of graphical attributes in a scenegraph. It means that the activity is conducted by people with certain tools (physical objects) at a certain location. For example, in Figure 1, "UI Research" happens in Soda 523, and it indirectly indicates the activity of Bob and Alice and the tools they are using to achieve this activity.

Place nodes stand for entities that are places or sites. They can refer to a large region ("California") or a small area ("close to whiteboard"). The containment relation between Place nodes is stable and hierarchically structured, e.g., the UC Berkeley campus contains Soda Hall and will always do so. Place nodes and their containment relations constitute the skeleton of a ContextMap, which can be enriched by nodes describing people, physical objects, and activities. A ContextMap can be built by establishing a static Place hierarchy first. Directional edges from Place nodes can indicate contain relations for physical containment and happen relations for locations where some events, i.e., social activities or roles, happen. For example, "education & research" happens on the "UC Berkeley Campus". An Object node is for a physical object, e.g., a pen, which can have directional contain edges to its sub-components. Contain relations are transitive.

A Person node represents a person entity. Directional edges from a Person node can indicate conduct or use relations, specifying that the person is conducting an action or using a physical object (tool), respectively. A use relation can transfer the semantics of a contain relation. For example, the fact that "Bob" is in "Soda 523" and he is using the "pen" indicates that the "pen" is also in "Soda 523".

A node can be referenced by multiple nodes. For example, in Figure 1, both "Bob" and "Alice" are using the "whiteboard". The multi-reference to a node can also be used to model context ambiguity. For example, "Alice" could be either in "Soda Hall" or "Cory Hall" in Figure 1.

Intrinsic attributes of a node can be tagged with a timestamp to indicate when they were updated, or with a time span to indicate their validity. Moreover, a directional edge can be tagged to indicate the valid period of a relation.

MODELING CONTEXT AMBIGUITY
In reality, both sensed and interpreted context is often ambiguous [3]. The ContextMap models context ambiguity by tagging edges and the intrinsic attributes of nodes with confidence values. For example, the intrinsic attribute "health condition" of "Alice" could be 0.8. In Figure 1, the confidence of "Alice" in "Soda 523" is 0.9 and in "Cory Hall" it is 0.3. Edges without labelled values have the default confidence value 1.0.

Here we describe a simple method to calculate the confidence of transitive relations: given x →(α) y and y →(β) z, then x →(αβ) z; that is, confidences multiply along a path.

For example, the confidence of "Alice" using "computer" is 0.9. Since the confidence of Alice in Soda 523 is also 0.9, the confidence of "computer" in "Soda 523" is 0.81.

However, the confidence of "whiteboard" in "Soda 523" is the average of the confidences of all paths from "Soda 523" to "whiteboard". It is 0.95, based on [Soda 523, Bob, whiteboard] = 1 and [Soda 523, Alice, whiteboard] = 0.9.

CONCLUSION AND FUTURE WORK
The ContextMap enables an efficient representation of complicated situations, particularly for relational context, by using dynamic attribute propagation and transitive relations. Both social and physical semantics of context can be represented in a consistent manner. Attributes and relations of nodes can be updated based on sensed information, e.g., a person's location and its confidence, or manually, e.g., an Activity node can be manually added or manipulated beforehand or at runtime. ContextMaps will be provided as an infrastructure service to applications. We are continuing to refine the representation and evolution mechanisms of the ContextMap, and to enable easy construction of and access to ContextMaps.

REFERENCES
1. Bertelsen, O.W. and Bodker, S. Activity Theory. In HCI Models, Theories, and Frameworks, ed. Carroll, J.M. Morgan Kaufmann Publishers, 2003, pp. 291-324.
2. Crowley, J.L., Coutaz, J., Rey, G. and Reignier, P. Perceptual Components for Context Aware Computing. Proceedings of UbiComp 2002, Sweden.
3. Dey, A.K., Mankoff, J., Abowd, G.D. and Carter, S. Distributed mediation of ambiguous context in aware environments. Proceedings of UIST 2002, pp. 121-130.
4. Dey, A.K., Salber, D. and Abowd, G.D. A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications. Human-Computer Interaction, 2001, 16(2-4), pp. 97-166.
5. Schilit, B. and Theimer, M. Disseminating Active Map Information to Mobile Hosts. IEEE Network, Vol. 8, pp. 22-32, 1994.
6. Strauss, P.S. and Carey, R. An Object-Oriented 3D Graphics Toolkit. ACM Computer Graphics, 1992, 26(2).
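The multiply-along-a-path, average-over-paths rule for confidences can be sketched in a few lines (function names are ours, for illustration only):

```python
def path_confidence(confidences):
    """Confidence of a transitive relation: multiply along the path."""
    result = 1.0
    for c in confidences:
        result *= c
    return result

def relation_confidence(paths):
    """Confidence between two nodes: average over all connecting paths,
    each path given as its list of edge confidences."""
    return sum(path_confidence(p) for p in paths) / len(paths)

# Worked examples from the text: Alice is in Soda 523 with confidence 0.9
# and uses the computer with confidence 0.9, so the computer is in
# Soda 523 with confidence 0.9 * 0.9 ≈ 0.81.
print(path_confidence([0.9, 0.9]))

# Two paths lead from Soda 523 to the whiteboard: via Bob (1.0 * 1.0)
# and via Alice (0.9 * 1.0); their average is 0.95.
print(relation_confidence([[1.0, 1.0], [0.9, 1.0]]))
```

Note that averaging over paths is not monotone in the way multiplication is: adding a low-confidence path to an already certain relation lowers the combined value, which is exactly the behavior the whiteboard example exhibits.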
Service Platform for Exchanging Context Information
Daisuke Morikawa Masaru Honjo Akira Yamaguchi Masayoshi Ohashi
KDDI R&D Laboratories Inc.
2-1-15 Ohara Kamifukuoka, Saitama 356-8502 JAPAN
+81 49 278 7883
{morikawa, honjo, yama, ohashi}@kddilabs.jp
ABSTRACT
This paper describes how to capture a user's activities as a form of context information, based on a complementary relation between the user's activity and objects near the user. We propose a platform that stores users' context information, allows service providers to access that information, and provides various services to the users. We also present a prototype system of the messaging service for exchanging users' context information.

Keywords
context, service platform, ID-tag, privacy control

INTRODUCTION
Recent mobile communication systems with GPS enable mobile users to be provided with various location-based information (e.g., nearby shop information and related navigation maps). In order to provide more suitable information based on the user's demands, it is necessary to collect not only location information but also various information regarding the user's context, and it is also necessary to utilize context information for personalized service provisioning. In this paper, we first describe how to capture a user's activities as a form of context information and then consider the requirements for the context-aware service platform. Finally, we present a prototype of our proposed platform.

A CASE OF CONTEXT INFORMATION: USER ACTIVITY
A number of RFID tag systems have recently been proposed (e.g. [1]). In these systems, passive RFID tags are attached to various objects in the physical world, and information corresponding to each object (i.e. an electronic ID) is managed on a networked server in the virtual world.

In the physical world, a mobile user interacts with various objects. A user may rest by sitting on the sofa and taking refreshment. The act of resting involves interacting with nearby objects such as a sofa, a table, a cup, etc. This means that the relation between activities and objects is complementary, in that a user usually does not perform any activity without interaction with surrounding objects. We make the following assumptions in order to determine the user's activities.
• The above RFID tag systems are deployed, and mobile users have a networked mobile terminal equipped with an RFID tag reader in order to identify interacting objects.
• The candidates of activity corresponding to each object are defined in advance. For simplicity, in this paper, we assume that one object corresponds to one activity. Examples of this correspondence are shown in Table 1.

In our prototype implementation, user activity is detected through the following procedure. First, an ID attached to an object is detected via a tag reader. Next, the property of the object is determined based on the detected ID, and the candidates of activity corresponding to the determined object are inquired. Finally, the appropriate activity context is selected.

Table 1. Examples of object-to-activity relations
Object        | Corresponding activity
Sofa          | Rest / Meeting
Dining table  | Breakfast / Lunch / Dinner (depends on time)

SERVICE PLATFORM ARCHITECTURE
Service providers may have their own user context such that they can provide services to specific users, but the amount of context detected by each service provider would be limited. This remains an unsolved issue that service providers are facing. This paper proposes a service platform in which aggregated user context information (which is not limited to the user's activities described in this paper) is open to service providers under an appropriate access control, such that various context-aware services can be provided. The proposed service platform is shown in Fig. 1, and the following functions are defined.
• The Context Registrar (CR) has functions for capturing the user's context information (Label 1 in Fig. 1) and registering it to the Context Manager (CM) together with an open level indicator (OLI) (Label 2 in Fig. 1). Every time a user detects an ID attached to an object, activity context related to the user is registered and accumulated. The CR also has a function for setting access level indicators (ALIs) for the Context-based Service Provider (SP) and Context User (CU) (Label 3 in Fig. 1).
• The Context Manager (CM) has a function for storing the context information as a context repository. This context information is exclusively generated and registered for each user. The CM also has a function for executing the access control based on the relation between the OLI setting of the target context information
and the ALI settings of the SP and/or CU. The CM should be under the complete control of the CR and should maintain independence from other functions for the purpose of the user's privacy protection.

Figure 1. Schematic illustration of a service platform for exchanging users' context information.

• Context-based Service Providers (SPs), which have high potential for providing various context-aware services based on context information, have a function for accessing context information (Label 4 in Fig. 1).
• The Context User (CU) has a function for accessing context information, and context-based services are then provided through SPs (Label 5 in Fig. 1). It is necessary to clearly distinguish CR and CU because this platform aims at opening the context information to the CU via SPs.

In our prototype implementation, the functions of CR and CU are implemented on a user's mobile terminal, the function of CM is implemented on a user's personal server, and the function of SP is implemented as a network server.

ACCESS CONTROL TO CONTEXT INFORMATION
The CR_A sets the following values:
- OLI_A(I), which represents the OLI of context information I set by CR_A, and is an integer in the range from 1 to n. The higher the value, the more openly the information may be shared.
- ALI_A(SP1) and ALI_A(B), which represent the ALIs for SP1 and CU_B set by CR_A, and are integers in the range from 1 to n, respectively. Users with smaller values are allowed much more context information.

When the CM receives a request message that CU_B requires CR_A's context information via SP1, and the condition described in Equation 1 is satisfied, then CU_B can access the target context information via SP1.

OLI_A(I) ≥ max{ ALI_A(SP1), ALI_A(B) }    (1)

In addition to this access control as a minimum condition, the certification of each user and the authorization of context information access are also required.

CONTEXT EXCHANGING SERVICE
We designed a messaging service for exchanging users' activities with each other, which is provided by an SP. Mobile terminal A (MT_A) has the functions of both registering CR_A's context information and requiring CU_B's and CU_C's context information. The functions equipped on MT_B and MT_C are determined in the same manner as in MT_A. An example sequence of exchanging users' context information is presented in Fig. 2. The context exchange service is triggered by the CU and the CR.

Figure 2. The sequence of the context exchange service.

RELATED WORKS
Ubiquitous applications based on passive RFID tags have already been studied [2]. Various kinds of context platforms have also been studied [3], and it has been pointed out that privacy of context remains an issue [4].

SUMMARY
Based on the requirements of a service-provisioning platform for exchanging users' context information with each other, we developed a prototype and examined the configuration of the system. Details of the prototype system will be presented in the poster session.

REFERENCES
1. URL: http://www.autoidcenter.org/main.asp
2. Roy Want et al., "Bridging Physical and Virtual Worlds with Electronic Tags," Proc. of CHI 99, 1999.
3. A. K. Dey et al., "A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications," Human-Computer Interaction, Special Issue: Context-aware Computing, Vol. 16, 2001.
4. Mark Ackerman et al., "Privacy in context," Human-Computer Interaction, Special Issue: Context-aware Computing, Vol. 16, 2001.
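The access check of Equation 1 is a single comparison; a minimal sketch (function name and example levels are ours, not the authors' implementation):

```python
def may_access(oli, *alis):
    """Equation 1: access is granted iff the open level indicator of the
    context item is at least the largest access level indicator among
    the requesting service provider and context user."""
    return oli >= max(alis)

# CR_A opened context item I at OLI 3; SP1 has ALI 2 and CU_B has ALI 3:
print(may_access(3, 2, 3))   # True:  3 >= max(2, 3)
print(may_access(2, 2, 3))   # False: 2 <  max(2, 3)
```

Because the check takes the maximum over all ALIs on the request path, a low-privilege party anywhere in the chain (either the SP or the CU) is enough to deny access, which matches the paper's intent that the CR's settings bound what every intermediary may see.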
The State Predictor Method for Context Prediction
Jan Petzold, Faruk Bagci, Wolfgang Trumler, and Theo Ungerer
University of Augsburg
Institute of Computer Science
Eichleitnerstr. 30, 86159 Augsburg, Germany
{Petzold, Bagci, Trumler, Ungerer}@Informatik.Uni-Augsburg.DE
ABSTRACT
Ubiquitous systems use context information to adapt appliance behavior to human needs. Even more convenience is reached if the appliance foresees the user's desires and acts proactively. This paper focuses on context prediction based on previous behavior patterns. We present the newly devised state predictor method, which is motivated by the branch prediction techniques of current high-performance microprocessors. We exemplify the method by investigating two state predictors.

Keywords
context awareness, context prediction, location prediction, proactive

1. INTRODUCTION
Ubiquitous systems strive to adapt to users' needs by utilizing information about the current context in which a user's appliance works. A new quality of ubiquitous systems may be reached if context awareness is enhanced by predictions of future contexts based on current and previous context information [1]. Such a prediction enables the system to proactively initiate actions that enhance the convenience of the user or lead to an improved overall system.

Humans are creatures of habit. They typically act in a certain habitual pattern; however, they sometimes interrupt their behavior pattern, and they sometimes change the pattern completely. Our aim is to relieve people of actions that are done habitually, without dictating a person's actions. The system should learn habits automatically and revise its assumptions if a habit changes. The predictor information should therefore be based on previous behavior patterns and applied to speculate on the future behavior of a person. If the speculation fails, the failure must be recognized, the speculatively initiated actions withdrawn, and the predictor updated to improve future prediction accuracy.

To predict a future situation, learning techniques such as Markov chains, Bayesian networks and neural networks are obvious candidates. In our work we chose a completely different approach: branch prediction techniques [3], as known from high-performance processors, are transferred and adapted to the domain of context prediction. We investigated several so-called state predictors [2], of which we chose the following two for this paper: (1) the one-level two-state predictor and (2) a local two-level context predictor with 2-state predictors in the second level.

2. THE ONE-LEVEL TWO-STATE PREDICTOR
The 2-state context predictor is a modification of the two-bit branch predictor with saturation counter. The first entry denotes the next context. The second entry is used for changing between the strong and weak states. The context stored in the first entry is thus always predicted independently of the second entry, which only influences training and retraining speed. The denotation "2-state context predictor" stems from the provision of two states for each predicted context.

The retraining of a 2-state predictor is slowed down such that a one-time change of habit does not cause an effect. In the case of two successive deviations from the habit, the system notes the change. If more than two deviations from a habit should be required before retraining, the number of states must be increased, leading to a k-state context predictor.

3. THE TWO-LEVEL TWO-STATE PREDICTOR
Two-level context predictors regard a sequence of the last contexts observed for a person in order to predict the next context. This sequence can be either global or local to a specific context. The previous contexts are stored in a kind of shift register that constitutes the first level of the predictor. If a new context occurs, all entries of the register are shifted to the left and the new context is filled in from the right. The length of the shift register is called the order, which denotes the number of previous contexts that influence the prediction. The second level consists of a pattern history table that stores all possible patterns of context sequences in different entries. Each entry additionally holds a 2-state predictor entry, which in fact predicts the next context. The pattern in the shift register is used to select an entry in the pattern history table.

4. EXAMPLE: LOCATION PREDICTION
Our sample application predicts the next location of people moving within an office building. We consider the floor plan in figure 1: C (corridor), S (secretariat), B (office of the boss), and E (office of the employee).

Figure 1: Floor plan of corridor, boss' office, secretariat, and employee's office

Figure 2 shows the corresponding prediction graph of the 2-state predictor for the corridor. Similar prediction graphs are necessary for the other rooms. The denotations of the states consist of the ID of the next room to be predicted and a counter. If a person enters the boss's office B from the corridor for the first time, the initial state B0 is set. If the person reenters the corridor, the office of the boss B is predicted as the next location. If the prediction proves correct, the predictor switches into the strong state B1. Thus, next time the office of the boss B will be predicted again. If the person interrupts her habit once by entering a room different from the boss' office, the state is set back from B1 to B0; thus the boss' office is still predicted. If the person now goes from the corridor into the secretariat (resp. the employee's office), the predictor switches into the state S0 (resp. E0), independently of the room entered from the corridor before, and thus predicts the secretariat (resp. the employee's office) as next.

Figure 2: Prediction graph of the two-state predictor for the corridor C (states B0, B1, S0, S1, E0 and E1, with transitions labeled by the room entered next)

We assume an order of 3. For the local two-level two-state predictor for the corridor with 3 neighboring rooms there are 3^3 = 27 patterns and therefore 27 entries in the pattern history table. Figure 3 shows this case assuming the room sequence E S B E B S E S E B S E B S. We consider the pattern E B S. After its first occurrence no room was predicted, but E0 was set as the initial state of the two-state predictor. After the second occurrence the state E0 was changed to E1. Now the prediction is that the employee's office E will be entered next.

    pattern    two-state
    E B S      E1
    ...        ...
    B S E      B0
    ...        ...

Figure 3: Local two-level two-state predictor for the corridor C

The two-level context predictors can be extended using a method motivated by Prediction by Partial Matching (PPM) from the area of data compression. Here a maximum order m is applied in the first stage instead of the fixed order. Then, starting with this maximum order m, a pattern is searched according to the last m rooms. If no pattern of length m is found, a pattern of length m-1 is looked for, i.e. the last m-1 rooms. This process can be continued until order 1 is reached.

5. EVALUATION
Evaluation is performed by simulating the predictors with the behavior patterns of people walking through a building as workload. The evaluation of the implemented predictors used synthetic movement sequences because of the lack of real movement patterns. The usage of various synthetic patterns led to a good differentiation between the predictors. The results are summarized as follows:

The simulations show that the one-level two-state predictor reaches a rate of 42.6% to 79.4% correct predictions, whereas the two-level two-state predictor reached even higher prediction rates of 55.4% to 98.2%. The two-level two-state predictors are better suited for complex patterns, but the advantage of the one-level two-state predictor is its very fast training and retraining speed (for more details see [2]).

6. CONCLUSION
We propose two state context predictors suitable for appliances with limited resources, which are motivated by branch prediction techniques and evaluated using persons' movement patterns in a building.

To avoid misguiding persons or systems with wrong predictions, the confidence of the predictions should be taken into account, meaning that a prediction should only be made if it reaches a high confidence level and be suppressed otherwise.

Our future work concerns the construction of new predictors and the evaluation of these and of the described predictors with real movement sequences. A person tracking system, currently being built up at the University of Augsburg, will generate such real movement patterns. Time is another important aspect in learning human habits. Therefore the predictors shall be enhanced to be time-dependent.

REFERENCES
[1] Michael C. Mozer. The Neural Network House: An Environment that Adapts to its Inhabitants. In AAAI Spring Symposium on Intelligent Environments, pages 110-114, Menlo Park, CA, 1998.
[2] Jan Petzold, Faruk Bagci, Wolfgang Trumler, and Theo Ungerer. Context Prediction Based on Branch Prediction Methods. Technical Report 2003-14, Institute of Computer Science, University of Augsburg, Germany, July 2003. http://www.informatik.uni-augsburg.de/skripts/techreports/
[3] T.-Y. Yeh and Y. N. Patt. A Comparison of Dynamic Branch Predictors that use Two Levels of Branch History. In Proceedings of the 20th Annual International Symposium on Computer Architecture (ISCA-20), pages 257-266, San Diego, CA, May 1993.
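As an illustration of the two predictors described above, the following Python sketch replays the paper's room sequence for the corridor with an order-3 local two-level predictor. The class and method names are our own, and this is a minimal reading of Sections 2 and 3, not the authors' implementation.

```python
class TwoStatePredictor:
    """One-level 2-state predictor: a predicted context plus a strong/weak
    flag, mirroring a two-bit branch predictor with saturation counter."""
    def __init__(self):
        self.context = None   # first entry: the context that is predicted
        self.strong = False   # second entry: strong (True) or weak (False)

    def predict(self):
        return self.context

    def update(self, actual):
        if actual == self.context:
            self.strong = True                 # correct: enter/stay in strong state
        elif self.strong:
            self.strong = False                # first deviation: weaken, keep prediction
        else:
            self.context, self.strong = actual, False  # second deviation: retrain


class TwoLevelPredictor:
    """Two-level predictor: a shift register of the last `order` contexts
    selects a 2-state predictor entry in a pattern history table."""
    def __init__(self, order=3):
        self.order = order
        self.history = []                      # first level: shift register
        self.table = {}                        # second level: pattern history table

    def predict(self):
        if len(self.history) < self.order:
            return None
        entry = self.table.get(tuple(self.history))
        return entry.predict() if entry else None

    def update(self, actual):
        if len(self.history) == self.order:
            entry = self.table.setdefault(tuple(self.history), TwoStatePredictor())
            if entry.context is None:
                entry.context = actual         # first occurrence: set initial state
            else:
                entry.update(actual)
        self.history = (self.history + [actual])[-self.order:]


# Replay the room sequence from Figure 3 for the corridor:
p = TwoLevelPredictor(order=3)
for room in "E S B E B S E S E B S E B S".split():
    p.update(room)
# The final history is (E, B, S); its table entry has reached state E1.
print(p.predict())  # prints: E
```

Tracing the run by hand matches the paper's table: pattern E B S ends in state E1, and pattern B S E, having deviated once from its weak state S0, has been retrained to B0.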
Collaborative Capturing of Interactions by Multiple Sensors

ABSTRACT

KEYWORDS

INTRODUCTION

CAPTURING INTERACTIONS BY MULTIPLE SENSORS

Figure 1: Setup of the ubiquitous sensor room (ubiquitous sensors: video camera, microphone, IR tracker; LED tags attached to objects; humanoid robot; PC).

INTERPRETING INTERACTIONS

Figure 2: Interaction primitives (staying, coexistence, gazing at an object, joint attention, attention focus as a socially important event, conversation; captured via the IR tracker's view of LED tags).

VIDEO SUMMARY

Figure 3: Automated video summarization (summary video of the user's entire visit; list of highlighted scenes during the user's visit; annotations for each scene: time, description, duration; views from overhead camera, partner's camera and self camera).

CONCLUSIONS

ACKNOWLEDGMENTS

REFERENCES
Ubiquity in Diversity – A Network-Centric Approach
Rajiv Chakravorty, Pablo Vidales, Boris Dragovic, Calicrates Policroniades, Leo Patanapongpibul
Cambridge Open Mobile Systems (COMS) Project Initiative
University of Cambridge Computer Laboratory and Engineering Department
William Gates Building, JJ Thomson Avenue
Cambridge CB3 0FD, U.K.
COMS Web: http://www.cl.cam.ac.uk/coms/

Wireless networking has witnessed strong growth recently due to the popularity of WiFi (802.11b-based WLANs) and the world-wide deployment of wide-area wireless networks such as GPRS and 3G. Devices that can connect to multiple networks (e.g., GPRS-WLAN cards) are becoming increasingly affordable, and in future mobile devices such as laptops, PDAs and handhelds will be equipped to connect to multiple different networks. As the environment becomes more diverse and heterogeneous, with a range of networks, devices and services to choose from, a key issue that will need to be addressed is that of heterogeneity. In this poster abstract, we discuss our practical efforts in building a truly ubiquitous environment for secure heterogeneous networking.

Using an experimental testbed that creates a heterogeneous environment, we are investigating the following:

Transparent Mobility with Mobile IPv6. We are exploring how mobile users can transparently move across networks, wired as well as wireless. Here, we are investigating the performance of Mobile IPv6 for wireless networks integration, and schemes that can improve performance during vertical handovers [2].

Mobility Management with Context-Aware Networking. Context can play an important role in heterogeneous environments. Although context has broader dimensions, we are interested in the networking context, one that enables mobile clients to be situation aware so as to efficiently adapt to various events during handoffs within and across different networks, and to other environmental events. We are currently building a mobility agent for such heterogeneous environments that can support adaptive mobility using network context.

Fine-grained Data Adaptation. We are addressing a fundamental issue in data management: how can we efficiently manage data in the presence of heterogeneous wireless links? With links having vastly different characteristics, as typically seen in heterogeneous environments, there is a strong need to assist applications to perform better. We are looking at assisting applications in two different ways: firstly, with a data abstraction where the logical structure and type of data can be explicitly retained in the system (for example a file system) and, secondly, with a richer metadata model. An additional advantage for fine-grained data management comes from the network context information readily available from the mobility agent. This in turn allows for fine-grained data manipulation based on several environmental requirements.

Security in Heterogeneous Spaces. Security issues stem from networking in heterogeneous spaces. These originate from using data models in systems that are present in some form within the device or transmitted through a communication link in the heterogeneous space. However, if the context in the heterogeneous space is known, we can easily identify the relevant security and privacy threats that the data object is exposed to, and then mitigate the identified risks by proactively managing the data object format. The challenge in this context model is to match heterogeneity with device capabilities, quality and confidence levels available from the model, while at the same time tapping the full potential of myriad technologies for sensing the context.

In the Cambridge Open Mobile Systems project [1], we are investigating how we can achieve this vision of secure heterogeneous networking. As a first step, we have already investigated the extent to which Mobile IPv6 can be used to successfully migrate TCP connections during inter-network handovers [2].

We have implemented a loosely-coupled Mobile IPv6 based 3G-GPRS-WLAN-LAN testbed (see figure 1). By using a testbed consisting of the world's two most widely deployed wireless data networks, local-area wireless networks (WLANs) and wide-area wireless (GPRS), we have analysed what happens when multi-mode mobile devices perform vertical handoffs using Mobile IPv6. We have closely examined the handover process itself and its effects on TCP, and have given reasons for its under-performance (see [3]).

Figure 1: A 3G-GPRS-WLAN-LAN Testbed

To understand the performance issues during such inter-network handovers, we characterized a handover process in Mobile IPv6 in two steps: a handoff decision and its execution. Handoff decision is the ability to decide (by the mobile node, the network, or both) when to perform a handoff. After the decision to handoff is taken, the handoff execution process comes into play. Handoff decision and detection steps can overlap, as there are scenarios when the decision process may require more probing of the network (for example, duplicate address detection time).

We have partitioned the handoff (execution) latency into three components: detection, configuration and registration times. We have investigated the extent to which Mobile IPv6 could be used to successfully migrate TCP connections during inter-network handoffs. Using the testbed, we have evaluated the impact layer-3 hard handoffs have on transport protocols such as TCP; a more thorough description is available in the form of a separate technical report [2]. Besides, we have experimentally evaluated schemes that improve vertical handovers: Fast Router Advertisements (RAs), RA Caching and Binding Update simulcasting in Mobile IPv6, smart buffer management using a TCP proxy in GPRS, and soft handovers that improve TCP performance dramatically [2, 3].

Building further on this work, our ongoing research is focused on broadening the concepts of secure and efficient heterogeneous networking under the aegis of the COMS project [1]. As previously discussed, we have already evaluated schemes that improve handover performance, and we are currently focused on exploiting several potential areas for secure heterogeneous mobility: mobility management and networking with context, using feedback information from this context to provide fine-grained adaptation for data, and identifying the threats to this data model.

Other practical applications of the testbed include two potential research areas for mobile networking, Context-Aware Networking using the Sentient Car, and the Mobile Access Router (MAR) [4]. The two research areas are closely knit, and both require a good understanding of mobility in heterogeneous environments.

Sentient Car for Context-Aware Networking. In this project, we are investigating how networking context (situation awareness) based on location, movement direction and speed can be used to make better, informed decisions during inter-network handovers.

Figure 2: Sentient Car for Context-Aware Networking.

Any sophisticated handoff mechanism meant for heterogeneous environments can make use of context-awareness in its implementation. For example, based on the exact position, movement direction and velocity information available to a highly mobile host (e.g., the Sentient Car), a co-located infrastructure proxy can assist host mobility by tracking and accurately predicting when a handoff will occur. This in turn can assist in flow adaptation (e.g., TCP) even before a handoff occurs. To realize the full potential of Context-Aware Networking in highly mobile environments, we will use the Sentient Car, which is situation aware based on its location (using GPS), movement direction and speed. The Sentient Car is an outcome of joint research of different departments of the University of Cambridge.

Mobile Access Router (MAR). MAR [4] is a system consisting of a MAR client, a multimode mobile device used as a mobile access router and connected to different wireless networks simultaneously (e.g., GPRS, 3G, WLAN), which communicates with a MAR server proxy located in the wired infrastructure. The MAR client is a mobile access router to be placed in a car, bus, train etc., and performs bandwidth striping (aggregation) across multiple network interfaces to exploit the distributed spatial diversity available from different wireless access networks. Diversity provides a highly reliable "always-on" wireless communication channel. The MAR project can extend the use of Mobile IPv6 in this environment.

Our poster illustrates several such practical intricacies using a real testbed, and provides a sound description of our ongoing research on secure heterogeneous networking. Please visit our project COMS web-page, http://www.cl.cam.ac.uk/coms/, for further details and information about our ongoing research and papers.

REFERENCES
1. Cambridge Open Mobile System Project. http://www.cl.cam.ac.uk/coms/
2. R. Chakravorty, P. Vidales, L. Patanapongpibul, K. Subramanian, I. Pratt and J. Crowcroft. "On Inter-network Handover Performance using Mobile IPv6". University of Cambridge Computer Laboratory, Technical Report, May 2003. http://www.cl.cam.ac.uk/coms/publications.htm
3. R. Chakravorty, P. Vidales, K. Subramanian, I. Pratt and J. Crowcroft. "Practical Experiences with Wireless Networks Integration using Mobile IPv6". Poster and 2-page extended abstract in ACM MOBICOM 2003, San Diego, October 2003. http://www.cl.cam.ac.uk/coms/publications.htm
4. Rajiv Chakravorty, Ian Pratt and Pablo Rodriguez. "Exploiting Network Diversity in MAR - A Mobile Access Router System". University of Cambridge Computer Laboratory, Technical Report, July 2003. http://www.cl.cam.ac.uk/coms/publications.htm
5. P. Vidales, L. Patanapongpibul, and R. Chakravorty. "Ubiquitous Networking in Heterogeneous Environments", in Proceedings of the 8th IEEE Mobile Multimedia Communications (IEEE MoMuC 2003), October 2003 (to appear). http://www.cl.cam.ac.uk/coms/publications.htm
6. C. Policroniades, R. Chakravorty, P. Vidales. "A Data Repository for Fine-Grained Adaptation in Heterogeneous Environments", in Proceedings of the 3rd ACM Workshop on Data Engineering for Wireless and Mobile Access (ACM MobiDE 2003), San Diego, October 2003. http://www.cl.cam.ac.uk/coms/publications.htm
7. B. Dragovic. "Containment: Knowing your Ubiquitous Systems Limitations". Poster presented at UbiComp 2003, Seattle, USA, October 2003.
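The two-step handover model above, with its three-component execution latency, can be stated minimally in code. This is our own illustration, not the COMS testbed software, and the numbers used are hypothetical placeholders rather than measured values from the report [2].

```python
from dataclasses import dataclass

@dataclass
class HandoffExecution:
    """Execution phase of a Mobile IPv6 handoff, partitioned as in the text."""
    detection_ms: float      # time to detect the new network / access router
    configuration_ms: float  # time to configure on the new network
    registration_ms: float   # time to register the new binding

    @property
    def total_ms(self) -> float:
        # Total layer-3 handoff latency is the sum of the three components.
        return self.detection_ms + self.configuration_ms + self.registration_ms

# Hypothetical example values (not measurements):
h = HandoffExecution(detection_ms=800.0, configuration_ms=350.0, registration_ms=450.0)
print(h.total_ms)  # prints: 1600.0
```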
A Peer-To-Peer Approach for Resolving RFIDs
Christian Decker, Michael Leuchtner, Michael Beigl
TecO, University of Karlsruhe
Vincenz-Priessnitz-Str. 1, 76131 Karlsruhe, Germany
http://www.teco.edu
{cdecker, leuchtner, beigl}@teco.edu

ABSTRACT
We present a system using a Peer-to-Peer network for resolving associations of Radio Frequency Identification (RFID) tagged objects to their virtual presence. A query, which consists of an identification string, is sent to the network and receives the appropriate resolution data. We pay particular attention to the authenticity and security of the exchanged data, in order to prevent tracing of resolution queries. The usage of a Peer-to-Peer network enables a non-authoritarian yet easily managed extension by further resolving services, such that these services do not need to share any information with an authoritative organization. Supply Chain Management (SCM) and Customer Relationship Management (CRM) represent potential application areas.

Keywords
Peer-to-Peer, RFID, Resolving Service, SCM, CRM

INTRODUCTION
In Ubicomp there is ongoing research regarding the unification of the real world with the virtual world, leading to the electronic acquisition of real world activities. Projects like CoolTown [1] have demonstrated the diversity of applications enabled by the transition of real world to virtual presences. CoolTown experimented with beacons, RFID transponders and other small devices that provide a unique identification string. This string was then mapped onto a URL in order to create the association with the virtual presence. The resolving mechanism here was either selected manually or relied on a service similar to a domain name service (DNS). In contrast, we present a Peer-to-Peer (P2P) approach for resolving such associations.

Motivation
In our approach we are using RFID transponders in order to identify objects. The transponders are cheap, small and robust. Moreover, the available memory on the transponders enables storage of additional information apart from the built-in identification string. The usage of a P2P network has particular advantages when compared to other approaches. Unlike centralized resolving services, the P2P approach does not necessitate the sharing of any information about a virtual presence with the network. DNS-like or tree-based resolving services typically require centralized knowledge about object-virtual presence associations, because the root node of the tree has to know all associations in order to perform a successful resolution. In a P2P approach no single authority can trace all resolution queries. Together with strong encryption of queries and their responses, this provides anonymity and security. Furthermore, P2P networks allow non-authoritarian extension. The information offered by a participant in this network is not restricted to a particular format.

Requirements
The P2P network for resolving RFIDs consists of enquirers, resolving services ("resolvers") and an intermediate network directing queries and responses to the respective parties. The enquirer and resolver do not talk directly to each other. Their communication is performed via multiple, intermediate peer systems. On a cautionary note, as neither party imposes control over the P2P network, their communication is susceptible to attacks like man-in-the-middle [2]. This implies that queries and responses have to be encrypted and authenticated in order to prohibit unwarranted disclosure of the content of the communication and to validate packet origin.

IMPLEMENTATION
We established a P2P network on several computers in our department using the JXTA [3] protocol set. An RFID reader for I-Code transponders was connected via a serial line to the enquirer. Two resolving services were then included on the network. The setup is summarized in figure 1.

Figure 1: P2P Setup with Enquirer and Resolvers (enquirer with attached RFID reader, P2P network, two resolving services)

When an object with an attached RFID transponder was read, the enquirer queried the network with the identification string, and the resolving service replied with extensive information regarding the virtual presence of this object. As a consequence of the requirements we used GnuPG [4], a freely available tool for secure communication using an asymmetric public-key algorithm. We consider communication authenticated and secure when the enquirer holds a valid public key for each resolving service. On the other hand, a resolving service must also possess the appropriate public key of the enquirer. Public and private keys were generated beforehand and installed on the respective computers. The key lengths were set to 1024 bits, providing strong encryption. The resolving mechanism works as follows: when a transponder is read, it provides a fixed identification string of 8 bytes and a service identification string of 44 bytes from its memory. The enquirer uses the service identification to query the resolver. The network replies with peer advertisements matching the service identification. At this point the authenticity of the resolving service has not yet been proven. The enquirer therefore connects to all advertised peers. A message Mq containing a randomly chosen session ID, the service identification and the RFID from the transponder is encrypted with the public key of the resolving service, signed using the enquirer's private key and then sent to each connected peer. A resolving service can now verify the authenticity of the message using the public key of the enquirer and decrypt the message using its private key. The query request can then be fulfilled by the resolving service. A message Mr containing the received session ID, the service description and the response data is then encrypted, signed and sent back to the enquirer, which can now prove the authenticity of the resolving service. Figure 2 summarizes the resolving mechanism.

Figure 2: Resolving Mechanism (enquirer to network: query(service); network to enquirer: reply(peers); enquirer to resolver: sigE(pkRS(Mq)); resolver to enquirer: sigRS(pkE(Mr)))

Our tests showed an average response time of six seconds for a query, mainly caused by the encryption algorithm and the delays while waiting for replies of peer advertisements.

DISCUSSION AND APPLICATIONS
Apart from strengths like anonymity, authenticity and security, there are also weaknesses. The exchange of the public keys is an overhead during protocol initialization, making the setup of new resolving services and enquirers inconvenient. An initial direct and secure connection between enquirer and resolving service can be applied. Furthermore, the management of possibly several thousand keys on a machine requires a large effort to secure the enquirers and resolving services. There are also performance issues: the signature of every message arriving at the resolving service must be checked against each known enquirer, which causes a huge load when the network scales up. Advanced features like the group creation implemented in JXTA might be helpful to balance the load. On the application side we see a huge potential when manufacturers can electronically trace their items. Applications in the field of SCM and CRM systems might benefit from the ubiquity of extensive information about items, which becomes easily and securely accessible by our approach. The major strengths of the P2P approach are the non-authoritative extensibility, achieved by just adding another resolving service or enquirer, and the anonymity. A manufacturer providing a resolving service does not need to share any information with an authoritative organization, and can use his own identification scheme for his items. Anonymity guarantees that queries for item identifications are not traceable by others. Furthermore, the asymmetric encryption ensures authenticity and protects the exchanged data. The control of information is completely on the manufacturer's side. Therefore we also see an application area in workflow management systems controlling processes interwoven between various manufacturers.

RELATED WORK
The Auto-ID Center [5] aims to create standards for an "Internet of things". Identification of objects is based on RFID transponders. The resolving service uses a DNS-like tree-based system called Object Naming Service (ONS), returning a resource address for extensive information about an object. With CueCat [6], users could scan an item's barcode, which was sent encrypted over the Internet to CueCat's manufacturer, returning the URL of an appropriate website about the item. The encryption was cracked and it was found that the manufacturer collected personal data from each scanner device. In research on security in P2P networks, reputation-based approaches and protocols like XREP [2] were developed to handle various attacks. However, reputations need to be shared, and as in our scenario enquirers do not share information, this method cannot be applied here.

CONCLUSION AND FUTURE WORK
We presented a system design and its implementation for resolving RFIDs using a P2P network where queries and responses are encrypted and signed. This approach is marked by anonymity, security and non-traceability of queries and responses. Furthermore, it enables easy ad hoc and non-authoritative extension and redundancy. Ubicomp applications benefit from this system as it provides a middleware for resolving associations between real-world objects and their virtual presence. Future investigations will look into group creation for performance and redundancy reasons, and into possibilities of using this system as a generic resolving mechanism.

REFERENCES
1. Kindberg T. et al. (2000). People, Places, Things: Web Presence for the Real World. WMCSA 2000, p 19.
2. Damiani E. et al. A reputation-based approach for choosing reliable resources in peer-to-peer networks. ACM CCS 2002, 207-216.
3. Project JXTA. http://www.jxta.org [accessed: 7/10/2003]
4. GNU Privacy Guard (GnuPG). http://www.gnupg.org [accessed: 7/10/2003]
5. Auto-ID Center. http://www.autoidcenter.com [accessed: 7/10/2003]
6. CueCat. http://www.cuecat.com [accessed: 7/10/2003]
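The Mq/Mr exchange described above can be sketched as follows. This is our own illustration of the message structure and session-ID check only: for a self-contained example we substitute a symmetric HMAC "signature" from the Python standard library for GnuPG's asymmetric encrypt-and-sign, and all keys, identifiers and the resolved URL are made-up demo values.

```python
import hashlib
import hmac
import json
import secrets

# Demo keys standing in for the GnuPG key pairs of the two parties.
ENQUIRER_KEY = b"enquirer-demo-key"
RESOLVER_KEY = b"resolver-demo-key"

def sign(key: bytes, payload: dict) -> dict:
    """Serialize a payload and attach an HMAC tag (stand-in for a signature)."""
    body = json.dumps(payload, sort_keys=True)
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": tag}

def verify(key: bytes, message: dict) -> dict:
    """Check the tag and return the payload; reject tampered messages."""
    expected = hmac.new(key, message["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["sig"]):
        raise ValueError("signature check failed")
    return json.loads(message["body"])

# Enquirer side: build Mq with a random session ID, service ID and RFID.
session_id = secrets.token_hex(8)
mq = sign(ENQUIRER_KEY, {"session": session_id,
                         "service": "demo-service-id",   # 44-byte string in the paper
                         "rfid": "0011223344556677"})    # 8-byte ID in the paper

# Resolver side: verify Mq, resolve the ID, answer with Mr echoing the session ID.
query = verify(ENQUIRER_KEY, mq)
mr = sign(RESOLVER_KEY, {"session": query["session"],
                         "service": query["service"],
                         "data": "http://example.org/presence-of-object"})

# Enquirer side: verify Mr and check that it answers the pending session.
response = verify(RESOLVER_KEY, mr)
assert response["session"] == session_id
print(response["data"])
```

In the paper's actual setup the resolver additionally decrypts Mq with its private key and the enquirer verifies Mr against the resolver's public key; the flow of session ID, service identification and response data is the same.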
Single Base-station 3D Positioning Method using Ultrasonic Reflections

Esko Dijk 1,2, Kees van Berkel 1,2, Ronald Aarts 2, Evert van Loenen 2

1 Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
Phone: +31-40-2742256, [email protected]
2 Philips Research Laboratories Eindhoven, Prof. Holstlaan 4, 5656 AA Eindhoven, The Netherlands
[email protected]

ABSTRACT
In context awareness applications the locations of people, devices or objects are often required. Ultrasound technology enables high resolution position measurements indoors. A disadvantage of state-of-the-art ultrasonic systems is that several base stations are required to estimate a 3D position. Since fewer base stations lead to lower cost and easier setup, a novel method is presented that requires just one base station. The method uses information from acoustic reflections in a room, and estimates 3D positions aided by an acoustic room-model. The method has been verified within an empty room. It can be concluded that ultrasonic reflection data contains valuable information on the 3D position of a device.

Keywords
Location awareness, location systems, ultrasonic positioning

1. INTRODUCTION
In future consumer electronics, context awareness will play an important role. Often, the locations of people, devices and objects are part of the required context information of which consumer devices need to be ‘aware’. Within the PHENOM project [1], several application scenarios were developed that require in-home 3D device position information.

The required position accuracy (typically ≤ 1 m) cannot be delivered by wide-area systems like GPS. Therefore, a specialized indoor positioning system is required. It may use radio waves (RF), magnetic fields, ultrasonic waves, or combinations thereof. We investigate systems based on ultrasonic waves, because of the potential high accuracy at low cost. State-of-the-art ultrasonic systems calculate distances from ultrasound time-of-flight measurements, and then use triangulation algorithms to calculate a 3D position. A disadvantage of this approach is that several units of infrastructure are required at fixed known positions in a room. Generally four base stations (BS) are required in a non-collinear setup to estimate 3D position. In special cases like ceiling-mounted BSs, three are sufficient. Fewer BSs would make positioning systems cheaper, and easier to set up. Therefore we investigate whether a positioning system can work with fewer BSs, or with just one BS (of small size) in the extreme case.

2. METHOD
A novel concept was developed [3] to realize a single-base-station 3D positioning system. It exploits reflections of ultrasonic waves against the walls, floor and ceiling of a room. How these reflections may help in position estimation will be explained in this section. A typical (processed) ultrasonic signal measured at some receiver in a box-shaped room is shown in Fig. 1. At time t = 0 a source emits a burst-like signal. Using time synchronization between devices (e.g. by an RF link such as in the Cricket system [4]) the receiver can measure the time-of-flight of ultrasonic signals, and then calculate the distance to the source. In the figure, the first peak at 2.89 m is the line-of-sight distance. The subsequent peaks are caused by reflections. These reflections were found to contain information about the position of the receiver. The information is contained within the pattern of amplitude peaks, called the signature, shown in the figure.

[Figure 1: Measured signature at a receiver position. The horizontal axes show time (top) and the corresponding distance interval [0, 10] m; the vertical axis shows scaled signature amplitude (no units).]

Note that the fixed BS can be chosen to be either transmitting or receiving ultrasound. We chose it to be a transmitter, to allow many mobile device receivers to co-exist without causing ultrasonic interference problems between devices.

2.1 Acoustic model
To use reflections for positioning, a model was developed that relates 3D positions to reflection signatures. The following example will show the model’s principle. Figure 2 shows a top view of a room with an ultrasound source. Two reflections of ultrasonic waves off walls are shown. These reflected waves can be considered as originating from two conceptual image sources marked by crosses. Many more image sources than those shown exist in a room, which can be calculated using the image method [2]. From here on we assume that the source shown is a BS at a fixed known position. It will give rise to many image sources, that can be seen as virtual base stations (VBS). We can think of VBSs as possible replacements for real BSs, thereby reducing the number of real BSs. To calculate the positions of VBSs, the room dimensions have to be known. The current room model includes 91 VBSs, and room dimensions are measured to ± 5 cm accuracy.

[Figure 2: 2D top view of a room, containing one acoustic source and one receiver. Two acoustic reflections (arrows) and associated image sources (crosses) are shown.]

However, signatures are not only affected by position but also by device orientation. Therefore, source/receiver orientations and the directional beam pattern of ultrasound transducers are included in the acoustic model. The model furthermore includes the attenuation of ultrasound in air, resonance characteristics of piezo-electric ultrasound transducers, acoustic interference effects between reflection peaks (in case reflections arrive approximately at the same time), and wall reflection attenuation factors [3].

2.2 Signature matching method
Using the acoustic model, it is possible to calculate an expected acoustic signature given a 3D position and orientation. However, the reverse problem, of directly calculating 3D position and orientation given a measured signature, proves to be much harder. Therefore the former approach was used as our initial method for 3D position estimation, the signature matching method. It simply tries a set C of mobile device 3D candidate positions in the room, calculates an expected signature at these positions using the model, and compares those to the measured signature. Finally the best-matching candidate position is picked as the likely mobile device 3D position. Note that set C is a well-chosen subset of all possible room positions. Its size Nc ranged from 7243 to 11131 in our experiments, with a spacing between candidate positions of ≤ 5 cm. The current computational load for signature matching over set C is of the order O(Nc · 10^5) FLOPS, implying an update time of 1-10 s per measurement for an optimized implementation on a modern PC. This could be significantly improved by a smarter choice of C.

Since the acoustic model also needs a candidate orientation to calculate a signature, this orientation has either to be known in advance or estimated on-the-fly. Initially the former approach was used [3], but currently methods of orientation estimation are being developed.

3. RESULTS
A measurement setup was built to test the method. It consists of one piezo-electric ultrasound transmitter base station (BS) and one receiver, both connected to a measurement PC. All processing steps are implemented in software. Preliminary experiments have been performed in an empty office room, to verify the acoustic room model and to test the method in best-case conditions. The transmitter BS was fixed at a wall and the mobile receiver was placed at 20 different positions. A good position estimate was found in 18 positions, all with a positioning error of less than 20 cm. Two positions had higher errors of 0.77 m and 1.20 m. The errors were caused by a combination of three effects in the measured signature (‘missing’ peaks, ‘noise’ peaks, and random deviation of peak-amplitude from its expected value) that will be further investigated.

4. CONCLUSIONS AND FUTURE WORK
It can be concluded that measured ultrasonic signals contain useful information about the mobile device’s 3D position. We propose to use this information to perform device position estimation, using a single base station per room. The signature matching method was developed for this purpose. Initial experiments show that the method works within an empty office room.

Future work is aimed at applying the method in realistic non-empty rooms. To realize this, several improvements to the basic method are being considered for increased robustness and calculation speed. One approach is a tracking system that integrates information from several measurements over time. Other approaches are based on small-sized transducer arrays, embedded in the base station.

REFERENCES
[1] PHENOM project, 2003. www.project-phenom.info.
[2] J. Allen and D. Berkley. Image Method for Efficiently Simulating Small-Room Acoustics. J. Acoust. Soc. Am., 65(4):943–951, 1979.
[3] E. O. Dijk, C. van Berkel, R. Aarts, and E. van Loenen. Ultrasonic 3D Position Estimation using a Single Base Station. In Proc. European Symposium on Ambient Intelligence (EUSAI), Veldhoven, The Netherlands, 2003. Springer Verlag (to be published).
[4] N. Priyantha, A. Miu, H. Balakrishnan, and S. Teller. The Cricket Compass for Context-Aware Mobile Applications. In Proc. ACM 7th Int. Conf. on Mobile Computing and Networking (MOBICOM), pages 1–14, Rome, Italy, 2001.
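As an illustration of how the image method and the signature matching search described above fit together, the following sketch computes first-order image sources for a box-shaped room and picks the best-matching candidate position. It is a strong simplification, not the authors' implementation: signatures are reduced to sorted lists of path lengths (no amplitudes, orientations, or transducer effects), only the six first-order image sources are used (the paper's model has 91 VBSs), and all names are illustrative.

```python
import math

def image_sources(src, room):
    """First-order image sources (virtual base stations) of a source in a
    box-shaped room [0,Lx] x [0,Ly] x [0,Lz], via the image method."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]  # mirror the source across the wall
            images.append(tuple(img))
    return images

def expected_signature(src, receiver, room):
    """Expected signature: sorted path lengths from the base station and
    its first-order image sources to a candidate receiver position."""
    paths = [tuple(src)] + image_sources(src, room)
    return sorted(math.dist(s, receiver) for s in paths)

def match_position(measured, src, room, candidates):
    """Signature matching: pick the candidate position whose expected
    signature best matches the measured one (sum of squared errors)."""
    def cost(c):
        exp = expected_signature(src, c, room)
        return sum((m - e) ** 2 for m, e in zip(measured, exp))
    return min(candidates, key=cost)
```

Searching a few thousand grid candidates this way mirrors the O(Nc) cost discussed in Section 2.2; a smarter choice of the candidate set C (e.g. coarse-to-fine search) reduces it.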
Prototyping a Fully Distributed Indoor Positioning System
for Location-aware Ubiquitous Computing Applications
Masateru Minami
Shibaura Institute of Technology
3-9-14 Shibaura, Minato-ku, Tokyo, Japan
[email protected]

Hiroyuki Morikawa
Graduate School of Frontier Sciences, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
[email protected]

Tomonori Aoyama
Graduate School of Information Science and Technology, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
[email protected]

ABSTRACT
This paper describes an indoor positioning system called DOLPHIN (Distributed Object Localization System for Physical-space Internetworking) that enables various physical objects to obtain their location in a fully distributed manner. We present a prototype implementation and an experimental evaluation of the DOLPHIN system made from off-the-shelf hardware.

KEYWORDS
Indoor Positioning System, Distributed Algorithm

INTRODUCTION
In ubiquitous computing environments, the physical location of indoor objects is one of the key pieces of information needed to support various applications. To obtain indoor location information, several positioning systems have been proposed. Active Bat [1] and Cricket [2] use ultrasonic pulse TDOA (Time Difference of Arrival) to measure high precision 3D position and orientation in indoor environments, but they require an extensive hardware infrastructure. Moreover, such systems usually require manual pre-configuration of the locations of reference beacons or sensors. The setup and management costs would be unacceptably high if we applied them to a large scale environment such as an office building. The ad-hoc localization mechanism described in [3] can be applied to this problem. In [3], the authors proposed a collaborative multilateration algorithm to solve the localization problem in a distributed manner, and performed a detailed simulation-based analysis of a distributed localization system. To design a practical location information infrastructure, we believe that experimental analysis is also needed to discover practical problems in distributed localization systems.

From this point of view, we have developed a distributed positioning system called DOLPHIN (Distributed Object Localization System for Physical-space Internetworking) that can determine objects’ positions using only a few manually configured references. The system is made from off-the-shelf hardware devices, and implements a simple but practical distributed positioning algorithm.

Positioning Algorithm
Figure 1 shows an overview of the DOLPHIN system. The system consists of a number of DOLPHIN nodes, each containing a 2400 bps RF transceiver, several 40 kHz omni-directional ultrasonic transducers, and a HITACHI H8S/2215 16 MHz CPU. The CPU is for calculating the location of the nodes. The RF transceiver is used for time synchronization and message exchange among nodes.

The key idea in our positioning algorithm is based on hop-by-hop localization. For example, in the bottom left of figure 1, node D can determine its position by receiving ultrasound pulses from the reference nodes A, B, and C. However, nodes E and F cannot receive ultrasonic pulses from reference nodes due to physical obstacles such as walls. Here, if the position of node D is determined, and node E can receive an ultrasonic pulse from node D, node E can compute its position by using distances from nodes B, C, and D. If the locations of nodes D and E are determined, node F can compute its position using nodes C, D, and E. In this way, all nodes in the DOLPHIN system can be located. There are two main advantages to this mechanism. First, the system requires only a few (minimum three) reference nodes to determine all node positions. Second, nodes can determine their positions even if they cannot receive ultrasound from any reference nodes directly.

The positioning algorithm runs by exchanging several messages as shown in figure 2: the ID notification message (IDMsg), the measurement message (MsrmtMsg), and the location notification message (LocMsg). The nodes in the system play three different roles: there is one master node, one transmitter node, and the rest are receiver nodes. Consider the example depicted in figure 1, where nodes A, B, and C are reference nodes, and nodes D, E, and F are normal nodes (the positions of these nodes are unknown). Here, we assume that nodes A, B, and C have node lists [B, C], [A, C], and [A, B] respectively. We also assume that node E and node F cannot receive ultrasonic pulses from node A because of an obstacle such as a wall.

Now consider that node A acts as a master node. Figure 2 shows the timing chart of our positioning algorithm. First, node A chooses one node randomly from its node list [B, C]. If node B is chosen, node A transmits a MsrmtMsg including the ID of node B. On receiving the message, node B becomes the transmitter node and generates ultrasonic pulses. At the same time, nodes C, D, E, and F become receiver nodes and start their internal counters (synchronization phase). When a receiver node detects ultrasound from node B, it stops its internal counter and calculates its distance from node B. After several ms (this depends on the time taken by the overflow of the internal counter), node B sends a LocMsg to notify receiver nodes of its position. Receiver nodes that could detect the ultrasound pulse from B store the
location of node B and their distances to node B in their position table (measurement phase). After that, all nodes listen for IDMsg for several ms (advertisement phase). If there is a node that could determine its position based on three or more distances, it advertises its ID in this phase. This ID is added to the node list of every other node. In the above example, because nodes D, E, F cannot determine their positions, no IDMsg is sent in this phase. The sequence of the above phases defines one cycle of the positioning algorithm in the DOLPHIN system.

In the next cycle, node B, which acted as a receiver node in the previous cycle, becomes the master node, and the positioning algorithm proceeds in the same manner. After three or more cycles of positioning, node D can determine its position based on the measured distances from nodes A, B, and C. At that time, node D can send its IDMsg in the advertisement phase. All other nodes that received the IDMsg from node D add the ID of node D to their node lists, and node D is recognized as a candidate master node. After node D becomes a master node, node E and node F can measure their distances from node D. Then, node E can determine its position and advertise its IDMsg. Finally, based on nodes C, D, and E, node F can determine its position. In this way, we can locate all nodes in the DOLPHIN system.

In the DOLPHIN system, we have to prepare for two types of failures, node failure and recognition failure, to continuously execute the above mentioned positioning algorithm. A node failure occurs when a node suddenly stops because of an unpredictable accident; a recognition failure occurs when the IDMsg transmitted from a node capable of becoming a master node does not reach other nodes because of a bad communication channel or a message collision. To recover from these failures, each node in the system has a recovery timer and an advertisement timer. The recovery timer is set when nodes receive a MsrmtMsg, and expires if there has been no MsrmtMsg for a certain period (e.g. 1.5 seconds). If the recovery timer expires, a node is randomly chosen to become the master node, and the positioning algorithm continues. If a node capable of becoming a master node does not receive a MsrmtMsg from other nodes within a certain period (e.g. 10 seconds), the advertisement timer in the node expires. This means that the node is not recognized as a candidate master node by the other nodes. In this case, the node retransmits its IDMsg in the advertisement phase of each positioning cycle. Note that to avoid IDMsg collisions in the advertisement phase, the node sends the IDMsg with a certain probability which is determined by the number of nodes in its node list.

Experimental Result and Future Work
We placed seven nodes as shown in figure 3, and computed the average and the variance of the measured position of each normal node (nodes D-G) for 1000 cycles. The results showed that the system could determine objects’ positions with an accuracy of around 15 cm in an actual indoor environment. However, positioning error increases at nodes E-G compared to that at node D. This is because the positioning error at node D affects the position determination of nodes E-G, which determine their positions based on node D. Although this error propagation problem is inherently unavoidable in the DOLPHIN system, we expect to minimize positioning error by placing reference nodes at appropriate locations.

Since the current prototype is a handmade system, the performance of the system may be insufficient to support many indoor location-aware applications. In addition, the number of nodes is too limited to measure the performance in a large scale environment. Currently we are designing an improved version of the system that can handle practical problems such as multipath propagation and node mobility, as well as the scalability problem in large scale environments.

[Fig. 1: System Overview — DOLPHIN node architecture (H8S/2215 16 MHz CPU, analog interface, counter, USB or battery power supply, 429 MHz/2400 bps RF transceiver, 5-channel omni-directional ultrasonic transducer array) and an example layout with reference nodes and a wall obstacle.]
[Fig. 2: Positioning Algorithm — timing chart of the node selection, synchronization, measurement and advertisement phases, with IDMsg, MsrmtMsg and LocMsg exchanges.]
[Fig. 3: Experimental Result — average measured positions (xave, yave) and variances (σ²x, σ²y) of the nodes in a roughly 250 × 200 cm area.]

References
[1] A. Ward, et al.: A New Location Technique for the Active Office. IEEE Personal Communications Magazine, Vol. 4, No. 5, October 1997.
[2] N. Priyantha, et al.: The Cricket Compass for Context-aware Mobile Applications. Proc. MOBICOM 2001, July 2001.
[3] A. Savvides, et al.: Dynamic Fine Grained Localization in Ad-Hoc Sensor Networks. Proc. MOBICOM 2001, July 2001.
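The position computation each node performs once it has distances to three or more located nodes is standard trilateration. A minimal 2D sketch follows (the real system works in 3D on an H8S microcontroller; the linearization below is a generic textbook approach and the names are illustrative, not the authors' code):

```python
def trilaterate_2d(p1, p2, p3, d1, d2, d3):
    """Solve for (x, y) from three known node positions p1..p3 and
    measured distances d1..d3, by subtracting the circle equation of
    p1 from those of p2 and p3 to obtain a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # non-zero iff the anchors are non-collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

In the example above, node D would apply this with A, B, and C as anchors; once located, D itself becomes an anchor for E and F. This is the hop-by-hop step along which, as the experiment shows, positioning error propagates.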
Connectivity Based Equivalence Partitioning of Nodes to
Conserve Energy in Mobile Ad Hoc Networks
Anand Prabhu Subramanian
School of Computer Science and Engineering,
College of Engineering, Guindy,
Anna University, Chennai – 600 025
Tamil Nadu, India
[email protected]

ABSTRACT
The nodes in Mobile Ad Hoc Networks (MANETs) work on low power batteries. So, reducing energy consumption has been the recent focus of wireless adhoc network research. The power in the nodes dissipates even when the network interface is idle. In this paper, we present a topology maintenance algorithm, the Equivalence Partitioning method, which is based on the connectivity among the nodes in the network. This algorithm partitions the network into equivalence sets in which one of the nodes in the set is active and the other nodes in the set turn off their radios. The algorithm takes care that the capacity or connectivity of the network does not diminish significantly. This is a simple, distributed, randomized algorithm where nodes make local decisions to form the equivalence partitions and go to the on or off state. In addition, this topology maintenance algorithm can be made to work along with the 802.11 power saving mode to improve communication latency and system lifetime.

Keywords
Equivalence partitioning, on state, off state, active node

INTRODUCTION
Wireless multi-hop adhoc networking has been the focus of many recent research and development efforts for its applications in military, commerce and educational environments. Most of the protocols that have been proposed to provide multi-hop communication in wireless adhoc networks [2, 3] are evaluated in terms of route length [4], routing overhead, and packet loss rate. But minimizing the energy consumption is an important challenge in mobile networking. Since the network interface may often be idle, power could be saved by turning off the radio when not in use. But the coordination of power saving with routing in adhoc wireless networks is not straightforward. The subject of this paper is to present a topology maintenance algorithm which partitions the network in such a way that one of the nodes in each partition must be active, so that the connectivity of the network does not diminish, and the other nodes can turn off their radios. The responsibility of the active node is randomly changed so that every node is treated equally and the lifetime of the overall network is increased.

RELATED WORKS
Reducing energy consumption has been the recent focus of wireless adhoc network research. The Geographic Adaptive Fidelity (GAF) [5] scheme of Xu et al. self-configures redundant nodes into small groups based on their geographic locations and uses a localized, distributed algorithm to control node duty cycles to extend network operational lifetime. But in many settings, such as indoors or under trees where GPS does not work, location information is not available. The dependency on global location limits GAF’s usefulness. In addition, geographic proximity does not always lead to network connectivity. The SPAN [1] scheme of Chen and Jamieson proposes a distributed algorithm for approximating connected dominating sets in an adhoc network that also appears to preserve connectivity. SPAN elects coordinators, actively preventing redundant nodes by using randomized slotting and damping. Equivalence partitioning differs from GAF as it constructs the partitions based on connectivity information rather than the geographic location of the nodes. Also, unlike SPAN, it constructs equivalence partitions and randomly rotates the active nodes within each partition.

EQUIVALENCE PARTITIONING DESIGN
In the Equivalence Partitioning technique, we divide the network into different sets of equivalent nodes, so that one of the nodes in the partition can be active in order to maintain the connectivity and the rest can remain in their power saving mode. The role of the active node is randomly chosen so that the burden of forwarding, sending and receiving data is distributed evenly to all nodes.

Partitioning the network into Equivalence Sets

[Figure 1: A network with five nodes]
This is a distributed randomized algorithm for constructing equivalence partitions among the nodes in the network. Consider the network shown in Figure 1. The nodes B, C, D are on the path between the nodes A and E. In this case all three nodes need not be awake to forward the packets from node A to E. We treat the nodes B, C, D as forming an equivalent partition; it is sufficient for one of the nodes to be awake to maintain the connectivity. The Equivalence Partitioning algorithm is as follows.

• The node Ni constructs its neighbor set by sending HELLO packets to its one-hop neighbors. The nodes hearing this packet respond with a HELLO reply so that the node Ni constructs its neighbor set. Let NHi be the neighbor set of node Ni.
• Now, Ni advertises its neighbor set to its one-hop neighbors so that it can find out the number of pairs of its neighboring nodes connected via this node.
• Find the intersection between the neighbor sets of the adjacent nodes. Let C be the cardinality of the intersection set with the first neighbor.
• If the cardinality is equal to or more than two, then form an equivalence partition and assign a unique partition id to the nodes.
• Consider the next neighbor. Let C′ be the cardinality of the intersection set between the node Ni and the neighbor currently considered. If C′ > C, a new group is formed between the node Ni and this neighbor, destroying the previous partition.
• If C′ = C with the same elements, then add the new neighbor to the same partition and assign the partition id.
• Repeat the above process until each node has received the neighbor set from all its one-hop neighbors.

Each and every node is in exactly one of the partitions.

Active Node Announcement
Once the equivalence partitions have been constructed and the nodes have their partition ids, the active node in each partition must be elected. The following strategies can be used to elect the active node.

• When we start with a new network, all the nodes will have the same power. In this case, the node with the least id in the partition can be chosen to become the active node.
• When the power among the nodes in the partition is not equal, then the node with the maximum power or the maximum estimated lifetime can be chosen to be active.

The nodes remain active for a time of T seconds, which is dependent on the application. The active nodes can be rotated in round robin fashion or based on heuristics which take the expected lifetime of the node into consideration.

Compatibility with 802.11 Power saving mode
This topology maintenance algorithm can be used along with the 802.11 power saving mode to improve the system lifetime. An interesting question is how a node in the off state handles traffic originating from it or destined to it. In the former case, if the node has data to send it can simply power on its radio and send out the data. In the latter case, the 802.11 power saving mode can be used, in which the active nodes temporarily buffer data for the nodes in the off state and send the data later.

RESEARCH CHALLENGES AND FUTURE WORK
The simplicity and fast convergence of the Equivalence Partitioning algorithm raise a number of further research challenges. We are currently working on finding the optimal way of choosing the active node in a partition and the random rotation policy. Different heuristics related to the rotation of the active nodes are being analyzed so that all the nodes in the network are treated evenly and the overall network lifetime increases. More evaluation of the partitioning algorithm should be performed, to determine convergence time and the adaptability to network mobility. The cases in which the active node moves far from the remaining nodes, and the value of the optimal interval after which the partitioning algorithm must be rerun, should be analyzed. We have presented a topology maintenance algorithm, and have shown its benefits. It is our belief that this approach opens up new areas of research in energy conservation in mobile adhoc networks. We have provided a basis for discussion of a number of research issues that need to be addressed to improve the performance of the overall network.

REFERENCES
1. B. Chen, K. Jamieson, H. Balakrishnan, R. Morris. SPAN. In the Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom), Rome, Italy, July 2001.
2. C. Perkins. Ad hoc on demand distance vector (AODV) routing. Internet-Draft, draft-ietf-manet-aodv-04.txt, pages 3-12, October 1999. Work in Progress.
3. J. Broch, D. B. Johnson, and D. A. Maltz. The dynamic source routing protocol for mobile ad hoc networks. Internet-Draft, draft-ietf-manet-dsr-03.txt, October 1999. Work in Progress.
4. J. Broch, D. Maltz, D. Johnson, Y. Hu, and J. Jetcheva. A performance comparison of multihop wireless ad hoc network routing protocols. In Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking, pages 85-97, October 1998.
5. Xu, Y., Heidemann, J., Estrin, D. Geography-informed Energy Conservation for Ad Hoc Routing. In the Proceedings of the Seventh Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom), Rome, Italy, July 2001, pp. 70-84.
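The two election strategies listed above are simple enough to state directly in code. A minimal sketch, assuming each node reports a scalar battery level and using `itertools.cycle` as a stand-in for the rotation policy that the paper leaves open (both assumptions are ours):

```python
import itertools

def elect_active(partition, power):
    """Elect the active node of a partition: with equal power levels,
    choose the least node id; otherwise choose the node with the
    maximum remaining power (a proxy for estimated lifetime)."""
    if len({power[n] for n in partition}) == 1:
        return min(partition)
    return max(partition, key=lambda n: power[n])

def round_robin(partition):
    """Rotate the active role in round robin fashion; the caller
    advances the iterator once per period T (T is application-dependent)."""
    return itertools.cycle(sorted(partition))
```

In a fresh network every node starts with the same power, so the least-id rule applies; as batteries drain unevenly, the maximum-power rule takes over.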
Self-configuring, Lightweight Sensor Networks
for Ubiquitous Computing
Christopher R. Wren and Srinivas G. Rao
Research Laboratory
Mitsubishi Electric Research Laboratories
201 Broadway; Cambridge MA USA 02139

ABSTRACT
We show that it is possible to extract geometric descriptions of the spaces observed by sensor networks, even if the network consists of sensors that are of very limited ability, such as motion detectors. By using statistical techniques and relying only on the unconstrained patterns generated by the occupants of the building, we show how to recover information about sensor geometry. This is important to the ubiquitous computing community, since ubiquitous sensors and the context that they provide will only become a reality if the sensors are cheap, low-power, and self-configuring.

Keywords
sensor networks, adaptive, geometry, calibration

1. INTRODUCTION
The occupants of a building generate patterns as they move from place to place, stand at a corner talking, or loiter by the coffee machine. A cheap network of sensors can sense these patterns and provide useful information to all of the context sensitive systems in a building, but what makes such a network cheap? As the sensing and computational elements become cheaper to manufacture, the cost of such a network is quickly becoming dominated by installation, configuration and maintenance costs.

This paper explores some of the possibilities that exist for such networks to auto-calibrate, given only the unconstrained movements of those being observed. Furthermore, we strive to adopt an approach that will limit computational overhead. That means that the algorithms should not require recognition, tracking, or any but the absolute simplest of perceptual mechanisms. In fact, we will assume for the rest of this paper that our sensors are simple motion detectors. We also assume that the system will consist solely of sensors embedded in the environment, and not any component that navigates or is carried through the environment.

2. RELATED WORK
Many ubiquitous context projects start from the assumption that the human inhabiting the space will be an active participant in the system [6], or that the system will accomplish calibration by utilizing an active element that can explore the environment [4]. For many applications, the level of detail desired about the building geometry does not warrant this level of labor cost or system complexity.

There is a significant body of literature on modeling typical patterns and finding atypical patterns in the behavior of observed humans [1, 3, 5]. These approaches all assume accurate tracking as a precondition. This paper strives to demonstrate that this expensive perceptual process may not be necessary for some tasks, such as auto-configuration.

3. OUR SENSOR NETWORK
We have covered 175 m² of office space with 17 ceiling-mounted sensors and collected motion event data. The sensors report motion events in their active area at 7.5 Hz. They adapt to novel, but perfectly stationary, objects and other changes in the environment on a 20 second time-scale. The area covered consists of the high-traffic core of our building: the elevator lobby, reception lobby, restroom entrances, and connecting hallways.

In fact, for this experimental setup, the sensors are cheap IEEE-1394 board cameras. They are mounted in the ceiling, pointed straight down at the floor, with 75 degree angle lenses. The imagery from the cameras is processed by an adaptive background subtraction algorithm [7] built on top of the Open Computer Vision Library [2]. Obviously this is not the cheapest way to implement motion detectors, but it does provide the maximum flexibility for experimental design.

4. THE EXPERIMENTAL SETUP
Since the sensors are cameras, it was possible to use well-known techniques to recover the geometry of the cameras relative to the space observed. This provides us with ground-truth about the positions and viewing areas of the sensors that we can use to validate our experimental results.

Since we treat the cameras simply as motion detectors, the underlying representation of the data will be the event list: E_{j,t}. The event list E_{j,t} = 1 if there was a motion event at time t in sensor j. These events indicate merely the presence of some kind of motion anywhere in the field of view, but no indication of the number of people, the direction of motion, or any other such secondary information.

Our low-cost perceptual engine will be co-occurrence statistics: C_{i,j,δ}. The co-occurrence is the count of events that co-occur at a given temporal offset:

    C_{i,j,δ} = Σ_{t=0}^{∞} E_{i,t} E_{j,t+δ}

where δ ≥ 0, and E_{i,t} is a boolean value. For a given temporal offset, it is useful to manipulate the i × j co-occurrences between all sensors as a matrix. For

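The co-occurrence statistic C_{i,j,δ} is cheap to compute directly from binary event lists. The following is an illustrative sketch, not the authors' code; the restriction to a small set of offsets δ mirrors the paper's focus on human time-scales, and the array layout is an assumption:

```python
import numpy as np

def cooccurrence(events: np.ndarray, max_delta: int) -> np.ndarray:
    """events: (num_sensors, T) boolean event lists, events[j, t] = E[j, t].
    Returns C with C[i, j, d] = sum_t E[i, t] * E[j, t + d] for
    d = 0 .. max_delta - 1; only small offsets are kept, matching the
    paper's restriction to time-scales relevant to human behavior."""
    n, T = events.shape
    E = events.astype(np.int64)
    C = np.zeros((n, n, max_delta), dtype=np.int64)
    for d in range(max_delta):
        # align E[i, t] with E[j, t + d] and sum over t via a matrix product
        C[:, :, d] = E[:, : T - d] @ E[:, d:].T
    return C
```

The zero-offset slice `C[:, :, 0]` is the synchronized-event (overlap) matrix discussed in Section 5, and the offset of the largest off-diagonal peak across δ gives the trip-time estimate for a sensor pair.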
Figure 1: The ground truth overlap (left) compared to the statistical transition probability matrix (right).

Figure 2: The ground truth distance map (left) compared to the peak-delay map (right). Distance in meters.
For a given pair of sensors, it is also useful to consider the family of co-occurrences parameterized by the temporal offset. Taken together, the C_{i,j,δ} for all possible δ are equivalent to the cross-correlation of the event lists for sensors i and j. However, the entire cross-correlation is not useful, and is very memory-intensive to compute, so we will only ever consider relatively small values of δ: in particular, values that represent time-scales that are relevant to human behavior.

5. RESULTS
We can demonstrate two things from this data: co-occurrence matrices that reveal the structure of the sensor overlap, and structure in peak offsets in the co-occurrence matrices that reflects the relative distances between sensors.

The C_{i,j,0} co-occurrence matrix shows us the sensors that exhibit synchronized events. Since sensors always instantaneously co-occur with themselves, we see the highest values on the diagonal. However, off-diagonal elements with high values indicate sensors that overlap: they are often seeing the same event. Given that there are an unrestricted number of people moving around the space, we expect noise from coincidental events, but Figure 1-right shows that this noise is low compared to the signal. For this sensor network, we get 97% of the 136 non-trivial overlap decisions correct. Furthermore, all the false-negatives (3 of the 4 total errors) are actually mistakes in the ground-truth: two situations where un-modeled walls block views from sensors that would otherwise overlap, and one case where the geometry predicts a tenuous overlap that is obscured by un-modeled radial distortion in the lens of the sensor. Leaving out these errors gives us a 99% accuracy.

The windowed cross-correlation represented by C_{i,j,δ} over all δ for a given pair of sensors provides a way to estimate the average trip time between the two sensors. The time offset corresponding to the first major peak for a set of cameras provides an estimate of the average trip time between the sensors. If people only ever transited uninterrupted between these sensors, then we could simply take the maximum of the cross-correlation, as in audio localization. We can use these pairwise constraints to form an estimate of the relative geometry of the whole network. These results are shown in Figure 2. On the left is the recovered geometry from the ground-truth distance constraints. On the right is the recovered geometry from the estimated inter-node transit times.

For our dataset, discounting the global scale ambiguity, we obtain an average error of 2.2m with only 4 hours of data. If we only consider a sub-set of the sensors that do not overlap, we obtain a slightly higher average error of 2.4m. Our sensors monitor 3.7m × 4.9m rectangles, so both of these figures represent sub-pixel accuracies.

6. CONCLUSION
We have shown that it is possible to extract descriptions of the spatial arrangement of a sensor network with very little computation, very poor sensors, and limited constraints on the behavior of the people inhabiting the space. This is important to the ubiquitous computing community since ubiquitous sensors will only become a reality if they are cheap, low-power, and self-configuring.

REFERENCES
[1] W.E.L. Grimson, C. Stauffer, R. Romano, and L. Lee. Using adaptive tracking to classify and monitor activities in a site. In IEEE CVPR, June 1998.
[2] Intel Corporation. Open Source Computer Vision Library Reference Manual, 2001.
[3] N. Johnson and D. Hogg. Learning the distribution of object trajectories for event recognition. Image and Vision Computing, 14(8), 1996.
[4] Anthony LaMarca, Waylon Brunette, David Koizumi, Matthew Lease, Stefan B. Sigurdsson, Kevin Sikorski, Dieter Fox, and Gaetano Borriello. PlantCare: An investigation in practical ubiquitous systems. In 4th Intl. Conf. on Ubiquitous Computing. Springer, 2002.
[5] L. Lee, R. Romano, and G. Stein. Monitoring activities from multiple video streams: Establishing a common coordinate frame. IEEE PAMI, 22(8), 2000.
[6] Nissanka B. Priyantha, Anit Chakraborty, and Hari Balakrishnan. The Cricket location-support system. In Proc. of the Sixth Annual ACM International Conference on Mobile Computing and Networking, August 2000.
[7] Kentaro Toyama, John Krumm, Barry Brumitt, and Brian Meyers. Wallflower: Principles and practice of background maintenance. In ICCV, pages 255-261. IEEE, 1999.
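The final embedding step of Section 5 (turning pairwise distance constraints into a relative layout) can be done with classical multidimensional scaling. This is a sketch of one standard way to do it, not the authors' implementation; it assumes the peak delays have already been converted into a full n × n matrix of pairwise distance estimates (e.g. by scaling delays with an assumed average walking speed):

```python
import numpy as np

def layout_from_distances(D: np.ndarray) -> np.ndarray:
    """Classical MDS: recover 2-D sensor positions (up to rotation,
    reflection and translation) from an n x n matrix of pairwise
    distance estimates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:2]         # keep the two largest components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Because only relative geometry is recoverable, a comparison against ground truth such as Figure 2 must first align the two layouts (e.g. with a Procrustes fit); the 2.2m average error quoted above is an error of that aligned kind.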
Grouping Mechanisms for Smart Objects Based On
Implicit Interaction and Context Proximity
Stavros Antifakos, Bernt Schiele (ETH Zurich, Switzerland)
Lars Erik Holmquist (Viktoria Institute, Göteborg, Sweden)
ABSTRACT
When everyday objects become equipped with computation and sensors, it will be important to explore interaction techniques that rely on natural actions. We show examples of how non-accidental simultaneous movement of “smart” objects can be exploited as implicit interaction. Applications include implicit access control when opening a door and an automatic packing list creator. This principle of implicit interaction based on non-accidental movement patterns can be extended to other context parameters, forming a context proximity hierarchy.

INTRODUCTION
As defined by Weiser, ubiquitous computing is “invisible, everywhere computing that does not live on a personal device of any sort, but is in the woodwork everywhere.” [3] In some ways, this vision could prove to be as much a problem as a solution! When more and more everyday artifacts and environments become augmented with computation and sensing, new problems arise in the design of human-computer interaction, since every object becomes a potential input device. Much like peripheral and ambient information displays have been introduced to lessen the strain of information overload, various ways of background sensing and interaction will need to be developed to avoid potential problems in users’ interaction with computer-augmented environments.

One solution to this problem would be to design interfaces based on implicit human-computer interaction. This has been defined as “an action, performed by the user that is not primarily aimed to interact with a computerized system but which such a system understands as input.” [2] In other words, whereas the user continues to interact with everyday objects as normal, we may use these actions as a sort of “side-effect” to also produce input for a computer system. We are exploring how we can create implicit interaction with everyday artifacts that are equipped with sensors of various kinds. More specifically we exemplify how non-accidental movements of objects can be used to support implicit HCI. By using accelerometers attached to everyday objects, it is possible to detect if two or more objects share the same movement pattern. This information can be used to support everyday tasks, without introducing any additional interaction demands to the user. Other context parameters besides movement can be used in a similar fashion for implicit interaction. We call the resulting principle the context proximity hierarchy.

AN EXPLICIT GROUPING MECHANISM: SMART-ITS FRIENDS
Smart-Its Friends [1] is an example of a grouping mechanism based on explicit interaction. When a user wants to tell two or more “smart” objects that they belong to the same group, she holds them together and shakes them. Via radio communication, all objects continuously communicate their trajectory, as determined by accelerometers. Since the objects that are shaken together will be the only ones that have the same trajectory, they can use this information to create a grouping.

The underlying principle of Smart-Its Friends uses an explicit gesture – shaking – to group and establish a special relation between objects. This principle has many interesting applications: if you want to be sure that your wrist-watch beeps whenever you leave your cell-phone behind, you simply shake them in order to make them “friends”. Even though the underlying principle is general and powerful in itself, it does require an explicit action from the user. Rather than rely on explicit interaction, this paper explores implicit interaction based on non-accidental movement patterns to establish a special relation between objects.

Figure 1: Access Control example, showing the door handle and the person’s wrist equipped with accelerometers (left); acceleration values of the door handle and the person’s hand (top right); and the correlation measure used to detect the use of the door handle by a certain person (lower left).

TWO EXAMPLES OF IMPLICIT INTERACTION BASED ON NON-ACCIDENTAL MOVEMENT
Access Control
Today, many access control systems are installed so that (restricted) access can be granted to people. Those access control systems usually require an explicit action from the employee, such as swiping an identification badge or using a specific number key, which are prone to be lost or forgotten.

Here, we propose to use the action of pressing the door handle – which is necessary to open the door – to identify the person and give him the appropriate access. For this we use two accelerometers: one on the door handle and one on the person’s wrist (Figure 1). When the person presses
the door handle, we detect the simultaneous acceleration pattern of the door handle as well as of the wrist. By verifying that the owner of the wrist-accelerometer is indeed allowed to access this particular door, the system can grant permission to that person by opening the lock. This is an example of implicit interaction since the only action required from the person is the normal door-opening action, namely pressing the door-handle. Figure 1 shows 2D-acceleration data of the door handle and of the person’s hand pressing the handle twice. The correlation measure between the signals clearly shows how the pressing of the door handle can be detected.

Automatic Packing List Generation
The task of packing a set of goods into a box and then having to generate a packing list is common in both industry and everyday life. For instance, at a typical Internet retailer, books or other items belonging to an order are packed in a box and an invoice is generated. In other industries mechanical parts, computers, or raw materials are packed and labeled before shipment. Even when moving your household you would be happy to know in which box you packed that fragile set of crystal glasses or some essential piece of clothing.

By attaching accelerometers to the goods, we can record the individual movements of the goods and determine which possess similar movement patterns. The normal action of moving the box around serves as an implicit grouping mechanism for all items in that particular box. The similarity of the movements of those items is again non-accidental since the items packed in the same box will be the only ones that have the same trajectory. Determining the similarity of the movements is therefore sufficient to group those objects which are packed together. When the objects have been grouped, a packing list of all items can be generated, or other checks on the goods could be performed, such as verifying the completeness of an order.

Implementation Details
The above demonstrations are based on Smart-Its technology [4]. We used the standard configuration of the Smart-Its sensor board including a 2D-acceleration sensor (ADXL 202). This is combined with a radio frequency communication module, also part of the Smart-Its platform. To decide whether two or more objects are moving together, it is sufficient to calculate the correlation value between the objects’ acceleration signals, which gives us a measure of how likely the objects are to be in the same group. In the demonstrations a Smart-It was attached to each object, transmitting its acceleration values to a central processing unit, which is then responsible for calculating the similarity between the movement trajectories.

CONTEXT PROXIMITY
The detection of non-accidental movements can be viewed as comparing a part of the objects’ context. The next step is to compare other contextual information of objects to enable applications where the moving of objects is not realistic or not desired.

Table 1: Context Proximity Hierarchy

  Level                            Physical characteristics/events
  dynamics of the objects          object movement, light changes, …
  dynamics of the environment      people moving, light switching on/off, doors banging, …
  static state of the environment  “weather”: temperature, light level, noise level, …

Table 1 shows a more general approach to classifying different types of physical characteristics for comparing context. We call this approach a “Context Proximity Hierarchy” as the context of two entities can be compared on any of the given levels. In this hierarchy we have classified the movement of the objects in the first level, namely the “dynamics of the objects”. All examples presented above draw their context information from this level. When object movement is not available the “dynamics in the environment” can be used to gain knowledge about the situation the objects are in. Here the effects of events such as people moving about in the surroundings, doors being banged, people talking, or lights being switched on and off can be captured by sensors. On the lowest level the comparison of the static characteristics of the environment is modeled. These consist of the physical parameters of the environment such as light, noise level, and temperature, which might be subsumed as “weather” data. They can be used to get a prior about whether the objects might have a similar context or not. This information could for instance be used as a baseline to make the grouping mechanisms more reliable.

CONCLUSION AND FUTURE WORK
We have shown how a basic grouping mechanism can be implemented and used to provide implicit input for everyday tasks. Our current implementation is based on exploiting the non-accidental movements of two or more objects to determine if they are moved together. In the future, as the cost of sensors and communication technology decreases, we will likely see sensors added to a variety of objects and find a multitude of uses for them. In this case implicit interaction techniques such as those presented above might help to decrease the complexity of human-computer interaction in many everyday situations.

ACKNOWLEDGEMENTS
The Smart-Its project is funded in part by the Commission of the European Union under contract IST-2000-25428, and by the Swiss Federal Office for Education and Science (BBW 00.0281).

REFERENCES
1. Holmquist, L.E., Mattern, F., Schiele, B., Alahuhta, P., Beigl, M. and Gellersen, H-W. Smart-Its Friends: A Technique for Users to Easily Establish Connections between Smart Artefacts. UbiComp 2001, USA.
2. Schmidt, A. Implicit Human-Computer Interaction through Context. Personal Technologies 4 (2&3), 2000.
3. Weiser, M. Ubiquitous Computing (definition #1). http://www.ubiq.com/hypertext/weiser/UbiHome.html
4. Smart-Its Project: http://www.smart-its.org
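The grouping decision described under Implementation Details reduces to a correlation test on synchronously sampled acceleration signals. The sketch below illustrates that idea; it is not the authors' code, and the threshold value and signal preprocessing are illustrative assumptions rather than the paper's calibrated parameters:

```python
import numpy as np

def moved_together(a: np.ndarray, b: np.ndarray, threshold: float = 0.8) -> bool:
    """Decide whether two objects share a movement pattern by computing
    the Pearson correlation of their acceleration signals over a window.
    The signals are assumed to be sampled synchronously."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b)) >= threshold

def group_objects(signals: dict, threshold: float = 0.8) -> list:
    """Greedy grouping: objects whose pairwise correlations all exceed
    the threshold are placed in the same (e.g. packing-list) group."""
    groups: list = []
    for name in signals:
        for g in groups:
            if all(moved_together(signals[name], signals[m], threshold) for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

With real Smart-Its data the correlation would be computed over a sliding window on the transmitted 2D-acceleration streams; the central processing unit in the paper plays the role of `group_objects` here.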
Inside/Outside: an Everyday Object for Personally
Invested Environmental Monitoring
Katherine Moriwaki, Linda Doyle and Margaret O’Mahoney1
University of Dublin, Trinity College
Networks and Telecommunications Group
Trinity College, Dublin 2, Ireland
[email protected]
{margaret.ohmahony, linda.doyle}@tcd.ie
1 Center for Transportation and Research Innovation for People (TRIP), Department of Civil, Structural, and Environmental Engineering, Trinity College

ABSTRACT
Inside/Outside explores distributed networking, information delivery, and wearable technology. We have developed a wearable accessory that measures and displays environmental factors in real-time and keeps a data diary of environmental exposure. The accessory can be networked with other accessories to form a mobile distributed environmental sensor network, providing users with locally specific and personally invested access to information about their environment.

Keywords
Mobile, ad-hoc, ubiquitous, wearable, fashion, smart textiles, distributed systems, networks, data collection, information retrieval, visualization

INTRODUCTION
This work is part of a body of research that focuses on the behavior of people in public and urban space. While services exist to alert individuals of daily environmental conditions, few personal devices exist to provide real-time and cumulative information regarding environmental exposure. This research explores how ubiquitous everyday objects can be used to deliver such information to users in urban zones in personal and engaging ways. In particular Inside/Outside integrates environmental sensors with an ordinary fashion accessory (the handbag) to provide an aesthetically and functionally integrated object. Through the manipulation of the fashion features of the bag, Inside/Outside has the ability to present information to the user in alternative and new ways. This Information in Disguise can provide an aesthetic experience while fulfilling a functional role of data gathering and data visualization. When multiple Inside/Outside bags are worn by different individuals at the same time, a distributed and mobile environmental sensor network is formed, providing users with locally specific and personally-owned access to information about their environment. Information about environmental exposure can be shared with others, possibly leading to collective changes in urban behaviors, and altered urban economic relationships as new valuations and mappings of the city are formed by the provided data. This paper illustrates the main design concepts involved in the project and describes the initial prototypes that have been designed as part of this research.

RELATED WORK
Before detailing Inside/Outside it is useful to note the increased interest in the design of everyday objects and smart textiles for a ubiquitous computing environment. Conductive fabric, embroidery, and textile materials are enabling the integration of interactive elements into clothing, accessories, and furniture. [1] Meanwhile design approaches that integrate aesthetics and functionality [2] are gaining new currency, alongside the development of applications for ambient media. [3] However, despite the emergence of networked and distributed systems utilizing familiar interfaces such as Pin & Play [4], few projects have integrated these concepts into clothing and everyday wearable accessories.

Figure 1. Changes in environmental factors cause the handbag surface to change color.

CONCEPTUAL OVERVIEW
Digital Familiars
Like many common everyday objects the handbag has a familiar interface and functionality that many people can already relate to. While an ordinary handbag might collect physical objects, Inside/Outside collects digital data about the surrounding environment. The combination between new and established qualities of the everyday object promotes cooperative interaction between the object and the user.

Information in Disguise
By embedding information into everyday objects, new significations of existing objects can emerge. Useful data is integrated into everyday objects in a way that neither
disrupts nor fundamentally alters the object’s original use, but enhances the functionality already present. When presenting the collected digital data back to the user, the decorative qualities of the handbag are accented, creating a spontaneous street “performance” for the user and casual observers. People who do not own an Inside/Outside bag can benefit from being able to view and interpret the data presented on bags carried by other people on the street.

Personally Invested Information Access
Inside/Outside can function as a stand-alone personal environmental monitoring system or be part of a distributed sensor network. The environmental sensing capabilities of the bag belong personally to the user. This creates a sense of identification and empowerment as data is collected locally and stored, allowing users to decide for themselves how to use and interpret the information they receive. When a network of bags forms and collective readings of the sensors’ input are examined, detailed and locally specific information about “micro-climates” of pollution can be identified for the community, possibly changing behavioral patterns in the city over time.

IMPLEMENTATION
The Inside/Outside handbag is integrated with environmental sensors and smart textiles, and utilizes the DAWN [5] wireless network infrastructure (DAWN is a Trinity College wireless network test-bed). Initial conceptual designs were based on informal surveys and workshops conducted with city dwellers and pedestrians from Dublin, Ireland and Los Angeles, California. The two cities were selected to maximize differences in lifestyle, culture, and urban behavior.

Figure 2. System Diagram

The Handbag
The Inside/Outside handbag uses an air quality sensor and audio microphone input connected to a microcontroller. As the user carries the bag through the city, changes in ambient air quality and noise levels cause conductive embroidery on the bag surface to heat and subsequently cool. [Fig.1] Thermo-chromic pigments mixed with acrylic paint and applied onto a fabric substrate create a visible color change that is both controlled and programmable.

Network Communication and System Design
Inside/Outside sits on top of the DAWN ad-hoc network. [Fig.2] Sensor data is sent through the communications stack to the desktop application. The aggregate data provides a data diary of environmental exposure levels. The project has the potential to make use of all the ad-hoc networking capabilities of DAWN, as the modular design of the system will allow additional functionality to be easily added as development of the project continues.

CONCLUSION & FUTURE RESEARCH
Initial prototypes for Inside/Outside are complete. Early evaluations show promising results. There is interest in the project from individuals who are usually uninterested in computing gadgets, though detailed user studies need to be conducted to confirm this. New scenarios and prototypes, which address intercommunication between the Inside/Outside handbag and other environmental elements and bag nodes, will be developed, along with continued exploration and exploitation of the ad-hoc networking capabilities of the project, especially in relation to mobility and parasitic deployment of sensor networks within the urban zone. As a wearable everyday object Inside/Outside provides a compelling context for research into public space and urban behavior.

ACKNOWLEDGMENTS
This research is supported by the TRIP project at Trinity College Dublin.

REFERENCES
1. Post, E.R., Orth, M., Russo, P.R., Gershenfeld, N. E-broidery: Design and fabrication of textile-based computing. IBM Systems Journal Vol. 39, Nos. 3&4, 2000, 840-860.
2. Hallnäs, L. and Redström, J. Abstract Information Appliances: Methodological Exercises in Conceptual Design of Computational Things. In DIS2002: Serious reflection on designing interactive systems, pp. 105-116. ACM.
3. Ishii, H. and Ullmer, B. Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. In Proceedings of the Conference on Human Factors in Computing Systems (CHI ’97), Atlanta, March 1997, ACM Press, pp. 234-241.
4. Van Laerhoven, K., Schmidt, A. and Gellersen, H.W. Pin&Play: Networking Objects through Pins. In Proceedings of Ubicomp 2002. Springer.
5. O’Mahony, D. and Doyle, L. Beyond 3G: Fourth-Generation IP-based Mobile Networks. In Wireless IP and Building the Mobile Internet, ed. Dixit, S., Artech House, Norwood, MA, 2002, Chapter 6, pp. 71-86.
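The display path described under "The Handbag" (sensor reading, heated embroidery, thermo-chromic color change) amounts to mapping a reading onto a heater duty cycle. A minimal sketch of that mapping; the linear response and the calibration bounds are hypothetical, not values from the paper:

```python
def heater_duty(reading: float, lo: float, hi: float) -> float:
    """Map a sensor reading (e.g. an air-quality value) linearly onto a
    PWM duty cycle in [0, 1] for the conductive embroidery; the
    thermo-chromic pigment changes color once the patch warms past its
    activation temperature. lo and hi are hypothetical calibration
    bounds for the sensor's expected range."""
    if hi <= lo:
        raise ValueError("hi must be greater than lo")
    # clamp to [0, 1] so out-of-range readings saturate the display
    return min(1.0, max(0.0, (reading - lo) / (hi - lo)))
```

On the actual prototype this mapping would run on the bag's microcontroller, driving the embroidery's heating current.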
i-Beans: An Ultra-low Power Wireless Sensor Network
Sokwoo Rhee, Deva Seetharam, Sheng Liu, Ningya Wang, Jason Xiao
Millennial Net, 201 Broadway, Cambridge, MA 02139
{sokwoo, dseetharam, sliu, nwang, jxiao}@millennial.net, http://www.millennial.net
ABSTRACT
This paper presents a newly developed short-range, ultra-low power wireless device called the “i-Bean”, an ad hoc, self-organizing network protocol, and their application to low data-rate ubiquitous computing applications.

Keywords
Wireless sensor networks, low-power sensor networks, low data-rate networks, i-Beans.

1. INTRODUCTION
Self-organizing, wireless sensor networks have immediate utility in a variety of industrial, medical, consumer and military applications. But several challenges need to be addressed before these applications can be realized.

We think designing a sensor network that is suitable for applications with very different requirements (data rates, reliability, power requirements, cost, etc.) can be too complex a design problem to solve. We have focussed our research on developing a sensor network tailored for applications that require low data rates (< 115 kbps) and limited computing resources. By studying these applications, we find that the following represent the most common modes of acquiring and propagating sensor data:

1. Periodic Sampling (e.g., temperature sensing in a conditioned space)
2. Event Driven (e.g., fire alarms, door and window sensors)
3. Store-and-Forward (sensor data can be captured and stored or even processed by a remote node before it is transmitted to the central base station)

To support these applications, we have developed a reliable and ultra low-power sensor network platform called the i-Bean network. The system details are presented next.

Figure 1: i-Bean Network. (E - Endpoint, R - Repeater, G - Gateway)

2. SYSTEM DETAILS
As shown in Figure 1, the i-Bean network is composed of three types of devices that are interconnected using RF links. The devices are:

1. i-Bean (or Endpoint) - These are the devices that are directly connected to sensors and embedded in the operating environments. They are tiny (25 x 15 x 5 mm) and power efficient. Each endpoint provides four 8-bit analog input channels, four digital I/O channels, and a UART port for interfacing with sensors and actuators. Multiple sensors/actuators can be connected to an endpoint.

2. Repeater (or Router) - Repeaters extend the transmission range of endpoints. Routers are small (56 x 33 x 5 mm). They consume more power than endpoints as they remain active all the time.

3. Gateway (or Base station) - The gateway is also compact (64 x 51 x 5 mm). It serves as the gateway between the i-Bean network and host computers. A base station can be connected directly to an RS-232 port of a host computer and gets power from it. While there can be multiple repeaters and endpoints, there is only one gateway in an i-Bean network.

2.1 User Interface
We have developed a simple monitoring program that runs on host computers. This program can be used to monitor the state of i-Bean networks and modify various operating parameters of i-Beans such as the sampling rate, digital input-output channels, ADC and DAC channels, etc.

2.2 Significant Features
The significant features of this system are power efficiency and a robust networking protocol. They are described in the following sections.

2.2.1 Power Efficiency
Power efficiency is a critical factor in wireless sensor networks. Although power consumption must be minimized at all points in the system, power consumed by endpoints must be optimized to a higher degree since there are more endpoints in the network than any other
device and also replacing their batteries would be more difficult, as they could be deployed in inaccessible operating environments.

We employ the following techniques to optimize power consumed by i-Beans:

• Dual Processors - Each endpoint has two processors: 1. a high speed processor that usually executes tasks related to RF circuitry; 2. a low speed processor that usually executes conventional computing and I/O tasks. A process called the coordinator running on one of these processors allocates tasks in such a way that tasks are run on the slower of the two processors and the unused processor is placed in sleep mode. A substantial amount of power is saved by putting the high speed processor in sleep mode for most of the time.

• Heterogeneous Nodes - Endpoints, repeaters and gateways perform totally different functions. Endpoints can either be source or destination of network data, but cannot forward data for any other nodes. This frees endpoints from active listening and they can conserve power by being in sleep mode while not communicating or computing. The repeaters are solely responsible for routing data in the network. Further, i-Beans conserve power by transmitting low-power signals; the repeaters in the vicinity forward their packets to the destination using high power signals.

• Bottom-Up Networking - Endpoints do not waste precious power listening to periodic beacon signals; instead they stay in power saving mode most of the time and wake up occasionally according to their own communication schedule.

Please see our paper [4], which focuses on power conservation strategies, for complete details.

2.2.2 Robust Network
The devices in the i-Bean network organize themselves into a network and reconfigure themselves if there is any change in the network. The network is self-organizing, self-healing and yet power efficient. As shown in Figure 1, the topology of the i-Bean network is a star-mesh hybrid. This hybrid topology takes advantage of the power efficiency and simplicity of the star topology for connecting i-Beans to routers, and of the reliability and reach of mesh networks for interconnecting routers, to achieve fault tolerance and range.

We also utilize several other innovative techniques such

Dust [2], BTnodes [1], and Pushpin Computing [3]. The i-Bean network is different from these platforms in the following respects:

1. These systems are composed of homogeneous nodes (identical hardware) that perform specialized functions at runtime by using different software, whereas the i-Bean network is composed of three different types of devices. The heterogeneous system makes it possible to assign complex functionality to routers and to simplify endpoints, thereby reducing their power consumption.

2. They intend to be general purpose sensor networking platforms, whereas the i-Bean network is tuned for low data-rate applications.

3. Their end nodes are capable of performing relatively complex computations. We use endpoints only to interface with sensors and actuators.

4. DISCUSSIONS AND FUTURE WORK
From our preliminary studies, we find that power consumption in i-Bean networks is extremely low. For instance, when powered by a small coin battery (CR2032) with a capacity of 220mAh, the average current consumed by an i-Bean is approximately 100 µA when the sampling rate is one sample per second, and therefore the battery will last for about 80 days. If the sampling rate is decreased to one sample per 120 seconds, average current consumption drops to 1.92 µA, increasing the battery life to about 13.1 years.

We need to perform more experiments to understand the impact of our design decisions and tradeoffs when the network is extremely large (> 1000 nodes), since even simple protocols and algorithms can exhibit surprising complexity at scale.

We are also working on further optimizing our algorithms, protocols and hardware.

REFERENCES
[1] J. Beutel et al. Bluetooth smart nodes for mobile ad-hoc networks. Technical report, Swiss Federal Institute of Technology, Zurich, Switzerland, 2003.
[2] J. Kahn et al. Next century challenges: Mobile networking for smart dust. In MobiCom ’99, pages 217-224, 1999.
[3] J. Lifton et al. Pushpin computing system overview: A platform for distributed, embedded,
as generating true random numbers from RF noise, pro- ubiquitous sensor networks. In International
gressive search (devices search using short messages and Conference on Pervasive Computing, 2002.
employ complete messages only after establishing con-
nections) etc to increase reliability of these networks. [4] S. Rhee and et al. Strategies for reducing power
Please see the publications on our website for further consumption in low data-rate wireless sensor
details. networks. Submitted to ACM Hotnets 2003.

3. RELATED WORK 1
Any number more than 10 years may be meaningless, since
Researchers have developed several wireless sensor net- the battery shelf life itself may be less than the computed
working platforms. A few prominent ones are Smart time.
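The battery-life figures above follow from a simple capacity-over-current ratio. A quick sketch (a hypothetical helper, not part of the i-Bean firmware; it ignores self-discharge, which is exactly why the footnote caps meaningful estimates at the battery's shelf life):

```python
def battery_life_days(capacity_mah: float, avg_current_ma: float) -> float:
    """Ideal battery life in days: capacity divided by average draw."""
    return capacity_mah / avg_current_ma / 24.0

# CR2032 (220 mAh) at a 1.92 uA average draw: about 13.1 years,
# matching the figure reported above.
years = battery_life_days(220, 1.92e-3) / 365.25
```

At a 100 µA average draw the same formula gives roughly 92 days; the 80-day figure reported above is consistent with a somewhat derated usable capacity.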

A Rule-based I/O Control Device for Ubiquitous Computing
Tsutomu Terada, Masahiko Tsukamoto, Tomoki Yoshihisa, Yasue Kishino, Shojiro Nishio
Grad. School of Information Science and Technology, Osaka University
Keisuke Hayakawa, Atsushi Kashitani
Internet Systems Research Laboratories, NEC Corp.
ABSTRACT
In this paper, we describe a rule-based I/O control device for constructing ubiquitous computing environments, in which we can acquire various services using embedded computers anytime and anywhere. The capability of our device is very limited; however, it has the flexibility to change its function dynamically by applying rule-based technologies to describe the behavior of the device. We design the behavior description language and develop a prototype of this device.

Keywords
I/O Control, ECA Rule, Active Database

INTRODUCTION
In this paper, we propose a new computing style using rule-based I/O control devices for the realization of ubiquitous computing environments, where we can acquire various services with embedded computers anytime and anywhere. In ubiquitous computing environments, the following three characteristics are required of computers:

(1) Autonomy: computers process automatically without human operations

(2) Flexibility: computers are applied for various purposes

(3) Organic cooperation: complex behaviors are achieved by organic coordination of multiple computers

We propose rule-based ubiquitous computing to satisfy these three characteristics.

RULE-BASED UBIQUITOUS COMPUTING
Generally, a person comprehends an event in the real world as a causal relation. Therefore, we apply this principle to describe the behaviors of ubiquitous computers by using ECA rules as the programming language. An ECA rule consists of the following three parts:

• EVENT (E): the occurring event
• CONDITION (C): conditions for executing actions
• ACTION (A): operations to be carried out

ECA rules have been used for describing the behaviors of an active database. An active database is a database system that carries out prescribed actions in response to events generated inside or outside of the database. Since system behaviors are expressed by a set of rules, system functions can be changed or customized easily by adding, deleting, or modifying rules.

In conventional active databases, database operations such as SELECT, INSERT, DELETE, and UPDATE are considered events and actions. Since ubiquitous computers may have little processing power and small memory, we simplify the language specification of the ECA rule while keeping the capability to fulfill various requirements in ubiquitous computing environments.

As shown in Figure 1, we suppose that various sensors and devices are connected to our device. The device evaluates inputs from these sensors and devices, and outputs information to connected devices. With this assumption, we defined the events and actions as shown in Tables 1 and 2.

[Figure 1. A Supposed Ubiquitous Computer: the device connects sensors (e.g. a button) and output devices (e.g. a buzzer, an LED) with a PC, appliances, and other computers via a wireless communication device.]

Table 1. Events
Name     | Contents
RECEIVE  | Data reception via the serial port
TIMER    | Firing a timer

Table 2. Actions
Name          | Contents
OUTPUT        | On/Off control of output ports
OUTPUT_STATE  | On/Off control of state variables
TIMER_SET     | Setting a new timer
SEND_MESSAGE  | Sending a message
SEND_COMMAND  | Sending a control command
HARDWARE      | Hardware control

Table 3. Commands for the SEND_COMMAND action
Name        | Contents
ADD_ECA     | Adding a new ECA rule
DELETE_ECA  | Deleting specific ECA rule(s)
REQUEST_ECA | Requesting specific ECA rule(s)

As an example of ECA rules, we show the door-buzzer rules in Figure 2. These rules represent a service that sounds a buzzer if a user leaves the door open for more than 5 seconds. Rule1 detects the door opening and sets a 5-second timer. Rule2 sounds the buzzer when the timer fires. If the door is closed within 5 seconds, Rule3 resets the timer.

RULE1   E:         C: I1=0, S1=0   A: S1=1, TIMER(5sec)
RULE2   E: TIMER   C:              A: O1=1
RULE3   E:         C: I1=1, S1=1   A: S1=0, TIMER(0), O1=0

I1: input from the door sensor; S1: state; O1: output to the buzzer
Figure 2. An example of a set of rules

PROTOTYPE DEVICE
We developed a prototype of the rule-based I/O control device, as shown in Figure 3.

[Figure 3. A Prototype of Rule-based I/O Control Device]

This device consists of two parts: the core-part (34 mm), which has a microprocessor (PIC16F873), and the cloth-part (59 mm), which has a Li-ion battery and connectors for attaching sensors and devices. As shown in Figure 4, the core-part has 6 input ports (IN1-6), 12 output ports (OUT1a-6a, 1b-6b), 6 power-supply ports (VCC), and 2 serial ports (COM1-2). Figure 5 shows an example of connections between prototype devices, sensors and other devices.

[Figure 4. I/O Ports of the Prototype Device]

[Figure 5. An Example of Connections]

CONCLUSION
In this paper, we designed and developed a rule-based I/O control device for ubiquitous computing. Using our devices, we can construct a ubiquitous computing environment based on a rule-based architecture, as shown in Figure 6.

[Figure 6. The Ubiquitous Computing Environment with Rule-Based I/O Control Devices: multiple devices (UCs), sensors, actuators and users are linked through chains of events, conditions and actions.]
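The door-buzzer rules above can be sketched as a minimal ECA interpreter. This is a hypothetical Python sketch for illustration only; the `Rule` and `fire` names are ours, and the real device executes its rules on a PIC microcontroller:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    event: Optional[str]            # EVENT; None = checked on any input change
    cond: Callable[[dict], bool]    # CONDITION over inputs and state
    act: Callable[[dict], None]     # ACTION mutating state/outputs/timer

def fire(rules, env, event=None):
    """Run the ACTION of every rule whose EVENT and CONDITION match."""
    for r in rules:
        if r.event == event and r.cond(env):
            r.act(env)

# I1: door sensor input, S1: state variable, O1: buzzer output
env = {"I1": 0, "S1": 0, "O1": 0, "timer": None}
rules = [
    # Rule1: door opened -> set the state, start a 5 s timer
    Rule(None, lambda e: e["I1"] == 0 and e["S1"] == 0,
         lambda e: e.update(S1=1, timer=5)),
    # Rule2: timer fired -> sound the buzzer
    Rule("TIMER", lambda e: True, lambda e: e.update(O1=1)),
    # Rule3: door closed -> reset state, cancel timer, silence buzzer
    Rule(None, lambda e: e["I1"] == 1 and e["S1"] == 1,
         lambda e: e.update(S1=0, timer=None, O1=0)),
]

fire(rules, env)           # door currently open: Rule1 arms the timer
fire(rules, env, "TIMER")  # 5 s elapsed: Rule2 turns the buzzer on
```

Closing the door (setting I1 to 1) and re-evaluating would trigger Rule3, which resets the state and silences the buzzer, mirroring the behavior described for Figure 2.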

Smart Things in a Smart Home
Elena Vildjiounaite, Esko-Juhani Malm, Jouni Kaartinen, Petteri Alahuhta
Technical Research Centre of Finland
Kaitovayla 1, Oulu, P.O.Box 1100, 90571 Finland
{Elena.Vildjiounaite, Esko-Juhani.Malm, Jouni.Kaartinen, Petteri.Alahuhta}@vtt.fi

ABSTRACT
This work presents a prototype context-aware system for household applications built up from a number of everyday objects augmented with sensing, communication and computational capabilities. The main challenges in making everyday objects smart are raised by the limited computing resources of the objects and mobile devices and the need to deal with large quantities of objects. The system presented here deals with these problems by adding interaction capabilities and organising the objects into temporal collectives according to the current task, so that each collective fulfils its task independently and communicates with the mobile device only for the presentation of results.

Keywords
Smart objects, sensing, context-awareness, interaction

INTRODUCTION
The constantly decreasing size and price of computation and communication hardware will soon allow it to be embedded into literally any artefact, but its computational capabilities will be very limited, while the number of smart artefacts will be large. This work presents a system prototype for household applications intended for mobile users, since this does not need a powerful central computer. The smart objects themselves have very little computing power, as the application runs on PIC microcontrollers, but they are nevertheless able to work as a group and to make conclusions about the joint context of the group. Thus, a central computer is needed mainly as a means of communication between the user and the smart objects. The capability for involving the user in the resolution of ambiguities helps to provide services with a simple system configuration. The system is intended to complement future smart environments capable of learning the patterns of their users' lives.

APPLICATION SCENARIO
When we start to think about what kind of support we would like to have from computing systems in order to make our everyday interactions with personal belongings easier, we find several different modes of interaction. First, we address objects as individuals (e.g. "Where is my passport?"). Second, we address objects by certain features (e.g. "Which spices do we have at home?"). Third, we spend a lot of time organising our things into temporal sets. Examples include: finding which products have use-by dates that expire within a few days; finding low-fat products; finding the ingredients for a certain recipe; finding which parts of a suit are waiting to be washed; or helping to collect all the things needed for a journey and checking continuously that no items are lost.

SYSTEM DESCRIPTION
The system prototype was built by attaching generic hardware called Smart-Its [1] (developed for research purposes in the Smart-Its project) to everyday objects (see Fig. 1).

[Fig. 1 A smart object and a "journey" application scenario]

The user can exchange information with the objects via a central node (a desktop computer or Pocket PC) with a Smart-It attached to it via a serial cable. The objects can work as a group and individually. The central node has a list of tasks, and lists of items corresponding to each group task, in its memory. After the user has edited or confirmed the task and the items involved, the task-related information and the list of items are broadcast by radio. In the case of group work, the items with IDs on the list become members of the group, which have to determine whether they all satisfy the task requirements. They try to detect this by exchanging and analysing radio messages containing the items' IDs and their own context data. The item with the freshest battery broadcasts the result of the analysis.

CONTEXT RECOGNITION
In group work each object has first to detect its own context, compare it with the task requirements and broadcast the result in the form of member_data messages (see Fig. 2), which contain the object's ID, energy and a task-dependent symbolic context value. An object's own context consists of its movement type, location and context attributes such as its class (food, clothes, container etc.) and features which are important in this respect (e.g. use-by date and percentage of fat for food, contents and size for a container). In some tasks an object can decide for itself whether it is a "good" or "bad" item by comparison of its own

context with the task requirements. For these tasks the symbolic context value is simply this decision. (E.g. for the task of finding the parts of a business suit, an object is "bad" if it is waiting in the bathroom to be washed; the decision is made based on location context. Similarly, a food product can be "bad" if it contains more fat than specified in the task requirements.) In the "journey" task the objects cannot decide whether they are "good" or "bad" at this stage but send their movement type as the context value.

[Fig. 2 Tasks of each smart object]

The next step for each object is to compare the contexts of all group members. This results in the creation of a list of "bad" IDs (objects which are absent or fail to satisfy the task requirements) and the choice of a speaker (a decision on whether the item should send this information by radio itself or let another item send it). For the "journey" task, objects can decide which are "bad" (forgotten) with greater or lesser certainty, depending on the user's preferences. Objects are considered "bad" with a high degree of certainty in two cases: 1) after disappearing from the communication range of the other group members; 2) if the movement type of several other group members is "shaking" while they have a different movement pattern. Objects are considered "bad" with less certainty if they stay in the same place while the other group members are leaving. In this case false alarms are more probable, but both this and detection by the "shaking" movement type help to identify missing objects before they pass out of the communication range.

The energy awareness of Smart-Its is based on the fact that all boards have an identical program and the battery status is affected mostly by the number of temporal sets in which the object has taken part and the number of messages sent. Choice of a speaker (Fig. 2) means that objects' conclusions (result_data messages) are sent by the object with the freshest battery, either according to timing requirements specified in the task description or upon shaking of the objects. Each object decides for itself whether it should send a result_data message or not. If some objects are out of the communication range of other objects, they also send result_data messages with their own conclusions, and the central node summarises the received messages.

Since the system is intended to be deployed everywhere in an ad-hoc manner, and since the computing resources of smart objects are very limited, the system includes certain interaction capabilities [3] in order to help the user resolve ambiguous situations.

1. The items know of special situations which increase the certainty of context detection and help to choose the moment for receiving the system's opinion. For physical objects, one such situation is shaking, and it is very easy to distinguish items which are simultaneously shaken hard [2]. Shaking helps to give an alarm at the right moment, much earlier than if the system were to wait until one or more objects disappeared from the communication range. Further, the system would normally be silent when nothing is missing, but it sends an "OK" message upon shaking.

2. The system includes explanation capabilities intended to correct both the user's mistakes and its own. Objects send explanation data either upon request from the central node or upon shaking. Possible sources of mistakes are a change in the usual contents of a container or the effect of reflection on the beacons' communication range. It is sometimes necessary to move a beacon half a metre to tune the system, and the user needs to know which beacon's data caused the error in the system. This option also facilitates the addition of new objects to a database.

3. The system allows the user to add or remove items from a group at any moment. This helps e.g. to deal with objects left somewhere intentionally or bought after the user has left home.

CONCLUSIONS
The group work of smart objects with limited computing resources was implemented by enabling each object to make conclusions about the joint context of the group without comparing them with the opinions of the other group members. According to our tests, members' opinions differ mainly when the situation changes (e.g. items start or stop moving); however, the addition of the ability to analyse the conclusions of other group members could be useful. Group work by smart objects reduces the workload of a central node, which is important if there are many objects and many central nodes and all of them have limited resources.

REFERENCES
1. Beigl, M., Gellersen, H.: Smart-Its: An embedded platform for Smart Objects. Smart Objects Conference 2003, Grenoble, France.
2. Holmquist, L.E., Mattern, F., Schiele, B., Alahuhta, P., Beigl, M., Gellersen, H.-W.: Smart-Its Friends: A Technique for Users to Easily Establish Connections between Smart Artefacts. Ubicomp 2001.
3. Vildjiounaite, E., Malm, E.-J., Kaartinen, J., Alahuhta, P.: A Collective of Smart Artefacts Hopes for Collaboration with the Owner. HCII 2003.
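The group-work protocol described above (member_data exchange, "bad"-list construction, and speaker election by freshest battery) can be sketched as follows. This is an illustrative Python sketch under an assumed message layout; `MemberData`, `bad_ids` and `speaker` are our names, not those of the Smart-Its firmware:

```python
from dataclasses import dataclass

@dataclass
class MemberData:
    obj_id: str
    energy: int    # remaining battery estimate
    context: str   # task-dependent symbolic context value, e.g. "good"/"bad"

def bad_ids(group, received):
    """IDs that are absent from the group or report a failing context."""
    seen = {m.obj_id for m in received}
    absent = [i for i in group if i not in seen]
    failing = [m.obj_id for m in received if m.context == "bad"]
    return sorted(absent + failing)

def speaker(received):
    """The member with the freshest battery broadcasts the result."""
    return max(received, key=lambda m: m.energy).obj_id

# A "journey" group in which the camera never answers:
group = ["passport", "ticket", "camera"]
received = [MemberData("passport", 80, "good"),
            MemberData("ticket", 95, "good")]
```

Here the camera lands on the "bad" list, and the ticket, having the most remaining energy among the responding items, is elected to speak.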

Resource Management for Particle-Computers
Tobias Zimmer, Frank Binder, Michael Beigl, Christian Decker and Albert Krohn
Telecooperation Office (TecO) University of Karlsruhe
Vincenz-Priessnitz-Strasse 1, 76131 Karlsruhe, Germany
http://www.teco.edu
{zimmer,binder,michael,cdecker,krohn}@teco.edu

ABSTRACT
We present a system for real-time management of the resources of Particle-Computers. Particle-Computers are a type of Smart-Its - a Ubiquitous Computing platform equipped with sensing, computing and communication hardware. Our management system provides the developer with easy access to the real-time features needed in almost every application for Ubicomp environments that is based on periodic or sporadic evaluation of sensor values.

Keywords
Real-time, resource management, Particle-Computer, developer support

INTRODUCTION
Particle-Computers (Figure 1) are technically advanced Smart-Its [1], a Ubiquitous Computing platform that was developed in our lab at TecO under the roof of the Smart-Its project [2]. Like most platforms for context-aware computing, Particle-Computers feature a number of input channels, including different sensors and inbound communication, a computation unit for analyzing and processing contexts, and output channels such as actuators and outbound communication.

[Figure 1: Particle-Computer]

Many applications in Ubiquitous Computing involve data gathering or the provision of newly generated context information at predefined periodic time intervals, as well as sporadically when changes in the environment are detected. These parallel functions, like sampling different sensors, computing new contexts and communicating, can best be implemented in separate tasks. Here it is more important to be able to guarantee a maximum time for an operation to complete, like taking a sample from a sensor, than just to complete every operation as fast as possible. So we developed the P-RMS (Particle Resource Management System) to provide real-time scheduling of the resources of Particle-Computers to the software developer. The system is intended to manage the execution of multiple (real-time) tasks with a minimal overhead on our Ubiquitous Computing platform.

Software Architecture
To provide maximum performance given the limited computing power and the small amount of available memory, the software of the P-RMS was split into two main components: a runtime environment, implemented on the Particle-Computer platform, and a development tool running on a standard personal computer.

P-RMS DEVELOPMENT TOOL
The P-RMS development tool takes over some of the functionality of a real-time resource management system, namely the parts that can be applied at development time of the software for Particle-Computers. This is reasonable due to the resulting reduction of the load on the Particle-Computers at runtime. Functions that were transferred to a powerful personal computer are the feasibility computation for a given set of tasks, the check of the reservation of shared resources other than the processor, and the generation of a runtime configuration for the Particle-Computer program.

To achieve maximum flexibility it is possible to feed the development tool with a configuration containing different real-time task-sets that may alternatively be executed on the Particle-Computer. Thus we overcome the disadvantage of a single predefined task-set at development time. For all task-sets the feasibility computation is done separately. This allows us to switch between the different sets of tasks at run time on the Particle-Computers.

The scheduling we perform for the task-sets is an earliest deadline first (EDF), non-preemptive scheduling strategy without inserting idle times and using dynamic priorities for tasks. Multiple sets of real-time tasks and one background task can be scheduled. The system supports temporal as well as permanent resource reservations. Schedulability computation is performed for non-concrete task-sets containing sporadic and periodic tasks according to the formulas of Zeng and Shin [3], which were adapted to our special requirements.

P-RMS RUNTIME ENVIRONMENT
The runtime environment of the P-RMS includes the scheduler, a real-time clock (RTC) and management routines for switching between task-sets and single tasks. It needs about 5.5 Kbytes of program memory; the exact amount of data

memory required depends on the number of tasks in all task-sets and the maximum number of instances of these tasks. It can be computed as

(96 + MaxNumberOfInstances * 4 + NumberOfTasks * 4) bytes.

This is feasible, as the Particle-Computers are equipped with 32 Kbytes of program memory and 1536 bytes of data memory, leaving enough resources for user applications; e.g. a typical test configuration we used, containing 4 real-time tasks and a background task, needs about 132 bytes of data memory.

The P-RMS runtime environment provides various functionalities to the applications running on the Particle-Computers. This includes the management of periodic tasks, by setting the period length and ensuring that they are started periodically. Furthermore, the runtime environment creates sporadic tasks based on input events and sets their starting time. The runtime environment also includes the service routines for switching between the different predefined task-sets.

Scheduler
The scheduler is responsible for assigning the processor and the other allocated system resources to activated tasks in the order of their priority, and for running the background task when the processor is not assigned to a real-time task. Priorities of the real-time tasks are assigned following the EDF scheduling strategy. The P-RMS scheduler works very efficiently due to the fact that the schedulability tests are performed at development time of the application software. This guarantees that only schedulable task-sets are contained in any given application.

[Figure 2: Assignment of resources performing independent or collective allocation - for two tasks, timelines of the processor and of two resources (resource1, resource2) contrast independent resource allocation with collective resource allocation.]

Resource assignment in general can be performed independently for each available resource or collectively for all resources allocated by one task (see Figure 2). The advantage of independent resource assignment is that any given resource is allocated only as long as it is needed. This enables maximum parallelism of tasks. The disadvantage of that approach is that the schedulability of every resource has to be checked separately and dependencies between reservations have to be handled explicitly. In the P-RMS we decided to go for collective resource assignment, due to the fact that only one processing unit is available and no virtual parallelism of tasks can be introduced when performing non-preemptive scheduling. Details on all design decisions in the P-RMS can be found in [4].

IMPLEMENTATION AND TESTING
The implementation of the P-RMS followed the "test first" strategy known from extreme programming [5]. Using this method, tests for the functionality of every unit of code are designed and implemented prior to the implementation of the code unit itself. This results in an early detection of errors in the implementation.

EVALUATION AND FUTURE WORK
The evaluation of the P-RMS is still in progress. We were able to determine some areas where further improvements in the performance and memory consumption of the system may be possible. E.g. one major improvement will be a further reduction of the runtime of the scheduler on the Particle-Computers. The maximum runtime of the scheduler depends on the maximum number of instances of tasks in a task-set. This maximum is seldom reached, so performance enhancements can be achieved by better prediction of those maxima. Additionally, a simplification of the RTC structure could reduce the runtime of an RTC query from 299 cycles to 26 cycles, at the expense of some loss of comfort in reading the current time and date. Another improvement we have already identified for future implementation is the introduction of a hierarchical ordering of the resources. This will simplify the reservation of compound resources.

REFERENCES
1. Michael Beigl, Tobias Zimmer, Albert Krohn, Christian Decker, and Philip Robinson. Smart-Its - communication and sensing technology for ubicomp environments. Technical Report ISSN 1432-7864 2003/2, April 2003.
2. The Smart-Its Project. http://smart-its.teco.edu. 2003.
3. Q. Zeng and K. G. Shin. On the ability of establishing real-time channels in point-to-point packet-switched networks. IEEE Transactions on Communications, vol. 42(2/3/4):1096-1105, February/March/April 1994.
4. Frank Binder. Ressourcenverwaltungssystem für Particle-Computer. Master thesis at TecO, University of Karlsruhe. May 2003.
5. Ron Jeffries, Ann Anderson and Chet Hendrickson. Extreme Programming Installed. Addison Wesley. ISBN 0-201-70842-6, October 2000.
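The non-preemptive EDF strategy described above can be sketched as follows. This is an illustrative Python sketch, not the P-RMS implementation: jobs are given as (release, deadline, wcet) tuples, each job runs to completion once started, and the scheduler never idles while a job is ready, matching the "without inserting idle times" property stated above:

```python
import heapq

def edf_schedule(jobs):
    """Non-preemptive EDF over (release, deadline, wcet) jobs.
    Returns a list of (start_time, job_index) in execution order."""
    n = len(jobs)
    pending = sorted(range(n), key=lambda i: jobs[i][0])  # by release time
    ready, order, t, i = [], [], 0, 0
    while i < n or ready:
        # move all jobs released by time t into a heap keyed by deadline
        while i < n and jobs[pending[i]][0] <= t:
            j = pending[i]
            heapq.heappush(ready, (jobs[j][1], j))
            i += 1
        if not ready:                 # nothing ready: jump to next release
            t = jobs[pending[i]][0]
            continue
        _, j = heapq.heappop(ready)   # earliest deadline first
        order.append((t, j))
        t += jobs[j][2]               # run to completion (non-preemptive)
    return order
```

For example, with jobs [(0, 10, 2), (0, 5, 2), (1, 4, 1)], the job with deadline 5 runs first, the late-arriving job with deadline 4 runs next, and the deadline-10 job runs last. A dedicated background task, as in the P-RMS, would simply run whenever the ready heap is empty.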

Using a POMDP Controller to Guide Persons With
Dementia Through Activities of Daily Living
Jennifer Boger and Geoff Fernie, Centre for Studies in Aging, 2075 Bayview Ave., Toronto, Canada, M4N 3M5, +1 416 480 5858, [email protected]
Pascal Poupart, Dept. of Computer Science, University of Toronto, 10 King's College Rd., Toronto, Canada, M5S 3G4, [email protected]
Alex Mihailidis, Simon Fraser University, 2628-515 West Hastings St., Vancouver, Canada V6B 5K3, [email protected]

ABSTRACT
Researchers at the Centre for Studies in Aging and at Simon Fraser University are developing ubiquitous assistive technology to help persons with dementia complete routine activities. To ensure that the system is useful, effective, and safe, it must be able to adapt to the user and guide him/her in an environment that may not be fully observable. This paper discusses the merits of using partially observable Markov decision process (POMDP) algorithms to model this problem, as POMDPs are able to provide robust and autonomous control under conditions of uncertainty. A POMDP controller is being designed for the current prototype, which guides the user through the activity of handwashing.

Keywords
POMDP, dementia, Alzheimer disease, ADL, assistive technology, cognitive orthosis.

INTRODUCTION
It is estimated that 1 in 3 people over the age of 85 has dementia, with Alzheimer disease (AD) accounting for 60-70% of cases. The number of Americans with AD is estimated at 2.3 million and expected to reach 14 million by 2050 if present trends continue [1,2]. At the onset of dementia, a family member will often assume the role of caregiver. However, as dementia worsens, the caregiver will experience greater feelings of burden, which frequently result in the care recipient being placed in a long-term care facility. A solution to relieve some of the financial and physical burden placed upon caregivers and health care facilities is a ubiquitous, autonomous system that will allow aging in place by improving the quality of life for both the care recipient and their caregiver.

People with advanced dementia may have difficulty completing even simple activities of daily living (ADL) and require assistance from a caregiver to guide them through the steps needed to complete an activity. Examples of ADL are handwashing, dressing, and toileting. While there have been several cognitive aids designed to assist ADL completion, all of them require explicit feedback from the user, such as a button press, to indicate that a step has been completed. This makes them unsuitable for persons with moderate-to-severe dementia, as this group does not possess the capacity to learn the required interactions.

OBJECTIVE
Our objective is to design a more robust control system by using partially observable Markov decision process (POMDP) algorithms to model the activity of handwashing. We anticipate that using POMDPs will enable the device to guide users more effectively and offer a model that can be readily expanded to more complex activities.

APPROACH
The current prototype, dubbed COACH, uses colour-based tracking software to follow the user's hand position through a camera mounted over the sink as the user performs the ADL of handwashing. Figure 1 depicts six steps of handwashing and the various alternative pathways by which the user could correctly wash their hands.

[Figure 1: Acceptable sequences of steps required to complete the ADL of handwashing (Activity started; Use soap / Turn on water; Wet hands; Rinse hands; Turn off water / Dry hands; Activity finished). Note wetting hands is considered optional in the prototype as liquid soap is used.]

Our first artificially intelligent (AI) agent employed neural networks to associate hand position with corresponding steps, and a simple vector search through the taxonomy

constructed from Figure 1. This identified what step in the
ADL the user is attempting to complete and if the user's
actions were correct. If the user seemed unsure of the next
step or s/he attempted an inappropriate action, COACH
played an audio cue to guide the user to the next
appropriate step. If the user did not respond to prompts,
the caregiver was called to intervene. COACH has been
tested through clinical trials involving AD inpatients in a
retrofitted washroom at Sunnybrook and Women's
College Hospital's long term care facility [4]. It was found
to significantly decrease the number of caregiver
interventions, by about 75%.

The current prototype assumes full observability of its
washroom environment. This simplification does not
account for inherent uncertainty in step identification
introduced through factors such as instrumentation noise
and obscured views. Upgrading the AI agent to a POMDP-
based controller provides a solution to this problem by
directly modeling the uncertainty. The incomplete and
noisy information provided by the tracking system is
translated into a probability distribution over the possible
conditions of the user and the washroom environment.
This distribution is continuously updated to reflect
observations made by the tracking system as time
progresses. By combining this distribution with a
stochastic model of the user's future behaviour and a cost
function measuring the consequences of playing various
prompts, the POMDP agent is able to optimize the choice
and timing of prompts despite uncertainty. Following the
principles of utility theory, the agent selects the course of
action that minimizes expected future cost based on the
estimate of the user's status (modeled as a probability
distribution). Please see [3] for a more detailed review of
POMDPs. The ability of the COACH system to make
good decisions under uncertainty is especially crucial for
complex ADLs, such as toileting, where observations will
likely be limited and the costs of poor control are high.

DESIGN AND BENEFITS
A POMDP model is being constructed to guide a user
through the ADL of handwashing. COACH is separated
into four modules, as can be seen in Figure 2. POMDP
algorithms will create a central controller that
encompasses the step identification, planning, and action
modules. A great advantage of using a POMDP model is
that it eliminates the requirement of explicit user
feedback, such as a button press, because the agent
autonomously estimates when a step has been completed
through observation of the activity. Another challenging
aspect of this research is to design an effective method of
determining user preferences, as there is no user feedback.
POMDPs provide an excellent solution to this difficulty
by obtaining and incorporating user preferences
autonomously. For example, by keeping track of which
cues have been observed to be the most effective in the
past, the system can not only tailor itself to the user, but
also be sensitive to changes in user performance that
accompany the progressive nature of AD, and will
accommodate accordingly. The self-tailoring ability of a
POMDP controller also eliminates the need for extensive
interaction with the caregiver, making this technology
user-friendly.

SIGNIFICANCE AND OUTCOMES
Results from this research are applicable to the
development of ubiquitous intelligent monitoring and
prompting for all people with cognitive limitations,
including those with traumatic brain injuries, learning
disabilities, and Alzheimer's disease. Successful
application of POMDPs to the COACH handwashing
problem would represent one of the most advanced
applications of this technology.

ACKNOWLEDGMENTS
This research has been funded in part by the Alzheimer
Society of Canada.

REFERENCES
1. Alzheimer's Association. Caregiver network helps
temper significant hardship in labouring for
Alzheimer's relatives. Primary Psychiatry, 3, (1996),
92-94.
2. Cummings, J., and Cole, G. Alzheimer Disease.
Journal of the American Medical Association, 287,
18 (May 2002), 2335-2338.
3. Kaelbling, L., Littman, M., and Cassandra, A.
Planning and acting in partially observable stochastic
domains. Artificial Intelligence, 101, (1998), 99-134.
4. Mihailidis, A., Barbenel, J., and Fernie, G. The use of
artificial intelligence in the design of an intelligent
cognitive orthosis for people with dementia. Assistive
Technology, 13, (2001), 23-39.

Image Analysis
• Location of user and hand position
• Location of task specific objects
Task Identification
• Determines which step the user is attempting
Planning
• Decides if the user is attempting an appropriate task
• Which task the user should be prompted to attempt
Action
• Whether or not a cue should be played
• Level of detail of the cue
Figure 2: Interaction of modules that constitute the COACH
controller.
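To make the POMDP reasoning in the COACH paper concrete, the belief update and cost-minimizing prompt selection can be sketched as follows. All states, probabilities, and costs here are invented for illustration; the actual COACH model is far richer and plans over future steps rather than minimizing only the one-step expected cost.

```python
# Sketch of POMDP-style belief tracking and prompt selection for a
# simplified handwashing task. Every number below is hypothetical.

STATES = ["needs_soap", "needs_rinse", "done"]
PROMPTS = ["none", "cue_soap", "cue_rinse"]

# P(next_state | state): the stochastic model of the user's behaviour.
TRANSITION = {
    "needs_soap":  {"needs_soap": 0.7, "needs_rinse": 0.3, "done": 0.0},
    "needs_rinse": {"needs_soap": 0.0, "needs_rinse": 0.6, "done": 0.4},
    "done":        {"needs_soap": 0.0, "needs_rinse": 0.0, "done": 1.0},
}

# P(observation | state): the tracker is noisy, so an observation only
# partially reveals the true step.
OBS_LIKELIHOOD = {
    "hands_at_soap": {"needs_soap": 0.8, "needs_rinse": 0.1, "done": 0.1},
    "hands_at_tap":  {"needs_soap": 0.1, "needs_rinse": 0.8, "done": 0.3},
    "hands_away":    {"needs_soap": 0.1, "needs_rinse": 0.1, "done": 0.6},
}

# Cost of playing each prompt in each state: failing to help a stuck user
# is expensive; prompting unnecessarily is mildly costly.
COST = {
    "none":      {"needs_soap": 5.0, "needs_rinse": 5.0, "done": 0.0},
    "cue_soap":  {"needs_soap": 1.0, "needs_rinse": 6.0, "done": 2.0},
    "cue_rinse": {"needs_soap": 6.0, "needs_rinse": 1.0, "done": 2.0},
}

def update_belief(belief, observation):
    """Predict forward with the user model, then reweight by how well
    each state explains the noisy observation (Bayes rule)."""
    predicted = {s2: sum(belief[s1] * TRANSITION[s1][s2] for s1 in STATES)
                 for s2 in STATES}
    unnorm = {s: predicted[s] * OBS_LIKELIHOOD[observation][s] for s in STATES}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

def choose_prompt(belief):
    """Pick the prompt minimizing expected cost under the current belief."""
    return min(PROMPTS, key=lambda a: sum(belief[s] * COST[a][s] for s in STATES))

belief = {s: 1 / 3 for s in STATES}          # start fully uncertain
belief = update_belief(belief, "hands_at_soap")
prompt = choose_prompt(belief)               # belief now favours "needs_soap"
```

Repeating the update at each tracker frame keeps the distribution current, so no button press is ever needed: the agent's belief, not the user, signals step completion.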

220
The Chatty Environment – A World Explorer for the
Visually Impaired
Vlad Coroama
Institute for Pervasive Computing
Swiss Federal Institute of Technology (ETH) Zurich
8092 Zurich, Switzerland
+41 1 63 26087
[email protected]

ABSTRACT
Ubiquitous computing systems have often suffered the criticism
of providing only marginal value and not justifying the serious
amount of money spent for research in this area [1]. In this
extended abstract, we describe the vision and the prototype of a
ubiquitous computing environment for visually impaired people.
The aim is to help them orient themselves in new, unknown
environments and thereby enable them to lead a more
independent life.

Keywords
Ubiquitous computing, visually impaired.

INTRODUCTION
Everyday Problems for Visually Impaired
Visually impaired people encounter many problems during their
daily routine that sighted people wouldn't necessarily think of.
Take for example shopping in the local supermarket. Thousands
of items, feeling all the same, spread over dozens of shelves, all
the same shape. Visually impaired people will typically only go
shopping to their local supermarket and buy only a few products
in well-known locations. Or think of a modern airport terminal.
Where is the check-in counter for a certain airline? Where does
one collect the luggage after landing? Without external help,
these issues are almost unsolvable for the visually impaired.

The Basic Idea
The common source of these problems is that the world reveals
itself to us mostly over visual stimuli, which are being withheld
from visually impaired people. To cope with some of these
problems, we propose the paradigm of a chatty environment. In
this environment, the world uses an alternative channel, namely
audio, to reveal itself to the user. While walking by, entities in
the environment keep talking to the user, thereby revealing their
existence: "Here is the shelf with milk products, down the next
aisle are the fridges with meat and ice", "Here is track 9, do you
want more information on the departing trains?"

This (at first sight rather naive looking) feature of the system
will probably seem annoying to most sighted people. An
environment talking endlessly to the user sounds like a headache
to many of us, one that we would surely turn off after a few
minutes. However, speaking to members of the Swiss
Association of the Blind, it turns out that for visually impaired
people there can almost never be too many audio stimuli. This is
comparable to the huge amount of visual information sighted
people pick up every second, little of which they really use.
Here, too, it feels far from annoying to continuously receive that
much unnecessary information since one has learned to focus on
the interesting aspects only.

THE SYSTEM
We are currently in the process of building a prototype of the
chatty environment as part of the ETH Zurich campus. The
prototype consists of several components: a large number of
tagged entities in the environment, a world explorer in the form
of a portable device for the visually impaired user, and a tag
reader connected to the world explorer to pick up the tags.

Smart Entities
The objects of the chatty environment are electronically tagged,
either by passive tags – using radio frequency identification
(RFID) technology – or active tags – these could for example be
active RFID tags, Berkeley TinyOS Motes [3], or Smart-Its [4].
The main requirement is that the communication between the
tags and the user device does not need line of sight. Not only do
we want to follow Weiser's vision of a ubiquitous computing
system that works unobtrusively in the background without
requiring explicit interaction [2], we also need to make sure that
a system for the visually impaired does not require the user to
point the portable device at a certain object to trigger an action.
Therefore, infrared beacons are not suited for tagging the
environment objects. In our prototype, we use the Berkeley
Motes.

World Explorer
The portable device carried by the user receives the data
transmitted by the environment objects. It can be either a
stand-alone device carried by the user in her pocket or backpack,
or an extension of the user's cane.
The most important data the smart entities send is their identity,
such as "ticket booth", "escalator", "men's restrooms", "track 9",
or "train to Geneva".
The device we are currently using as world explorer is an iPaq
PocketPC, which will be replaced in a later project phase by a
PDA especially designed for the visually impaired. These
devices have the advantage of providing Braille input and
output.

User Interface
The chatty environment keeps revealing itself to the user until
she chooses to investigate one of the environment's objects. By
pressing a button on the device shortly after an

221
environment object has been presented to her by the device, the
user is capable of selecting this object.
The user is then presented with a standardized audio interface to
the object. In the current implementation, the interface consists
of four options:

Information
By choosing this option, the user receives further information
about the chosen entity. This information is highly dependent on
what kind of object was selected. With a supermarket product,
the information could for example be: "producer", "ingredients
list", and "expiration date". For a train, the information might
be: "final destination", "departure time", "next stop", and "list of
all stops". Some of these points may in turn provide further
details. "Ingredients" may have as subitems "vegetarian
(yes/no)", "organically produced (yes/no)", and "display
complete ingredients list".

Actions
Some of the objects in our chatty environment will allow the
user to take some action on them. One example is a train or bus
allowing the user to open its nearest door. This is a well-known
problem for visually impaired people, for whom it is easy to
miss a bus or train because they are unable to find its doors
during its brief stop at the station.

Leave traces
The user can also decide to leave virtual post-its for herself or
other users on an object. These will typically be audio files
reminding her of something that she noticed the last time passing
by. On a traffic light, for example, one could leave the
information: "Big crossroad ahead, must be crossed very
quickly". Information left like this would be automatically
pushed onto the user's device the next time she passes this
object again.
Our current prototype features only two options for leaving or
hearing a message: leaving messages just for oneself or for
anybody else, and hearing just personal messages or hearing
everybody's messages. This approach obviously needs to be
refined in future versions of the system.

Take me there
By choosing this option, the user is guided to the currently
described entity, e.g., for an item on a sign.

Virtual Information Boards
Sighted people orient themselves in a new and unknown
environment not only by the objects they are able to see. They
also learn about distant or hidden objects through signs. By
mapping visual signs to audio-signs for the visually impaired,
they can learn about objects not only in their immediate
neighborhood, but also further away. To realise this goal, signs
in our chatty environment are enhanced by the same beacons
used by all other objects. But instead of revealing themselves to
the user, these signs tell her about the objects they are pointing
to. By selecting one of these objects, the user can subsequently
be guided there using the "Take me there" interface option.

We are currently working on integrating a navigation feature
using a locally developed location system. The system relies on
the signal strength of WLAN 802.11, Bluetooth and active RFID
tags.

User Input
Currently, the user can only interact with the system by listening
to the list of nearby objects (with support for skipping back and
forth) and then choosing one of the four options described
above. Future versions should also allow the user to actively
search for an environment entity, either using Braille or voice
input. For example, it should be possible to find a pharmacy,
even if it is neither in the immediate neighbourhood, nor on a
virtual signboard.

Communication Issues
There is a huge amount of data to be transferred from the
environment objects to the user device. Since the tags are
typically small devices with limited resources, only the object
identity, some basic information and a hyperlink is stored on the
object itself. By following that link through the device's
Bluetooth or WLAN 802.11 network interface, arbitrary
additional information can be gathered from the wide-area
computing infrastructure. Note that in case of intermittent
connectivity, the world explorer's text-to-speech engine can still
render the human-readable object identity stored directly on the
tag (this could be aided by a dictionary in foreign-language
environments).

Information Filtering and Selection
A challenging issue is choosing which information should be
presented to the user. For example, when entering a shop the
third time, a user might not want to receive the same information
again. A similar problem arises when the user enters an area
with so much information that it cannot be presented in a timely
fashion. These issues of information filtering and selection are
currently under investigation and will be addressed in future
prototypes.

ACKNOWLEDGEMENTS
Jürgen Bohn has contributed many ideas in early stages of the
"Chatty Environment" project, while Jürgen Müller provided
many helpful pointers regarding the daily routine of the visually
impaired.
This work has been funded by the Gottlieb Daimler- and Karl
Benz-foundation, as part of the "Living in a Smart Environment
– Implications of Ubiquitous Computing" project.

REFERENCES
1. Araya, A.A. Questioning ubiquitous computing. Proceedings
of the 1995 ACM 23rd annual conference on Computer
science, 1995, 230-237.
2. Weiser, M. The Computer for the 21st Century. Scientific
American, 265(3), September 1991, 94-104.
3. Berkeley Motes. http://webs.cs.berkeley.edu/tos/.
4. The Smart-Its Project. http://www.smart-its.org/

222
Support for Nomadic Science Learning
Sherry Hsi, Robert J. Semper
Center for Learning and Teaching
The Exploratorium
San Francisco, CA 94123 USA
+1 415 674 2809
[email protected]
[email protected]

Mirjana Spasojevic
Mobile and Media Systems Lab
Hewlett-Packard Labs
Palo Alto, CA 94304 USA
+1 650 857 8655
[email protected]

ABSTRACT
We describe multiple scenarios and design challenges for
nomadic computing tools intended to support informal
science learning and teaching at the Exploratorium, an
interactive science museum in San Francisco.

Keywords
Museum applications, usage scenarios, RFID, handhelds

INTRODUCTION
The I-Guides research project, a collaborative project
between the Exploratorium and HP Labs, investigates uses
of nomadic computing technologies to support informal
science learning and teaching. Specifically, our goals are to
understand ways in which handheld devices and wireless
networks can be designed to support lifelong science
inquiry: learners ask questions, seek explanations, and carry
out personally relevant investigations with museum
exhibits or other learning resources to make sense of
science. Informal learning and teaching occur across
multiple episodes, physical settings, and virtual spaces
which may or may not have the benefits or constraints of
structured classroom-based learning. Building upon our
prior Electronic Guidebook research, we aim to accomplish
the following:
• Create a functional nomadic computing infrastructure
and online personalized delivery system to support
nomadic inquiry.
• Identify an instructional design framework for creating
resources and interactions for informal science learning,
teaching, and community-building capable of being
delivered on multiple devices.
• Conduct user studies exploring the impact of a system
that balances virtual and real-world information on the
learners' use of museum resources before, during, and
after a visit.

AUDIENCE AND USAGE SCENARIOS
The first step in our design process was to identify distinct
audience groups and better understand the needs of these
audiences. Based on discussions, informal interviews, and
focus groups conducted with educators, museum staff,
visitors, and other museum researchers, we identified
general visitors, educators, and museum docents
("Explainers") as three distinct audience groups. Several
versions of sketches and usage scenarios were made,
validated, and refined based on user feedback (Figure 1).

For General Visitors: Capturing and extending a visit
This audience consists of individuals and families who
visit the museum as an enjoyable way to spend leisure time
that has the added benefit of learning something new. Our
prior research has established that this population has
difficulty carrying handheld devices while interacting with
the museum exhibits [2,3]. They prefer that their hands be
free to manipulate the exhibits. An adult in a family group
may want to know more about the scientific phenomena
being demonstrated by the exhibit but often is pulled away
by the children to the next event. Thus, alternative forms of
recording and capturing user experiences at the museum,
such as a smart watch, an RFID card, or a keepsake toy, are
being considered. A token could be used to bookmark an
exhibit, capture a memorable photo on the spot, or track
one's conceptual pathway through the museum. Back
home, the visitor can review additional information on a
computer via a personalized Web page.

For Explainers: Support tool for explanation
Explainers are students and staff (ages 15–20) who help
visitors be better at inquiry and explain science phenomena.
Explainers answer questions about the exhibits, perform
demonstrations, shepherd fieldtrip groups, and help
maintain exhibits. New explainers come to the museum on
a regular basis and their background knowledge about
science and the exhibits varies. A wirelessly connected
handheld was identified as a promising information support
tool, mobile training resource, or workflow organizer to be
used while not directly involved with visitors. Explainers
would use a handheld in the presence of visitors to
remotely control a larger exhibit, as a data collection device
to capture interesting phenomena, or as an appliance to
read a visitor's tag and email the visitor additional
information.

For Educators: Linking schools to museums
Teachers and other educators use Exploratorium resources
to support current school-based curricula. They organize
field trips and spend considerable time on preparation and
follow-up to the museum visit. This group also uses the
Exploratorium for personal professional development as
members of the Teacher Institute program or as veteran

223
teachers charged with training and coaching other teachers
in their school districts. While this audience has some
unique goals, we believe that many of their needs could be
addressed through tools developed for helping Explainers,
as well as tools for capturing and extending museum visits.

DESIGN CHALLENGES
In the process of creating usage scenarios and prototypes,
we identified several design challenges:

Data-driven versus inquiry-driven – One design tension is
what type of learning to support: learning that can occur
because of rich media delivery or learner-centered inquiry
that is supported by careful prompting. Because volumes of
online science content exist, a tendency in the design
process is to focus on data-driven models of learning rather
than providing guidance for learner-driven inquiry, group
gaming activities, or collaborative learning. We plan to
address this issue by conducting studies that compare
different instructional designs with Explainers.

Complex environment – The Exploratorium typically has
several hundred exhibits about science, art, and perception.
Many of the exhibits are noisy and involve sand, water,
electricity, magnetism, heat, or soap. The exhibits are
frequently relocated within the museum as part of a
continual prototyping process. Some exhibits involve
observation or one-handed manipulation to move a knob or
lever, while others involve two-handed manipulation or
whole body interaction. Visitors often complain they are
overwhelmed by the many choices, activities, and noise.
Introducing nomadic computing technologies into this
environment requires deliberate design that doesn't
contribute to the complexity of the environment and
improves the user experience.

Addressing multiple stakeholder interests – Collecting
stakeholders' viewpoints is critical to identifying key
design issues in the scenarios. Stakeholders include the end
users, museum staff, designers, technologists, industry
partners, and others. Listening to stakeholders enables us to
understand the barriers to adopting nomadic computing
tools in the museum. Adoption of a particular solution by
the end user will only happen if we fully understand the
existing context in which the technology is being
introduced.

Addressing these design challenges requires the
development of tools that go well beyond the existing
research on mobile guides [1,5]. We are building on our
prior work which has established the feasibility of the basic
components and the infrastructure. Over a hundred users
have already tested the prototypes, helping us understand
device form factors, interfaces and usability issues for
various audiences [2].

ACKNOWLEDGMENTS
We thank HP Labs and the I-Guides research group at the
Exploratorium, especially Steve Kearsley for his artistry.
I-Guides is supported by the National Science Foundation
under Grant No. 02056654. Any opinions, findings, and
conclusions or recommendations expressed in this material
are those of the authors and do not necessarily reflect the
view of the NSF.

REFERENCES
1. Cheverst, K., et al. Developing a Context-aware Electronic
Tourist Guide: Some Issues and Experiences. In
Proceedings of CHI 2000, pp. 17–24.
2. Fleck, M., et al. From Informing to Remembering:
Ubiquitous System in Interactive Museums. IEEE
Pervasive Computing, April–June 2002, v. 1, no. 2, pp.
13-21.
3. Hsi, S. The Electronic Guidebook: A Study of User
Experiences Mediated by Nomadic Web Content in a
Museum Setting. Journal of Computer-Assisted Learning,
September 2003, Vol. 19, No. 3.
4. Tinker, R. & Krajcik, J.S., eds. Portable Technologies:
Science Learning in Context. Netherlands: Kluwer
Publishers, 2001.
5. Woodruff, A., et al. Electronic Guidebooks and Visitor
Attention. In Proceedings of the International Cultural
Heritage Informatics Meeting, 2001, pp. 437–45.

Figure 1: Sample scenario of informal science learning: general
museum visitor with a smart watch

224
Development of an Augmented Ring Binder
Magnus Ingmarsson, Mikael Isaksson and Mats Ekberg
Department of Computer and Information Science
Linköping University
SE-581 83 Linköping, Sweden
{magin, x02mikis, x02matek}@ida.liu.se
ABSTRACT
The era of ubiquitous computing gives rise to a variety of new
technology. We have developed an augmented binder that
supports document handling and workflow. This binder can
provide automatic tracking of document flow, linking physical
and virtual documents.

We needed our binder to fulfill a few basic functional
requirements. The most essential was that the binder should be
able to detect insertions and removals of documents. Also, many
applications will require some kind of alarm when certain
conditions are fulfilled. For example, important documents could
be marked as such, so that the binder may warn if they are
missing. All these requirements had to be accommodated while
keeping the restrictions of weight, space and battery time in
mind.

Keywords
Ubiquitous computing, Collaborative work, Distributed Cognition,
Document handling, Office application, Workflow, TINI,
Bluetooth, RFID

Figure 1: Front of binder with pushbuttons and the display
visible.

INTRODUCTION
As shown by Luff et al. [1], artifacts play a crucial supporting
role in today's collaborative workplaces. For example, Bång [2]
points out that clinicians depend heavily on patient folders in
their daily work. In an office, a lot of activity is centered on
documents that are in binders. Therefore, by augmenting the
binders with ubiquitous computing technology, it is our hope
that the work can be made more efficient.

Target audience
Because binders are used in so many different contexts it is
difficult to identify the typical user. However, we believe that
most of the people that use binders should benefit from this. We
have therefore concentrated our work on the common
denominators we have found. Among those is the ability of the
binder to detect insertions and removals, as well as registering
the history of such actions. On this basic functionality, it is then
possible to build more complex and customized software for
specific applications.

Uses
The use of patient folders in a medical setting is an area that may
benefit from our approach even at the current cost levels. An
augmented binder may warn healthcare workers if any
documents are currently missing. If new data for the patient is
available that has not yet been printed on paper, the binder may
say so. Thus, the clinicians can avoid making decisions on
incomplete information, thereby reducing the risk of mistakes.

TECHNOLOGY
Our goal was to construct a wireless and portable device, small
and light, to integrate with a ring binder. As we shall see, this
turned out to be a very general task. The resulting design should
be useful in many similar circumstances.

The identified technical requirements as obtained from the
functional ones included:
• Internet capability to enable access to external information,
such as a central document server.
• A small display fixed to the front of the binder to provide
information and feedback to the user.
• The user interface should not be more complex than a number
of pushbuttons.
• An RFID reader capable of reading multiple tags inside the
binder.
• Readily available tools for easy software development and
prototyping.
• A battery operating time of a couple of hours.

CPU
Early on we concluded that one of the several available micro
Java platforms [3] would readily satisfy about half of our
requirements. We eventually chose the TINI platform, mostly
because of its small form factor and low power requirements,
but also because it is a mature product with a large user base.
The TINI runs Java programs; however, one should keep in
mind that the TINI is limited to a subset of the JDK 1.1
specification. This is normally no problem when developing
from scratch, but may cause significant rewrites when porting
present applications.

Wireless
The wireless property was a problem from the start. Our options
seemed limited. Many of the common solutions (Wi-Fi,
Bluetooth) were unavailable to us because they require
interfaces the TINI doesn't have, such as USB or PC-card. We
eventually

225
found a product, Blue2Link, which essentially is a virtual
ethernet cable over Bluetooth.

Figure 2: Hardware as mounted in the binder. 1. TINI (viewed
from side), 2. TINI-experimental platform, 3. Blue2Link, 4.
Display, 5. RFID connector adapter, 6. RFID-reader.

RFID Reader
We expected the selection of an RFID reader to be a
straightforward task, but it turned out to be not quite that simple.
In particular, the capability to read multiple tags still seems to
be unstable. Eventually we ended up with the Feig MR100
reader, which works very well, but turned out to be more
expensive and power hungry than we expected from our first
quick look at the options available to us.

Display
There are many options available in this area. We were however
limited somewhat by the low speed of our chosen CPU.
Therefore, we restricted our search to displays with built-in
memory and processing capability, for example character
plotting and line drawing. This increases the cost of the display,
but the extra cost is accompanied by a corresponding gain in
responsiveness as well as application programmer productivity
because of the supplied high-level API. We chose a GLC24064
from the US company Matrix Orbital. With a display area of
132 x 39 mm and a resolution of 240 x 64 pixels, it is among the
largest such displays available.

Battery
A standard accumulator pack of 1500 mAh provides about 10
hours of operating time, more than enough for a prototype such
as ours. All hardware runs on the 12 V accumulator, except the
display, which requires 5 V. Since the display has very low
power usage, we have simply given it its own power supply
consisting of regular battery cells.

Cost
Total cost of the hardware in the project is about €1000. The
largest individual costs were the wireless ethernet devices at
€400 for a pair, the RFID reader at €270 and the LCD display at
€230.

Software
None of the hardware came with suitable drivers. Luckily, most
of the protocols involved turned out to be very simple, but the
software writing still took a good part of our 6-month project.
We eventually produced about 10000 lines of Java code and 70
classes for the binder and supporting software (a simple
document server and a PC binder management GUI).

FUTURE RESEARCH
We have so far identified four plausible directions to pursue:
• Location and tracking. The system can be enhanced with
online tracking and location of documents. This approach
could for instance be used to compare supposed to actual
workflow.
• Linking physical and virtual documents. By adding links
between physical and virtual documents one can for instance
obtain easy access to documents in the computer. For
example, on removing a physical document from a binder,
the same document could be opened in its virtual form on the
computer, removing the need for a possibly laborious,
manual search.
• Usability studies / UI design. The current prototype does
not emphasize usability studies or UI design. This is an
important aspect to consider since we want the usage of the
folder to be kept as simple and streamlined as possible.
• Version tracking. The user can immediately know if the
document they have in their hand is the latest version. This is
useful in any situation where people collaborate on a set of
documents, for example a patient folder in a hospital setting.

SUMMARY
We have built a prototype wireless document handling aid in the
form of a binder, using off-the-shelf products. The result has
many promising areas of use. However, any specific application
would require significant customization of software.

ACKNOWLEDGEMENT
We want to extend a special thanks to Vinnova, the Swedish
Agency for Innovation Systems, for the grant, P22459-1A, they
gave the Department of Computer and Information Science for
making this project a reality.

REFERENCES
1. Luff, P., Heath, C. and Greatbatch, D., 1992. Tasks-in-
interaction: paper and screen based documentation in
collaborative activity. In Proceedings of CSCW'92, New
York: ACM Press, 163-170.
2. Bång, M., Berglund, E. and Larsson, A., 2002. A Paper-Based
Ubiquitous Computing Healthcare Environment. In Adjunct
Proceedings of Ubicomp 2002, Göteborg: Teknologtryck, 3-4.
3. Andersson, O. and Olsson, P-O., 2002. Java in Embedded
Systems. Master Thesis, Linköping University, Linköping:
Unitryck.

226
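The core detection behaviour described in the augmented-binder paper above, diffing successive multi-tag RFID reads to spot insertions and removals and warning when marked documents go missing, can be sketched as a simple set comparison. The tag IDs and the set-based representation are invented for illustration; the actual binder software is Java on the TINI platform.

```python
# Sketch of the binder's core loop: compare two snapshots of the RFID
# tags visible inside the binder to derive insertion/removal events and
# an alarm for missing documents marked as important.

def diff_reads(previous: set, current: set):
    """Return (inserted, removed) between two multi-tag read snapshots."""
    inserted = current - previous
    removed = previous - current
    return inserted, removed

def check_alarms(current: set, important: set):
    """Important documents trigger a warning while absent from the binder."""
    return important - current

# One cycle: doc-002 was removed and doc-004 inserted between reads.
prev = {"doc-001", "doc-002", "doc-003"}
curr = {"doc-001", "doc-003", "doc-004"}
inserted, removed = diff_reads(prev, curr)
missing = check_alarms(curr, important={"doc-002"})

history = [("inserted", t) for t in inserted] + [("removed", t) for t in removed]
```

Logging each `(event, tag)` pair as above gives the history of insertions and removals on which more specialized applications, such as workflow tracking, could be built.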
Meaningful Traces: Augmenting Children’s Drawings with
Digital Media
Nassim Jafarinaimi, Diane Gromala, Jay David Bolter, and David VanArsdale
School of Literature, Communication, & Culture
Georgia Institute of Technology
[email protected], {diane.gromala, [email protected]}

Industrial Design Program
College of Architecture
Georgia Institute of Technology
[email protected]

ABSTRACT
Paper is widely used by children in drawing and art making, but it does not have the ability to contain other media such as a child's audio description of the drawing. Also, paper artifacts fade, tear, and get lost over time. We describe a system designed for recording and archiving children's drawings together with their description of what is depicted (audio), the sequence of creation for each piece (traces), parents' annotations, and a tagging system to link the drawings on paper to the associated digital media.
Keywords
Children's art, children's drawings archive, mixed reality, augmented reality, paper user interface, capture and access

INTRODUCTION
In the same way that children learn to talk, they learn to express themselves through pictures. The first time they reach for a pen or pencil and the first scribbles are as exciting as the first words they say. A similar emotional feeling is attached to the first drawing they create and label "mom", "dad", etc. [1]. For parents, such drawings are reminders of moments in their children's lives that they want to remember and cherish. "No more than you ignore their chatter would you ignore their makings on a paper [1]." Drawings are also a source of constant wonder to children as they grow older, leading them to ask: "Did I ever do things like that?" rather than the more usual questions: "Did I ever look like that [1]?"
It is not only the drawings that are memorable and fun to look at. Children often talk about what they have drawn in compelling ways, and their comments make the scribbles meaningful. However, in most cases all that parents can save is the drawing on paper. There is no systematic way to save the stories and words attached to them. Very few of the artifacts, in any form, saved by parents are annotated, even with simple dates [2], because annotation is very time-consuming.
Psychologists and educators generally agree that children's graphic constructions should be viewed as a process or a sequence of steps. Sequence and direction are important aspects of many activities: driving a car, playing the piano, and giving a talk. In all these activities, the starting point makes a difference to the success of the total action. This rule applies to children's drawing activity as well; analysis of meaning depends heavily on the order in which the child lays down the pencil or brush strokes [3]. However, it is almost impossible to extract this information from the artifact once it is complete.
Meaningful Traces is a digital device for capturing children's drawings and the sequence and context of their creations, e.g. dates, the child's audio description of the drawing, and parents' text annotations. The digital copies serve as a back-up (in case paper artifacts are lost for some reason or simply fade over time). They facilitate different methods of display and sharing and can be automatically organized by dates.

PAPER AS THE BASE FOR INTERACTION
There is a growing interest in developing tangible multimodal systems in which users can continue to employ their familiar physical tools, such as paper, with computational enhancements [4, 5]. At the same time, paper-based activities are very common among children because paper is widely available, cheap, tangible, and easy to use and carry. Paper does not require batteries, does not generally break or stop functioning [6], and poses little direct danger to children. Thus, instead of trying to replicate physical paper and drawing mediums on a computer screen, Meaningful Traces aims at augmenting paper with digital capabilities. To facilitate the creation of this tool we first conducted a user study and then built a non-functional prototype as a base for further user studies.

USER STUDY: DEFINING DESIGN GOALS
At this phase, children ages 6-8 are identified as the target users. To inform the study, ten parents who have children in this age range were interviewed. According to these interviews, drawing is one of the most popular activities among children. They draw in various positions: on the floor, on the couch, in their bed, and at the kitchen table. Seven out of ten children whose parents were interviewed talk about their drawing after finishing it, while they show it to others, and two describe it as they draw. The interviewees save from 20% to 85% of their children's drawings for sentimental reasons, to see the progress in their children's development, and for children themselves to have a sense of history as they grow up. They all express

interest in having a digital copy as a back-up, although they believe that these copies will not replace the actual physical drawings on paper. However, they believe the digital copies can replace some of the pieces which are less important to them. Seven out of ten parents currently annotate the drawings, mainly with dates, and four wished they had time to do so. Four out of ten write descriptions and stories down on some of the pieces.
Consequently, the goals of Meaningful Traces are: 1) semi-automatic capture that does not require parents' involvement and time, and 2) portability and ease of use in different positions by the child.
PROPOSED PROTOTYPE
The Meaningful Traces tablet is specifically designed for the child to draw on. It is equipped with sensors on its surface to record the pen strokes, a detachable sheet-feed scanner, a tagging system, a built-in microphone, and limited memory to store a number of records.

Figure 1: The drawing tablet

What is captured?
Every record in the digital archive consists of: 1) the scanned copy of the drawing, 2) the date, 3) the ID tag, 4) the child's audio description, 5) "traces", which refer to the process of the piece's creation (in the form of snapshots of the work in progress or an animation of how the work was created), and 6) parents' text annotations.
Scenario of Use
When the child starts drawing¹, the device senses the activity and the built-in sensors begin recording the pen strokes (traces). Once finished drawing, the child initiates scanning by pressing two buttons on the top of the device (Figure 1). The device also attaches a tag (a number) to the back of the paper at this step. Audio recording can be initiated at any time during or after the drawing. The system automatically attaches the audio to the most recent record (records may be modified later). The records are downloaded to a computer to be viewed, annotated and modified in an interface specifically designed for this purpose. Viewers can also search for keywords in the annotations, print the drawings, or email them. They can input the tag number to retrieve the media related to a drawing on paper. The tablet can also be used to scan and input drawings that have not been created on it.
Challenges and Next Steps
The current design of the physical prototype can only input standard-sized paper (8"x11") and does not accommodate children's tendency to draw on larger sizes. The device should be light and child-proof: i.e. it should be safe, and should not break if food or drink is spilled on it, or if it falls to the ground.
The user study will be extended and children's behavior concerning drawing activity will be studied. A non-functional prototype of the tablet will be tested with children. The results will be used to revise the design. Later, a functioning prototype will be developed and tested.
CONCLUSION
Meaningful Traces can be used to keep a record of a child's artistic development. Parents can use it to archive and preserve their children's creations as reminiscences of their development. This preliminary research only addresses the needs and requirements of parents as the end users. However, psychologists, art therapists, social workers, and teachers can all potentially benefit from the proposed tool.

¹ Almost any utensil can be used. Traces may not be recorded for mediums such as water color, which apply very low pressure. Finger paints may also be used, but to capture finger paint another set of (heat-sensitive) sensors is also required. The scanner gap prevents wet paper from being smudged and can input thicker paper and collages.

REFERENCES
1. Cox, M., The Child's Point of View. New York, NY: The Guilford Press, 1991.
2. Stevens, M., Vollmer, F. and Abowd, G. D., "The Living Memory Box: Form, Function and User Centered Design," in Extended Abstracts of CHI 2002, Minneapolis, MN, pp. 668-669.
3. Goodnow, J., Children Drawing. Cambridge, MA: Harvard University Press, 1977.
4. Dourish, P., Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA: The MIT Press, 2001.
5. Stifelman, L., Arons, B. and Schmandt, C., "The Audio Notebook: Paper and Pen Interaction with Structured Speech," in Proceedings of CHI 2001, Seattle, WA, pp. 182-189.
6. Johnson, W., Rao, R., Jellinek, H., Klotz, L. and Card, S., "Bridging the Paper and Electronic Worlds: Paper as a User Interface," in Proceedings of INTERCHI 1993, Amsterdam, The Netherlands, pp. 24-19.
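The six-part record listed under "What is captured?" can be sketched as a small data model. This is an illustrative Python sketch only; the field names and types are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Trace:
    """One snapshot of the work in progress, kept in creation order."""
    sequence_number: int
    image_path: str  # snapshot or animation frame of the drawing so far

@dataclass
class DrawingRecord:
    """One archived drawing: the six elements of a record."""
    scan_path: str                    # 1) scanned copy of the finished drawing
    created_on: date                  # 2) date
    tag_id: int                       # 3) ID tag attached to the back of the paper
    audio_path: Optional[str] = None  # 4) child's audio description
    traces: List[Trace] = field(default_factory=list)     # 5) creation sequence
    annotations: List[str] = field(default_factory=list)  # 6) parents' text notes

    def attach_audio(self, path: str) -> None:
        """Audio recorded during or after drawing is attached to this record."""
        self.audio_path = path

# Example: a record created when the child presses the two scan buttons.
record = DrawingRecord(scan_path="scan_0042.png",
                       created_on=date(2003, 10, 12), tag_id=42)
record.attach_audio("desc_0042.wav")
```

Retrieval by tag number, as in the scenario of use, would then amount to looking up a `DrawingRecord` by its `tag_id`.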

The Junk Mail to Spam Converter
Michael Weller, Mark D. Gross, Jim Nicholls and Ellen Yi-Luen Do
Design Machine Group / Department of Architecture / University of Washington
Box 355720
Seattle, WA 98195 USA
+1 206 543 1604
{philetus, mdgross, jnicholl, ellendo}@u.washington.edu
http://dmg.caup.washington.edu

ABSTRACT
The junk mail to spam converter is a prototype designed and built to demonstrate the idea of a physical-to-virtual filter. A piece of mail is fed into a slot in the front of the machine, and a webcam takes a picture of the envelope and emails it to your account before the letter is shredded.
Keywords
junk mail, spam, converter, filter, physical-to-virtual
INTRODUCTION
As computers spread out from the desktop into the environment [1], they threaten to compound the problem of clutter in our physical spaces. To offset this trend we propose the use of physical-to-virtual filters to shift superfluous physical objects into the virtual realm and free up physical space. The junk mail to spam converter (JMtoSC) does not solve the problem of junk mail; it transforms it into spam so that it no longer intrudes on our limited physical space.
IMPLEMENTATION
Form
As it is intended to be your constant companion at home or in the office, the JMtoSC is conceived as a sculptural object. Like the three-headed dog at the gates of hell, this sheet metal Kerberos shreds your mail's physical instantiation and casts its digital memory off into the abyss of your email inbox.
Functionality
When a letter is fed into the slot in the front of the sheet metal structure, it slides down a chute through the JMtoSC's innards, where it triggers a breakbeam made of a laser pointer and a light sensor. A handy board [2] bolted into the guts below listens for the beam to be broken and signals an AppleScript to snap a photo of the doomed missive and send it to your email account, before the handy board flips the relay controlling the shredder in the bowels of the beast and grinds the letter into compost.
Architecture
The skeleton is composed of hand-bent sheet metal pieces riveted together. The front panel has an opening to insert letters and comes down into two front legs. The mail chute connects this front panel to the rear section, where the paper shredder is mounted over two shorter hind legs. The hind legs continue up past the paper shredder to provide a place to mount the laser pointer and web cam. Four rubber feet on the ends of the legs protect your furniture from scratches and prevent the vibration of the paper shredder from causing the JMtoSC to skitter around.

Figure 1: letter being fed into the JMtoSC

A handy board mounted beneath the letter chute watches for mail in the chute and coordinates its documentation and destruction. A light sensor located under a hole in the letter chute directly in front of the shredder is wired into one of

the handy board's analog sensor ports. A battery-powered laser pointer directed at this light sensor from above functions as a break beam. The handy board control loop listens for a drop in the light level due to a letter obstructing the laser beam. When a drop in the light level is detected, the document and destroy sequence is initiated.

Figure 2: junk mail to spam converter architecture

The handy board sends a signal over its serial cable to a desktop computer running Mac OS 9 to initiate an AppleScript program that documents the letter. The AppleScript activates the web cam and captures an image of the envelope. It then composes an HTML email containing the image and sends it to your email address.
After pausing to allow the AppleScript to run, the handy board activates a relay spliced into the paper shredder's power cord. The paper shredder is allowed to run for a set period of time more than sufficient to grind a business-size envelope into confetti before the relay is turned off again and the handy board returns to its control loop while the JMtoSC awaits its next meal.
FUTURE WORK
The JMtoSC illustrates the concept of a physical-to-virtual filter but does not provide any particular practical advantage over throwing your junk mail in the recycling bin, because you must first filter out by hand any mail you would not like to have shredded. A future goal of the project is to explore methods for filtering physical mail before you receive it at home.
The Junk Mail Early Warning System
When mail is initially processed and run through the zip code sorting machine, a digital photo is taken of each letter. A photo of each envelope addressed to you is sent to your email account as an HTML email. If you check your inbox before the letter has been delivered to your local branch post office, there is a button next to the picture that says 'shred'. If you click the shred button the letter is shredded as soon as it arrives at your local branch.
Opt-in Virtual Mail
If you sign up for this program with the post office, when letters addressed to you arrive at the processing center they are opened by a machine rather than being routed to your local branch. The envelope and contents are scanned front and back, and then everything is immediately shredded. The image files are sent to your email account. By combining this system with optical character recognition software, your mail could be run through a spam filter with the rest of your email to automatically filter out junk mail.

Figure 3: web cam image from email

CONCLUSION
As embedded computing promises to bring information processing out into the environment with us, physical-to-virtual filters will mine the environment for physical artifacts whose primary purpose is information transfer and storage. Converting these bulky physical records into virtual form will free up physical space and make information readily available to be processed by embedded devices.
ACKNOWLEDGMENTS
We thank the UW Architecture Department for supporting the Physical Computing program, Ken Camarata for providing technical advice, and Fred Martin for developing the handy board.
REFERENCES
1. M. Weiser. "The Computer for the Twenty-First Century," Scientific American 265(3), 1991, pp. 94-104.
2. F. Martin. The Handy Board. http://handyboard.com/, July 7, 2003.

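The document-and-destroy sequence can be sketched as one pass of a sensor-polling loop. This is a hedged reconstruction in Python; the real system runs firmware on a handy board with an AppleScript-driven webcam, so the threshold value, timings, and callable names below are all assumptions for illustration.

```python
import time

# Assumed calibration: analog readings below this mean the laser beam is broken.
LIGHT_THRESHOLD = 50

def handle_letter_if_present(read_light_sensor, email_photo, set_shredder_relay,
                             wait_for_photo=2.0, shred_seconds=5.0):
    """One pass of the control loop: if the breakbeam is broken, document
    the letter, then destroy it. Returns True if a letter was processed.

    The three callables stand in for the handy board's analog port, the
    AppleScript/webcam step, and the relay spliced into the shredder's
    power cord.
    """
    if read_light_sensor() >= LIGHT_THRESHOLD:
        return False                   # beam intact: no letter in the chute
    email_photo()                      # snap the envelope and mail the image
    time.sleep(wait_for_photo)         # pause so the photo step can finish
    set_shredder_relay(True)           # power the shredder...
    time.sleep(shred_seconds)          # ...long enough to make confetti
    set_shredder_relay(False)          # then return to listening
    return True
```

On the real device this body would run inside an endless loop, polling the sensor between letters.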
Part IV

Doctoral Colloquium
Communication from Machines to People with Dementia
T D Adlam
Bath Institute of Medical Engineering
Wolfson Centre, Royal United Hospital
Bath. BA1 3NG
+44 1225 824 107 / [email protected]
ABSTRACT
In this paper, I describe work in progress investigating effective means of communicating messages to people with dementia that will be understood and in some situations effect a behaviour change. Different media will be investigated for their effectiveness. Communications will be evaluated in domestic and laboratory contexts using hardware designed for this work and existing hardware from the Gloucester Smart House Project.
Keywords
Dementia, communication, machine, behaviour, media, human/machine interface.
INTRODUCTION
Dementia
Dementia is defined as 'a progressive global impairment of cognitive function in a conscious person that is usually untreatable.' It mostly, but not exclusively, affects older people. Its primary symptom is the loss of short-term memory. Other symptoms include an inability to plan task execution, the loss of the ability to reason, the loss of the ability to learn, temporal disorientation, and social disinhibition.
Communications
It is the objective of this work to be able to present to a human/machine interface designer a series of guidelines for the design of interfaces acting between machines and people with dementia; specifically, the most effective means of communicating information from a machine to a person with dementia.
This work is part of the Gloucester Smart House [2] project, which aims to develop devices and technology that will enable people with dementia to live more independently whilst being supported by technology. For these devices to be successful, they need to be able to communicate information to a person with dementia.
Similar work is in progress in Canada [1], where the washroom has been used as a context to evaluate the response of people with dementia to verbal prompts during a daily living task.
MESSAGES
The research is addressing two main classes of message to be communicated to the person with dementia.
The first is informative and does not necessarily require a response. It informs the user that, for example, an action by a device has been completed or that a person will be calling to visit shortly.
The second class of message is directive and is intended to modify the behaviour of its recipient. For example, a message may be generated to discourage a person from leaving the house at night in cold weather, or to encourage a person to go to the toilet in the bathroom when they get up at night. Other messages may combine these two classes.
MEDIA
Many different media are available. Most people are familiar with visual communications from televisions, computers, advertising hoardings, books and magazines; and audible communications from the telephone, radio, CD or record player or public address system. There are other media not usually associated with communication that may prove useful in this research, such as music, odour and directed lighting.
Medium: Audio
Audio is a versatile medium for messaging in a building and is frequently used in large public spaces.
Audio is pervasive – the message is present (for a hearing person) in all parts of the room simultaneously, whatever direction the person is directing their attention in. It is a means of communication that people are accustomed to.
Audio is transient. When message transmission has ceased, the message is no longer present except in the hearer's memory, which in the case of a person with dementia will be poor. It may be possible to loop an audio message, but this could be very irritating for the hearer.
Other questions present themselves, such as: whose voice should be used to deliver the message? Should the message use the first or third person? Should the message be delivered by a concealed or visible audio device? A concealed device allows for a physically present speaker.
Using a familiar existing audio device such as a radio may habituate the user to acting on instructions, whereas a new device may need to be introduced early in the course of the dementia so that the user is comfortable with the device.

Medium: Text
Text is another versatile message delivery medium used in public spaces. It is persistent: it doesn't disappear on delivery and is there for a reader to come back to. It does not imply that the author is physically present.
Text is localised and requires the gaze and attention of the reader to be directed towards it for it to be effective. When a message is delivered, the attention of the user must be gained before any communication can begin.
Other issues that must be addressed when designing a text message are the type or script used, colours and size.
Medium: Video
Video too has advantages and disadvantages that do not make it an obvious choice as a communication medium.
Like audio, video with sound is transient unless it is looped. If video is used without sound it is localised and persistent like text.
Video does not imply the physical presence of the actor and can be used with real or animated faces. It is possible that an animated face will be perceived as a machine, whereas a recorded face will be perceived as a remote actor.
Video requires large electronic hardware overheads, which have cost implications for communications devices and networks (if the video is not stored locally in each device).
Other Media
Other media such as odour (which is a powerful stimulus of memory) and lighting will be introduced to highlight specific messages from other messaging devices, or to stimulate the memory of a previous communication.
Hardware requirements
A versatile communication device is being developed that will be able to transmit audio, video and text to the user for the purposes of information and behaviour modification. Control of lighting will be achieved with the installation of a bus system in the house. In a domestic context this will be a wireless bus.
EXPERIMENTAL WORK
The first stage of the experimental work will compare the responses of people with and without dementia to instructions given for a simple task, to determine key differences in the way that people with dementia respond to instructions when compared to each other and to people without dementia.
The arbitrary task selected is to present the subject with an instruction to turn one of two knobs to a particular numbered position. At the time of writing, the task equipment is being designed and built at BIME.
Secondly, people with dementia will be observed in their homes by carers and non-video sensors as they respond to prompts around key areas of the home. These areas are the kitchen, bathroom, front door and the whole of the house at night-time.
'Wizard of Oz' experiments with people with dementia will enable the testing of simulated devices in context, allowing changes to be made quickly and reactively. A concealed operator (the 'Wizard of Oz') can simulate the actions of an intelligent device interacting with the subject of the experiment.
Hardware developed for the Gloucester Smart House Project is being used for medium-term evaluation of user response to messaging systems. A compact battery-powered long-term data logger will record the user responses.
DEFINITION OF MESSAGES
The messages used for this work are being defined for four domestic contexts.
The kitchen – a cooker monitor has been developed for the Gloucester Smart House project that can intervene to prevent a dangerous situation and inform the user of actions taken.
The bathroom – a bath and basin monitor has been developed for the Gloucester Smart House project that can intervene to prevent a flood and inform the user of actions taken.
The front door – a reminder system has been developed that, with a timer and proximity sensors, will prompt a user on appropriate exit from the building.
The bedroom – a system (the Night Light) has been developed that uses lighting and prompts to guide a person at night-time.
These systems currently use their own (audio) messaging systems, but will be developed to use a general-purpose messaging device being designed for this project.
ACKNOWLEDGEMENTS
I would like to thank my supervisors for their help and advice in preparing this programme of research:
Dr. Roy Jones, The Research Institute for the Care of the Elderly, University of Bath, UK.
Dr. Roger Orpwood, Bath Institute of Medical Engineering, University of Bath, UK.
Dr. Ian Walker, Department of Psychology, University of Bath, UK.
REFERENCES
1. Mihailidis, A., Barbenel, J.C. and Fernie, G. (in press). The efficacy of an intelligent cognitive orthosis to facilitate handwashing by persons with moderate-to-severe dementia. Neuropsychological Rehabilitation.
2. Orpwood, R., Adlam, T., Gibbs, C., Hagan, S. and Jepson, J. The Gloucester Smart House. In 6th Annual National Conference of the Institute of Physics and Engineering in Medicine; 2000; Southampton: Institute of Physics and Engineering in Medicine.

Context Information Distribution and Management
Mark Assad
School of Information Technologies
Madsen Building, F09
University of Sydney, NSW, 2006 AUSTRALIA
+61 2 9351 5711
[email protected]

ABSTRACT
In my research work I am investigating ways to combine both hierarchical and distributed hash table lookup methods for the distribution of contextual information. This combination allows context information to be managed locally, and the privacy of the information to be kept within the user's control. I also stress the importance of mobility in context environments.
Keywords
Context information, location aware services, content based messaging, distributed hash tables, context mobility

INTRODUCTION
Ubiquitous computing services aim to be able to provide computing facilities to anybody, anywhere, at any time. To provide these services, an infrastructure must be developed that allows application programs running in the environment to efficiently locate the services in the user's local area. The system must also be able to supply information about the user's current context. This context information may be basic sensed information, such as ambient temperature, or richer context such as "the user is involved in a meeting."

Figure 1: Single Database model

Context information is necessary if we are ever to achieve a completely invisible computer. An example scenario of how context information may be used is as follows:

A user leaves Sydney for a conference in Seattle. The user has their personal CD collection catalogued in their context repository. As they walk down the street, they pass a music store that sells a CD, which they do not own, from a band that is common in their collection. A message is sent to their mobile phone to inform them that the CD is available.

An architectural solution to this problem has commonly been addressed in two ways. The first is to use a single large centralized database (Figure 1); in this case, any updates to context information must be made at that single point. The other, as proposed by the GLObal Smart Space [1] project, is to have a large number of context databases that each manage a geographic area (Figure 2).

Figure 2: Distributed Database based on geographic location

MOTIVATION
There are problems with these two approaches that I aim to address. The single database model has a clear central point of failure. Also, the number of updates that would be required to manage the contextual data for the entire world would be enormous. The single database solves the problem of locating a user regardless of where they are. Distributed databases make it harder to find a user's location based on their identity, as many databases may need to be searched.

Both of these models revolve around the user's context data being stored in a central, infrastructure-controlled database. This means that the user does not have complete control over who has access to their data.
The single database model would be able to achieve the scenario, but at the cost of giving up the user's privacy to store the CD collection. The distributed database model does not support tracing the user's location as they pass a music store.
I aim to develop a system that will allow users to efficiently access their context information regardless of their location. I want the user to be in complete control of their data by storing the information on their local resources.

Figure 3: Proposed model with decentralised database

PROBLEM STATEMENT
A problem arises when users start to travel from one area to another. The infrastructure must be able to detect and identify these people regardless of where they are initially from. Also, the users should be able to be detected without prior arrangement with the local environment. I have developed applications that use the Bluetooth transmitters in mobile phones as a kind of "Active Badge" [2]. This technique passively detects the Bluetooth hardware address of the user's phone and matches it to a known profile for the user. The Bluetooth hardware address is not a hierarchical name, and as a result there is no simple way of doing a global lookup between the user's phone's address and their profile.
My research work aims to develop an infrastructure that combines a hierarchical naming structure for static entities in the environment (such as rooms, buildings, and locations) with a distributed peer-to-peer database for mobile entities that can move between geographic areas (such as people or mobile phones) (Figure 3).
Each user is associated with a data storage solution in a local home network; this would be similar to a user's mail server. In this way, the user is in control of his/her data, and they have the option as to what data is made available to querying applications. Pointers back to these servers are entered into the distributed hash table as a means of locating the individual server.
Using the scenario as an example, I envision a system where the client would be able to leave their home network, and upon arriving in a foreign area the network would be able to detect the user by their Bluetooth mobile phone. The Bluetooth ID would then be used as a key into the distributed table, returning a pointer to the user's home database. A message could then be sent to the user's home system, alerting it to the user's immediate surroundings. This information could then be used to inform the user about the availability of the CD.
EVALUATION
I aim to implement this strategy using the Elvin [3] content-based messaging service as a means of handling the fixed-location entities, and a distributed hash table such as Chord [4] for creating pointers into this network for mobile devices. I will then evaluate the effectiveness of this method as a globally distributed context framework.
REFERENCES
1. Dearle, A., et al., Architectural Support for Global Smart Spaces, in Lecture Notes in Computer Science 2574. 2003, Springer, pp. 153-164.
2. Want, R., et al., The Active Badge Location System. 1992, Olivetti Research Ltd. (ORL): Cambridge.
3. Segall, B. and Arnold, D., Elvin has left the building: A publish/subscribe notification service with quenching, in AUUG97. 1997, Brisbane, Australia.
4. Stoica, I., et al., Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications, in ACM SIGCOMM. 2001, San Diego, CA.
Publish/Subscribe Messaging: An Active
Networking Approach
Michael Avery
School of Information Technologies
Madsen Building F09, The University of Sydney, NSW, Australia
[email protected]

ABSTRACT
One of the challenges in developing a ubiquitous computing environment is transferring information in an efficient way. Peer-to-peer networks are too inefficient for a network with many sensors and devices, so we need to find another paradigm for transferring information over a network. Content-based messaging may provide a way of doing this. We propose to develop a distributed, content-based publish/subscribe messaging system designed for a ubiquitous computing environment. It will use an active networks approach to provide efficiency and scalability, reliability in the event of failures, and support for mobility.

KEYWORDS
Ubiquitous computing, active networks, event notification, publish/subscribe, content-based, distributed hash tables

INTRODUCTION
Ubiquitous computing embeds computing power in the environment, rather than just on our desktop. These computers should react to the user's needs without the user needing to know how the underlying technology works or where the processing is taking place.

An implementation of this approach will involve many different sensors and devices connected together in a network. For example, in a room we might have a microphone to listen to user commands, a video camera to view gestures, an infrared sensor to detect how many people are in the room, and a speaker through which feedback is provided. In this room, the output from the camera, microphone and infrared sensor can be sent to a remote server. This data will be processed and the results sent to the speaker, providing the user with feedback.

One of the challenges we face in implementing a ubiquitous computing environment is connecting all of these devices and sensors together. Connecting them in a peer-to-peer network will cause many problems. Some of the devices in the network will have a limited power supply, so forcing each sensor to send its data to each of the possible consumers is not a good idea. There will also be problems with addressing and service advertisement in a network with billions of sensors. Ubiquitous computing requires a new networking paradigm.

A networking paradigm that could be useful in a ubiquitous computing environment is a publish/subscribe system. In a publish/subscribe system, subscribers send messages describing the types of messages they want to receive. When a publisher posts a message, the message is forwarded to any subscribers who have requested it. A publish/subscribe system has a number of useful properties. One advantage is that publishers do not have to know who is interested in the data they are publishing; all they have to do is put the message on to the network. Similarly, the subscribers do not need to know who is generating the data; they only need to know that the data matches their request.

RELATED WORK
There have been a number of attempts at creating publish/subscribe systems in the past. The two main types of publish/subscribe messaging systems are subject-based and content-based. In subject-based (or group-based) messaging systems, every message belongs to a particular group. Subscribers can then register to receive all messages from a certain group. The main problem with this type of messaging system is that the subscription messages are often not expressive enough.

Another, more flexible, type of messaging system is content-based messaging. In content-based systems, the subscribers ask for messages where the content matches a certain pattern. The subscribers are not restricted to just a subject; they can ask for all messages where the data matches a certain pattern. For example, a user might want all messages where the temperature is greater than 30 degrees. This additional flexibility would be very useful in a ubiquitous computing environment.

There have been a few attempts at making content-based messaging systems in the past, but they all have issues that make them unsuitable for a ubiquitous computing environment. Elvin [1] implements content-based routing on individual servers. The servers can be connected to some extent, but this is not scalable enough to work over a worldwide network. SIENA [2] and Gryphon [3] are distributed content-based systems, but they do not cope well with mobile publishers and subscribers or with network failures. One other issue with all of these messaging systems is that they require separate servers to run. Before a user can send a publish or a subscribe message, it needs to locate a server. This is unreasonable for some devices because they can be mobile or they may have very little processing power.

Another, more powerful, type of content-based messaging would require the subscriber to send its subscription in the form of code. When a message is received by the server, the subscription code could then be executed with the message as input, and the result of the code could be used to determine whether to send the message to a subscriber. This method gives subscribers full control over the messages they receive, but has a number of drawbacks in terms of the processing power required and security.

The Active Networks approach [4] attempts to place computing power inside network nodes. Active network nodes receive packets containing code which they then execute. With this approach it is possible to upgrade routers "on the fly" and install new protocols simply by putting the code for them on the network. This approach aims to improve network efficiency and reliability.

PROPOSAL
We plan to investigate how active networking technology can be applied to the problem of content-based messaging and then see how this can be used in a ubiquitous computing environment. We hope this will lead to the development of a messaging system that is efficient, fault-tolerant and able to support mobility.

We plan to view the publish and subscribe messages as code that is to be executed by the messaging system. We will then investigate allowing subscribers to send complex subscription messages to see what implications this has on performance and scalability.

A major advantage an active network messaging system has over traditional server-based messaging systems is that publishers and subscribers will not have to search for a messaging server. Instead, they will simply publish their messages to the network, and any active network node that picks them up will be able to execute them. This is a very useful property to have in a network filled with devices with low processing and battery power.

We also plan to investigate the use of distributed hash tables, like Chord [5], in a messaging system. Chord provides a method of locating objects in a distributed network, as well as providing support for fault-tolerance. These properties should prove useful in a content-based messaging system.

REFERENCES
1. Segall, B., et al. Content Based Routing with Elvin4. In AUUG2K, Canberra, 2000.
2. Carzaniga, A., Rosenblum, D.S., and Wolf, A.L. Achieving Scalability and Expressiveness in an Internet-Scale Event Notification Service. In Symposium on Principles of Distributed Computing, Portland, 2000.
3. Strom, R.E., et al. Gryphon: An Information Flow Based Approach to Message Brokering. In Symposium on Software Reliability Engineering, 1998.
4. Tennenhouse, D.L., et al. A Survey of Active Network Research. IEEE Communications Magazine, 1997, pp. 80-86.
5. Stoica, I., et al. Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications. MIT, Cambridge, 2002.
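To make the content-based and subscription-as-code ideas concrete, the following is a minimal sketch in Python (all names are invented for illustration; this is not the API of Elvin, SIENA or Gryphon) of a broker that treats each subscription as a predicate executed against every published message:

```python
# Minimal sketch of a content-based publish/subscribe broker.
# Each subscription is a piece of code (a predicate) evaluated
# against every published message, as in the subscription-as-code
# idea described in the text.

class Broker:
    def __init__(self):
        self.subscriptions = []  # list of (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        """Register interest in every message matching `predicate`."""
        self.subscriptions.append((predicate, callback))

    def publish(self, message):
        """Forward `message` to every subscriber whose predicate
        matches. The publisher never learns who (if anyone) received it."""
        for predicate, callback in self.subscriptions:
            if predicate(message):
                callback(message)

broker = Broker()
received = []

# The paper's example: all messages where the temperature > 30 degrees.
broker.subscribe(lambda m: m.get("temperature", 0) > 30,
                 received.append)

broker.publish({"sensor": "room-12", "temperature": 34})
broker.publish({"sensor": "room-12", "temperature": 22})
# Only the first message matches the subscription.
```

Because the predicate is arbitrary code, it also illustrates the drawback the text notes: the broker must spend cycles executing untrusted subscriber code for every message.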

Workspace Orchestration to Support Intense Collaboration
in Ubiquitous Workspaces
Terence Blackburn
Dept of CIS, University of South Australia
Mawson Lakes SA 5095
[email protected]

ABSTRACT
The combined social and technological aspect of intense, co-located, collaborative work is a relatively new field of research. There is currently little support for the procedural and cognitive processes that exist in these group workspaces. This work will investigate the concept of workspace orchestration to support intense collaboration in ubiquitous workspaces.

Keywords
Workspace orchestration, ubiquitous workspaces, cognitive activities, procedural support

INTRODUCTION
The results of recent work [1] have highlighted the need to augment collaborative workspaces with new services, such as an orchestration service, to coordinate and synchronise intense, collaborative activities. The procedural aspects of these work activities need to be automated to allow workspace participants to focus on the cognitive aspects of achieving their goals rather than mechanical aspects such as retrieving files. Many of the cognitive aspects of group workspaces, such as support for shared awareness, also need to be facilitated, but few results from previous research are evident. The focus of this work is to investigate a concept of workspace orchestration that addresses some of the cognitive and procedural needs of collaborative activities within ubiquitous workspaces.

This paper characterises ubiquitous workspaces and briefly describes LiveSpaces, the environment that provides the setting for this work. The paper then highlights the types of activities that these workspaces support and identifies some of the components that are needed to support collaborative activities. The concept of workspace orchestration and its requirements are then introduced, and this suggests a direction for developing the theoretical foundations for orchestration services in future workspaces.

UBIQUITOUS WORKSPACES
Ubiquitous computing research is providing the components and infrastructure for augmenting physical workspaces with devices that will allow people to interact more effectively with each other and with technology. Future workspaces will be augmented with various interactive display devices, personal information appliances and natural language interfaces. Approaches such as augmented reality, virtual presence and conversational agents will help transform the way in which people use information to support collaborative group workspace activities.

LiveSpaces is an experimental test bed for exploring the enabling aspects of ubiquitous workspaces. The reference architecture for LiveSpaces (see Figure 1) has at its core a workspace infrastructure that integrates with the broader enterprise by way of an enterprise bus. The workspace infrastructure provides for the coordination and integration of applications, services and devices within a workspace, much like an operating system might provide for personal computers. The current infrastructure implementation is based on iROS [2] and ODSI [3]. The architecture defines two service stacks. Knowledge services provide support for those aspects that make a workspace "intelligent". Workspace support services provide capabilities that directly support collaborative activities. One of these services is workspace orchestration, which is the focus of both this paper and the related PhD project.

UBIQUITOUS WORKSPACE ACTIVITIES
The LiveSpaces project is currently focusing on support for intense collaboration. These are the types of activities described by Mark [4] and others in using project or "war" rooms for intensive design and planning activities in areas such as space missions and software engineering. Intense collaborations can be characterised as being intent (or goal) directed, time critical and involving teams of specialists. This research focuses on workspace support for activities such as disaster relief planning, software reviews and decision making.

Workspaces need to adapt readily to a number of factors such as the type of activity being undertaken, the setting hosting the activity and the people involved. Moreover, activities need to be orchestrated and managed within individual workspaces to ensure that goals are met within a required timeframe.

Issues such as coordination and synchronisation are critical for teams to achieve their goals. Specialists who participate in these activities are often co-located in specially designed rooms that foster face-to-face collaborative activities, but often the supporting technological infrastructure adds little to achieving their goals.

Many of these activities have a defined flow of events. For example, an emergency response planning session generally has a formally defined, procedural set of activities. Workflow engines could potentially be used to support these procedural aspects, and inferencing engine models may assist when more flexible approaches are required. These aspects will be explored further as part of this PhD research.

[Figure 1: The LiveSpaces architecture in the e-World lab at UNISA — a LiveSpace workspace infrastructure connecting participants and activities, devices, media and applications through knowledge services and workspace support services (including context, interaction, learning, orchestration, transcription, instrumentation and simulation), linked by an enterprise bus to organisational models, enterprise policies and rules, information services, and processes and workflow.]

In addition to the procedural side of intense collaboration, group cognition also needs to be considered as part of the orchestration process. The aim of this work is to identify, model and automate some of the group cognitive processes such as group awareness and decision making. Approaches to be researched in this regard include Distributed Cognition theory [5], which focuses on changes in cognitive states at a system level, and Activity Theory [6], which focuses on individuals along with the activities they are engaged in.

WORKSPACE ORCHESTRATION
Workspace orchestration services support both procedural and cognitive aspects of intense collaboration, and two approaches are required to explore these aspects. The first is to investigate the procedural, structured processes that lend themselves to automation, and the second is to identify and model the group cognitive processes that produce the less structured, ad hoc activities.

An orchestration service needs to be partly autonomic and partly interactive. For example, the service may prefetch and load data automatically as required, but a user should also have the flexibility to request ad hoc data sets. This means that the service can coordinate activities according to a preselected sequence of events, with the flexibility to change the order as determined by cognitive actions. It should augment the cognitive work activities of the users in the workspace and at the same time provide procedural guidance. The service should monitor workspace activities and context to coordinate devices, displays and applications, and it should operate primarily in the background as a ubiquitous service.

Observations of laboratory experiments and ethnographic studies in our candidate domains will help to identify and model the cognitive and procedural processes in collaborative activities, and these processes will be mapped to computational artefacts. An experimental orchestration service will be developed to explore implementation approaches based on the use of workflow concepts, an inferencing engine or other candidate approaches. This experimental apparatus will be used within a LiveSpaces environment to evaluate workspace orchestration concepts in two application domains: disaster relief planning and scientific "tiger teams".

CONTRIBUTIONS
This work will test the hypothesis that "workspace orchestration can enhance a team's ability to achieve its goals in intense collaborative activities within ubiquitous workspaces". It will provide a workspace orchestration model for building applications for collaborative workspace activities, and it will model some of the cognitive and procedural properties of intense collaborative activities.

REFERENCES
[1] Vernik, R., Blackburn, T. and Bright, D. Extending Interactive Intelligent Workspace Architectures with Enterprise Services. In Proc. Evolve Conference, Enterprise Information Integration, Sydney, Australia, 2003.
[2] Johanson, B., Fox, A. and Winograd, T. The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. IEEE Pervasive Computing, 2002.
[3] Bond, A. ODSI: Enterprise Service Co-ordination. CRC for Enterprise Distributed Systems Technology, St Lucia, Queensland, 2001.
[4] Mark, G. Extreme Collaboration. Communications of the ACM, vol. 45, pp. 89-93, 2001.
[5] Hutchins, E. Cognition in the Wild. MIT Press, Cambridge, Mass., 1995.
[6] Halverson, C.A. Activity Theory and Distributed Cognition: Or What Does CSCW Need to DO with Theories? CSCW: An International Journal, vol. 11, pp. 243-267, 2002.
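As a purely illustrative sketch of this partly autonomic, partly interactive behaviour (all names are invented for this example and are not part of LiveSpaces), an orchestration service might follow a preselected sequence of actions while letting participants inject ad hoc requests that take priority over the planned flow:

```python
from collections import deque

class Orchestrator:
    """Toy orchestration service: runs a preselected sequence of
    workspace actions (the autonomic side), but lets participants
    inject ad hoc requests that jump the queue (the interactive side)."""

    def __init__(self, planned_actions):
        self.planned = deque(planned_actions)  # preselected sequence
        self.ad_hoc = deque()                  # user-injected requests
        self.log = []                          # actions performed so far

    def request(self, action):
        """Interactive side: a participant asks for something now."""
        self.ad_hoc.append(action)

    def step(self):
        """Autonomic side: perform the next action, preferring ad hoc
        requests over the preselected sequence."""
        queue = self.ad_hoc if self.ad_hoc else self.planned
        if queue:
            self.log.append(queue.popleft())

plan = ["load agenda", "prefetch design docs", "open review checklist"]
orch = Orchestrator(plan)
orch.step()                    # follows the plan: "load agenda"
orch.request("show site map")  # a participant interrupts with an ad hoc need
orch.step()                    # the ad hoc request jumps the queue
orch.step()                    # then back to the planned sequence
```

The point of the sketch is only the scheduling policy: the planned order is preserved, but a cognitive action by a user can reorder what happens next, as the requirement above describes.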

Visualisations of Digital Items in a Physical Environment
David Carmichael
School of Information Technologies
University of Sydney
NSW 2006 Australia
61-2-9351-5711

ABSTRACT
One of the results of the increase in Ubiquitous Computing is the addition of a large number of digital items to the physical environment. As more hidden complexity is added to the environment, it becomes difficult to understand. This paper presents an abstract of my doctoral research into visualising this environment using a variety of techniques including Virtual Environments.

Keywords: Intelligent Environments, Visualisation, Virtual Environments

1. BACKGROUND
Ubiquitous Computing and context-aware computing are increasingly active research areas, as evidenced by the growing number of new journals and conferences in the area, such as UbiComp, IEEE Pervasive Computing, IEEE Transactions on Mobile Computing, the IEEE Distributed Systems Online Mobile & Pervasive and Middleware sections, and ACM SIGMOBILE.

Researchers have looked at embedding sensors in the environment to measure all manner of information; examples of this are location and temperature. The Georgia Tech Aware Home [2] project has built a complete house embedded with a wide variety of sensors. The aim of this project is to make the home "aware" of the activities and location of its occupants. They use this information to improve the quality of life and allow the occupants to maintain their independence.

The Global Smart Spaces (GLOSS) project [3] aims to provide an architecture for interaction between people, taking into account context and movement on a worldwide scale. One example service built on the GLOSS architecture is the Hearsay system, which allows users to leave messages for others in a given locational context. Researchers at the University of Kent have developed a system for leaving pieces of digital information in a given context [1]. This context is generally the location of the information. They call these digital post-it notes stick-e notes.

2. EXAMPLE SCENARIO
To give motivation for this project, let us consider an example scenario of an intelligent office from the perspectives of a normal user (an employee), a visitor and an administrator.

The office contains all the items one would expect to find in a modern building, such as a cabled network, wireless LAN, electronic door locks, fixed computers, printers, fax machines, projectors and video surveillance. There are also items that make the office intelligent, such as cameras, temperature sensors, motion sensors, location tracking devices, large communal displays, Bluetooth devices and PDAs.

In addition to these physical objects with a digital effect, there are also purely digital items. An example of one of these would be a stick-e note, or other information which only appears in particular contexts. An example of a context would be a particular location and a specific set of people present, as would occur at a regular department meeting.

When we consider the digital items of interest to a normal employee (a regular user of the building), they may be interested to know how and where they control the lights for a particular room. For example, unless they are the building administrator, they won't be able to turn the lights on or off unless they are in the correct room or it is their own office.

A visitor to the building has a different view of the environment. For example, in needing to access the wireless LAN, they may need an easy way to see where in the office there is wireless LAN coverage. Alternatively, they may be seeking to print and need a way to find the nearest printer and to determine its capabilities and settings. They will not be interested in (and probably shouldn't be able to see) certain digital messages left around for regular employees.

An administrator of the system needs to understand much more about the environment. They need to be able to see all of the digital information tied to the environment, possibly at the same time. They may wish to see all of the digital items as well as information about when, where and to whom the items are visible.

3. PROBLEM STATEMENT
The continued expansion of Ubiquitous Computing has resulted in the physical environment having a multitude of computational devices and digital items added to it. These additions continue the evolution of the physical environment into an intelligent environment. However, such computing environments are difficult for people to understand. The question which my research seeks to answer is how to provide a view of the digital information which resides in the physical environment, in a way which is comprehensible.

The motivation for this project is that there is no good way to see or filter all this information. It would be useful to be able to visualise this information and also be able to browse through it or see aggregated views. This should be accessible on a range of devices, for example a PC, PDA or mobile phone.

4. PLANNED RESEARCH
My research aims to represent physical and digital items within an intelligent environment. Before examining the representation of these items, we first categorise the items of interest; finally, we look briefly at the architecture required to show these representations of digital and physical items.

4.1 Physical and Digital Items
Items within an intelligent environment fall into a number of categories. The first category contains physical items with an area of effect in the environment. This includes WiFi access points, Bluetooth beacons, proximity sensors (such as those on door locks), motion detectors and video cameras.

The next category is physical items with some digital/physical function. Items in this category include light switches, door locks and projectors.

The final category contains purely digital items. These can vary in the context in which they exist. Their locational context can vary from a single point (e.g. a message only visible at the exact spot) to a large area (e.g. the area in which a service is available). An example of the first case might be digital post-it notes, while the second might contain logical services such as "can print" or "can control lighting".

4.2 Representation to User
My plan is to use a number of different visualisations to present information about the intelligent environment to the user. The first is to generate a virtual environment which mirrors the physical environment. Representations of digital items related to the physical environment are then placed appropriately in the virtual environment.

The level of detail to which the physical environment is modelled can be varied in order to make the digital items more or less prominent. Using this system, the user may also be able to interact with the digitally controlled systems via the virtual environment and have the changes reflected in the physical world. Previous work has been done on using virtual reality in control systems, but not on a worldwide scale.

[Figure 1: An example view using Augmented Reality to show that there is mail waiting.]

Another way of viewing the digital items would be using augmented reality systems. In order to work effectively, this would require accurate location tracking to be able to put digital information in arbitrary locations. A more restricted option would be to put up visual markers recognisable to an augmented reality toolkit [5]. This allows digital information to be displayed next to physical items tagged and known to the system.

The final way of representing the digital items is on two-dimensional maps. This approach is more restricted in terms of interaction, but can be displayed on devices with lower computational power.

5. REFERENCES
[1] Brown, P.J. The stick-e document: a framework for creating context-aware applications. In Proceedings of Electronic Publishing 1996, pp. 259-272.
[2] Kidd, C.D., Orr, R., Abowd, G.D., Atkeson, C.G., Essa, I.A., MacIntyre, B., Mynatt, E., Starner, T.E. and Newstetter, W. The Aware Home: A Living Laboratory for Ubiquitous Computing Research. In Proceedings of CoBuild '99: Second International Conference on Cooperative Buildings, pp. 191-198.
[3] Dearle, A., Kirby, G.N.C., Morrison, R., McCarthy, A., Mullen, K., Yang, Y., Connor, R.C.H., Welen, P. and Wilson, A. In: Lecture Notes in Computer Science 2574, Chen, M-S., Chrysanthis, P.K., Sloman, M. and Zaslavsky, A.B. (eds), Proc. 4th International Conference on Mobile Data Management (MDM 2003), Melbourne, Australia, pp. 153-164. Springer, ISBN 3-540-00393-2, 2003.
[4] Kato, H., Billinghurst, M., Poupyrev, I., Imamoto, K. and Tachibana, K. Virtual Object Manipulation on a Table-Top AR Environment. In Proceedings of ISAR 2000, Oct 5-6, 2000.
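The locational contexts of section 4.1 — from a single point to a large service area — suggest a simple visibility test. The following hypothetical sketch (names and geometry invented for illustration) models both a single-point item and a large service area as circles of different radii:

```python
import math

class DigitalItem:
    """A purely digital item visible within a circular area around an
    anchor point. Radius 0 models a single-point item such as a digital
    post-it note; a large radius models a service coverage area."""

    def __init__(self, name, x, y, radius):
        self.name = name
        self.x, self.y, self.radius = x, y, radius

    def visible_from(self, x, y):
        """True if the point (x, y) lies within the item's area."""
        return math.hypot(x - self.x, y - self.y) <= self.radius

items = [
    DigitalItem("meeting note", 2.0, 3.0, 0.0),     # exact spot only
    DigitalItem("wireless LAN", 10.0, 10.0, 15.0),  # large coverage area
]

# What can a visitor standing at (5, 5) see?
visible = [item.name for item in items if item.visible_from(5.0, 5.0)]
# The note is invisible away from its exact spot; the LAN area is visible.
```

A real system would layer per-user filtering (employee, visitor, administrator) on top of such a geometric test before rendering items in the virtual environment, AR view or 2D map.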

Identity Management in Context-Aware
Intelligent Environments
Daniel Cutting
School of Information Technology
University of Sydney
Sydney, NSW 2006, Australia
+61 2 9351 5711
[email protected]

ABSTRACT
This paper briefly defines the concepts of identity management as they relate to intelligent environments and provides some examples of existing research in this area. Several problems within this space, such as entity discovery and information classification, are also discussed with an aim to make clear several possible research directions.

Keywords
Identity management, nymity, nym, intelligent environment.

INTRODUCTION
As computers become ubiquitous and the notion of intelligent environments (IEs) develops, identity management demands increasing attention. Intuitively, identity management is concerned with controlling the pieces and types of information pertaining to a person that are made available to an environment and other people. More concretely, it can be thought of as the protocols and policies used to access this information.

IEs are often portrayed as "smart" rooms that allow users to connect to various services without manually configuring network settings or interacting in explicit ways. In addition, IEs are generally able to deduce actions and context from basic low-level sensors dispersed throughout the environment.

As with many systems, there exists a tradeoff in intelligent environments between ease of use and security. An example of this is the Passport project from Microsoft [6]. This system is intended to provide a single login/password pair for a user across a wide range of web and other services. While appealing in terms of its convenience, such a system has a major drawback: anybody able to determine the username and password of, say, an email account would also be able to access services such as bank accounts or personal log files. Additionally, if the right types of information were discovered, it would be trivial to directly associate data found within these systems with an actual person.

NYMITY
An entity is defined as a person within an intelligent environment. An entifier [1] is a signifier of an entity. The best type of entifier at present would seem to be a biometric such as a fingerprint, retinal scan or DNA sample.

The concept of nymity realises identity management in an abstract way, dealing in particular with personae or nyms presented to the IE by an entity. Identity management is thus reduced to controlling access to sets of information (nyms) associated with an entity and, in most cases, providing some link by which the IE can communicate with the entity.

Nymity is often thought of in terms of Goldberg's Nymity Slider [4], ranging from unlinkable anonymity to fully authenticated verinymity, with pseudonymous possibilities in between. There are several definitions of the terms in existing literature, but for the purposes of this paper I will assume that unlinkable anonymity (or just anonymity) is the case where there is no possibility of linking data to an entity either directly or through combination with other data or nyms [1], and verinymity is the case where data can be directly associated with a particular entity.

Anonymity is often illustrated by the example of using cash to purchase items from a shop that is never revisited. No authentication is required to use cash, and once the transaction is complete it is virtually impossible to discover the involved entity. Verinymity can be illustrated by the example of a retinal scan allowing access to a building. In the case of such biometrics there is virtually no doubt that a given identifier links to a particular entity [1]. However, the absolute ends of the scale should in practice be considered unachievable [2], meaning all nyms fall into the category of pseudonyms.

A nym, then, is a signifier for an entity, but it is not necessarily linked in such a way that an entifier for the entity can be found (that is, it is not necessarily possible to discover the actual entity associated with the nym). Consequently, nyms can be used to expose various pieces of information in such a way that they cannot necessarily be linked together or to the entity that created them.

EXAMPLE SCENARIO
Jill, a lecturer at a university, wishes to use the projector in a meeting room to present some slides. It is not necessary in such a situation for the environment to know exactly who Jill is; it is sufficient to know that she is a lecturer, and hence allowed to use the facilities of the meeting room. In this case, a 'lecturer' nym exposing a particular agreed password would be sufficient for the environment to provide the desired functionality. Other applications, such as a system whereby Jill could be located on campus, would require a different, less anonymous 'name' nym that actually provided her name to the IE. Figure 1 illustrates these relationships.

tomated schemes for enforcing containment of information
Information Lecturer
have been explored in specific domains such as cooperative
collaboration tools [3] with some success.
Occupation: Lecturer
Intelligent
Name: Jill Name
Environment The problem can also be approached from a different, though
complementary direction, that of identity fusion [7, 5]. Sim-
ilar to sensor fusion, this is the concept of constructing prob-
Entity Nyms abilities of a nym relating to a particular entity based on the
accretion of low-level sensor data such as an entity’s loca-
Figure 1: The relationships between an entity, the en- tion or passage through security doors. Instead of trying to
tity’s information, nyms and the intelligent environment. reduce the leakage of information or entifiers, identity fusion
is at least partially concerned with exploiting such weak-
nesses. It would thus be beneficial to explore this concept
In general, it seems clear that most people would like to limit to strengthen research into reducing entity discovery.
the amount of information they provide about themselves to
the intelligent environment, or at least provide the requisite REFERENCES
information in such a way that it cannot be easily traced back [1] Clarke, R. Authentication Re-visited: How
to them unless absolutely necessary. Public Key Infrastructure Could Yet Prosper.
16th International eCommerce Conference, Bled,
Although the purpose of nyms is to present very specific sets Slovenia, 9-11 June 2003.
of limited information to the environment, it is easy to imag-
ine situations where nyms could be maliciously combined [2] Clarke, R. Certainty of Identity: A Fundamental
either by a single party or by colluding parties to allow the discovery of additional information, or even the discovery of the entity underlying the nyms themselves.

PROBLEM STATEMENT
I am interested in exploring mechanisms for automatically creating or modifying nyms to provide as little information as possible to an IE while still providing enough for applications to be useful.
Further to this, I am interested in finding ways of reducing the probability that an entity can be discovered (or linked to data) based on the nyms they expose.

APPROACH
To understand and develop such nym-based mechanisms, it may be worth considering the classification of types of information referenced by a nym such that automated reasoning can be applied to reduce or eliminate discovery of an entity or deduction of further information. For example, if an entity's address details are classified as extremely sensitive, a nym-based framework may disallow inclusion of them in a nym that is intended for public use.
To take this further, such a framework could disallow the use of multiple nyms which include an entity's address in different contexts, so that it cannot be used as a way of tying together otherwise apparently unrelated nyms. Such au-

Misconception, and a Fundamental Threat to Security. Republished in Privacy Law & Policy Reporter 8, 3 (September 2001), 63-65, 68.
[3] Godefroid, P., Herbsleb, J.D., Jagadeesan, L.J., and Li, D. Ensuring Privacy in Presence Awareness Systems: An Automated Verification Approach. ACM Conference on Computer Supported Cooperative Work, Philadelphia, 2000.
[4] Goldberg, I. A Pseudonymous Communications Infrastructure for the Internet. PhD thesis, Computer Science Department, University of California, Berkeley, 2000.
[5] Li, L., Luo, Z., Wong, K.M., and Bossé, E. Convex Optimization Approach to Identity Fusion for Multi-Sensor Target Tracking. IEEE Trans. Syst., Man and Cybernetics 31, 3 (May 2001), 172-178.
[6] Microsoft. Microsoft .NET Passport Review Guide. http://www.microsoft.com/net/downloads/passport reviewguide.doc
[7] Stillman, S., and Essa, I. Towards Reliable Multimodal Sensing in Aware Environments. http://citeseer.nj.nec.com/stillman01towards.html
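The two policy rules sketched in the APPROACH section — never expose extremely sensitive attributes in a public-use nym, and never let the same sensitive attribute appear in more than one nym — could look roughly like the following. This is a toy illustration under stated assumptions: the names NymFramework, Sensitivity, and the two rules are hypothetical, not an existing implementation.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    MODERATE = 2
    EXTREME = 3

class NymFramework:
    """Hypothetical policy layer deciding which attributes a nym may expose."""

    def __init__(self, classification):
        # classification maps an attribute name to its Sensitivity level.
        self.classification = classification
        self.exposed = {}  # attribute -> set of nym ids already exposing it

    def may_include(self, nym_id, attribute, public_use):
        level = self.classification.get(attribute, Sensitivity.EXTREME)
        # Rule 1: never place extremely sensitive data in a public-use nym.
        if public_use and level is Sensitivity.EXTREME:
            return False
        # Rule 2: a sensitive attribute shared by several nyms would let an
        # observer link those nyms together, so allow it in one nym only.
        users = self.exposed.setdefault(attribute, set())
        if level is not Sensitivity.PUBLIC and users and nym_id not in users:
            return False
        users.add(nym_id)
        return True

fw = NymFramework({"nickname": Sensitivity.PUBLIC,
                   "home_address": Sensitivity.EXTREME})
assert fw.may_include("nym-A", "home_address", public_use=True) is False
assert fw.may_include("nym-A", "home_address", public_use=False) is True
assert fw.may_include("nym-B", "home_address", public_use=False) is False
```

The last assertion shows the linkability rule at work: once nym-A carries the address, nym-B may not carry it as well.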

244
Towards a Software Architecture for Device Management in
Instrumented Environments
Christoph Endres
Saarland University
Saarbrücken, Germany
[email protected]

ABSTRACT
An infrastructure for scalable plug-and-play device management in an instrumented environment is presented. A prototype of the system is described and issues of the overall architecture are addressed.

1. INTRODUCTION
The FLUIDUM project (www.fluidum.org) is currently building an instrumented environment in order to investigate interaction techniques for ubiquitous computing. An infrastructure for device communication has to be provided that allows fast prototyping and provides a stable foundation for the project's devices, scalable to desk-, room-, and building-level.
In analogy to the well-known concept of a window and driver manager on a conventional desktop computer, I am working on such an infrastructure for instrumented environments of various scales.
At the core of this system is a device manager with a dynamic plug-and-play mechanism for possibly fluctuating devices (e.g. PDAs or laptops) in the environment. A prototype of this device manager has been built. Its architecture is described in the following section. Finally, I discuss issues raised during the implementation of the prototype.

2. ARCHITECTURE OF THE PROTOTYPE
2.1 Design goals
The design of the device manager is guided by several constraints. In the FLUIDUM project it will be used in three differently scaled instrumented environments, with a potentially widely varying number of devices and applications. Also, in order to cooperate with other, similar projects at the same office, the device manager has to be reusable in other contexts. In order to achieve these goals, there are several important considerations.
Since the architecture has to be open to new applications and new devices, the interfaces have to be well defined and simple. The architecture has to be sufficiently flexible for unforeseen future devices. This will be achieved by the way devices are classified, as described in more detail below.

2.2 Overall system design
The core part of our system is a blackboard, called "the pool". It is used to store and exchange all sorts of important information about the environment. Connected to the pool are several services that provide information for the pool, or offer processing of information and then eventually write their results back to the pool.
The plugboard service is one of those services and deals with the device management. Besides keeping track of all plugged devices and their features, it can trigger actions on the devices based on data stored in the pool. For instance, a service might put a request for taking a photo with a digital camera on the pool. The plugboard service then takes the request from the pool, captures the requested photo and places a reference (URL) to this photo back in the pool. The requesting service can take this URL and process it.
The next section discusses the device manager's approach to device classification. After that, the architecture of the plugboard is presented.

Figure 1: High level view of the system (several services connected to the central pool, among them the plugboard)

2.3 Classification of devices
As mentioned above, one main issue in device classification is the uncertainty about future devices. At the current pace of hardware evolution, it is very hard to tell which kinds of devices will have to be integrated in the system in a few years, and next to impossible to find a classification of devices that could handle them. Therefore, we decided not to classify the devices, but instead to classify the different properties of a device (video capturing, text entering, infrared sensing, etc.) and model a device as a list of those properties. This approach has turned out to be very flexible and useful so far.
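The photo-request cycle through the pool described in Section 2.2 can be sketched as follows. This is a toy, single-threaded illustration; the names Pool, Plugboard, and the entry fields are assumptions for the sketch, not the FLUIDUM API.

```python
class Pool:
    """Minimal blackboard: services post entries and take matching ones."""
    def __init__(self):
        self.entries = []

    def post(self, entry):
        self.entries.append(entry)

    def take(self, **pattern):
        """Remove and return the first entry matching all key/value pairs."""
        for entry in self.entries:
            if all(entry.get(k) == v for k, v in pattern.items()):
                self.entries.remove(entry)
                return entry
        return None

class Plugboard:
    """Watches the pool for device requests and writes results back."""
    def __init__(self, pool):
        self.pool = pool

    def step(self):
        request = self.pool.take(type="request", action="take-photo")
        if request is not None:
            # A real plugboard would drive the camera here; this sketch
            # only fabricates the reference (URL) placed back in the pool.
            self.pool.post({"type": "result", "request_id": request["id"],
                            "url": "http://pool.example/photo01.jpg"})

pool = Pool()
pool.post({"type": "request", "action": "take-photo", "id": 1})
Plugboard(pool).step()
result = pool.take(type="result", request_id=1)
assert result["url"] == "http://pool.example/photo01.jpg"
```

The requesting service never talks to the camera directly: it only posts a request entry and later takes the result entry, which is the decoupling the blackboard design aims for.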

245
2.4 Plugboard architecture and device manager
The architecture of the plugboard reflects the approach of device classification. A device is modelled as an object containing a list of parameter/value pairs (e.g. "name=camera01") and a list of property APIs. The inclusion of such a property API, e.g. "video in", means that the device has this property. If a property of this type is missing, we can assume that the device cannot perform that task. The advantage of modelling those properties as APIs is that besides getting information about the device, we also acquire access to its features. The APIs are standardized, so on encountering a certain property API we know which functions can be called.
The central part of the plugboard is the device manager server. It is a lookup service to which devices can connect or from which they can disconnect. On the other hand, services can also connect to the server and request information about devices. Each connected service will be automatically informed if there are important changes in the plugged devices. Some of those services take care of the connection and exchange of data to the central data pool.

Figure 2: Architecture of the plugboard (devices with parameters and property APIs plug into the server via a device plug adapter; the server keeps a table of devices and their handles; services connect through a service adapter for look-up and monitoring)

3. DISCUSSION ISSUES
There are some unresolved issues in the current system that I would like to discuss.

3.1 Centralized design as bottleneck
Although the central device manager service seems the logical design approach, it is a potential bottleneck and, if it fails, a source of instability for the whole system. At the moment, the services keep a copy of the device manager's list of all plugged devices and thus could continue working during a device manager failure. Although stable, this solution might lead to performance issues. Alternative approaches might include self-organizing structures or some sort of peer-to-peer network.

3.2 Reliable recognition of device disconnection
At the moment, the devices connect via remote method invocation to the central server. Although there are some stable mechanisms to detect a failure of this connection, there is no sophisticated mechanism yet to detect failure of a device without a previous disconnect.

3.3 Resource management
The device manager server is a useful lookup service to find available devices and to find out about their features. A feature and concept for scheduling devices to applications is still missing, in particular a reliable locking mechanism for devices or device features in use. Also, mutual locking of different properties on the same device is missing. For instance, a camera currently in use in the system is not capable of simultaneously broadcasting a video stream and capturing a high resolution photo. Those dependencies have to be modelled.

3.4 Inclusion of future devices
This is a point which should be solved by our approach of device properties. The author would like to discuss it and gather some more opinions.

3.5 Dealing with virtual devices
Some properties, for instance recognition of visual markers, do not have a hardware equivalent but are overlays of other properties, in this example video capturing. The current solution is implementing virtual devices that plug into the server both as a device (marker recognizer) and as a requesting service (looking up devices with the video capturing property). Although this approach works, there might be a more elegant way to do this.

4. ACKNOWLEDGMENTS
This work has been funded by the German Research Council (DFG) and the Chair for AI at the University of Saarbrücken, Germany.
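The property-based device model of Sections 2.3 and 2.4 — a device as a list of parameter/value pairs plus standardized property APIs, registered with a central lookup server — might be sketched like this. Class and property names are illustrative assumptions, not the actual plugboard code.

```python
class VideoIn:
    """A standardized property API: any device that includes it can capture."""
    def capture_photo(self):
        # Return a reference (URL) to the captured photo, not the pixels.
        return "http://server.example/photos/42.jpg"

class Device:
    """A device = parameter/value pairs plus a set of property APIs."""
    def __init__(self, parameters, properties):
        self.parameters = parameters    # e.g. {"name": "camera01"}
        self.properties = properties    # e.g. {"video_in": VideoIn()}

    def has(self, prop):
        return prop in self.properties

class DeviceManagerServer:
    """Lookup service: devices plug/unplug, services query by property."""
    def __init__(self):
        self.devices = []

    def plug(self, device):
        self.devices.append(device)

    def lookup(self, prop):
        return [d for d in self.devices if d.has(prop)]

server = DeviceManagerServer()
server.plug(Device({"name": "camera01"}, {"video_in": VideoIn()}))
server.plug(Device({"name": "keypad01"}, {"text_in": object()}))

cameras = server.lookup("video_in")
assert len(cameras) == 1
assert cameras[0].parameters["name"] == "camera01"
# Holding the property API also grants access to the feature itself:
url = cameras[0].properties["video_in"].capture_photo()
```

Note how the absence of a property simply means the device cannot perform that task, and how finding a property API immediately yields callable functions, mirroring the paper's two design points.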

246
Ubiquitous Support for Knowledge and Work
Michael A. Evans
Knowledge Acquisition and Projection Lab
501 N. Morton, Ste 212
Bloomington, IN 47404 USA
+1 812 856 1363
[email protected]
ABSTRACT
Knowledge management (KM) presents a challenge to human-computer interaction (HCI). Indeed, a reassessment of how knowledge and work distributed across structural and cultural boundaries of organization are supported may be in order. This dilemma can be summarized by stating that the problem concerns how knowledge is conceptualized and at what level of organization interventions are proposed. Consequently, my dissertation draws upon three theories—Communities of Practice, Activity Theory, and Institutional Theory—that emphasize knowledge and work as collective processes to counter this challenge. A case of the collaborative practice of virtual teams in the U.S. Navy is presented to illustrate.

Keywords
Knowledge management, human-computer interaction, Communities of Practice, Activity Theory, Institutional Theory

INTRODUCTION
The U.S. Department of the Navy (DON) is in a monumental period of transition. In essence, to counter a radical downsizing in on board personnel and to leverage what has been championed as the critical asset of tacit, expert knowledge as well as advanced information technologies, the DON has formulated a strategy that promotes knowledge management and eGovernment initiatives throughout the enterprise. To emphasize the impact of this transition, an exchange on evolving collaborative troubleshooting practice in the U.S. Navy between two long-time civilian employees follows:

Bill: In the old days that [an exchange between at-sea sailors and shore-based technicians engaged in a troubleshooting action] would have been handled by satellite phone (MRSAT) or message traffic. So the SIPRnet [the Secret Internet Protocol Router Network, used to transmit classified information about ships] has really helped, being able to send email because sometimes it would take a day to get a message out. Even if you're in a priority situation there's (sic) a lot of things going on in a ship; they don't have the time to cut a message with you.

Don: Yeah and, in fact, just to further what BS's saying like again the chat came in to play [in a recent troubleshooting action aboard a ship deployed in the Persian Gulf]. Because what I was doing was I was chatting with LANT [FTSCLANT – Fleet Technical Support Centre, Atlantic Division in Norfolk, VA] almost nightly. Almost every night about what my problem was and, you know, what I mean and then they were in turn calling Richard [at the Naval Surface Weapons Center, Crane in south-central Indiana] and actually doing, you know, calling Richard on the phone saying, "Yeah, you know…[Don's]…got these parts – he can do this, this and this," and then they would get back on chat [to continue troubleshooting with me] and it's all real time.

The above discussion between Bill, an electronics engineer, and Don, a subject-matter expert technician, encapsulates the current, yet evolving practice of maintaining and troubleshooting at a distance the shipboard systems in the U.S. Navy. To review, a subject-matter expert (SME) on a "tech assist" in the Persian Gulf exploits both mundane and advanced information technologies to leverage geographically-dispersed expertise. The mission was to troubleshoot and resolve a critical problem with a complex electronic countermeasure system aboard a ship deployed to defend troops landed in Iraq.

DESIGN FOR DISTRIBUTED KNOWLEDGE AND WORK
The above excerpt and scenario capture nicely the hurdles to be overcome to support sailors, engineers, and technicians servicing complex electronic systems aboard U.S. Navy ships. The matter is more critical given the DON's explicit interest in knowledge management (KM) initiatives.
Consequently, one interpretation of this strategic initiative is to develop a knowledge management and performance support system to aid at a distance the collaborative troubleshooting actions of military and civilian technicians maintaining electronic countermeasure systems aboard U.S. Navy warships. To this end, the Knowledge Acquisition and Projection Lab at Indiana University is

247
attempting to meet this challenge by participating in the Knowledge Projection Project – a joint undertaking with Naval Sea Systems Command (NAVSEA), Naval Surface Weapons Center (NSWC) Crane, EG&G Technical Services and Purdue University. The proposed system is intended to leverage both intellectual capital (i.e., tacit knowledge) and advanced information technologies (e.g., Case-Based Reasoning and High Performance Knowledge Bases) to facilitate the collaboration between shore-based civilian technicians and on board sailors within a network of distributed practice. The goals of the design team at IU are to exploit KM thinking and techniques to impact key organizational variables, including a reduction in total cost of ownership, an improvement in the efficiency and effectiveness of maintenance and troubleshooting, and an increase in fleet readiness.
Understandably, this presents a unique challenge to human-computer interaction (HCI). To meet this challenge, the suggestion forwarded here is to expand traditional frames of reference to more fully incorporate social and cultural features of organization that may influence the effective distribution of knowledge and work across enterprise boundaries. As will be illustrated in the case of this collection of military and civilian technicians, social features arise as performance is essentially a collaborative and distributed practice across specialized work units; cultural features arise as this coordination cuts across functional identities, defined both by their status in the organization (military or civilian) and role in the end-to-end process (primary maintainer or first-line support). To assist with accounting for these social and cultural features, I will enlist concepts and principles from three perspectives that are appearing with increasing regularity in the HCI literature — Communities of Practice [4], Activity Theory [2], and Institutional Theory [5]. Juxtaposing these three theories may better permit the examination of inherent, yet unrecognized, tensions in the concepts of knowledge (object-process) and work (individual-organizational) that knowledge management principles and initiatives present.

THREE PERSPECTIVES ON KNOWLEDGE AND WORK
Almost fifteen years ago, Susanne Bødker [1] wrote:

This article presents a framework for the design of user interfaces that originates from the work situations in which computer-based artifacts are used: The framework deals with the role of the user interface in purposeful human work…I deal with human experience and competence as being rooted in the practice of the group that conducts the specific work activity…The main conclusions are: The user interface cannot be seen independently of the use activity (i.e., the professional, socially organized practice of the users and the material conditions for the activity, including the object of the activity). The standard view in these situations is to deduce an ultimate set of operations from an abstract use activity and apply these to design and analysis. This article argues that the user interface fully reveals itself to us only when in use (pp. 171-172).

In this dissertation I wish to extend her framework to include Communities of Practice and Institutional Theory. The reasons for this are threefold. First, over the past fifteen years there have been tremendous advances in theoretically-informed analyses of knowledge and work. Nonetheless, few attempts have been made to integrate perspectives. Second, there are shortcomings to Activity Theory, particularly a lack of attention to the issue of power, that can be addressed by the other two perspectives. Finally, bringing these theories together can help sustain the interdisciplinary nature of HCI. By incorporating theories that are now used in cognate fields such as educational psychology, performance technology, information science, and organizational theory, this agenda can be further advanced.

CONCLUSION
My aim has been to reveal the challenges that knowledge management initiatives bring to the theory and practice of human-computer interaction. The issue is that distributed knowledge and work inevitably involve the crossing of social and cultural boundaries of organization. What is encouraging is that we now have appropriate, theoretically-based perspectives that can assist in meeting this endeavor.

ACKNOWLEDGMENTS
I thank NAVSEA, NSWC Crane, and the men and women in the U.S. Navy who maintain and operate the "Slick-32" for their cooperation and participation.

REFERENCES
1. Bødker, S. (1989). A human activity approach to user interfaces. Human-Computer Interaction, 4, 171-195.
2. Engeström, Y. (1987). Learning by expanding: An activity-theoretical approach to developmental research. Helsinki, Finland: Orienta-Konsultit.
3. U.S. Department of the Navy (2002). Information Management & Information Technology Strategic Plan FY2002-2003. Available at http://www.don-imit.navy.mil/default.asp.
4. Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. New York: Cambridge University Press.
5. Zilber, T. B. (2002). Institutionalization as an interplay between actions, meanings, and actors: The case of a rape crisis center in Israel. Academy of Management Journal, 45(1), 234-254.

248
Anonymous Usage of Location-Based Services over
Wireless Networks
Marco Gruteser
Department of Computer Science
University of Colorado at Boulder
Boulder, CO 80309
[email protected]

ABSTRACT
The ability to sense user context, especially user location, and to adapt applications to it, is a central notion in ubiquitous computing. While such information can enable important applications, it also raises significant questions about information privacy. Concentrating on location information—arguably a more sensitive part of context—we investigate novel privacy enhancing technologies between the extremes of relying on goodwill and complete data suppression. Adjusting data precision and ensuring anonymity of the distributed information can mitigate privacy risks while still allowing some applications to gain insights from this data.

1. INTRODUCTION
Improvements in sensor and wireless communication technology enable accurate, automated determination and dissemination of a user's or object's position. There is an immense interest in exploiting this positional data through location-based services (LBS), which we define broadly as applications that automatically receive user location information. For instance, adaptive smart spaces could tailor their functionality to the user's presence and current location [1], or vehicle movement data could improve traffic forecasting and road planning [2].
The success of LBS is intrinsically tied to wireless networks. Wireless networks enable a high degree of user mobility; that is, users can access computing services virtually anywhere. Thus, location becomes an important piece of contextual information for applications. Among wireless networks, the proliferating wireless LAN networks are of particular interest, because they provide high-bandwidth network connections, allow precise locating of stations, and have the potential to cover the highly populated key areas, where people spend most of their lives.
However, without safeguards, extensive deployment of these technologies endangers user location privacy and exhibits significant potential for abuse [3]. Common privacy principles demand, among others, user consent, purpose binding, and adequate data protection for collection and usage of personal information. Complying with these principles generally requires notifying users (data subjects) about the data collection and purpose through privacy policies, and implementing security measures to ensure that collected data is only accessed for the agreed upon purpose.

2. SCOPE
Our research concentrates on a complementary approach based on adjusting the level of data accuracy. In this approach, location-based services and network providers can collect and use only de-personalized data (i.e., practically anonymous data). This approach promises benefits for both parties. For the service provider, practically anonymous data causes less overhead. It can be collected, processed, and distributed to third parties without user consent. For data subjects, it removes the need to evaluate potentially complex service provider privacy policies.
Practical anonymity requires that the subject cannot be re-identified (with reasonable efforts) from message contents and characteristics. So far, we see the greatest location privacy challenges in link layer information available to wireless LAN clients and application layer information that is transmitted to service providers.

2.1 Wireless LAN Hotspots
In IEEE 802.11b wireless LANs, signal characteristics allow determining the position of a transmitter with high precision [4]. In addition, the MAC address provides a static identifier that enables an adversary to link multiple messages to the same transmitter. Thus, an adversary can track the movements of the transmitter and potentially identify its user.

2.2 Location-Based Services
For LBS, we consider the primary risk inherent in the location information. While user IDs are not required for all services or could be faked, the transmission of location information is necessary to obtain the service. However, revealing accurate positional information can pose serious identification problems. The Global Positioning System typically provides 10–30 feet accuracy. This information can be correlated with public knowledge to identify a user or vehicle. For example, when a map service is used while still parked in the garage or on the driveway, the location coordinates can be mapped to the address and owner of the house. If queries are sufficiently frequent, they can be used to track an individual. Note that these methods use mostly publicly available information, as opposed to the identity behind network (IP)
249


addresses, which is typically only known to Internet Service Providers. Thus, this type of identification attack is available to any provider of a location-based service.

3. APPROACH
The privacy enhancing mechanisms seek to maintain a minimum level of anonymity. Inspired by the k-anonymity concept [5] for databases, we define the level of anonymity as k, where the adversary's observations of an individual's movements must be indistinguishable from those of at least k-1 other individuals. We plan to extend this model to take into account continuous data updates (i.e., location information changing over time).
We address the WLAN tracking problem at the link layer through disposable MAC addresses. Compared to solutions such as directional antennas, this lightweight mechanism can be deployed without extensive hardware modifications. When addresses are switched frequently enough, it prevents an adversary from tracking the movements of individuals. More sophisticated adversaries, however, may be able to link several addresses to the same individual through monitoring signal-to-noise ratio or traffic analysis. We plan to analyze WLAN traces to judge how frequently addresses must be disposed of for a given level of anonymity, and how vulnerable this approach is to the more sophisticated attacks.
The system uses cloaking algorithms that change the accuracy of location information when the system intentionally reveals it to third parties, such as location-based services. To date, we have designed a system architecture and algorithms [6] that adaptively control the accuracy of transmitted location information so that the message could have originated from at least k users. Based on automotive traffic simulations, we found that 100–200m accuracy is usually sufficient on city and highway streets to maintain a minimum level of 5-anonymity. We plan to extend this work with algorithms that support more sophisticated location queries than asking for a single point, and with algorithms that do not rely on a central trusted server.

4. RELATED WORK
The design of the Cricket location support system [7] takes privacy into account by determining position on the user's trusted client device (as opposed to in potentially untrustworthy building infrastructure). This enables the user to take control over his location and decide whether to share it. Our cloaking algorithms, however, can build on such location systems to control the accuracy of revealed information.
Langheinrich's privacy awareness system [8] informs data subjects and users about data usage policies; thus, it increases awareness but does not seek to offer protection. Other researchers [9, 10] describe systems that can enforce data access rules specified in privacy policies; our approach adds the capability of determining appropriate levels of accuracy for anonymous data.
Sweeney [5] proposes the k-anonymity model for anonymizing database tables. Generally speaking, a database table is considered k-anonymous if it contains at least k entries for every quasi-identifier. In other words, there are at least k-1 other individuals that any given record could pertain to.
For data mining purposes, entries can be perturbed before storage by adding a random value [11]. A reconstruction procedure then estimates the approximate distribution of a large number of values; however, no specific value can be linked to an individual.

Short Bio
Marco Gruteser is currently a Ph.D. candidate in computer science at the University of Colorado at Boulder. His research interests include privacy, context-aware applications, and wireless networks.
During a one-year leave at the IBM T.J. Watson Research Center, he developed a software infrastructure that integrates sensors to support context-aware applications in the BlueSpace smart office project. This work led to four pending patents, a refereed conference publication, and coverage from US news media such as the New York Times and ABC Television News.
Marco received a Master's degree in Computer Science from the University of Colorado at Boulder (2000) and completed a Vordiplom at the Technical University Darmstadt, Germany (1998). He is a student member of the ACM.

REFERENCES
[1] P. Chou, M. Gruteser, J. Lai, A. Levas, S. McFaddin, C. Pinhanez, and M. Viveros. BlueSpace: Creating a personalized and context-aware workspace. Technical Report RC 22281, IBM Research, 2001.
[2] Sastry Duri, Marco Gruteser, Xuan Liu, Paul Moskowitz, Ronald Perez, Moninder Singh, and Jung-Mu Tang. Framework for security and privacy in automotive telematics. In 2nd ACM International Workshop on Mobile Commerce, 2002.
[3] Roy Want, Andy Hopper, Veronica Falcao, and Jonathan Gibbons. The active badge location system. ACM Transactions on Information Systems (TOIS), 10(1):91–102, 1992.
[4] Paul Castro, Patrick Chiu, Ted Kremenek, and Richard Muntz. A probabilistic room location service for wireless networked environments. In Ubicomp, 2001.
[5] L. Sweeney. k-anonymity: A model for protecting privacy. International Journal on Uncertainty, Fuzziness, and Knowledge-based Systems, 10(5):557–570, 2002.
[6] Marco Gruteser and Dirk Grunwald. Anonymous usage of location-based services through spatial and temporal cloaking. In First International Conference on Mobile Systems, Applications, and Services (MobiSys), 2003.
[7] Nissanka B. Priyantha, Anit Chakraborty, and Hari Balakrishnan. The cricket location-support system. In Proceedings of the sixth annual international conference on Mobile computing and networking, pages 32–43. ACM Press, 2000.
[8] Marc Langheinrich. A privacy awareness system for ubiquitous computing environments. In 4th International Conference on Ubiquitous Computing, 2002.
[9] G. Myles, A. Friday, and N. Davies. Preserving privacy in environments with location-based applications. IEEE Pervasive Computing, 2(1):56–64, 2003.
[10] X. Jiang and J. Landay. Modeling privacy control in context-aware systems using decentralized information spaces. IEEE Pervasive Computing, 1(3), Jul/Sep 2002.
[11] Rakesh Agrawal and Ramakrishnan Srikant. Privacy-preserving data mining. In Proc. of the ACM SIGMOD Conference on Management of Data, pages 439–450. ACM Press, May 2000.
250
Service Advertisement Mechanisms for Portable Devices
within an Intelligent Environment
Adam Hudson
School of Information Technologies
University of Sydney, Australia
[email protected]

ABSTRACT
Personal servers are portable devices with substantial processing and storage abilities, but no user interface. A user carries one on their person while inhabiting intelligent environments, and interacts with the processes and data it contains using nearby devices. The lack of a user interface requires a rethink of how this interaction takes place, as the environment now needs to access services available on the portable device, rather than the more traditional method of the portable device accessing the services available in the environment. Through my research, I aim to develop a mechanism which enables a portable device, such as a personal server, to advertise the services it has available to an intelligent environment, and to explore what new applications this enables.

Keywords
Ubiquitous computing, intelligent environments, services, service discovery, service advertisement, portable devices, personal servers.

INTRODUCTION
Personal Servers
The personal server [1,2] is a portable device containing high-density storage and low-power, high-performance processing, but without any form of direct user interface. Wireless connectivity allows it to communicate with other devices within a local intelligent environment, and it is through these that a user interacts with their personal server. This allows users to interact with their data through whatever interface is convenient, so that the personal server itself does not have to be a trade-off between portability and usability. Users can carry a device small enough to clip on a belt with them at all times, access it using a phone or PDA while traveling, but then easily make use of a full-size screen and keyboard when one becomes available, such as in a café or their office.
A major strength of the personal server is that, regardless of what interface devices are being used, the user always has access to their own data and applications. This removes the difficulties inherent in accessing personal data within an unfamiliar environment, such as gaining network access, negotiating firewalls and finding a computer running the correct version of an application. All the user needs to access their data is a personal server and a device that supports the appropriate standardised method of communication.
The personal server supplies external devices with access to its resources through services. These services can range from providing simple personal details about the user, such as their name or their favourite food, to far more complicated services such as capturing input, outputting screen buffers, running processes or presenting its storage as a mountable volume.
The properties that a personal server exhibits make it the perfect device for a user to employ to interact with an intelligent environment. Similar devices have been created, such as Factoid [3], MetaPad [4] and Minder [5]. However, these tend either to be less capable, particularly in terms of processing ability, or to be more reliant on wired links. This limits their ability to take part in a truly ubiquitous computing environment.

Client-Server Interactions in an Intelligent Environment
Within current intelligent environments, a user commonly carries some form of portable client device with them, such as a PDA, which they use to access services. A service is a logical function that any other device, acting as a server, is willing to offer to the client over a network.
For a client device to access services, it needs to know which ones are available and where they are located. There are two main ways that it can do this. One method is to query some well-known directory, such as a Domain Name Service (DNS) [6]. This method encounters difficulties when used by portable devices, as the device may not know where to look to find the directory in the first place. A better way in this situation is to deploy a service discovery protocol, such as that found in zeroconf [7] or Universal Plug and Play [8]. These make use of multicast messaging to query the network and allow clients to locate the services directly.
MOTIVATION
Personal servers change the method of interaction between
clients and servers. The situation is now reversed, such that
the server is the portable device, rather than the client. The
user wants applications running on devices within the
intelligent environment to access the services available on
their personal server. If these external devices are to make use of the services, they first need to know that they exist. Therefore the personal server needs a way to advertise what services it has to offer as it enters a network, so that the client devices will access them upon discovery.

It is this requirement for a reverse discovery method, where the server informs the client, rather than the client querying the server, that forms the basis of my research direction.

EXAMPLE SCENARIO
Take the example of a landline phone in an office, which is equipped with an LCD display. As you approach the phone, wearing your personal server, the phone becomes aware of its presence and the services it is offering. It utilises the phonebook service on offer to present you with a list of your contacts on the LCD display, from which you can easily pick out who you want to call. When you hang up and walk away, the phone knows that your personal server is no longer offering these services, and ceases to use them.

In order to make this work, there are a number of issues that need to be addressed:
- How does the phone know that the personal server is there?
- How does it know what services the personal server is offering?
- How does the phone authenticate itself to get access to the services?
- What data representation do the devices use to communicate?
- How does the phone know when the personal server is no longer available?

It is my intention that my research will lead me to come up with suitable solutions for each of these problems.

RESEARCH DIRECTIONS
Current networking models do not allow personal servers to introduce their services in the ad-hoc fashion that we wish them to. Therefore my research is going to be directed towards finding new mechanisms for client-server interactions and investigating what possibilities they open up.

Firstly, I need to fully investigate existing service discovery and advertisement mechanisms. This will allow me to identify what problems they encounter when applied to devices wishing to offer network services. This knowledge can then be used to develop a protocol suite, similar to that used by zeroconf, to make it simple to introduce devices and their services to the network.

Consideration needs to be given to how external devices authenticate with the personal server to access services. This would probably involve services being available with different levels of clearance, ranging from low-level statistical information open to all, to the most privileged ability to write to all parts of the disk.

An important part of my research will be to recognize how the changes I propose to make to client-server interactions can also change the nature of applications within an intelligent environment. What applications does this new service advertisement mechanism enable that were previously difficult, or even impossible, to implement? For example, what changes will it enable for identity checking, location tracking and personalisation applications? Applications such as these are major areas of research within ubiquitous computing today, so my research has the potential to offer new ways to approach them.

Finally, I intend to build a working system where personal servers, or devices acting in a similar fashion (such as customised PDAs), can wirelessly join a network and make their services known. These services should include some that the new mechanisms have made possible, as discussed above.

ACKNOWLEDGMENTS
I would like to thank my supervisor Bob Kummerfeld and my associate supervisor Aaron Quigley for their guidance and support. I would also like to thank the Smart Internet Technology CRC for their ongoing support of my PhD.

REFERENCES
1. Want, R., Pering, T., Danneels, G., Kumar, M., Sundar, M., Light, J.: "The Personal Server: Changing the Way We Think About Ubiquitous Computing", Proceedings of Ubicomp 2002, Goteburg, Sweden, September 30th – October 2nd 2002, pp. 194-209.
2. Want, R., Pering, T., Borriello, G., Farkas, K. I.: "Disappearing Hardware", IEEE Pervasive Computing, Vol. 1, Issue 1, April 2002, pp. 36-47.
3. Mayo, R.: "TN-60 -- Reprint of the Factoid Web Page", http://www.research.compaq.com/wrl/techreports/abstracts/TN-60.html, July 2001.
4. Staudter, T.: "The Core of Computing", http://www.research.ibm.com/thinkresearch/pages/2002/20020207_metapad.shtml, February 2002.
5. Moore, D. J., Want, R., Harrison, B. L., Gujar, A., Fishkin, K.: "Implementing Phicons: Combining Computer Vision with InfraRed Technology for Interactive Physical Icons", Proceedings of ACM UIST'99, Ashville, N.C., November 8th – 10th 1999, pp. 67-68.
6. Mockapetris, P.: "Domain Names – Concepts and Facilities", STD 13/RFC 1034, November 1987.
7. "Zero Configuration Networking", zeroconf IETF working group home page, http://www.zeroconf.org.
8. "Understanding UPnP: A White Paper", http://www.upnp.org/download/UPNP_UnderstandingUPNP.doc
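The reverse discovery idea described in this abstract, where a server announces its services as it joins a network and clients notice when the announcements stop, can be sketched with link-local multicast in the spirit of zeroconf. This is an illustrative sketch, not the protocol proposed here: the JSON message format, the use of the mDNS group address, and the timeout value are all assumptions.

```python
import json
import socket
import time

# Illustrative multicast group/port (the mDNS pair); any link-local group would do.
GROUP, PORT = "224.0.0.251", 5353

def build_announcement(device, services):
    """Encode a service announcement that listening clients can parse."""
    return json.dumps({"device": device, "services": services}).encode("utf-8")

def advertise(device, services):
    """Send one announcement to the local network segment."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay link-local
    sock.sendto(build_announcement(device, services), (GROUP, PORT))
    sock.close()

def still_present(last_seen, ttl=10.0):
    """A client treats the services as gone once announcements stop arriving."""
    return (time.monotonic() - last_seen) < ttl
```

In the office-phone scenario, the personal server would periodically call something like `advertise("personal-server", ["phonebook"])`, and the phone would stop showing the contact list once `still_present` turns false for that device.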
ME: Mobile E-Personality

PEKKA JÄPPINEN
Department of Information Technology
Lappeenranta University of Technology
P.O. Box 20, 53851 Lappeenranta, FINLAND
[email protected]
ABSTRACT
More and more services require some personal information about the user in order to work properly. How to get the required personal information to the service with little or no action from the user, while still preserving the user's privacy, is an important question. In this research the Mobile E-Personality (ME) service is presented. In the ME approach the personal information is stored on a single mobile device. The ME service provides this information to those services that request it.

Keywords
Personal information, privacy, mobility, services, wireless.

1 Services and Personal Information

Internet-based electronic services have gained more and more customers. At the same time, new communication standards have evolved that make it possible to provide new types of services with various types of hardware, for example local information kiosks from which information can be fetched using Bluetooth wireless communication technology. Further on, ubiquitous computing aims to provide services automatically to the user.

As mobile devices have quite limited user interfaces, the services should be personalised in order to provide a better use experience for the user. Personalisation is a technique that allows the selection of the received information according to the preferences of the user. If implemented correctly, both the user and the service provider will benefit. The information needed for personalisation can be acquired either by requesting it from the user or by following the user's behaviour on the services. Personal information is not required only for personalisation reasons. Internet shops, hotel registrations, competitions, conferences and so on require the user's personal information in order to provide their service properly.

Traditionally, personal information has been stored in the database of the service. This approach has a few significant flaws from the user's point of view. First, as the databases of different services do not usually cooperate, the user has to type in the required information every time he decides to use a new service. For example, when taking a trip to three different towns the user has to reserve a hotel room in each town. This means that if the hotels have no cooperation, the user has to type in his personal information three times in a very short period of time. Besides inputting repetitive information, the more services the customer uses, the more databases hold his personal information, which brings up two problems. First, the more places the information is stored in, the bigger the risk that one of the places has a security flaw and the given information gets stolen. Secondly, if some of the personal information changes, it requires a lot of work to go and update all those databases.

For Internet services these basic problems have been partly resolved in web browsers. For example, the Mozilla wallet can fill web forms automatically when the form fields are notated properly [1]. This approach is fine as long as the user only uses a personal computer at work or at home. For a mobile user who mainly uses ubiquitously provided services, which are not accessed by a web browser, or who uses web cafés to access Internet services, the Mozilla wallet helps very little.

There are many frameworks and architectures defined for handling user mobility. For example, the Integrated Personal Mobility Architecture [2] defines a framework where the user's personal information is requested from the "home" network by the network in which the user is visiting. This approach relies on the fact that the service can connect to the user's
home network. This may not be possible for services provided ubiquitously. The same problem exists for trusted third party approaches such as the Liberty Architecture [3] and Microsoft .NET Passport [4]. A connection to the trusted third party is required for personal information retrieval. A third party approach may also require some kind of payment to the third party for its services.

In this PhD research the personal information is stored in the mobile device, where the user has control over it. The information is delivered to the services on request by the Mobile E-Personality (ME) service. Therefore there is less need for services to have huge databases of customers' personal information.

2 ME and research tasks

The goal of the ME research is to define the ways for communicating with the mobile device holding personal information. The access to information in ME is designed so that after configuration user actions are minimised but privacy is preserved.

[Figure 1 shows the ME service on a mobile device communicating (1) with a Service Accessing Device (SAD), (2) with an Internet service via an access point (AP) and the Internet, and (3) with a transparent service.]
Fig. 1. Mobile E-personality and services

In order to create a universally functional Mobile E-Personality there are several questions that need to be addressed. What are the benefits and drawbacks of having a single device holding a lot of special information? What are the risks? How can different types of services request the personal information, or how do they even know they can request it? How can user privacy be ensured, i.e. how much automation can be provided? How is the information notated so that any service can use it? How can the user define what information is available to which service? How is the user authenticated for changing the stored data? How does ME affect business?

The PhD thesis is not going to answer all of these questions. Since the thesis is done for the computer science department and the laboratory of communications engineering, the focus is on the communication between ME and the services (Figure 1).

An initial evaluation of various personal information properties that can affect the location where a given piece of information should be stored was first done and published in [6]. A first version of personal information transfer from a mobile device to an Internet service was based on vCard transfer between a browser plug-in and a mobile phone [7]. A more generic transfer was defined for transparent services [8]. The next steps for the research are to define privacy rules for the mobile device and to define a general structure for the Mobile E-Personality service on the mobile device.

References
1. Bauer, G.W.: User Data Management (2003). Available at: http://www.mozilla.org/projects/ui/communicator/browser/wallet/ [Accessed March 27, 2003]
2. Thai, B., Wan, R., Seneviratne, A., Rakotoarivelo, T.: Integrated Personal Mobility Architecture: A Complete Personal Mobility Solution. Mobile Networks and Applications, Vol. 8, Issue 1, ACM Press (2003)
3. Liberty Alliance: Liberty Architecture Overview (2002). Available at: http://www.projectliberty.org/ [Accessed April 11, 2003]
4. Microsoft: Microsoft .NET Passport: Review Guide (2002). Available at: http://www.microsoft.com/netservices/passport/passport.asp [Accessed March 27, 2003]
5. Bettstetter, C., Kellerer, W., Eberspächer, J.: Personal Profile Mobility for Ubiquitous Service Usage. Book of Visions 2000, Wireless Strategic Initiative (2000)
6. Jäppinen, P., Porras, J.: Analyzing the Attributes of Personalization Information Affecting Storage Location. Proceedings of the IADIS International Conference on e-Society, Lisbon, Portugal (2003)
7. Yrjölä, M., Jäppinen, P., Porras, J.: Personal Information Transfer from Mobile Device to Web Page. Proceedings of the IADIS International Conference on WWW/Internet, Algarve, Portugal (2003)
8. Jäppinen, P., Porras, J.: Transfer of Personalisation Information from Mobile Device to Transparent Services. Proceedings of the IASTED International Conference on Computer Science and Technology, Cancun, Mexico (2003)
User Location and Mobility for Distributed Intelligent Environment

Teddy Mantoro
Department of Computer Science, Australian National University, ACT-0200, Australia
+61-2-6125 3878
[email protected]
ABSTRACT
User mobility in an Active Office (AO) represents human activity in a context-aware and ambient intelligent environment. This research describes user mobility by detecting the user's changing location. We have explored precise, proximate and predicted user location using a variety of sensors (i.e. WiFi and Bluetooth) and investigate how the sensors fit in an AO so that they interoperate to detect users. We developed a model to predict and approximate user location using wireless sensors in the Merino architecture, i.e. the architecture for scalable context processing in an Intelligent Environment (IE).

Keywords
Context, Location Awareness, User Mobility, Active Office

INTRODUCTION
An AO, as an implementation model of a distributed IE, is defined to be a normal office, which consists of several normal rooms with minimal additional decorations (minimally intrusive detectors and sensors). In order for an AO to provide services to the users, the AO must be able to detect its current state/context and determine what actions to take based on the context. The AO uses a scalable distributed context processing architecture (the Merino service layers architecture) to manage and respond to rapidly changing aggregations of sensor data [1,3].

The key role of distributed context processing in an IE is the IE domain. An IE domain is an administrative domain, which at least contains an IE repository, a resource manager, a knowledge base and various sensors.

This paper explores user mobility in an AO. We began from understanding user location, then the change from a current location to another location. By analysing the history data we can get the pattern of the user's mobility. We strongly believe that by understanding user mobility we can better understand user activity.

Location is the most important aspect providing context for mobile users, e.g. finding the nearest resources, navigation, locating objects and people. Numerous location models have been proposed in different domains, and can be categorized into two classes, i.e. hierarchical (topological, descriptive or symbolic) and Cartesian (coordinate, metric or geometric) [2,4,5].

We believe that the resources accessible to users in an AO can be detected based on their proximate location. We also believe that a hierarchical location model will be more relevant than a Cartesian location model, because a hierarchical location model can scale from room to building, while the technology for gridding/mapping the office using a Cartesian model was not yet available at the time of writing.

In a Wireless LAN (WLAN) environment, the location of mobile devices can be determined by measuring the signal strengths of the few most visible access points [6]. This accuracy is sufficient to support everyday tasks in the AO.

User location in an AO implies the ability of the IE to understand a user's changing location on a 'significant' scale. When the user moves, it means that the user's access to the resources also changes. The AO can be designed to understand 'significant' changes of user location by using sensors that can measure proximate location.

SENSOR AGGREGATION FOR USER LOCATION
User locations in an AO can be categorised as precise, proximate and predicted locations. The category is based on the sensor's capability in covering an area.

The problem is to combine these known location data to determine a user's actual location with sufficient precision for office activity purposes.

PROXIMATE USER'S LOCATION
Proximate location is based on sensors that cover more than a metre of range, e.g. WiFi, Bluetooth, WiMedia, ZigBee, active/passive badges, voice recognition, face recognition, smart floors, etc.

Proximate location detected by WLAN is interesting in an AO because WLAN can be used both to access the network and to sense user location on the scale of a room or an office.

We used the Bluetooth access point as a sensor for the several rooms within its range. For example, when a user is close to a certain access point, his location will be proximately close to the access point, and it could represent a user location spanning several rooms.
WiFi not only has a higher speed and longer range than Bluetooth, but the signal strength of WiFi can also be used to detect user location. We have two scenarios to determine user location using WiFi. The first is to determine the signal strength from the WiFi-capable device, which stores data in a local IE repository, with the server sending the current user location. The second is to determine the signal strength from the WiFi access points and store the signal strength data in the local IE repository, with the server sending the user location. The difference between these scenarios is that in the second scenario the process of sensing is in the access point, so we do not require a user's mobile device with a high capability.

We used the self-organizing map (Kohonen map) approach of artificial neural networks to cluster the signal strength data. Once we get the signal strength cluster allocation in the local IE, we can directly get the current user location.

In our experiment using WiFi, we used 11 access points, already installed for WLAN access, to measure signal strength in two adjacent buildings.

The result was good enough to predict the current user location. On the 2nd level of one building, we found that most places had a good signal from more than two access points, and we could predict accurately (96%) in rooms of 3 metres width. On the 3rd level, where not all locations were covered by more than one access point, we had only a reasonable degree of accuracy (75%) in predicting a user's current location.

PREDICTED USER LOCATION
Since an IE is also a ubiquitous and ambient computing environment, we assume that sensors, actuators and computer access will be embedded and available in every area. We identify the user's location by recording a historical database of events, whenever the receptor/sensor/actuator captures the user's identity in a certain location.

We develop historical data from precise user locations. The history data can be used to predict user location. A probabilistic model can also be developed to find the most probable location of a user based on a certain policy. We have implemented policies for user location checkpoints, i.e. the same day of the week (at almost the same time on the same day of the week) and all the days in a one-week range (almost the same time within a week) [4]. We use a simple extended SQL query to implement the above policies to find user location via a Java Speech interface.

DISCUSSION
In Figure 1, we show how an AO processes the information to determine a user's location by aggregating the relationship between user data and location data.

The aggregate of precise locations has first priority, followed by proximate and predicted locations respectively. This means that when the AO receives information from the aggregate precise location, the current user's location is determined. If not, then we check using the aggregate proximate location data.

[Figure 1 residue: the UserLoc tables in the IE repository record precise locations (user, device and registration data), proximate locations (per-room signal strengths for each access point) and predicted locations (a history of user, location, date, time and device).]
Figure 1. Aggregate users' location in AO

CONCLUSION AND FURTHER EXPERIMENTS
In an AO, a user has a regular work schedule. A user has a routine activity that can be used to predict his location at a specific timestamp. A user's activity can be represented by user mobility, and user mobility can be seen from the user's changing location on a significant scale. So, in an AO, once we can capture a user's location we can map a pattern of user mobility. Our experiment using WiFi and Bluetooth as proximate location sensors in an AO has shown good results in sensing user location. The results can be improved by developing interoperability between sensors to get aggregate sensor data.

Further experiments that can be considered arising from this work are: aggregating smart sensors (more interoperability between sensors) using a notification system to notify the difference between the current location and the previous notification, and managing location information in the Merino service layer architecture, i.e. format representation, conflict resolution and privacy of location information.

REFERENCES
1. Dey, A. K., Abowd, G. D., et al. A Context-Based Infrastructure for Smart Environments. 1st International Workshop on MANSE, 1999.
2. Harter, A. and Hopper, A. A Distributed Location System for the Active Office. IEEE Network, Vol. 8, No. 1 (1994).
3. Kummerfeld, B., Quigley, A., et al. Merino: Towards an intelligent environment architecture for multi-granularity context description. Workshop on User Modelling for Ubiquitous Computing, 2003.
4. Mantoro, T. and Johnson, C. W. Location History in a Low-cost Context Awareness Environment. Workshop on WICAPUC, ACSW 2003, Adelaide, 2003.
5. Schmidt, A., Beigl, M., et al. There is more to context than location. Computers & Graphics 23 (1999): 893-901.
6. Small, J., Smailagic, A., et al. Determining User Location For Context Aware Computing Through the Use of a Wireless LAN Infrastructure. Pittsburgh, USA, Institute for Complex Engineered Systems (2000).
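The aggregation order described in the Discussion (precise first, then proximate, then predicted) and the cluster matching applied to WiFi signal strengths can be sketched as follows. This is a simplification under assumed data shapes, not the Merino implementation: the trained Kohonen map is reduced here to matching a signal-strength sample against per-room cluster centroids.

```python
def resolve_location(precise, proximate, predicted):
    """Return the highest-priority available estimate, following the AO policy."""
    for estimate in (precise, proximate, predicted):
        if estimate is not None:
            return estimate
    return None

def nearest_room(sample, centroids):
    """Assign a vector of per-AP signal strengths to the closest room centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda room: sq_dist(sample, centroids[room]))
```

For example, `resolve_location(None, nearest_room(sample, centroids), history_guess)` falls back to the WiFi estimate when no precise sensor (such as an iButton reader) has registered the user, and to the history-based prediction when no wireless sample is available either.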
Towards a Rich Boundary Object Model for the Design of Mobile Knowledge Management Systems

Jia Shen
[email protected]
Department of Information Systems
New Jersey Institute of Technology
University Heights, NJ 07102
ABSTRACT
This research proposes a Rich Boundary Object Model as a conceptual framework in the design of knowledge management systems that utilize mobile technologies. An ethnographic study is being conducted of a heating and cooling services company, focusing on the exchange of case stories. With knowledge gained from this study, a prototype system is being built that allows in situ multimedia data capture. The proposed study will extend our understanding of how to effectively design for in situ multimedia data capture so that it is integrated into organizational processes.

Keywords
Mobile Knowledge Management, boundary objects, capture

INTRODUCTION
Mark Weiser in his seminal paper "The Computer for the 21st Century" envisioned a ubiquitous computing world where computational resources are spread into the environment, and people find "using computers as refreshing as taking a walk in the woods" [10]. In recent years various consumer technologies that meet Weiser's definition have become pervasive. The existence of these mobile, reachable devices, whose use is personal and near pervasive, can clearly be used to provide value to organizations.

With the increasing acceptance of the importance of intellectual capital for businesses and organizations, many information systems have been designed specifically to address knowledge management issues. Pioneers of information systems for knowledge management and organizational memory adopted a simple view of knowledge and a passive memory model. From this perspective, the design of knowledge management systems is similar to database design. These early systems focused on categorizing information into fields for later reuse and retrieval (e.g. Answer Garden [1]). The result was that these systems only shared explicit information in strictly defined domains.

However, advances in cognitive and social theories have shifted the view of knowledge as static and isolated to knowledge as active, holistic and socially constructed [2,6]. It follows from this view that to maximize intellectual capital, it is often important to capture much of the context where knowledge and meaning is developed. In the mobile domain, the focus has been the automated capture of the environment for context-aware computing [3], or the automated capture of interaction history for mobile meeting support (i.e. RoamWare [11]). While serving as a useful metaphor, some researchers argue that current "context-aware" computing obscures the centrality of control and intelligence in recognizing context and determining appropriate action [5].

CONTEXT, BOUNDARY OBJECTS, AND MOBILE KNOWLEDGE SHARING
Instead of making systems "aware" of the "context", this research proposes a Rich Boundary Object Model for the design of knowledge management systems. The concept of the boundary object was developed through the social study of communication and technology to characterize objects that serve as an interface between boundaries of domain knowledge [9]. The theoretical importance of the concept has been examined in a number of studies concerning organizational memory [6,7] and common information spaces [2]. In all of these studies, boundary objects are necessarily decontextualized on one side of the boundary for storage, and need to be recontextualized on the other for reuse. The recontextualization of the boundary object, for example a personal record, was found to be critical to reusing information in organizations [6]. Serving the purpose of boundary objects, entries in organizational memory and knowledge management systems are usually text that is entered after the fact. Contextual information is often lost during the process.

Different from context-aware computing, the rich boundary object model recognizes the importance of the "knower": people who do their job through field practice. Provided with mobile multimedia capture devices, the knower captures multimedia data in situ, which can be used to construct rich boundary objects (and thus the name of the
model). More "portable context" [2] can be carried with rich boundary objects. The understanding of context is not only from a computational perspective focusing on the physical environment [3], but also from social and organizational perspectives incorporating the organizational and social context [4]. The central hypothesis is that rich boundary objects afford multiple levels of contextualization, and enhance tacit as well as explicit knowledge transfer.

METHODOLOGY
To test and refine the model, it is proposed that a series of studies be conducted focusing on the exchange of case stories, which are messages that tell the particulars of an occurrence or course of events directly related to work processes. Similar to stories, a case story is told for a particular purpose. Different from other forms of stories, such as jokes, news, or notifications, case stories focus on work process experiences. Such case stories have been used to share informal information; transfer tacit knowledge; share organizational culture and norms; help form communities of practice; and catalyze organizational change. Orr's pioneering research, which examined Xerox photocopier repair technicians in 1986, showed how the exchange of "war stories" could help a community of practice diagnose problems, circulate information, and celebrate identity [8].

The exchange of case stories and the model are being examined through field studies at a company that provides fuel, house boiler and air conditioner repair, and maintenance service to about 3000 customers in central New Jersey. The company has four secretaries, seven technicians, managers and oil drivers. A significant proportion of the technicians' activities can be described as mobile knowledge work, and a significant proportion of the office activities support the mobile technicians. Three studies are being proposed, each addressing a specific question in the context of the field study site:
1. Study 1 – what are the uses and limitations of boundary objects in current organizational knowledge sharing?
2. Study 2 – what rich boundary objects are considered useful in knowledge sharing?
3. Study 3 – can the rich boundary object model be utilized to effectively guide the design of a mobile knowledge management system?

Part of study 3 involves the development of the CAse sTory capture and Sharing (CATS) system, which will enable capture of rich in situ data and the creation and sharing of rich boundary objects. The prototype CATS system being built will use digital pictures and voice recording on pocket PCs with digital camera attachments, and will synchronize data via an 802.11-enabled network.

STATUS AND CONTRIBUTIONS
Currently study 1 is being conducted at the field site. A variety of boundary objects and their limitations are being identified in current case story sharing processes among technicians and between technicians, secretaries and customers.

The proposed research is innovative because it operationalizes the conceptually important yet ambiguous idea of context in mobile knowledge management systems design, using mobile multimedia data capture technologies. The result is not only a database with information objects and events, but a common information space [2] where meanings of information objects can be interpreted and shared.

ACKNOWLEDGEMENTS
I would like to thank my advisor Dr. Quentin Jones for guidance and support, Dr. Roxanne Hiltz and Dr. Steve Whittaker for constructive comments, and MacKinney Oil Company for allowing us to conduct the field study.

REFERENCES
1. Ackerman, M.S. Augmenting the Organizational Memory: A Field Study of Answer Garden. Proceedings of CSCW, 1994.
2. Bannon, L. and Bodker, S. Constructing Common Information Spaces. Proceedings of the 5th European Conference on CSCW, 1997.
3. Dey, A.K., Abowd, G.D., and Salber, D. A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications. HCI, 2001. 16(2-4).
4. Dourish, P. Seeking a Foundation for Context-Aware Computing. Human-Computer Interaction, 2001. 16.
5. Erickson, T. Ask Not for Whom the Cell Phone Tolls: Some Problems with the Notion of Context-Aware Computing. Communications of the ACM, 2001.
6. Halverson, C.A. and Ackerman, M.S. "Yeah, the Rush Ain't Here Yet – Take a Break": Creation and Use of an Artifact as Organizational Memory. Proceedings of the 36th Annual Hawaii International Conference on System Science, 6-9 Jan 2003: pp. 113-122.
7. Lutters, W.G. and Ackerman, M.S. Achieving Safety: A Field Study of Boundary Objects in Aircraft Technical Support. CSCW 2002, 2002.
8. Orr, J. Narratives at Work: Story Telling as Cooperative Diagnostic Activity. CSCW, 1986.
9. Star, S.L. The Structure of Ill-Structured Solutions. In Gasser, L. & Huhns, M. (eds), Distributed Artificial Intelligence, Volume II. Morgan Kaufmann, 1989: pp. 37-54.
10. Weiser, M. The Computer for the 21st Century. Scientific American, 1991. 265(3): pp. 94-104.
11. Wiberg, M. RoamWare: An Integrated Architecture for Seamless Interaction in between Mobile Meetings. Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work, 2001: pp. 288-297.

Part V

Videos
DigiScope: An Invisible Worlds Window
Alois Ferscha, Markus Keller
Research Institute for Pervasive Computing
Altenberger Straße 69, 4040 Linz, AUSTRIA
[email protected], [email protected]

ABSTRACT
Smart appliances, i.e. wirelessly networked mobile information devices, have started to populate the “real world” with “hidden” or “invisible” services, thus building up an “invisible world” of services associated with real world objects. With the embedding of invisible technology into everyday things, however, the intuitive perception of “invisible services” also disappears. In this video we present how we can support the perception of smart appliance services via novel interactive visual experiences. We have developed and built a see-through based visual perception system for “invisible worlds” to support interactive theatre experience in mixed reality spaces, which we call DigiScope. In the video we show how, e.g., the “invisible services” of our SmartCase, an Internet enabled suitcase, can be visualized via graphical hyperlink annotations.

Keywords
Computational Perception, Smart Appliances, MR.

SMART THINGS
“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it” was Mark Weiser’s central statement in his seminal paper [8] in 1991. His conjecture, that “we are trying to conceive a new way of thinking about computers in the world, one that takes into account the natural human environment and allows the computers themselves to vanish into the background”, has fertilized not only the embedding of ubiquitous computing technology into a natural human environment which responds to people’s needs and actions in a contextual manner, but has also caused “hidden” functionality and services to volatilize out of sight of humans. “Smart Things” functionality is characterized by the autonomy of their programmed behaviour, the dynamicity and context-awareness of the services and applications they offer, the ad-hoc interoperability of services and the different modes of user interaction upon those services. Since many of these objects are able to communicate and interact with global networks and with each other, the vision of “context-aware” [1] smart appliances and smart spaces – where systems of mobile entities are dynamically configured by exploiting the available infrastructure and processing power of the environment – has become a reality.

DIGISCOPE
With our work we aim at supporting “human to ubiquitous computer interaction” processes by bringing back visual clues to the user on how to interact. Once computers have disappeared from desks, hiding in the background, their services will most likely still be there. New artefacts and smart appliances [7] are evolving that “carry” invisible services, such that manipulating the appliance controls a service. Even if the service is not integrated into the artefact but merely “linked” to a background system [5], the manipulation of the physical object can manipulate its virtual representative on that background system. To this end it is necessary to link the physical world with the virtual world [2], i.e. to link physical objects with their “virtual counterparts” [6]. Tangible interface research [4] has contributed to this issue of physical-virtual linkage by considering physical artefacts as representations and controls for digital information. A physical object thus represents information while at the same time acting as a control for directly manipulating that information or its underlying associations.

This video presents the use of DigiScope, a 6DOF visual see-through tablet we have developed to support an intuitive “invisible service” – or more generally: “invisible world” – inspection: the invisible services of the smart appliance “SmartCase” – which has been developed as a demonstrator for a contextware framework [2][3] – are inspected. We exploit the metaphor of digital annotations for real world objects, and display these annotations along the line of sight to real world objects that are seen through a holographic display. The user gets the ability to interact with the virtual object and its digital information by viewing the corresponding real (physical) artefact. With DigiScope, the user handles a holographic display tablet just like a 6DOF window that opens a view into the virtual world. The tablet is an optical see-through display which allows for very natural viewing and scene inspection. To implement correct views into the scene, the angle and perspective of the DigiScope are tracked, instead of tracking the position and orientation of the user. Thus the user is freed from any system hardware obstacles such as HMDs, stereoscopic glasses, trackers, sensors, markers, tags and pointers. To support free navigation in the scene, the DigiScope can be fully tilted and rotated in space by hand. The projecting beamer is fixed in the right

projecting angle within a 6DOF mounting frame, and is used to project the computer generated image encoding the scene annotation onto a holographic display. The DigiScope software architecture is based on standard building blocks for AR application frameworks: (i) a 6DOF tracking library for position and orientation tracking of the DigiScope frame, (ii) Java and Java3D for 3D scene modelling, rendering and implementing user interaction, and (iii) ARToolkit for visual object tracking and scene recognition.

INSPECTING SMARTCASE
In previous work we have developed SmartCase [2], a context aware smart appliance [3]. The hardware for the SmartCase demonstration prototype uses an embedded single board computer integrated into an off-the-shelf suitcase, which executes a standard TCP/IP stack and HTTP server, accepting requests wirelessly over an integrated IEEE 802.11b WLAN adaptor. A miniaturized RFID reader is connected to the serial port of the server machine, and an RFID antenna is integrated in the frame of the suitcase so as to enable the server to sense RFID tags contained in the SmartCase. A variety of 125 kHz and 13.56 MHz magnetically coupled transponders are used to tag real world objects (like shirts, keys, PDAs or even printed paper) to be potentially carried (and sensed) by the suitcase. In addition, the SmartCase is equipped with optical markers so as to enable visual recognition and tracking with the ARToolkit framework.

Figure 1: The DigiScope

A unique ID associated with every real world object is the ID encoded in its RFID tag. It is sensed by an RFID reader which triggers a script to update the state information on the embedded Web server. Considering now the inventory of the SmartCase as an “invisible” service, then, once an object (e.g. a shirt) has been put into the SmartCase, this service can be queried to check whether the shirt is in the case or not. A straightforward way to access this information would be via a classical HTTP interface to the embedded web server. Observed via the DigiScope, however, changes to the SmartCase inventory are displayed as a graphical annotation of the real world.

CONCLUSIONS
This video presents DigiScope, a 6DOF visual see-through inspection tablet, as an approach towards the emerging problem of developing intuitive interfaces for the perception and inspection of environments populated with an increasing number of smart appliances in the pervasive and ubiquitous computing landscape. DigiScope envisions a new type of MR interface with two main features: (i) a new exploration experience of the physical world seamlessly merged with its digital annotations via a non-obtrusive MR interface, and (ii) an integration of ubiquitous context-awareness and physical hyperlinking at the user interface level. The DigiScope is demonstrated in operation.

REFERENCES
1. Dey, A.K.: Understanding and Using Context. Personal and Ubiquitous Computing, Special Issue on Situated Interaction and Ubiquitous Computing, Vol. 5 No. 1 (2001)
2. Ferscha, A.: Contextware: Bridging Virtual and Physical Worlds. Reliable Software Technologies, AE 2002. LNCS 2361, Berlin (2002) 51-64
3. Gellersen, H.W., Beigl, M., Schmidt, A.: Sensor-based Context-Awareness for Situated Computing. Proc. of Workshop on Software Engineering for Wearable and Pervasive Computing, Ireland, June (2000) 77-83
4. Gorbet, M.G., Orth, M., Ishii, H.: Triangles: Tangible Interface for Manipulation and Exploration of Digital Information Topography. Proc. CHI 1998 (1998) 49-56
5. Kindberg, T., Fox, A.: System Software for Ubiquitous Computing. IEEE Pervasive Computing, Vol. 1 No. 1 (2002) 70-81
6. Römer, K., Schoch, T., Mattern, F., Dübendorfer, T.: Smart Identification Frameworks for Ubiquitous Computing Applications. Proc. PerCom (2003)
7. Schmidt, A., Van Laerhoven, K.: How to Build Smart Appliances? IEEE Personal Communications 8(4), August (2001) 66-71
8. Weiser, M.: The Computer for the 21st Century. Scientific American (1991) 94-104.

Bumping Objects Together as a Semantically Rich Way of
Forming Connections between Ubiquitous Devices
Ken Hinckley
Microsoft Research, One Microsoft Way
Redmond, WA 98052 USA
Tel: +1 425 703 9065 email: [email protected]

ABSTRACT
This research explores the use of distributed sensors to form dedicated and semantically rich connections between devices. For example, by physically bumping together the displays of multiple tablet computers that are facing the same way, dynamic display tiling allows users to create a temporary larger display. If two users facing one another instead bump the tops of their tablets together, this creates a collaborative face-to-face workspace with a shared whiteboard application. Each tablet is augmented with sensors including a two-axis linear accelerometer, which provides sufficient information to determine the relationship between the two devices when they collide.

Keywords
Distributed sensors, context aware, multi-user interaction

INTRODUCTION
Establishing meaningful connections between devices is a problem of increasing practical concern for ubiquitous computing [3][4]. Wireless networking and location sensing can allow devices to communicate and may provide information about proximity of other devices. However, with many devices nearby, how does a user specify which devices to connect to? Furthermore, connections need semantics: What is the connection for? Is the user collaborating with another user? Is the user combining the input/output resources of multiple devices to provide increased capabilities? Users need techniques to intuitively form semantically rich connections between devices.

This research proposes physically bumping two devices together as a means to form privileged connections. Bumping introduces an explicit step of intentionality, which users have control over, that goes beyond mere proximity of the devices to form a specific type of connection. For example, dynamic display tiling [2] enables users to combine the displays of multiple devices by bumping a tablet into another one lying flat on a desk (Fig. 1). Users can also establish a collaborative face-to-face workspace [1] by bumping the tops of two tablets together (Fig. 2).

Fig. 1 (a) Dynamic display tiling by bumping together two tablets that are facing the same direction. (b) The tablets form a temporary larger display, with the image expanding across both screens. Small arrows provide feedback of the edges involved in the dynamic display connection.

Fig. 2 (a) Face-to-face collaboration by bumping the tops of two tablets together. The sketch is shared with the other user for annotation. Also shown: feedback for (b) making or (c) breaking a collaboration connection.

Bumping generates equal and opposite hard contact forces that are simultaneously sensed as brief spikes by an accelerometer on each tablet. The software synchronizes the data over an 802.11 wireless connection; two spikes are considered to be simultaneous if they occur within 50 ms of one another.
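The 50 ms simultaneity rule lends itself to a simple sketch. The following is an illustrative reconstruction rather than the paper's actual code; the spike threshold, the sample format, and all function names are assumptions:

```python
# Hypothetical sketch of pairing accelerometer spikes across two tablets.
# Only the 50 ms window comes from the text; the threshold is an assumption.

SPIKE_THRESHOLD = 2.0   # assumed magnitude (in g) marking a "hard contact" spike
WINDOW_MS = 50          # spikes within 50 ms count as simultaneous

def detect_spikes(samples):
    """samples: chronological list of (timestamp_ms, accel_magnitude) tuples.
    Returns timestamps where the magnitude rises above the spike threshold."""
    spikes = []
    prev = 0.0
    for t, mag in samples:
        if mag >= SPIKE_THRESHOLD and prev < SPIKE_THRESHOLD:
            spikes.append(t)  # rising edge of a brief contact spike
        prev = mag
    return spikes

def simultaneous_spikes(local, remote):
    """Pair spikes from the local and remote tablets that fall within WINDOW_MS."""
    pairs = []
    for tl in detect_spikes(local):
        for tr in detect_spikes(remote):
            if abs(tl - tr) <= WINDOW_MS:
                pairs.append((tl, tr))
    return pairs
```

In a real system the two streams would arrive over the wireless link with clock offsets, so the timestamps would first need to be mapped onto a common timebase before the window test.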

The two orthogonal sensing axes of each accelerometer provide enough information to determine which edges of the tablets have collided, allowing tiling of displays along any edge (left, right, top, or bottom) or sensing that the tablets are facing one another when bumped together in the case of face-to-face collaboration. Example accelerometer data from bumping two devices together is shown in Fig. 3, as well as simultaneous but incidental handling of the devices. The software can ignore most such sources of false-positive signals. Details of synchronization and gesture recognition appear in [2].

Fig. 3 Left: Example accelerometer signature for bumping two tablets together, with forward-back and left-right accelerometer axes for the local and remote devices. Right: Incidental handling of both tablets at the same time results in signals that are distinct from intentional bumping.

For dynamic display tiling, one tablet (the base tablet) rests flat on a desk surface, and a second tablet (the connecting tablet) is held by a user and bumped into the base tablet along one of the four edges of its screen bezel. Note that this creates a hierarchy in the connection. The connecting tablet temporarily annexes the screen real estate of the base tablet. The software currently distinguishes the connecting tablet from the base tablet using capacitive touch sensors to determine which of the two tablets is being held.

Appropriate feedback confirming that a connection has been established is crucial to the techniques. Users are shown the type of connection being formed using overlaid icons on the screen as shown in Fig. 2 (b, c) for face-to-face collaboration; analogous “connection arrow” icons for dynamic display docking can be seen in the video. Furthermore, because the techniques involve two users, one user’s attention may not be focused on the tablets; hence it is important to provide audio feedback as well. Tiling two displays together makes a short metallic clicking sound suggestive of a connection snapping together. A different sound reminiscent of slapping two hands together occurs when users establish face-to-face collaboration.

For display tiling, picking up a tablet removes it from the shared display. By contrast, for face-to-face collaboration, users may want to move their tablets apart but continue collaborating; hence moving the tablets apart does not break the connection in this case. Instead, users can explicitly break the connection by drawing a slash across the handshake icon (Fig. 2b), or the system automatically breaks the face-to-face connection if one of the users walks away (walking can be sensed using the accelerometer [1]).

Users can also exchange information by bumping tablets together just as people at a dinner table might clink glasses together for a toast. This is distinguished from display tiling by sensing that both tablets are being held (as opposed to one being stationary on a desk). Finally, one user can “pour” data from his tablet into that of another user by angling the tablet down when the users bump their tablets together [2]. These variations shown in the video suggest additional ways to enrich the semantics of connections that can be formed based upon bumping objects together.

RELATED WORK
Smart-Its Friends and ConnecTables form distinguished connections between multiple devices. Smart-Its Friends infers a connection when two devices are held together and shaken. ConnecTables [4] are wheeled tables with mounted LCD displays that can be rolled together so that the top edges of two LCDs meet, forming a connection similar to the collaborative face-to-face workspace proposed here. Both [3] and [4] can form only one type of connection, whereas bumping two objects together can support multiple types of connections. Furthermore, bumping can specify additional parameters, such as which edges of two separate displays to join, or determining which tablet is the connecting tablet (as opposed to the base tablet) to provide a direction (hierarchy) to the connection.

CONCLUSION
This work contributes a novel and intuitive mechanism to form specific types of connections between mobile devices. When bumping two tablets together, a connection is formed in the physical world by manipulating the actual objects of concern, so no naming or selection of devices from a list is needed. Bumping can support several different types of connections, including dynamic display tiling, face-to-face collaboration, or “pouring” data between tablets. Here we focus on multiple Tablet PCs, but in the future, dynamically combining multiple heterogeneous devices could lead to compelling new capabilities for mobile users.

REFERENCES
1. Hinckley, K., Distributed and Local Sensing Techniques for Face-to-Face Collaboration, to appear in ICMI-PUI'03, 5th Intl. Conf. on Multimodal Interfaces.
2. Hinckley, K., Synchronous Gestures for Multiple Users and Computers, to appear in ACM UIST'03.
3. Holmquist, L., Mattern, F., Schiele, B., Alahuhta, P., Beigl, M., Gellersen, H., Smart-Its Friends: A Technique for Users to Easily Establish Connections between Smart Artefacts, Ubicomp 2001: Springer-Verlag, 116-122.
4. Tandler, P., Prante, T., Müller-Tomfelde, C., Streitz, N.A., Steinmetz, R., ConnecTables: dynamic coupling of displays for the flexible creation of shared workspaces, UIST 2001, 11-20.

Ubiquitous Computing in the Living Room:
Concept Sketches and an Implementation of a Persistent
User Interface
Stephen S. Intille (1), Vivienne Lee (1), Claudio Pinhanez (2)
(1) Massachusetts Institute of Technology, 1 Cambridge Center, 4FL, Cambridge, MA 02142 USA, [email protected]
(2) IBM T.J. Watson Research, 19 Skyline Drive - office 2N-D09, Hawthorne, NY 10532 USA, [email protected]

ABSTRACT
This video shows some concept sketches of applications that might be created for a living room with ubiquitous display and laser pointer interaction technology. A fully-functioning prototype of a persistent interface is also described: a language-learning tool.

1. INTRODUCTION
Until recently, researchers creating ubiquitous computing environments were forced to examine the target space, pick a handful of locations where information was most likely to be needed, and then install display devices at those locations. The limitations of display technology have driven user interface design decisions rather than the needs of the end user. Ideally, user interface designers would understand the needs of the end users for any given application and then decide when and how to display information to impact the desired tasks.

We have constructed a mock living room that combines non-invasive sensing technology with ubiquitous display technology to create an environment where information can be presented and manipulated on nearly any surface. Our ubicomp environment makes it possible to develop prototype applications. In this video, we show a series of concept sketches illustrating ideas for interfaces that could be developed for a living room with enabling ubicomp technologies. The video concludes with a fully-functioning example of a persistent interface. A persistent interface is one that is designed to be continuously present [1] without creating feelings of information overload [2].

2. THE UBICOMP ENVIRONMENT
Figure 1 shows the ubiquitous computing living room prototype in our lab. This room has, among others, the following capabilities: (1) ubiquitous display of information on most planar surfaces, (2) ubiquitous audio, and (3) ubiquitous interaction with information on most planar surfaces using laser pointer interaction.

We have embedded an ED-projector system [5] into a cabinet in the living room environment. This relatively inconspicuous device fundamentally alters the display capabilities of the entire living room space, permitting an application designer to display information on most of the planar surfaces in the living room: the floor, ceiling, 3 of 4 walls, furniture, and in some cases even on the interior sides of shelves and furniture. The only limitation is that information can be displayed only in one area of the room at a time.

The primary mode of interaction in the room is laser pointer interaction [4]. The user carries a small laser pointer device. The ED-projector’s camera moves so that the projected image is in the field of view of the camera. Image processing algorithms detect the red laser point in the camera view when the user points at the image. That position is mapped back into the image space. Heuristics for “click” events are then created. In the living room, the user clicks on the display using a 1-2 second dwell.

3. LIVING ROOMS “OF THE FUTURE”
The video shows a series of mock-up examples of interfaces that could be developed for a living room with ubiquitous display and interaction capabilities. Although there are many interesting applications one can envision, the ubicomp display and interaction technology is analogous to the computer mouse. There is no single “killer app” for the mouse. It is an enabling technology that was helpful for certain graphical tasks on early computers and which was later adopted as a key component of nearly every application that runs on a common desktop computer. The mouse, once a novelty, is now a necessity for most computer use.

We believe the same will ultimately be true for ubiquitous display and interaction technology. As did the mouse, the use of this tool will require designers to change how they design human-computer interfaces. Particularly challenging is merging multiple applications seamlessly and sharing a single ubicomp resource. The environment itself becomes the “display space” and applications will compete for use of this resource.

The examples in the video illustrate how the technology could be used to enable mobility, augment traditional media, provide just-in-time motivation [3], help people exploit idle time, pull interfaces off devices, and map information at life-size onto the real world.
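The dwell-based “click” heuristic described for laser pointer interaction can be sketched roughly as follows. This is an illustrative assumption of how such a heuristic might look, not the authors' implementation; the exact dwell time (within the stated 1-2 second range), the jitter radius, and all names are invented:

```python
# Hypothetical dwell-click heuristic: a click fires when the detected laser
# point stays within a small radius of an anchor point for the dwell time.
import math

DWELL_MS = 1500   # assumed dwell duration, within the paper's 1-2 s range
RADIUS_PX = 15    # assumed jitter tolerance in image-space pixels

def detect_click(points):
    """points: chronological list of (timestamp_ms, x, y) laser detections,
    already mapped back into image space. Returns the (x, y) of the first
    dwell click, or None if the point never dwells long enough."""
    start = 0
    for t, x, y in points:
        # advance the window anchor until the current point is within RADIUS_PX of it
        while math.hypot(x - points[start][1], y - points[start][2]) > RADIUS_PX:
            start += 1
        if t - points[start][0] >= DWELL_MS:
            return (x, y)
    return None
```

A production version would also have to tolerate dropped camera frames (the laser point briefly not detected), which this sketch ignores.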
Figure 1: (a) The mock living room. (b,c) A user interacting with the language-learning tool: a functioning example of a persistent interface.

4. THE LANGUAGE-LEARNING TOOL
Many of the concept examples are applications that command the user’s attention. The video concludes with a fully-operational example interface. Our goal when creating the language-learning tool was to create an interface that (1) could be run continuously in the environment so that it was always available, (2) used the environment and the objects in the environment, and (3) was always ignorable.

The language-learning tool is designed to help home occupants learn the vocabulary of a foreign language. The tool is non-disruptive but ever-present. It can be used by multiple people. It exploits the ability of the ED-projector to present information directly on objects in the environment and the ability of laser pointer interaction to permit selection of information from any part of the space. The application works as follows. Foreign language words (in this case French) appear randomly on different surfaces in the environment. When the user expresses interest in a word, the user can get the English translation. Further, if the user wants, he or she can listen to the pronunciation of the word.

The appeal of the interface is its simplicity, and it shows how the ubicomp environment can be used to create a pleasing, persistent interaction experience. The first challenge was to create a persistent interface that does not cause visual or auditory distraction that would disrupt ongoing activity in the environment, since the application runs continuously. We employ one strategy to minimize perceptible visual distractors: slow change. Overall, the strategy is to design the entire interface so that visual change is imperceptible to the user (change-blind user interface design [2]) unless the user is actively interacting with the application. After a location in the room is randomly selected, the ED-projector is moved to that position. To avoid visual distraction, a word then fades in over 30 seconds, which is too slow to trigger peripheral detection. The visual effect is that of a word materializing out of a surface. No visual distractors are created. If the user does not interact with the word after 2 minutes, the word fades away and the projector moves to another randomly chosen location and displays another word. With a silent ED-projector, this interface would run without creating any cognitive distraction.

If the user happens to notice a word, the user can acquire more information using laser pointer interaction. By simply pointing and dwelling on the word it will change to the English version. At this point, the pace of the interaction increases because the user has expressed an interest in the application, and the application will more proactively provide information that is potentially disruptive. If the user continues to dwell, for example, the pronunciation of the word is heard through the room’s audio system. At this point slow fades are not used because the user has expressed an active interest. If the user ignores a word, however, the application switches back to the peripheral mode. Finally, the words that are presented are sometimes associated with the location of the room in which they are presented. The application has hand-coded knowledge of the location of particular objects in the room and is biased to display words related to those objects when projecting near them (e.g. “chaise” when on the chair). Figure 1b-c shows a user interacting with the system. This application can be run continuously in the space even as people have meetings and work in the space.¹

REFERENCES
[1] G.D. Abowd and E.D. Mynatt. Charting past, present, and future research in ubiquitous computing. ACM Transactions on Computer-Human Interaction, 7(1):29–58, 2000.
[2] S.S. Intille. Change Blind Information Display for Ubiquitous Computing Environments. In G. Borriello and L.E. Holmquist, editors, Proceedings of the Fourth International Conference on Ubiquitous Computing, volume LNCS 2498, pages 91–106. Springer-Verlag, Berlin, 2002.
[3] S.S. Intille. Designing a home of the future. IEEE Pervasive Computing, April-June:80–86, 2002.
[4] D.R. Olsen Jr. and T. Nielsen. Laser pointer interaction. In Proceedings of CHI 2000, pages 17–22. 2000.
[5] C. Pinhanez. The Everywhere Displays Projector: A device to create ubiquitous graphical interfaces. In G.D. Abowd, B. Brumitt, and S.A.N. Shafer, editors, Proceedings of the Conference on Ubiquitous Computing, LNCS 2201, pages 315–331. Berlin Heidelberg, 2001.

¹ We thank Ron MacNeil, Ed Huang, CIMIT, and the National Science Foundation.
STARS - A Ubiquitous Computing Platform
for Computer Augmented Tabletop Games
Carsten Magerkurth, Richard Stenzel, Thorsten Prante
Fraunhofer IPSI
”AMBIENTE – Workspaces of the Future”
Dolivostraße 15
D-64293 Darmstadt, Germany
+49 (0) 6151 / 869-997
{magerkurth; stenzel; prante}@ipsi.fraunhofer.de

ABSTRACT
In this video presentation we demonstrate the STARS platform for realizing computer augmented tabletop games within a smart Roomware® environment. STARS dynamically couples multiple types of interaction devices such as personal digital assistants (PDAs) or headsets with an interactive game table. STARS augmented tabletop games provide a number of features like dynamic game boards or private communication channels that go beyond traditional tabletop games, but at the same time preserve the human centered interaction dynamics which makes playing board games a joyful group experience.

Keywords
Tabletop games, ubiquitous computing platform, smart environment, Roomware

INTRODUCTION
During the past decade of ubiquitous and pervasive computing research there has been a growing amount of scientific activity dedicated to the integration of various differently sized and shaped devices within ubiquitous computing environments.

Our realization of such a ubiquitous computing environment and the integrated ‘Roomware®’ devices is presented in [4]. Roomware closely follows Weiser’s notion of calm computing devices that integrate seamlessly into everyday objects [6]. This means that Roomware components still function like traditional room elements (e.g. tables or walls), but provide dedicated computing services for the people in the smart environment.

In addition to the cooperative work applications previously developed for the Roomware environment [5], we now present a new software platform to support the realization of tabletop gaming applications with multiple input and output devices that integrate with the platform. The STARS gaming environment (STARS stands for Spiel-Tisch-AnReicherungs-System, a German acronym for game table augmentation system) consists of a Roomware hardware setup and a specialized software framework that realizes cross-device interaction through different modes and modalities [3]. The software framework of STARS provides many board-game specific functions, such as the administration of the virtual game board, that facilitate the development of new games on top of the platform.

STARS DEVICES
STARS integrates different input and output devices that each have dedicated purposes within game sessions.

Game Table
The game table is the central instance for any tabletop game application. In our setup, it consists of the InteracTable® Roomware component, which is an interactive table with an embedded touch-sensitive plasma display as the table’s surface. The plasma display is used for displaying the contents of game boards and for dealing with related interaction objects. Above the table, an overhead camera tracks the positions of arbitrary playing pieces on the table surface.

Below the table’s surface, a radio frequency (RF-ID) antenna is embedded to detect RF-ID tags placed on the table. These tags are used to initiate and terminate different game sessions by just placing them on or removing them from the game table.

Wall Display
A large display is located at one wall near the game table. It displays game relevant public information that each player can view at any time. In our setup, the wall display consists of the DynaWall®, which is a Roomware component embedded into one of the walls in our research lab. It is a rear-projected interaction space consisting of three joint segments that allow multiple users to simultaneously interact with the display surface.

Personal Digital Assistants
Players can integrate their Personal Digital Assistants (PDAs) via an 802.11b network connection to administrate private information and to communicate clandestinely with other players.

5th International Conference on Ubiquitous Computing (Ubicomp’03), October 12–15, 2003, Seattle, WA, USA. Copyright by the Authors of this Publication.

267
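The tag-as-bookmark session handling described above for the game table can be sketched as a small state store: placing a tag starts or resumes the session bound to it, and removing the tag preserves the session state. This is an illustrative reconstruction only; the tag identifiers and the `SessionStore` API are invented, not taken from the STARS implementation.

```python
# Hypothetical sketch: RF-ID tags act as physical bookmarks that map
# to preserved game-session state. Names and API are illustrative.

class SessionStore:
    """Maps RF-ID tag identifiers to preserved game-session state."""

    def __init__(self):
        self._sessions = {}  # tag_id -> game state snapshot

    def on_tag_placed(self, tag_id, new_state_factory):
        """Placing a tag on the table resumes its session, or starts one."""
        if tag_id not in self._sessions:
            self._sessions[tag_id] = new_state_factory()  # fresh session
        return self._sessions[tag_id]

    def on_tag_removed(self, tag_id, state):
        """Removing the tag interrupts the session, preserving its state."""
        self._sessions[tag_id] = state


store = SessionStore()
state = store.on_tag_placed("tag-42", lambda: {"turn": 1, "board": {}})
state["turn"] = 7                      # play proceeds...
store.on_tag_removed("tag-42", state)  # tag lifted: session preserved
resumed = store.on_tag_placed("tag-42", lambda: {"turn": 1, "board": {}})
# resumed carries turn 7: the tag behaves like a physical bookmark
```

The same pattern extends naturally to serializing the snapshot to disk, so sessions survive restarts of the table software.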
Audio Devices
A public loudspeaker is available to emit ambient audio samples or atmospheric music. STARS also integrates headsets that allow players to receive computer-generated private messages or utter verbal commands. STARS provides a speech generation and a speech recognition module based on the Microsoft Speech API.

BENEFITS OF THE PLATFORM
Playing hybrid tabletop games in a ubiquitous computing environment offers potential benefits over traditional board games.

Persistency and Game Session Management
STARS game sessions can be interrupted and continued at any time, with the current state of a game session being automatically preserved for later continuation. The RF-ID reader unit at the game table assigns game sessions to RF-ID tags, so that the tags operate like physical bookmarks. This makes session management much more intuitive and natural than GUI-based interfaces.

Complex Game Rules
Complex traditional board games such as conflict simulations or role-playing games usually either involve a lot of table reading and dice rolling that hamper game play, or they suffer from oversimplified rules to make them more manageable. In STARS, the more complex game rules are put into the digital domain, so that an accurate simulation of the game world can be realized without slowing down the game flow.

Dynamic Information Visualization
The interactive table display allows providing the players with dynamic game boards. This includes alterations to the boards at runtime, e.g. a fog-of-war might be lifted when new areas of the board are explored. Also, the presentation of the boards can be automatically adjusted to real-world properties such as the positions and viewing angles of the players.

Generic Development Architecture
The STARS software architecture relieves the game developer from many mundane tasks such as device integration or game board management. Thereby, she can concentrate on creating rules and providing content. So far, we have realized a roleplaying game called KnightMage and a Monopoly clone called STARS Monopoly. Both games make use of the heterogeneous device setup, e.g. in KnightMage the wall display shows a public map of the explored game area, while the PDAs are used for inventory management and character attributes.

CONCLUSIONS
We have presented the STARS platform for computer augmented tabletop games. Apart from writing new games for the platform, our next steps will include the integration of additional input and output devices and the augmentation of single playing pieces with information technology.

RELATED WORK
Mandryk et al. [2] have developed a computer augmented tabletop game called False Prophets. Similar to STARS, False Prophets' goal is to combine the strengths of traditional tabletop gaming and computing devices. As in STARS, mobile computers are integrated for private information. However, False Prophets does not attempt to create a general-purpose platform for multiple games, but is currently limited to a single exploration game.

Björk et al. [1] presented a hybrid game system called Pirates! that does not utilize a dedicated game board, but integrates the entire world around us, with players moving in the physical domain and experiencing location-dependent games on mobile computers. Thereby, Pirates! follows a very different, but very interesting approach to integrating virtual and physical components in game applications.

ACKNOWLEDGMENTS
We thank our colleagues Norbert Streitz, Peter Tandler, and Carsten Röcker for their helpful feedback on our work. Also, we especially thank our student staff member Sascha Nau for his dedicated efforts to complete the video realization in time. Parts of this work are supported by a grant from the Ladenburger Kolleg "Living in a smart environment" of the Daimler-Benz foundation.

REFERENCES
1. Björk, S., Falk, J., Hansson, R., Ljungstrand, P.: Pirates! Using the Physical World as a Game Board. In: Proceedings of Interact'01, Tokyo, Japan.
2. Mandryk, R.L., Maranan, D.S., Inkpen, K.M.: False Prophets: Exploring Hybrid Board/Video Games. In: Extended Abstracts of CHI'02, 640-641.
3. Magerkurth, C., Stenzel, R.: Computer-Supported Cooperative Play - The Future of the Game Table (in German). To appear in Proceedings of M&C'03.
4. Streitz, N.A., Tandler, P., Müller-Tomfelde, C., Konomi, S.: Roomware: Towards the Next Generation of Human-Computer Interaction based on an Integrated Design of Real and Virtual Worlds. In: J. A. Carroll (Ed.): Human-Computer Interaction in the New Millennium, Addison Wesley, 553-578, 2001.
5. Tandler, P.: Software Infrastructure for a Ubiquitous-Computing Environment Supporting Collaboration with Multiple Single- and Multi-User Devices. In: Proceedings of UbiComp'01, Lecture Notes in Computer Science, Springer, Heidelberg, 96-115, 2001.
6. Weiser, M.: The Computer for the Twenty-First Century. Scientific American, 94-100, 1991.
A-Life: Saving Lives in Avalanches
Florian Michahelles, Bernt Schiele
Perceptual Computing and Computer Vision Group, ETH Zurich
Haldeneggsteig 4, IFW C29, Zurich, Switzerland
{michahelles, schiele}@inf.ethz.ch, http://www.vision.ethz.ch/projects/avalanche

ABSTRACT
We present a novel approach to enhance avalanche companion rescue using wearable sensing technologies. The time to find and extricate victims is most crucial: once buried by an avalanche, survival chances drop dramatically already after the first 15 minutes. Current technology offers only information on the location of a single victim; however, statistics show that in many cases there are multiple victims. In our research we show how wearable sensors can further enhance such devices and how this additional information can be visualized. A prototypical implementation offered a basis for participatory evaluation with practitioners in the field.

Keywords
situation-aware, wearable sensing, personal assistance, avalanche rescue, mobile application, wearable computing

INTRODUCTION
There is an on-going trend towards out-of-bounds (off-piste) skiing. Recreationists go beyond their limits, underestimate the danger of avalanches and risk their lives without the appropriate awareness of avalanche risks. Statistical analysis of avalanche accidents during the last 30 years [1] has revealed that successful avalanche rescue has to aim at rescuing victims within the first 15 minutes. Avalanche survival chances rapidly decrease with time, and after 15 minutes there is the biggest decline, from 90% to 30%.

Accordingly, beacon technology is widely used by recreationists. Worn by mountaineers, these electronic devices enable survivors and witnesses of an avalanche to start immediate search and rescue operations. Microprocessors on the devices can calculate distance and direction to a single victim from the emitted dipole flux pattern of a standardized long-wave signal [2]. Usually, a range of 80 m can be achieved. Batteries last up to 300 h. In practice, self-organized rescue yields survival chances four times as high as in the case of professional rescue [3], which is often too late.

Current devices only provide directions for finding a single victim. However, recent statistical analysis has shown that a surprisingly high percentage of victims get caught and completely buried in avalanches, producing multiple burial scenarios [4]: 61% of all backcountry skiers who could not be found by visible parts were involved in a multiple burial situation. Multiple burial scenarios ask too much of most companion rescuers; support with avalanche beacons is necessary but not yet sufficient.

State of the art in wearable sensing suggests more opportunities. We believe that providing rescuers with information on the victims' physical states at an avalanche site allows much better resource allocation to the most urgent victims.

OPPORTUNITIES WITH WEARABLE SENSORS
Our approach is to automate the prioritization of victims (triage) through sensors. Mountain medicine has developed a so-called triage scheme for emergency physicians [5]. Based upon different vital-sign data, such as heart rate, respiration activity and consciousness, different urgency states can be derived that determine appropriate first-aid operations. Currently, triage is delayed until victims have been extricated and rescuers get access to them. Instead, wearable sensing attached to the human body can provide continuous sensor readings and share the awareness of emergency with surrounding rescuers much earlier. Automation could also provide non-professional rescuers with the ability to triage in order to rescue additional victims.

SENSING
According to the triage scheme [5] used in avalanche rescue, the primary selection criteria are heart rate and respiration activity. Oximeters offer a non-invasive way of measuring heart rate and blood oxygen saturation.

Fig. 1: Oximeter measurements

We tested different placements of the oximeter: forehead, finger tips and toe. Only the toe turned out to be appropriate in mountaineering: today's ski and hiking boots are well insulated and can shelter the sensor from damage and loss. We have run several tests with skiers and
snowboarders and could achieve reasonable measurements of heart rate and blood oxygen saturation (Fig. 1). Further, subjects reported that the sensor would not have disturbed them during their activities; once it was wrapped around toe or finger, they soon forgot about the sensor. However, we are aware of the fact that severe cold may cause retreat of blood from the extremities, referred to as centralization, such that peripheral measurements at the toe may become unreliable under harsh conditions. A more promising way of detecting heart rate is contact-free measurement through radar based on the Doppler phenomenon. Currently, this technology has been deployed for people detection in earthquakes and at border controls. However, customization for on-body measurements could offer a solution which is robust against both centralization and displacement, as this technique works contact-free.

Another important source of information is the existence of air-pockets in the snow, closed air bubbles in front of mouth and nose, as they protect victims against asphyxiation for up to 90 minutes [6]. As an initial study, we investigated the use of oxygen sensors for air-pocket detection. Unfortunately, oxygen sensors did not appear to be the appropriate method for air-pocket detection: even compact snow contains air, such that the exhaled air of a victim does not deviate significantly from normal snow.

Knowledge about a victim's orientation can be very helpful for rescuers during extrication. As accelerometers measure all means of acceleration, in the stationary case these sensors can report orientation derived from the direction of gravity. We explored how a two-axis accelerometer can be applied to detect the orientation of one's spine.

VISUALIZATION
Avalanche rescue is a situation under immense pressure. Nevertheless, today's devices still require lots of training: guidance with periodical beeps or support with little arrows and rough distance readings is rather difficult for untrained users. A visual user interface displaying more appropriate information could help to make usage much easier.

Fig. 2: Screen design of prototype

With the introduction of unique identifiers – rather a standardization problem among manufacturers than a technical challenge – multiple victims are discriminated. As putting vital functions, air-pocket existence and orientation of all victims into one interface would be too much, we propose separation into location and urgency (Fig. 2). First, a visual map presentation of the victims' spatial distribution enables the user to select victims such that ways can be kept short. Secondly, separation by urgency provides rescuers with a global view of the emergency, which allows better focus on the most urgent victims. For that, we introduce a decision tree defined as follows: heart rate is the primary criterion, air-pocket existence is second, blood oxygen saturation is third and orientation is fourth. In case of unavailable sensor information, the fundamental concept is to always assume the worst case. Now multiple victims can be aligned on a one-dimensional scale where victims' physical states can be easily compared – even under stress conditions. With this user interface, rescuers can select victims either by location or by urgency in order to obtain more details on their vital signs, displayed in the right column.

CONCLUSIONS
We motivated the use of sensors in avalanche rescue by the importance of time during avalanche rescue. We discussed and described how sensor technology can be used to provide rescuers with a valuable tool for better planning of rescue procedures. For demonstration and evaluation purposes we have developed a first prototype; technical details and experiences can be found in [7].

ACKNOWLEDGEMENTS
The Smart-Its project is funded in part by the Commission of the European Union under contract IST-2000-25428, and by the Swiss Federal Office for Education and Science (BBW 00.0281).

REFERENCES
1. Brugger, H. and Falk, M.: Le quattro fasi del seppellimento da valangha. Neve e Valanghe 16:24-31, 1992 (Italian).
2. Hereford, J. and Edgerly, B.: 457 kHz Electromagnetism and the Future of Avalanche Transceivers. In: Proceedings International Snow Science Workshop (ISSW 2000), Big Sky, MT, USA, Oct. 2000.
3. Tschirky, F., Brabec, B. and Kern, M.: Avalanche rescue systems in Switzerland: experience and limitations. In: Proceedings International Snow Science Workshop (ISSW 2000), Big Sky, MT, USA, Oct. 2000, 369-376.
4. Genswein, M. and Harvey, S.: Statistical analyses on multiple burial situations and search strategies for multiple burials. In: Proceedings International Snow Science Workshop (ISSW 2002), British Columbia, Canada, Oct. 2002.
5. Brugger, H., Durrer, B., Adler-Kastner, L., Falk, M. and Tschirky, F.: Field management of avalanche victims. Resuscitation 51:7.
6. Falk, M., Brugger, H. and Adler-Kastner, L.: Avalanche survival chances. Nature 368:21, 1994.
7. Michahelles, F., Matter, P., Schmidt, A. and Schiele, B.: Applying Wearable Sensors to Avalanche Rescue: First Experiences with a Novel Avalanche Beacon. Computers & Graphics 27:6, 2003.
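The prioritization scheme described above lends itself to a compact sketch: victims are ranked by a tuple of criteria in the stated priority order (heart rate, air-pocket existence, blood oxygen saturation, orientation), with any missing reading treated as the worst case. This is an illustrative reconstruction only; the thresholds, field names and scoring functions are invented, not taken from the A-Life prototype.

```python
# Illustrative sketch of the four-criteria triage ordering described
# above. All thresholds and field names are invented for illustration.

WORST = 0.0  # missing sensor data is always assumed to be the worst case

def _score(value, scale):
    """Map a sensor reading to [0, 1]; None (unavailable) scores worst."""
    return WORST if value is None else scale(value)

def urgency_key(victim):
    """Sort key: lexicographic tuple comparison gives the priority order
    heart rate > air pocket > oxygen saturation > orientation.
    Lower tuples sort first, i.e. more urgent victims come first."""
    hr   = _score(victim.get("heart_rate"), lambda v: 1.0 if v > 40 else 0.5)
    air  = _score(victim.get("air_pocket"), lambda v: 1.0 if v else 0.0)
    spo2 = _score(victim.get("spo2"),       lambda v: v / 100.0)
    face = _score(victim.get("face_up"),    lambda v: 1.0 if v else 0.5)
    return (hr, air, spo2, face)

victims = [
    {"id": "A", "heart_rate": 70, "air_pocket": True,  "spo2": 95, "face_up": True},
    {"id": "B", "heart_rate": None, "air_pocket": None, "spo2": None, "face_up": None},
    {"id": "C", "heart_rate": 65, "air_pocket": False, "spo2": 80, "face_up": False},
]
ranked = sorted(victims, key=urgency_key)
# B (no data, worst case assumed) ranks most urgent, then C, then A.
```

Aligning victims on the resulting one-dimensional scale is then just a sort, which matches the paper's goal of making victims' physical states easy to compare under stress.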
Breakout for Two: An Example of an Exertion Interface for Sports over a Distance

Florian Mueller 1,2    Stefan Agamanolis 1    Rosalind Picard 2
[email protected] [email protected] [email protected]

1 Human Connectedness Group, Media Lab Europe, Sugar House Lane, Bellevue, Dublin 8, Ireland
2 MIT Media Lab, 20 Ames St, Cambridge, MA 02139, USA

ABSTRACT
Breakout for Two is the first prototype of a physical,
exertion sport that you can play over a distance. We
designed, developed, and evaluated Breakout for Two, which
allows people who are miles apart to play a physically
exhausting ball game together. Players interact through a
life-size video-conference screen using a regular soccer ball
as an input device. In a test of 56 volunteers, the Exertion
Interface users said that they got to know the other player
better, became better friends, felt the other player was more
talkative and were happier with the transmitted audio and
video quality, in comparison to those who played an
analogous game using a non-exertion keyboard interface.
Keywords
Exertion interface, physical interface, sports interface, social bonding, computer mediated communication, interpersonal trust, funology, sport, video-conferencing

INTRODUCTION
"You can discover more about a person in an hour of play than in a year of conversation" (Plato, 427-347 BC). This quotation conveys the motivation for our work perfectly.

BREAKOUT FOR TWO
How cool would it be if you could play football with your friend, even though he just moved miles away? What about playing tennis with a famous tennis player on another continent who is preparing for a grand slam?

With Breakout for Two, you can. Breakout for Two is the first prototype of a physical, exertion sport that you can play over a distance.

Figure 1: Breakout for Two

It's a cross between soccer, tennis, and the popular computer game "Breakout". The players share a court, but stay on their side of the field, like in tennis. They see and hear each other through a life-size videoconference, which feels like they're separated by a glass wall.

Figure 2: Feels like a glass wall between the players

They both have a ball which they can throw, kick, smash, in whatever sport they agree on – for example tennis.
Figure 3: The system also works with tennis balls

They have to strike semi-transparent blocks, which are overlaid on the video stream. These virtual blocks are connected over the network, meaning they are shared between the locations. If, for example, one player hits the block on the upper left, the block on the upper right is hit for the other player. The goal is to hit all the blocks before the other player hits them. You win if you hit more blocks than the other player.

Figure 4: Technical framework

EXERTION INTERFACE
Breakout for Two serves as an example of an Exertion Interface, an "interface that deliberately requires intense physical effort" [1]. It aims to recreate the same bonding and team spirit experience of traditional sports, except over a distance; not with email and instant messengers, but with real balls, sweat, and exertion.

EVALUATION
56 volunteers evaluated the system. They did not know each other beforehand, and played Breakout for Two for half an hour. These players reported that they got to know the other player better, became better friends, felt the other player was more talkative and were happier with the transmitted audio and video quality, in comparison to those who played an analogous game using a non-exertion keyboard interface (p<0.05 for all these results) [2].

Figure 5: Breakout for Two also supports two-on-two

CONCLUSION
Breakout for Two is only one example of an Exertion Interface, which supports Sports over a Distance. Augmenting a gaming environment with exertion will greatly enhance the potential for social bonding, just as playing an exhausting game of squash or tennis with a new acquaintance or co-worker helps to "break the ice" and build friendships. You can now have fun playing sports with your local and remote friends!

Figure 6: Happy winners

ACKNOWLEDGEMENTS
Thanks to Tom Walter, Beth Veinott and Ted Selker. This project was created in the Human Connectedness group at Media Lab Europe. More information on:
http://www.exertioninterfaces.com
http://www.medialabeurope.org/hc

REFERENCES
1. Mueller, F., Agamanolis, S., Picard, R.: Exertion Interfaces for Sports over a Distance. UIST 2002, Paris, France.
2. Mueller, F., Agamanolis, S., Picard, R.: Exertion Interfaces: Sports over a Distance for Social Bonding and Fun. CHI 2003, Fort Lauderdale, USA.
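The mirrored shared-block mechanism described above can be sketched as a small piece of shared state: one site's view is the canonical frame, the other site's view is mirrored left-right, and a hit at either site removes the same underlying block for both. The grid size, player names and API below are illustrative assumptions, not the actual Breakout for Two implementation.

```python
# Sketch of network-shared, mirrored blocks: hitting the upper-left
# block on one side removes the upper-right block on the other side.

class SharedBlocks:
    """Blocks shared between two sites; p1's view is the canonical
    frame and p2's view is mirrored left-right."""

    def __init__(self, rows=3, cols=8):
        self.cols = cols
        self.alive = {(r, c) for r in range(rows) for c in range(cols)}
        self.hits = {"p1": 0, "p2": 0}

    def hit(self, player, row, col):
        """Register a ball strike at (row, col) in that player's own view."""
        # p2's column c corresponds to p1's column (cols - 1 - c)
        key = (row, col) if player == "p1" else (row, self.cols - 1 - col)
        if key in self.alive:
            self.alive.remove(key)  # block disappears in both views
            self.hits[player] += 1
            return True
        return False

    def winner(self):
        """Once all blocks are gone, the player with more hits wins."""
        if self.alive:
            return None  # game still running
        return max(self.hits, key=self.hits.get)


game = SharedBlocks(rows=1, cols=2)
hit1 = game.hit("p1", 0, 0)  # True: p1 clears the shared block
hit2 = game.hit("p2", 0, 1)  # False: same block in p2's mirrored view, already gone
hit3 = game.hit("p2", 0, 0)  # True: the remaining block
```

Keeping one canonical frame and mirroring only at the view boundary is a common way to avoid the two sites' states drifting apart.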
Concept and Partial Prototype Video:
Ubiquitous Video Communication with the Perception of
Eye Contact
Emmanuel Munguia Tapia, Stephen S. Intille, John Rebula, Steve Stoddard
Massachusetts Institute of Technology
1 Cambridge Center, 4FL
Cambridge, MA 02142 USA
emunguia | intille @mit.edu

ABSTRACT
This concept and partial prototype video introduces a strategy for creating a video conferencing system for future ubiquitous computing environments that can guarantee two remote conversants the ability to establish eye contact. Unlike prior work, eye contact can be achieved even as people move about their respective environments engaging in everyday tasks.

1. INTRODUCTION
Imagine this scenario: You are working in your future kitchen. You turn on your video conferencing system and make a call to a friend in a distant city. The face of your friend appears just above your work surface on your kitchen cabinet door. When you look at your friend, he perceives excellent eye contact with you. No matter how you move about your space, when the conversation warrants it you can look at a nearby flat surface, see the face of your friend, and initiate eye contact. Establishing eye contact does not restrict you or your friend to standing or sitting in a particular position, chair, or space. In this video we show how such a system could be achieved.

2. VIDEO CONFERENCING AND EYE CONTACT
Eye gaze conveys important signals about trust and attention during interpersonal communications [1]. A camera that is placed above or to the side of the display of a videoconferencing system forces the person looking at the display (the looker) to look away from the camera's optical path. The displacement angle, θ, between the optical path of the looker's eyes and the camera will result in a perception of lack of eye contact if θ is sufficiently large.

Chen [4] enumerated the three options previously investigated to minimize or eliminate θ. The first is selectively warping the video so that it appears to be captured from the viewpoint in front of the looker's eyes. This strategy can create unnatural faces and enforce eye contact when it should not be perceived. The second approach to minimize θ is to merge the optical path of the camera and the display, which typically requires that the two conversants sit in predetermined positions in front of their displays with their heads roughly centered in the respective camera's field of view. The third approach to minimize θ is to simply mount the camera sufficiently close to the display so that the display and camera have nearly the same optical path. This can be effective with small, PDA-sized displays [3] but fails with larger (e.g. desktop) size displays. This video shows a fourth option that combines the positive properties of each of these approaches: manipulating the video (via warping and object tracking) to guarantee that the optical path of the looker's eyes and the observer's face are closely aligned by ensuring that the camera is always positioned in the display directly between the observer's eyes.

3. A SYSTEM FOR UBIQUITOUS EYE CONTACT
We introduce a design for a video conferencing system that can be installed in existing environments in a space-efficient manner and that permits ubiquitous video communications with the perception of eye contact. Wireless phones freed people to move about their homes and accomplish cognitively simple tasks as they talk; our proposed video conferencing system would permit this type of casual interaction while maintaining the option of effortlessly establishing eye contact when needed. We achieve this by simultaneously exploiting several technologies: perspective warping of imagery, computer vision head tracking, and miniature (i.e., pinhole) video cameras embedded within walls and cabinetry.

3.1 Concept
Figure 1 illustrates the full concept as it could be implemented in a future home with a high-bandwidth connection. Participant A is located in a kitchen where the walls of the kitchen cabinet doors and the refrigerator have been built with embedded pinhole video cameras centered in each door. Participant B is located in the living room of a remote home where pinhole cameras have been embedded at two different heights, one appropriate for sitting and one appropriate for standing.¹

¹ These camera units could eventually be installed by simply drilling a small hole in an existing wall or using component-based architectural wall and furniture systems under development in our group that are designed for such sensor integration [5].

Both environments have an Everywhere Display Projector (EDP) [6] located at the ceiling of one corner of the room.
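The displacement angle θ discussed in Section 2 can be made concrete with a small calculation: for a camera offset d from the displayed eyes and a viewer at distance D, θ = atan(d/D). The specific offsets and viewing distances below are illustrative only, not measurements from the paper.

```python
# Quick illustration of the displacement angle θ between the looker's
# gaze (at the displayed eyes) and the camera's optical path.
import math

def displacement_angle_deg(offset_m, viewing_distance_m):
    """θ = atan(d / D), returned in degrees."""
    return math.degrees(math.atan2(offset_m, viewing_distance_m))

# Camera mounted 10 cm above the displayed eyes, viewer at 60 cm:
theta_desktop = displacement_angle_deg(0.10, 0.60)  # roughly 9.5 degrees

# Pinhole camera centered directly between the displayed eyes:
theta_pinhole = displacement_angle_deg(0.0, 0.60)   # 0 degrees
```

This is why the pinhole-behind-the-eyes approach works: driving d to zero drives θ to zero regardless of where the viewer stands.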
Figure 1: The concept for video conferencing anywhere with sensors embedded in architectural components. Cabinetry with integrated sensing being designed by our group would make such a scenario possible.

Each EDP uses a small computer-controlled mirror and perspective warping and can project an undistorted image onto nearly any planar surface in the environment.

(1) The system in person A's space would use face-finding algorithms (e.g. [7]) to detect where person A is located, grabbing the person's image from the closest camera. (2) The system in person B's space would similarly detect where person B's face is located. (3) Based on these locations, the systems would place person A's image in person B's space so that it is near person B. (4) The EDP is used to place the image. The 1.5mm pinhole is barely noticeable on the image of the face. Simultaneously, person B's image appears on the pinhole embedded in the refrigerator appliance in person A's kitchen. As both people move about their respective spaces, the image of the other person will automatically move to the most visually convenient camera position. The system preserves both the ability to make eye contact and the ability to intentionally not make eye contact, because the remote participant's eyes are always moved so they are centered directly over a pinhole camera.

3.2 Partial Implementation
We have begun to implement and test the components of this concept. We have two video conferencing stations located in non-adjacent rooms in our laboratory. Each station consists of a flat, white foam core board with a pinhole color camera embedded behind a 1.5mm hole centered horizontally on the foam board. The camera is adjusted so that the optical axis is perpendicular to the board's surface.

Head tracking is performed using the Continuously Adaptive Mean Shift algorithm [2]. The head pixels are translated so that the remote viewer sees the head displayed with the eyes centered over the pinhole camera. A low-pass filter on the face position is used to eliminate jitter motion that would otherwise be visible due to camera noise and sudden flesh changes as people quickly move their heads. The prototype is run using a standard office speaker phone to provide audio. Other mechanisms could be used to improve head tracking (e.g. [7]).

A portable video projector placed to the side of the board uses the same perspective transformation algorithms as the Everywhere Display but permits testing with higher-resolution images than can currently be achieved in our living room prototype environment with the Everywhere Display. When an image of a remote conversant's face is projected with the pinhole centered between the conversant's eyes, the 1.5mm hole is distinguishable but not distracting.

The video also shows our prototype living room with the Everywhere Display. Cameras can be embedded directly into surfaces in the space. One is used to create a "magic mirror" effect when used in combination with an Everywhere Display Projector (EDP) [6]. An image from the pinhole camera (flipped) is projected onto the wall centered on the camera itself. The effect is that the user appears to be looking directly at oneself. People who are shown this setup often have a difficult time determining the position of the camera when they are not told about the 1.4mm pinhole in the drywall in advance; only after asking them to "point where the image must be coming from" and pointing out the tiny speck on the wall do visitors typically understand how the effect is achieved.

Eventually we hope to test one half of the full video conferencing system illustrated in this concept video in our prototype living room environment. Resolution limitations of our current Everywhere Display unfortunately prevent an ideal implementation of the concept described. Our group is developing cabinetry components that have integrated sensor networks built in and could easily accommodate the integrated pinhole cameras.

REFERENCES
[1] M. Argyle and M. Cook. Gaze and Mutual Gaze. 1976.
[2] G.R. Bradski. Computer vision face tracking for use in a perceptual user interface. Intel Technology Journal, 1998.
[3] W. Buxton, A. Sellen, and M. Sheasby. Interfaces for Multiparty Videoconferences. In K. Finn, A. Sellen, and S. Wilbur, editors, Video-Mediated Communication, pages 385–400. 1997.
[4] M. Chen. Leveraging the asymmetric sensitivity of eye contact for videoconference. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 49–56. New York, NY, 2002.
[5] K. Larson. Places of Living: Integrated Components for Mass Customization. Changing Places Technical Report, Massachusetts Institute of Technology, March 2002.
[6] C. Pinhanez. The Everywhere Displays Projector: A device to create ubiquitous graphical interfaces. In G.D. Abowd, B. Brumitt, and S.A.N. Shafer, editors, Proceedings of the Conference on Ubiquitous Computing, LNCS 2201, pages 315–331. Berlin Heidelberg, 2001.
[7] P. Viola and M. Jones. Rapid Object Detection Using a Boosted Cascade of Simple Features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume I, pages 511–518. 2001.
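The jitter-suppression step described in Section 3.2 can be sketched as a first-order low-pass filter (an exponential moving average) on the tracked face position: each frame, only a fraction of the measured displacement passes through, so camera-noise jitter is attenuated while slow head motion is followed. The smoothing factor below is an illustrative assumption, not the value used in the prototype.

```python
# Sketch of a first-order low-pass filter on a tracked (x, y) face
# position, suppressing frame-to-frame jitter from camera noise.

class FacePositionFilter:
    def __init__(self, alpha=0.3):
        self.alpha = alpha   # 0 < alpha <= 1; smaller = smoother, laggier
        self.state = None    # current smoothed (x, y) position

    def update(self, x, y):
        """Feed one raw tracker measurement; return the smoothed position."""
        if self.state is None:
            self.state = (x, y)  # initialize on the first measurement
        else:
            sx, sy = self.state
            self.state = (sx + self.alpha * (x - sx),
                          sy + self.alpha * (y - sy))
        return self.state


flt = FacePositionFilter(alpha=0.3)
flt.update(100.0, 100.0)              # first frame initializes the state
smoothed = flt.update(110.0, 100.0)   # a noisy 10-px jump is attenuated
# only 30% of the jump passes through on this frame
```

The trade-off is the usual one for such filters: a smaller alpha removes more jitter but makes the displayed head lag farther behind fast movements.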
The Design of a Context-Aware Home Media Space
Carman Neustaedter & Saul Greenberg
University of Calgary
Department of Computer Science
Calgary, AB, T2N 1N4 Canada
+1 403 220-6087
[carman or saul]@cpsc.ucalgary.ca

ABSTRACT Unlike office-based media spaces, a home media space has


Traditional techniques for balancing privacy and awareness to pay considerably more attention to how the system
in video media spaces have been proven to be ineffective appropriately balances privacy and awareness, because
for compromising home situations involving a media space. privacy concerns are far more problematic for home users.
As such, we present the rationale and prototype design of a Homes are inherently private in nature, and appearances or
context -aware home media space (HMS)—defined as an behaviours that are appropriate for the home may not be
always-on video media space used within a home setting— appropriate when viewed at the office. As well, individuals
that focuses on identifying plausible solutions for balancing privacy and awareness in compromising home situations. In the HMS design, users are provided with implicit and explicit control over their privacy, along with visual and audio feedback of the amount of privacy currently being maintained.

Keywords. Casual interaction, video media spaces, privacy, telecommuting.

INTRODUCTION
A home media space (HMS) is an always-on video-based media space used within a home setting. It is designed specifically for the telecommuter who chooses to work at home, but who still wishes to maintain a close-working relationship with particular colleagues in remote office environments. Like all media spaces, the video provides the telecommuter with awareness information about their collaborator's availability for conversation, and a way to easily move into casual communication over the same channel.

People in the home other than the telecommuter, who gain little or no benefit from the HMS, still incur its privacy threat. These increased privacy risks suggest that home media space systems must incorporate techniques that somehow mitigate privacy concerns. One possibility is to simply adapt techniques already proposed for office media spaces [2]. However, research [3] has shown that traditional image processing techniques do not suffice for home-based video conferencing situations. Image processing techniques are overly simplistic because they do not understand the context of their use. For this reason, our research focuses on designing a home media space using context-aware computing and dedicated physical controls.

OUR DESIGN PHILOSOPHY
Privacy regulation in real life is lightweight and transparent [1]. We replicate this by providing lightweight and transparent privacy regulation in our HMS, using context-aware computing as a tool for balancing privacy and awareness through implicit means. We enable one specific location—a home office/spare bedroom shown in Figure 1—with technology that senses who is around and then infers privacy expectations through a simple set of rules.
Context-aware systems can make mistakes and it is important that these mistakes do not increase privacy threats. As a result, we first warn users if an implicit action has initiated a privacy-decreasing operation; and second, we provide an opportunity for users to override this operation. Continuous visual and audio feedback makes it easy to know how much privacy is currently maintained, and users are able to fine-tune privacy and awareness levels with dedicated physical and graphical controls.

Figure 1: The context-aware home media space.

275
| 1. Attribute Controlled | 2. Explicit Control | 3. Implicit Control | 4. Audio Feedback | 5. Visual Feedback |
|---|---|---|---|---|
| Camera state: Stop to Play | Click play button | None | Camera clicking; camera rotating | LEDs on; camera rotates to face you; mirrored video |
| Camera state: Pause to Play | Click play button | Telecommuter sits in chair; family/friend leaves room | Same as above; camera twitches | Same as above; camera twitches |
| Camera state: Play to Stop | Click stop button; block camera with hand; touch off button | None | Camera rotating | LEDs off; camera rotates to face the wall; mirrored video |
| Camera state: Play to Pause | Click pause button | Telecommuter stands up out of chair; family/friend enters room | Same as above | Same as above |
| Camera state: Pause to Stop | Click stop button; block camera with hand; touch off button | Telecommuter leaves the room for an extended period of time | None | Mirrored video |
| Capturing angle | Adjust physical or graphical slider | Change in camera state | Camera rotating | Slider position; camera position; mirrored video |
| Video fidelity | Adjust physical or graphical control | None | None | Control position; mirrored video |
| Audio link | Move hand over microphone base | None | Own voice | None |

Table 1: Control and feedback mechanisms found in the HMS
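The implicit controls in Table 1 amount to a small rule set over sensed events, with the fail-safe policy described in the design philosophy: privacy-increasing transitions apply immediately, while privacy-decreasing ones are warned about first and can be overridden. A minimal sketch; the event names, states, and class structure below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of the implicit camera-state rules in Table 1.
# Event and state names are invented for illustration.

IMPLICIT_RULES = {
    # (current_state, sensed_event) -> new_state
    ("pause", "telecommuter_sits"): "play",
    ("pause", "family_leaves_room"): "play",
    ("play", "telecommuter_stands"): "pause",
    ("play", "family_enters_room"): "pause",
    ("pause", "telecommuter_long_absence"): "stop",
}

class HomeMediaSpace:
    def __init__(self):
        self.state = "stop"
        self.pending = None  # a warned-about transition the user may override

    def sense(self, event):
        """Apply an implicit rule; warn first if it would decrease privacy."""
        new_state = IMPLICIT_RULES.get((self.state, event))
        if new_state is None:
            return self.state
        if new_state == "play":       # decreases privacy: warn, allow override
            self.pending = new_state  # feedback: LEDs, camera twitch, audio
        else:                         # increases privacy: apply immediately
            self.state = new_state
        return self.state

    def confirm(self):
        """User did not override the warned transition, so apply it."""
        if self.pending:
            self.state, self.pending = self.pending, None
        return self.state

    def override(self):
        """User vetoes the pending privacy-decreasing operation."""
        self.pending = None
        return self.state
```

Deferring only the privacy-decreasing transitions mirrors the paper's stance that context-aware mistakes must never silently increase privacy threats.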


Elements of a Context-Aware HMS
Our design contains specific elements that can be used together along with a simple set of rules to balance privacy and awareness. Elements such as the camera state, capturing angle, or video fidelity can be controlled with explicit actions such as touching an off button, or implicit actions like sitting down at a desk.
Table 1 summarizes how design elements are either controlled, used for explicit or implicit control, or used as feedback. Each row in the table describes how one media space attribute (column 1) is controlled either explicitly (column 2) or implicitly (column 3). The fourth and fifth columns describe the audio and visual feedback that indicate to the users that the attribute in column 1 has changed and what its current value is.

DISCUSSION
This was our "first cut" of a context-aware home media space, and as a result we wanted to see what big problems emerged by trying it out ourselves (an evaluation methodology called "eat your own dog food"). In particular, the first author, a frequent telecommuter, routinely used the home media space over several months within his own home office/spare bedroom.
We noticed several design faults. First, the control and feedback mechanisms need to be more natural if they are to fit well within a person's everyday world. For example, adjusting a physical slider to regulate privacy is a somewhat abstract notion. Second, a consequence of unobtrusive peripheral feedback of system state is that the person may overlook the information. For example, one may overlook feedback that the camera is recording at high quality (e.g., the LEDs). Third, explicit control mechanisms only work if one can anticipate and react quickly enough to the risk inherent in the current situation, if one is in a location where the system can recognize their control action (e.g., one must be near the camera to block it with a gesture), and if one feels that the effort is worth it. Ideally, the system should automatically sense privacy violations and control the information it transmits accordingly, but this can be quite difficult to do in practice.
Despite these caveats, our experience was positive overall. We were able to control the space for many situations, and indeed just knowing that we could control the media space was comforting. Future work will involve formally evaluating our redesigned home media space.

CONCLUSION
While we have concentrated on one specific use of video in homes, our research contributes ideas that have a broader significance for home-based videoconferencing in general. Regardless of the specific use of video in a home, people need and desire methods to regulate their privacy; many video conferencing systems (e.g., Webcam for MSN Messenger, Yahoo! Messenger) ignore these user requirements.

REFERENCES
1. Altman, I.: The Environment and Social Behavior: Privacy, Personal Space, Territory, Crowding. Wadsworth Publishing Company (1975), pp. 1-51, 194-207.
2. Boyle, M., Edwards, C. and Greenberg, S.: The Effects of Filtered Video on Awareness and Privacy. Proc. CSCW'00 [CHI Letters 2(3)], ACM Press (2000), pp. 1-10.
3. Neustaedter, C.: Balancing Privacy and Awareness in Home Media Spaces. MSc Thesis, Department of Computer Science, University of Calgary, Calgary, Canada, June 2003.

276
Hello.Wall – Beyond Ambient Displays
Thorsten Prante, Carsten Röcker, Norbert Streitz, Richard Stenzel, Carsten Magerkurth
Fraunhofer IPSI, AMBIENTE – Workspaces of the Future
Dolivostr. 15, D-64293 Darmstadt, Germany
{prante, roecker, streitz, stenzel, magerkurth}@ipsi.fraunhofer.de

Daniel van Alphen
Hufelandstr. 32, D-10407 Berlin, Germany
[email protected]

Daniela Plewe
Franz-Künstler-Str. 2, D-10969 Berlin, Germany
[email protected]
ABSTRACT
We present a ubiquitous computing environment that consists of the Hello.Wall in combination with ViewPorts. Hello.Wall is a new wall-sized ambient display [4,2] that emits information via light patterns and is considered informative art. As an integral part of the physical environment, Hello.Wall constitutes a seeding element of a social architectural space conveying awareness information and atmospheres in organizations or at specific places. The display is context-dependent by reflecting identity and distance of people passing by. Hello.Wall can "borrow" other artefacts in order to communicate more detailed information. These mobile devices are called ViewPorts. People can also further interact with the Hello.Wall using ViewPorts via integrated WaveLAN and RFID technology.

Keywords
Ambient display, informative art, social architectural space, context-dependent, sensor-based interaction, interactive wall, interaction design, mobile devices, smart artefacts, ubiquitous computing environment, calm technology

5th International Conference on Ubiquitous Computing (Ubicomp'03), October 12–15, 2003, Seattle, WA, USA. Copyright by the Authors of this Publication.

HELLO.WALL AND VIEWPORT
Hello.Wall is a piece of unobtrusive, calm technology [3] exploiting humans' ability to perceive information via codes that do not require the same level of explicit coding as with words. It can stay in the background, only perceived at the periphery of attention, while one is being concerned with another activity, e.g., a face-to-face conversation.

Borrowing another Artefact
We propose a mechanism where the Hello.Wall can "borrow" other artefacts, in order to communicate more detailed information. These mobile devices are called ViewPorts and can be personalized using short-range transponders. Due to the nature of the ViewPort's display, the information shown can be more explicit and it can also be more personal. Depending on their access rights and the current situation (e.g., distance to the wall; see below), people can use ViewPorts to decode visual codes (here, light patterns), to download ("freeze") or just browse information, to paint signs on the wall, or to access a message announced by a light pattern. See figure 1.

Figure 1. Interaction at Hello.Wall using ViewPort as "borrowed display"

INTERACTION DESIGN
Interactions among the different components are supported by two independent RFID systems and a wireless LAN network to enable a coherent and engaging interaction experience. The RFID systems cover two ranges and thereby define three "zones of interaction": ambient zone, notification zone, and cell interaction zone (see figure 2). They can be adapted, e.g., according to the surrounding spatial conditions.

Figure 2. Three zones of interaction (ambient zone, notification zone, cell interaction zone)

The zones were introduced to define "distance-dependent semantics", meaning that the distance of an individual from the wall defines the interactions offered and the kind of information shown on the Hello.Wall and the ViewPort. It should be noted that multiple people can be sensed at once in the notification and cell interaction zones.
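The distance-dependent semantics can be read as a simple mapping from sensed RFID range to interaction zone. A minimal sketch; the threshold values and function names are our assumptions for illustration (the paper only fixes the short range at up to 100mm):

```python
# Hypothetical sketch of Hello.Wall's distance-dependent semantics.
# The two RFID ranges define three zones; the long-range radius is invented.

LONG_RANGE_M = 2.5    # long-range reader coverage (illustrative value)
SHORT_RANGE_M = 0.1   # short-range / cell reader coverage (up to 100mm)

def zone(distance_m):
    """Classify a person's distance from the wall into an interaction zone."""
    if distance_m <= SHORT_RANGE_M:
        return "cell_interaction"   # can address individual cells via ViewPort
    if distance_m <= LONG_RANGE_M:
        return "notification"       # identified; personal/group light patterns
    return "ambient"                # general, presence-independent patterns

def offered_interactions(distance_m):
    """The kind of information and interaction offered in each zone."""
    return {
        "ambient": ["view general light patterns"],
        "notification": ["receive personal pattern", "transfer data to ViewPort"],
        "cell_interaction": ["read cell IDs", "freeze information", "paint signs"],
    }[zone(distance_m)]
```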

277
Interactions
When people are outside the range of the wall's sensors (in the ambient zone), they experience the ambient mode, i.e. the display shows general information that is defined to be shown independent of the presence of a particular person.

Figure 3. Communication and Sensing infrastructure of Hello.Wall and ViewPort (controlling PC with driver interface, WLAN access point and adapter, long- and short-range readers, long-range antenna, and long- and short-range transponders)

People within the notification zone are detected via two long-range readers installed in the lower part of the Hello.Wall (see figure 3), and people can identify themselves to a ViewPort via the integrated short-range reader. Once a person is detected in the notification zone, depending on the kind of application, data can be transmitted to the ViewPort and/or distinctive light patterns can be displayed for notification. These can be personal patterns known only to a particular person, group patterns, or generally known patterns. Within the cell interaction zone, people that are very close to the Hello.Wall can interact with each single cell (= independent interactive "pixel") or several cells at once, using a ViewPort to read the cells' IDs. Simultaneous interaction using several ViewPorts in parallel at a Hello.Wall is supported as well. These features allow playful and narrative interactions, and there is also a charming element of surprise that may be discovered via single cell interaction.

TECHNOLOGY
Each of the 124 cells of the Hello.Wall contains an LED cluster and a short-range transponder (see figure 4). The brightness of the LED clusters is controlled by a standard PC via a special driver interface with control units using pulse width modulation. This interface, also developed by us, consists of 17 circuit boards.
The ViewPort is developed on the basis of a PocketPC with a 32-bit RISC processor, a touch-sensitive color display and 64MB RAM. Its functionality is extended through a short-range (up to 100mm) reader unit and a WaveLAN adapter. Additionally, the ViewPort is equipped with a long-range transponder. Thus, the ViewPort can be detected by stationary artefacts such as the Hello.Wall, while at the same time identifying nearby artefacts through its own reading unit.

Figure 4. From left to right: 1) Rear view with control components 2) Wiring and transponders for each cell 3) Cells with LED clusters

APPLICATIONS
Atmospheric aspects that can, e.g., be extracted from conversations [1] are mapped onto visual codes realized as light patterns which influence the atmosphere of a place and the social body around it. While the Hello.Wall serves a dedicated informative role to the initiated members of an organization or a place, visitors might consider it only as an atmospheric decorative element and enjoy its aesthetic quality.
Communicating atmospheric aspects of an organization includes general and specific feedback mechanisms that allow addressing different target groups via different representation codes. Individuals as well as groups create public and private codes depending on the purpose of their intervention. The content to be communicated can cover a wide range and will be subject to modification, adjustment, and elaboration based on the experience people have.
Sample applications are presented in the video. They include radiating the general atmosphere in an organization or at a place, distributing more specific and directed information, various forms of playful close-up interactions, and support for team building and coherence through "secret" visual codes mediating, e.g., activity levels among the team's members. To learn more about the acceptance of applications, we are currently running user experiments.

ACKNOWLEDGMENTS
This work is supported by the European Commission (contract IST-2000-25134) as part of the proactive initiative "The Disappearing Computer" of "Future and Emerging Technology" (FET) (project website: www.ambient-agoras.org). Special thanks are due to our student Stefan Zink for his contributions to implementing the Hello.Wall hardware.

REFERENCES
1. Basu, S. et al. Towards measuring human interactions in conversational settings. Proc. of IEEE CUES 2001.
2. Streitz, N. et al. Situated Interaction with Ambient Information: Facilitating Awareness and Communication in Ubiquitous Work Environments. Proc. of HCII 2003, to appear.
3. Weiser, M., Brown, J. S. Designing calm technology. PowerGrid Journal, Vol. 1, No. 1, 1996.
4. Wisneski, C. et al. Ambient displays: Turning architectural space into an interface between people and digital information. Proc. of CoBuild '98, 22-32.

278
Browsing Captured Whiteboard Sessions
Using a Handheld Display and a Jog Dial
Johan Sanneblad and Lars Erik Holmquist
Future Applications Lab
Viktoria Institute, Box 620, SE 405 30 Göteborg, SWEDEN
{johans, leh}@viktoria.se
www.viktoria.se/fal

ABSTRACT
In previous work we introduced Total Recall, a system for
in-place viewing of captured whiteboard annotations using
a handheld display. To improve on our system we now
introduce a method for navigating through time-based
whiteboard annotations using a jog dial. By turning the
dial, the user can navigate back and forth in time to reach a
desired point in the captured session, which is then
displayed on the handheld device at the correct location.
The tracking system supports drawing as well as erasing,
which are both immediately reflected on the handheld
display. We argue that our system introduces new
application possibilities, e.g. in education.
Keywords
Whiteboard capture systems, ubiquitous computing
INTRODUCTION
During the past few years, several systems have been
introduced to augment whiteboards and making them
“smart”. The reasons are many; being able to keep digital Figure 1. (a) An annotation is created, (b) a jog dial is
copies of what has been written to the whiteboard, used to navigate back to a previous position in time,
incorporating computer acronyms such as “cut” and “paste” and (c) the session is recalled in its original location.
into a non-computer environment, and simplifying drawing
in general. Several strategies have been tested to enhance
whiteboards. One type of system replaces the entire viewing the notes on a PC usually requires a significant
whiteboard with a digital touch sensitive display. Examples amount of zooming and panning of the captured image.
of such system include the LiveBoard [4], which was part Total Recall [3] was introduced to provide in-place viewing
of Xerox PARC’s original ubiquitous computing of captured whiteboard annotations using a handheld
experiment; and current commercial products such as the display. Using a handheld computer equipped with an
SmartBoard (www.smarttech.com). Replacing the drawing ultrasonic positioning system, Total Recall makes it
area with a digital replica provides many possibilities for possible to view annotations where they were created –
enhancing the whiteboard, but it is an expensive option that even if they are partially erased! Total Recall can be seen
limits its use to specific environments. as a physical instantiation of a Magic Lens, an operator that
An alternative approach is to use a system with pens is positioned over an onscreen area to change the view of
equipped with built-in positioning systems, such as the objects in that region [1]; other similar approaches include
commercially available Mimio system (www.mimio.com). Peephole Displays [5]. We have now extended the Total
To the end-user, using the Mimio system is perceived as Recall system by introducing a jog dial (as seen in Figure
using an ordinary whiteboard – the difference is that the 1) that can be used to view how the whiteboard state looked
coordinates of each pen stroke are captured on a PC, at a specific moment in time. By turning the dial, the user
making it possible to create a snapshot of the whiteboard at can navigate to different point in time during the
a specific moment in time. While systems such as the digitization. The effect is similar to Time-machine
Mimio are portable and can be used in any environment, Computing [2], since the user can always go back in time
they do require a separate PC to view the annotations. and for instance easily retrieve previously erased content.
However, considering the size of an ordinary whiteboard,
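Jog-dial navigation of this kind amounts to replaying a timestamped stroke list up to a chosen cursor position. A minimal sketch; the data model, class, and method names are our assumptions, not the Total Recall source, and "erase" is simplified to removing a matching earlier stroke:

```python
# Hypothetical sketch of time-based browsing over captured strokes.
# Each stroke is (timestamp, kind, points); kind is "paint" or "erase".

class StrokeLog:
    def __init__(self):
        self.strokes = []   # appended in timestamp order by the capture server
        self.cursor = 0     # index of the current position in time

    def record(self, timestamp, kind, points):
        self.strokes.append((timestamp, kind, points))
        self.cursor = len(self.strokes)   # live view follows the newest stroke

    def turn_dial(self, clicks):
        """Negative clicks = counter-clockwise = back in time."""
        self.cursor = max(0, min(len(self.strokes), self.cursor + clicks))
        return self.visible()

    def visible(self):
        """Replay from the beginning: erase strokes remove earlier paint."""
        shown = []
        for _, kind, points in self.strokes[: self.cursor]:
            if kind == "paint":
                shown.append(points)
            else:
                shown = [p for p in shown if p != points]
        return shown
```

Replaying from the beginning on every backward turn is exactly the cost that motivates the bitmap-snapshot optimization discussed in the paper's future work.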

279
Figure 2. The improved Total Recall architecture.

ARCHITECTURE
The Total Recall architecture comprises two parts: server software installed on a stationary PC to capture whiteboard annotations, and client software installed on a handheld computer to view the annotations. The system has two "modes": in one mode, coordinates are received as paint or erase strokes when pens or the eraser are used to draw on the whiteboard. In the other mode, coordinates are received from a handheld computer equipped with an ultrasonic and infrared transmitter in the form of continuous XY-coordinates. The user switches between these two modes by pressing the top of the jog dial that is shown in Figure 1b. Using our server software, the XY-coordinates received from the pens are sent as drawing coordinates to the handheld computer when the user changes modes. The coordinates received from the handheld computer are sent back to the device in a compressed form over a wireless connection, where the client software periodically redraws the image to reflect the whiteboard using the current XY position.
The jog dial shown in Figure 1b was added to support time-based browsing of the drawing session. In normal use, the stationary PC will continuously send the XY-coordinates it receives back to the handheld device. The handheld display will then draw a brush or erase stroke depending on the current stroke type. When the jog dial is turned counter-clockwise, the display of the handheld device is cleared and the entire canvas is redrawn from the beginning up to the coordinate at the current position in time. When the jog dial is turned clockwise, the server software will simply send out the brush or erase strokes that were drawn since the last update.

IMPLEMENTATION
We used the Mimio sensor and pens to get positioning information for both the PDA and the stationary PC. To get the handheld computer to send XY-coordinates to the PC, we extracted the interior of a Mimio pen shell and attached it to the back of a handheld computer. Using a switch attached to the back of the handheld computer, it is possible to manually enable/disable sending of coordinates.

APPLICATION SCENARIO: DRAWING CLASS
The new system could be used as support in a learning situation, such as a drawing class. Looking at a finished image does not tell the student very much about how it was drawn, nor does it show how much time was spent on each specific detail. Through the ability of time-based browsing, Total Recall could make it possible for students to first watch a tutor complete a drawing, and then go back in time using the jog dial to study in detail how a specific section was achieved. Unlike on a stationary PC, the student could study the drawing process at the position where it actually happened, using the finished drawing as a frame of reference.

DISCUSSION AND FUTURE WORK
In the current implementation of time-based browsing, we experienced issues with captured sessions with large amounts of data. When the jog dial is used to move backwards in time, the stationary PC needs to resend the entire coordinate list up to an exact moment for the handheld computer to redraw the canvas. We are currently working on optimizing the system so that the stationary PC is responsible for creating "bitmap snapshots" when a specific number of coordinates have been received since the last snapshot. Using this approach, it would only be necessary to transfer the bitmap snapshot together with the coordinates received since the last snapshot to the handheld device.

ACKNOWLEDGMENTS
This project was supported by SSF and Vinnova.

REFERENCES
1. Bier, E.A., et al. (1993). Toolglass and Magic Lenses: The See-Through Interface. Proceedings of SIGGRAPH 1993.
2. Rekimoto, J. (1999). Time-machine Computing: a Time-centric Approach for the Information Environment. Proceedings of UIST 1999.
3. Sanneblad, J., Holmquist, L. E. (2003). Total Recall: In-place Viewing of Captured Whiteboard Annotations. Extended Abstracts of CHI 2003.
4. Weiser, M. (1991). The Computer for the 21st Century. Scientific American, 265(3), 94-104.
5. Yee, K.P. (2003). Peephole Displays: Handheld Computers as Virtual Windows. Proceedings of CHI 2003.
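The snapshot optimization proposed in the Discussion (resend only a rendered checkpoint plus the strokes recorded after it) could be sketched as follows. The class, the checkpoint interval, and the bitmap representation are our assumptions, not the authors' implementation:

```python
# Hypothetical sketch of the "bitmap snapshot" optimization: the server
# checkpoints the canvas every N strokes, so a seek backwards in time only
# transfers one snapshot plus the strokes recorded after it.

SNAPSHOT_EVERY = 100  # strokes between checkpoints (illustrative)

class SnapshotServer:
    def __init__(self):
        self.strokes = []
        self.snapshots = {0: "blank"}   # stroke count -> rendered bitmap tag

    def add_stroke(self, stroke):
        self.strokes.append(stroke)
        if len(self.strokes) % SNAPSHOT_EVERY == 0:
            # A real system would rasterize the canvas here; we store a tag.
            self.snapshots[len(self.strokes)] = f"bitmap@{len(self.strokes)}"

    def seek(self, target):
        """Data to send for redrawing the canvas at stroke index `target`."""
        base = max(k for k in self.snapshots if k <= target)
        return self.snapshots[base], self.strokes[base:target]
```

A backward seek then costs one bitmap plus at most SNAPSHOT_EVERY strokes, instead of the entire coordinate list.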

280
eyeCOOK: A Gaze and Speech Enabled
Attentive Cookbook
Jeffrey S. Shell, Jeremy S. Bradbury, Craig B. Knowles, Connor Dickie, Roel Vertegaal
Human Media Lab, Queen’s University
Kingston, ON, Canada, K7L 3N6
{ shell, bradbury, knowles, connor, roel }@cs.queensu.ca

ABSTRACT
While preparing food, cooks often have to manage many time-sensitive processes. Because cookbooks require visual and physical attention to use, they may distract, rather than focus, the cook on executing the recipe. The knowledge requirements of cooking, concurrent demands for attention and the sensitivity of recipes to proper procedure conspire to make cooking a stressful experience, particularly for novices. We present eyeCOOK, a multimodal attentive cookbook which allows users to communicate using eye-gaze and speech. eyeCOOK responds visually and/or verbally, promoting communication through natural human input channels without physically encumbering the user. Our goal is to improve productivity and user satisfaction without creating additional requirements for user attention. We describe how the user interacts with the eyeCOOK prototype and the role of this system in an Attentive Kitchen.

Keywords
Attentive User Interfaces, Gaze, Eye Tracking, Speech, Context-aware, Information Appliance, Sensors.

INTRODUCTION
The attentive cookbook reduces the burden caused by competing requests for physical attention by directing instructions and glossary definitions to the unoccupied auditory channel. Gaze and speech were chosen as input modalities because they do not require physical contact (i.e., mouse, touch screen), which may be inconvenient and unsanitary while preparing food. The dynamic display window automates page turning, allowing users to focus their hands on the process of cooking. Alternative approaches to enhancing the cooking experience include Cook's Collage [7], a passive memory recovery aid, and CounterActive [1], a touch-enabled multimedia cookbook. Unlike these approaches, eyeCOOK adapts its behavior and interface presentation based on the user's eye gaze, proximity, and the current cooking task. eyeCOOK qualifies as both an information appliance [2] and a context-aware system [4], but because user attention drives the human interface, it is most precisely described as an Attentive User Interface (AUI) [5]. Using knowledge of the user's attentive context, the system applies contextual reasoning to activate localized grammars, working within the user's attention space [5].

SYSTEM DESCRIPTION
Our attentive cookbook prototype consists of an electronic recipe database with a hypertext-style interface. eyeCOOK receives input from an LC Technologies eye tracker and a wireless microphone, using the Microsoft Speech API (SAPI) for speech recognition and production. The eye tracker is calibrated only once for each user. Since eyeCOOK uses a small context-sensitive grammar, speech recognition calibration is rarely required.

Figure 1. The eyeCOOK System

eyeCOOK can read aloud ingredients and instructions; access additional information such as pictures and definitions of ingredients, terminology, cookware, nutritional information, and the history of the dish; and suggest other food items it can be served with. Throughout the process of cooking, the system automatically triggers timed notifications to remind the user when steps are completed or require attention. The relationship between the current, previous and future cooking steps and the ingredients they involve is implicit via a simple color-coding scheme. The dynamic colouring, shown in figure 2, situates the current cooking task within the recipe as a whole.

Adaptive Inputs and Display
The display adjusts its presentation based on user presence reasoning. If eye gaze is detected, then the user must be in front of the screen. Thus the entire recipe is shown on one page in Cookbook mode (see figure 2), and deictic references are interpreted using gaze and speech. Alternately, if eye gaze is not present, the display is set to Recipe Card mode. This breaks up the recipe into multiple cards and enlarges the text, thereby allowing a user who is further away from the display to easily read the recipe. Because the user is not providing eye gaze input in Recipe Card mode, the system compensates by adjusting the vocabulary of the speech recognition engine, providing answers to more detailed queries.
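The presence-based mode switch and the deictic substitution of the gazed-at word for "this" can be sketched together. The function and mode names below are our assumptions, not eyeCOOK's actual code:

```python
# Hypothetical sketch of eyeCOOK's presence reasoning and deictic resolution.

def display_mode(gaze_detected):
    """Gaze present -> full recipe on one page; absent -> large-text cards."""
    return "cookbook" if gaze_detected else "recipe_card"

def resolve_command(spoken, gazed_word=None):
    """Substitute the gazed-at word for 'this' when eye input is available.

    Without gaze input (Recipe Card mode), the command is taken literally,
    so the user says e.g. "define saute" instead of "define this".
    """
    if gazed_word and "this" in spoken.split():
        return spoken.replace("this", gazed_word)
    return spoken
```

For example, "define this" while fixating the word "sauté" resolves to "define sauté", while the same spoken string with no gaze available is passed through unchanged.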

281
Figure 2. eyeCOOK in Page Display Mode

NATURAL INPUTS
eyeCOOK is designed to use natural input modalities, or those that humans use in human-to-human, non-mediated communication [5]. Observing and interpreting this implicit behavior reduces the need for users to provide explicit input. Using these cues, interfaces can be designed such that the difficulty lies in the intended task, not the technological tool.

Gaze and Speech
When the user is in range of the eye tracker and looking at the display, eyeCOOK substitutes the target of the user's gaze for the word 'this' in a speech command. For example, eyeCOOK responds to the spoken command 'Define this' by defining the word the user is currently looking at. However, because eye trackers are spatially fixed and have a limited range, the user will not always be in a position where eye tracker input is available. Thus, our speech grammar is designed such that system functionality is maintained when users are not in front of the eye tracker. Instead of saying "define this" while looking at the word sauté, the user simply states "define sauté." The active vocabulary is dynamically generated using context-sensitive, localized speech grammars, allowing more synonyms to be included for a given word. Real-world performance may be improved by adding partial terms and colloquialisms that may only be relevant in specific circumstances.

TOWARDS AN ATTENTIVE KITCHEN
Interfaces that recognize and respond to user attention, and understand how it relates to the overall activity, can help the user efficiently engage in tasks. To achieve this, we must augment the kitchen with attentive sensors that monitor human behavior [5,6], augment appliances with functional sensors [3,6], improve coordination among appliances [5], and allow appliances to affect the environment [3,5].

Attentive and Environmental Sensors
Increasing the knowledge of users' activities may allow interfaces to engage in less interruptive, and more respectful, interactions with users. Visual attention, a prime indicator of human interest, can be deduced by adding eye contact sensors [5,6] to items in the environment. This information can be used to determine the appropriate volume and timing of notifications to the user. Additionally, temperature sensors can be used to keep track of the status of the oven and the elements of the stove, and could be synchronized with electronic timers to increase the system's ability to guide the user's cooking experience.

Appliance Coordination
Integrating knowledge of the environment can result in improved functionality, taking up less of the user's time and effort. For example, user recipe preferences, timing constraints, as determined by the user's electronic schedule, and currently available ingredients, communicated by food storage areas, can be combined to suggest recipes. Once selected, the ingredients from the recipe can be added to an electronic shopping list stored on the user's PDA.

Active Environmental Actions
The kitchen should not only be aware of its environment, but it should also be able to affect it. Thus, it should be able to take actions which increase efficiency and reduce the user's action load, like automatically preheating an oven.

CONCLUSIONS
We have presented eyeCOOK, a gaze and speech enabled multimodal Attentive User Interface. We have also presented our vision of an Attentive Kitchen in which appliances, informed by sensors, coordinate their behavior and have the capability to affect the environment. This can reduce the user's workload and permit rationalizing requests for user attention.

REFERENCES
1. Ju, W. et al. CounterActive: An Interactive Cookbook for the Kitchen Counter. Extended Abstracts of CHI 2001 (Seattle, April 2001), pp. 269-270.
2. Norman, D. A. The Invisible Computer. MIT Press, 1999.
3. Schmidt, A. et al. How to Build Smart Appliances. IEEE Personal Communications 8(4), August 2001, pp. 66-71.
4. Selker, T. et al. Context-Aware Design and Interaction in Computer Systems. IBM Systems Journal 39(3&4), pp. 880-891.
5. Shell, J. et al. Interacting with Groups of Computers. Commun. ACM 46(3), March 2003.
6. Shell, J. et al. EyePliances: Attention Seeking Devices that Respond to Visual Attention. Extended Abstracts of CHI 2003 (Ft. Lauderdale, April 2003), pp. 770-771.
7. Tran, Q. et al. Cook's Collage: Two Exploratory Designs. Position paper for Families Workshop at CHI 2002 (Minneapolis, April 2002).

282
Virtual Rear Projection
Jay Summet, Ramswaroop G. Somani, James M. Rehg, Gregory D. Abowd
College of Computing
801 Atlantic Drive
Atlanta, GA 30332-0280
{summetj,somani,rehg,abowd}@cc.gatech.edu

ABSTRACT
Rear projection of large-scale upright displays is often pre-
ferred over front projection because of the elimination of
shadows that occlude the projected image. However, rear
projection is not always a feasible option for space and cost
reasons. Recent research suggests that many of the desir-
able features of rear projection, in particular shadow elim-
ination, can be reproduced using new front projection tech-
niques. This video demonstrates various front projection
techniques and shows examples of coping behavior users ex-
hibit when interacting with front projected displays.
1. PASSIVE TECHNOLOGIES
Researchers have been working to resolve the occlusion problem by filling in the technological space between standard front projection and true rear projection. We have performed an empirical study comparing the following projection technologies (the first three are demonstrated in the video):

Front Projection (FP) - A single front projector is mounted along the normal axis of the screen. Users standing between the projector and the screen will produce shadows on the screen. This setup is similar to most ceiling-mounted projectors in conference rooms.

Figure 1: Front projection casts a shadow directly before the user.

Warped Front Projection (WFP) - A single front projector is mounted off the normal axis of the projection screen, in an attempt to minimize occlusion of the beam by the user. The output is warped to provide a corrected display on the screen. Examples are new projectors with on-board warping functions, such as those used by the 3M IdeaBoard [1], or the Everywhere Displays Projector [3]. Additionally, the latest version of the nVidia video card drivers includes a "keystoning" function which allows any Windows computer to project a warped display.

Figure 2: Warped front projection attempts to shift the user's shadow so that they can interact with the graphics in front of them.

Virtual Rear Projection (VRP) - Two front projectors are mounted on opposite sides of the normal axis to redundantly illuminate the screen. Output from each projector is warped (as with WFP) to correctly overlap on the display screen. This reduces the number, size and frequency of occlusions. Users standing very close to the screen may still completely occlude portions of the output, but usually occlude the output of only one of the projectors, resulting in "half-shadows" where the output is still visible at a lower level of contrast.

Figure 3: Virtual Rear Projection provides redundant illumination so that the user casts "half-shadows" when they block one of the two projectors.

Rear Projection (RP) - Uses a single projector mounted behind the screen, so that it is not possible to occlude the projection beam or cause shadows.

2. USER COPING STRATEGIES
In our study [5], we identified coping behaviors exhibited by users of front projection displays (and, to a lesser extent, of warped front projection displays). The behaviors demonstrated in the video are:

Dead Reckoning - 1 of 17 participants. This participant would stand in the center of the screen so that he was blocking a particular box. When a box didn't appear (because it was occluded by his body) he would reach to where the box would have been projected and move it without the visual cue.

Edge of Screen - 7 of 17 participants. These participants would stand at the edge of the screen so that they would not occlude the boxes as they were projected. They would then have to lean inwards to move the box.

Near Center - 7 of 17 participants. These participants would stand near the center of the screen. Whenever a box did not appear (because they were occluding it with their body) they would "sway" by leaning at the waist until the box appeared.

The fourth observed coping strategy, Move on Occlusion, was exhibited by 3 of the 17 participants. When they occluded a box, they would move (left or right) to uncover it, and stay in their new position until they occluded another box. (These three participants chose not to release their videos to outside observers, so an example is not included in the video.)

Users did not exhibit these behaviors with virtual rear projection or rear projection displays.

3. UNDER DEVELOPMENT: ACTIVE TECHNOLOGIES
As we found that users prefer rear projection (RP) over passive virtual rear projection (VRP), we have continued to develop projection technologies that attempt to close the gap between true rear projection and passive virtual rear projection. Emerging technologies demonstrated in the video are:

Active Virtual Rear Projection (AVRP) - Similar to VRP, AVRP adds a camera or other sensor which determines when one of the projectors is occluded. The system then attempts to compensate for this occlusion by boosting output power from the other projector(s) to increase contrast in the "half-shadow" area(s) [2,4].

Figure 4: Using feedback from a camera, Active Virtual Rear Projection boosts light in shadowed regions.

Switched VRP with Blinding Light Suppression (SVRP-BLS) - Similar to AVRP, SVRP-BLS adds the ability to turn off projector output that is projecting onto a user or object. This blinding light suppression allows users to comfortably face the projectors without blinding light or distracting graphics being projected into their eyes or onto their bodies.

Figure 5: Switched Virtual Rear Projection with Blinding Light Suppression actively reacts by turning off pixels that are blocked by the occluder, which prevents the system from casting light onto the user.

These techniques are not yet indistinguishable from a rear projected surface, and exhibit some possibly distracting visual artifacts such as "halos" which follow occluded areas. We are continuing to develop improved techniques to simulate rear projection using redundant front projectors.

References
1. 3M IdeaBoard. http://www.3m.com/.
2. Christopher Jaynes, Stephen Webb, R. Matt Steele, Michael Brown, and W. Brent Seales. Dynamic shadow removal from front projection displays. In Proceedings of the Conference on Visualization 2001, pages 175–182. IEEE Press, 2001.
3. Claudio Pinhanez. The Everywhere Displays Projector: A device to create ubiquitous graphical interfaces. In Proceedings of Ubiquitous Computing (UbiComp), pages 315–331, 2001.
4. T. J. Cham, J. Rehg, G. Sukthankar, and R. Sukthankar. Shadow elimination and occluder light suppression for multi-projector displays. In CVPR Demo Summary, 2001.
5. Jay Summet, Gregory D. Abowd, Gregory M. Corso, and James M. Rehg. Virtual rear projection: An empirical study of shadow elimination for large upright displays. Technical Report 03-13, GVU Center and College of Computing, Georgia Institute of Technology, 2003.
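The active techniques described in Section 3 can be sketched in a few lines: given a camera-derived mask of screen regions where projector A is occluded, SVRP-BLS blanks projector A there (so no light lands on the user), while AVRP-style compensation boosts projector B in the same regions to restore contrast. This is an illustrative sketch under our own naming and assumptions, not the authors' implementation.

```python
def compensate(frame, occluded_a, gain=1.8):
    """Illustrative AVRP/SVRP-BLS step (our naming, not the authors' code).
    frame      -- 2-D list of 8-bit gray levels to display on the screen
    occluded_a -- same-shape 2-D list of booleans: is projector A blocked here?
    Returns the two images to send to projectors A and B."""
    out_a, out_b = [], []
    for row, mask_row in zip(frame, occluded_a):
        a_row, b_row = [], []
        for value, blocked in zip(row, mask_row):
            if blocked:
                # Blinding light suppression: A's blocked pixels would only
                # land on the user, so turn them off...
                a_row.append(0)
                # ...and boost B in the half-shadow to restore contrast,
                # clipping to the displayable range.
                b_row.append(min(int(value * gain), 255))
            else:
                a_row.append(value)
                b_row.append(value)
        out_a.append(a_row)
        out_b.append(b_row)
    return out_a, out_b

# Toy example: a flat gray frame where the user blocks the left half of A.
frame = [[100] * 8 for _ in range(4)]
mask = [[col < 4 for col in range(8)] for _ in range(4)]
send_a, send_b = compensate(frame, mask)
```

In a real system the occlusion mask would come from the camera feedback loop, and the boost would be limited by the projector's brightness headroom.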
VIRTUAL HANDYMAN: Supporting Micro Services on Tap
through Situated Sensing & Web Services
Dadong Wan
Accenture Technology Labs
Chicago, IL 60601 USA
+1 312 693 6806
[email protected]

INTRODUCTION
Just imagine a homeowner trying to install a light fixture. It doesn't go smoothly, so he needs expert assistance. What would he do? First, he'd look for help, possibly in a thick phone book or perhaps through a keyword search on the Internet, or ask a neighbor. He'd assess the possibilities (e.g., home improvement and hardware stores, private contractors, handymen) and make a choice. Then he'd make a call, try to describe the problem, and decide what to do. Maybe he'd take notes and go back up the ladder to give it another try. Maybe a repairman would come to the house, lend a hand, and present a bill.

The above scenario represents a typical call for "micro services" – services that come in a finer granularity in terms of duration and cost. These services share three common characteristics. First, they involve a layperson and an expert; the task typically requires asymmetric collaboration between the two parties. Second, while the first step in any service involves finding an appropriate service provider for the problem at hand, the discovery cost for micro services is very high relative to the total cost of rendering the service. In the above scenario, for example, it may take the electrician only 10 minutes to show the homeowner how to install a light fixture, but it would take much longer to find the right electrician who is available and willing to provide such a service. And finally, micro services require a high degree of spontaneity. In the above example, if getting the service requires the homeowner to go out of their way, e.g., waiting for 3 hours or learning a new application, chances are that the person would not end up using the service.

To address these unique challenges of micro services, we propose the concept of "micro services on tap," which allows spontaneous service delivery by integrating several technologies, including miniature cameras, Web services, wireless networks, and a speech interface.

THE PROTOTYPE
VIRTUAL HANDYMAN is a research prototype that supports micro services on tap for home improvement tasks. The prototype consists of three modules: user, provider, and market. These applications run on separate computers connected to a network. The user application includes a wireless microphone and a wireless miniature camera measuring .88" x .57" x .92", with a range of up to 750 feet with line of sight. We also built a customized flashcam (see Figure 1) that combines illumination, wireless video, and a pointing device into a convenient form factor that can be used when working in poorly lit areas, like a crawl space or underneath a sink. The wireless camera and flashcam enable the provider at a remote location to see what the user is doing, and to give advice accordingly.

Figure 1. A 900MHz miniature wireless camera (left) allows hands-free operation. The flashcam (right) has a built-in flashlight, wireless camera, and laser pointer.

The wireless microphone (a Sony WCS-999) allows the user to freely roam around the home and still interact with the application via speech recognition and synthesis. Below is a sample dialogue between Krishna (K) and the application (A), when he asks for help replacing an electrical outlet:

K: I need some help in installing my electrical outlet.
A: Do you want an electrician?
K: Yes.
A: I've found two service providers: Excel Home Center and Randy's Electrical Shop.
K: Tell me more about Excel Home Center.

A: Excel Home Center offers virtual installation services for the products sold in its stores free of charge.
K: Connect me to Excel Home Center.
A: Please wait while I'm binding to Excel Home Center…

When Krishna's request arrives at Excel Home Center, the provider is alerted through a pager-like device. He then walks over to a store kiosk or an in-vehicle PC. By simply entering the service ID, he immediately gets connected to the user. Figure 2 shows the view from the Excel Home Center. The screen displays the recent history with the customer and a live view of his task environment.

Figure 2. The service provider application shows a live view of the customer's task environment.

At the heart of VIRTUAL HANDYMAN is the market application, which includes a private UDDI [1] registry and a custom taxonomy for home improvement. For our prototype, the registry contains a dozen businesses and services. The Web services interface is implemented using the Microsoft UDDI Server SDK. In addition to UDDI functions, the module integrates speech engines (IBM ViaVoice and AT&T Natural Voices) so the user can interact with the system by voice, as illustrated in the dialogue above. The module also includes a simple task model of home improvement so that it can map the user's task at hand to a specific type of service. For example, when the user mentions the words "electrical outlet" or "light fixture," it knows that he is performing an electrical job, and thus replies, "Do you want an electrician?"

To complement the mobile solution, we also built an online workbench (see Figure 3) with an embedded camera, microphone array (Andrea DA-400), LCD display, and Internet connection. When a user carries out a task on the workbench (e.g., repairing a small appliance or creating a blueprint) and needs help, he can call up a service, just as described earlier. This time, however, he doesn't need to wear a microphone or camera, since both are built into the workbench. As a fixed-location interface to the provider, the workbench can afford a much richer experience. For example, if needed, the provider can direct the user to an instructional video or a Web page about a task, which is shown on the LCD display.

Figure 3. The online workbench (with built-in camera, microphone array, LCD display, and Internet connection) offers a fixed-location interface to the service provider.

CONCLUSIONS
While VIRTUAL HANDYMAN focuses on home improvement tasks, the approach can be generalized to other service areas, such as cooking, fashion, personal security, travel, and shopping, where personal interaction between a novice and an expert is needed. Take personal fashion, for example. With a smart wardrobe like [2] in your bedroom, any time you need advice on what to wear, your wardrobe can, in real time, find and connect you to a live fashion advisor, who may help you select the best outfit for your specific occasion. Since the fashion advisor can see what you're wearing through the built-in camera, and what you have in the wardrobe through the embedded RFID reader, you don't need to waste any time explaining or describing them. The service is fast, personal, and can be called upon at any time.

In sum, micro services on tap, as illustrated by VIRTUAL HANDYMAN, are all about being able to easily and spontaneously discover and interact with people who could be helpful in a specific situation. With the proliferation of broadband wireless networks, Web services, and sensors like inexpensive miniature cameras, GPS receivers, and RFID tags, we expect that such services will soon become widely available.

REFERENCES
1. For more information on UDDI, see www.uddi.org and www.webservices.org.
2. Dadong Wan. Magic Wardrobe: Situated Shopping from Your Own Bedroom. 2nd International Symposium on Handheld and Ubiquitous Computing (HUC'2000), Bristol, UK, September 25-27, 2000.
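The market application's two steps described above — a simple task model that maps the user's words to a service category, and a registry lookup for matching providers — can be sketched as follows. All names and data here are illustrative stand-ins, not the prototype's actual taxonomy or Microsoft UDDI Server SDK calls.

```python
# Hypothetical keyword task model: spoken phrases -> service category.
TASK_MODEL = {
    "electrical outlet": "electrician",
    "light fixture": "electrician",
    "sink": "plumber",
}

# Tiny in-memory registry standing in for the private UDDI registry.
REGISTRY = {
    "electrician": ["Excel Home Center", "Randy's Electrical Shop"],
    "plumber": ["Citywide Plumbing"],  # hypothetical entry
}

def classify_task(utterance):
    """Map an utterance to a service category by keyword spotting."""
    text = utterance.lower()
    for keyword, category in TASK_MODEL.items():
        if keyword in text:
            return category
    return None

def find_providers(utterance):
    """Return the registered providers for the user's task, if any."""
    return REGISTRY.get(classify_task(utterance), [])

providers = find_providers("I need some help in installing my electrical outlet")
# The system can now ask "Do you want an electrician?" and read out providers.
```

In the actual prototype the category would drive a UDDI inquiry against the registry rather than a dictionary lookup, and the utterance would come from the speech recognizer.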

Part VI

Workshops
Ubicomp Education:
Current Status and Future Directions
Gregory D. Abowd
College of Computing
Georgia Institute of Technology
Atlanta, Georgia 30332-0280, USA
+1 404 894 7512
[email protected]

Gaetano Borriello
Department of Computer Science & Engineering
University of Washington
Seattle, WA 98195-2350, USA
+1 206 685 9432
[email protected]

Gerd Kortuem
Computing Department
Lancaster University
Lancaster LA1 4YR, UK
+44 1524 593116
[email protected]

ABSTRACT
Ubiquitous computing is becoming an increasingly important research area. While experimental ubicomp courses are taught on a more or less regular basis at several universities, there is no common understanding of which ubicomp-related topics should be taught and how to teach them as part of Computer Science and Computer Engineering curricula. This workshop aims to review the current state of ubicomp education and create a vision for its future. The primary expected outcomes are the definition of ubicomp teaching modules and the establishment of a permanent collection of ubicomp educational materials.

WORKSHOP DESCRIPTION
Motivation
Over the last years, ubiquitous computing (ubicomp) has emerged as one of the fastest growing research areas in computer science. At its core is the idea of the "disappearing computer" – a vision of computational devices and digital services that blend into the environment, where they become an integral part of people's everyday lives and experience.

Ubicomp is inherently an interdisciplinary research area. To be successful, ubicomp must balance the creation of new technologies with human-centred, real-world evaluation. This requires the integration of essential concepts and methods from a variety of disciplines, including computer science, computer engineering, human-computer interaction, industrial design, architecture, psychology, and sociology.

The growing importance of ubicomp as a research area and its potential economic and social impact raise important questions of how to teach ubicomp-related topics as part of computer science and computer engineering curricula. Ubicomp education is currently in the beginning stages of a natural evolutionary process. Experimental ubicomp courses are taught on a more or less regular basis at several universities as part of computer science and engineering curricula. But as of yet there exists no common understanding of what and how to teach ubicomp-related topics. Although this is in part a reflection of the general state of ubicomp as a fast growing and still relatively new research discipline, we feel it is time to review the current state of ubicomp education and to create a vision for its future.

Goals
The goal of this workshop is to discuss where ubicomp education is today and where it should be headed. It is aimed at educators and students who are actively involved in teaching, learning and training ubicomp and who are interested in shaping the future of ubicomp education. Particular questions to be addressed at this workshop include:
• What should we be teaching? Which theories, principles, models and frameworks are considered essential for ubicomp?
• How should we be teaching ubicomp? Which educational approaches and learning pedagogies are appropriate for ubicomp education?
• How should ubicomp be positioned in relation to existing Computer Science, Electrical Engineering and HCI curricula? Is there a need for separate ubicomp courses, or should ubicomp-related topics be subsumed in existing courses?
• What resources are required and available to enable educators and learners? Which books, course materials, web sites, videos, software libraries and hardware toolkits exist or should exist for teaching ubicomp?
• What is industry looking for? Which ubicomp-related skills and knowledge do employers and companies require now and in the future?

Expected Outcome
The workshop is intended as a forum for the exchange of educational experiences and is focused on the joint development of a common vision statement derived from the experience of each of the participants. The expected results include:
• A survey of the current state: a (preliminary and possibly incomplete) compilation of currently offered ubicomp-related courses and available educational resources.
• The scope of ubicomp education: a compilation of important ubicomp topics (research questions, theories, methods, models and technologies). These topics could lay the foundation for the later generation of ubicomp teaching modules.
• Future agenda: identification of concrete steps that should be taken to improve the quality of ubicomp education.

Dissemination
The results of the workshop will be summarized in a report to be published in an appropriate journal or newsletter. A web site will serve as a permanent record of the event. Most importantly, we envision this web site functioning as a continuously updated public collection of educational materials (lecture notes, assignments, reference lists, software toolkits, pointers to web sites, etc.).

WORKSHOP FORMAT
Activities
To maximize information and idea exchange and foster collaboration, we plan to spend most of the time on discussions rather than presentations. The primary activities at the workshop will take place in small working groups made up of 3-4 people. The broad outline of the workshop is:
• Early morning: participants present their backgrounds in ubicomp education and compare educational experiences.
• Mid morning: discussion of the scope of ubicomp education; identification of important ubicomp topics.
• Late morning: forming of breakout groups.
• Early afternoon: breakout groups identify research questions, theories, methods, models and technologies related to individual topics.
• Late afternoon: groups report back; drafting of a future agenda to improve the quality of ubicomp education.

Expected Audience
The workshop is directed towards educators, researchers and students from a variety of disciplines (computer science, computer engineering, human-computer interaction, psychology, sociology, …).

Participation
Participants will be selected on the basis of their ubicomp teaching experience. In lieu of a traditional position paper, participants are asked to complete a ubicomp education questionnaire.

Workshop Web Site
The workshop web site can be found at http://ubicomp.lancs.ac.uk/workshops/education03.

ORGANIZERS
Gregory D. Abowd is an Associate Professor in the College of Computing and GVU Center at the Georgia Institute of Technology. His research interests include software engineering for interactive systems, with particular focus on mobile and ubiquitous computing applications. He leads a research group in the College of Computing focussed on the development of prototype future computing environments which emphasize mobile and ubiquitous computing technology for everyday uses. He currently serves as Director of the Aware Home Research Initiative. He was General Chair for Ubicomp 2001, held in Atlanta, and is an Associate Editor for IEEE Pervasive Computing magazine. Dr. Abowd has published over 90 scientific articles and is co-author of one of the leading textbooks on Human-Computer Interaction.

Gaetano Borriello is a faculty member in the University of Washington's Department of Computer Science and Engineering. He is currently near the end of a two-year leave to serve as Director of Intel Research Seattle, where the focus of research is on new devices, systems, and usage models for ubiquitous computing. His research interests span the categories of embedded system design, development environments, user interfaces, and networking infrastructure. They are unified by a single goal: to create new computing and communication devices that make life simpler for users by being invisible, highly efficient, and able to exploit their networking capabilities. Prior to receiving his Ph.D. in Computer Science at UC Berkeley, Dr. Borriello spent four years as a member of the research staff at Xerox PARC. In 1995, he received the UW Distinguished Teaching Award. He currently serves on the Editorial Board of IEEE Pervasive Computing Magazine. He is a member of the IEEE Computer Society and the ACM.

Gerd Kortuem is a Lecturer in the Computing Department at Lancaster University, UK. His research interests include engineering and usability aspects of interactive and collaborative technologies, with particular focus on mobile, wearable and ubiquitous computing applications. Dr. Kortuem received his Ph.D. in Computer Science at the University of Oregon for his work on Wearable Communities. In the past, he worked as a researcher at Apple Computer's Advanced Technology Group in California, the Technical University of Berlin, and the IBM Science Centre in Germany. He is a member of the IEEE Computer Society and the ACM.

2003 Workshop on Location-Aware Computing
Mike Hazas James Scott John Krumm
Lancaster University    Intel Research Cambridge    Microsoft Research, Redmond
http://www.ubicomp.org/ubicomp2003/workshops/locationaware/

ABSTRACT
The field of location-aware computing has made great advances recently, with numerous location systems, software platforms, and location-aware applications having been presented by researchers. However, in most environments, including many research labs, location-aware computing has yet to achieve the proliferation and ubiquity envisioned for it.

This workshop will examine the state of the art in location-aware computing, with the objectives of drawing conclusions from existing work and identifying critical areas for future research. The main themes of the workshop will include location sensing technologies, location representations and sensor fusion, compelling location-aware applications, and factors affecting the deployment of location-aware systems in everyday environments.

THE STATE OF THE ART
Determining the location of people and objects has been the focus of much research in ubiquitous computing. Many location sensing technologies have been devised, resulting in systems which perform sensing using diverse physical media, such as infrared light [22, 23], ultrasound [20, 1], electromagnetic signals [8, 4, 15, 6], ground reaction force [2, 19], physical/electrical contact [10], and visible light [14, 17]. Naturally, these systems have an equally diverse set of properties; each implementation has its own level of accuracy, update rate, infrastructure cost, deployment difficulty, robustness, and capacity for privacy guarantees [9].

Location-aware applications are numerous. Examples include portable memory aids [16], conference assistants [21], environmental resource discovery and control [13], support systems for the elderly [12], tour guides [5], augmented reality [3], mobile desktop control, 3D mice, and virtual buttons [1]. Each demands different levels of service from the supporting systems, for example in terms of location accuracy and update rate.

There has also been a recent focus on location-aware "platforms," which link data-gathering systems and data-consuming applications in a flexible manner. Such work includes location representation [11], sensor fusion to combine location data from many sources [7], and software frameworks supporting the distributed nature of location-aware computing [18]. Such abstractions are essential for the interoperability, usability and development of location-aware systems and applications.

Unfortunately, many of the promising location systems, platforms, and applications presented have not gone into widespread use in ubiquitous computing research labs, much less everyday environments. Thus, it is important for the diverse groups of researchers interested in location-aware computing to collectively examine the state of the art, with the following goals:

1. Survey the various location-sensing technologies and systems presented thus far; if possible, identify trends in existing research leading to generalizations about location techniques.
2. Identify compelling location-aware computing scenarios and applications (especially potential "killer apps"), and assess the nature of their needs (such as accuracy and update rate) for location information and platform support.
3. Examine current location-aware abstractions, platforms and frameworks, and evaluate their suitability for existing and proposed location-aware computing scenarios.
4. Taking all the above into account, consider the factors necessary to facilitate deployment and everyday use of location-aware computing systems in workplaces, homes, and public spaces.

Accomplishing these objectives will help identify the most promising directions for future research in location-aware computing.

WORKSHOP DESCRIPTION
The workshop aims to engage all participants in the exchange of ideas. The morning sessions will consist of short talks from six invited speakers. The speakers in related talks will then act as a panel, with a large allowance for panel-based discussion and debate.

The afternoon sessions will be organized around the use of breakout groups to discuss key issues in location-aware computing. While the exact breakout topics will be decided during the workshop, potential topics include: comparisons and classifications of location-sensing infrastructure and location-aware platforms; "killer apps" and their requirements of location infrastructure; deployment techniques for location infrastructure; and social implications, such as usability, user acceptance, and privacy concerns.

The groups will report back to the workshop in a plenary session, with a short presentation followed by panel-based discussion with the breakout group members forming the panel.
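As a toy illustration of the sensor fusion theme mentioned above (combining location data from many sources), the textbook case of fusing two independent Gaussian location estimates by inverse-variance weighting can be sketched as follows. This is generic fusion arithmetic, not the method of any particular system cited here; the sensor readings are invented.

```python
def fuse(mu1, var1, mu2, var2):
    """Inverse-variance (precision-weighted) fusion of two independent
    Gaussian location estimates; returns the fused mean and variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    return mu, 1.0 / (w1 + w2)

# Hypothetical readings along one axis: a Wi-Fi system reports 10.0 m with
# variance 4.0; an ultrasound system reports 12.0 m with variance 0.25.
mu, var = fuse(10.0, 4.0, 12.0, 0.25)
# The fused estimate is pulled toward the more certain ultrasound reading,
# and its variance is smaller than either input's.
```

Real platforms must additionally handle differing coordinate systems, update rates, and correlated errors, which is precisely what the location representation and framework work above addresses.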
Results of the workshop will be presented as one or more posters at UbiComp 2003.

SPEAKERS
Jay Cadman has over ten years of experience selling and marketing high-tech products around the world. He was one of the founders of the North American operations of Smallworld (a leading Geographical Information Systems company), helping it grow to over two hundred employees, and eventually running global marketing. He then spent two years running commercial and risk management for GE Network Solutions and was appointed marketing lead for Automation and Network Services, an approximately $300 million GE business. Jay now manages North American sales operations for Ubisense, a company that provides location-aware solutions.

Dieter Fox is currently an Assistant Professor at the University of Washington, where he heads the Sensing, Tracking, and Robotics lab, which he established in January 2000. Fox obtained his PhD from the University of Bonn, Germany, in the area of state estimation in mobile robotics. Before joining UW, he spent two years as a postdoctoral researcher at the CMU robot learning lab. Over the last six years his research has focused on probabilistic sensor interpretation, state estimation, and their application to mobile robotics. He introduced particle filters as a powerful tool for state estimation in robotics. His current research projects include multi-robot coordination and human activity recognition. Fox received an NSF CAREER award and several best paper awards at major robotics and AI conferences.

Jeffrey Hightower is a doctoral candidate at the University of Washington. His research interests are in employing devices, services, sensors, and interfaces so computing can calmly fade into the background of daily life. Specifically, he investigates abstractions and statistical sensor fusion techniques for location sensing. He received an MS in Computer Science and Engineering from the University of Washington and is a member of the ACM and the IEEE.

Bodhi Priyantha is a fifth-year PhD student in the Networks and Mobile Systems group at the Computer Science and Artificial Intelligence Laboratory at MIT. He received his BSc degree in Electronics and Telecommunications from the University of Moratuwa, Sri Lanka, in 1996. He was a teaching instructor at the University of Moratuwa from 1996 to 1998, and received his MSc in Computer Science from MIT in 2001. His primary research area is location awareness in mobile computing and sensor networks. Currently he is working on self-calibration and configuration of location-enhanced sensor nodes.

Bill Schilit is co-director of Intel Research Seattle and is part of a small team chartered with defining and driving Intel's ubiquitous computing agenda. Dr. Schilit's research focuses on ubiquitous and proactive computing applications, with an emphasis on context-aware computing. His research is positioned at the intersection of networking and human-computer interaction. Prior to joining Intel, he managed the Personal and Mobile Computing Group at FX Palo Alto Laboratory, a Fuji Xerox company. Dr. Schilit also worked at AT&T Bell Labs and Xerox Palo Alto Research Center. At PARC, he championed the notion of location-aware computing and helped invent, design, and build the software and applications for the PARCTAB. He is Associate Editor in Chief of IEEE Computer, an area editor of IEEE Wireless Communications, and a member of the IEEE Computer Society and the ACM.

Steven A. N. Shafer is a Senior Researcher at Microsoft Corporation working in the area of ubiquitous computing. He received his BA from the University of Florida in 1976, and his PhD from Carnegie Mellon in 1983. He was a member of the faculty at Carnegie Mellon for twelve years, working in computer vision and robot navigation. Dr. Shafer joined Microsoft in 1995, where he started the EasyLiving project to develop an architecture for building intelligent environments. His current work is in location awareness and RFID technology.

PARTICIPANT LIST
Twenty-eight participants were accepted to the workshop on the basis of submitted participation statements. The participants, listed below, provide representation from four continents and from both academic and industrial backgrounds.

Jürgen Bohn, ETH Zürich
Jay Cadman, Ubisense Inc.
Derek Corbett, U. of Sydney
Esko Dijk, Eindhoven U. of Technology
Boris Dragovic, U. of Cambridge
Heinz-Josef Eikerling, Siemens
Dieter Fox, U. of Washington
Richard Glassey, U. of Strathclyde
Mike Hazas, Lancaster U.
Jeffrey Hightower, U. of Washington
Albert Krohn, TecO, U. of Karlsruhe
John Krumm, Microsoft Research
Anthony LaMarca, Intel Research
Gaute Lambertsen, Japan Science & Tech. Corp.
John Light, Intel Research
Christopher Lueg, U. of Technology Sydney
Robert Lutz, Sun Microsystems
Nibuhiko Nishio, Japan Science & Tech. Corp.
Bodhi Priyantha, MIT
Steffen Reymann, Philips Research Labs
Kam Sanmugalingam, U. of Cambridge
Bill Schilit, Intel Research
James Scott, Intel Research
Steve Shafer, Microsoft Research
David Ayman Shamma, Northwestern U.
Richard Sharp, Intel Research

ORGANIZERS
Mike Hazas is a research associate at Lancaster University. His primary area of research is sensors and hardware architecture for location systems; he is currently working on infrastructure-free relative positioning technologies. Mike's PhD study focused on indoor location systems, with particular attention paid to investigating the advantages of broadband ultrasound for indoor positioning.
292
James Scott is a researcher with Intel Research Cambridge, where he is working on location-aware computing research, most recently investigating deployment issues in location-aware systems. His PhD was on Networked Surfaces, a novel type of network using physical surfaces such as desks as the connecting medium, which also provides accurate location and orientation information.

John Krumm is a PhD researcher at Microsoft Research. His experience in location-aware computing includes person-tracking with video cameras, the SmartMoveX RF active badge, and the Locadio system for computing the location of Wi-Fi clients. In addition to work with these location sensors, he has developed various means of enhancing accuracy by respecting prior assumptions about people's walking speeds and feasible paths.

REFERENCES

[1] M. Addlesee, R. Curwen, S. Hodges, J. Newman, P. Steggles, A. Ward, and A. Hopper. Implementing a sentient computing system. IEEE Computer, 34(8):50–56, Aug. 2001.

[2] M. Addlesee, A. Jones, F. Livesey, and F. Samaria. The ORL Active Floor. IEEE Personal Communications, 4(5):35–41, Oct. 1997.

[3] R. T. Azuma. A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6(4):355–385, Aug. 1997.

[4] P. Bahl and V. N. Padmanabhan. RADAR: An in-building RF-based user location and tracking system. In Proc. of InfoCom, volume 2, pages 775–784, Tel-Aviv, Israel, Mar. 2000.

[5] K. Cheverst, N. Davies, K. Mitchell, and A. Friday. Experiences of developing and deploying a context-aware tourist guide: The GUIDE project. In Proc. of MobiCom, Boston, USA, Aug. 2000.

[6] R. J. Fontana and S. J. Gunderson. Ultra-wideband precision asset location system. In Proc. of the IEEE Conf. on Ultra Wideband Systems and Technologies, Baltimore, USA, May 2002.

[7] D. Fox, W. Burgard, H. Kruppa, and S. Thrun. A probabilistic approach to collaborative multi-robot localization. Autonomous Robots, 8(3):324–344, 2000.

[8] I. Getting. The Global Positioning System. IEEE Spectrum, 30(12):36–47, Dec. 1993.

[9] J. Hightower and G. Borriello. Location systems for ubiquitous computing. IEEE Computer, 34(8):57–66, Aug. 2001.

[10] F. Hoffmann and J. Scott. Location of mobile devices using Networked Surfaces. In Proc. of UbiComp, pages 281–298, Göteborg, Sweden, Sept. 2002.

[11] C. Jiang and P. Steenkiste. A hybrid location model with a computable location identifier for ubiquitous computing. In Proc. of UbiComp, pages 246–263, Göteborg, Sweden, Sept. 2002.

[12] C. D. Kidd, R. Orr, G. D. Abowd, C. G. Atkeson, I. A. Essa, B. MacIntyre, E. Mynatt, T. Starner, and W. Newstetter. The Aware Home: A living laboratory for ubiquitous computing research. In Proc. of CoBuild, Pittsburgh, USA, Oct. 1999.

[13] T. Kindberg, J. Barton, J. Morgan, G. Becker, D. Caswell, P. Debaty, G. Gopal, M. Frid, V. Krishnan, H. Morris, J. Schettino, B. Serra, and M. Spasojevic. People, places, things: Web presence for the real world. In Proc. of WMCSA, pages 19–28, Monterey, USA, Dec. 2000.

[14] J. Krumm, S. Harris, B. Meyers, B. Brumitt, M. Hale, and S. Shafer. Multi-camera multi-person tracking for EasyLiving. In Proc. of the Third IEEE Intl. Workshop on Visual Surveillance, Dublin, Ireland, July 2000.

[15] J. Krumm, L. Williams, and G. Smith. SmartMoveX on a graph—an inexpensive active badge tracker. In Proc. of UbiComp, pages 299–307, Göteborg, Sweden, Sept. 2002.

[16] M. Lamming and M. Flynn. Forget-Me-Not: Intimate computing in support of human memory. In Proc. of the Intl. Symp. on Next Generation Human Interface Technologies, Meguro Gajoen, Japan, Feb. 1994.

[17] D. López de Ipiña, P. Mendonça, and A. Hopper. TRIP: A low-cost vision-based location system for ubiquitous computing. Personal and Ubiquitous Computing, 6(3):206–219, May 2002.

[18] H. Naguib and G. Coulouris. Location information management. In Proc. of UbiComp, pages 35–41, Atlanta, USA, Sept. 2001.

[19] R. J. Orr and G. D. Abowd. The Smart Floor: A mechanism for natural user identification and tracking. In Proc. of CHI, The Hague, Netherlands, Apr. 2000.

[20] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan. The Cricket location-support system. In Proc. of MobiCom, Boston, USA, Aug. 2000.

[21] Y. Sumi and K. Mase. Digital assistant for supporting conference participants: An attempt to combine mobile, ubiquitous and web computing. In Proc. of UbiComp, pages 156–175, Atlanta, USA, Sept. 2001.

[22] R. Want, A. Hopper, V. Falcao, and J. Gibbons. The Active Badge location system. ACM Trans. on Information Systems, 10(1):91–102, Jan. 1992.

[23] G. Welch, G. Bishop, L. Vicci, S. Brumback, K. Keller, and D. Colucci. The HiBall tracker: High-performance wide-area tracking for virtual and augmented environments. In Proc. of the ACM Symp. on Virtual Reality Software and Technology, University College London, Dec. 1999.
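Several of the location systems cited in the references above, such as RADAR [4], estimate a device's position by matching its observed Wi-Fi signal strengths against a pre-recorded calibration map of "fingerprints", and John Krumm's Locadio works in a similar signal-strength space. The sketch below is a minimal, hypothetical nearest-neighbour version of that idea; all access-point names and readings are invented for illustration and are not taken from any cited system.

```python
# Hypothetical sketch of RF-fingerprint localization in the style of
# RADAR [4]: each calibration point stores the signal strengths (dBm)
# seen from a set of access points; a device is located at the
# calibration point whose fingerprint is nearest (Euclidean distance
# in signal space) to its current observation.
import math

# Fingerprint map: location label -> signal strength per access point.
FINGERPRINTS = {
    "office_101": {"ap1": -40, "ap2": -70, "ap3": -80},
    "corridor":   {"ap1": -55, "ap2": -60, "ap3": -75},
    "office_105": {"ap1": -75, "ap2": -45, "ap3": -65},
}

def locate(observation, fingerprints=FINGERPRINTS, missing=-100):
    """Return the location whose stored fingerprint is closest in
    signal space to the observed access-point readings; access points
    unheard on either side are treated as a weak default level."""
    def distance(stored):
        aps = set(stored) | set(observation)
        return math.sqrt(sum(
            (stored.get(ap, missing) - observation.get(ap, missing)) ** 2
            for ap in aps))
    return min(fingerprints, key=lambda loc: distance(fingerprints[loc]))

print(locate({"ap1": -42, "ap2": -68, "ap3": -79}))  # -> office_101
```

Real systems refine this with k-nearest-neighbour averaging, probabilistic models, and the motion constraints (feasible paths, walking speeds) mentioned in the bio above.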
UbiHealth 2003: The 2nd International Workshop on Ubiquitous Computing for Pervasive Healthcare Applications

Jakob E. Bardram
Center for Pervasive Healthcare
Computer Science Department, University of Aarhus
Aabogade 34, DK-8200 Århus N, DENMARK
+45 8942 3200
[email protected]

Ilkka Korhonen
VTT Information Technology
P.O. Box 1206 (Sinitaival 6), FIN-33101 Tampere, FINLAND
+358-3-316 3352
[email protected]

Alex Mihailidis
Gerontology Research Centre, Simon Fraser University
515 West Hastings Street, Vancouver, British Columbia, V6B 5K3
+1 604 291 5180
[email protected]

Dadong Wan
Accenture Technology Labs
161 North Clark Street, Chicago, Illinois 60601
+1 312 693 6806
[email protected]

SUMMARY
The term 'pervasive healthcare' describes the use of pervasive computing technologies in delivering healthcare services, including making healthcare services more 'pervasively' available across boundaries in time, organization, and place.

Building on the UbiCog 2002 workshop held at the UbiComp 2002 conference, the aim of this workshop is to continue the development of a community of researchers working with pervasive and ubiquitous computing technology and healthcare, and to identify and discuss research themes and methods in order to guide future research.

The workshop will focus on exploring this potential by discussing the constraints and possibilities of existing and emerging technologies for supporting healthcare and patient self-management, and on identifying current and future research directions. These goals will be accomplished through presentation of the participants' visions and research, brainstorming sessions, and small-group breakout sessions.

INTRODUCTION
Pervasive healthcare can be defined from two perspectives. First, it is the application of pervasive computing (or ubiquitous/proactive computing, ambient intelligence) technologies for healthcare, health and wellness management. Second, it is making health care available everywhere, anytime – pervasively.

Weiser stated that "the most profound technologies are those that disappear" [1]. Pervasive computing may be considered the opposite of virtual reality: while in virtual reality the user enters a world created by computers, in pervasive computing it is the computing which enters the physical world and bridges the gap between the virtual and physical worlds. One of the most important application areas for this technology is healthcare, wellness and disease management, and support for independent living [2,3,4].

Developments in sensor and measurement technology make it possible to obtain health-related information from wearable or embedded sensors in our daily lives. Ubiquitous communication based on mobile telephone networks, (wireless) local area networks, and/or other wireless technologies makes possible anywhere, anytime transfer and access of all kinds of information – such as measurement data, person-to-person communications, or health information. Mobile communication devices provide ubiquitous user interfaces for users, from health care professionals to citizens. The possibilities this technology offers for health care delivery are vast and are only gradually being realized; see for example [5,6].

Besides this kind of monitoring and transfer of biological data, pervasive healthcare also contains a notion of using pervasive computing – or ambient technology – for social computing: for example, creating technologies for relatives and peers of chronically ill persons to stay in touch with a patient. Pervasive healthcare also contains a notion of helping the patient to better manage his or her own disease,
as well as technologies for communication and collaboration among healthcare professionals.

Another central theme related to ubicomp and healthcare is the intersection of ubiquitous computing, cognitive aids, and/or artificial intelligence, as applied to helping people with cognitive disabilities perform daily activities. A primary motivation for developing intelligent "caregiving" systems is the increase in the number of people who have some type of cognitive disability, such as young adults who have learning disabilities, or elderly people who demonstrate (a form of) dementia, such as Alzheimer's disease (AD). Recently, researchers and industrial partners in ubicomp and artificial intelligence have come together to envision systems that can act as proactive partners in assisting these special populations [7,8,9,4]. Furthermore, this community has started to facilitate the exchange of ideas and sharing of information through the organization of seminars and workshops, such as the Cognitive Aids Workshop held last year at UbiComp 2002.

The use of pervasive computing for delivery of health care raises numerous challenges. Dealing with personal, sensitive, health-related aspects of a person's life puts forth strong demands for systems that are reliable, scalable, secure, privacy-enhancing, usable, and configurable, among many other things. At the same time, one has to consider that the average user of such systems is not the typical early adopter of new technology. This puts special focus on creating technologies that are usable, and can adapt to, and seamlessly melt into, heterogeneous computing environments, like the home of the future.

Pervasive healthcare also contains a fundamental methodological challenge. Typical research into pervasive computing uses the methods of experimental computer science, where researchers design, develop, program, and evaluate prototypes of new technology. 'Proof of concept' is a term often used to denote a prototype which illustrates and implements the important aspects of a computer system that one wants to demonstrate. Such an 'experimental' approach becomes highly problematic when dealing with health-related research – what if the experiment goes wrong? Modern evidence-based medicine, in contrast, is based on statistical significance – one has to demonstrate with significance that a treatment or cure actually works, and with limited side effects. Such clinical trials often involve great numbers of subjects (i.e. persons) in various test groups, including a control group. Setting up such a clinical trial, running over several months or years, clearly takes much more than a 'proof-of-concept' prototype. One has to have the resources to design, develop, implement, and maintain a full-fledged computer system, used by thousands of users. Hence, there is a fundamental methodological contradiction embedded in pervasive healthcare, and we would like to discuss this at the workshop as one theme.

The purpose of this workshop is to collect original and innovative contributions in the area of pervasive healthcare and its applications, and to identify and discuss research themes and methods in this area.

TOPICS
The overall topic of this workshop is: the development and application of ubicomp infrastructures and devices for pervasive healthcare, including assisting people with cognitive, mobility, and sensory impairments.

This includes sub-topics such as (but not limited to):

• How to infer a person's behaviors, intentions, needs, and mistakes from sensory data.
• Deciding on appropriate intervention strategies.
• Wearable computing for health.
• Wireless, wearable and implantable sensors.
• Personal medical devices.
• Applications for personal management of acute and chronic diseases, wellness, and health, including self-treatment (e.g. medication), self-care, self-diagnosis and self-rehabilitation.
• Smart home technology to support independent living and peer-to-peer support in health management.
• Pervasive computing in hospitals and care institutions.
• Pervasive applications for caregivers and nurses.
• New services that health care providers, such as pharmacies and physicians, can potentially provide by taking advantage of these emerging technologies.
• Personal, social, cultural and ethical implications of developing and using such technology.
• Methodological issues in evidence-based medicine (EBM) and pervasive healthcare – how to create innovative new healthcare technology which can be clinically tested?

The goal of this workshop is to bring together individuals who are actively pursuing research directions related to pervasive healthcare. This workshop will allow individuals with complementary research experiences to build a collective understanding of the issues surrounding the use of pervasive technologies in healthcare settings. This workshop is timely given the existence of diverse research in the area, and will help to bring together the work of emerging research groups, all working in the area of pervasive healthcare.

WORKSHOP ACTIVITIES AND GOALS
This workshop will be run over a full day and will be structured to provide maximum time for group discussion and brainstorming. Prior to the workshop, each participant will be required to read the other participants' position statements to ensure that he/she is familiar with their research in the area and their visions for pervasive healthcare. All of this will be available from the workshop homepage.

Participants will briefly present their research and vision for future directions in this area. Presentations will be organized according to various themes within pervasive healthcare. These themes will be decided upon after the review of all submissions. Discussion periods will follow the presentation of each theme. Upon completion of the presentations, participants will be divided into working groups based on the themes identified beforehand, moderated by the workshop organizers. In a plenary session following the groups' work, each group will present its ideas and discuss them with all the workshop participants. In a brief concluding plenary session, all the participants will have the opportunity to provide feedback on the workshop and decide on any continuation at another venue.

The goals of this workshop are the following:

1. To build a network and community of researchers and practitioners working within pervasive healthcare;
2. To identify common research themes in pervasive healthcare;
3. To discuss methodological challenges to pervasive healthcare, especially how to conduct evidence-based medicine;
4. To foster collaborative efforts among participants, who might work together in research projects;
5. To create awareness about research within pervasive computing and healthcare.

PARTICIPATION
We welcome participants from industry, academia, and government. We encourage people who are designing and implementing pervasive computing technologies in medical settings, such as hospitals, patients' homes, elderly residences, and the offices of general practitioners. Due to the multi-disciplinary nature of using pervasive computing in healthcare, we encourage a broad range of researchers and practitioners to participate. We encourage teams of medical and technological researchers to submit work, as well as individuals representing governmental bodies who are enacting related policies, such as those dealing with the privacy and ethics of such technologies.

We will explicitly encourage the participation of a number of central research groups working in the area of pervasive healthcare and future health. This includes research groups in Scandinavia, central Europe, and the USA and Canada.

After review of all submissions, 10 to 15 participants will be invited based on the quality and relevance of their position papers. Each position paper should be two to five pages in length and consist of the author's vision of the use of pervasive computing in healthcare, current work, expectations towards the workshop, and the author's research activities, including a short bio of the author(s).

Position papers should be formatted according to the standard Springer-Verlag format and submitted in PDF format, according to the template submission file, which can be found at http://www.springer.de/comp/lncs/authors.html. Papers should be submitted by email to Ilkka Korhonen ([email protected]).

There will be a workshop website accessible from http://www.pervasivehealthcare.dk/ubicomp2003 where accepted submissions, workshop details (e.g. the program), and the results of the workshop will be posted.

Paper deadline: August 8
Notification of acceptance: August 22

Acceptance of submissions will be decided upon review by the program committee.

PROGRAM COMMITTEE
The workshop's program committee includes leading academic researchers as well as representatives from industry. The committee has international members from outside North America, reflecting the fact that UbiComp is an international conference. To date, the following committee members have been confirmed:

Henry Kautz (Past Chair, U of Washington, US)
Eric Dishman (Past Chair, Intel Labs, US)
Irfan Essa (Georgia Tech., US)
Ken Fishkin (Intel Labs, US)
Peter Gregor (U of Dundee, Scotland)
Edmund LoPresti (AT Sciences, US)
Misha Pavel (Oregon Health & Science University, US)
Martha Pollack (U of Michigan, US)
Rich Simpson (U of Pittsburgh, US)
Ad van Berlo (Smart Home Foundation, The Netherlands)
Liam Bannon (University of Limerick, Ireland)
Morten Kyng (University of Aarhus, Denmark)
Nicos Maglaveras (Aristotle University, Greece)
Niilo Saranummi (VTT Information Technology, Finland)

PUBLICATION
The IEEE Transactions on Information Technology in Biomedicine (IEEE T-ITB) has a call for a special issue on pervasive healthcare (see http://www.vtt.fi/tte/samba/projects/titb). This issue will be edited by the workshop organizers, and participants in this workshop are encouraged to submit their research to this special issue. The deadline for the journal papers is November 30, 2003, which fits well with the timing of the UbiComp 2003 conference.
ABOUT THE ORGANIZERS
Dr. Jakob E. Bardram's main research areas are pervasive and ubiquitous computing, distributed component-based systems, computer supported cooperative work (CSCW), human-computer interaction (HCI), and medical informatics. His main focus currently is 'pervasive healthcare', and he is conducting research into technologies for future health – both at hospitals and in the patient's home. Currently, he is managing a large project investigating technologies for "The Future Hospital", which includes (among other things) embedding 'intelligence' in everyday artifacts within a hospital, such as in the walls of the radiology conference room, in the patient's bed, in the pill containers, and even in the pills. He received his Ph.D. in computer science in 1998 from the University of Aarhus, Denmark. He currently directs the Centre for Pervasive Healthcare at Aarhus University [www.pervasivehealthcare.dk].

Prof. Ilkka Korhonen's main research interests are the application of pervasive computing to healthcare and wellness, biosignal interpretation, home monitoring, and critical care patient monitoring. He received his PhD ('97) in signal processing from Tampere University of Technology, Finland. He is a docent in Medical Informatics at Tampere University of Technology, and a Research Professor in Intuitive Information Technology at VTT Information Technology, Tampere, Finland. He has more than 70 scientific publications in international scientific journals and conferences.

Dr. Mihailidis has been conducting research in the area of cognitive devices for older adults with dementia for the past eight years. While at Sunnybrook & Women's College Health Sciences Centre in Toronto, Canada, he was one of the first researchers in this field to develop and clinically test a prototype of an intelligent cognitive device that assisted older adults with Alzheimer's disease during a washroom task. He has presented this area of work at many international conferences and has published in key journals related to rehabilitation engineering, assistive devices, and dementia care.

Dr. Wan has been investigating for the past seven years how emerging technologies, such as ubiquitous computing, can be used to help create new consumer experiences and business opportunities. Five years ago, he developed the Magic Medicine Cabinet, a popular prototype that integrates biometrics, RFID, and health monitoring devices to provide consumers with compliance support, vital monitoring, and personalized health information. Currently, he is focusing on remote health monitoring using sensors, wireless networks, and Web Services. Dr. Wan has presented his research at various academic conferences. His work is also widely covered by the media, including the Wall Street Journal, Financial Times, BBC, CNN, ABC News, and TechTV.

REFERENCES
1. Weiser, M. "The Computer for the 21st Century". Scientific American, vol. 265, no. 3, Sept. 1991, pp. 66-75.

2. Stanford, V. "Using Pervasive Computing to Deliver Elder Care". IEEE Pervasive Computing: Mobile and Ubiquitous Systems, vol. 1, no. 1, Jan.-Mar. 2002, pp. 10-13.

3. Kidd, C. et al. "The Aware Home: A Living Laboratory for Ubiquitous Computing Research". In Proceedings of the 2nd International Workshop on Cooperative Buildings (CoBuild99). Lecture Notes in Computer Science, vol. 1670, Springer-Verlag, Berlin, 1999, pp. 191-198.

4. Mynatt, E.D., Essa, I., and Rogers, W. Increasing the opportunities for aging in place. In CUU 2000 Conference (New York, NY, 2000), ACM Press, 1-7.

5. Sachpazidis, I. "@HOME: A modular telemedicine system – Mobile Computing in Medicine". In Proceedings of the 2nd Workshop on Mobile Computing, 11.04.2002, Heidelberg, Germany.

6. Sachpazidis, I. "@Home Telemedicine". In Proceedings of the Telemed 2001 Conference: Telematik im Gesundheitswesen, 9-10 November 2001, Berlin, Germany.

7. Kautz, H., Fox, D., Etzioni, O., Borriello, G., and Arnstein, L. An overview of the assisted cognition project. In AAAI-2002 Workshop on Automation as a Caregiver: The Role of Intelligent Technology in Elder Care (2002), AAAI Press.

8. Adlam, T., Gibbs, C., and Orpwood, R. The Gloucester smart house bath monitor for people with dementia. Physica Medica, 17, 3 (2001), 189.

9. Mihailidis, A., Fernie, G.R., and Barbenel, J.C. The use of artificial intelligence in the design of an intelligent cognitive orthosis for people with dementia. Assistive Technology, 13 (2001), 23-39.
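The UbiHealth summary above describes pipelines in which wearable or embedded sensors stream measurement data (e.g. vital signs) over ubiquitous networks so that deviations can be flagged for patients or caregivers. As a purely illustrative sketch of that idea – the class name, thresholds, and alerting behavior are invented here, not drawn from any system cited in the workshop – such a monitoring loop might look like:

```python
# Illustrative-only sketch of a wearable-sensor monitoring loop:
# sample a vital sign, keep a short rolling history to smooth out
# single noisy readings, and raise an alert when the smoothed value
# leaves a configured range. Thresholds and names are hypothetical.
from collections import deque
from statistics import mean

class HeartRateMonitor:
    def __init__(self, low=50, high=110, window=5):
        self.low, self.high = low, high       # alert thresholds (bpm)
        self.samples = deque(maxlen=window)   # rolling history

    def add_sample(self, bpm):
        """Record one reading; return an alert string, or None."""
        self.samples.append(bpm)
        avg = mean(self.samples)
        if avg < self.low:
            return f"ALERT: low heart rate (avg {avg:.0f} bpm)"
        if avg > self.high:
            return f"ALERT: high heart rate (avg {avg:.0f} bpm)"
        return None

monitor = HeartRateMonitor()
for bpm in (72, 75, 70, 130, 135, 140, 145, 150):
    alert = monitor.add_sample(bpm)
    if alert:
        print(alert)
```

A deployable system would of course face exactly the reliability, privacy, and clinical-validation demands the workshop text raises; this sketch only shows the data-flow shape under discussion.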
2nd Workshop on Security in Ubiquitous Computing

Submitted as a proposal to the organizers of UBICOMP 2003, Seattle, Washington

Joachim Posegga, Philip Robinson, Narendar Shankar, & Harald Vogt

May 16, 2003

ABSTRACT
A first instance of this workshop was held at UBICOMP 2002 in Göteborg, Sweden (see http://www.teco.edu/~philip/ubicomp2002ws/). Only 19 persons were in attendance in 2002, due to space limitations at the Göteborg location; all participants considered it a successful event, and many expressed the wish to continue at the next conference. This year we intend to repeat it, and seek to dig deeper into the outstanding issues we established during last year's forum.

Keywords
Security

Summary
The last Security in Ubiquitous Computing workshop concluded with a critical assessment of the state of the art in security and of the requirements of ubiquitous computing applications and environments [1]. Most of the contributions were theoretical, derived from an analysis of changes in human-to-computer and computer-to-computer interaction and networking. The major challenges identified were:

1) New models for verifying trust, user-controlled privacy and content protection.
2) We cannot assume a priori trust in UbiComp environments, as evidenced by developments in ad hoc networking.
3) Basic credential-based authentication and verification is not sufficient for both security and ubiquitous computing goals to be fulfilled.
4) Adaptive, user-based and integrated infrastructures for privacy management are required.
5) Context authentication, application of implicit interaction to security mechanisms, and provision of more finely grained factors for trust models need to be developed.

The question for this year's workshop stands as: "is research moving towards meeting these challenges?" We will seek good-quality papers that show evidence of these developments, as well as those that refute these claims and make other propositions for research. Submissions to the workshop and the subjects of ensuing discussion will include:

1) Platforms, Architectures and Models
2) Applications and Experience
3) Novel Technologies and Techniques
4) Advances in Networking

The organizers of the workshop are all currently active in security research within this area. However, there are also many interested persons worldwide, as evidenced by the Conference on Security in Pervasive Computing, held in Boppard, Germany this year [http://www.dfki.de/SPC2003/]. In this document we propose five topics to be included in the workshop, and suggest how we will organize the correlation and documentation of useful results.

Topic 1: Emergence of New Models for Trust, Privacy and Content Protection
In ubiquitous computing, computers are no longer restricted to being physically locked away in large processing centers, contained within a logically defined administrative boundary, or even stashed in our pockets or briefcases. Computers are everywhere! They are in the packaging of the groceries we buy, in public transportation, our homes, our clothing, and perhaps even in the walls we lean on. Therefore, many new interfaces to systems and possible information access mechanisms are in the hands of the attacker. Perhaps the original story of the "Trojan horse" becomes even more relevant. There is no firewall, no constant surveillance and intrusion detection, and no full-time administrative staff. Do we rely on high-assurance computing? Should mechanisms for degradation or destruction of information be provided? What about trust? Trust decisions have to be made dynamically, be based on continuums rather than discrete models, and go beyond one-dimensional identities – we need mechanisms for
entity recognition. Can the private remain private even without a priori trust?

Topic 2: The Absence of A Priori Trust
Through the use of X.509 certificates and the presence of an online CA (certification authority), PKI (Public Key Infrastructure) has become an adopted model for trust in the Internet. Other models include Kerberos, which is characterized by a centralized KDC (key distribution center), as well as PGP (Pretty Good Privacy), which creates a web of trust that does not require a single signing authority to whom decisions of trust can be delegated. In any event, all of these models are built on the assumption that some prior registration has been made, and hence a registrar (CA, KDC, web of trust) can be referenced when faced with making the ultimate decision – to trust or not. In some ubiquitous computing applications these infrastructures may or may not exist; often the user has to explicitly authorize transactions. Can we still make decisions of trust without assuming that such authorities exist? Do all principals and subjects still require strict identity-based authentication? Are the advantages of separating the identity (secret key) from the permissions, as suggested by SPKI [2], well positioned for ubiquitous computing? Once trust is established, how does it evolve over time, and how can collaboration amongst devices contribute to higher assurance levels?

Topic 3: Beyond Traditional Credential-based Authentication
It has often been said that authentication is the fundamental basis for security and trust. Applications require authentication to determine authorizations, establish constrained channels, infer data integrity, enforce non-repudiation, and complete some form of billing and auditing. Authentication is essentially the proof of identity through a secret, unique characteristics, permissions, or possession. Traditional credential-based authentication includes passwords, biometrics, and signatures.

Topic 4: Adaptive Infrastructures
The origins of ubiquitous computing were really concerned with the user. Writers like Weiser [3] and Norman [4] argued that the comprehensive use of computer systems and machines was at times beyond the grasp of the everyday person. Complicated user interfaces and rules become more of an obtrusion than an aid for completing mental and physical tasks. HCI (human-computer interaction) and distributed systems offer research into adaptive interfaces and middleware, respectively, to support the goals of seamless integration of technology into everyday life and enterprise systems. How do we fit security into these mechanisms without undoing the flexibility and usability aspects? How do we exploit context information in supporting security infrastructures that are adaptive and complementary?

Topic 5: The Use of Context Information
Context is any set of information that can be used to characterize the situation of an entity. It ranges from persisted logs and records of an entity to the observation and sensing of the physical environment. Context information is a foundation for making security-relevant decisions. A certain context might convey an uneasy feeling about the security of an interaction, while in another situation unknown devices are trusted to carry out a critical transaction. How are such environments characterized, and how do entities adapt when the situation changes?

Organizational Information
Papers will be selected based on their contribution to the understanding of security in ubiquitous computing, originality, and novelty.

Based on the success of last year's program, we will aim to keep a similar format, but are open to change based on the quality and quantity of accepted submissions.

Results of the workshop will be published and exploited initially through the UBICOMP conference and mailing list, and on a prepared website. We will consider publication of a special issue, following the evaluation of the workshop.

Contact person: Philip Robinson, [email protected]

The Organizers

Joachim Posegga
Pervasive Security Research
SAP AG, Germany

Philip Robinson
Tele-cooperation Office (TecO)
University of Karlsruhe, Germany

Narendar Shankar
University of Maryland, USA

Harald Vogt
Institute for Pervasive Computing
ETH Zurich

REFERENCES
1. 1st Workshop on Security in Ubiquitous Computing, UBICOMP 2002, Göteborg, Sweden, September 2002. http://www.teco.edu/~philip/ubicomp2002ws/proceedings.htm

2. http://www.ietf.org/html.charters/spki-charter.html

3. Weiser, M. The Computer for the 21st Century. Scientific American, September 1991.

4. Norman, D. A. The Design of Everyday Things. Doubleday, 1988.
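Topics 1 and 5 above argue that trust decisions in ubiquitous environments should be dynamic, based on continuums rather than discrete models, and grounded in context information. A deliberately simplified sketch of that idea follows; the context attributes, weights, and per-action thresholds are invented for illustration and are not drawn from any system discussed in the proposal.

```python
# Hypothetical sketch of a context-based trust decision in the spirit
# of Topics 1 and 5: weighted boolean context observations are
# combined into a continuous trust score in [0, 1], which is compared
# against a per-action threshold instead of a single yes/no identity
# check. All attributes, weights, and thresholds are illustrative.

WEIGHTS = {
    "known_location": 0.4,      # device is in a previously seen place
    "prior_interactions": 0.3,  # we have interacted successfully before
    "secure_channel": 0.2,      # link-layer encryption is in use
    "user_present": 0.1,        # the owner is physically nearby
}

# Riskier actions demand a higher trust score.
THRESHOLDS = {"read_public_data": 0.2, "critical_transaction": 0.8}

def trust_score(context):
    """Weighted sum of the boolean context observations."""
    return sum(w for attr, w in WEIGHTS.items() if context.get(attr))

def authorize(action, context):
    """Allow the action only if the current trust score clears
    that action's threshold."""
    return trust_score(context) >= THRESHOLDS[action]

ctx = {"known_location": True, "prior_interactions": True,
       "secure_channel": False, "user_present": False}
print(authorize("read_public_data", ctx))      # True  (0.7 >= 0.2)
print(authorize("critical_transaction", ctx))  # False (0.7 <  0.8)
```

The same observation (a trust score of 0.7) thus authorizes a low-risk action but not a critical one, which is exactly the continuum-based, situation-dependent behavior the topics above call for.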
Multi-Device Interfaces for
Ubiquitous Peripheral Interaction
Loren Terveen
Computer Science & Engineering
University of Minnesota
Minneapolis, MN 55455 USA
[email protected]

Charles Isbell
College of Computing
Georgia Tech
Atlanta, GA 30332 USA
[email protected]

Brian Amento
Speech Interfaces Department
AT&T Labs
Florham Park, NJ 07932 USA
[email protected]

SUMMARY
Cell-phones and PDAs are ubiquitous, yet there are limits to the usage they afford. Notably, they aren’t always accessible: they’re usually carried in a pocket, purse, or backpack. This means that people must consciously decide to use a device; furthermore, when a device is used, it becomes the focus of the user’s attention.

A new generation of emerging devices does not share these limits. For example, a computationally augmented wristwatch is always visible. It could be used to scroll messages and reminders, and to sound tones or light LEDs for items of special interest.

Such special-purpose devices can be combined with PDAs to form multi-device interfaces, delivering a user experience that offers both the peripheral awareness enabled by (say) a wristwatch and the more powerful computational, networking, and interactive capabilities of a PDA. This workshop aims to explore candidate devices, interface designs, and information architectures for multi-device interfaces.

BACKGROUND OF ORGANIZERS
Loren Terveen is an Associate Professor of Computer Science & Engineering at the University of Minnesota. He received his PhD in Computer Sciences from the University of Texas at Austin, then spent 11 years at AT&T Labs / Bell Labs. Within the field of Human-Computer Interaction, his specific research interests include recommender systems, online community, web search and information management, location-aware systems, and peripheral awareness interfaces. He has extensive experience in conference and workshop organization, including serving as general co-chair of CHI 2002 and IUI 1998 and organizing multiple workshops at CHI and other conferences.

Charles L. Isbell, Jr. received his PhD from the MIT Artificial Intelligence Laboratory in 1998. He then spent 4 years at AT&T Labs/Research before joining the faculty of the College of Computing at Georgia Tech in 2002. Charles' research interests are varied, but the unifying theme of his work in recent years has been using statistical machine learning techniques to build autonomous agents that engage in life-long learning of individual preferences. These agents build models of the usage patterns of individuals, rather than discovering trends in large datasets. His work with agents who interact in social communities has been featured in The New York Times, the Washington Post and Time magazine's inaugural edition of Time Digital magazine, as well as in several technical collections.

Brian Amento received his PhD in Computer Science from Virginia Polytechnic Institute and State University. He spent 6 years in the Human Computer Interfaces group at AT&T Labs Research and is currently a member of the Speech Interfaces department. His research focus has been on interfaces that support cooperation between humans and machines by allowing users to interact with agents that mine their environment and associated data sources for relevant content and meta-data. This work has been in areas such as collaborative filtering, information retrieval, speech interfaces and multi-modal systems.

WORKSHOP DETAILS
Maximum number of participants: 15

Means of soliciting participation: messages to relevant online lists and forums (e.g., [email protected], www.chiplace.org) and personal email to researchers active in the field.

Means of selecting participants: Would-be participants will submit a 2-4 page position statement identifying a specific research question or questions in the area of multi-device interfaces and their approach to the questions. Participants will be selected based on the clarity and interest of the research questions and the novelty and interest of their proposed or already implemented solutions.

DETAILED DESCRIPTION: ACTIVITIES AND GOALS
A vision – towards ubiquitous peripheral interaction.
Internet-enabled cell-phones and PDAs with wireless networking capabilities enable continuous access to information for mobile users. A PDA can access information from a server, thus (potentially) keeping users current. However, users cannot be expected to hold their cell-phones or PDAs in their hands, constantly checking them for new information or messages. In other words, these devices support a “pull model” of information access, in which users have to make an explicit decision to seek information.

In contrast, wearable devices like wrist computers enable peripheral awareness of information, a “push model”. With a wrist computer, for example, information such as messages, reminders, and news stories could be constantly streamed to the device and displayed as a scrolling “ticker”. The information is at the periphery of users’ attention, and there is no guarantee that any particular item will be noticed. However, from time to time a user may glance at his watch and notice an item of interest. When this occurs, he can follow up on that item simply by pressing a button on the wrist display. This will initiate a program to view that item in detail on the user’s PDA. The user then can take out his PDA and interact with the information as desired.

This workshop will explore the theme of multi-device interfaces for ubiquitous peripheral interaction in depth. It has a number of

specific goals, including identifying requirements for ubiquitous peripheral awareness devices and considering specific devices that can meet these requirements, exploring software techniques and architectures that drive the interaction, and examining designs for interfaces that divide their functionality across several wearable devices. We consider each of these goals in some detail.

Devices for ubiquitous peripheral awareness: requirements and candidates.
A wearable peripheral awareness device must be always on, always accessible (that is, a user must always be able to receive information from it), and must not demand explicit user attention (that is, it is accessible without a user explicitly “taking it out to use”, as one must with a PDA or cell-phone). It would be very helpful if some version of the device already enjoys widespread use (as do wristwatches, for example). And, since in our vision this device is intended to work together with a PDA, it must be networked. We believe that wireless devices are much more acceptable to users, which means that the networking should be wireless.

There are various promising devices that meet some or all of these requirements. One is a computationally augmented wristwatch. There has been a lot of activity in this area recently, both in commercial and research contexts. Companies such as Suunto make sophisticated wrist computers tailored for sports like golf and skiing. Fossil and onHand make wrist PDAs that run Palm OS. Swedish researchers prototyped a simple ‘Reminder Bracelet’ (CHI 2001): LEDs were added to a wrist band and lighted to show several types of information of varying levels of importance. Most notably, IBM and Citizen are prototyping a general-purpose wrist computer that runs Linux.

Another interesting possibility is the use of audio for peripheral awareness. For example, Sawhney and Schmandt’s Nomadic Radio used small, neck-mounted, directional speakers to deliver audio. Headsets that are relatively unobtrusive and still allow users to engage in normal interaction are another possibility. Grinter & Woodruff (CHI 2002) did a pilot user study that showed some preference for single-ear headsets; ear buds also are worthy of more exploration.

Other wearable devices also are worth investigating, e.g., computational jewelry or eyeglass displays.

When it comes to the networking requirements for a multi-device interface, Bluetooth seems well suited. It works over short distances, but the peripheral awareness device and the PDA will be well within the working range. While Bluetooth is appropriate for networking the components of the multi-device interface, the PDA will need to connect to a server (more on this below). This will require a long-distance wireless networking technology such as WiFi or GSM/GPRS.

Another important issue we will explore is how much can be done with off-the-shelf devices. The IBM/Citizen ‘WatchPad’ is not yet available; if it were, it would fulfill most or all of the requirements mentioned above. However, it currently still may be necessary to do some simple hardware prototyping along the lines of the Reminder Bracelet.

In general, the workshop will address this goal by trying to reach agreement on appropriate requirements, developing a comprehensive list of candidate devices, and evaluating the extent to which each candidate satisfies the requirements.

Enabling software techniques and architectures.
For any information delivery system, ensuring that information is relevant to an individual user is a key goal. This is even more important in our context. Networking may be relatively slow, unreliable, and expensive. Most of the time, users’ attention will not be focused on their peripheral awareness device, so the system should be careful not to distract users from their current tasks. These considerations pose challenges for the information management software that runs on the server. This software is responsible for monitoring events in a user’s computational environment (e.g., email, voicemail, calendar), deciding what information to send to a user’s wearable system, prioritizing that information, dealing with user responses, and learning from those responses so its decisions can improve over time.

One challenge for the design of this software is to identify the factors it can use to make these types of decisions. One such factor is timeliness. New email or voicemail, upcoming appointments, and changes to web-based information services such as stock tickers or sports scoreboards are some examples of timely information. For mobile use, location also is important in determining relevance. GPS is one technology that enables devices to be location-aware. Location-aware devices could notify people of nearby stores with items they need to purchase, or of nearby friends whom they’d like to contact.

A system also should provide users with information that is important. For a simple example, it should avoid streaming spam email to its users. Ideally, it should be able to prioritize all the types of information (messages, reminders, news items, etc.) and use these priorities to guide its presentation of the information. Initial priorities may be based on the type of the information: for example, getting notified that one of your friends is in the same mall or library as you is likely to be more important than most news stories.

Finally, an effective system will be adaptable. No two users will have the same notion of importance. A system should learn from user responses what that particular user considers important. For example, if a user frequently reads email messages from particular addresses, messages from these addresses should gradually receive higher priority.

Thus, the key enabling software techniques come from areas such as information filtering and artificial intelligence, particularly machine learning. The workshop will explore the range of techniques that are relevant, discuss lessons to be learned from existing research prototypes, and identify open problems where new research is needed.

Design of multi-device interfaces.
There are several challenges here. One is the general problem of designing for devices with I/O capabilities that are quite different from, and relatively impoverished compared to, desktop user interfaces. Devices may have very small displays – or, in the case of audio devices, no displays at all. And their input capabilities may be very limited, in the case of a wrist computer, or inherently error-prone, as with a speech interface. Designing an interface for a PDA or cell-phone is hard; designing for a wristwatch will be even harder.

Another issue is when it is appropriate to ‘promote’ information beyond the periphery, that is, when the system should attempt to notify the user about some information at once. When this is necessary, a wristwatch computer might be able to flash its display,

vibrate, or even sound a tone. Of course, the decision whether to seek the user’s attention must be guided by the prioritization techniques mentioned above.

It’s also important to consider how users might respond to items on the peripheral awareness device that catch their attention. One way would be to support brief “canned” responses. For a message, in particular, it should be possible to send a response like “OK” or “I got your message – will reply in detail when I’m back at my desk” without having to take out one’s PDA. One research question worth exploring is how to define a small set of generally useful canned responses. Another issue is how to design an interface for a device with very limited input capabilities (like a wristwatch) that makes it simple for users to select a response.

Of course, the most interesting and novel interface design challenge is how to divide functionality across multiple devices. The simplest case to envision is one where a user notices something on the peripheral awareness device, gives a single command (e.g., a button press) to indicate interest, then takes out a PDA to explore the information in detail. This model assumes that one first uses the peripheral awareness device alone, then the PDA alone. It also requires very simple coupling between the two devices. We think this model is well worth exploring; however, we also will attempt to identify situations when a more complicated model for combining the devices is necessary. For example, when a user has his PDA out and is using it, should information and notifications still be going through the peripheral awareness device? Or should they now go through the PDA? This suggests that the division of functionality across the two devices may have to be dynamic.

Workshop activities.
We will attempt to achieve these goals through a combination of activities, both prior to and at the workshop. Would-be participants will submit 2-4 page position statements, identifying one or more of the workshop goals that they want to address, at least one specific research issue associated with the goal, and their current or planned work that addresses the issue. The organizers will select participants based on the interest and clarity of the research questions and the novelty and interest of the proposed or completed solutions. Position papers of all accepted participants will be posted on a website for the workshop.

Well in advance of the workshop, we will group participants into sessions, and assign a discussant for each session. The discussant’s job will be to bring additional relevant knowledge to bear on the work to be presented in the session, e.g., to compare and contrast the different approaches, compare them to previous work, or suggest alternative approaches. This process will begin through email exchanges prior to the workshop.

At the workshop, presentations will be organized around the workshop goals and kept short, no more than 15 minutes. This will allow plenty of time for discussion. The discussion will be initiated by the assigned discussant; we expect discussants to present their perspective for about 15 minutes before opening up to general discussion.
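The server-side prioritization behavior described earlier in this proposal (initial priorities by information type, per-sender adaptation from user follow-ups, and a threshold for ‘promoting’ an item beyond the periphery) can be sketched briefly. The Python fragment below is purely illustrative: the class, the base weights, and the promotion threshold are hypothetical choices, not part of the proposal.

```python
"""Illustrative sketch of adaptive prioritization for a peripheral
awareness server.  All names, weights, and thresholds are hypothetical."""
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical initial priorities by information type (0..1).
BASE_PRIORITY = {"friend_nearby": 0.9, "reminder": 0.6, "email": 0.5, "news": 0.2}
PROMOTE_THRESHOLD = 0.8  # above this, actively notify (flash/vibrate/tone)

@dataclass
class Prioritizer:
    # Learned per-sender boost, adapted from the user's follow-ups.
    sender_boost: dict = field(default_factory=lambda: defaultdict(float))

    def score(self, kind, sender=None):
        """Base priority for the item type, plus any learned sender boost."""
        s = BASE_PRIORITY.get(kind, 0.1)
        if sender is not None:
            s += self.sender_boost[sender]
        return min(s, 1.0)

    def should_promote(self, kind, sender=None):
        """Decide whether to interrupt the user beyond the peripheral ticker."""
        return self.score(kind, sender) >= PROMOTE_THRESHOLD

    def record_followup(self, sender):
        """User pressed the watch button on this sender's item: raise its boost."""
        self.sender_boost[sender] += 0.1

p = Prioritizer()
assert not p.should_promote("email", "alice")  # ordinary email stays peripheral
for _ in range(4):                             # user repeatedly follows up
    p.record_followup("alice")
assert p.should_promote("email", "alice")      # now worth an active notification
```

A real system would of course also weigh the timeliness and location factors discussed above, not just sender history.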

Ubicomp Communities: Privacy as Boundary Negotiation

John Canny [1], Paul Dourish [2], Jens Grossklags [3], Xiaodong Jiang [1], and Scott Mainwaring [4]

[1] Computer Science Division, University of California, Berkeley, Berkeley, CA 94720 USA; {jfc; xdjiang}@cs.berkeley.edu
[2] School of Information and Computer Science, University of California, Irvine, Irvine, CA 92697 USA; [email protected]
[3] School of Information Management and Systems, University of California, Berkeley, Berkeley, CA 94720 USA; [email protected]
[4] Intel Research, 2111 NE 25th Ave., MS JF3-377, Hillsboro, OR 97214 USA; [email protected]

ABSTRACT
Ubiquitous computing conjures visions of big and little brother, and ever-diminishing privacy. But it also opens up new forms of communication, collaboration and social relations. Full participation in communities involves exchange of information, and maintenance of a visible, public persona. Privacy is often regarded as an imperative in its own right, but this perspective ignores the countervailing need for disclosure in social settings. This workshop takes a balancing perspective: it treats community participation as a goal, and balances the need for disclosure against the need for privacy. Privacy is not an abstract consideration, but a practical process of negotiating and managing boundaries. The workshop will explore both social perspectives and technical approaches to this issue. It builds on last year’s ubicomp workshop on “Socially-informed design of privacy-enhancing solutions in ubiquitous computing”.

Keywords
Privacy, Communities, Ubiquitous Computing

INTRODUCTION
In normal social situations, we are aware of what we say and do, who can hear and see us, and what the appropriate norms for disclosure are for the community we are in at the time. Our awareness of where information is going (and should go) helps regulate the information flow and gives us a good level of privacy protection. When we rely on ubiquitous computing to mediate our interactions, we may lose many or all of these cues. In the worst case, information about us may be flowing to other actors without our knowledge or consent. The ease and invisibility of electronic communication in ubicomp greatly increases the risk of unchecked information flow and consequent loss of privacy. Concerns about privacy in the ubicomp community are understandable. But addressing privacy alone ignores the need for community participation. How do we support and enhance normal social disclosure in ubicomp settings? How do we establish and expose norms for disclosure within communities? Can we support gradual, mutual disclosure for trust-building? These are typical questions we would like to address in the workshop.

This workshop aims to provide a forum for ubicomp system developers and researchers, security researchers, and social scientists to collaboratively explore the future of trust-sensitive and community tools in ubicomp. Areas of interest to this workshop include (but are not limited to) the following topics:

1. Community models and Ubicomp: What are the emerging and anticipated forms of community supported by ubiquitous computing? What are the communication modes? How are norms established and maintained? What design principles can be discovered?

2. Communities and Privacy: What forms of disclosure and discovery are appropriate for ubicomp communities? How is disclosure mediated? What kinds of disclosure boundary are appropriate? How can this be exposed and supported by technology? How can disclosure boundaries be negotiated? Are there asymmetries that need to be mitigated? This topic has both social and technical dimensions.

3. Communities and Trust: With respect to privacy and disclosure, how is “trust” manifest in ubicomp
communities? How are reputations, reliabilities, and risks established, measured, and represented? What forms of information or other exchange occur in the community? How might ubicomp systems handle differences in power, access, and expertise within a community?

Format of the Workshop and Timetable
This workshop will last for 1 full day and will be limited to 20 participants (not including the workshop organizers) to enable lively and productive discussions. Participants will be invited on the basis of position papers. Such position papers should be no longer than 4 pages excluding references, and they will be selected based on their originality, technical merit and topical relevance.

The workshop will be organized into panels and breakout sessions. Depending on the submitted position papers, the workshop will consist of 3 to 4 panels. Each panel lasts about an hour, and includes presentation of 5 or 6 position papers that share a similar topic, followed by organizer-moderated discussions. The morning panels are devoted to community-oriented ubicomp systems, while the afternoon panels are devoted to trust issues manifested in those systems. Also in the afternoon, there will be breakout sessions lasting about 1.5 to 2 hours, followed by reports to a plenary session. In addition, coffee breaks and lunch will serve as opportunities for informal discussion. To the extent possible, participants will have lunch together within short walking distance of the workshop location.

09:00 – 09:30  Registration & welcome
09:30 – 10:00  Introductions
10:00 – 11:00  Panel 1 and discussion
11:00 – 11:30  Coffee
11:30 – 12:30  Panel 2 and discussion
12:30 – 02:00  Lunch
02:00 – 03:00  Panel 3 and discussion
03:00 – 04:30  Breakouts
04:30 – 05:30  Breakout presentations
05:30 – 06:00  Wrap-up & coffee

Desired Outcome
We expect that this workshop will have concrete results that will advance the development of trust-sensitive, community-oriented ubicomp systems. We will put together a poster summarizing the activities of the workshop, and report back to the conference.

Submission, Selection Process and Publication
The workshop organizers will select participants based on review of submitted position papers, taking into account scientific quality and relation to the workshop topics. We will also invite selected prominent ubicomp researchers, security experts, social scientists and privacy advocates to submit to the workshop.

The workshop proceedings will be published as a technical report available online. We will also seek cooperation with ACM to archive the workshop outcomes in the ACM Digital Library.

Update on Selected Workshop Papers
Michael Boyle, “A Shared Vocabulary for Privacy”
James Fogarty, “Sensor Redundancy and certain Privacy Concerns”
Jonathan Grudin, Eric Horvitz, “Presenting choices in context: approaches to information sharing”
Jason Hong, Gaetano Borriello, James A. Landay, David W. McDonald, Bill N. Schilit, J. D. Tygar, “Privacy and Security in the Location-enhanced World Wide Web”
Charis Kaskiris, “Socially-Informed Privacy-Enhancing Solutions: Economic Privacy and the Negotiated Privacy Boundary”
Saadi Lahlou, “Constructing European Design Guidelines for Privacy in Ubiquitous Computing”
Marc Langheinrich, “When Trust Does Not Compute – The Role of Trust in Ubiquitous Computing”
Scott Lederer, Jen Mankoff, Anind Dey, “Towards a Deconstruction of the Privacy Space”
Carman Neustaedter, Saul Greenberg, “Balancing Privacy and Awareness in Home Media Spaces”
Chris Nodder, “Say versus Do: building a trust framework through users’ actions, not their words”
David J. Phillips, “The information environment as text interpreted by communities of meaning: implications for design of ubicomp systems”

Organizers
John Canny, Paul and Jacobs Distinguished Professor of Engineering, University of California, Berkeley
John Canny is a Professor of Computer Science at UC Berkeley with a background in robotics, AI and algorithms. His recent work is on privacy-preserving collaborative algorithms, including collaborative filtering and location-based services. He founded the group on Human-Centered Computing at UC Berkeley in 1998, a group of technical and social scientists interested broadly in the impacts of IT on society. In 2001, he co-founded the Berkeley Institute of Design, a new interdisciplinary program in socially-informed design of informational environments.

Scott Mainwaring, Senior Researcher, People and Practices Research Lab, Intel Research
Scott Mainwaring is a senior researcher in Intel’s ethnographic research and design group in Hillsboro, Oregon, conducting fieldwork in settings of technology use ranging from U.S. and Korean households to Chinese businesses. Current interests include social, cultural, and place-based constraints and opportunities for ubicomp. Prior to joining Intel in 2000, Scott was a member of research staff at Interval Research Corp., working on media spaces, lightweight communication, interactive television, and video ethnography.

Paul Dourish, Associate Professor, School of Information and Computer Science, University of California, Irvine
Paul Dourish is an Associate Professor in the School of Information and Computer Science at UC Irvine. He has held research positions at Xerox PARC, Apple Computer, and Rank Xerox EuroPARC. His principal research interests are in the areas of Human-Computer Interaction and Computer-Supported Cooperative Work. In particular, he has a long-term interest in the relationship between information system design and social analysis. Most recently, he has been exploring the foundations of embodied interaction, which seeks to apply phenomenological approaches to understand encounters between people and technology.

Jens Grossklags, Ph.D. Student, School of Information Management and Systems, University of California, Berkeley
Jens Grossklags is a Ph.D. student in Information Management and Systems at UC Berkeley, and has been a visiting research student at NASA. Jens has also been a research guest at the Max Planck Institute for Research into Economic Systems in Germany. His current work spans the domains of computer networks, economics, and human factors/psychology research. He has been honored with two conference best paper awards: for research on behavioral studies in privacy, awarded by the German Informatics Society, and for architectural design of large-scale ad-hoc sensor networks, conferred at ACM/ACS MDM 2003.

Xiaodong Jiang, Ph.D. Student in Computer Science, University of California, Berkeley
Xiaodong is a Ph.D. student in Computer Science at UC Berkeley, where he is currently a Mayfield Fellow and Hitachi Fellow. Xiaodong’s research focuses on ubiquitous and context-aware computing. He has worked on the information space model and infrastructure for context-aware computing, and applications that assist firefighters in their emergency response practice.

At the Crossroads: The Interaction of HCI and Systems
Issues in UbiComp
Brad Johanson
Stanford University
[email protected]

Jan Borchers, Bernt Schiele
ETH Zurich
{borchers,schiele}@inf.ethz.ch

Peter Tandler
IPSI Darmstadt
[email protected]

Keith Edwards
Palo Alto Research Center
[email protected]

ABSTRACT
Two key research areas in ubicomp are human-computer interaction (HCI) and systems. Both have unique challenges, and constraints from each field affect what can be achieved in the other. Despite this, researchers too often focus on one research area while spending only the minimum necessary time addressing the issues in the other. For example, an HCI researcher might focus on multi-modal interaction for ubiquitous computing environments, but be stymied because they did not pay attention to whether or not the underlying system could support the latencies needed by the users of the system. This workshop will focus on how researchers from both HCI and systems have noticed the impact of constraints from the other field on their own efforts.

Keywords
Human-Computer Interaction, Systems

WORKSHOP OVERVIEW
UbiComp research can be seen as both the study of the way future users will interact with the sea of computers through which they move, and also the systems technology that will allow these computers to interact with one another in meaningful ways. Thus, the field needs to deal both with issues of human-computer interaction (HCI) and systems technology. Systems issues always impact the types of user interactions that are feasible, and user interaction should influence the design of underlying systems, but in UbiComp the two disciplines are even more interdependent than on current desktop platforms, where interaction modalities and techniques are less diverse and mostly predetermined through the desktop metaphor.

This workshop will look at the fundamentals of HCI and how they affect the design of UbiComp systems, as well as the limits and tradeoffs in system design and how those restrict the types of interaction and interfaces that can be supported in the world of ubiquitous computing. The workshop will include overviews of these issues, presentations from participants about their experiences with interactions between HCI and systems issues in their own work, and a final group discussion session where we will try to extract themes and principles from the workshop for future presentation to the UbiComp community at large.

In the remainder of this proposal we present some more detailed background information on the topic of the workshop, followed by a preliminary schedule.

Workshop Topic Details
Fundamental constraints from the fields of systems and HCI both have an impact on how well an application designer can do in the UbiComp domain. On account of this, it is important to have a good understanding of the fundamentals from both fields.

There are four general areas of relevance on the systems side: hardware platforms, peripherals, system infrastructures, and applications. At the hardware level, the speed of processors and networking technologies places upper bounds on how feature-rich and responsive any given system can possibly appear to the end user. Of course, faster processors and networking technologies are continuously appearing, but ultimately performance is constrained by the inherent latencies incurred by the need to obey the speed of light. Today’s networks range up to gigabits/s in transfer speed and down to about 10 Mb/s for wireless. Processors range from the gigaops/s range used in desktop workstations down to the low megaops/s range driving many embedded devices.

The peripheral level consists of the sensors and actuators deployed in UbiComp environments. These promise to be the basis for novel interaction mechanisms and may even fundamentally change the way humans interact with a ubiquitous computing environment. However, limiting factors such as the accuracy and rate of capture or actuation, interpretation ambiguities, and the processing speed and latency of these devices may become the bottleneck and constrain the way the user perceives the system and its performance.

Given the underlying hardware and peripherals (and in some cases, operating systems), a system infrastructure or middleware can be constructed. In creating these systems, designers must make tradeoffs between flexibility and features and the maximum performance the system infrastructure will make available to application developers. In the UbiComp field, these infrastructures include Gaia OS [4], one.world [6], BEACH [10], BASE [1], iROS [9], and others.

Finally, application designers create applications which present user interfaces to their end users. In doing so, designers must understand the underlying system framework they are using to ensure that the user interface is sufficiently responsive, flexible, and intuitive. It is common practice to run applications through cycles of design and testing to optimize the user interface, but Edwards et al. [5] point out that it is probably also valuable to feed what is learned in application design back into the design process of the underlying infrastructure, so that it evolves to better support the types of applications being built on it.

Just as the underlying system constrains the types of interfaces that can be built, the fundamentals of HCI place bounds on any system that wishes to support human-computer interaction. Some of these fundamentals are cognitive and involve how quickly humans can respond to their environment and how fast their environment must react to them. For example, humans cannot see any detail in motion faster than around 60 Hz, and users expect a system to react to a trigger within 1 s before they feel they are waiting. Many of these fundamentals are documented in classic HCI literature such as [3].

More recently, HCI has begun to develop "post-cognitive" theories for human-computer interaction. These broaden the previous focus on one user interacting with one device to accomplish a single task to the larger context of multiple people, tasks, and devices. Examples of such theories include Activity Theory [8], Distributed Cognition [7], and Speech Acts [11]. While work on these theories is ongoing, their results will help define the characteristics and performance needed for systems infrastructures supporting coordination and collaboration among multiple users and machines in device-rich UbiComp environments.

Of course, HCI and systems issues have been intertwined before on desktop computers, and some of the lessons learned in that domain may be applicable or extensible to the UbiComp domain. In particular, in the days of slower desktop computers many coping mechanisms (some still in use today) had to be developed to create a more pleasant user experience. For example, hardware cursors were used to ensure that the direct-feedback loop between user input and cursor display occurred without noticeable delay. Other mechanisms include hourglasses and progress bars, window drags as outlines, and jump scrolling. At the same time, many of the well-established metaphors that today's desktop user interfaces deploy simply do not work in UbiComp environments—for example, what are concepts such as "pointer focus" and "selection" supposed to mean in a space augmented with multiple machines, input and output devices, and used by many people simultaneously?

Proposed Workshop Format
We would ask members of the UbiComp field to submit papers either on their personal experiences with the interaction of HCI and systems constraints in UbiComp, or reasoned analyses of how the two interact and guidelines for choosing appropriate design points. Ten to twelve papers would be selected from those submitted, and those authors would be invited to participate in the workshop. The papers would be made available as a booklet at the workshop, and permanently on a web site associated with the workshop.

We propose a one-day workshop with the following schedule:
• Anchor Talks (9-10:30am). Initial suggestions, to be finalized by the workshop:
   o HCI Fundamentals
   o Systems Fundamentals
   o Overview of performance constraints of existing UbiComp systems infrastructures
• Break (10:30-10:45am)
• First Paper Session [15 min presentations] (10:45am-12:15pm)
• Lunch (12:15-1:30pm)
• Second Paper Session [15 min presentations] (1:30pm-2:45pm)
• Break (2:45pm-3pm)
• Discussion, Debriefing and Attempt to Draw Conclusions from Presented Materials (3pm-4:30pm)

After the workshop, the organizers and interested participants would try to move from the ideas generated in the final debriefing towards a set of principles or design patterns [2] for designing user interfaces, systems, and infrastructure in the UbiComp world that take both systems and HCI issues into account. Participants would be offered the opportunity to continue with this synthesis work after the workshop, with the goal of presenting it back to the community in the form of one or more future papers, possibly collected in a special issue of an appropriate journal.

Resources/References
1. Becker, C., G. Schiele, and H. Gubbels. BASE - A Micro-kernel-based Middleware for Pervasive Computing. In 1st IEEE International Conference on Pervasive Computing and Communications (PerCom 2003). 2003. Dallas-Fort Worth, Texas, USA: IEEE.
2. Borchers, J. A Pattern Approach to Interaction Design. 2001. Chichester, UK: Wiley.
3. Card, S., T. Moran, and A. Newell. The Psychology of Human-Computer Interaction. 1983. Hillsdale, NJ: Erlbaum.
4. Cerqueira, R., et al. Gaia: A Development Infrastructure for Active Spaces. In Ubitools Workshop at Ubicomp 2001. 2001. Atlanta, GA.
5. Edwards, W.K., et al. Stuck in the Middle: The Challenges of User-Centered Design and Evaluation for Middleware. In CHI 2003: Human Factors in Computing Systems. 2003. Fort Lauderdale, FL: Association for Computing Machinery.
6. Grimm, R., et al. A System Architecture for Pervasive Computing. In 9th ACM SIGOPS European Workshop. 2000. Kolding, Denmark: pp. 177–182.
7. Hollan, J., E. Hutchins, and D. Kirsh. Distributed cognition: Toward a new foundation for human–computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), June 2000, pp. 174–196.
8. Nardi, B. (ed.) Context and Consciousness: Activity Theory and Human–Computer Interaction. 1996. Cambridge, MA: MIT Press.
9. Ponnekanti, S., et al. Portability, Extensibility and Robustness in iROS. In 1st IEEE International Conference on Pervasive Computing and Communications (PerCom 2003). 2003. Dallas-Fort Worth, Texas, USA: IEEE: pp. 11–19.
10. Tandler, P. The BEACH Application Model and Software Framework for Synchronous Collaboration in Ubiquitous Computing Environments. To appear in Journal of Systems and Software, 2003 (Special Issue on Application Models and Programming Tools for Ubiquitous Computing).
11. Winograd, T., and F. Flores. Understanding Computers and Cognition: A New Foundation for Design. 1986. Norwood, NJ: Ablex.
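The human-performance bounds cited in this proposal, detail in motion at around 60 Hz and roughly 1 s before users feel they are waiting, translate directly into latency budgets. A minimal sketch follows; the thresholds and names are illustrative assumptions, not taken from any cited system.

```python
# Illustrative latency budgets derived from the human-performance
# limits cited in the proposal: ~60 Hz for smooth motion and ~1 s
# before users feel they are waiting. Names and the policy labels are
# assumptions for illustration only.

FRAME_BUDGET_S = 1.0 / 60   # ~16.7 ms: redraw fast enough for smooth motion
RESPONSE_BUDGET_S = 1.0     # beyond this, users feel they are waiting


def ui_feedback_policy(measured_latency_s: float) -> str:
    """Classify a measured response time against the two HCI budgets."""
    if measured_latency_s <= FRAME_BUDGET_S:
        return "direct-manipulation"  # e.g. a hardware cursor can keep up
    if measured_latency_s <= RESPONSE_BUDGET_S:
        return "immediate"            # acceptable without extra feedback
    return "show-progress"            # hourglass / progress-bar territory


print(ui_feedback_policy(0.010))  # fits inside one 60 Hz frame
print(ui_feedback_policy(2.5))    # long enough to warrant progress feedback
```

The third branch corresponds to the desktop coping mechanisms the proposal mentions (hourglasses and progress bars): once a response cannot be made fast, the system must at least acknowledge the wait.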

System Support for Ubiquitous Computing – UbiSys
Roy Campbell (1), Armando Fox (2), Paul Chou (3), Manuel Roman (4), Christian Becker (5), Adrian Friday (6)

(1) University of Illinois at Urbana-Champaign, USA, [email protected]
(2) Stanford University, USA, [email protected]
(3) IBM T. J. Watson Research Center, USA, [email protected]
(4) DoCoMo Labs, USA, [email protected]
(5) University of Stuttgart, Germany, [email protected]
(6) Lancaster University, UK, [email protected]

SUMMARY
We propose a workshop that focuses on developing an understanding of the challenges faced by system and middleware researchers in supporting ubiquitous computing environments on a large scale. We aim to identify the fundamental services and paradigms necessary to support the move from focused prototypes to wider scale coordination and deployment.

Keywords
Middleware, system support, ubiquitous computing, operating systems, interoperability.

DESCRIPTION OF THE WORKSHOP
This workshop offers the opportunity to bring together researchers and practitioners involved in the development of systems support for general purpose ubiquitous computing environments. It aims at exploring the most recent research and findings in this area, comparing results, exchanging experiences, and promoting collaboration and cooperation among researchers in the field. The workshop aims to identify the common abstractions and patterns found in existing systems, as well as the core low-level services that are needed to build general-purpose ubiquitous computing environments. The workshop will focus on different aspects of system and middleware research and the challenges involved when applying them to support ubiquitous computing. Special emphasis will be placed on presenting state-of-the-art and emerging research as well as experience reports from the following topics and related research areas:
• System support infrastructures and services
• Middleware for ubiquitous computing
• Architectural structure, design decisions and philosophies
• Interoperability and wide scale deployment

Mobisys 2003 hosted a panel about metrics to evaluate ubiquitous computing systems. The results were overwhelmingly positive, and the two and a half hours were insufficient to host the number of discussions. The panel organizers decided to cover a specific, well-defined subject instead of fostering discussion on a number of different topics. This approach proved highly successful and we plan to follow the same format. The workshop will explicitly concentrate on the aspects listed above.

In order to ensure a high quality technical session, paper submissions will be 3-5 pages long and will have to cover one of the topics listed above. Furthermore, we will prioritize experience papers describing lessons learnt from built systems, including information about approaches that did and did not work, unexpected results, common abstractions, abstraction mapping among different systems, common building blocks present in different architectures, and metrics for evaluating ubiquitous computing infrastructures.

Ubiquitous computing environments are envisioned as being populated with large numbers of computing devices and sensors, to the extent that the physical and computational infrastructures become fully integrated, creating a dynamic programmable environment. To realize this vision, several projects have developed prototype environments, typically focused on a particular ubiquitous computing scenario or application. System support is often pragmatic, problem oriented and difficult to generalize to other domains. To fully realize programmable ubiquitous computing environments it is essential to provide services that coordinate software entities and heterogeneous networked devices and provide the low-level functionality needed to enable ubiquitous computing in the general case. Systems software provides a homogeneous computing environment where applications are supported with resource management (i.e. resource and service discovery) and common abstractions that leverage the implicit heterogeneity in such environments. The work of independent researchers has revealed patterns of service usage, indicating that systems software for ubiquitous computing may converge to a set of necessary core services. If such a set of requirements could be identified, applications could then be more easily ported across different implementations and interoperability would be simplified.

One of the key issues for debate is the underlying structure of ubiquitous computing middleware. Current prototypes are characterized by three different architectural organizations. In the first case, the environment provides an infrastructure that coordinates the resources present in a specific geographical location. Applications can discover and access such resources only via the infrastructure. Furthermore, all communication between devices is mediated by the infrastructure. Additional information, such as large amounts of application data which for instance cannot be stored on small devices, can be maintained in the infrastructure. Direct interaction between devices is not considered, and the infrastructure typically provides services localized to a specific geographical area, such as a room or a building, covered by one or more network types. The second architectural organization relies on spontaneous interaction among the devices present in the environment as a federation of peers. There is no common infrastructure per se – applications have to store and maintain application data cooperatively. Such an organization is typically based on an underlying ad-hoc configuration. The third model is a hybrid that relies on a centralized system support infrastructure, but uses peer-to-peer communication among entities. So far, the three approaches seem to offer benefits in distinct application domains and it is likely that they will continue to co-exist and complement each other. These three approaches lead to a variety of research questions concerning interoperability, architecture, service organization, application models, and application support in general.

We strongly believe that the proposed workshop will provide a unique opportunity to foster discussion and interaction among researchers working on this new area. We expect the workshop to be the first step towards the creation of a discussion group and future workshops and conferences working on the formalization and standardization of basic building blocks for Ubiquitous Computing.

ORGANIZERS
• Roy Campbell, Ph.D., Department of Computer Science, University of Illinois at Urbana-Champaign. Roy Campbell is a professor of computer science at the University of Illinois at Urbana-Champaign. His research interests include operating systems, distributed multimedia, network security, and ubiquitous computing. Prof. Campbell is the head of the Gaia project, a ubiquitous computing infrastructure under development that is investigating systems support and applications for ubiquitous computing environments. He received his B.Sc. in mathematics from the University of Sussex, and his M.Sc. and Ph.D. in computing from the University of Newcastle upon Tyne.
• Armando Fox, Ph.D., Department of Computer Science, Stanford University. Armando Fox is an assistant professor at Stanford University, and is the faculty leader of the Software Infrastructures Group (SWIG). He received a BSEE from M.I.T., an M.S.E.E. from the University of Illinois at Urbana-Champaign, and a Ph.D. in Computer Science from UC Berkeley. His primary research interests are systems approaches to improving dependability (the Recovery-Oriented Computing project) and system software support for ubiquitous computing (the Interactive Workspaces project).
• Paul Chou, Ph.D., IBM T.J. Watson Research Center. Paul is a Research Staff Member and the manager of the Emerging Interactive Spaces department at IBM Research. His present research focus is on pervasive computing, in particular the technology and usability challenges in bringing physical objects and digital infrastructure together to address social and business issues. His recent projects include intelligent vehicles, telematics data privacy protection, and the office of the future. Paul received his Ph.D. in Computer Science in 1988 from the University of Rochester.
• Manuel Roman, Ph.D., DoCoMo Labs, USA. Manuel Roman recently completed a Ph.D. in Computer Science at the University of Illinois at Urbana-Champaign. His research interests include ubiquitous computing, middleware, and operating systems, and how to

combine ideas from all these areas to create interactive/programmable active spaces. His research interests also include handheld devices and how to integrate them in active spaces. He received his BS and MS degrees in Computer Science from La Salle School of Engineering (Ramon Llull University) in Barcelona, Spain.
• Christian Becker, Ph.D., Institute for Distributed and Parallel Systems, University of Stuttgart, Germany. Christian Becker is a senior researcher and lecturer at the University of Stuttgart. He received a PhD in Computer Science from the University of Frankfurt and a Diploma in Computer Science from the University of Kaiserslautern. His primary research interests are ubiquitous computing systems. Currently he is involved in middleware support for spontaneous networking (the BASE platform) and support of context-aware applications through global augmented world models (the Nexus project).
• Adrian Friday, Ph.D., Computing Department, Faculty of Applied Sciences, Lancaster University, UK. Adrian Friday graduated from the University of London in 1991. In 1992 he helped establish the mobile computing group at Lancaster University, completing formative work in the area of mobile distributed systems, leading to the award of his PhD in 1996. In 1998 he was appointed as a Lecturer in the Department of Computer Science and is an active member of the Distributed Multimedia Research Group. During his research career, Adrian has been involved with over nine research projects in the areas of mobile computing, context-aware systems and advanced distributed systems, including the Equator IRC, GUIDE II and CORTEX projects. He has a consistent track record of publishing in high quality peer-reviewed international conferences and journals, having authored and co-authored over 50 publications to date. His current research interests include distributed system support for mobile, context-sensitive and ubiquitous computing.

WORKSHOP TITLE
"System Support for Ubiquitous Computing – UbiSys"

PARTICIPATION SOLICITATION
We plan to invite key researchers and practitioners who work in this area to submit papers and give oral presentations on their research. We also plan to put out a call for papers that solicits research papers describing current and ongoing research and experience reports related to the aspects described in this proposal. All submissions will be reviewed and selected based on their originality, merit, and relevance to the workshop. All accepted papers should be presented orally during the workshop.

MAXIMUM PARTICIPANTS
We expect to have up to 15 participants giving short presentations about their submissions. We are also expecting 30-50 attendees to participate in the discussion session.
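As a caricature of the three architectural organizations this proposal identifies (infrastructure-mediated, federation of peers, and hybrid), the sketch below contrasts where discovery and communication happen in each. All class and method names are invented for illustration and do not correspond to any of the systems mentioned.

```python
# Illustrative caricature of the three architectural organizations of
# UbiComp middleware described in the proposal. Names are invented;
# this does not model any specific cited system.
from typing import Dict, List


class InfrastructureMediated:
    """All discovery AND communication go through a central coordinator."""

    def __init__(self) -> None:
        self.registry: Dict[str, str] = {}   # device name -> address

    def register(self, device: str, addr: str) -> None:
        self.registry[device] = addr

    def route(self, src: str, dst: str, msg: str) -> str:
        # The infrastructure mediates every message between devices.
        return f"{src} -> [infra] -> {self.registry[dst]}: {msg}"


class PeerFederation:
    """No common infrastructure: peers discover ad hoc and talk directly."""

    def __init__(self) -> None:
        self.peers: List[str] = []

    def announce(self, device: str) -> None:
        self.peers.append(device)            # e.g. via ad-hoc broadcast

    def send(self, src: str, dst: str, msg: str) -> str:
        assert dst in self.peers             # cooperative, peer-held state
        return f"{src} -> {dst} (direct): {msg}"


class Hybrid(InfrastructureMediated):
    """Central registry for discovery, but peer-to-peer data exchange."""

    def send(self, src: str, dst: str, msg: str) -> str:
        addr = self.registry[dst]            # discovery via infrastructure
        return f"{src} -> {addr} (direct): {msg}"   # data goes peer-to-peer
```

The hybrid subclass inherits only the registry: discovery stays centralized while the message itself bypasses the coordinator, which is the essential difference between the first and third organization.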

First International Workshop on Ubiquitous Systems for
Supporting Social Interaction and Face-to-Face
Communication in Public Spaces
Organizers
Rick Borovoy, nTAG, LLC, [email protected]
Harry Brignull, The Interact Lab, COGS, University of Sussex, [email protected]
Shahram Izadi, The Mixed Reality Lab, Univ. of Nottingham, [email protected]
Volodymyr Kindratenko, NCSA, UIUC, [email protected]
Alex Lightman, Charmed Technology, [email protected]
Norbert Streitz, Fraunhofer IPSI, [email protected]

Program Committee
Donna Cox, NCSA, UIUC, [email protected]
David Pointer, NCSA, UIUC, [email protected]

ABSTRACT
This workshop will bring together researchers involved with the development and deployment of ubiquitous systems to support social interaction and face-to-face communication in public spaces (e.g., museums) or semi-public spaces in campus-like environments of large organizations (e.g., companies, universities) and at public gatherings, such as conferences and trade shows. The workshop presentations will focus on the underlying technology, applications, and social challenges presented by the technology.

Keywords
Ubiquitous systems in public spaces

INTRODUCTION
Audience of the workshop
This workshop will bring together participants who would like to help define, design, develop, deploy, evaluate, and use ubiquitous systems for supporting social interaction and face-to-face communication in public spaces and at public gatherings. It aims at providing a forum for designers, developers and users of ubiquitous systems deployed in public spaces to exchange experiences and contribute to the elucidation of research challenges and directions.

Origins of the workshop
Every year thousands of conferences and professional trade shows take place worldwide, attracting millions of attendees. Likewise, millions of people go to museums and attend various public gatherings. Such events represent important venues for social interaction and the informal exchange of knowledge, providing a place to find others who share common or complementary interests. However, poor event management and missed opportunities for communication continue to be vexing challenges for organizers.

How do attendees find others who share their interests? How do they identify and communicate with others whose expertise they seek? How and where does the interaction between attendees happen? How can organizers understand the dynamic profile of participation at the sessions and be more responsive to the participants' emerging interests? In the absence of good answers to these questions, many attendees do not benefit from the events as much as they could, resulting in many person-years of wasted human resources.

State of the art
In recent years, attempts have been made to provide public gathering attendees with various value-added services based on the ability to either track individual attendees as they go from one location to another or detect when they interact with each other or with various "smart" objects

embedded in the space. One of the earliest experiments, the Meme Tags project, used electronic nametags capable of exchanging short messages (memes) via IR. The tags also stored information about the interaction between tag wearers and shared it with a centralized database. The cumulative data was shown on large displays (Community Mirrors).

The Digital Assistant developed at the ATR Media Information Science Lab aimed to enhance communication among conference participants by tracking them as they attend various locations and providing access to a content-rich personalized environment either via web kiosks or interactive displays. Users were required to wear IR badges that could be detected at some locations within the conference space. The resulting data was used to create the user's touring diary and to provide personalized real-time services.

Georgia Tech's Social Net system required attendees to carry a portable device (Cybiko) that uses RF to help mutual friends connect strangers (who were co-located for a considerable amount of time). In order for these mutual friends to identify who among their friends are not connected (but should be, because they tend to be co-located), the system requires each user to provide a list of all their friends – a task that turned out to be challenging for some in a field test of 10 users at a 3-day conference.

MIT Media Lab's Sociometer prototype captures social interaction among individuals who use wearable computers with microphones. While the system tracks and subsequently analyzes communication patterns, it does not use the data to provide any real-time value-added services to the users.

NCSA's IntelliBadge™ project implements location tracking by proximity to RFID location markers installed at points of interest. All the user services are built around tracked location information and prior knowledge about the attendees and the conference events. These services include the ability to locate other people, view event attendance statistics, and interact with visualization applications.

nTag by nTAG Interactive, LLC (www.ntag.com) uses semi-passive Radio Frequency IDentification (RFID) tags operating in the UHF band, which enables a conference organizer to use them for security, to record how many people attended certain sessions, or to track how many people visited certain areas of an exposition floor. When people meet, their tags exchange information about their interests and preferences, thus facilitating social interaction among the attendees. Tags also store and provide convenient access to the conference program.

CharmBadge by Charmed Technology, Inc. (www.charmed.com) uses IR-based tags programmed with attendees' individual business card information. This information is exchanged between attendees as they interact with each other, and the interaction is logged and subsequently uploaded to a private website accessible by each user.

The SpotMe system by Shockfish SA (www.spotme.ch) requires participants to carry a cell-phone-sized device through which they can find out who is standing within a 30 meter radius of them. Participants can be notified if a person with shared interests comes within 10 meters, and they can send messages to each other or exchange electronic business cards.

More recently, research has also investigated the combined use of public displays integrated in the architectural environment with mobile devices carried by visitors of public spaces or semi-public spaces in campus-like environments of large organizations (e.g., companies, universities). The AMBIENTE team at Fraunhofer IPSI developed smart artifacts such as the GossipWall (a large wall-sized ambient display) and the ViewPort (a mobile device) in order to support informal communication between people and convey atmospheric information (www.ambient-agoras.org).

Justifications for the workshop
Although the above-described projects attempt to solve the same class of problems and use a relatively similar set of basic techniques, the developers do not have a common venue for sharing the results of their work. For example, papers describing these types of systems are scattered across several barely-related journals, which makes it difficult for a novice developer to assess the state of the art in the field. The proposed workshop therefore is a first attempt to bring together like-minded researchers and practitioners who focus on the area of ubiquitous systems for supporting social activities and social interaction in public spaces and at public events.

An interdisciplinary approach is required in order to develop a successful application for supporting social interactions. Without an interdisciplinary team, the application is likely to overstate some issues and completely miss other important topics. However, this is not obvious to most novice developers. This workshop therefore will bring together researchers from different disciplines, such as sociology, psychology, art, computer engineering, computer graphics, and interface design, with the goal of pointing out the place and importance of each of these disciplines in the overall scope of a successful system design.

The final goal of the workshop is to identify and clarify the research challenges and directions that researchers involved with this type of work are likely to face.

WORKSHOP TOPICS
The main subject of the proposed workshop is the development and use of ubiquitous systems to support social interaction in public spaces and at public events,

such as museums, conferences, trade shows, etc. Topics relevant to this subject include:
• Applications: existing commercial and experimental applications, e.g., ubiquitous systems in museums, at public gatherings, etc.
• User interface: how to provide a simple and intuitive user interface for novice users to a complex system.
• Presentation: how various types of information acquired by the ubiquitous system can be effectively presented to the end users.
• Scalability: how to accommodate a large number of simultaneous users at a potentially unlimited number of locations.
• Deployment: how to package the system so that it can be easily deployed in an environment that is not prepared for such types of applications.
• Reliability: how to build robust and reliable systems that can guarantee at least some minimal set of services.
• Privacy: if the system "knows" everything about everybody currently present in the tracked ubiquitous environment, what are the privacy concerns and how best to address them?
• Security: what happens if the system is defeated and intruders gain access to all the accumulated knowledge? How can this be prevented?
• Social aspects: how the technology can be used to help form social networks and how it can be used to study them.

WORKSHOP GOALS
The goals of this workshop are:
• to examine the state of the art of an emerging area of ubiquitous computing research,
• to identify relevant projects and technologies and build up a taxonomy,
• to identify challenges and issues that need to be resolved in order for this technology to proliferate in the future,
• to provide a venue for like-minded researchers to meet and exchange ideas.

WORKSHOP ORGANIZERS
Rick Borovoy has a Bachelors Degree (1989) from Harvard in Computer Science, and a Masters (1995) and Ph.D. (2001) from the MIT Media Lab. In 1995, he co-created the Thinking Tags: interactive name tags that gave people talking to each other at a conference a simple measure of how much they had in common. This led to several more prototypes, including the Meme Tags and i-balls, and a Ph.D. thesis on "Folk Computing: Using Technology to Support Face-to-face Community Building". He recently started a company -- nTAG Interactive -- to deploy advanced versions of this technology in commercial settings.

Harry Brignull is a research fellow at Sussex University's Interact lab, where he is also finishing his Ph.D. His specialist area is user-experience design for public situated displays. Harry currently works on the Dynamo project, which has developed a system that provides a communal surface for the sharing and exchange of information in face-to-face scenarios. On another thread of the Dynamo project, Harry has looked at the use of large displays to encourage socialising, and the issues involved in enticing users to progress from passer-by to participant.

Shahram Izadi is a research fellow in the Communications Research Group at the University of Nottingham. He is actively involved in a number of research projects at Nottingham, including Equator, a six-year Interdisciplinary Research Collaboration (IRC) supported by the EPSRC that focuses on the integration of physical and digital interaction, and Dynamo, a public multi-user interactive surface that supports the cooperative sharing and exchange of a wide range of media. Shahram has also had the opportunity to work on the Speakeasy project at PARC, where he helped engineer an interconnection technology for enabling digital devices and services to easily interoperate over both wired and wireless networks. He has published at international conferences on ubiquitous computing, CSCW and mobile computing, and written several journal articles. He is currently working on a book chapter for the Handbook of Mobile Computing.

Volodymyr Kindratenko graduated in mathematics and computer science from the State Pedagogical University, Kirovograd, Ukraine, in 1993. He received a Ph.D. degree from the University of Antwerp, Belgium, in 1997. His research involved the development and application of image analysis techniques for Scanning Electron Microscopy imagery. From 1997 to 1998, Volodymyr Kindratenko was employed by the National Center for Supercomputing Applications (NCSA) at the University of Illinois as a Postdoctoral Research Associate in the Virtual Environments group, where he worked on the development of a distributed virtual reality system for collaborative product design. From 1998 to 2002, he was employed by NCSA as a Research Scientist in the Visualization and Virtual Environments group, where he worked on the development of industrial virtual reality applications and high-end visualization systems. He is now a Research Scientist at NCSA in the Experimental Technologies Division, where he is involved with ubiquitous systems, interactive spaces, and sensors research and is the Technical Lead on the IntelliBadge™ project.

Alex Lightman is a leading writer and speaker on the future of technology and communications. He is the author of the first book on 4G wireless, Brave New Unwired World: The Digital Big Bang and the Infinite Internet,

published by John Wiley in 2002, and has published nearly 100 articles for technology, business, and political publications including Red Herring, Chief Executive, and Internet World.

Lightman is the CEO of Charmed Technology (www.charmed.com), which makes wearable computers and achieved world-wide acclaim for producing 100 wearable technology fashion shows in 20 countries. He is the founding director of The 4G Society and the first Cal-(IT)2 Scholar at the California Institute for Telecommunications and Information Technology, a joint program of UCSD and UCI (http://www.calit2.net). Lightman has nearly 20 years of high technology management experience and, in addition, has experience in politics (including work for a US Senator), construction, consulting, the oil drilling industry, and the renewable energy industry. He also created, managed and received accreditation for the Nizhoni Institute, a small school and college, and produced the 100 Brave New Unwired World fashion shows featuring wearable and pervasive computing, which included many of Lightman's own inventions and designs, such as the patented Charmed Viewer display and the first Internet jewelry. Harvard Business School featured Lightman and Charmed in a case study that recognized Lightman's pioneering innovation of presenting computers as fashion. Both the show and Lightman's designs are now widely copied worldwide.

Norbert Streitz holds a Ph.D. in physics and a Ph.D. in psychology. He is the head of the research division "AMBIENTE – Workspaces of the Future" at the Fraunhofer institute IPSI in Darmstadt, Germany, where he also teaches at the Department of Computer Science of the Technical University Darmstadt. He studied mathematics, physics, chemistry, and psychology at the University of Kiel, Germany, and psychology, education, and philosophy of science at the Technical University (RWTH) of Aachen, Germany. He was a post-doc research fellow at the University of California, Berkeley, and a visiting scholar at Xerox PARC and at the Intelligent Systems Lab of MITI, Tsukuba Science City, Japan.

He is the Chair of the Steering Group of the EU-funded proactive initiative "The Disappearing Computer", a cluster of 17 projects, and, more recently, the co-chair of CONVIVIO, the EU-funded Network of Excellence on People-Centred Design of Interactive Systems. He was and still is active in various special interest groups in different scientific organizations, e.g., GI (Gesellschaft für Informatik), DGP (Deutsche Gesellschaft für Psychologie), ACM (Association for Computing Machinery), and EACE (European Association for Cognitive Ergonomics).

His research interests range from Cognitive Science, Human-Computer Interaction, Hypertext/Hypermedia and Computer-Supported Cooperative Work to Interaction Design for Ambient/Pervasive/Ubiquitous Computing, in the context of an integrated design of real and virtual worlds. He and his team are known, e.g., for the development of Roomware®, the integration of walls and furniture with information technology, and the design of Smart Artefacts. The roomware components, developed in close cooperation with an office furniture manufacturer, won several design prizes. In the EU-funded project "Ambient Agoras: Dynamic Information Clouds in a Hybrid World", he is now working on situated interaction and services employing wall-sized ambient displays and handheld mobile devices in the context of Cooperative Buildings.

He has published/edited 15 books and more than 90 papers presented at the relevant national and international conferences or published in journals in his areas of interest. He serves regularly on the program committees of these conferences and on several editorial boards. In the context of his interest in design and architecture, he was also appointed as a design competition jury member. He is often invited to present keynote speeches at scientific as well as commercial events in Europe, the USA, South America, and Japan.

315
Intimate (Ubiquitous) Computing

Genevieve Bell*, Tim Brooke*, Elizabeth Churchill+, Eric Paulos~


* Intel Research, Intel Corporation, Hillsboro, OR 97124
  [email protected], [email protected]
+ FX Palo Alto Laboratory, 3400 Hillview Avenue, Palo Alto, CA 94304
  [email protected]
~ Intel Research, 2150 Shattuck #1300, Berkeley, CA 94704
  [email protected]

ABSTRACT
Ubiquitous computing has long been associated with intimacy. Within the UbiComp literature we see intimacy portrayed as: knowledge our appliances and applications have about us and the minutiae of our day-to-day lives; physical closeness, incarnated on the body as wearable computing and in the body as 'nanobots'; and computer-mediated connection with friends, lovers, confidantes and colleagues. As appliances and computation move away from the desktop, and as designers move toward designing for emotion and social connection rather than usability and utility, we are poised to design technologies that are explicitly intimate and/or intimacy promoting. This workshop will: critically reflect on notions of intimacy; consider cultural and ethical issues in designing intimate technologies; and explore potential socio-technical design methods for intimate computing.

Keywords
Intimacy, computing, emotion, identity, body, play, bioethics, design methods, socio-technical design

INTRODUCTION
Intimate. adj. Inmost, deep seated, pertaining to or connecting with the inmost nature or fundamental character of the thing; essential, intrinsic ... Pertaining to the inmost thoughts or feelings, proceeding from, concerning, or affecting one's inmost self, closely personal.

We inhabit a world in which the classic computing paradigm of a PC sitting on your desk is giving way to a more complicated and nuanced vision of computing technologies and power. This next era is predicated on a sense that the appliances and algorithms of the future will respond better to our needs, delivering 'smarter', more context-appropriate computing power. Underlying such a vision is the notion that computers in their many forms will be pervasive and anticipatory. Arguably, to achieve this, computing appliances will have to become more intimate: more knowing of who we are and what we desire, more woven into the fabric of our daily lives, and possibly woven into the fabric of our (cyber)bodies.

In this workshop we address the notion of 'intimate computing'. We invite designers within the area of Ubiquitous Computing to: address and account for people's embodied, lived experiences; explore the ways in which computing technology could and should be more intimate; and join us in considering possible pitfalls along the design path to such intimacy.

Intimacy as a cultural category/construct
What might intimacy have to do with technology and computers, beyond the obvious titillation factor? In the United States in particular, and the West more broadly, there is a persistent slippage between intimacy and sex, which is not to say that there isn't a place to talk about the relationship between sex, intimacy and technology [see 15]. However, in this workshop we want to cast our net more broadly. We are interested in other constructions of intimacy: intimacy as something that relates to our innermost selves, something personal, closely felt. Such a construction could include love, closeness, or spirituality. Or perhaps it is in the way we understand, feel and talk about our lives, our bodies, our identities, our souls. In all these ways, intimacy transcends technology, and has a role to play in shaping it. As we move towards designing for communication, emotion, reflection, exploration and relationship, we need to critically reassess our reliance in design on outmoded conventions and old models of computation and connection. We need to employ new metaphors and create new models.

A BRIEF HISTORY OF (INTIMATE) UBIQUITOUS COMPUTING
Having said that, there has been an idea of intimate computing for as long as there has been a vision of ubiquitous computing. The two are inexorably linked in the pages of the September 1991 issue of Scientific American. In that month's issue of the magazine, Mark Weiser articulated his vision of ubiquitous computing: "we are trying to conceive a new way of thinking about computers in the world, one that takes into account the natural human environment and allows computers themselves to vanish into the background" [25]. In the article that follows, Alan

316
Kay used 'intimate' as a modifier to computing in an essay reflecting on the relationship between education, computers and networks [10]. He wrote, "In the near future, all the representations that human beings have invented will be instantly accessible anywhere in the world on intimate, notebook-size computers." This conjoining of intimate computers and ubiquitous computing within an issue of Scientific American dedicated to Communications, Computers and Networks is perhaps not a coincidence: both represent complementary parts of a future vision.

How has this conjunction been expressed more recently? Broadly, there are three manifestations in the (predominantly) technology literature: 1. intimacy as cognitive and emotional closeness with technology, where the technology (typically unidirectionally) may be aware of, and responsive to, our intentions, actions and feelings. Here our technologies know us intimately; we may or may not know them intimately. 2. intimacy as physical closeness with technology, both on the body and/or within the body. 3. intimacy through technology: technology that can express our intentions, actions and feelings toward others.

In the first category, Lamming and Flynn at Rank Xerox Research Center in the UK in the mid-1990s invoked 'intimate computing' as a broader paradigm within which to situate their 'forget-me-not' memory aid. They wrote, "The more the intimate computer knows about you, the greater its potential value to you. While personal computing provides you with access to its own working context – often a virtual desktop – intimate computing provides your computer with access to your real context." [12]. Here 'intimate computing' (or the 'intimate computer') refers to the depth of knowledge a technology has of its user.

'Intimate computing' has also occasionally been used to describe a different kind of intimacy: that of closeness to the physical body. In 2002, the term appears in the International Journal of Medical Informatics, alongside grid computing and micro-laboratory computing: "The fusion of above technologies with smart clothes, wearable sensors, and distributed computing components over the person will introduce the age of intimate computing" [20]. Here 'intimate computing' is conflated with wearable computing; elsewhere intimate computing is even subsumed under the label of wearable computing [2].

Crossing the boundary of skin, Kurzweil paints a vision of the future that centralizes a communication network of nanobots in the body and brain. He states, "We are growing more intimate with our technology. Computers started out as large remote machines in air-conditioned rooms tended by white-coated technicians. Subsequently, they moved onto our desks, then under our arms, and now in our pockets. Soon, we'll routinely put them inside our bodies and brains. Ultimately we will become more nonbiological than biological." [11]

Finally, intimate computing has also referred to technologies that enhance or make possible forms of intimacy between remote people that would normally only be possible if they were proximate. Examples include explicit actions (e.g. erotically directed exoskeletons [19]), non-verbal expressions of affection or "missing" [22], and computationally enhanced objects, like beds, that offer "a shared virtual space for bridging the distance between two remotely located individuals through aural, visual, and tactile manifestations of subtle emotional qualities" [5]. These computationally enhanced objects are all the more effective because they themselves are rich (culturally specific) signifiers. Dodge states of the bed that it is "very 'loaded' with meaning, as we have strong emotional associations towards such intimate and personal experiences" [5].

INTIMATE COMPUTING TODAY AND TOMORROW
So where are we to go with intimate computing in the age of ubiquitous and proactive computing and the tentative realities of pervasive computing [23]? Clearly, as we move to the possibility of computing beyond the desktop and home office, to wireless hubs and hotspots, and from fixed devices to a stunning array of mobile and miniature form factors, the need to account for the diversities of people's embodied, daily life starts to impose itself into the debate. We already worry about issues of privacy, surveillance, security, risk and trust: the first accountings of what it might mean for individual users to exist within a world of seamless computing. And then there are issues of scale. Ubiquitous computing is a far easier vision to build toward: it promises a sense of scale and scalability, of being able to design a general tool and customize it where a local solution is needed. But intimate computing implies a sense of detail; it is about supporting a diversity of people, bodies, desires, ecologies and niches.

THE WORKSHOP: Outlining a Research Agenda for Intimate Computing
In this workshop, we address the relationship of people to ubiquitous computing, using notions of 'intimacy' as a lens through which to envisage future computing landscapes, but also future design practices. We consider the ways ubiquitous computing might support the small-scale realities of daily life, interpersonal relations, and sociality, bearing in mind the diversity of cultural practices and values that arise as we move beyond an American context.

We perceive four interrelated perspectives and strategies for achieving these goals: (1) deriving understandings of people's nuanced, day-to-day practices; (2) elaborating cultural sensitivities; (3) revisioning notions of mediated intimacy, through explorations of play and playfulness; and (4) exploring new concepts and methods for design. Below we elaborate on these perspectives:

1. Nuanced practices
A sense of intimacy made its way into Weiser's thinking about ubiquitous computing. In collaboration with PARC's anthropologists, he and his team became aware of ways in

317
which people's daily social practices impacted their consumption and understanding of computing. They looked at the routine, finely grained, and socially ordered ways in which people use their bodies in the world to see, hear, move, interact, express and manage emotion, and pondered "how were computers embedded within the complex social framework of daily activity, and how did they interplay with the rest of our densely woven physical environment (also known as the 'real world')?" [27] This consideration of social frameworks and physical environments led Weiser's team to propose "calm computing" as a way of managing the consequences of a ubiquitous computing environment. Calm computing is concerned with people in their day-to-day world, with affective response (beyond psycho-physiological measures of arousal), with the body, with a sense of the body in the world, and with the inner workings and state of that body. This notion of calmness and calm technology thus echoes the sense, if not sensibility, of intimate computing [26].

2. Culture Matters
Weiser also credits anthropologists with helping him see the slippage between cultural ideals and cultural praxis as it related to the use of computing technology in the workplace. One of the issues that is very clear when we engage in a close reading of ubiquitous computing is how very grounded it is in Western practices, which makes sense given its points of origin and the realities of resource and infrastructure development. However, there have been several significant, unanticipated changes in the last decade, in particular the leapfrogging of developing countries into wireless networks and the wholesale adoption of mobile phones. It is important, then, to explore some of the ways in which intimacy is culturally constructed, and as such might play out differently in different geographies and cultural blocks [3, 9]. We also need to explore cultural differences in the emotional significance and resonance of different objects.

3. Can Ubiquitous Computing Come Out and Play?
"You can discover more about a person in an hour of play than in a year of conversation" (Plato, 427-347 BC). Play provides a mechanism to experiment with, enter into, and share intimacy. The correlation of play and intimacy is so strong that elements of one rarely occur without the other. It is during play that we make use of learning devices, treat toys, people and objects in novel ways, experiment with new skills, and adopt different social roles [16, 17, 18]. We make two important observations about play: (1) humans seamlessly move in and out of the context of play, and (2) when at play, humans are more exploratory and more willing to entertain ambiguity in their expectations about people, artifacts, interfaces, and tools. Such conditions may more easily give rise to intimacy. This represents a different design scenario from designing for usability and utility [6].

As ubiquitous computing researchers, we must be aware of this human tendency to play, and use it to our advantage. When does play occur? How does it begin and end? When is it appropriate or inappropriate? What elements give rise to play? The understanding of play may affect our views about the origin and experience of human intimacy.

4. New paradigms for design
It is hard to imagine that the computer, an icon of modernity, high technology and the cutting edge, could in some ways be behind the times. However, its association with modernity marks it as old fashioned; as a product of modernity the computer is highly functional with a minimalist aesthetic. It approaches the modernist ideal of pure functionality with little necessity for physical presence. Computer chips become smaller and smaller black boxes offering more and more functionality, but not necessarily more intimacy.

Bergman states that modernity has been admired for its "high seriousness, the moral purity and integrity, the strength of its will to change", but he also goes on to note: "At the same time, it is impossible to miss some ominous undertows: a lack of empathy, an emotional aridity, a narrowness of imaginative range." [4] Modernity in art, design, architecture and fashion is associated with aesthetics and design principles from the first half of the twentieth century [7]. Since then, movements in pop art, deconstructivism, and postmodernism have invited us beyond functionalism to new ways of thinking about how to make the impersonal computer more intimate. There are lessons in consumer product design; the founder of Swatch focused on the emotional impact of the watch to start his business, designing the watch as a fashion accessory and invoking the ideals of pop art: "fun, change, variety, irreverence, wit and disposability" [21]. What might it mean to apply such lessons to the design of ubiquitous computing systems?

Goals of the workshop
Taking the above perspectives as a springboard for discussion, this workshop has the following aims:
• To bring together a multi-disciplinary group of practitioners to discuss what it might mean to account for intimacy in ubiquitous computing and to consider issues like: How do notions of intimacy change over time and place? How do notions of intimacy differ as we engage in different social groups and social activities? When does intimacy lead to or become intrusion? Invasion? Stalking?
• To elaborate new methods and models in design practice that can accommodate designing for intimacy.
• To develop an agenda for future collaborations, research and design in the area of intimate computing and identify critical opportunities in this space.

318
Workshop Activities
We will balance presentations and discussion with collaborative, hands-on creative activities. These activities will include:
• Cluster analysis, including questions like: what does intimacy cluster with semantically (i.e., identity, uniqueness, personalization, friendship, connection)?
• Designing intimacy within, upon and beyond the skin: build your own membrane/skin; designing supra-skin technological auras; designing for a reflective ethics.

Workshop Organizers
The organizers of this workshop come from a wide range of backgrounds, including cultural anthropology, computer science, psychology and design. Together they have considerable experience in workshop organization across several disciplines.

REFERENCES
1. Adelman, C. What will I become? Play helps with the answer. Play and Culture, Vol. 3, pp. 193-205, 1990.
2. Baber, C., Haniff, D.J. and Woolley, S.I. Contrasting paradigms for the development of wearable computers. IBM Systems Journal, 38, 2, pp. 551-565, 1999.
3. Bell, G., Blythe, M., Gaver, B., Sengers, P. and Wright, P. Designing Culturally Situated Technologies for the Home. Proceedings of CHI '03. ACM Press, 2003.
4. Bergman, M. The Experience of Modernity. In Thackara, J. (Ed.) Design after Modernism. Thames and Hudson, London, 1988.
5. Dodge, C. The Bed: A Medium for Intimate Communication. Extended Abstracts of CHI '97. ACM Press, pp. 371-372, 1997.
6. Gaver, W., Beaver, J. and Benford, S. Ambiguity as a resource for design. Proc. CHI 2003. ACM Press, 2003.
7. Glancey, J. The Story of Architecture. Dorling Kindersley, New York, 2000.
8. Katagiri, Y., Nass, C. and Takeuchi, Y. Cross-cultural studies of the computers are social actors paradigm: The case of reciprocity. In M. J. Smith, G. Salvendy, D. Harris, and R. Koubek (Eds.), Usability Evaluation and Interface Design. Lawrence Erlbaum Associates, 2001.
9. Kato, M. Cute Culture: The Japanese obsession with cute icons is rooted in cultural tradition. Eye: The International Review of Graphic Design, Vol. 11, #44, London, pp. 58-64, 2002.
10. Kay, A. Computers, networks and education. Scientific American, 265, pp. 100-107, September 1991.
11. Kurzweil, R. We Are Becoming Cyborgs. March 15, 2002, http://www.kurzweilai.net/
12. Lamming, M. and Flynn, M. Forget-me-not: Intimate Computing in Support of Human Memory. In Proc. FRIEND21: Int. Symposium on Next Generation Human Interfaces, pp. 125-128, 1994.
13. Lupton, E. Skin: Surface, Substance and Design. New York: Princeton Architectural Press, 2002.
14. McDonough, W. and Braungart, M. Cradle to Cradle: Remaking the Way We Make Things. NY: North Star Press, 2002.
15. Maines, R. The Technology of Orgasm: "Hysteria," the Vibrator, and Women's Sexual Satisfaction. Johns Hopkins University Press, 2001.
16. Newman, L. S. Intentional and unintentional memory in young children: Remembering vs. playing. J. of Exp. Child Psychology, 50, pp. 243-258, 1990.
17. O'Leary, J. Toy selection that can shape a child's future. The Times, 1990.
18. Piaget, J. and Inhelder, B. The Psychology of the Child. Basic Books, 1969.
19. Project Paradise, SIGGRAPH 1998. eserver.org/cultronix/pparadise/happinessflows.html
20. Silva, J.S. and Ball, M.J. Prognosis for year 2013. International Journal of Medical Informatics, 66, pp. 45-49, 2002.
21. Sparke, P. A Century of Design. London: Reed Consumer Books, 1998.
22. Strong, R. and Gaver, W. Feather, Scent and Shaker: Supporting Simple Intimacy in Videos. CSCW '96, pp. 29-30, 1996.
23. Tennenhouse, D. Proactive Computing. CACM, 43, 5, pp. 43-50, May 2000.
24. Thackara, J. Design after Modernism. London: Thames and Hudson, 1988.
25. Weiser, M. The Computer for the Twenty-first Century. Scientific American, 265, 3, pp. 94-104, 1991.
26. Weiser, M. and Brown, J.S. The Coming Age of Calm Technology. In P. Denning and R. Metcalfe (Eds.), Beyond Calculation: The Next Fifty Years of Computing. NY: Springer-Verlag, 1997.
27. Weiser, M., Gold, R. and Brown, J.S. The origins of ubiquitous computing research at PARC in the late 1980s. IBM Systems Journal, 38, 4, pp. 693-696, 1999.

319
Ubicomp 2003 Workshop Proposal
on Ubiquitous Commerce

George Roussos
School of Computer Science, Birkbeck, University of London, Malet Str, London WC1E 7HX, UK
+44 20 76316324
[email protected]

Anatole Gershman
Accenture Technology Labs, 3773 Willow Road, Northbrook, IL 60062, USA
[email protected]

Panos Kourouthanassis
Dept. Management Science, Athens Univ. of Economics and Business, Evelpidon 47A, 113 62 Athens, Greece
+30 2108203663
[email protected]

ABSTRACT
Ubiquitous computing offers many varied applications but will probably have its most significant impact on day-to-day living. "The most profound technologies are those that disappear," wrote Mark Weiser. "They weave themselves into the fabric of everyday life until they are indistinguishable from it." Over the past decade, researchers have sought to understand the ways ubiquitous technologies would affect different aspects of everyday activities, including learning, entertainment, collaborative work and the home environment. But ultimately, new technologies will be used for conducting business. This workshop will bring together researchers and practitioners interested in the uses as well as the implications of ubiquitous commerce.

Keywords
Ubiquitous computing, pervasive computing, electronic commerce, mobile commerce

WORKSHOP THEMES
The rapid proliferation of e-commerce technologies over the past decade has fundamentally transformed the way we conduct business. This trend is expected to accelerate in the coming years due to a number of different factors, including the introduction of new mobile and ubiquitous computing technologies; the wider recognition by business of the strategic advantages offered by the implementation of ubiquitous computing and communications infrastructures; the emergence of novel business models which become possible only through this technology; and, last but not least, the development of new economics that can be used to understand and value ubiquitous commerce activity. There are thus several areas of contestation that must interact to produce the conditions for the successful implementation of ubiquitous commerce. Indeed, recent experience has shown that the concerns of these (traditionally distinct) areas are intimately interrelated and thus have to be co-developed in parallel. Moreover, researchers and practitioners from all fields need to be informed of the concerns and the priorities of each other so that they can include each other's requirements in their models. We propose to hold this workshop to provide a forum for the expression of this collaborative ethic across disciplines.

To this end, we have identified both horizontal and vertical axes to describe the particular areas of interest that we would like to see expressed in this workshop. On the horizontal axis we have:
• Technologies: smart home, radio frequency identification, ubiquitous payments and value transfer, location and context awareness, agents.
• Legal: intellectual property protection, access to intellectual property, privacy protection, ownership of personal data.
• Social: effects on structures, emergent social practices, effects on roles within social organization units, identity and anonymity.
• Economics: pricing of ubiquitous services, valuation of goodwill and information goods, fair pricing for personal data and privacy.
• Business: ubiquitous business models, supply chain management and optimization, industrial design, process design, ubiquitous product development, customer relationship management.
• Experience design: appliances, architecture and building, ubiquitous commerce spaces.
On the vertical axis we have:
• Entertainment, infotainment, retailtainment and gaming.
• Tourism and experience recording.
• Ubiquitous assistance through valets and personal agents.
• Pervasive retail.
• Remote shopping with smart home infrastructures.

320
• Health- and home-care.
• Industrial applications.
• Automotive telematics.

WORKSHOP GOALS
Ubiquitous computing has been recognized as an inherently interdisciplinary research field, requiring the collaboration of several technical disciplines including, but not restricted to, computing, telecommunications, human-computer interfaces and industrial design. In addition to these, ubiquitous commerce requires contributions from the product development, finance, business process management, standardization, law, consumer experience design and social science points of view to produce useful results. However, researchers with the required expertise do not have a forum to exchange ideas and concerns and develop common agendas and roadmaps for research.

The proposed ubiquitous commerce workshop will aim to bring together researchers with diverse backgrounds to:
• Share understandings and experiences as well as recognize each other's concerns.
• Foster collaboration across research communities.
• Create effective channels of communication to transfer lessons learnt from one community to the other.
• Co-develop a roadmap for future research directions.

WORKSHOP ORGANISERS
The workshop organizers have worked previously on the design and development of ubiquitous commerce systems and recognize the importance of interdisciplinary research and development teams. They have found that interaction between their respective disciplines was critical in their previous work, and propose this workshop to promote this type of interdisciplinary cross-fertilization. The three organizers have a broad range of international experience and complementary expertise: GR is conducting research in the various technical aspects of ubiquitous commerce with emphasis on retail and identity management; AG is developing prototype ubiquitous commerce systems aiming to transfer the technologies to practitioners; and PK is concerned with the implications of ubiquitous commerce for businesses.

George Roussos
George Roussos is a Lecturer in IT Applications at the School of Computer Science and Information Systems, Birkbeck College, University of London, UK. Before joining Birkbeck he was the Research and Development Manager of Pouliadis Associates Corporation, Athens, Greece, where he was responsible for the strategic development plan for new IT products, primarily in the areas of knowledge management and the mobile Internet, as well as for international collaborations in new technology fields. He has also held positions as an Internet Security Officer at the Ministry of Defence, Athens, Greece, where he designed the Hellenic armed forces Internet exchange and domain name systems, and as a research fellow at Imperial College, London, UK, where he conducted research in distributed systems and algorithms. His current research interests include ubiquitous and pervasive computing and commerce, with particular emphasis on ubiquitous narratives, trailblazing and retail, as well as active rules for sensor networks. George is also a director of Netsmat Technologies Ltd, a start-up providing home care applications over digital TV infrastructures to the NHS. He is a member of the ACM and an associate of the IEEE and the IEEE Computer Society. He holds a B.Sc. in (Pure) Mathematics from the University of Athens, Greece, an M.Sc. in Numerical Analysis and Computing from the University of Manchester Institute of Science and Technology, UK, and a Ph.D. in multi-resolution computer-aided geometric design from Imperial College, University of London, UK.

Anatole Gershman
Anatole Gershman is a partner at Accenture, one of the world's largest consulting companies, and Director of Research at Accenture Technology Labs. Under his direction, the laboratory has been conducting extensive applied research in ubiquitous commerce across many industries. Anatole Gershman holds a Ph.D. in Computer Science from Yale University and has been conducting or directing applied research for over 25 years.

Panos Kourouthanassis
Panos Kourouthanassis is a Research Officer at ELTRUN, the eBusiness Center hosted at the Athens University of Economics and Business (AUEB), Athens, Greece. His research interests include information systems design, ubiquitous computing and mobile business. Panos holds a B.Sc. in Information Systems and an M.Sc. in Decision Sciences, both from AUEB, and is preparing his Ph.D. thesis on pervasive retail information systems at the Department of Management Science and Technology, AUEB.

WORKSHOP ACTIVITIES
This workshop will attempt to attract participants with technical, business, legal and economics backgrounds, as well as with experience in consumer culture research and the social implications of changes brought about by new methods of conducting commerce. The workshop will be organized around position statements and panel discussions. Participation will be invited on the basis of relevance and originality of contributions, and so as to represent the multidisciplinary nature of the workshop.

REFERENCES
1. R. Asthana, M. Cravatts and P. Krzyzanowski: An indoor wireless system for personalized shopping

321
assistance, Proc. of IEEE Workshop on Mobile Computing Systems and Applications, Santa Cruz, California, IEEE Computer Society Press, 1994, 69-74.
2. J. Burkhardt, H. Henn, S. Hepper, K. Rindtorff and T. Schaeck: Pervasive Computing, Addison-Wesley, 2001.
3. A. Fano and A. Gershman: The Future of Services in the Age of Ubiquitous Computing, Communications of the ACM, 45, 12 (December 2002), 83-85.
4. A. Gershman: Ubiquitous Commerce - Always On, Always Aware, Always Pro-active, Proc. SAINT 2002, 37-38.
5. E. A. Gryazin and J.O. Tuominen: The SMART Environment for Easy Shopping, Proc. Int. ITEA Workshop on Virtual Home Environment, February 2002.
6. J. King: Is IT Ready to Support Ubiquitous E-Commerce?, Computer World, March 2000.
7. M. Kärkkäinen and J. Holmström: Wireless product identification: Enabler for handling efficiency, customisation, and information sharing, Supply Chain Management: An International Journal, vol. 7, no. 4, 2002, 242-252.
8. V. Kotlyar, M. Viveros, S.S. Duri, R.D. Lawrence and G.S. Almasi: A Case Study in Information Delivery to Mass Retail Markets, in T. Bench-Capon, G. Soda and A M. Tjoa (Eds.): DEXA'99, LNCS 1677, 1999, 842-851.
9. P. Kourouthanassis, L. Koukara, C. Lazaris and K. Thiveos: Last Mile Supply Chain Management: MyGROCER Innovative Business and Technology Framework, Proc. 17th International Logistics Conference, 2001, 264-273.
Recommendations, Data Mining and Knowledge Discovery, vol. 5, 2001, 11-32.
13. G. Roussos, L. Koukara, P. Kourouthanasis, J.O. Tuominen, O. Seppala, G. Giaglis and J. Frissaer: A case study in pervasive retail, ACM MOBICOM WMC02, 2002, pp. 90-94.
14. G. Roussos, P. Kourouthanassis and T. Moussouri: Appliance Design for Mobile Commerce and Retailtainment, to appear in a special issue on Appliance Design in Personal and Ubiquitous Computing, 2003.
15. G. Roussos, D. Peterson and U. Patel: Mobile Identity Management: An Enacted View, to appear in a special issue on mobile business of the Int. Jour. E-Commerce, 2003.
16. G. Roussos, D. Spinellis, P. Kourouthanasis, E. Gryazin, M. Pryzbliski, G. Kalpogiannis and G. Giaglis: Systems Architecture for Pervasive Retail, Proc. ACM SAC 2003, Melbourne, Florida, 2003, pp. 631-636.
17. S. Sarma, D.L. Brock and K. Ashton: The Networked Physical World: Proposals for Engineering the Next Generation of Computing, Commerce and Automatic-Identification, Whitepaper WH-001, Auto-ID Center, MIT, Cambridge, MA.
18. J. Smaros and J. Holmstrom: Viewpoint: Reaching the consumer through e-grocery VMI, International Journal of Retail and Distribution Management, vol. 28, no. 2, 2000, 55-61.
19. M. Strassner and T. Schoch: Today's Impact of Ubiquitous Computing on Business Processes, in F. Mattern (ed.) Proc. Pervasive 2002, Short Paper
Proceedings, Zurich, 2002
10. P. Kourouthanasis, G. Lekakos and G. Doukidis, 2001,
"Challenges for Automatic Home Supply 20. P. Tarasewich: Wireless Devices for Ubiquitous
Replenishment in e-Retailing", e-Commerce Frontiers Commerce: User Interface Design and Usability, in
2001, Cheshirehenbury, Macclesfield, UK. Brian E. Mennecke and Troy J. Strader, (Eds.) Mobile
Commerce: Technology, Theory, and Applications,
11. P. Kourouthanassis and G. Roussos: Consumer Culture 2002, Hershey, PA: Idea Group Publishing, 26-50
and Pervasive Retail", IEEE Pervasive Computing, to
appear in the April issue on The Human Experience, 21. C. Trigueros: ALBATROS: Electronic tagging
April 2003. solutions for the retail sector, Informatica El Corte
Inglés, Madrid, Spain, 1999.
12. G.S. Lawrence, V. Almasi, M.S. Kotlyar, M. Viveros
and S.S. Duri: Personalization of Supermarket

322
AIMS2003: Artificial Intelligence In Mobile Systems

Antonio Krüger
Geb. 36, Zimmer 106
FR Informatik
Universität des Saarlandes
66123 Saarbrücken, Germany
+49 681 302 4137
[email protected]

Rainer Malaka
European Media Laboratory GmbH
Villa Bosch
Schloss-Wolfsbrunnenweg 33
69118 Heidelberg, Germany
+49 6221 533 206
[email protected]

SUMMARY
Today's information technology is rapidly moving small computerised consumer devices and hi-tech personal appliances from the desks of research labs into sales shelves and our daily life. Various platforms, from low-performance PDAs and embedded computers in cameras, cars, or mobile phones up to high-performance wearable computers, will become essential tools in many situations for private and professional use. These systems require new interaction metaphors and methods of control. Well-known interaction devices, such as mouse and keyboard, are not necessarily available, rendering user interfaces that rely on them inappropriate. Other resources, such as power or networking bandwidth, may be limited or unreliable depending on time and location. Moreover, the physical environment and context are changing rapidly and must be taken into account appropriately. In the future the focus will shift from single users, using single services on single artefacts, towards groups of users collaborating using a combination of different services in physical spaces equipped with personal as well as public dynamically configured artefacts (ubiquitous computing or ambient technology).

Therefore, the main challenge for the success of mobile systems is the design of smart user interfaces and software that allow ubiquitous and easy access to personal information and that are flexible enough to handle changes in user context and availability of resources. Artificial intelligence has investigated the problems of making user interfaces smart and cooperative for many years, and has lately been attacking the challenges of explicitly dealing with limited resources. AI methods provide a range of solutions for those problems and currently seem to be among the most promising tools for building location- and situation-aware mobile systems that support users at their best and behave cooperatively in unobtrusive ways.

AIMS 2003 will be the fourth workshop in a row, a successor of AIMS 2000 (with ECAI 2000, Berlin), AIMS 2001 (with IJCAI '01, Seattle), and AIMS 2002 (with ECAI '02), all organised by the same persons and institutions. In order to foster the investigation of AI methods in ubiquitous computing scenarios, AIMS 2003 will be held in conjunction with Ubicomp 2003, a combination that we believe will be very fruitful for both research areas.

SCOPE
In the AIMS 2003 workshop we intend to bring together researchers working in the sub-fields of AI described above and those working on the design of mobile applications and devices (wearable as well as environmental). The scope of interest includes, but is not limited to:

• Location awareness
• Context awareness
• Interaction metaphors and interaction devices for mobile/ubiquitous systems
• Smart user interfaces for mobile/ubiquitous systems
• Multi-modal interfaces for mobile/ubiquitous systems
• Situation-adapted user interfaces
• Adaptation to limited resources
• Fault tolerance
• Service discovery, service description languages and standards

We encourage submissions from researchers and practitioners in academia, industry, government, and consulting. Students, researchers and practitioners are invited to submit papers (max. 8 pages) describing original, novel, and inspirational work. The submissions will be reviewed by an international group of researchers and practitioners.

PROGRAM COMMITTEE:

Jörg Baus (DFKI, Germany)


Mark Billinghurst (HITLab, New Zealand)
Andreas Butz (Saarland University, Germany)
Keith Cheverst (Lancaster University, UK)
Tobias Höllerer (University of California, Santa Barbara, USA)
Eric Horvitz (Microsoft Research, USA)
Christian Kray (Lancaster University, UK)
Mandayam Raghunath (IBM, USA)
Thomas Rist (DFKI, Germany)
Georg Schneider (University of Applied Sciences in Trier, Germany)
Howard Shrobe (MIT, USA)
Massimo Zancanaro (ITC-IRST, Italy)

http://w5.cs.uni-sb.de/~krueger/aims2003/

Author Index

Aarts, Ronald 199
Abowd, Gregory D. 137, 283, 289
Adlam, T. D. 233
Agamanolis, Stefan 163, 171, 173, 271
Alahuhta, Petteri 215
Alldrin, Neil G. 44
Amento, Brian 300
Ames, Morgan 159
Antifakos, Stavros 155, 161, 207
Aoyama, Tomonori 201
Assad, Mark 235
Atkeson, Christopher 141
Avery, Michael 237

Backlund, Sara 179
Bagci, Faruk 191
Bardram, Jakob E. 294
Barkhuus, Louise 165
Bassoli, Arianna 171
Becker, Christian 309
Beckmann, Chris 167
Beigl, Michael 9, 197, 217
Bell, Genevieve 316
Berglin, Lena 179
Bernson, Jesper 179
Berzowska, Joey 131
Bettadapur, Chinmayi 159
Bianciardi, David 13
Binder, Frank 217
Blackburn, Terence 239
Blackwell, Alan F. 123, 169
Boger, Jennifer 219
Bolter, Jay David 227
Borchers, Jan 306
Borovoy, Rick 312
Borriello, Gaetano 21, 48, 74, 289
Boutellier, Jani 155
Boyer, Robert 44
Bradbury, Jeremy 281
Braun, Elmar 147
Brignull, Harry 312
Brooke, Tim 17, 316
Brown, Steven W. 44
Brucker-Cohen, Jonah 173
Brumback, Christine 93
Brunette, Waylon 21

Cadag, Eithon 74
Camacho-Guerrero, José Antonio 137
Campbell, Roy 309
Canny, John 303
Caravia, Yvonne 131
Carmichael, David 241
Carter, Scott 24
Chakravorty, Rajiv 195
Chen, Harry 183
Chou, Paul 309
Churchill, Elizabeth 24, 316
Churi, Ariel 28
Coroama, Vlad 221
Cox, Donna 312
Crow, David 31
Crowcroft, Jon 185
Cullinan, Cian 171
Cutting, Daniel 243

Davenport, Duncan 115
Davenport, Glorianna 31
Decker, Christian 9, 197, 217
Denoue, Laurent 24
Dey, Anind K. 159, 167
Dickie, Connor 281
Dijk, Esko 199
Do, Ellen Yi-Luen 229
Donath, Judith 61
Dourish, Paul 303
Dow, Steven 131
Doyle, Linda 209
Dragovic, Boris 185, 195

Edwards, Keith 306
Ekberg, Mats 225
Endres, Christoph 245
Evans, Michael A. 247

Fantauzza, Jill 131
Fels, Sidney 193
Fernie, Geoff 219
Ferscha, Alois 261
Fiano, Vincent 131
Finin, Tim 183
Fistre, Julien 131
Foley, Timothy J. 44
Fox, Armando 35, 309
Friday, Adrian 309
Frost, Jeana 38

Gaye, Lalya 58
Gellersen, Hans 9
Gershman, Anatole 320
Gitman, Yury 42
Glaser, Daniel 175
Gomez de Llarena, Carlos J. 42
Greenberg, Saul 275
Griswold, William G. 44, 74
Gromala, Diane 137, 227
Gross, Mark D. 229
Grossklags, Jens 303
Gruteser, Marco 249
Guttman, Ed 93
Hague, Rob 169
Harashima, Hiroshi 157
Hartl, Andreas 147
Hayakawa, Keisuke 213
Hayes, Gillian 97
Hazas, Mike 291
Helfman, Jonathan 24
Hendry, David G. 81
Hightower, Jeffrey 48
Hinckley, Ken 263
Holmquist, Lars Erik 58, 207, 279
Hong, Jason I. 187
Honjo, Masaru 189
Hsi, Sherry 223
Huang, Elaine M. 149
Hudson, Adam 251
Hudson, James 52
Hudson, Michael 71
Hwa, Howard 115

Iachello, Giovanni 131
Igoe, Tom 13
Iles, Alastair 175
Ilinski, Roman 151
Ingmarsson, Magnus 225
Intille, Stephen 265, 273
Iossifova, Milena 56
Isaksson, Mikael 225
Isbell, Charles 300
Ishii, Yoko 153
Ito, Sadanori 193
Izadi, Shahram 312

Jäppinen, Pekka 253
Jacobs, Margot 58
Jafarinaimi, Nassim 227
Jang, Seiie 177
Jenkins, J. R. 81
Jiang, Xiaodong 303
Johanson, Brad 306
Joshi, Anupam 183

Kaartinen, Jouni 215
Kam, Lilly 31
Kam, Matthew 175
Karahalios, Karrie 61
Kashitani, Atsushi 213
Keller, Markus 261
Kieslinger, Michael 65
Kindberg, Tim 69
Kindratenko, Volodymyr 312
Kishino, Yasue 213
Klein, Yves Amu 71
Knowles, Craig 281
Koike, Hideki 153
Korhonen, Ilkka 294
Kortuem, Gerd 289
Kourouthanassis, Panos 320
Krüger, Antonio 323
Krohn, Albert 9, 217
Krumm, John 291

LaMarca, Anthony 74
Landay, James A. 187
Lee, Sanggoog 177
Lee, Vivienne 265
Leuchtner, Michael 197
Li, Yang 187
Light, John 97
Lightman, Alex 312
Lin, Vivian 28
Liu, Sheng 211
Lucas, Charles P. 44

Magerkurth, Carsten 267, 277
Mainwaring, Scott 303
Malaka, Rainer 323
Malm, Esko-Juhani 215
Mamuji, Aadil 77
Mankoff, Jennifer 159
Mantoro, Teddy 255
Mase, Kenji 193
Matsuguchi, Tetsuya 193
McCarthy, Joseph F. 81, 84
McCurdy, Neil J. 44
McDonald, David 74
Michahelles, Florian 155, 269
Mihailidis, Alex 219, 294
Mikesell, Dan 88
Minami, Masateru 201
Moore, Julian 171
Morikawa, Daisuke 189
Morikawa, Hiroyuki 201
Moriwaki, Katherine 209
Morrier, Michael J. 137
Morris, Margaret 17
Mueller, Florian 271
Munguia Tapia, Emmanuel 273
Murphy, Paul 24

Naemura, Takeshi 157
Nain, Delphine 131
Nakanishi, Yasuto 153
Nelson, Les 24
Neustaedter, Carman 275
Nguyen, David H. 84
Nicholls, Jim 229
Niemeyer, Greg 90
Nishio, Shojiro 213

O’Mahoney, Margaret 209
Ohashi, Masayoshi 189
Oka, Kenji 153

Pan, Pengkai 31
Parkes, Alan 52
Parness, Amy 93
Patanapongpibul, Leo 195
Patel, Dipak 163
Pattison, Eric 97

Paulos, Eric 3, 316
Pering, Trevor 97
Petzold, Jan 191
Pham, Thanh 77
Phelps, Ted 100
Picard, Rosalind 271
Pinhanez, Claudio 265
Plewe, Daniela 277
Pointer, David 312
Policroniades, Calicrates 195
Posegga, Joachim 298
Poupart, Pascal 219
Prante, Thorsten 267, 277

Röcker, Carsten 277
Raghunathan, Vijay 97
Rajani, Rakhi 69
Randell, Cliff 100
Rao, Srinivas G. 205
Rashid, Al Mamunur 84
Rea, Adam 21
Rebula, John 273
Rehg, James 283
Reitberger, Wolfgang 131
Rhee, Sokwoo 211
Robinson, James G. 108, 111
Robinson, Peter 169
Robinson, Philip 9, 298
Rode, Jennifer A. 123
Rogers, Yvonne 100
Roman, Manuel 309
Ron, Ruth 104
Roussos, George 320
Russell, Daniel M. 149
Rydenhag, Tobias 179

Sanneblad, Johan 279
Sato, Yoichi 153
Schiele, Bernt 155, 161, 207, 269, 306
Schilit, Bill N. 74
Schmidt, Albrecht 9, 155
Scott, James 291
Seetharam, Deva 211
Semper, Robert J. 223
Serita, Yoichiro 131
Shankar, Narendar 298
Shapiro, R. Benjamin 44
Shell, Jeffrey 281
Shell, Jeffrey S. 77
Shen, Jia 257
Shimada, Yoshihiro 157
Singer, Eric 13
Smith, Brian K. 38
Smith, Marc 115
Sohn, Changuk 77
Somani, Ramswaroop 283
Soroczak, Suzanne 84
Spasojevic, Mirjana 69, 223
Steinbach, Leonard 119
Stenzel, Richard 267, 277
Stoddard, Steve 273
Streitz, Norbert 277, 312
Stringer, Mark 123
Subramanian, Anand Prabhu 203
Sue, Alison 149
Sumi, Yasuyuki 193
Summet, Jay 283
Sundar, Murali 97

Tabert, Jason 74
Tallyn, Ella 69
Tanaka, Yu 157
Tandler, Peter 306
Terada, Tsutomu 213
Terveen, Loren 300
Toye, Eleanor F. 123
Trumler, Wolfgang 191
Truong, Khai N. 137
Tsukamoto, Masahiko 213

Ungerer, Theo 191
Ushida, Keita 157

Valgårda, Anna 165
VanArsdale, David 227
van Alphen, Daniel 277
van Berkel, Kees 199
Van Kleek, Max 181
van Loenen, Evert 199
Vekaria, Pooja C. 137
Vertegaal, Roel 77, 281
Vidales, Pablo 195
Vildjiounaite, Elena 215
Vina, Victor 127
Vogt, Harald 298

Wan, Dadong 285, 294
Wang, Ningya 211
Want, Roy 97
Wei, Sha Xin 131
Weller, Michael 229
White, David Randall 137
Wilson, Daniel 141
Winograd, Terry 35
Witchey, Holly R. 119
Wolf, Ahmi 56
Woo, Woontack 177
Wren, Christopher R. 205

Xiao, Jason 211

Yamaguchi, Akira 189
Yoshihisa, Tomoki 213

Zimmer, Tobias 9, 217

