Vehicular Lifelogging: New Contexts
and Methodologies for Human-Car
Interaction
Joshua McVeigh-Schultz
University of Southern California (USC), School of Cinematic Arts (SCA), Media Arts and Practice program (iMAP)
[email protected]

Avimaan Syam
USC, SCA, Interactive Media Division (IMD)
[email protected]

Amanda Tasse
USC, SCA, iMAP
[email protected]

Jennifer Stein
USC, SCA, iMAP
[email protected]

Michael Annetta
USC, SCA, IMD
[email protected]

Jacob Boyle
USC, SCA, IMD
[email protected]

Simon Wiscombe
USC, SCA, IMD
[email protected]

Emily Duff
USC, SCA, IMD
[email protected]

Scott S. Fisher
USC, SCA, Assoc. Dean of Research and Founding Chair of IMD
[email protected]

Jeff Watson
USC, SCA, iMAP
[email protected]
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, or
republish, to post on servers or to redistribute to lists, requires prior
specific permission and/or a fee.
CHI’12, May 5–10, 2012, Austin, Texas, USA.
Copyright 2012 ACM 978-1-4503-1016-1/12/05...$10.00.
Abstract
This paper presents an automotive lifelogging system
that uses in-car sensors to engage drivers in ongoing
discoveries about their vehicle, driving environment,
and social context throughout the lifecycle of their
car. A goal of the design is to extend the typical
contexts of automotive user-interface design by (1)
looking inward to the imagined “character” of the car
and (2) looking outward to the larger social context
that surrounds driving. We deploy storytelling and
theatrical strategies as a way of moving our thinking
outside the familiar constraints of automotive design.
These methods help us to extend the concept of a
lifelog to consider the “lives” of objects and the
relationship between humans and non-humans as
fruitful areas of design research.
Keywords
lifelogging; storytelling; car-human relationship;
memory annotation; design fiction; experience design
ACM Classification Keywords
H.5.1 Multimedia Information Systems
[Miscellaneous]; D.2.2 Design Tools and Techniques
[Miscellaneous]; H.5.2 User Interfaces [Theory and
Methods]; H.5.2 User Interfaces
[Evaluation/methodology]
General Terms
Design; Experimentation
Introduction
Automobiles have become mobile sensing and computing devices that can support contextually rich interfaces and interactions. However, innovation in automotive design tends to be dominated by a focus on driver-assistance, efficiency, and safety [16]. This emphasis, while understandable, can constrain automotive design by limiting its context to the goals of driving. Alternative research has challenged these constraints in a number of ways: by researching the larger social contexts of driving [24], by designing interfaces for passengers and drivers to interact together [3], or by considering the internal psychology of drivers who project anthropomorphic identity onto their cars [8]. This paper presents novel design methodologies for exploring these expanded notions of context in and around the car. In particular, we used storytelling strategies and experience design methods as a way of envisioning novel relationships between in-car sensors and meaningful drive scenarios. These novel interpretations of sensor states then became the basis for an automotive lifelog design.

Lifelog
The concept of a lifelog, as a record or index of personal information, has emerged in various contexts. As early as 1945, Vannevar Bush proposed a hypothetical device, the Memex, to exhaustively record and organize the details of a researcher's mental and physical experience [6]. In the 1980s Steve Mann began experimenting with streaming video and started recording his life using the Wearable Wireless Webcam in 1994 [14]. Later, inspired by Bush's vision of the Memex system, Gordon Bell began recording his life using MyLifeBits [10], software later augmented by the SenseCam video recording system [7,11].
Lifelogs for Objects and Environments
More recently, attention has turned to the relationship
between a lifelog system and objects in the
environment. For example, Lee et al. designed a lifelog
system that captures images and other data from the
perspective of objects (in proximity to humans) [12].
However, such research is less interested in the
possibility that objects and environments might
themselves be positioned as the subjects of lifelogging.
The work of the University of Southern California’s
Mobile and Environmental Media Lab (MEML) looks at
objects and environments as entities with their own
stories to tell. Through projects like The Million Story
Building [21] and StoryObjects [19] we have explored
the lifelog as a narrative platform for objects and
buildings [18]. This model of lifelog research has led to
insights about how to animate built environments using
networked objects [20]. In more recent work, PUCK
positions lifelogging as a platform for supporting
relationships between a building and its inhabitants
[22]. This work departs from the familiar emphasis on
video recording as the primary tool of the lifelog.
Addressing concerns of surveillance [13], we align with
research that confronts this issue by emphasizing
mutual or horizontal participation [1]. In this way, our
design strategy aims to support reciprocal relationships
and learning experiences between objects and humans.
Vehicular Lifelogging
Building on this work, the MEML team designed a
vehicular lifelog prototype for a MINI Countryman. This
research relied on strategies of storytelling and
experience design to help us envision new driving
scenarios. In particular we were interested in
understanding how a vehicle-based lifelog might (1)
impact the way that drivers project character onto their
cars, (2) support data-driven discoveries about user
patterns, and (3) point to new models of life-annotation. Using the various in-car sensors within the
MINI, we have designed a lifelogging system that
tracks events, milestones, achievements, and tallies
associated with driving. These modes of interaction
offer opportunities for the car to reveal its “character”
by surprising the driver with contextually relevant
messages displayed on the MINI infotainment system.
In addition, we developed an iPad-based lifelog
interface, which enables users to review the various
events collected by the car’s lifelog. Future iterations of
this platform will provide the driver with opportunities
to teach the car about important social contexts that lie
beyond the car’s sensor capabilities.
Methodology
Our approach to the vehicular lifelogging project draws on
storytelling and performative methodologies for rapid
prototyping. Housed within USC’s School of Cinematic
Arts, MEML straddles the cultures of interaction design,
visual storytelling, and performance. We borrow
strategies from the filmmaker’s toolkit—storyboarding,
video mock-ups, and narrative-driven experience
demos—in order to prototype new interactive contexts.
This approach draws on performative methodologies
familiar to the interaction design world, including
bodystorming [5,15] and experience prototyping [4].
Our methodology shares with these approaches an
emphasis on enacting scenarios as a tool for rethinking
context. However, there are also important differences
between our approach and more familiar theatrical
design techniques. In this project, since we were
interested in discovering potential story in lifelog data
and in mapping new relationships between sensors and
context, our objectives were often design-problem
agnostic. In this sense, we had to move beyond the
usual anchors of improvisational brainstorming. And
since lifelogging technologies imply longer durations of
interaction than are possible in real-time interaction
scenarios, many of our “discoveries” had to be made
during the story-crafting phase as opposed to the
experience-enacting phase.
Within automotive user interface research, these
techniques are less familiar, but there is emerging
interest in performative and narrative-based design
methodologies. For example, the theater-system
technique—a more dynamic variation on the “Wizard of
Oz” approach—has been used as a rapid prototyping
method in automotive design [17]. Our approach is not
only performative in this sense, but also seeks to
discover novel interaction scenarios by using the tools
of storytelling in visual media to prototype longer
durations of use.
Storyboarding
As a way of understanding how car-sensors might map
onto potential lifelog scenarios, we developed short
scripts for use-cases and then worked with an
illustrator to envision the narrative context of a user
experience. The comic-style storyboards illustrated in [Fig. 1] and [Fig. 2] present an example scenario that we crafted to help us envision possible contexts of life-annotation. By forcing ourselves to construct a robust narrative context for this scenario, we needed to grapple with a car’s entire lifecycle, which could potentially include being passed down through multiple generations of drivers. In this example, a mother passes on the car to her daughter. By framing vehicular lifelogging in this way, we had to think more deeply about the implications of multiple drivers and multiple layers of lifelog information. Additional storyboard scenarios—not pictured here—focused on (1) surprising sensor-based achievements, (2) in-car alternate reality games, (3) practices of community lifelogging among drivers, and (4) the discovery of patterns in aggregate data. For each of these examples, the process of creating storyboards served as an entry point into questioning our assumptions about the context of design and enabled us to imagine valuable alternatives.

Storyboarding Process:
• Scripting
• Workshopping and revision
• Collaboration with illustrator
• Scenario crafting

Benefits:
This storytelling strategy helped us to probe the possibilities of vehicular lifelogging by:
1. raising questions about multiple drivers,
2. enabling us to envision the lifelog over an extended duration,
3. encouraging us to consider novel subjects like location-based memory annotation as a conceivable topic of automotive design.

Credit: Illustrations by Bryant Paul Johnson.

Figure 1.
Using storyboards to
prototype longer durations
of interaction:
Typical interaction-design
prototypes are intended to be
tested over minutes rather
than years. By conducting
narrative exercises in visual
media, designers can gain
access to a deeper
understanding of time and
duration of interaction. In this
way, we use narrative
structure to speculate about
the ways that experience
unfolds over the entire
lifecycle of the car.
Figure 2.
Design Fiction
Our strategies of storytelling draw on an emerging area
of design research known as design
fiction. Bruce Sterling uses the concept of design fiction
to describe “a space between design and science
fiction” and points to “the deliberate use of diegetic
prototypes to suspend disbelief about change” [23].
Our work similarly constructs narrative worlds for our
prototypes as a way of probing the unknown contexts
of an alternative future. In this sense, design fictions
can serve as conversation pieces that provoke new
ways of imagining alternative worlds and novel
experiences [2]. Others have extended this position by
describing the ways in which design is an active
construction of culture [9]. Design fiction represents a
reflective mode of speculative thinking that can open
up new questions and unfamiliar opportunities. In our
own research, especially during the preliminary ideation
phase, visual storytelling helps us to think outside the
constraints of familiar user interaction scenarios.
Experience Prototyping
The second core component of our methodology
involves translating our narrative scenarios into
experience prototypes. For vehicular lifelogging, this
process involved (1) paper prototyping, (2) demo
drives, (3) theatrical “Wizard of Oz” techniques, (4)
interactive prototypes of sensor-driven events, and (5)
prototyping of lifelog review interfaces on an iPad. The
interactive prototypes enabled in-car sensors to
communicate with an iOS device through the MINI
infotainment system. Sensors triggered contextually
relevant pop-ups (during a drive), and these
notifications could be reviewed later in our lifelog
review interface (iPad application). Demo drives were
organized into multiple theatrical “Acts” separated by
an imagined passage of time. This enabled us to
explore longer durations of interaction by acting out
various vignettes within the lifecycle of a car.
System Design
We designed and implemented a system that allowed
us to rapidly prototype and iterate on Lifelog Events.
The system consists of two major pieces: an iOS app,
and a server-based component. In our first version of
the system, the iOS app read sensor data from the car
and watched for Lifelog Events, then communicated
these events to the server. The server—written in PHP
and using a MySQL database—recorded the events it
received from the client and then visualized them
according to time and location.
This initial version proved difficult to work with, as all
events had to be manually written and compiled into
the iOS app. As a result, we reengineered the server
component not only to track which events were found,
but also to define which events we were looking
for—which the iOS app would load on connecting. From here we
created a simple web interface for defining new—or
modifying existing—events. This new interface changed
the creation of lifelog events from an engineering task
to a writing and storytelling task.
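The reengineered flow can be sketched as follows. This is an illustrative Python sketch, not the production PHP/iOS code: the class names, endpoint paths, and the shape of an event definition are all our own assumptions. The key idea it shows is that the client fetches event definitions as data at connect time, so new events require no recompilation.

```python
import json

class StubServer:
    """Stands in for the PHP/MySQL backend in this sketch."""
    def __init__(self, definitions):
        self._definitions = json.dumps(definitions)
        self.recorded = []

    def get(self, path):
        return self._definitions

    def post(self, path, payload):
        self.recorded.append(payload)

class LifelogClient:
    """Loads event definitions on connect, then watches sensor snapshots."""
    def __init__(self, server):
        self.server = server
        self.definitions = []

    def connect(self):
        # Definitions are authored in the web interface and served as data,
        # so adding an event is a writing task rather than a compile cycle.
        self.definitions = json.loads(self.server.get("/events/definitions"))

    def observe(self, sensor_state):
        # Report every defined event whose conditions the snapshot satisfies.
        for d in self.definitions:
            if all(sensor_state.get(k) == v for k, v in d["when"].items()):
                self.server.post("/events", {"name": d["name"],
                                             "state": sensor_state})
```

In this arrangement the server is the single source of truth for both event definitions and recorded events, which is what allowed event creation to move out of the engineering loop.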
Vehicular Lifelog Taxonomy
In our lifelog system a variety of lifelog events can
occur during a drive. Achievements, Tallies, Memories
and sensor-specific notifications appear as popups on
the MINI infotainment system. These notifications are
triggered by: location data, sensor data, and annotated
lifelog events that appear when a particular location is
revisited. Achievements use sensor data and location
information to signal accomplishments and can only be
received once. A driver is notified in real time when an
achievement has been unlocked, and will also see a
badge appear on the map at the location where it was
received when reviewing the lifelog after a drive. Tallies represent
accumulations of collected instances of Lifelog Events.
Memories are location-based events that
have been annotated within a driver’s lifelog after a
drive, but reappear within the MINI infotainment
system on a future drive in the same location as the
original event. Finally, sensor-specific notifications
make drivers aware of particular events such as RPM
data, braking data, and acceleration data. These events
can be tallied so that, for example, the MINI might alert
a driver to the number of close calls they have had
(based on rapid deceleration events).
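The taxonomy above can be sketched as three event classes. This is a hypothetical illustration in Python; the class names, fields, and trigger messages are our own, not the production system's schema.

```python
from dataclasses import dataclass

@dataclass
class Achievement:
    """Earned once; a badge later appears on the lifelog map."""
    name: str
    unlocked: bool = False

    def trigger(self):
        if self.unlocked:
            return None  # achievements can only be received once
        self.unlocked = True
        return f"Achievement unlocked: {self.name}"

@dataclass
class Tally:
    """Accumulates repeated instances of a lifelog event."""
    name: str
    count: int = 0

    def trigger(self):
        self.count += 1
        return f"{self.name} x{self.count}"

@dataclass
class Memory:
    """Annotated after a drive; reappears when its location is revisited."""
    label: str
    location: str

    def trigger(self, current_location):
        if current_location == self.location:
            return f"Memory detected: {self.label}"
        return None
```

Sensor-specific notifications fit the Tally pattern: individual readings (e.g. rapid decelerations) accumulate into a count the car can later report.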
Whose Lifelog?
According to our system, a car has a single lifelog, but
multiple drivers can contribute to that lifelog
independently. By default, a driver’s contributions are
visible to other drivers, but a driver can also choose to
hide their portion of the lifelog. Since car and
driver behaviors are so interlinked, it is difficult to
position the vehicular lifelog as exclusively “owned” by
the vehicle. Instead, it makes most sense to think
about the vehicular lifelog as a document of the
relationship between a car and its driver(s).
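The per-driver visibility rule described above can be sketched minimally. The field names here are hypothetical; the sketch only illustrates the stated policy: one lifelog per car, per-entry attribution, visible by default, hideable by the contributing driver.

```python
from dataclasses import dataclass

@dataclass
class LifelogEntry:
    """One event in the car's single lifelog (field names hypothetical)."""
    driver_id: str
    event_name: str
    hidden: bool = False  # a driver may hide their portion of the lifelog

def visible_entries(log, viewer_id):
    """A driver sees their own entries plus others' non-hidden entries."""
    return [e for e in log if e.driver_id == viewer_id or not e.hidden]
```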
Rethinking Context in Sensor Driven Lifelog
Events
The storytelling and experience prototypes described
above led us to unexpected ways of interpreting
automotive sensor-states. For example, in a demo drive
scenario depicting a late-afternoon coffee run at
McDonald's, we used a combination of location, time,
and the rolling down of the driver's-side window to infer
that the car was approaching a drive-through window.
We created a tally called “Afternoon Pick Me Up” that
was tied to this particular location.
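The inference behind this tally can be sketched as a rule that fires only when three independent signals agree. The function name and the 12–17h "afternoon" window are our assumptions; the congratulatory message paraphrases the one shown in the figure.

```python
def afternoon_pick_me_up(window_down, location_category, hour, tally):
    """Combine window state, GPS category, and clock to detect a
    drive-through stop (the 12-17h afternoon window is an assumption)."""
    if window_down and location_category == "food" and 12 <= hour <= 17:
        tally += 1
        message = ("Congratulations! You've collected an afternoon "
                   f"pick me up at this location {tally} times!")
        return tally, message
    return tally, None
```

Requiring all three signals is what keeps the inference plausible: a lowered window alone, or a food location alone, would trigger far too often.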
Figure 3.
At the drive-through, our system recognizes that the
driver’s side window is being lowered at the same time
that a GPS reading indicates a food location.
Figure 4.
The recognition of the sensor states in [Fig. 3] (along
with a recognition of the time of day) triggers an event
that adds to the “Afternoon Pick Me Up” tally. The
caption reads “Congratulations! You’ve collected an
afternoon pick me up at this location 10 times!”
A Note on Character
Many of the achievements we developed can be read as
subtly ironic or teasing. For example, drivers earn a
“Close call commander” achievement for slamming on
the brakes too hard. This decision about the car’s
“voice” was a deliberate attempt at crafting a particular
personality for our MINI (as whimsical, geeky, and a
touch sarcastic). So, while tallies for returning to a
particular vendor may seem to incentivize repeat visits,
there is also the possibility that the car is “teasing” its
driver by pointing out how many times the driver has,
in this case, gone to McDonald's. In this sense, by
pointing out redundant patterns in driver-profile data,
the lifelog can also serve to encourage greater
exploration of anomalous contexts.
Consecutive Sensor States
Another avenue that helped us to rethink the
relationship between sensor states and context was to
draw connections across consecutive sensor-states.
[Fig. 5] depicts a driver filling up her car with
gas immediately after the drive-through scenario
described earlier. In this case, the lifelog system
recognizes that both human and car are being “fed”—a
moment of anthropomorphic identification that could
strengthen the relationship between driver and car. We
characterized this event with an achievement called
“Sharing a meal together.”
Figure 5.
Soon after the events in [Fig. 3] and [Fig. 4], the
driver fills up the gas tank, earning a “Sharing a
Meal” achievement.

Figure 6.
Rain sensors, originally intended to trigger the
automatic windshield wipers, were used instead to
trigger a “Car Wash” event.
Figure 7.
When exploring a new part of Los
Angeles, the lifelog system
displays an “LA Dystopia”
achievement for driving by the
Bradbury Building. Caption reads:
“…famous location from the movie
Blade Runner.”
Reciprocation
Our lifelog events often adapted sensors for unexpected
purposes. For example, we used the rain sensors
(which were designed to trigger the automatic
windshield wipers) to indicate that the car was being
washed [Fig. 6]. If the current weather report says
clear skies but the rain sensors have been triggered,
then the lifelog system can infer that a car wash has
taken place. This recognition enabled us to create
notifications that demonstrated the car’s gratitude to
the driver.
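The car-wash inference can be sketched as follows. The signal names are assumptions, and the wording of the gratitude message is our own invention; the sketch captures only the cross-check described above.

```python
def detect_car_wash(rain_sensor_active, weather_report):
    """The rain sensor alone is ambiguous, so cross-check an external
    weather report: rain sensed under clear skies implies a car wash."""
    if rain_sensor_active and weather_report == "clear":
        # Gratitude message wording is hypothetical.
        return "Car Wash detected. Thank you for taking care of me!"
    return None
```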
During informal interviews with drivers who tend to
anthropomorphize their cars, we found that the act of
washing the car was an important ritual because it
enabled them to reciprocate and “give back” to the car
for all of the effort that their car has expended on their
behalf.
Exploring the environment
We also used the lifelog as a platform for enabling
drivers to see their environments in a new way. In a
demo drive through Los Angeles’s historic theater
district, we used GPS points to trigger exploration
achievements tied to notable locations. For example,
[Fig. 7] depicts the “LA Dystopia” achievement for
passing by the Bradbury Building (a famous location
from the movie Blade Runner).
Recognizing Social Context
Often the lifelog events we prototyped provided new
ways of understanding the relationship between
sensors and social context. For example, we acted out
a scenario in which a change in the seat sensor and a
recognition of a GPS location would enable the lifelog to
infer that a child had been dropped off at school [Fig.
8]. Similarly, seat sensors recognize the first instance
of a baby seat and GPS coordinates match this event
with the location of a hospital. Later, when the driver
returns to the vicinity of this location, a “Memory”
event is triggered [Fig 9].
Figure 8.
Using the seat sensor, the
lifelog recognizes that a
passenger (in a child-seat) has
been dropped off. This
information, paired with GPS
coordinates for a local school,
adds an instance to the
“Dropping Off Child” tally.
Figure 9.
Memory detected: “Baby on Board.” Caption reads:
“new addition to the family recorded on 11/10/2010.”

Figure 10.
The lifelog tablet interface
organizes the events of the
lifelog according to time, space,
and event definitions.
Conclusions and Future Directions
The concept of lifelogging can increasingly be extended
beyond human subjects to include objects and built
environments. A crucial aspect of lifelogging involves
choosing what parameters to keep track of. This is
essentially a question of what inputs matter and why.
Novel lifelogging subjects (like cars) require innovative
approaches to these questions. In an effort to explore
the possibilities of vehicular lifelogging we have
deployed methods of visual storytelling, theatrical
experience design, rapid prototyping, and flexible
information-system design. These methods help to
defamiliarize the context of automotive design by
encouraging creative remapping of the relationship
between sensors and driving contexts. We propose a
new approach to automotive design that looks “in” to
the imagined space of a car’s character and “out” to the
surrounding social contexts in which a car exists as a
lived experience.
Lifelog Review Interface
Lifelog events trigger fleeting popup notifications on the
MINI infotainment system, but a much richer iPad
interface was also designed to enable users to review
their car’s lifelog outside the car (or inside, in the case
of passengers) [Fig. 10]. Designed using Google’s
SIMILE API, this interface maps lifelog events according
to location, time, and event definition. This application
was intended to help users visualize their experience
with their car as an unfolding relationship. Future
iterations of this interface will enable users to sift
through a rich array of sensor data in order to make
discoveries about patterns in their car’s lifelog.

Acknowledgements
This research was made possible by the support of the
BMW Group Technology Office in Mountain View, CA.
Special thanks to Stephan Durach and Stefan Hoch,
who provided critical guidance throughout the project,
and to Paul Doersch, who assisted with MINI infotainment
system-iPhone integration. We would also like to thank Cecilia
Fletcher, Bryant Paul Johnson, Michael Lin, Hyung Oh,
and Peter Preuss, who all lent valuable contributions to
the development of this project.
References
[1] Albrechtslund, A. and Ryberg, T. Participatory
Surveillance in the Intelligent Building. Design Issues 27, 3
(2011).
[2] Bleecker, J. Design Fiction: A short essay on design,
science, fact and fiction. Near Future Laboratory, 2009.
[3] Broy, N., Goebl, S., Hauder, M., et al. A Cooperative
In-Car Game for Heterogeneous Players. AutomotiveUI,
(2011).
[4] Buchenau, M. and Suri, J.F. Experience prototyping.
DIS, ACM Press (2000), 424-433.
[5] Burns, C., Dishman, E., Verpiank, W., and Lassiter, B.
Actors, Hairdos & Videotape - Informance Design. CHI,
(1994), 119-120.
[6] Bush, V. As we may think. Atlantic, 1945.
[7] Czerwinski, M., Gage, D.W., Gemmell, J.,
Marshall, C.C., Skeels, M.M., and Catarci, T. Digital
Memories in an Era of Ubiquitous Computing and Abundant
Storage. Communications of the ACM 49, 1 (2006).
[8] Donath, J. 1964 Ford Falcon. In S. Turkle, ed.,
Evocative Objects: Things We Think With. MIT Press.,
Cambridge, MA, 2007.
[9] Dourish, P. and Bell, G. “Resistance is Futile”: Reading
Science Fiction Alongside Ubiquitous Computing. Personal
and Ubiquitous Computing.
[10] Gemmell, J., Bell, G., Lueder, R., Drucker, S., and
Wong, C. MyLifeBits: Fulfilling the Memex Vision. ACM
Multimedia, (2002).
[11] Gemmell, J., Williams, L., Wood, K., Lueder, R., and
Bell, G. Passive Capture and Ensuing Issues for
a Personal Lifetime Store. CARPE, (2004), 48-55.
[12] Lee, S. UbiGraphy: A Third-Person Viewpoint Life
Log. CHI, (2008), 3531-3536.
[13] Lyon, D. Surveillance Society: Monitoring Everyday
Life (Issues in Society). Open University Press, 2001.
[14] Mann, S. “WearCam” (the wearable camera):
personal imaging system for long-term use in wearable
tetherless computer-mediated reality and personal
photo/videographic memory prosthesis. Proc. of ISWC,
(1998), 124-131.
[15] Oulasvirta, A., Kurvinen, E., and Kankainen, T.
Understanding contexts by being there: case studies in
bodystorming. Personal and Ubiquitous Computing 7, 2
(2003), 125-134.
[16] Schauffele, J. and Zurawka, T. Automotive Software
Engineering: Principles, Processes, Methods, and Tools.
SAE International, 2005.
[17] Schieben, A. The theater-system technique: Agile
designing and testing of system behavior and interaction,
applied to highly automated vehicles. Automotive UI,
(2009), 43-46.
[18] Stein, J. and Fisher, S. Ambient Storytelling: The
Million Story Building. ACM SIGCHI Conference on
Tangible, Embedded, and Embodied Interaction, (2011).
[19] Stein, J., Carter, W., and Preuss, P. StoryObjects.
2009. http://interactive.usc.edu/project/storyobjects/.
[20] Stein, J., Fisher, S., and Otto, G. Interactive
Architecture: Connecting and Animating the Built
Environment with the Internet of Things. Internet of Things
Conference, (2010).
[21] Stein, J., Watson, J., Carter, W., et al. Million Story
Building. 2009. http://interactive.usc.edu/project/million-story-building/.
[22] Stein, J.L. PUCK: Place-based, Ubiquitous,
Connected and Kinetic experiences for interactive
architecture. PhD Dissertation, USC, 2011.
[23] Sterling, B. Closing Keynote address. IxDA
Interaction Eleven Conference, (2011).
[24] Zafiroglu, A., Plowman, T., Healey, J., Graumann, D.,
Bell, G., and Corriveau, P. The Ethnographic (U)Turn:
Local Experiences of Automobility. Automotive UI, (2011),
47-48.