Co-Performing Agent: Design for Building User–Agent Partnership in Learning and Adaptive Services
Da-jung Kim, Youn-kyung Lim
Department of Industrial Design
KAIST (Korea Advanced Institute of Science and Technology)
Daejeon, Korea, Republic of
{dajungkim, younlim}@kaist.ac.kr

ABSTRACT
Intelligent agents have become prevalent in everyday IT products and services. To improve an agent's knowledge of a user and the quality of personalized service experiences, it is important for the agent to cooperate with the user (e.g., asking users to provide their information and feedback). However, few works inform how to support such user–agent co-performance from a human-centered perspective. To fill this gap, we devised Co-Performing Agent, a Wizard-of-Oz-based research probe of an agent that cooperates with a user to learn by helping users to have a partnership mindset. By incorporating the probe, we conducted a two-month exploratory study, aiming to understand how users experience co-performing with their agent over time. Based on the findings, this paper presents the factors that affected users' co-performing behaviors and discusses design implications for supporting constructive co-performance and building a resilient user–agent partnership over time.

CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI; User centered design.

KEYWORDS
Co-performance; Intelligent agents; Adaptive services; Personalization

ACM Reference format:
Da-jung Kim & Youn-kyung Lim. 2019. Co-Performing Agent: Design for Building User–Agent Partnership in Learning and Adaptive Services. In 2019 CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA. Paper 484, 14 pages. https://doi.org/10.1145/3290605.3300714

1 INTRODUCTION
Intelligent agents, which leverage user data to personalize system behaviors for individual needs, are becoming increasingly prevalent in everyday IT products and services. For instance, intelligent agents for mail prioritization, news filtering, and content recommendations [22,24,25] have been widely adopted in mobile services to effectively manage information overload. Also, recent examples, such as smart thermostats and wearables, provide personalized support for diverse activities in users' daily lives (e.g., automatic temperature control of a home based on a household's lifestyle, and personalized feedback and suggestions for health management).

User–system interaction in the era of intelligent services becomes a reciprocal transaction of data rather than a simple input–output interaction. The quality of user experience in learning and adaptive services depends on how a user and an agent co-perform to improve the agent's knowledge of the user. Users have to perform their roles, from sharing personal data with systems to giving their feedback on how a system behaves. However, there are several limitations in supporting such co-performance in current learning and adaptive services. First, while it has become increasingly important for users to understand their roles to enable them to fully benefit from this technology, current systems rarely support users in building a partnership mental model with the systems. Also, current systems do not provide a proper channel for users to control the ways in which the systems learn; thus, the systems remain opaque to users. Lastly, while these are ongoing transactions, the systems do not clearly communicate how the reciprocal interactions work after the initial interactions, and this often leads to negative consequences for users' experiences.


Given this situation, helping users have a partnership mindset that positions users themselves as the co-creators of the service with intelligent agents, rather than as passive receivers of the services given by a system, would be an important starting point at which to support the co-performance over time. From this motivation, we devised a Wizard-of-Oz-based research probe called Co-Performing Agent, which co-performs with users by incorporating several approaches to build a partnership mindset for users. By conducting a two-month exploratory study with the probe and several participatory design activities [26], we investigated how users co-perform and develop a relationship with their own agent over time. By doing this, we expect to contribute to providing some clues for improving the limited designs for co-performance in learning and adaptive services.

2 BACKGROUND & RELATED WORKS
In this section, we review previous studies to provide an understanding of the importance of co-performance and the potential of our partnership-building approach in supporting co-performing experiences.

UX Issues in Learning and Adaptive Services
While the potential of intelligent agents in providing personalized support for people's lives is ever growing as technologies advance, previous research has shown several user experience issues of autonomously adaptive behaviors of intelligent agents. First, it is difficult for users to understand how a system works (e.g., what it knows, how it knows the information, and what it does with the information), as intelligent systems are not often designed to be intelligible and transparent to users [2]. Studies in intelligent systems also highlighted that autonomously adaptive and proactive system behaviors can confuse people, resulting in users' decreased sense of control [10,29,32]. For example, a study of learning thermostats [31] reported how the changes made by Nest based on its own assumptions about user needs annoyed users and gave them the sense of losing control over the changes.

HCI researchers have investigated ways to overcome these issues of transparency and controllability. For example, Cramer et al. [4] revealed that providing an explanation of why certain contents are recommended can increase the trust and acceptance of recommendations. In addition, Kulesza et al. [17] suggested educating users about the underlying mechanisms of recommendations so that they can better understand the systems and manually control the factors that contribute to recommendation results. More recently, the transparency issue arose even in the context of social media newsfeed algorithms. To address these issues, Eslami et al. [8] developed FeedVis, which visualizes both filtered and unfiltered newsfeeds so that users can compare the results and control the priority of newsfeed content.

These attempts suggest potential ways to increase the visibility of the black-box processes of intelligent system personalization. However, it becomes challenging to explain the ever-increasing complexity of intelligent systems to users. Also, providing users too much control over systems might violate the very purpose of intelligent systems in supporting users with less cognitive overload [10]. In this regard, increasing discourse in HCI [6,21] raises the question of how to provide the proper level of transparency and controllability to users in the emerging types of intelligent systems. Responding to this ongoing discussion, this study investigates how a partnership-mindset-building approach could contribute to addressing existing UX issues.

The Co-Performance Perspective in Cooperating with Users in Learning and Adaptive Services
There has been research that proposed and investigated ways to incorporate user input into improving system intelligence for personalization. For example, the notion of programming by demonstration or programming by examples [5,20] has been incorporated in designing interface agents to make systems learn, over the shoulder, the ways users perform a repetitive task so that systems can automate some procedural tasks. In addition, previous research on recommender systems has investigated how user feedback (e.g., ratings and evaluation) can improve the quality of recommendations [9,25]. While previous works contribute to demonstrating the technical feasibility of incorporating user inputs in learning systems, few works investigated how users might experience intelligent agents or the systems that attempt to cooperate with users for personalization, and what they would expect from cooperating with systems over time.

In overcoming the technology-centered perspective, the notion of co-performance [16] suggests ways to rethink user–system relationships. The notion of co-performance emphasizes the process of shaping the role of artificial agency together with users, instead of understanding artificial agency as something that is "scripted at design time" [16]. Aligning with this theoretical notion, previous research in personalized services also has pursued a similar perspective. Researchers in this line of work argued the importance of understanding users not just as passive recipients of the service, but more as active agents who can take a role in adjusting the service experience [7,11,16,19].


In this regard, Lee et al. [19], for example, proposed a way to help people reflect more deeply on their needs to better personalize health services. Also, researchers like Huang et al. [11] investigated design spaces for eco-coaching thermostats, which aim to provide users thermal comfort not just by assuming users are mere receivers of comfort given by the systems, but rather by treating them as independent agents who can take energy-saving actions by themselves.

As these previous works imply, investigating ways to support user–agent co-performance is worthwhile, not just to provide users more personally relevant support for their lives, but also to empower users in the experience of learning and adaptive services. Building on these initial works, we investigate how users experience co-performance in the wild, aiming to suggest design implications for supporting users' co-performing experience over time.

Social and Relational Strategies for Agent Design
Previous research has investigated how social and relational factors could affect the user–agent relationship and their cooperation. For example, conversational strategies, such as personalized small talk, have been shown to improve rapport, cooperation, and engagement with computational agents [3,18]. Also, a body of work in line with the 'Computers are Social Actors' paradigm has shown that users perceive computers differently depending on the social strategies they incorporate, such as personality and humor [23]. In spite of this potential, social and relational strategies have rarely been addressed for empowering users as cooperators in improving agent intelligence and the quality of personalized service experiences. With this gap in mind, this paper explores users' co-performing experience and investigates ways to help users build a sound partnership with agents as co-creators of the personalized services. By doing so, we expect that this study will also contribute to the emerging discussions on the cooperative relationship between users and intelligent technologies [1].

3 METHOD
To investigate users' co-performing experiences from a user-centered perspective, we took an exploratory approach rather than simulating users' experience within existing intelligent systems, which already have defined functions and ways of co-performing. To investigate users' reactions and expectations in a more flexible manner, we devised an exploratory study by combining various designerly research methods, including a research probe, the Wizard-of-Oz method, and participatory design activities.

Co-Performing Agent: A Wizard-of-Oz-based Probe to Simulate Co-Performing Experiences
We first devised a Wizard-of-Oz-based research probe, called Co-Performing Agent, to simulate co-performing experiences (Figure 1). Since the fundamental goal of co-performing with the agent is to improve the service for a user's personal needs and preferences, setting up the Co-Performing Agent probe with each participant's personal service needs in mind was important to understand their genuine co-performing experiences. Thus, we planned to ask study participants to create a fictional service that they actually need in their lives. In addition, to help them readily come up with ideas for the service, we specified our research context as a user-created fictional service in a car environment, as this provides a promising environment where people expect personalized services while moving between diverse places that are closely related to users' personal lives (e.g., home, office, and social places).

Figure 1. Co-Performing Agent: (a) an agent profile, (b) an agent's message, (c) a teaching information panel

We devised Co-Performing Agent as a web-based mobile application in order to enable users to access it whenever they want. The probe consisted of three parts (Figure 1): an agent profile, an agent's message, and a teaching information panel. To support users in having a partnership mindset for co-performance, we designed Co-Performing Agent to embody three partnership-building elements, namely, First Encounter Interaction, a Teaching Channel for a user, and an agent's Learning Messages. Although the probe was devised only for our research purpose, and the ways we designed the probe might not be the only ways to build a user–agent partnership, we hoped this probe would provide a setting for initiating our investigations of users' co-performing experiences.
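To make the probe's structure concrete, the sketch below models its three parts as simple Python data classes. This is only an illustration of the setup described above, not the authors' implementation; all class and field names are hypothetical assumptions.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the probe's three parts (hypothetical names,
# not the authors' implementation).

@dataclass
class AgentProfile:
    name: str              # agent name given by the participant
    appearance: str        # e.g., a profile image chosen by the participant
    way_of_speaking: str   # e.g., "polite" or "casual"

@dataclass
class AgentMessage:
    week: int              # study week in which the message is shown
    text: str              # learning message composed by the Wizard

@dataclass
class TeachingPanel:
    # Participant-defined categories of information the agent can be taught
    # (populated during the pre-session; empty at first).
    categories: List[str] = field(default_factory=list)

@dataclass
class CoPerformingAgentProbe:
    profile: AgentProfile
    message: AgentMessage
    panel: TeachingPanel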


First Encounter Interaction. Building a partnership mindset should start from the initial phase of interaction with the agent, because users often set unreasonably high expectations of intelligent systems, and such misguided initial expectations increase the potential of users' disappointment and early abandonment of the systems even before the systems acquire the knowledge of users [13,30,31]. Thus, we designed Co-Performing Agent to communicate its immature knowledge of a user during its first encounter with the user (Figure 2). In designing this initial interaction, we were inspired by how people introduce themselves when they first meet each other. People exchange basic information to explore each other and to experiment whether they would like to continue developing the relationship. Utilizing this exploratory conversation, we designed Co-Performing Agent to communicate its ability and to simulate its reciprocal information exchanges with a user and its learning through simple example conversations (e.g., asking a user's name and utilizing the information in the subsequent conversation).

Figure 2. The Script for First Encounter Interaction

A Teaching Channel for Users. To simulate the actual co-performing experience, we also devised Co-Performing Agent to provide users an explicit channel to teach their agent (Figure 3). For instance, if a user selects an information category to teach from the teaching information panel, the user will be taken to a page where s/he can answer the questions asked by Co-Performing Agent. If a user enters an answer, Co-Performing Agent will show text that says, "Thanks for teaching. I got your answer well," to reassure users that the agent is learning. The answer data was sent to and stored in the Co-Performing Agent database, which was utilized by the Wizard to create the agent's learning messages (see the following section).

We intentionally designed the contents of this teaching channel to be empty at first and asked participants to decide what and how their agent would learn by themselves through a participatory design activity (see the following section). This was because the automated data collection that is prevalent in current systems cannot consider the information a user feels comfortable sharing with the system, and many other types of information exist that cannot be detected by sensors but can be given by a user if they want. Thus, by allowing users to freely choose what they want to teach and how they teach it, we aimed to enable users to have control over the agent's learning.

Figure 3. The Script for Learning Interaction

Agent's Learning Messages. We designed Co-Performing Agent to provide an agent's learning messages to enable a user to know the growth of the agent's ability over time. This was because users often expect such reciprocal information transactions with the systems that leverage users' personal data. For example, a study of self-tracking devices showed that users expected more personal nuance in the health-related recommendations as they accumulated their activity data over time [13]. Such users' expectations of data-leveraging services are quite aligned with the notion of social reciprocity [14,28], a social norm whereby people try to repay what others have provided to them (e.g., goods, information, and favors). According to social exchange theories, reciprocity plays an important role in maintaining relationships [14,28]. Given previous research findings and theories, reciprocity was deemed an important notion for supporting users in building a partnership mindset over time.

From this motivation, we developed three levels of the agent's learning messages, as feedback for a user's teaching, by gradually improving the quality of inferences and recommendations: i) a fact-level learning message that only repeats the collected data, showing users that the agent is actually learning what users teach it; ii) an inference-level learning message that shows some of the inferences that an agent discovered from the collected data, representing the growth of the agent's intelligence; and iii) an action-level learning message that provides proactive suggestions based on the agent's understanding of the user. These three levels of learning messages may not be the only way to simulate the growing reciprocity, but we expected that this setting would at least enable users to think about how their co-performance works by showing them how the quality of the agent's knowledge of a user can be improved over time.
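As a rough illustration of these three levels, the following sketch shows how a Wizard-side helper might compose a learning message from one stored teaching record. The level names mirror the description above, while the function and the message templates are hypothetical and not the wording actually used in the study.

```python
from enum import Enum

class LearningLevel(Enum):
    FACT = "fact"            # repeats the collected data
    INFERENCE = "inference"  # shows an inference drawn from the data
    ACTION = "action"        # makes a proactive suggestion

def compose_learning_message(level: LearningLevel, place: str) -> str:
    """Hypothetical helper: turn one taught fact (a visited place)
    into a learning message at the requested level."""
    if level is LearningLevel.FACT:
        return f"You've been to {place} last week. How was the trip?"
    if level is LearningLevel.INFERENCE:
        return f"You seem to enjoy visiting places like {place}."
    return f"How about visiting a new place similar to {place} this weekend?"

# Example: the Wizard escalates the level as the study progresses.
print(compose_learning_message(LearningLevel.FACT, "OO cafe"))
```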


Participants
Through an online screening survey that inquired about applicants' driving patterns and purposes, we recruited eight regular drivers who were aware of agent-based interfaces but did not have much experience with them. As they regularly drove their own cars, we expected that they would have their own service needs for our research context and a motivation to start co-performing with the agent probe to improve the fictional service they would create for the study. Participants were in their 20s or 30s, and most of them were graduate students, except one housewife, who was on maternity leave. While their apparent occupation was similar, they all had different life patterns and personal purposes for driving, which was the most important recruitment condition for this research. For example, most participants usually drove for commuting purposes on weekdays, but they drove for different purposes on the weekends (e.g., for traveling, dating, shopping, etc.). Also, six of the participants had regular fellow passengers (e.g., a romantic partner, children, and colleagues), whereas the other two usually drove alone. We expected these differences would provide opportunities to observe how co-performing with the agent probe would be experienced in each participant's different service needs and life contexts. All the participants consented to the study under the approval of the Institutional Review Board (IRB).

Study Design
The study consisted of a pre-session to set up the Co-Performing Agent probe for each user's own service needs, a two-month in-the-wild deployment to simulate co-performing experiences in a user's real-world life, and weekly sessions to inquire whether and how users' perceptions of and attitudes toward co-performance changed over time.

Pre-Session. Participants visited the lab prior to the study and had an individual pre-session to create their own fictional service and to set up the Co-Performing Agent probe for that specific service need. First, we asked each participant to create a fictional service in a car that they need in their daily lives. By reflecting on their driving experience and daily life, each participant came up with major features of the fictional service they needed (Table 1) and drew those features on a blank mobile screen template (Figure 4-a), which we provided to help them concretely imagine their fictional service.

Figure 4. Materials for Participatory Design Activities

Then, we gave an individual access link for Co-Performing Agent to each participant, which took each user to the interactive web pages for the First Encounter Interaction. Following the interaction script, each participant read the agent's introduction of its immature ability and experienced the simple simulations of reciprocity (i.e., teaching the agent his or her name and giving a name to his or her agent). We further asked participants to decide the agent's appearance and ways of speaking, if they wanted, which allowed us to inquire about their initial perceptions of the agent. As the last step to set up the Co-Performing Agent, we asked each participant to create a list of questions to teach the agent by filling out a blank question and answer template (Figure 4-b). The template guided participants to decide what questions their agent would ask them to improve the fictional service and how they would answer the questions (e.g., free text, options, scale, etc.). We guided participants to list similar questions under a category and to specify the name of the category. We used this material to create the list of teaching information on Co-Performing Agent. By allowing participants to select only the category of information they want to teach the agent at a given time, we aimed to enable users to teach the agent more effectively.

Regarding all the features and contents created by participants, we inquired why each participant created such a fictional service and why he or she decided to teach such questions, to understand the users' initial perceptions of and expectations toward Co-Performing Agent. Based on the outcomes of the participatory design pre-session, each participant's Co-Performing Agent was updated, and all the materials created by participants were filed for use in their upcoming participatory design sessions.
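The question-and-answer template can be pictured as a small record per question, grouped by participant-defined categories. The sketch below is only a plausible representation of that material under the description above; the field names and the example entries are hypothetical.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical representation of one entry in a participant's
# question-and-answer template (Figure 4-b).
@dataclass
class TeachingQuestion:
    category: str       # participant-defined category name
    question: str       # question the agent will ask
    answer_format: str  # "free text", "options", or "scale"

def questions_for_category(template: List[TeachingQuestion],
                           category: str) -> List[TeachingQuestion]:
    """Return only the questions of the category a user chose to teach,
    mirroring how the teaching panel exposed one category at a time."""
    return [q for q in template if q.category == category]

# Hypothetical example entries for a driving-related service.
template = [
    TeachingQuestion("Driving history", "Where did you drive today?", "free text"),
    TeachingQuestion("Driving history", "Who was with you?", "options"),
    TeachingQuestion("Condition", "How tired do you feel today?", "scale"),
]
print([q.question for q in questions_for_category(template, "Driving history")])
```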


Table 1. Each Participant's Initial Fictional Service

P1: Daily Briefing service that provides today's briefing of P1's health condition based on the sleep patterns and the amount of physical activities.
P2: Dining Mate service that provides restaurant information on the way to a destination based on P2's dining patterns.
P3: Personal Reminder service that reminds things to do and where to go based on P3's driving patterns.
P4: Hangout Mate service that provides the top 3 restaurants'/activities'/places' information based on P4's leisure-time driving patterns.
P5: Personalized Navigation service that provides real-time information for P5's frequent hangouts before heading to the place (e.g., on-going promotions, crowdedness, and open–close day and time).
P6: Personalized Shopper service that reminds the user of a grocery shopping list and provides the price information at nearby markets based on P6's necessity and stock information.
P7: Driving Mate service that visualizes and analyzes the places that P7 has visited and P7's driving habits.
P8: Personal Jockey service that automatically plays audio content based on P8's own driving modes (e.g., playing cheerful music when driving back home and playing English news when driving to his second language class).

In-the-Wild Deployment of the Co-Performing Agent Probe. To simulate co-performing experiences, we deployed the customized Co-Performing Agent for eight weeks in the wild. We guided participants to teach their agent through the teaching information panel by answering the questions that they devised during the pre-session, thinking that the information they teach would be the source of learning by the agent. To observe participants' natural engagement with the agent, we allowed them to decide when and how frequently they would teach.

During this two-month deployment, we provided the learning messages by utilizing the collected user data (Table 2). We decided not to provide a learning message in week 1 to simulate a situation in which the collected information is not enough to build a knowledge of the user. From week 2 to week 8, two researchers, as a Wizard, changed the default greeting message on the probe into the learning messages they developed by interpreting the actual user data collected in the Co-Performing Agent database: the fact-level learning messages for weeks 2 and 3, the inference-level learning messages for weeks 4 and 5, and the action-level learning messages for weeks 6 to 8. Intentionally, the learning messages for week 7 were designed to violate the reciprocity (e.g., attempting to over-interpret) to investigate how such misbehaviors of Co-Performing Agent affect users' perceptions of and attitudes toward co-performance. The personalized learning message was given to each participant twice a week, and participants were able to give their own feedback to the agent regarding its messages by teaching it (during W1–W5) or by rating the agent's recommendations (during W6–W8). To ensure that participants did not evaluate their experience of Co-Performing Agent based on groundless assumptions about the agent's capability, we consistently emphasized that the agent provided the learning messages purely based on how participants had taught their agent.

Table 2. The Examples of Learning Messages for P4's Fictional Service (i.e., Hangout Mate Service)

W1 (Default): "Hi, I'm your agent, OO."
W2–W3 (Fact Level): "You've been to OO last week. How was the trip?"
W4–W5 (Inference Level): "I think you may feel tired after a long-distance trip during holidays!"
W6 (Action Level): "You seem to love sushi. How about going to a new sushi café near your office next time?"
W7 (Action Level, intended mistake): "You may like to drink a beer with your wife, since you haven't gone out for beer lately. How about OO pub on this Friday?"
W8 (Action Level, recovery): "Sorry, I gave you the wrong recommendation. For your health condition, how about going to OO juice café for a drink?"

We deployed Co-Performing Agent for two months, considering the time required for technology adoption and the agent's learning. In relation to the time required for technology adoption, previous research suggested that two months would be enough time to observe stable interaction with the artifacts without the novelty effect [12,27]. Regarding the time for the agent's learning, the two-month period was expected to provide the possibility to learn repetitive behavioral patterns in life, as participants can teach their daily, weekly, and monthly behaviors at least twice. Although it may not be sufficient to observe the entire trajectory of co-performance over time, a two-month duration was expected to give participants the time to adopt a new artifact and to provide the likelihood of actual learning during the study.
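The week-by-week escalation of message levels, including the intentional week-7 violation, can be summarized as a simple schedule. The snippet below only restates the deployment plan described above in code form; the function name and the return values are illustrative assumptions.

```python
def wizard_message_plan(week: int) -> str:
    """Return the planned type of learning message for a given study week,
    following the deployment schedule described above (two messages per week)."""
    if week == 1:
        return "none"        # not enough data collected yet
    if week in (2, 3):
        return "fact"        # repeat what the participant taught
    if week in (4, 5):
        return "inference"   # show an inference drawn from the data
    if week == 7:
        return "action (intended reciprocity violation)"
    if week in (6, 8):
        return "action"      # proactive suggestion (week 8: recovery)
    raise ValueError("the deployment lasted eight weeks")

print([wizard_message_plan(w) for w in range(1, 9)])
```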


Weekly In-Depth Inquiry Sessions. To inquire whether and how users' perceptions of and attitudes toward co-performance change over time, we conducted an in-depth inquiry session every week in person. During each session, we first conducted a de-briefing interview on users' thoughts, feelings, and any challenges users had while interacting with Co-Performing Agent. Then, we asked them to do three participatory design activities by reflecting on their experiences: i) the agent profile revising activity, ii) the service revising activity, and iii) the learning question revising activity.

The agent profile revising activity was to inquire about users' changed perception of the relationship with the agent. Reflecting on their co-performing experience, we asked participants to describe their relationship using an analogy. If they thought it was necessary, participants were allowed to change the given properties of Co-Performing Agent (e.g., appearance, the ways of speaking, etc.) so that we could provide the updated version of the Co-Performing Agent probe in the following week.

The service revising activity was to inquire about users' perception of their agent's ability. For this purpose, we first asked participants to create inferred information cards (Figure 4-c), on which they were asked to write down the information that they thought their Co-Performing Agent had learned or discovered from what they had taught it. This was to enable participants to think of the perceived knowledge of the agent more easily and concretely. For this activity, each participant was given the raw data they had taught to their Co-Performing Agent up until the session. Then, we asked participants to add, delete, or modify the features of their Co-Performing Agent service as a way to express how they thought their agent could improve its service, given the inferred information that they thought their agent had acquired. Participants modified the features only when they thought that their Co-Performing Agent had learned a reasonable amount of information necessary for the service evolution. Otherwise, participants were guided to hold off on modifying the features and to teach their Co-Performing Agent more until it acquired a sufficient amount of information. In this way, we aimed to investigate users' perceptions of their agent's ability with more rationale.

Lastly, we conducted the learning question revising activity to inquire about the users' attitudes toward further co-performing behaviors. For this activity, we asked participants to modify the ways that they taught the Co-Performing Agent, considering the agent's current ability and their expectation of service evolution.

Each weekly in-depth session lasted about an hour. All the collected data were used to understand how participants' perceptions of the agent's ability and their relationship changed over time and to gain insight into the ways to improve the support for building user–agent partnerships. After each weekly session, the probe was modified based on the outcomes from the session (i.e., modified agent's name, profile image, and learning contents), and participants resumed teaching their agent with the updated questions.

Data Collection and Analysis
To understand participants' co-performing behaviors and their partnership development with their agent over time, all the relevant data from the pre-session and weekly sessions were audio-recorded and transcribed (e.g., participants' in-the-wild co-performing experiences, feelings and thoughts toward their agent, the relationship analogies, and all the rationales of the participatory design activities). Over 72 hours of interview transcripts were re-organized with the related participatory design outcomes from the offline sessions. After each weekly session, a preliminary analysis was conducted by five researchers searching for emergent themes and patterns with regard to user–agent partnership and co-performing behaviors. After finishing all the sessions, we conducted a more holistic analysis by analyzing how users' co-performing experiences in a given week affected their perceptions of their partnerships and the co-performing practice in the subsequent weeks. We iterated this analysis process to identify the underlying reasons and factors for their co-performing behaviors and perceptions of their partnership that were commonly observed across participants.

4 FACTORS AFFECTING CO-PERFORMING BEHAVIORS
From the analysis, we found three factors that affected users' co-performing behaviors: i) users' initial mental model toward an agent's capability, ii) confirming experiences, and iii) changes in the styles of learning.

Users' Initial Mental Model toward an Agent
The first factor that affected users' co-performing behaviors was users' initial mental models about the agent's potential capability. Although all participants went through the same introduction to Co-Performing Agent, they had different initial mental models about the agent's potential capability, namely the Getting-Things-Done (GTD) Agent model and the Companion Agent model.


Getting-Things-Done Agent Mental Model. Participants with the GTD Agent model (P3, P5, P6, and P8) tended to think that the agent's capability to improve the service was limited to machinery optimizations (e.g., automating and streamlining). Thus, these participants thought their agent would provide efficiency-related value to users through the co-performance. For instance, P3 thought that his agent would improve his Personal Reminder service by providing prediction-based navigation (e.g., automatically setting a predicted destination where he should go at a given time) so that he could reduce time for navigating. Also, P8 expected that his agent would improve the Personal Jockey service by learning which music it should play depending on his pre-defined driving contexts and automatically playing the content even without him manually selecting music time after time (e.g., playing cheerful music when driving back home, English news when driving to his second language class, and podcasts when driving for long distances).

Since they had such expectations, they wanted their agent to quickly develop simple service features through a short period of co-performance. For this reason, they taught their agent focusing on a single aspect of their lives, mostly just about driving history. In addition, they tended to teach the aforementioned information at a factual level (e.g., when and where they had been, what the purpose was, and whom they were with) and expected these data to be analyzed in a statistical way (e.g., the three most frequently visited places (P5), the average time of daily workout (P6)).

Companion Agent Mental Model. Unlike the participants with the GTD Agent mental model, participants with the Companion Agent mental model (P1, P2, P4, and P7) thought that the agent had the capability to acquire a deeper understanding of its user and to improve the service not only for machinery optimization, but also for more personally nuanced support (e.g., personalization based on a user's state and taste). Thus, these participants thought that their agent would provide more integrated and high-level support as a companion, enabling users to gain better self-knowledge and inspiring them to be their desired selves. For instance, P7 expected his agent to improve its Driving Mate service not just to the level that it quantifies his travel history and suggests the most preferable place, but to the level that it suggests new places that he might not have thought to visit but would be nice to visit, so as to enable him to explore new areas.

With such expectations of the agent's capability, these participants thought that a longer-term co-performance was necessary and taught their agent about multiple aspects of their lives, even though it might take more time to help their agent build a truly deeper understanding of the user. For instance, P1 taught her agent not just about her commuting pattern, but also about her health-related data (e.g., workout and sleep), interest-related data (e.g., interests toward stock information), and personal driving habit data as well, expecting her agent to improve the Daily Briefing service to take care of her daily life. Also, Companion model participants tended to teach the aforementioned information at a subjective level and expected it to be analyzed with semantic interpretations. For instance, while P3 (GTD model) expected that his agent would infer the frequency of the places he visited from his driving data, P7 (Companion model) expected that his agent would infer his favorite places and lifestyle from his driving data.

As these differences show, the mental model that participants initially had toward agent-based services shaped different overall attitudes toward co-performance (e.g., the quantity and quality of information that each participant decided to teach in the first week and the eagerness to teach over time).

Iterative Confirming Experience
Another important factor that affected users' co-performing behaviors was the confirming experience, an experience through which a user can confirm that an agent is learning with the help of the user. We found that whether and how participants experienced such confirmation in the earlier weeks affected participants' willingness to continue teaching their agent in the later weeks, resulting in either a virtuous cycle of enhancing user–agent partnership or a vicious cycle of deteriorating user–agent partnership. Meanwhile, it was interesting to note that the vicious cycle was observed more frequently among the participants with the GTD Agent mental model. In what follows, we describe how confirming experiences affected users' co-performing behaviors and the relations between the initial mental model and the resulting user–agent partnership.


The Virtuous Cycle of Enhancing User–Agent Partnership. In this study, the confirming experiences mainly happened through the learning messages our Wizard researchers provided to each participant. When we provided a learning message that was reasonably improved based on what participants had taught, participants could be sure that their agent was learning as they expected, and this confirming experience enabled participants to realize their roles and the value of their inputs for service evolution: "Although it (his agent) said that it would learn what I teach [in the first encounter interactions], it was a bit ambiguous to me. However, when it reacted to what I taught, I realized that it actually utilizes what it learned from me. I think I should teach more carefully." –P7 (Companion model)

Like the case of P7, such confirming experiences motivated participants to provide quality information to their agent. P1, for example, decided to increase the amount of information she was teaching about her favorite stock items from one item per day to three items per day (P1-W2). Also, P2, P3, and P4 decided to teach more concrete and detailed information instead of abstract information. For instance, P2 decided to teach her agent the specific name of a passenger rather than just teaching 'a friend' (P2-W3) so that her agent could improve its Dining Mate service based on P2's dining pattern with that friend. By learning additional details about the information that it had learned previously, these participants' agents could provide more concrete learning messages over the following weeks, and this reinforced those participants' continued willingness to co-perform with their agent. As this example shows, when the agent repeatedly provided confirming learning messages showing its growth, the virtuous cycle of teaching-confirming-teaching was iterated over time. By doing so, these participants were able to gradually build trust toward their agent's knowledge of the user and to develop stable relationships with their agent. Thus, P1 (Companion model), who had built a resilient partnership with the agent, said her agent was like "another me who takes care of my life," highlighting the strong trust toward her agent's knowledge of her.

The Vicious Cycle of Deteriorating User–Agent Partnership. In contrast, we also found situations in which the learning messages did not properly provide confirming experiences and de-motivated the continued co-performing behaviors. There were two major causes for failures in providing proper confirming experiences. The first reason had to do with users' initial mental models that we discussed in the earlier section. The two types of users reacted differently, even though they were given the same level of learning messages. For instance, when Companion model participants received the fact-level and inference-level learning messages, they easily confirmed the value of their inputs and tried to explore ways to help their agents improve their knowledge of the user more meaningfully (e.g., teaching enriched contextual information about their daily driving and lives). However, GTD model participants were not clearly aware of their role in co-performance, even though they were given the same quality of learning messages as the Companion model participants. Thus, they tended to put less effort into teaching, which resulted in the users teaching information that was too shallow and unstructured to allow the agents to infer meaningful information from the data.

In addition, while the participants with the Companion model were satisfied with the gradual learning pace of Co-Performing Agent, the participants with the GTD model tended not to appreciate the prolonged learning process. They thought that what their agent had to learn for service evolution (e.g., the repetitive behavioral patterns) should not require much time to learn. Thus, they expected action-level feedback from the agent much earlier than the participants with the Companion Agent mental model. However, in this study, action-level learning messages were given after a month of learning; this postponed-evolution model made it difficult for the GTD model participants to confirm the value of their inputs in a timely manner. In consequence, these unrewarding experiences demotivated these participants from putting their efforts into teaching over time. For example, P6 wanted to sync all data from third-party applications without her manually teaching her agent: "I don't like to teach health information by myself, because it is so much of a burden for me and I don't even believe the agent has the intelligence to learn. I just want the agent to automatically collect necessary data from the related applications on my phone and provide service smartly." (P6-W3)

The second reason was the learning messages that showed mis-interpretations of what users taught and overly supportive actions that they did not expect from their agent. For instance, P5, who wanted the Personalized Navigation service and had the GTD mental model, received a movie recommendation from his agent (e.g., "You've done a lot of work this week. How about going to a weekend movie date? The latest movie, 'Mechanic,' is now playing at your favorite movie theater."). While this recommendation was based on his driving history to a movie theater with his girlfriend, P5 thought that inferring the specific type of movie to recommend was excessive given that the information he had taught was only the fact that he went to the theater once.


P8, who also had the GTD mental model, experienced the failures in confirming experiences for both reasons. After he taught where he drove, he received the inference-level learning message saying, "You visited Jokbal (the name of a Korean dish) restaurant last week. You seem to like Jokbal!" This learning message was neither tightly related to the initial service he wanted, i.e., the Personal Jockey service, nor aligned with his GTD mental model. He said, "I hated when it said that last week. It was uncomfortable to talk about FOOD with an agent for a CAR service. It was like, for example, talking about my romantic partner with the car agent. It would have been much better if it just said that I visited some restaurant, which is the fact I taught." (P8-W4) P8 said his agent "exceeded" its authority, and he felt frustration in sharing detailed information with his agent: "I got a tendency not to teach too many details of my destination after the agent 'crossed the line' last time. I used to write the exact name of the place in the past, but now I try not to do so and just write something like 'a restaurant' or 'a cafe'. I don't want to give too much detail to this agent, because I realized that it could THINK by itself." (P8-W6)

As this example shows, when users were not able to have confirming experiences in a timely manner, the teaching-confirming-teaching cycle was not iterated properly. In consequence, these participants were not able to build trust toward their agent's knowledge of the user and a stable relationship with the agent. For example, P3 (GTD model), who had an unstable partnership with the agent, said his agent was like "an intimate, but annoying friend," because he was somewhat bored of helping his agent after the eight-week co-performance.

Influences on User Experience of Adaptive Services. Confirming experiences seemed important not only for users' continued co-performing behaviors, but also for users' actual service experiences. As participants with the Companion Agent model went through the virtuous cycle of teaching-confirming-teaching iteratively, their sense of control over the system was enhanced over time as well. Thus, even when their agent made the (intended) mistakes we planned for this study, they showed more accepting responses to their agent. For instance, P4 (Companion model) thought that the mistake was "a part of the learning process," through which his agent "attempts to extend the knowledge by itself." In the case of P7 (Companion model), he was even able to analyze why his agent made such mistakes, although he had not provided information that was related to the incorrect inference of the agent. Thus, he tried to think of what he could do to amend the incorrect knowledge of his agent. This user-empowered reaction was contrary to the reactions from P6 (GTD model), who regarded the mistake as a limitation of machines and thought there was not much she could do about this technical flaw. In addition, when the agent provided more proactive suggestions in the later weeks, the participants who went through iterative confirming experiences tended to accept their agents' recommendations and showed more generosity, thinking that they could control the system even if it made mistakes. This seemed to be because they had a clear understanding of how their input could change the agents' behaviors. As these examples show, confirming experiences were important to develop more stable and resilient partnerships with the agent.

Changes in the Styles of Learning over Time
Regardless of the initial mental models and confirming experiences, changes in the agent's styles of learning were important for all participants to co-perform over time, as the changes affected users' perception of their agents' activeness in learning. For instance, participants who continued teaching the same contents in the same ways for several weeks were in "doubt about whether the agent is learning correctly or not" (P2) and thought that the agents "do not have the willingness to learn." (P8) In a similar vein, participants appreciated when the agent started to get user feedback on what it recommended instead of just continuously learning the raw data over time. When the agent provided more satisfying learning messages in the following weeks by reflecting on the collected user feedback, participants said that this kind of ping-pong interaction for learning gave them more "communicative" (P3), "cooperative" (P5), and "diligently learning and ever-growing" (P4) impressions of the agent.

From the analysis of the data gathered from the learning question revising activities, we found several qualities of learning questions that participants considered important in the changes of the agent's learning styles. Firstly, participants cared about the efficiency of the co-performance. For instance, while participants thought that they needed to answer the agents' questions by manually entering the answers in the beginning, they expected that their agents would create predicted user answers based on a user's answering patterns in the later interactions, for example, by automatically showing the names of frequent destinations of a user when asking the user to teach driving history. By doing so, participants expected to teach more efficiently over time. Also, participants cared about changing the level of information they teach over time. For instance, in the beginning, participants tried to teach as much information about their daily lives as they could, even at a somewhat abstract level, because they thought that their agent had little knowledge of the user and had to learn the user's representative profiles as quickly as possible. However, over time, participants came to think that their agent had collected enough mundane and superficial information about their lives and tried to focus on providing more unusual and deeper information that their agents might not know unless users teach that information.


For instance, after a week of teaching, P1 decided to reduce her efforts to teach regular behavioral information (e.g., commuting information) and decided to put more effort into teaching subjective and contextual information that her agent could consider in improving the Daily Briefing service (e.g., her physical condition including a self-evaluation of her sleep quality in a five-star rating, the reasons she could not sleep well, and her know-how to improve her sleep quality).

In addition, as the agents' knowledge of the user grew and participants' perceived relationships with their agents became closer over time, participants expected some proactive questions from the agent. For instance, P2 expected that her agent would be "curious" if she drove far away to have a rice dish, because her agent knew that she prefers flour-based foods. Thus, she expected that her agent would ask questions like, "Why are you going far away to have rice dishes on weekdays?" While these proactive questions should be designed carefully, this kind of conversation would enhance the potential for service personalization.

5 DESIGN IMPLICATIONS FOR CONSTRUCTIVE CO-PERFORMANCE OVER TIME
The findings of this study suggest three factors that should be considered in designing for users' co-performing experiences. Reflecting on these findings, we discuss further design implications for supporting constructive co-performance and building a resilient user–agent partnership over time as follows.

Supporting Co-Performance Based on Users' Mental Model toward Agents
Reflecting on our findings, supporting users' co-performance with an understanding of a user's mental model of agent-based services would be important to enable constructive co-performance over time. In doing so, the two types of initial user mental models we found from the study (i.e., the GTD and Companion Agent mental models) can be used to inquire as to how a user would like to co-perform with the agent and what s/he might expect from co-performance. For example, if an intelligent agent provides users the option to choose one of the co-performing journeys in the first encounter interaction (e.g., a short-term and focused co-performance for the GTD Agent mental model vs. a long-term and multi-faceted co-performance for the Companion Agent mental model), the agent would be able to adjust its methods of co-performance to be more suitable to the chosen mental model. While Kulesza et al. [17] also classified users' mental models depending on the degree of understanding of how systems work (i.e., functional vs. structural), our classification of users' mental models provides a more actionable taxonomy to reduce the conceptual gap between users' expectations and actual system behaviors.

Meanwhile, we also observed potential changes of a user's initial mental model over time. For example, one of our participants with a GTD mental model (P5) changed his mind to teach diverse tidbits of his daily life rather than teaching factual-level information after he realized that his agent might not learn additional information if he continued teaching repetitive daily activities. While the willingness to teach multi-faceted information represents the co-performing behaviors of users with the Companion Agent mental model, P5 still showed the characteristics of the GTD mental model, showing less accepting reactions when his agent provided a learning message that actually utilized the life-tidbits information. This mixed expectation in co-performance raises a further research question regarding how such a transitional phase should be supported by a system to continue the co-performance without deteriorating users' partnerships with their agent.

Explicitly Designing for a Learning Period before Providing Proactive Support
From the findings of this study, we also found that a resilient user–agent partnership and trust are not ones that can be built immediately. Instead, they could be built through iterative cycles of confirming experiences over an expanded period of time. However, most current learning and adaptive systems do not explicitly consider this iterative and time-consuming nature of building a user's trust and partnership toward intelligent agents. Rather, those systems tend to attempt to provide proactive support as quickly as possible without considering users' perceived ability of, and trust toward, the systems. This collapsed interaction phase for co-performing and confirming experiences might have caused early abandonment of these intelligent systems. In this sense, explicitly designing for a learning period before providing proactive support would create opportunities for users to build a partnership mental model by allowing both users and agents to simulate co-performance and recover their partnership more easily beforehand.
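One way to read this implication in system terms is to gate proactive, action-level behaviors behind an explicit learning period. The sketch below is a speculative illustration of that idea only, not a design the paper prescribes; the function name and the threshold value are assumptions.

```python
def allow_proactive_suggestions(confirmed_cycles: int,
                                min_cycles: int = 4) -> bool:
    """Speculative gate: only enable action-level (proactive) behavior after
    the user has gone through enough teaching-confirming cycles.
    The threshold of four cycles is an arbitrary illustrative value."""
    return confirmed_cycles >= min_cycles

# During the learning period the agent would stay at fact/inference level.
for cycles in (1, 3, 5):
    level = "action" if allow_proactive_suggestions(cycles) else "fact/inference"
    print(f"{cycles} confirming cycles -> {level}-level messages")
```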


quality understanding of their lives. Thus, applying human- the couple started to prepare for pregnancy. Thus, he created
likeness in co-performing agent interfaces should be a new set of questions to teach the changed situations and
carefully considered and if necessary, interactions for taught his agent to avoid sushi restaurants that he and his
building rapport would be better in the later interactions. wife used to visit, during the time they were preparing the
6 DISCUSSION

The in-the-wild deployment of Co-Performing Agent also revealed several issues around collecting users' behavioral traces in a real-world context. These issues suggest some challenges and opportunities for future research.

Users' Concerns on Privacy and Controllability of Personal Data Collection and Inference

As participants continuously taught the agent their personal information, revealing traces of their daily lives, their privacy concerns became more salient over time. For example, some participants had concerns about continuous data tracking for the agent's learning and wondered whether they should share their behavioral traces even when they did not want to. P2 was especially concerned about the potential embarrassment of unexpectedly revealing sensitive information in a social context: “Let’s suppose that I want to dine out with my new boyfriend and what if it (her agent) tactlessly suggests the restaurants that my ex-boyfriend and I used to go to? Considering such situations, I am not sure whether it would be still okay to give all of my information to the agent.” (P2-W4) As P2’s perceived privacy concern increased over time, she wanted her agent to ask her whether she wanted to mark given behavioral data as a “secret.” Based on that secret marking, she wanted her agent to pretend not to know the secret events when she was with someone else.
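P2's request describes a concrete interaction mechanism: each behavioral trace carries a user-controlled “secret” flag, and the agent suppresses flagged traces whenever it acts in front of other people. The Python sketch below only illustrates that idea; the field names and the binary notion of a shared social context are our assumptions, not features of the probe.

from dataclasses import dataclass
from typing import List

@dataclass
class Trace:
    description: str
    secret: bool = False    # set when the user marks the event as a "secret"

def recommendable(traces: List[Trace], with_others: bool) -> List[Trace]:
    """Drop secret-marked traces when the agent is used in a shared social context."""
    if with_others:
        return [t for t in traces if not t.secret]
    return list(traces)

history = [
    Trace("sushi place visited with ex-boyfriend", secret=True),
    Trace("noodle shop near the office"),
]

# Alone, the agent may draw on everything it knows; with company, secrets stay hidden.
print([t.description for t in recommendable(history, with_others=False)])
print([t.description for t in recommendable(history, with_others=True)])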
This kind of privacy concern may grow as the amount of collected data increases and inferring personally related traits becomes more feasible over time. Moreover, this is already prevalent in everyday online services: the traces of what a user liked on Facebook can reveal many of that user's traits [15]. Although sharing personal data to receive a personalized service might be inevitable, more research should be conducted to investigate ways to build a sound user–agent partnership with proper controllability for users.

Temporal/Permanent Expiration of User Profile

During the two-month study, participants came to face changes in their lives, and they expected the way their agent provided its service to be reoriented in response to such life changes. For instance, P4, who wanted to receive restaurant recommendations for dining with his wife, wanted raw seafood to be ruled out of the recommendations for a while, as the couple started to prepare for a pregnancy. Thus, he created a new set of questions to teach the changed situation and taught his agent to avoid the sushi restaurants that he and his wife used to visit while they were preparing for the pregnancy. Also, P1, who expected her agent to support reviewing her daily exercise, was getting busier due to her tasks at work and did not have time to exercise at all. She therefore wanted her agent to recommend exercises that she could do during short breaks in a day, rather than ones that require significant time and effort: “How I lived in this week was quite different from the previous four weeks in many senses. My commuting time was shifted, and I couldn’t exercise even once this week. Given the information Ryan [her agent] has learned so far, I thought that Ryan could notice the changes and I expected some feedback related to the changed life patterns.”

These examples show how considering such temporary or permanent changes in a user's profile could enhance personalized service experiences. This suggests further research on how to support users in helping their agents re-learn their profiles and in managing expired profile information, which would extend our understanding of how to support user–agent co-performance over time.
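To make the notion of an expiring profile more tangible, the sketch below attaches an optional validity window to each learned preference, so that temporary changes lapse automatically while open-ended ones remain until they are re-taught. The data structure, field names, and dates are hypothetical, offered only as an illustration of this design direction.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ProfileEntry:
    fact: str
    valid_until: Optional[date] = None    # None means no planned expiry

    def is_active(self, today: date) -> bool:
        return self.valid_until is None or today <= self.valid_until

profile = [
    ProfileEntry("avoid raw seafood restaurants", valid_until=date(2019, 3, 1)),  # temporary change
    ProfileEntry("prefers short exercises during work breaks"),                   # holds until re-taught
]

today = date(2019, 5, 1)
still_applied = [e.fact for e in profile if e.is_active(today)]
to_revisit = [e.fact for e in profile if not e.is_active(today)]
print("still applied:", still_applied)
print("to revisit with the user:", to_revisit)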

7 CONCLUSION

In this paper, we presented a two-month exploratory study that investigated how users' perceptions of and attitudes toward Co-Performing Agent changed over time. The findings of this study contribute empirically grounded design implications for supporting user–agent co-performance by highlighting the factors that affected users' co-performing behaviors: users' initial mental models, confirming experiences, and changes in the style of the agents' learning. Investigating users' co-performing experiences by manipulating these factors would uncover further implications for supporting constructive co-performance over time. As an initial work investigating human-centered ways to support user–agent co-performance, we hope this study inspires future research into creating personally relevant services together with users and into empowering users in their experience of intelligent IT services over time.

ACKNOWLEDGMENTS

This work was supported in part by NAVER LABS Corp. and by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2016-0-00564).
REFERENCES
[1] Rachel K. E. Bellamy, Sean Andrist, Timothy Bickmore, Elizabeth F. Churchill, and Thomas Erickson. 2017. Human-Agent Collaboration: Can an Agent be a Partner? In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17), 1289–1294. https://doi.org/10.1145/3027063.3051138
[2] Victoria Bellotti and Keith Edwards. 2001. Intelligibility and Accountability: Human Considerations in Context-Aware Systems. Human-Computer Interaction 16, 2: 193–212. https://doi.org/10.1207/S15327051HCI16234_05
[3] Timothy Bickmore and Justine Cassell. 2001. Relational Agents: A Model and Implementation of Building User Trust. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’01), 396–403. https://doi.org/10.1145/365024.365304
[4] Henriette Cramer, Vanessa Evers, Satyan Ramlal, Maarten van Someren, Lloyd Rutledge, Natalia Stash, Lora Aroyo, and Bob Wielinga. 2008. The Effects of Transparency on Trust in and Acceptance of a Content-based Art Recommender. User Modeling and User-Adapted Interaction 18, 5: 455–496. https://doi.org/10.1007/s11257-008-9051-3
[5] Allen Cypher. 1993. Watch What I Do: Programming by Demonstration. The MIT Press, Cambridge, MA.
[6] Lia R. Emanuel, Joel Fischer, Wendy Ju, and Saiph Savage. 2016. Innovations in autonomous systems: Challenges and opportunities for human-agent collaboration. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion (CSCW ’16 Companion), 193–196. https://doi.org/10.1145/2818052.2893361
[7] Motahhare Eslami, Amirhossein Aleyasen, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig. 2015. FeedVis: A Path for Exploring News Feed Curation Algorithms. In Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing (CSCW ’15 Companion), 65–68. https://doi.org/10.1145/2685553.2702690
[8] Motahhare Eslami, Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig. 2015. I always assumed that I wasn’t really that close to [her]: Reasoning about Invisible Algorithms in News Feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15), 153–162. https://doi.org/10.1145/2702123.2702556
[9] F. Maxwell Harper, Xin Li, Yan Chen, and Joseph A. Konstan. 2005. An Economic Model of User Rating in an Online Recommender System. In Proceedings of the 10th International Conference on User Modeling, 307–316. https://doi.org/10.1007/11527886_40
[10] Kristina Höök. 2000. Steps to take before intelligent user interfaces become real. Interacting with Computers 12, 4: 409–426. https://doi.org/10.1016/S0953-5438(99)00006-5
[11] Chuan-Che (Jeff) Huang, Sheng-Yuan Liang, Bing-Hsun Wu, and Mark W. Newman. 2017. Reef: Exploring the Design Opportunity of Comfort-Aware Eco-Coaching Thermostats. In Proceedings of the 2017 Conference on Designing Interactive Systems (DIS ’17), 191–202. https://doi.org/10.1145/3064663.3064685
[12] Takayuki Kanda, Rumi Sato, Naoki Saiwaki, and Hiroshi Ishiguro. 2007. A Two-Month Field Trial in an Elementary School for Long-Term Human–Robot Interaction. IEEE Transactions on Robotics 23, 5: 962–971. https://doi.org/10.1109/TRO.2007.904904
[13] Da-jung Kim, Yeoreum Lee, Saeyoung Rho, and Youn-kyung Lim. 2016. Design Opportunities in Three Stages of Relationship Development between Users and Self-Tracking Devices. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 699–703. https://doi.org/10.1145/2858036.2858148
[14] Mark L. Knapp and Anita L. Vangelisti. 2004. Interpersonal Communication and Human Relationships. Allyn and Bacon.
[15] M. Kosinski, D. Stillwell, and T. Graepel. 2013. Private traits and attributes are predictable from digital records of human behavior. In Proceedings of the National Academy of Sciences, 5802–5805. https://doi.org/10.1073/pnas.1218772110
[16] Lenneke Kuijer and Elisa Giaccardi. 2018. Co-performance: Conceptualizing the Role of Artificial Agency in the Design of Everyday Life. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), 125:1–125:13. https://doi.org/10.1145/3173574.3173699
[17] Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. 2012. Tell me more? The Effects of Mental Model Soundness on Personalizing an Intelligent Agent. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (CHI ’12), 1–10. https://doi.org/10.1145/2207676.2207678
[18] Min Kyung Lee, Jodi Forlizzi, Sara Kiesler, Paul Rybski, John Antanitis, and Sarun Savetsila. 2012. Personalization in HRI. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI ’12), 319. https://doi.org/10.1145/2157689.2157804
[19] Min Kyung Lee, Junsung Kim, Jodi Forlizzi, and Sara Kiesler. 2015. Personalization Revisited: A Reflective Approach Helps People Better Personalize Health Services and Motivates Them To Increase Physical Activity. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’15), 743–754. https://doi.org/10.1145/2750858.2807552
[20] Henry Lieberman. 2001. Your Wish is My Command: Programming By Example. Morgan Kaufmann.
[21] Caitlin Lustig, Katie Pine, Bonnie Nardi, Lilly Irani, Min Kyung Lee, Dawn Nafus, and Christian Sandvig. 2016. Algorithmic Authority: The Ethics, Politics, and Economics of Algorithms that Interpret, Decide, and Manage. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16), 1057–1062. https://doi.org/10.1145/2851581.2886426
[22] Pattie Maes. 1994. Agents that reduce work and information overload. Communications of the ACM 37, 7: 31–40. https://doi.org/10.1145/176789.176792
[23] Clifford Nass, Jonathan Steuer, and Ellen R. Tauber. 1994. Computers are social actors. In Proceedings of the Conference Companion on Human Factors in Computing Systems (CHI ’94), 204. https://doi.org/10.1145/259963.260288
[24] Upendra Shardanand and Pattie Maes. 1995. Social information filtering: algorithms for automating “word of mouth.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’95), 210–217. https://doi.org/10.1145/223904.223931
[25] Beerud Sheth and Pattie Maes. 1993. Evolving agents for personalized information filtering. In Proceedings of the 9th IEEE Conference on Artificial Intelligence for Applications, 345–352. https://doi.org/10.1109/CAIA.1993.366590
[26] Pieter Stappers and Elizabeth Sanders. 2003. Generative tools for context mapping: tuning the tools. In International Conference on Design & Emotion, 77–81. https://doi.org/10.1201/9780203608173-c14
[27] JaYoung Sung, Henrik I. Christensen, and Rebecca E. Grinter. 2009. Robots in the Wild: Understanding Long-Term Use. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’09), 45–52. https://doi.org/10.1145/1514095.1514106
[28] John W. Thibaut and Harold H. Kelley. 1959. The Social Psychology of Groups. Wiley, New York.
[29] Daniel S. Weld, Corin Anderson, Pedro Domingos, Oren Etzioni, Krzysztof Gajos, Tessa Lau, and Steve Wolfman. 2003. Automatically personalizing user interfaces. In International Joint Conference on Artificial Intelligence (IJCAI 2003), 1613–1619.
[30] Rayoung Yang and Mark W. Newman. 2012. Living with an Intelligent Thermostat: Advanced Control for Heating and Cooling Systems. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (UbiComp ’12), 1102–1107. https://doi.org/10.1145/2370216.2370449
[31] Rayoung Yang and Mark W. Newman. 2013. Learning from a Learning Thermostat: Lessons for Intelligent Systems for the Home. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’13), 93–102. https://doi.org/10.1145/2493432.2493489

[32] John Zimmerman, Anthony Tomasic, Isaac Simmons, Ian Hargraves, Ken Mohnkern, Jason Cornwell, and Robert Martin McGuire. 2007. Vio: A Mixed-Initiative Approach to Learning and Automating Procedural Update Tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’07), 1445. https://doi.org/10.1145/1240624.1240843
