Mind-Reading and Telepathy for Beginners and Intermediates: What People
Think Machines Can Know About the Mind, and Why Their Beliefs Matter

by

Nicholas Julius Merrill

A dissertation submitted in partial satisfaction of the requirements for the degree of

Doctor of Philosophy

in

Information Management and Systems

in the

Graduate Division

of the

University of California, Berkeley

Committee in charge:

Professor John Chuang, Chair

Summer 2018
Mind-Reading and Telepathy for Beginners and Intermediates: What People
Think Machines Can Know About the Mind, and Why Their Beliefs Matter
Copyright 2018
by
Nicholas Julius Merrill
Abstract
Mind-Reading and Telepathy for Beginners and Intermediates: What People Think Machines
Can Know About the Mind, and Why Their Beliefs Matter
by
Nicholas Julius Merrill
Doctor of Philosophy in Information Management and Systems
University of California, Berkeley
Professor John Chuang, Chair
What can machines know about the mind, even theoretically? This dissertation examines
what people (end-users and software engineers) believe the answer to this question might be,
where these beliefs come from, and what effect they have on social behavior and technical
practice. First, qualitative and quantitative data from controlled experiments show how basic
biosignals, such as heartrate, meet with both social context and prior beliefs about the body
to produce mind-related meanings, and affect social decision-making. Second, a working
brain-computer interface probes the diverse beliefs that software engineers hold about the
mind, and uncovers their shared belief that the mind can and will be read by machines. These
cases trace an unstable boundary—one heavily mediated by human beliefs—between sensing
bodies and sensing minds. I propose the porousness of this boundary as a site for studying
the futures of computer-mediated communication, of security, privacy and surveillance, and
of minds themselves.
To Mom
I’ve been shaving (mostly). Thank you for everything. I love you forever.
Contents
Contents ii
List of Figures iv
List of Tables v
1 Introduction 1
5.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Bibliography 74
List of Figures
2.1 Ophiocordyceps unilateralis sensu lato takes control of an ant’s mind without
input from its brain. By constructing a network of sensors and actuators atop its
muscles, the fungal complex forces the ant to chew on the underside of a twig,
after which the ant’s body will serve only as a medium for fungal reproduction. 4
2.2 On the left, fungal filaments surround an ant’s mandible muscle [39]. On the right,
commercial sensing devices decorate the wrists of an enthusiastic self-tracker [33]. 5
4.1 The heartrate monitor. Participants were told to place their finger on the monitor
to take a reading while viewing their partner’s decisions during the previous turn. 27
4.2 The heartrate visualization. After viewing the results of the previous round,
participants saw a graph of what they believed to be their partner’s heartrate,
either normal (left) or elevated (right). Error bars fluctuated within pre-set bounds. 29
4.3 Means of entrustment and cooperation (left) and mood attributions (right) in
elevated and normal heartrate conditions. . . . . . . . . . . . . . . . . . . . . . 31
4.4 Means of entrustment and cooperation (left) and mood attributions (right) in
elevated and normal SRI conditions. . . . . . . . . . . . . . . . . . . . . . . . . 36
5.1 “Please rank the following sensors in how likely you believe they are to reveal
what a person is thinking and feeling.” Higher bars indicate higher rank, or higher
likelihood of being revealing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.1 A big loop: beliefs about the mind inform the design of tools, and the use of these
tools inform beliefs about the mind. . . . . . . . . . . . . . . . . . . . . . . . . . 71
List of Tables
Acknowledgments
Thank you to John Chuang for being a mentor in every sense of the word, for accepting me
into this ancient model of apprenticeship and showing me how the work is done. During
my time as a PhD student, John assembled BioSENSE, a group that made my dissertation
possible. A few BioSENSors in particular helped me immensely: Coye Cheshire (with whom
I did much of the work in this dissertation), Richmond Wong, and Noura Howell. Outside
of BioSENSE, Alva Noë raised the relationship between my work and related debates that
were hiding in plain sight. Along the way, Paul Duguid taught me a great deal about what
good scholarship looks like. I would like to thank all of my collaborators, inside and outside
the ISchool, and to thank the ISchool itself for providing me the freedom and autonomy to
pursue a risky topic.
Above all, I would like to thank Mom and Dad, who always believed in the importance of
my work, even when they weren’t quite sure what I was doing.
I would also like to thank Chihei Hatakeyama and Hakobune for the peaceful music.
Chapter 1
Introduction
What can machines know about the mind? This dissertation seeks to understand people’s
beliefs about this question: how these beliefs arise from interactions with digital sensors and from prior beliefs about the mind and the body; how they affect those interactions in turn; and how these beliefs may shape the design of technical systems in the future.
The purpose of this dissertation is twofold. First, it shows that the boundary between
sensing bodies and sensing minds is unstable, deeply entangled with social context and with
beliefs about the body and mind. Second, it proposes the porousness of this boundary as a site for
studying the role that biosensing devices will play in the near future. As biosensors creep into
smart watches, bands, and ingestibles, they will build increasingly high-resolution models
of bodies in space. Their ability to divine not just what these bodies do, but what they
think and feel, presents an under-explored avenue for understanding and imagining how these
technologies will come to matter in the course of life.
Chapter 2 begins by introducing the notion that the mind is readable from consumer
devices worn on the body and embedded in the environment. It reframes some past studies
in computer science and adjacent fields as having already begun the work of theorizing and
building computational models of minds (Section 2.2). It then motivates human beliefs as a
starting point for discovering the relevance of the readable mind, both in how engineers will
model it, and how end-users will encounter these models in life.
With focus fixed on human beliefs, Chapter 3 describes an empirical examination of
how people conceive of the mind with respect to heartrate, a popular sensing modality in
commercial devices. Through a vignette study, this chapter demonstrates that heartrate can
take on various, sometimes contradictory meanings in different social contexts.
While this study establishes that people can build mind-related meanings around basic
biosignals, it does not establish whether these beliefs can affect social behaviors, nor how
specific our findings are to heartrate. In Chapter 4, we apply quantitative and qualitative
analyses to an iterated prisoner’s dilemma game, in which heartrate information (“elevated” or
“normal”) was shared between players. In a follow-up study, we replicate our initial study, but
replace heartrate with an unfamiliar biosignal, “Skin Reflectivity Index (SRI).” We find that
both heartrate and the unfamiliar biosignal are associated with negative mood attributions
when elevated, but we observe a decrease in cooperative behavior only with elevated heartrate.
Our findings highlight the role beliefs about the body can play in shaping interpretations of
a biosignal, while simultaneously suggesting that the social meaning of unfamiliar signals can
be “trained” over repeated interactions.
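The game structure used in Chapter 4 can be sketched minimally. This is not the study’s software: the payoff values below are conventional textbook numbers rather than the experiment’s actual stakes, and the function names are mine.

```python
# Illustrative sketch of an iterated prisoner's dilemma. The payoff
# values are standard textbook numbers, not those used in the study.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def play_round(move_a, move_b):
    """Return (payoff_a, payoff_b) for a single round."""
    return PAYOFFS[(move_a, move_b)]

def play_game(moves_a, moves_b):
    """Accumulate payoffs over an iterated game of equal-length move lists."""
    totals = [0, 0]
    for a, b in zip(moves_a, moves_b):
        pa, pb = play_round(a, b)
        totals[0] += pa
        totals[1] += pb
    return tuple(totals)

print(play_game(["cooperate", "defect"], ["cooperate", "cooperate"]))  # → (8, 3)
```

Because defection pays off in any single round, sustained cooperation depends on what each player infers about the other, which is where a shared cue such as a partner’s heartrate enters the design.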
The prior two chapters establish that the mind-related meanings of biosignals, familiar
and unfamiliar, arise from both social context and prior beliefs about the body. But how do
the basic biosignals we studied compare to the wide variety of sensing modalities emerging in
consumer devices? Chapter 5 explores beliefs about a variety of biosensing devices, examining
how people relate their data to qualities of mind. I report on the qualitative and quantitative
results of a survey among participants in a large (n>10,000), longitudinal health study, and
an Amazon Mechanical Turk population. Through these results, I locate brainscanning, and
EEG specifically, as a fruitful case for understanding how particular sensing technologies
surface and construct notions of mind.
Having motivated EEG as a fruitful sensing modality for further exploration, Chapter 6
shifts in focus from users to software engineers, studying their interactions with a working
brain-based authentication system. This population’s beliefs are of particular interest as
consumer brainscanning devices become less expensive, and increasingly open to tinkering
via software. Although we find a diverse set of beliefs among our participants, we discover
a shared understanding of the mind as a physical entity that can and will be “read” by
machines.
To conclude, Chapter 7 proposes the term telepathy to describe the encoding and transmission
of minds. I attempt to chart a path for future work, highlighting tensions between
opportunities for novel computer-mediated communication, and concerns around security,
privacy and surveillance. Finally, I propose telepathy as a way to understand not just what
computers can know about the mind, but how machines may shape our notions of what
minds are, and who we are as mind-having beings.
Chapter 2

Ants, Fungus & Telepathy
Would you wear a device in the workplace if your manager thought it could track your
productivity, or creativity [15]? Would you allow your child to wear the same device in
schools, where it could monitor both their academic achievement and their mental health
[17]? Would you wear a fitness tracker if your resting heartrate could predict your future
involvement in violent crime [62]?
In all of these examples, sensing technologies blur the line between sensing bodies and
sensing minds. Today, increasingly inexpensive sensors with developer-friendly SDKs and
APIs allow those with requisite software expertise to (purport to) detect phenomena ranging
from mental health to mood, all without direct data about the brain [38].
In this chapter, I seek to dethrone the assumption that brain-scanning is necessary for
computers to “read” or “decode” the mind. Drawing from contemporary theories of embodied,
extended and distributed cognition, I argue that consumer sensing devices are already able
to grasp at the contents of our minds by sensing our bodies, tools, and built environment
(Section 2.1). I relate this argument to existing work in affective computing and computational
social science, reframing them as having already begun the work of theorizing and building
computational models of minds (Section 2.2).
Drawing on critiques of affective computing and computational social science, I center
the primacy of human interpretation in both constructing models of minds, and interpreting
the relevance of these models in the course of life. I propose this interpretive process as a
starting point for understanding how models of minds might operate in the world (Section
2.3). I conclude by considering the limits of what computers can know about the human
mind, and how beliefs about the mind structure these limits (Section 2.4).
2.1 Background
Consider the ant. The fungal complex Ophiocordyceps unilateralis sensu lato overtakes the
ant’s behavior without acting on its brain at all. Instead, it uses the ant’s body to navigate
the world, constructing a network of coordinated sensing and actuation atop the ant’s muscles
CHAPTER 2. ANTS, FUNGUS & TELEPATHY 4
[39]. By sensing the ant’s environment and stimulating its muscles in response, it causes the
ant to crawl beneath a twig and bite into it; once affixed to the twig, the fungus paralyzes
the ant, using its body as a breeding ground (Figure 2.1).
Figure 2.1: Ophiocordyceps unilateralis sensu lato takes control of an ant’s mind without
input from its brain. By constructing a network of sensors and actuators atop its muscles,
the fungal complex forces the ant to chew on the underside of a twig, after which the ant’s
body will serve only as a medium for fungal reproduction.
Ignoring questions of control, consider the degree of sensing the fungus must perform in
order to utilize the ant’s body. Using the ant’s bodily infrastructure, the fungus creates a
model of ant-experience robust enough to control the organism completely. Although the
Ophiocordyceps fungal complex cannot read the ant’s brain (it has no physical presence there),
it can read the ant’s mind well enough to model its environment and body. The fungus’
model of ant-experience may not be the same as, or even similar to, the one used by the host ant.
Regardless, it is of sufficient resolution to allow the fungus to achieve its (reproductive)
goals.
With this fungus in mind, consider the emerging class of internet of things (IoT) devices,
which are increasingly embedded in the built environment, worn on the body, or worn inside
the body via ingestible pills (Figure 2.2). Cameras, though commonplace, also sense bodies, often
in public and without subjects’ knowledge [90]. All of these connected devices are endowed to
some degree with the capacity to sense (and to build models of) human bodies in space. Past
work has referred to this process broadly as biosensing, and these devices as biosensors [30].
While humans are significantly more complex than ants, the Ophiocordyceps fungal
complex helps illustrate the possibility of creating models of minds with limited or no
Figure 2.2: On the left, fungal filaments surround an ant’s mandible muscle [39]. On the
right, commercial sensing devices decorate the wrists of an enthusiastic self-tracker [33].
information from the brain. If a fungus can do so, perhaps consumer sensing devices can, as
well. As I review in this section, contemporary philosophical theories engage seriously with
the notion of a beyond-the-brain mind. As I discuss in Section 2.2, these theories allow the
physical phenomena detected by commercial sensors to be constituent of the mind.
Cognitive science
Cognitive science has historically been an influential source of physicalist theories about the
mind. The field takes a computational account of the brain, understanding how it “processes
information” [103] within the physical constraints of computational space and time [92].
This perspective offers computational models of “cognition” [92]. For example, these models
informed the design of neural networks, before the relatively recent discovery of performant
backpropagation algorithms made neural networks practical to deploy [73].
However, cognitive scientific models of the mind have received considerable criticism [77,
103]. Two relevant critiques focus on cognitive science’s “isolationist assumptions”: a focus
on the brain (isolated from the body), and a focus on the individual (isolated from social
context, and from the environment). The following sections review major responses to these
critiques: embodied cognition, distributed cognition, and extended cognition. These theories
return later as I discuss prior work in affective computing and computational social science.
Extended cognition does not stop at tools in describing a mind beyond the body. Broadly, it
shifts the focus of cognition away from the brain and the individual body, and toward the “active role of the
environment in shaping cognition” [26]. This theory paved the way toward a socially-extended
cognition, or distributed cognition, as described in Hutchins’ (1995) ethnography of sailors on
a naval vessel [51]. In his analysis, multiple individuals and the material environment play
constituent roles in cognition, manifesting a mind that is distributed across multiple human
and non-human actors.
In addressing some critiques levied against cognitive science, the theories in this section
make various cases for a mind that extends beyond the confines of the brain, and even
beyond the confines of the body. The following section argues that these theories, perhaps
unwittingly, make the mind amenable to modeling via sensors worn on the body or embedded in
the environment, and that past research has (also unwittingly) already begun to sense the
mind from beyond the brain.
Affective computing
Affective computing, pioneered by Rosalind Picard at the MIT Media Lab, seeks to use
sensors to measure a user’s affect, emotions, and mood in order to improve their interaction
with machines [83]. Two commercial examples of such sensing come directly from work in
Rosalind Picard’s research group. The Empatica wristband senses electrodermal activity,
with the aim of correlating these data to emotional states [41]. This wristband has gone
on to inspire cheaper consumer alternatives, such as the Feel [38]. Also from Picard’s lab,
Affectiva classifies emotions from facial expressions, as detected through a camera. Their
infrastructure works through a webcam, providing what they term “Emotion as a Service” [1].
In both of these examples, affect is framed as a bodily state, as in theories of embodied
cognition. However, affective computing extends these claims further, positing that wearable
sensors can measure, encode, and transmit emotions through their sensing of bodily states
[49]. Although work in affective computing does not generally make explicit reference to
embodied cognition, it typically seeks to detect emotion via bodily phenomena, and does
not consider these phenomena to be mere proxies for real emotions, indicating a general view of
emotions as primarily embodied.
Chapter 3

Reading Mind from Heartrate
The previous chapter argues that human interpretations are central to the study of how
models of minds might operate in the course of life. Building on this argument, the present
chapter seeks to uncover what users believe basic biosensors can capture about the minds of
others. Through a vignette experiment and a mixed-methods experimental study, this chapter
shows how people use biosensory data (heartrate) in social, computer-mediated contexts to
build interpretations relating to the minds of others.
3.1 Background
As of 2016, several apps allow users to share their heartrate with their friends, leading some
[67] to wonder why anyone would want to do such a thing. In fact, heartrate is a
potentially rich signal for designers. The meaning of a heartrate in any given context is at
once socially informative [40, 93] and highly ambiguous [68].
After all, heartrate is not just some number. The sense of one’s heartbeat is an integral
feature of the human experience, and people’s associations with it range from intimacy [56]
to anxiety [31] to sexual arousal [100]. Many heartrate sharing applications rely on these
associations, asking users to ascribe contextual meanings to heartrate [57, 93], often with the
aim of increasing intimacy [56]. The advertising copy for Cardiogr.am, one smartwatch app,
reads,
Your heart beats 102,000 times per day, and it reacts to everything that happens
in your life—what you’re eating, how you exercise, a stressful moment, or a happy
memory. What’s your heart telling you? [18]
These applications, along with many others, rely on the fact that people will imbue their
heartrate data with emotional, and highly contextual interpretations. Given the relatively
large number of wearables with embedded heartrate monitors (watches, bands, even earbuds)
[97], it is unsurprising that designers are looking beyond fitness and health for ways to increase
CHAPTER 3. READING MIND FROM HEARTRATE 13
user engagement with these devices. However, it is not clear how individuals will interpret a
shared biosignal (e.g., heartrate) in different contexts of social interaction.
This chapter examines what heartrate can mean as a computer-mediated cue, and how
interpretations of heartrate affect social attitudes and social behavior as people assign
meanings to these signals relevant to the mind (emotion, mood, trust).
First, we use a vignette experiment to investigate how individuals make social interpretations
about a rudimentary biosignal (heartrate) in conditions of uncertainty, focusing on
dyadic interactions between acquaintances. Dyadic relations, which are present in all groups,
function as a fundamental starting point for understanding interpersonal collaboration and
group interactions [22]. We describe the quantitative and qualitative results of a randomized
vignette experiment in which subjects make assessments about an acquaintance based on
an imagined scenario that included shared heartrate information. We examine two contexts
in this study: an uncertain, non-adversarial context and an uncertain, adversarial context.
These two contexts, differing only by a few words, ask participants to imagine they are
meeting someone “for a movie” (non-adversarial) or “to discuss a legal dispute” (adversarial),
in which the person they are meeting is running late. I discuss the vignette in more detail
later.
We find that a high heartrate transmits negative cues about mood in both contexts of
interaction, but that these cues do not appear to impact assessments of trustworthiness,
reliability or dependability. Counter to our initial predictions, we find that normal (rather
than elevated) heartrate leads to negative trust-related assessments, but only in the adversarial
context. In qualitative assessments of subjects’ attitudes and beliefs, we find that normal
heartrate in the adversarial condition conflicts with expectations about how the participant
believes the acquaintance should feel, signaling a lack of concern or seriousness, which appears
to lead individuals to view the acquaintance as less trustworthy. In contrast, subjects in the
non-adversarial context relate elevated heartrate to empathy and identification rather than
trustworthiness. We also find that a small number of subjects read different social interpretations
onto the heartrate signal, including a very small minority who did not infer any relationship
between the heartrate and the social situation.
Goffman [42] (p 56) makes an important distinction between the cues that we intend to
give to others, and those that are “given off” unintentionally through our numerous non-verbal
actions and behaviors. We view physiological signals such as heartrate as a form of non-verbal
signaling that can “give off” more information to others than the sender may desire [50]. This
type of personal data revealed through discreet sensors paired with mobile communication
technologies has, until recently, been unavailable in most forms of social interaction.
Sharing heartrate
Heartrate has deep-rooted cultural significance in many societies, and near-universal familiarity
as a feature of our lived experiences. Building on associations with intimacy and love, many
heartrate sharing applications have aimed to “enhance” social connectedness by fostering
feelings of intimacy between people [56, 47].
What heartrate means as a computer-mediated cue, however, is ambiguous, its potential
interpretations varying widely in different contexts [66, 93]. Boehner et al. (2007) argue for
the intrinsic ambiguity of sensor data as a resource in design, particularly in systems that seek
to use these data to express emotion [10]. Many technology probes corroborate this stance,
relying on users to project socially contextual meanings around a transmitted heartrate.
Consequently, more recent work has challenged the notion that the social consequences of
transmitting physiological data will always result in increased trust and intimacy [93]. There
remains little work, however, on how the potential ambiguity of a heartrate signal is resolved
in social conditions of risk and uncertainty.
Hypotheses
Based on the aforementioned studies of individuals’ negative emotional interpretations of their own
heartrate, we believe that this negative valence will be mirrored in people’s interpretations
of the heartrates of others in uncertain situations. Our investigation begins with two key
predictions about negative assessments of one’s partner in an uncertain social situation.
Past work indicates that people tend to make negative inferences about mood and emotion
from elevated heartrates [31, 45, 106]. As such, our first hypothesis predicts that participants
will adjust their attitudes about the mood of their partner when their partner’s heartrate is
elevated, as opposed to normal:

Hypothesis 1: Individuals who believe that their partner has an elevated heartrate will make more negative assessments about the partner’s mood, compared to those who believe that their partner has a normal heartrate.
Where Hypothesis 1 predicts that individuals will make negative assessments about an
acquaintance’s mood based on elevated heartrate, our second hypothesis predicts that individuals
will make negative assessments about dispositions to behave in a reliable, dependable
and trustworthy manner. Thus, both hypotheses stem from the same base assumption that,
all things being equal, elevated heartrate has a primarily negative connotation with attitudes
and behaviors of another person.
Hypothesis 2: Individuals who believe that their partner has an elevated heartrate will make more negative assessments about the partner’s trustworthiness (2a), reliability (2b), and dependability (2c), compared to those who believe that their partner has a normal heartrate.
We test both hypotheses in two different contexts of interaction (adversarial and non-
adversarial) to understand how the context of risk and uncertainty affects social interpretations
of heartrate.
Sample
Our sample consisted of undergraduate students recruited from UC Berkeley.
Potential participants were asked to participate in a short online survey; they did not know
the nature of the questions or the topic of the study in advance. All the participants
were compensated with a $5 Amazon gift card. One hundred and three (103) participants
completed the experiment survey instrument. The pool was weighted toward women: 65%
were women, 34% were men, and 2% (2 subjects) did not identify with either gender.
With random assignment, the same overall gender split was maintained across conditions.
The mean age of participants was 23.
Figure 3.1: Mood-related evaluation (7-point Likert) means by condition (bars represent
standard deviation).
We apply both quantitative and qualitative analyses to investigate our research questions
and hypotheses. The study is based on an experimental design, but we also place
significant emphasis on open-ended responses to better understand participants’ thought
processes, beliefs, and rationale for their choices in the vignettes. Our first hypothesis predicts
that individuals will make negative attributions about the mood of the acquaintance in
this uncertain situation when they believe that the acquaintance has an elevated heartrate
(compared to normal heartrate). Given our four separate measures of mood, we conducted a
multivariate analysis of variance (MANOVA) to test the hypothesis that there are one or
more mean differences between the normal/elevated heartrate conditions, and/or between
the two contexts of interaction (non-adversarial and adversarial).

Figure 3.2: Trust-related evaluation means (7-point Likert) by condition (bars represent
standard deviation).
We found a strong, statistically significant effect and a medium practical association
between emotional attributions and heartrate condition, F (4, 96) = 32.89, p < .001; partial
eta squared = .58. Turning to the individual outcomes, we find that subjects’ perceptions
of the vignette acquaintance’s anxiety, his/her tendency to be easily upset, his/her
tendency to be emotional, and his/her lack of calmness were all significantly higher in the
elevated heartrate conditions than in the normal heartrate conditions (see Figure
3.1). We found no significant effect for the two contexts of interaction, F (4, 96) = 1.072, p =
.38, and no significant effect for the context x heartrate condition interaction, F (4, 96) = 1.65,
p = .17. In sum, individuals rate acquaintances with elevated heartrates as significantly more
anxious, more easily upset, and less calm than those with normal heartrates. In the non-adversarial
context, individuals did not rate the acquaintances as significantly more emotional in the
elevated condition compared to normal, but this difference was statistically significant in the
adversarial context.
The context of interaction (non-adversarial, adversarial) does not have any effect on
mood ratings. With clear statistical and practical significance for the overall effect of mood
attributions by heartrate condition in both contexts of interaction, Hypothesis 1 is supported.
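An omnibus test of the shape reported above can be sketched with statsmodels. This is an illustrative reconstruction only, not the authors’ analysis code: the data below are synthetic, and the variable names (heartrate, context, anxious, upset, emotional, calm) are hypothetical stand-ins for the two factors and four mood measures.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic stand-in data: two crossed factors (heartrate condition,
# interaction context) and four 7-point mood measures per subject.
rng = np.random.default_rng(7)
n = 104
df = pd.DataFrame({
    "heartrate": rng.choice(["normal", "elevated"], n),
    "context": rng.choice(["non_adversarial", "adversarial"], n),
    "anxious": rng.integers(1, 8, n).astype(float),
    "upset": rng.integers(1, 8, n).astype(float),
    "emotional": rng.integers(1, 8, n).astype(float),
    "calm": rng.integers(1, 8, n).astype(float),
})

# Four dependent measures regressed on both factors and their
# interaction, mirroring the F tests reported in the text.
manova = MANOVA.from_formula(
    "anxious + upset + emotional + calm ~ heartrate * context", data=df)
print(manova.mv_test())
```

The `mv_test()` output reports the multivariate statistics (e.g., Wilks’ lambda, Pillai’s trace) for each main effect and for the interaction term.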
Our second hypothesis predicts that individuals will make negative assessments about
how certain they are regarding the acquaintance’s trustworthiness characteristics when the
acquaintance has an elevated versus a normal heartrate. We find a statistically and practically
significant effect for the heartrate conditions, F (3, 97) = 4.19, p < .01; partial eta squared =
.12. However, we also find statistically significant effects for both the context of interaction,
F (3, 97) = 2.82, p < .05, and the context x heartrate condition interaction, F (3, 97) = 2.75,
p < .05.
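The effect sizes in this section can be recovered directly from the reported F statistics. A minimal sketch follows; the helper name is mine, and the formula is the standard conversion from an F ratio and its degrees of freedom to partial eta squared.

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Partial eta squared from an F statistic and its degrees of freedom:
    F * df_effect / (F * df_effect + df_error)."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# The omnibus mood-attribution effect reported earlier,
# F(4, 96) = 32.89, corresponds to partial eta squared ≈ .58:
print(round(partial_eta_squared(32.89, 4, 96), 2))  # → 0.58
```

For multivariate tests, the exact effect size depends on which test statistic is used, so reported values may differ slightly from this conversion.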
A closer inspection of the individual mean differences reveals that the means for all three
outcomes (reliability, dependability and trustworthiness) are all lower in the normal condition
compared to the elevated condition in the adversarial context (see Figure 3.2). This result
is the opposite of what Hypothesis 2 predicts. In the non-adversarial context, we find no
significant differences between the heartrate conditions.
I will feel less sympathetic to this person because their heart rate doesn’t show
that they are stressed or upset.
I feel annoyed because a higher heart rate would indicate that the person cares
about the meeting
The normal heartrate implies that my acquaintance isn’t taking this meeting
seriously. However, it is difficult to say that my acquaintance does not care or
is lying.
In these cases, interpretations focused on what the other person was thinking or feeling. As
we saw in the adversarial context, normal heartrate seems to be in conflict with expectations.
Interestingly, two participants read the normal heartrate positively, as a sign that the other
person was telling the truth.
If his heartrate is normal, then he is probably not lying. I would still be slightly
annoyed at this.
it’s OK. her heartbeat was normal, so no lies
These subjects seemed to feel annoyed by the partner’s normal heartrate. However, in
contrast to the adversarial context, no subjects explicitly stated that the other person seemed
less trustworthy, honest or reliable as a result.
Elevated heart rate tells me that the acquaintance at least cares that he/she is
late and there’s no point in getting mad.
I would text her back "No problem! I’ll grab the tickets and will wait for you out
front." It seems obvious she’s in a hurry to get there, and is late because of traffic.
I will feel apologetic because I can see that this person’s heartrate is elevated and
I do no want him/her to feel worried/ stressed about making a movie.
I would feel anxiety about being late for the movie and pity because they seem
anxious. I don’t like being rushed and get anxious when I am rushed
In these responses, heartrate generally seemed to signal that the acquaintance was stressed.
While stress is generally assumed to be negative, in this case it seems to engender identification
and empathy with the acquaintance. This example gestures toward the highly contextual
nature of heartrate’s social meaning, and why more work should examine the consequences
of these different interpretations.
Given that heartrate sharing is not (yet) widely deployed in consumer devices, it is
somewhat surprising that only a few subjects commented on privacy concerns. This could be
partially explained by the fact that the scenario was imagined, rather than simulated, and
because subjects might have anticipated our interest in their reactions to the interface.
(adversarial + elevated heartrate) Heart rate could be elevated for many reasons,
and just like studies with lie detectors, it may possibly indicate lying, but also
could indicate other things. It’s just a number, not a definite answer of lying or
not. And even then, you’ve got to forgive people.
(adversarial + normal heartrate) “The normal heartrate implies that my acquain-
tance isn’t taking this meeting seriously. However, it is difficult to say that my
acquaintance does not care or is lying. For example, I have no knowledge of the
traffic to determine if my acquaintance is lying. Additionally, my smartphone can
be wrong; I don’t know how accurate this technology is, especially since it is a
very new piece of technology.”
Our study did not reference any existing device, so it is possible that the fallibility of
particular devices was not on subjects’ minds. However, the trust that people place in sensing
devices, and the presumed authority of their data, should be explored thoroughly in future
work.
Only two subjects in the study who mentioned heartrate felt that the data was not
necessarily related to the specific social situation described in the vignette.
Across all conditions, the fact that the vast majority of participants inferred a causal
relationship between the heartrate information and the particular social situation highlights
the relatively reliable effect of context in priming subjects to draw such inferences. Our
results indicate that simply making the heartrate salient, in the absence of other cues, invites
people to project a causal narrative on the mood, intentions, and behavior of others.
3.5 Discussion
We began this investigation by asking how individuals might interpret heartrate information
in uncertain social interactions. Our hypotheses were both based on the simple rationale
that the kinds of negative attributions that people tend to make about their own heartrate
would be echoed in their social interpretations of others’ heartrates in uncertain contexts. We
found, however, a much more complex story about the social interpretation of biosignals and
the context of interaction.
Our first hypothesis predicts that an elevated heartrate will be negatively associated with
assessments about mood and dispositions in uncertain social interactions, both adversarial
and non-adversarial. We found strong support for this hypothesis in both contexts, across
our outcome attributions, in line with prior works’ findings regarding interpretation of one’s
own heartrate [106]. Our second hypothesis predicts that an elevated heartrate will lead to
negative assessments about the partners’ trustworthiness, dependability and reliability. As
with our first hypothesis, we expected that pre-existing negative connotations with heartrate
might translate into negative expectations of trust-related behavior.
We rejected the second hypothesis in both contexts of interaction. In the non-adversarial
context, we found no difference in assessments of trustworthiness, dependability or reliability
in the elevated and normal heartrate conditions. Furthermore, we found that the average
assessments on these three outcomes were nearly identical between the elevated condition
in the adversarial context and the elevated and normal conditions in the non-adversarial
context.
Most surprisingly, we find a decrease in trustworthiness, dependability, and reliability
in the normal heartrate condition, but only in the adversarial context. As noted in the
quantitative results, the differences between the elevated and normal conditions in the
adversarial context were highly statistically significant: each of the trust-related measures saw
an average decrease of one full point (on a 7-point scale) in the normal condition compared
to the elevated condition.
To help explain these results, we turn to our qualitative analyses of the adversarial (legal
dispute) context. Subjects in the adversarial context seemed to have expected their partner
to have an elevated heartrate. When the partner had a normal heartrate, participants viewed
it as evidence that s/he is not bothered enough, not taking the situation seriously, or perhaps
even lying. Indeed, many participants explicitly stated in the open text responses that they
trusted the partner less because his or her heartrate was normal.
Why do we not see the same effect in the non-adversarial context? Turning again to
the qualitative data, we find that participants took elevated heartrate as a token of their
acquaintances’ genuine desire to arrive on time. It seems that elevated heartrate led many
participants in the non-adversarial context to increase their empathy, identification, and
understanding of the partners’ situation. Thus, even though individuals in the non-adversarial
condition associate elevated heartrate with anxiety, lack of calmness, and being easily upset,
the negative emotional interpretations do not seem to translate to evaluations of one’s
trustworthiness, dependability or reliability.
Taken together, we see that heartrate does not inherently (or consistently) affect trust-
related outcomes. Instead, social expectations shape interpretations of the heartrate biosignal
to create highly contextual, socially-specific meanings. Computer-mediated communication
researchers have long noted that, when cues are omitted from computer-mediated interaction,
people tend to fill in the gaps [3,10]. However, individuals may interpret new types of
interpersonal data in ways we do not yet understand. Our work provides some evidence
that such interpretations might have real social consequences. The fact that heartrate alone
can significantly alter one’s perception of trustworthiness in an adversarial context is an
important step towards the larger goal of unpacking people’s beliefs about what machines can
know about the mind. For one thing, the mostly positive social interpretations of heartrate
observed in past work are likely highly dependent on the social context in which they were
observed. The social situatedness of models of minds are probed further in this dissertation,
particularly in chapters 4 and 6.
Finally, we note a diversity of opinions and interpretations within conditions. For example,
a few subjects took normal heartrate as proof of honesty, the opposite view from the majority
of subjects. A few subjects did not feel there was necessarily any relationship between
heartrate and the social situation at hand. A small minority (three subjects) mentioned
concerns around privacy or disclosure. The wide range of views, sometimes contradictory,
highlights the complexity intrinsic to interfaces that collect and share biosignals, and warrants
future studies into social and contextual interpretation of data from wearable devices.
In our qualitative data, we regularly observed attitudes about the presumed authority
or “neutrality” of data interacting with beliefs about the body to create a context in which
wearables data can be used to construct social judgments or assessments. How these
assessments play out will vary in different social situations, with different sensors, and in
different contexts of use. This point motivates the work described in Chapter 5, which
broadens this inquiry to a variety of sensors and a variety of aspects of mind.
3.6 Limitations
Our vignette experiment examined a single type of scenario in two different contexts, using
text-based answers. We still have a limited picture of the range of theoretically important
contexts in which individuals may observe and interpret biosignals about others, and a limited
understanding of how the rich cues present in realistic interaction contexts might influence
social interpretation. Our study focused on a first-time interaction with an imagined heartrate
sharing interface. We do not know how our findings would hold over time, and it is very likely
that social meanings of any biosignal could become more consistent over time. The vignette
scenario was contrived from believable, but currently non-existent smartphone technology.
Either due to participants’ suspension of their disbelief or due to their actual attitudes about
the heartrate sharing, few participants raised questions regarding privacy implications of
these scenarios.
Since the vignette study took place online, we could have missed the sorts of rich contextual
cues that might be captured by live interviews or other in-person methods. Furthermore, the
internet presents a wide array of distractions to survey-takers, and our survey was not able to
detect the participants’ attention on the task (e.g., we could not detect whether the subject
was switching between tabs in their web browser, or taking breaks during the survey), nor
did we monitor how long subjects spent filling out the survey.
This vignette experiment provides evidence that interpretations of biosignals from
sensors (such as wearables) can affect social attributions and behaviors towards others.
Nevertheless, many questions remain. While this study examined social beliefs as they relate
to heartrate, it did not examine how (or if) these beliefs affect social behaviors. Furthermore,
we did not examine how specific our findings are to heartrate. What other signals from the
body might lead to social interpretations?
3.7 Conclusion
In the following chapter, we begin to address the limitations above through controlled,
behavioral experiments, which help us ask more specific questions about how elevated
heartrate affects perceptions of risk in uncertain interactions, e.g., when money is at stake.
This study leads to a more robust understanding of how the transmission of basic biosignals
might affect social behavior.
Chapter 4
Biosignals, Mind and Behavior
From the prior chapter’s findings about social attitudes, this chapter moves to a lab-based
experiment to understand how shared heartrate affects social behavior. We apply quantitative
and qualitative analyses to an iterated prisoner’s dilemma game, in which heartrate informa-
tion (“elevated” or “normal”) was shared between players. In a follow-up study, we replicate
our initial study, but replace heartrate with an unfamiliar biosignal, “Skin Reflectivity Index
(SRI).”
We find that both heartrate and the unfamiliar biosignal, when elevated, are associated
with negative mood attributions, but we observe a decrease in cooperative behavior only with
elevated heartrate. Qualitative results indicate that individuals may learn an association
between our unfamiliar biosignal and the cooperative, trusting behavior of their partner. Our
findings highlight the role prior beliefs can play in shaping interpretations of a biosignal,
while suggesting that, in the absence of prior beliefs about a particular signal, users may
learn to associate signals with social meanings over repeated interactions.
Our results raise important questions for applications that transmit sensor-derived signals
socially between users. For signals with strong cultural associations, people’s prior beliefs will
color their interpretations, and social outcomes may or may not be positive. In the case of
novel signals, on the other hand, our results imply that designers can (perhaps inadvertently)
teach users to associate these biosignals with social meanings. This effect could be viewed as
beneficial, depending on design objectives. It could also be dangerous if designers suggest,
perhaps even inadvertently, interpretations that lead to discrimination.
Generally, when individuals believe that their heartrate is elevated, they often believe
their mood and emotions to be more negative [100]. Thus, we apply this same logic to how
individuals will interpret the elevated heartrates of others in uncertain social interactions:
If elevated heartrate has a negative connotation with mood, then elevated heartrate may
increase uncertainty about the behavior of one’s partner as well. When people know that
their partner has an elevated heartrate in uncertain, risky interactions, they may take
actions to protect themselves against potential losses. In trust-building situations, individuals
take small risks with other people (entrustment behavior) and learn whether the other person
honors that trust or not (cooperative behavior). Thus, individuals have two different ways to
respond to increased uncertainty about their partners’ behavior in trust situations: 1) reduce
the amount they entrust to their partners, or 2) decrease their willingness to cooperate with
the partner [22, 28]. Since we expect elevated heartrate to have pre-existing connotations with
negative attributes, we predict that individuals will entrust and/or cooperate less to protect
themselves from potential harm when the partner has an elevated vs. a normal heartrate.
The overall design of the trust game involves anonymous pairs of fixed partners making
repeated decisions to entrust valued resources to the partner, and to return (cooperate) or
keep (defect) the points entrusted by the other partner. Importantly, individuals can make
the highest amount of money when they entrust many points to a partner and the partner
returns these points. This creates an uncertain social situation in which participants are
trying to earn real money by repeatedly taking risks (entrusting points) with a partner. Since
the partners are making the same decisions to entrust and keep/return points from the other
partner, these are mutually-dependent social interactions.
Figure 4.1: The heartrate monitor. Participants were told to place their finger on the monitor
to take a reading while viewing their partner’s decisions during the previous turn.
We operationalized an uncertain social interaction situation using a trust game called the
Prisoner’s Dilemma with Dependence (PDD) [22, 28]. The PDD game allows individuals to
control the amount of risk that they want to take with their partner by choosing how many
points to entrust, followed by a second decision to either keep or return whatever has been
CHAPTER 4. BIOSIGNALS, MIND AND BEHAVIOR 28
entrusted by their partner. Thus, the PDD game separates trust behavior (choosing how
much to entrust to a partner) from cooperative behavior (choosing to return or keep what
a partner entrusted). In each round of the PDD game, participants were given an initial
endowment of 10 points. Each participant decided whether to entrust any number of points
to their partner, from zero to ten. Then, participants found out at the same time whether
their partner had entrusted them with any of their own points, and if so, how many. Next,
each participant decided whether to keep the points entrusted to them (defection) or return
them (cooperation). The participants could not return only a portion of the entrusted points,
only all or none of them. If the points were returned to the partner, they were automatically
doubled in value for that participant.
After all participants made decisions about returning or keeping any points that had
been entrusted to them, they were then asked to place their finger on the heartrate monitor
for a few seconds in order to get a pulse reading (Figure 4.1). Participants then viewed the
summary of point calculations for the round. Subsequently, participants viewed a visual
display of the partners’ recent heartrate (Figure 4.2). The final point calculation for the
round included any of the initial allotment of points remaining after the trust decision, plus
any points that the participant kept from their partner if they decided not to return them. In
addition, players received points for any entrusted points that their partner returned, which
doubled in value.
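The round accounting described above can be summarized in a short sketch. This is our own minimal illustration of the rules as stated in the text, not the experiment's actual software; the function and parameter names are ours.

```python
def pdd_round_payoff(endowment, my_entrust, partner_entrust,
                     i_return, partner_returns):
    """One player's points for a single PDD round.

    Rules as described in the text: each player starts with a
    10-point endowment, entrusts 0-10 points, then chooses to
    return (cooperate) or keep (defect) whatever the partner
    entrusted; returned points double in value for the entruster.
    """
    payoff = endowment - my_entrust       # points not entrusted
    if not i_return:
        payoff += partner_entrust         # kept (defected on) partner's points
    if partner_returns:
        payoff += 2 * my_entrust          # own entrusted points, doubled
    return payoff

# Mutual full cooperation maximizes earnings:
# entrust 10, partner returns -> 0 remaining + 20 returned = 20 points.
```

For example, entrusting 5 points and having them returned, while returning the partner's 3 points, yields 10 − 5 + 10 = 15 points for the round.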
When participants arrived at the laboratory, they were given a consent form that described
the nature of the study, as well as the human subjects’ approval information from our university.
We wanted participants to believe that they would be interacting with other real people, and
this perception was enhanced by having 12-16 participants at separate computer terminals in
the same large room during each experimental session. In fact, we controlled the trust and
cooperation behavior of the “partner” for every participant using a simulated computer actor.
As a result, no one in the study interacted with a human partner.
The simulated actor was programmed to always begin by entrusting one point on the first
round, then randomly entrust up to one point above or below whatever the partner entrusted
on the previous round. In addition, the simulated actor was programmed to always cooperate
(i.e., return the points that were entrusted by the partner). Following [22], we chose to use a
highly cooperative interaction partner in order to minimize any other forms of uncertainty in
the interaction. A highly-cooperation partner does not introduce any defection behaviors
that might otherwise reduce cooperation or trust from the participant (thereby hindering
our ability to detect main effects from the experimental manipulation). Thus, the simulated
actor was designed to reciprocate the entrusting behavior of the human participant on each
round, and always cooperate no matter what the human participant chose to do.
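The simulated actor's decision rule, as described above, can be sketched as follows. This is our own illustration; the function names are ours, and clamping the entrustment to the 0–10 range is our assumption.

```python
import random

def simulated_actor_entrust(human_prev_entrust=None):
    """Entrustment rule for the simulated partner, as described in
    the text: 1 point on the first round, then a random amount
    within one point of the human's previous entrustment.
    Clamping to the valid 0-10 range is our assumption."""
    if human_prev_entrust is None:        # first round
        return 1
    offset = random.choice([-1, 0, 1])
    return max(0, min(10, human_prev_entrust + offset))

def simulated_actor_cooperates():
    """The actor always returns whatever the human entrusted."""
    return True
```

This design reciprocates the human's entrusting behavior while holding cooperation constant, so any change in the human's behavior can be attributed to the heartrate manipulation rather than the partner's play.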
The participants completed 20 rounds of the PDD game, but they did not know how
many rounds they would play in order to eliminate end-game effects, such as defecting at
the last minute. After all rounds of the PDD game were completed, participants answered
a short post-questionnaire in order to assess their attitudes and beliefs about their partner.
This questionnaire included 7-point Likert-style response questions (1 = strongly disagree, 7
= strongly agree) about the partner’s anxiety (e.g., “my partner is
Figure 4.2: The heartrate visualization. After viewing the results of the previous round,
participants saw a graph of what they believed to be their partner’s heartrate, either normal
(left) or elevated (right). Error bars fluctuated within pre-set bounds.
At the end of the study, participants were debriefed on the true nature and intent of the
experiment. An experimenter was available at the end of the study in case of any questions,
and we provided participants with the researchers’ email addresses on both the signed informed
consent form, as well as the debrief form, so that they could contact us regarding any aspect
of the study. We did not receive any emails or concerns from participants.
Experimental Manipulation
To assess the effect of interacting with a partner who has an elevated heartrate versus
interacting with a partner who has a normal heartrate, we controlled the heartrate information
that participants saw after each round of the experiment. This created a two-condition design:
always normal heartrate (NH) and always elevated heartrate (EH).
Figure 4.3: Means of entrustment and cooperation (left) and mood attributions (right) in
elevated and normal heartrate conditions.
4.3 Results
Quantitative results
Our first hypothesis predicts that, when individuals believe that their partner has a consistently
elevated heartrate, compared to a normal heartrate, they will rate the partner more negatively
on mood attributes. Consistent with prior research, we found an overall strong, statistically
significant effect and medium practical association between attributions and experimental
condition, F(4, 51) = 6.7, p < .0001; Wilks’ lambda = .66, partial eta squared = .34. Turning
to the individual outcomes, we find that perceptions of the partners’ anxiety is significantly
higher in the EH condition (M = 3.86, SD = 1.72) compared to the NH condition (M = 2.14,
SD = 1.27), F(1, 54) = 18, p < .001; partial eta squared = .25. Furthermore, participants
rated their partners as significantly more calm in the NH condition (M = 5.9, SD = 1.3)
compared to the EH condition (M = 4.29, SD = 1.46), F(1, 54) = 18.71, p < .001; partial
eta squared =.26. On the other hand, we found no statistically significant differences for
perception that the partner is “easily upset” or that the partner is “emotional” (p = n.s.).
In sum, we find strong statistical and practical differences in perceptions of both anxiety
and calmness, but no statistical or practical differences in perceptions of how emotional or
easily upset the partner is in the two experimental conditions. Given the significant omnibus
test and significant results on two of the four individual outcomes, Hypothesis 1 is partially
supported.
Our second set of hypotheses predict that participants in the elevated heartrate (EH)
condition will exhibit lower trusting (H2a) and/or cooperative (H2b) behavior compared to
those in the normal heartrate (NH) condition. The average points entrusted by participants
in the EH condition (M = 7.88, SD = 2.18) was not significantly different than the NH
condition (M = 7.7, SD = 2.18), t = .28, p = n.s., one-tailed test. Thus, individuals entrusted
points to their partners at approximately the same level in both conditions (Figure 4.3).
Hypothesis 2a is not supported.
However, we found that the average cooperation rate in the EH condition (M = .74, SD
= .37) was statistically significantly lower than the NH condition (M = .89, SD = .25), t
= 1.76, p < .05, one-tailed test. Importantly, this result shows a medium practical effect
size (Cohen’s d = .47), indicating a meaningful real world difference. On average, those in
the normal heartrate condition cooperated 20% more than those in the elevated heartrate
condition (Figure 4.3). Hypothesis 2b is supported.
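Assuming equal group sizes (in which case the pooled standard deviation reduces to the root mean square of the two SDs), the reported effect size can be recovered from the summary statistics above. This arithmetic sketch is ours, not the study's analysis code.

```python
import math

def cohens_d_from_summary(m1, s1, m2, s2):
    """Cohen's d from two group means and SDs, assuming equal
    group sizes so the pooled SD is the RMS of the two SDs."""
    pooled_sd = math.sqrt((s1 ** 2 + s2 ** 2) / 2)
    return abs(m1 - m2) / pooled_sd

# Reported cooperation rates: NH (M = .89, SD = .25) vs. EH (M = .74, SD = .37)
d = cohens_d_from_summary(0.89, 0.25, 0.74, 0.37)  # ~0.47, a medium effect
```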
Since we designed the simulated actors in both conditions with trusting and always-
cooperative behavior, we did not expect participants to rate the simulated actors differently
in terms of the focal behaviors of cooperativeness and trustworthiness between experimental
conditions. This is a critical manipulation check, since we need to rule out any perceived
effect of the simulated partners’ behavior in order to establish that the primary treatment
(heartrate of partner) had an effect on the human participants’ behavior. The omnibus test of
difference in perceptions of the trustworthiness and cooperative behavior between conditions
was not significant, F(2, 53) = .21, p = n.s.; Wilks’ lambda = .99, partial eta squared = .01.
Thus, as we would expect, individuals did not indicate significant behavioral differences for
the trusting, cooperative simulated actor (which was programmed to behave exactly the same
in both conditions).
Qualitative results
At the end of our questionnaire, before the demographic questions and the debriefing,
participants were presented with two open-ended questions. The first asked participants
to “Tell us how you would describe your partner.” The second asked participants “What, if
anything, did heartrate tell you about your partner during this experiment?” This section
discusses and unpacks some of the responses that these questions elicited.
Many people who referred to elevated heartrate in their responses mentioned that it
signaled anxiety. In some cases, participants even reflected on a negative relationship between
elevated heartrate, anxiety and trust.
These quotes further support our first hypothesis, as well as findings of past work showing
that elevated heartrate typically signals anxiety and negative mood. In other words, elevated heartrate
(and heartrate in general) seemed to be about the partner’s current disposition, rather than
who the partner was as a person. While the majority of those who mentioned elevated
heartrate implied a causal relationship between the signal and the game context, a few did
not:
My partner’s heart rate was elevated the whole time, most students are stressed
so that might be why.
They may have been nervous because of doing the experiment itself.
The relative rarity of skepticism about the relationship between heartrate and specific
game events highlights the crucial role of framing and salience in turning what might be a
disembodied signal (heartrate data) into a relevant, contextual clue. We also noted diversity
in beliefs about the meaning of heartrate itself. Where almost all participants who mentioned
heartrate associated it with anxiety, at least one participant had an entirely different take on
his/her partner’s consistently elevated heartrate:
My partner’s heart rate does not change too much which indicates that he or she
is very nice.
These quotes highlight overall diversity in what an elevated heartrate is capable of meaning.
Even within our relatively small, and relatively homogenous sample of university students,
our quotes imply a mostly negative association with elevated heartrate, but also a potentially
long tail of diverse beliefs about elevated heartrate.
Many participants said that normal heartrate indicated that the partner was “calm,”
“chilled out,” or “not anxious.”
[HR signaled] that my partner was always calm. The heart rate never fluctuated,
it didn’t make a difference.
These quotes show subjects inferring a direct connection between the heartrate signal
and the attribution of a calm mood. One participant specifically mentioned that the consistency
of a normal heartrate made their partner seem more trustworthy:
My partner’s heart rate has been consistently normal throughout the experiment,
so I guess s/he has no intention to cheat.
Another participant, presumably a cooperative one, thought that their partner’s heartrate
would have risen if s/he had not cooperated.
In all of the above quotes (and the vast majority of responses), participants inferred a
relationship between normal heartrate and calmness. However, a few participants did not
infer any relationships between behavior, moods and the signal they saw.
Heartrate did not tell me anything. My partner was average each time. I also am
sure I have an elevated heart rate due to coffee consumption so I did not take my
partners into consideration.
I based my decisions on their previous actions.
Not every participant explicitly inferred a calm mood from the normal heartrate signal, but
most did. Taken alongside our quantitative results, our qualitative results provide evidence
that subjects used the emotional attributions they made based on their partner’s normal
heartrate to guide their behavior in the trust game.
Hypotheses
Without any context for what SRI means as a signal, participants may assume that any
biological signal that is “elevated” from normal will be negatively associated with one’s
mood. If this is the case, then we should observe the same general pattern of negative
mood attributions and less cooperative behavior when the partner has an elevated SRI as we
observed with heartrate.
On the other hand, perhaps heartrate is special due to its common social associations
with mood, anxiety, and even deception. If heartrate is distinctive in this regard, then we
would not observe the same significant differences between normal and elevated SRI and
mood attributes, trust, and cooperation rates with the partner.
To test the effect of our unfamiliar biosignal on behavior in risky, uncertain interactions,
we evaluate the exact same hypotheses from study 1 again in the context of SRI:
Hypothesis 3: Participants who see a consistently elevated SRI will rate the
partner more negatively on mood attributes compared to participants who see a
consistently normal SRI in uncertain and risky social interactions.
Hypothesis 4: Participants who see an elevated SRI will have lower (4a) trust
rates and/or (4b) cooperation rates in uncertain and risky social interactions compared
to participants who see a normal SRI.
Participants
We recruited our sample for the second study from the same population and using the
same method as described in study 1. Our recruitment procedures ensured that no one who
participated in the first study could be recruited for the second study. Sixty-three participants
completed the second experiment: 40 women, 22 men, and one who self-identified as ‘other’.
The mean age of participants was 21. Importantly, the gender distribution and age of the
sample was equivalent to the first study.
4.5 Results
Quantitative results
H3 predicts that when individuals believe that their partner has a consistently elevated SRI,
compared to a normal SRI, they will rate the partner more negatively on mood attributes.
As with the first study on heartrate, we found an overall strong, statistically significant effect
and medium practical association between attributions and experimental condition, F(4, 59)
= 4, p < .01; Wilks’ lambda = .79, partial eta squared = .21. For the individual outcomes,
Figure 4.4: Means of entrustment and cooperation (left) and mood attributions (right) in
elevated and normal SRI conditions.
we find that perceptions of the partners’ anxiety is significantly higher in the elevated SRI
condition (M = 3.97, SD = 1.62) compared to the normal SRI condition (M = 2.67, SD =
1.24), F(1, 62) = 12.8, p < .001; partial eta squared = .17. Furthermore, participants rated
their partners as significantly more calm in the normal SRI condition (M = 5.5, SD = 1.3)
compared to the elevated SRI condition (M = 4.68, SD = 1.63), F(1, 62) = 4.4, p < .05;
partial eta squared =.07. Just as with the heartrate study, we found no statistically significant
differences for perception that the partner is ‘easily upset’ or that the partner is ‘emotional’
(p = n.s.). In sum, we find strong statistical and practical differences in perceptions of both
anxiety and calm, but no statistical or practical differences in how emotional or easily upset
one perceives the partner to be in SRI conditions. Given the significant omnibus test and
significant results on two of the four individual outcomes, Hypothesis 3 is partially supported.
Our final hypotheses predict that participants in the elevated SRI condition will exhibit
lower trusting (H4a) and cooperative (H4b) behavior compared to those in the normal SRI
condition. The average points entrusted by participants in the elevated SRI condition (M =
8.5, SD = 1.27) was not significantly different than the normal SRI condition (M = 8.7, SD
= 1.77), t = .39, p = n.s., one-tailed test. Thus, individuals entrusted points to their partners
at approximately the same level in both conditions (Figure 4.4). Unlike the heartrate study,
however, we found no significant difference in cooperation rate between the elevated SRI
(M = .89, SD = .21) and the normal SRI condition (M = .88, SD = .25), t = .09, p = n.s.,
one-tailed test. H4a and H4b are not supported.
As with the first study, the simulated actors in study 2 were programmed to be consistently
trusting and cooperative in the elevated and normal SRI conditions. Thus, we do not
expect participants to rate the simulated actors differently in terms of cooperativeness and
trustworthiness between experimental conditions. As expected, the omnibus test of difference
in perceptions of the trustworthiness and cooperative behavior between conditions was not
significant, F(2, 61) = 3, p = n.s.; Wilks' lambda = .91, partial eta squared = .09.
CHAPTER 4. BIOSIGNALS, MIND AND BEHAVIOR 37
Qualitative results
As in the heartrate condition, participants in the SRI condition were asked open-ended
questions at the end of the post-experiment questionnaire, before the demographic questions
and debrief. As in the heartrate condition, participants were asked how they would describe
their partner. However, unlike in the heartrate condition, participants were asked, "Recall
what we were measuring with the sensor. Please describe it below." After completing this
question, participants were given two more open-ended items: "What, if anything,
did SRI (skin reflectivity) tell you about your partner during this experiment?" and, "As a
signal, what do you believe that SRI says about another person?"
If the SRI reads high, it may indicate that the person expects to be betrayed in
some way or is hopeful of a positive result. I forgot what SRI stands for again.
Since his/her SRI is always elevated, I would assume he/she is nervous/excited or
just it’s hot in here.
SRI may give insight as to how nervous or excited someone’s response is to
something that happens. Maybe someone with a larger range in SRI is more
emotional.
These assessments of SRI are quite similar to interpretations from the elevated heartrate,
and corroborate our quantitative findings that those who saw elevated SRI rate their partners
as more nervous. However, the fact that these emotional assessments were similar in both
elevated heartrate and elevated SRI conditions, but behavioral outcomes were different,
challenges our notion that negative emotional cues caused these behavioral outcomes—a
point we address in more detail in the discussion below. As in the heartrate conditions, some
participants responded that SRI told them little or nothing of interest about their partner:
Nothing at all about the person other than an arbitrary value of a sensor.
Since the SRI seemed to be bouncing around in the blue range but never got into
the red range (which I assume would be “abnormal” since the blue range was
normal) I don’t think SRI is an accurate measurement of much.
Elevated SRI
To help explain why elevated heartrate had a chilling effect on cooperative behavior, where
elevated SRI did not, we delve into the responses of participants in the elevated SRI condition.
When asked what SRI told them about their partner, participants often reported nervousness
or anxiety, just as we noted in the quantitative results:
Elevated means they feel safe and trustful. Lower than average means they are
defensive and scared.
This interpretation stands in stark contrast to elevated heartrate, which also signaled
anxiety, but had a negative association with behavior. In explaining why participants found
elevated SRI to signal cooperativeness and trust, we look toward the responses of participants
who seemed to learn a meaning for this signal:
Well, since their SRI was always high and they always gave the money back to me,
(based on these only two bits of info I know) I assume the two are correlated and
an elevated SRI means that they’re going to give the money back. [. . . ] I guess it
means that they’re trustworthy and will do the right thing by their partner.
I cannot tell [what SRI means], but my partner’s was extremely elevated for
the whole experiment and s/he was good at conducting mutually beneficial
transactions.
These quotes strongly suggest that, unlike for heartrate, SRI participants picked up on a
pattern between their partner’s always-cooperative behavior and the elevated biosignal that we
displayed to them, thus filling in the gaps about what SRI meant in this context. In contrast,
we found no evidence that elevated heartrate participants learned such an association in the
first study, despite the fact that every participant interacted with a perfectly cooperative
partner in all conditions and studies.
Normal SRI
As with those in the elevated SRI condition, many participants in the normal SRI condition
identified some relationship between SRI and the other person's mood:

I think this helps identify how people are feeling internally when making decisions.
In some cases, participants in the normal SRI condition inferred that elevated SRI might
have a negative meaning:
not to sure, high sri may indicate panic/fear or anger low sri may indicate calmness
and contentness.
A person is less likely to trust other people if he or she has a high SRI.
Overall, the responses for both SRI conditions support the interpretation that participants
learned an association between SRI and the partner's cooperative, trustworthy behavior.
As we argue in the following discussion, such associations are more likely in the SRI conditions
because, unlike for heartrate, participants should have no preexisting beliefs or associations
with SRI.
Limitations
Controlled laboratory studies always come with clear advantages (such as high internal
validity) and disadvantages (such as reduced external and ecological validity). Our study
did not attempt to emulate a real-world interaction context with a biometric sharing device,
though this is a clear next step, now that we know there are important differences in how
biosignals are interpreted. Furthermore, our use of highly cooperative, computer-controlled
interaction partners with stable biosignals (always high or always normal) prevents us from
being able to speak to the effects of more dynamic behaviors and/or changes in biosignals
over longer periods of time. From these experiments, we also do not know how these results
will transfer to other contexts, and other types of social interactions. Also, our study by
nature focused on first-time, iterated interactions, both with an interface and with another
unknown person. We do not know how these results might apply over the course of more
personal relationships, or after repeated experiences with a specific interface in a biosignal
sharing device. In addition, this research was conducted on young adults at a large public
university, which is an important limitation when considering whether these results would
hold across age groups and other key sources of sociodemographic variation in the larger
population.
4.6 Discussion
We found that both heartrate and SRI signaled negative mood to participants, including
anxiety and lack of calmness. It is possible that almost any “elevated” biosignal could be
associated with negative mood attributions such as anxiety and lack of calmness: many
elevated signals (pulse, temperature, blood pressure) carry associations with being angry,
sick, hot-headed, and a host of other negative attributions. People may default to such
attributions when seeing an unknown signal that comes from the body.
Elevated heartrate had a chilling effect on cooperation, whereas an unfamiliar biosignal,
SRI, did not. So why did the negative mood attributions in the elevated SRI condition not
translate into reduced cooperation, as they did for elevated heartrate?
Our results shed light on two relevant phenomena that may address this question. First,
pre-existing beliefs about heartrate are powerful: even when playing with a very cooperative,
trusting game partner, negative connotations surrounding elevated heartrate appear to lead
individuals to cooperate less. Our results suggest that participants bring to uncertain social
interactions their own expectations about what elevated heartrate means, and that these
biases cannot be quickly overridden, even when behavioral evidence sends a positive message
(e.g., high cooperation and trust from the partner).
Second, we find evidence that participants can “learn” a social meaning for a previously
unknown signal. Our qualitative data suggest that participants in the SRI condition associated
whichever signal they saw (elevated or normal) with cooperativeness and trustworthiness.
Unlike with heartrate, people did not have preconceived notions of how SRI should affect
the social behavior of the partner, since SRI does not exist. Instead, we observe participants
discovering "what SRI means" by watching their partner’s behavior in relation to the biosignal.
In the absence of guidelines for interpreting what SRI is or what it measures, individuals
appear to fill in the gaps with available behavioral information.
Our observation that people can learn social meanings for previously unknown signals
raises a related question: Can pre-existing connotations for familiar biosignals change over
time? The meanings of a signal like heartrate are the product of associations that have
been shared and developed over centuries. However, technology allows for new expressions
of these ancient signals [93]. If social heartrate information became an easily accessible
biosignal in trust-based interactions like negotiations, we might find its social meaning could
evolve further. Unfortunately, short-term laboratory studies such as this one are unlikely to
trigger or detect enduring shifts in the social meanings of familiar biosignals. We need both
longer-term experiments, and mixed-methods research that can draw from rich qualitative
data as well as statistically and practically significant changes in interpretations over time.
Broadly, our results raise questions about how and why unfamiliar signals take on social
meanings in different contexts of interaction. Researchers in CSCW and HCI have long
noted our tendency to read into cues and signals in computer-mediated communications.
From impact factors and citation counts in scholarly work [36] to societal indices [102], to
health metrics such as the body-mass index (BMI) [16], humans have a tendency to impart
“real” meanings onto metrics, scales and signals – meanings that may not align with the
concepts their designers aimed to measure. It is critical that we continue to question how
biosignal data could shape our interpersonal interactions, and whether the outcomes will
always translate into meaningful social information.
4.7 Conclusion
We find that sharing heartrate can negatively influence trusting attitudes and behaviors.
However, heartrate alone does not communicate trust. Instead, individuals' social expectations
interact with the heartrate data to produce context-specific meanings. Complicating matters
further, our qualitative data reveal a diversity of interpretations regarding the relevance and
meaning of a heartrate in context, and the privacy implications of biosensing technologies.
Our findings advance and complicate our understanding of the role that biosignal sharing
can play in social, computer-mediated contexts, and motivate more detailed study into the
mechanisms by which social interpretations arise from basic physiological signals.
Further, our experimental results imply that interfaces can “teach” the meaning of
some biosignals, whereas others carry strong, pre-existing connotations that even repeated
interactions cannot easily alter. In general, prior beliefs about the body (drawn from
culture, lived experience) seem to shape what a biosignal can mean in a given context.
However, in the absence of prior beliefs, there exists an opportunity—and a potential
danger—that designers of biosignal-sharing systems can condition participants to learn
(potentially arbitrary) associations between biosignals and social behaviors.
Aside from heartrate, we do not know which of many other biosignals might be associated
with moods and behaviors. Other biosignals (e.g., galvanic skin response, electroencephalography
or EEG) could offer different affordances for sense-making. It is unclear from our work
how the social interpretation of the signals from these devices could affect social behaviors
such as dyadic and group trust. Similar studies with signals from, e.g., the brain [4] are
a clear direction for future work. Especially interesting cases are signals for which precise
or empirical meanings are still being hotly debated, such as EEG (brainwaves), a sensing
modality we begin to discuss in the next chapter.
Chapter 5
While the prior chapter establishes that people build mind-related meanings around biosensory
data, this chapter locates brainscanning as a fruitful case for understanding how particular
sensing technologies construct notions of mind. I report on the qualitative and quantitative
results of a survey among participants in a large (n>10,000), longitudinal health study, and an
Amazon Mechanical Turk population.
What can different biosensors reveal about what you are thinking and feeling? In this
study, we posed this question to 200 people, half of whom came from Mechanical Turk,
and half from a longitudinal study in which subjects contribute sensor data to track health
outcomes. We were interested in how people perceived risks around the disclosure of sensor
data, and how their expectations related to both the type of device in question, and the
participants’ prior experience with disclosing data from wearable devices.
Through a quantitative and qualitative analysis of survey data, we find some differences in
perceptions of risk between populations. Across both populations, however, certain devices
evoke a greater perceived risk of mind-reading than others. In particular, electroencephalography (EEG)
appears to carry an unusually high perceived risk, beyond even fMRI, which has proven
more revealing in past studies [58]. We discuss implications for the design of EEG-based
brain-computer interfaces, a modality rapidly gaining popularity in the technology industry
[64, 75, 72], and for wearable technologies generally.
5.1 Background
In their qualitative study of activity trackers, Rader and Slaker (2017) found that the
“visibility” of tracking devices (how data are measured, and what data are calculated as
a result) has a large impact on the way people understand these devices as working, and
may impact the privacy decisions users make as a result [85]. While this study looked at
a broad array of sensors, it did not study particular threats to privacy. Meanwhile, past
work in CSCW and beyond has demonstrated that people build meanings around shared
data from wearable sensors pertaining to mood, emotions, and other aspects of mind [69].
CHAPTER 5. SHIFTING TO THE BRAIN 44
These studies raise the notion of the mind as a site for exploring perceptions of sensor data,
and what these data might mean. However, the interpretations surfaced by previous studies
are typically contextual, specific to particular social contexts [98], and to particular types
of sensors. However, it is not clear from these studies how different sensors compare to one
another in the way users assess the risks of data disclosure.
In this work, we aim to study a specific privacy threat (knowing what a person is thinking
and feeling) across a variety of sensors. Through quantitative and qualitative data, we aim to
perform inductive work around two preliminary questions: (1) Which sensing devices seem
the most (and least) likely to reveal what a person is thinking and feeling? (2) How do these
perceptions change according to this person’s observed willingness to share sensor data with
others? In the following section, we outline how we examined these questions using a survey,
deployed across two distinct populations.
5.2 Methods
Our survey consisted of a question in which subjects ranked various sensors: “Please rank the
following sensors in how likely you believe they are to reveal what a person is thinking and
feeling.” Our selection of sensors (Table 5.1) aimed to include both sensors commonly found
in wearable and mobile devices, and sensors more commonly associated with the medical
industry. We sought to achieve a mix of modalities found only in medical devices, found only
in commercial devices, and found in both commercial and medical devices.
To capture a population willing to share sensor data, we submitted our survey to partici-
pants in Health-e-Heart, a large (n > 40,000) longitudinal study in which subjects volunteer
to share data from wearable sensors longitudinally so that researchers may monitor health
outcomes [37]. To compare this population to a more general population, we also submitted
Figure 5.1: “Please rank the following sensors in how likely you believe they are to reveal
what a person is thinking and feeling.” Higher bars indicate higher rank, or higher likelihood
of being revealing.
our survey to Mechanical Turk workers in the United States. Our survey included 100
Health-e-Heart participants and 100 participants from Mechanical Turk.
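The ranking question yields, for each respondent, an ordering of the sensors; the per-sensor summaries shown in Figure 5.1 can be derived by averaging each sensor's rank across respondents. A minimal sketch of such rank aggregation (the respondent data and sensor names here are illustrative, not our actual instrument):

```python
from collections import defaultdict

def mean_ranks(responses):
    """Average each sensor's rank across respondents.
    `responses` maps respondent ID -> ordered list, most revealing first."""
    totals, counts = defaultdict(int), defaultdict(int)
    for ordering in responses.values():
        for rank, sensor in enumerate(ordering, start=1):
            totals[sensor] += rank
            counts[sensor] += 1
    return {sensor: totals[sensor] / counts[sensor] for sensor in totals}

# Two hypothetical respondents ranking three sensors.
resp = {
    "r1": ["EEG", "heartrate", "step count"],
    "r2": ["EEG", "step count", "heartrate"],
}
print(mean_ranks(resp))  # {'EEG': 1.0, 'heartrate': 2.5, 'step count': 2.5}
```

Lower mean rank indicates a sensor more consistently judged revealing; Figure 5.1 plots the inverse of this quantity as bar height.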
5.3 Results
Quantitative results
In our rankings, brainwaves (EEG) are seen as among the most revealing biosignals, just below
body language and facial expression, in their capacity to reveal the inner workings of a person’s
mind. More common sensors such as GPS and step count are seen as less revealing (despite
empirical evidence suggesting such data can be quite revealing indeed [17]). Mechanical Turk
participants thought virtual reality headsets and step counters were significantly more likely
to reveal what a person is thinking and feeling than did Health-e-Heart subjects. On the
other side, Health-e-Heart subjects believed fMRI, blood pressure, blood oxygenation, and
GPS/accelerometer were significantly more revealing than did Mechanical Turk participants.
Qualitative results
When we asked subjects to reflect on why they answered the way they did during the ranking
task (Figure 5.1), EEG elicited the strongest and most diverse reactions. Since this sensing
modality is still relatively obscure in consumer devices, we delved more deeply into qualitative
data in hopes of explaining these concerns. Subjects in both groups generally believed EEG
to reveal various details about the mind, mood, emotions, and identity. In the Health-e-Heart
group, several subjects gave relatively specific explanations as to why they ranked this sensing
modality highly.
(S24) I assume some information can be gleaned from brain wave activity in
various parts of the brain related to rewards or executive control, but without
accompanying information, it may be difficult to discover my thoughts.
(S23) EEGs note parts of the brain that are active. Again, in conjunction with
other measurements, I suspect that some sense of what one is thinking and feeling
could be learned.
(S91) I would rate this relatively high on the list because science has shown that
we can detect a lot about which areas of the brain are accessed and at which times.
This can tell a person a lot about what they might be thinking and especially how
they are feeling.
While these explanations range somewhat in their specificity and confidence, they share
the general sentiment that EEGs can be revealing. Subjects in the Mechanical Turk condition
broadly shared this belief, though tended to use less physiological detail in their explanations.
(S157) Brain activity can pinpoint exact emotions by monitoring certain areas on
the brain.
(S130) Brainwaves could tell you a lot more about what someone is thinking and
feeling. You could measure the patterns of brainwaves in an experiment.
Meanwhile, some subjects from both groups did not fit this trend. Ten subjects ranked
EEG low in its ability to measure what a person is thinking or feeling. Their qualitative
answers revealed a diverse set of reasons for this ranking. Three subjects indicated a general
lack of faith in brainwaves’ reliability.
(S20) I don’t think we have the ability to translate brainwaves into thoughts or
emotions.
(S101) EEG is very nonspecific and rarely can tell details reliably.
(S138) Possible but not accurate.
These explanations broadly centered around EEG as a signal. They range somewhat in
their confidence, from a fundamental skepticism (S20) to caveats about possible accuracy or
specificity (S101, S138). In contrast to these three subjects, S10 ranked EEG low because
s/he felt the premise of a consumer grade EEG was implausible.
(S10) I assume that scientists can identify by brain patterns what others are feeling
and thinking based off of years of research. I’ve never heard of a consumer grade
eeg - and doubt it could be as powerful as a laboratory eeg. If it is then I would be
interested in this product.
This subject’s explanation surfaces the practical differences in attitudes that people
might hold toward a technology’s theoretical existence versus its realized existence as a consumer
device. Future work could look more closely at how the presumed scientific authority of
a brainscanning apparatus affects people’s willingness to accept specific BCI applications.
Finally, one subject’s skepticism about what brainwaves can reveal stemmed from his/her
personal medical experiences.
This particular quote highlights how individuals’ life experiences might shape the way
they engage (or refuse to engage) with brain-sensing devices. In general, this quote and others
motivate the need for a rich, qualitative understanding of people’s first-hand experiences
with brainscanning devices, as well as data collection, in order to understand what role BCI
applications such as passthoughts could play in day-to-day life.
5.4 Discussion
Our results find some differences between the Health-e-Heart and Mechanical Turk groups,
particularly around devices with medical associations. However, device rankings were mostly
the same between conditions. Our findings indicate that sensing modalities play a large role
in building understandings of what sensors might reveal, along with prior experiences sharing
sensor data. We discuss implications for design in sensor-based interactions: different sensors
may trigger different concerns about privacy, which could in turn trigger debates about what
counts as a valid privacy concern, and what does not.
Health-e-Heart participants believed fMRI, blood pressure, and blood oxygenation to be
more revealing than participants in the Mechanical Turk condition. Since these subjects are
participating in a medical study, it is possible that they are more attuned to what medical
devices can reveal, or simply that they are primed to think about them. Health-e-Heart
subjects also thought that GPS and accelerometer were more revealing than their Mechanical
Turk counterparts. This difference indicates that the HeH subjects’ constant participation in
monitoring does not make them less sensitive to privacy concerns (i.e., they do not “acquiesce”
to such monitoring). It does perhaps suggest that their knowledge of tracking modalities
differs, a suggestion supported by our qualitative analysis.
Conversely, Mechanical Turk participants believed the VR headset and step count were
more revealing than did the Health-e-Heart subjects. We found no significant difference in
experience with virtual reality between the two groups. Future work should examine possible
causes for this difference. As virtual reality grows in popularity, and as the producers of these
devices increasingly attempt to outfit VR headsets with sensors [65], it will be important to
understand what about VR causes people concern.
It is worth noting that Mechanical Turk participants may be subject to monitoring as well,
as the human-intelligence tasks they perform on the platform may subject them to various
types of surveillance (e.g., clicks, timing activity, question checks, browser fingerprinting, etc).
Future work should examine more deeply Turkers’ knowledge of, and response to, this sort of
tracking, issues which connect to broader questions of digital surveillance in the workplace.
Our most surprising finding, consistent across both groups, was the overall high ranking
of EEG. EEG was perceived as more likely to reveal what a person is thinking or feeling than
fMRI, which prior work indicates to be a more detailed brainscanning apparatus [58]; EEG is
coarse-grained in comparison. Future work should examine more closely why EEG was so
highly ranked (e.g., perhaps participants did not know what fMRI is). Reasons aside, EEG’s
high rank in our finding offers both opportunities and challenges for designers. People’s belief
in EEG’s ability to sense intimate details may allow designers to build creative, helpful or
therapeutic applications [54]. On the other hand, these same beliefs could allow designers to
trick users [4], or might dissuade prospective users from wearing EEG at all. These questions
are increasingly important as EEG-based BCI is gaining interest in industry [72, 75] and in
the public imagination [64, 99]. How will people encounter these devices, and find their data
meaningful (or not) in the course of life? The answer to these questions depends heavily on
what users think their data can reveal. Thus, future work should look longitudinally at EEG
and BCIs as these devices ebb and flow in the public (and corporate) imaginary, and as these
technologies develop. Sensors such as GPS and accelerometer are now ubiquitous, but
attitudes around them have likely changed since their introduction [27]. Through longitudinal
studies, we stand a chance at observing changes in attitudes, thus putting us in a position to
anticipate changes in privacy attitudes and privacy-preserving behaviors.
5.5 Conclusion
Our findings complicate recent work around the folk interpretations of sensor data, indicating
that prior experience with sensors is only one way to understand where interpretations of
sensor data come from. Beliefs about the body play an important role in shaping beliefs about
what sensors can know. As industry pushes toward new sensing modalities such as EEG,
future work should remain critical in probing the beliefs of end-users, as their apprehensions
will shape the sorts of applications that users are willing to accept.
Chapter 6
As we saw in the previous chapter, EEG triggers intriguing beliefs about the knowability
of the mind. In this chapter, we use EEG to shift from users of sensing devices to their
engineers. Having motivated EEG as a case study for further exploration, this chapter
examines the beliefs of software engineers through their interactions with a working brain-
based authentication system. This population’s beliefs are particularly critical as consumer
brainscanning devices have become open to tinkering through software. Although we find
a diverse set of beliefs among our participants, we discover a shared understanding of the
mind as a physical entity that can and will be “read” by machines. These findings shed light
on what sorts of applications engineers may accept as buildable, and prime our concluding
chapter on how built artifacts may come to structure our notions of what minds are.
6.1 Background
In 2017, both Mark Zuckerberg and Elon Musk announced efforts to build a brain-computer
interface (BCI) [64]. One blog post enthusiastically describes Musk’s planned BCI as a
“wizard hat,” which will transform human society by creating a “worldwide supercortex,”
enabling direct, brain-to-brain communication [99].
A slew of inexpensive brainscanning devices underwrite such utopian visions. 2017 saw a
BCI for virtual reality gaming [75] and brainwave-sensing sunglasses [94] join the already
large list of inexpensive, consumer BCIs on the market [64, 54, 44]. These devices, which are
typically bundled with software development kits (SDKs), shift the task of building BCIs
from the realm of research into the realm of software development. But what will software
developers do with these devices?
This study employs a technology probe to surface narratives, and anxieties, around
consumer BCIs among professional software engineers. We provided a working brain-computer
interface to eight software engineers from the San Francisco Bay Area. As brainscanning
CHAPTER 6. TALKING TO ENGINEERS ABOUT BCI 51
Figure 6.1: A participant uses our brainwave authenticator in his startup’s office.
devices become more accessible to software developers, we look to these BCI “outsiders” as a
group likely to participate in the future of brain-computer interface. Specifically, we provided
participants with a brain-based authenticator, an application predicated on the notion that a
BCI can detect individual aspects of a person, making it a potentially fruitful window into
broader beliefs about what BCIs can reveal [87, 35].
Despite heterogeneous beliefs about the exact nature of the mind, the engineers in our
study shared a belief that the mind is physical, and therefore amenable to sensing. In fact, our
participants all believed that the mind could and would be “read” or “decoded” by computers.
We contribute to an understanding of how engineers’ beliefs might foretell the future of
brain-controlled interfaces. If systems are to be built that read the mind in any sense, we
discuss how such systems may bear on the long-term future of privacy and cybersecurity.
equipment). Non-invasive consumer BCIs are lightweight, require minimal setup, and do
not require special gels. EEG (electroencephalography) is currently the most viable choice of
sensing modality for consumer BCIs [19].
Historically, researchers have conceived of BCIs as accessibility devices, particularly for
individuals with severe muscular disabilities. However, accessibility devices can sometimes
provide routes for early adoption, and thus broader use. Speech recognition, for example, was
once a tool for individuals who could not type; eventually, it was adopted as a general tool for
computer input, now commonplace in voice assistants such as Alexa and Siri. Since accessibility
devices can give rise to broader consumer adoption, we ask what such a pathway might look
like for brain-computer interfaces. With an expanding array of inexpensive brainscanning
hardware, many of which come bundled with engineer-friendly SDKs, the pathway to a future
of consumer BCI increasingly becomes a matter of software engineering.
Thus, we look to software engineers in the San Francisco Bay Area. We use these engineers
as a window into broader beliefs about “Silicon Valley,” a term we use here to stand in for
the technical, economic and political climate that surrounds the contemporary technology
industry in the area [89]. While we do not believe only Silicon Valley engineers will influence
the future of BCIs, these engineers have historically had an outsized impact on the types of
technologies developed for mass consumption, especially with respect to software. As BCI
hardware becomes more accessible, and therefore more amenable to experimentation as
software, this group once again holds a unique role in devising a consumer future for this
biosensor. Indeed, the Muse, and similar devices, have robust SDKs and active developer
communities that are building and showcasing BCI applications [76].
However, we did not want our subjects to have first-hand experience in developing BCIs,
as we did not want them to be primed by existing devices’ limitations. Instead, we selected
individuals who indicated they would be interested in experimenting with consumer BCI
devices in their free time. This screening was meant to draw subjects likely to buy consumer
devices and develop software for them. We believed that these engineers’ professional expertise
in software development would afford a desirable criticality around our technical artifact.
Prior critical work draws on neuroscience and cognitive science to argue that specific technical implementations from these
fields (along with their rhetoric around, and beliefs about the brain) allow the mind to be
“read” or “decoded.”
However, there exists an opportunity to investigate how pervasive such beliefs are among
those who are not neuroscience experts, yet nonetheless technical practitioners. Given the
recent shift of brain scanning equipment from research tool to consumer electronic device,
we ask what software engineers, newly able to develop applications around brain scanning,
might build. Answers to this question could have far-reaching consequences, from marketing,
to entertainment, to surveillance. In particular, we aim to center how engineers’ ideas about
the mind, especially its relationship to the brain and body, inform and constrain their beliefs
about what BCIs can (and should) do.
Brain-based authentication
Our study employs a brain-based authenticator as a research probe to elicit engineers’ beliefs
about BCIs (and the mind and/or brain they purport to sense). This section explains how
brain-based authentication works, and why we chose this application for our study.
Authentication (i.e., logging into devices and services) entails a binary classification
problem: given some token, the authenticator must decide whether or not the person is
who they claim to be. These tokens typically relate to one or more “factors”: knowledge
(something one knows, e.g. a password), inherence (something one is, such as a fingerprint),
or possession (something one has, such as a device) [24]. Brain-based authentication relies
on signals generated from individuals’ brains to uniquely authenticate them, which has a
CHAPTER 6. TALKING TO ENGINEERS ABOUT BCI 54
number of potential advantages over other authentication strategies (see [71] for a review).
First, brainwaves are more difficult to steal than biometrics such as fingerprints, which are
externally visible, and left in public as one’s hands touch objects in the environment. Brainwaves also
change over time, making theft even less likely. Second, brain-based authentication requires
no external performance, making it impervious to “shoulder-surfing attacks” (e.g., watching
someone enter their PIN).
We chose to build a brain-based authenticator for our study for a few reasons. First,
having participants use a functioning system helped them imagine how they might use BCIs
themselves. Second, the system is a plausible one, backed by peer reviewed research, thus we
expected our participants to judge its claims credible. Third, the system embeds particular
assumptions about what brain scanners are able to capture: specifically, that our Muse headset
can capture aspects of individual brains that are unique. As such, we expected that a working
brain-based authenticator would encourage participants to reflect not only on
how BCI applications might be adopted by the broader public, but also on what BCIs may
be able to reveal about the mind and brain, and to critically examine the limits of what BCIs
in general are able to do.
Figure 6.2: Our probe’s visualization of 1s and 0s gave our engineers a “raw” view of the
authenticator’s behavior. Pictured, the UI (a) accepting someone, (b) rejecting someone, or
(c) presenting mixed, ambiguous feedback.
Using XGBoost [21], we trained a binary classifier on seven different splits of the train group.
After the classifier was produced, we validated its performance on the withheld validation set.
Given a target participant to classify, our classifier used any reading from this participant
as a positive example, and any reading not from this participant as a negative example.
Negative examples also included signals with poor quality, and signals recorded while the device
was off-head or disconnected. Ideally, the resulting classifier should produce “authenticate”
labels when the device is on the correct person’s head, and “do not authenticate” labels
at any other time. This classifier could output its labels to a simple user interface (UI),
described in the next section.
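The training setup described above can be sketched roughly as follows. This is a minimal, hypothetical illustration rather than the study’s actual pipeline: the EEG features are random stand-ins, the seven train-group splits are collapsed into a single hold-out, and scikit-learn’s GradientBoostingClassifier stands in for XGBoost so the sketch stays self-contained.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical data: one row of features per EEG reading window,
# tagged with the participant it came from. Poor-quality or off-head
# readings would simply contribute additional non-target (negative) rows.
rng = np.random.default_rng(42)
X = rng.normal(size=(800, 16))              # 16 illustrative features per window
participant = rng.integers(0, 8, size=800)  # which of 8 people produced each window

target = 3                                  # the person we want to authenticate
y = (participant == target).astype(int)     # 1 = target's reading, 0 = anyone else

# Hold out a validation set, echoing the study's train/validate scheme.
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = GradientBoostingClassifier(n_estimators=50, max_depth=3)
clf.fit(X_tr, y_tr)

# Per-window labels: 1 ~ "authenticate", 0 ~ "do not authenticate".
labels = clf.predict(X_val)
```

In the live probe, each incoming reading would be featurized and classified in real time, with the resulting stream of per-window labels driving the user interface.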
Interface
As the device produces data, the classifier outputs labels of “accept” or “reject.” Our interface
displays these labels as a square of 0s and 1s, which fills up as data from the device rolls
in (Figure 6.2).
Several considerations motivated this design. First, the UI represents the probabilistic
nature of the classification process. Individual signals may be misclassified, but over blocks
of time, the classifier should be mostly correct (represented as blocks of mostly 0s by our
interface). Thus our simple UI makes visible both the underlying mechanism of binary
classification, and its probabilistic nature. Second, because our UI provides potentially
ambiguous feedback (as opposed to unambiguous signals of “accept” or “reject”), it allows
for potentially richer meaning-making and explanatory work [91]. Toward this end, the UI’s
real-time reactivity (“blocks” of 1s and 0s filled in over time) allows participants to experiment
actively with the device, forming and testing hypotheses as to what makes classification
succeed or fail.
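A toy rendering of this bit-grid feedback might look like the following sketch (hypothetical; the actual probe filled its grid in live as data arrived from the device):

```python
def render_block(labels, width=8):
    """Render a stream of accept(1)/reject(0) labels as rows of bits,
    filling left to right, top to bottom, as readings arrive."""
    rows = []
    for i in range(0, len(labels), width):
        rows.append(" ".join(str(int(b)) for b in labels[i:i + width]))
    return "\n".join(rows)

# A block of mostly 1s reads as acceptance; a mixed block reads as
# ambiguous feedback, inviting the kind of hypothesis-testing the
# design aims to encourage.
print(render_block([1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1], width=6))
```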
Finally, our UI gives the probe an “unfinished” appearance. We believed this interface
would cause our participants to activate their “professional vision” as tech-workers [43], and
critique or test the device as if it were a design of their own. Ideally, we hoped participants
would intentionally stress-test the device, or find playful ways of misusing it. These misuses
could allow participants to form hypotheses about why and how the device succeeds and fails.
6.3 Methods
We recruited participants by word of mouth. A recruitment email explained that subjects
would interact with a working BCI, and be asked their opinions about the device, and about
BCI broadly. We screened respondents by their current occupation and stated interest in
experimenting with BCIs in their free time. All participants were employed full-time as
software engineers at technology companies in the area.
A total of eight people participated, three of whom were women. Participants’ ages ranged
from 23 to 36. We met with subjects for a single, one-hour session in which we trained and
tested a brain-based authenticator, allowing them to interact with it in an open-ended way.
We asked participants to freely explore the authenticator, and share any impressions, reactions or ideas.
was meant to encourage participants to explore the device’s capabilities and limitations, free
of particular tasks to accomplish. However, we suspected that our participant population
would be particularly prone to “hypothesis-testing,” exploring the device’s limitations by
building theories about how it might work. We structured the session around this assumption,
preparing to ask participants to think aloud as they explored the device’s capabilities.
After some free-form exploration (usually involving some back-and-forth with the par-
ticipant), the interviewer would transition into a semi-structured interview, which would
occur with the device still active. The interviewer would ask participants to unpack their
experience, and lead them to explore what they felt the device could reveal about them. After
some discussion, the formal interview would conclude, and the participants would remove the
Muse device from their head.
false acceptances). These subjects often tried to remedy the situation by attempting tasks
they had rehearsed, typically with mixed success. Most of these subjects concluded that there
was not enough training data to produce reliable classification, but that such a system would
work with a larger corpus. In contrast, Alex, a 30-year-old founder of an indoor agriculture
startup, blamed himself, saying “I must not produce very distinguishable thoughts.”
Those participants who felt the probe’s authentication was reliable tended to center their
explanations on why it worked. Participants who experienced less consistent accuracy with
the authenticator tended to center their explanations on how the device might be improved,
e.g. with better or more comprehensive sources of data. This impulse to “fix” likely speaks to
our participants’ general tendency to engineer working systems.
As we hoped, the engineers engaged critically with the technical implementation of the
probe. In general, engineers asked about the machine learning infrastructure underlying the
authenticator, and several participants (particularly John, Mary and Alex) asked specific
questions, and made specific recommendations, diagnosing issues with the authenticator by
thinking about the diversity and size of the training set. Almost all participants noted the
authenticator worked better when they were not looking at the visual feedback from the
user interface. Participants generally theorized that this might occur because they were not
viewing feedback when training the classifier. In these cases, the engineers appeared to apply
their domain knowledge to their observations in using our technology probe.
Things just get progressively smaller until they disappear. And one day this’ll
just be an implant in my brain, doing crazy things. It’ll be interesting socially,
how people come to terms with it, when it’s just an implant, or at least very
pervasive . . . I could send you a message, and it could be like you’re thinking it
yourself, even if you’re on the other side of the Bay. (Terrance)
Terrance believed that BCI will become more prevalent: not just that smaller sensors
will lead to more effective or usable BCIs, but that they will also result in greater uptake
of the technology. While he references the social dimension of their adoption, he indicates
that people will need to “come to terms with” the developments, rather than granting direct
agency to users who may choose whether to adopt the technology.
Two participants felt less sure that such a future of pervasive BCI would ever come to
pass. Elizabeth, a 30-year-old front-end engineer, expressed skepticism about the devices’ signal quality, and about their usefulness beyond persons with disabilities. Mary, a 27-year-old software engineer at a large
company, pointed to social reasons for her skepticism. In reflecting on the relative accuracy
of the probe’s authentication performance during her session, she commented that “90 plus
percent” of people would be “totally freaked out” by brain-computer interfaces generally. She
continued to say that companies may themselves stop BCIs from becoming too pervasive or
advanced.
I feel like those companies, even if this were feasible, there’s a moral quandary
they philosophically have not figured out. They will not let the research get that
advanced . . . I just don’t imagine them being like, "okay computer, now read our
brains." (Mary)
While the probe was effective in spurring subjects to talk about issues around BCIs,
its accuracy as an authentication device did not seem to alter participants’ belief in BCI’s
future as a widespread technology. Unsurprisingly, the four subjects who experienced
reliable authenticator accuracy all expressed that BCIs would become commonplace in the
future. However, only Joanna connected the device’s poor performance in her session with the
likelihood of ongoing accuracy issues for BCIs in the future. The other three subjects who
felt the device did not perform accurately all offered explanations as to why, and explained
that future devices would fix these issues.
When pressed on how strictly he meant his metaphor of programming, John confirmed
that he meant it quite literally, saying, “I think we are just computers that are way more so-
phisticated than anything we understand right now.” We return to this strictly computational
account of the mind as “just” a computer in the discussion.
Mary gave a computational account of mind that was more metaphorical than John’s,
drawing on comparisons between machine learning and the mind. She cited the many “hidden
layers” in deep neural networks, and that, like in the brain, “information is largely distributed.”
While she believed deep learning models and the brain were “different systems foundationally,”
she said “there are patterns” that relate the two to one another, and indicated that advances
in deep learning would spur a greater understanding of the brain.
Although six of our participants provided a largely computational account of mind-as-brain,
not all did. Joanna, a 31-year-old engineer who previously completed a PhD in neuroscience,
felt that the mind was “the part of the brain I am aware of, the part that is conscious.” She
believed that neurotransmitters throughout the body have a causal relationship to what
happens in the mind, but do not constitute the mind themselves; the contents of mind
occur physically in the brain, and the brain alone. In other words, her account is one of
“mind as conscious awareness,” and while unconscious phenomena affect mind (e.g. the body,
environment), they are not part of the mind per se. Interestingly, the probe did not work
well for Joanna, and she felt confident that its poor performance was due to contaminating
signal from her body (a theory she tested, and validated, by moving around and observing
the probe’s feedback).
Meanwhile, in one subject’s account, the mind extended beyond the confines of the body.
Terrance felt that there was “no meaningful difference” between the body and brain, nor
between the body and the physical environment at large, saying that “you can’t have one
without the other.” He believed that all three of these entities constitute the mind in a
mutually-dependent way. However, Terrance indicated that the mind is still strictly physical,
as are these three entities. Although Terrance did not provide details on how exactly the
mind extended beyond the body, it is interesting to note this position’s similarities to Clark’s
(2013) account of the extended mind [25], or Hutchins’s (2005) work on distributed cognition
[52], though Terrance was familiar with neither.
Participants also offered differing levels of confidence in their beliefs about the nature of
the mind. Joanna (who has a background in neuroscience) reported that “we do not know
everything we need to know” about how the mind works. Three other subjects reported
similar beliefs. However, those subjects with a computational account of mind tended to feel
more confident that their account was substantially accurate.
I think the consensus is that the body is mostly like the I/O of the brain. (John)
John’s account here implies that a sufficiently high-resolution brain sensor would accurately
capture all of a person’s experiences. John confirmed this explicitly, saying, “if you could 3D
print a brain, and apply the correct electrical impulses, you could create a person in a jar.”
In this computational metaphor of I/O (input/output), the body itself does not have agency;
instead, the body actuates the brain’s commands (output), and senses the environment,
sending data to the brain for processing (input).
body to the physical world. With this physical understanding of the mind, it is not overly
surprising that all participants believed it would someday be possible for a computer to read
or decode the contents of the human mind. No participants expressed hesitation when asked
about such a proposition.
For example, Alex did not feel comfortable providing a specific physical locus for the mind.
Although he did not feel the probe was accurate for him, he took great pains to express his
belief that such a device could work, though not necessarily by sensing the brain.
Though it leaves open room for a variety of interpretations about the exact nature of
mind, Alex’s view is explicit that thoughts are physical, therefore can be read, and will be
read with some future technology.
There was a great deal of heterogeneity in the way this belief was bracketed or qualified.
Joanna felt that there would “always be parts of the mind that can’t be seen.” She likened
the question to the way that other people can know some parts of another person’s mind, e.g.
through empathy; their perspective, however, would always be partial, and she felt the same
would be true for machines.
However, some participants did not bracket their belief that machines would someday
read the mind. Participants for whom the authenticator worked reliably typically said that
a mind-reading machine was “absolutely possible” (Mary) or “just a matter of the right
data” (Alex). Participants who did not feel the authenticator was accurate described current
state-of-the-art as “crude” (John) or “low-granularity” (Elizabeth).
Even Terrance, who believed the mind extended beyond the confines of the body, felt
that the mind was readable by machine. After he stated his personal belief in a mind that
extended to the physical environment, the researcher asked what consequence this belief
might have for the future of BCIs.
BCI anxieties
An important counterpoint to emerging technologies is the anxiety that rises along with
them [84]. Interestingly, engineers in our study for the most part expressed no strong anxieties
regarding the development of BCIs. Regardless of their experiences with our probe,
participants felt that BCIs would be developed, and would improve people’s lives. Participants
mentioned domains such as work, safety, and increased convenience in the home.
Only Mary reported existential anxiety about the possibility of machines that could read
the human mind. She described such a technology as “absolutely possible,” and referenced the
probe’s continuing high accuracy as we spoke. However, in stark contrast to Terrance, Mary
feared such a development would occur sooner rather than later.
I hope it’s fifteen years out, but realistically, it’s probably more like ten. (Mary)
Despite Mary’s prior statement about the power of institutions to change the course of
technical developments, here she seems to indicate that such course changes will not occur,
or that they will converge on machines that can read the mind. When pressed on downsides,
the participants who did not volunteer any anxieties about BCI initially did mention security
(especially the “leaking” of “thoughts”) as a concern. For example, Elizabeth did not report any
particular anxieties about BCIs in general, “if the proper protections are in place.” Pressed
on what those protections might look like, she cited encryption as a solution to privacy
concerns. Terrance, who expressed wanting BCIs to become more widespread, described in
deterministic terms the cybersecurity issues such devices might pose.
If there are security holes - which there almost certainly will be - then what
happens when I’m leaking my thoughts to someone? What if I’m thinking about
the seed phrase for my Bitcoin wallet. . . and then you put it in this anonymized
dataset . . . and I lose all my coins? What then? (Terrance)
Even alongside his concern, Terrance very much wanted a mind-reading machine to exist.
He mentioned a desire for a programming assistant that would somehow speed up the process
of software development. Since Terrance’s conception of BCI presents high stakes with regard
to privacy and security (he variously mentioned “telepathy,” and an “ESP device,” implying a
high degree of specificity with regard to what BCIs can resolve), it is telling that he thought
primarily of using BCIs to become a more efficient engineer, rather than concerns around
privacy or potential harm. Later in the discussion, we unpack further how larger cultural
tendencies in Silicon Valley might shape the way engineers build BCI systems.
6.5 Discussion
We find that engineers hold diverse beliefs about what the mind is, what the brain is, and
about the relationship between these entities. However, all of these engineers shared a core
belief that the mind is a physical entity, one that machines can and will decode given the
proper equipment and algorithms. Despite this belief, engineers largely did not express
concerns about privacy or security. As BCI startups continue to grow, we propose further
work within technical communities, with a sensitivity toward emerging narratives, so that we
may instill criticality within this emerging technical practice. We conclude with avenues for
future work focusing on different communities of technical practice.
his pre-existing notions of the mind, producing a hypothesis for what “brain states” might
exist and what states the Muse headset might be able to detect. Hypotheses such as these could
be consequential, as they might provide ideas or starting points for engineers looking to build
systems. Our results highlight the importance of both pre-existing beliefs and particular
interactions with BCIs in structuring engineers’ understandings.
Broadly, engineers’ beliefs about the mind-as-computer metaphor (Section 6.4) could
provide starting points for engineers to build BCIs in the future. This computational view of
mind has been popular among engineers at least since the “good old-fashioned AI” (GOFAI) of
the 1950s. While much work has critiqued this stance from various angles [2, 48], those same
critiques have acknowledged the role these metaphors have played in the development of novel
technologies: If the mind is a machine, then those tools used to understand machines can
also be used to understand the mind. Here, we see this metaphor return, its discursive work
now focused on biosensing rather than on artificial intelligence. Of course, these metaphors
illuminate certain possibilities while occluding others [48]. As such, future work should follow
past research [2] in understanding what work this metaphor might do in its new domain of
computational mind-reading.
Even those participants who did not subscribe to computational theories of mind still
believed the mind to be strictly physical. These subjects all agreed that computers could
someday read the mind, precisely because of its physical nature. While our results indicate
that engineers believe the mind to be machine-readable, some work indicates that non-
engineers may share this belief as well [4]. Future work could investigate this claim more
deeply in the context of consumer BCIs. If so, a machine designed by engineers and purported
to read the mind might find acceptance among a broader public audience.
Those subjects with a computational account of mind tended to feel more confident that
their account was substantially accurate. John referenced “the consensus” in justifying his
beliefs about the mind being equivalent to the brain. It is worth asking whose consensus this
might be: that of neuroscientists, philosophers of mind, cognitive scientists, or engineers?
In any of these cases, engineers’ confidence in their beliefs could have implications for what
types of systems are considered buildable, and where engineers might look to validate their
implementations. As products come to market, professionals in the tech industry must find
ways of claiming their devices to be legitimate, or working, to the public (consumers), to
potential investors, and to other engineers. These claims of legitimacy could prove to be a
fruitful window for understanding the general sensemaking process around these devices as
their (perceived) capabilities inevitably evolve and grow alongside changing technologies.
BCIs as a device for the masses. For example, Terrance’s concern about someone stealing his
Bitcoins through some BCI-based attack involves a technology which for now remains niche.
This imagined scenario demonstrates how the security (and privacy) concerns of engineers
may not match that of the general public. Such mismatches could have consequences for the
types of systems that are designed, and whose needs these systems will account for.
Crucially, discussions about privacy and security concerns did not cause any participants
to reflect further on the consequences of pervasive BCIs, nor did they deter enthusiasm for the
development of these devices. These findings indicate either that engineers are not inclined
to prioritize security in the systems they build, or that they have resigned themselves to the
inevitability of security holes in software. In either case, our findings suggest a long-term
direction for cybersecurity research. These devices carry potentially serious security and
privacy consequences. If engineers build devices that make judgments about
the inner workings of a person’s mind, future work must critically examine how to protect
such systems, and the people who use them.
help us trace these intents forward as devices are re-imagined, remixed and repackaged for
other groups of users in the future.
In the nascent field of consumer BCI, researchers and designers should remain in touch
with the beliefs of engineers. We pinpoint beliefs about the mind, and its readability by
emerging biosensing devices, as an especially critical facet. Doing so will allow design to
remain preemptive rather than reactive as software for consumer BCI emerges. Designers and
researchers must not remain on the sidelines; as devices come to market, we must become
actively engaged in engineers’ beliefs (and practices). These systems hold the potential for
exploiting an unprecedented level of personal data, and therefore present real potential for
harm. As such, the area presents a new locus for researchers and designers to engage critically
with technical developments.
Future work
Software engineers are a diverse group, and the geographic confines of Silicon Valley do not
describe all communities worldwide. Future work could explore communities in different
places. Engineers in non-Western contexts may hold different cultural beliefs about the mind,
which could lead to vastly different findings.
Professionals who work in machine learning could present another participant pool for
future work. Machine learning is a critical component of BCIs, and many contemporary
techniques, particularly deep learning, use neural metaphors to interpret and design
algorithms [6]. Thus, practitioners of these techniques may be inclined to draw metaphors
between the brain and the algorithms they employ, which could color their understanding of
how and why BCIs work or fail.
Future work could allow participants to take an active, participatory role in the analysis
of their data, and/or in the design of the BCI system. Although our participants had the
technical expertise required to perform data analysis and systems engineering themselves, we
did not have participants do any such analysis for this study. This participatory approach will
also help us expand our understanding from engineers’ beliefs to engineers’ practices, as they
relate to the emerging domain of consumer brain-computer interfaces. Participants might
form their own interpretations of what the data mean (or can mean), building understandings
that could differ from those we observed in this study.
6.6 Conclusion
As engineers in the San Francisco Bay Area, the participants in our study sit at an historical
site of techno/political power. Our technology probe indicates these engineers believe the
mind is physical, and therefore amenable to sensing. What are the consequences for the rest
of us? I hope this study will encourage engineers to closely examine the potential of these
devices for social harm, and encourage researchers to remain closely attuned to this emerging
class of consumer biosensor.
What this study did not rigorously examine is how the engineers in our study encountered
notions of identity as they might be captured by the brain-scanning device. Although
engineers broadly believed the mind to be readable by machines, this chapter did not deeply
examine to what extent they believed identity to be related to the mind or the brain. In
the following chapter, I examine participants’ responses through this lens, charting engineers’
beliefs about the readability of identity as an aspect of mind.
Chapter 7
What are the limits of machines’ ability to model the mind? My arguments in this dissertation
reorient this question around human beliefs: What are the limits within which claims of
mind-modeling might be made (by engineers), and believed (by end-users)? I propose the
term telepathy to describe the process of understanding models of minds. I then use this term
to motivate research charting the limits of what work telepathy might perform in the world.
7.1 Telepathy
Earlier in this dissertation, I framed prior research programs as having built models of minds,
showing how work in philosophy supports their claims. By analyzing critiques of these
research programs, I highlighted the primacy of human beliefs, both engineers’ and users’, in
structuring how models of minds are built, and understood as relevant.
Building models of minds can be split into two major components: the engineering
program of building algorithms that encode and represent mental states, and the social
processes of understanding these representations as relevant in the course of life. While the
boundary between these components is intrinsically unstable, the split is nonetheless useful
in understanding how these models perform work in the world.
To describe the latter component, I propose the term telepathy. While this term has
a strong connection to magic, I believe it is useful to repurpose the term for discussions
about computational models of minds, and how they are understood by people. Consider
telepathy’s etymological pedigree in relation to other popular technologies.
Mind at a distance.
While the first two terms may have sounded like magic at some point in history, technical
infrastructures have provided functionality that made these terms legible not just as technolo-
gies but as social media. Telepathy is in spirit no different. In relation to the other technical
infrastructures, the prefix tele- highlights technical aspects of transmission, along with the
various sociotechnical infrastructures and entanglements that make transmission, encoding,
and decoding possible. Telepathy works to describe how models of minds are “made and
measured” [10], while gesturing toward the unstable boundary between these two activities.
What might telepathy be used for? Answers to this question relate deeply to the beliefs
of users and engineers. Thus, the relevant questions here include: What are the limits within
which claims of telepathy might be made, or believed? How might emerging infrastructures of
ubiquitous bodily and environmental sensing assist such claims, by ascribing higher resolution
to their models? Or detract from them by making biosensory data mundane, thus challenging
their presumed authority? Future work should deeply examine engineers’ beliefs, how they
change with evolving technologies, and how these beliefs affect (and are affected by) technical
practices. Beliefs about the mind will continue to co-evolve along with our rapidly changing
technical capacity to sense and model the world.
Figure 7.1: A big loop: beliefs about the mind inform the design of tools, and the use of
these tools informs beliefs about the mind.
CHAPTER 7. TELEPATHY WITHIN LIMITS 72
I suspect the coming years will provide opportunities to study these questions longitudinally,
as technologies develop and become more diffuse. The remainder of this chapter discusses
another set of longitudinal concerns, which should be studied in parallel: security, privacy,
and surveillance.
7.4 Conclusion
This dissertation aims to paint a few provocative dots on a very large canvas. As sensors
continue to saturate our environment, people will continue to build increasingly high-resolution
models of our bodies and minds. Machines’ purported ability to divine not just what these
bodies do, but what they think and feel, will prove to be a key concern for privacy, personal
autonomy, and cybersecurity in the coming hundred years. It will also generate novel
Bibliography
[10] Kirsten Boehner et al. “How emotion is made and measured”. In: International Journal
of Human Computer Studies 65.4 (2007), pp. 275–291. issn: 10715819. doi: 10.1016/
j.ijhcs.2006.11.016.
[11] Kirsten Boehner et al. “How HCI interprets the probes”. In: Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems (CHI ’07). 2007, p. 1077. isbn:
1595935932. doi: 10.1145/1240624.1240789. url: http://portal.acm.org/citation.cfm?doid=1240624.1240789.
[12] Andrey Bogomolov, Bruno Lepri, and Fabio Pianesi. “Happiness recognition from mo-
bile phone data”. In: Proceedings - SocialCom/PASSAT/BigData/EconCom/BioMedCom
2013. IEEE, 2013, pp. 790–795. isbn: 9780769551371. doi: 10.1109/SocialCom.2013.118.
[13] Geoffrey C Bowker and Susan Leigh Star. Sorting things out: Classification and its
consequences. MIT press, 2000.
[14] Simone Browne. Dark Matters: On the Surveillance of Blackness. Duke University
Press, 2015. isbn: 9780822375302. url: https://books.google.com/books?id=
snmJCgAAQBAJ.
[15] Winslow Burleson. “Predicting Creativity in the Wild: Experience Sample and Socio-
metric Modeling of Teams”. In: Proceedings of the ACM 2012 conference on Com-
puter Supported Cooperative Work (CSCW ’12). ACM, 2012, pp. 1203–1212. isbn:
9781450310864. doi: 10.1145/2145204.2145386.
[16] Paul Campos. The Obesity Myth: Why Our Obsessions with Weight is Hazardous to
Our Health. Book, Whole. Penguin, 2004. isbn: 0670042846, 9780670042845.
[17] Luca Canzian and Mirco Musolesi. “Trajectories of depression”. In: Proceedings of the
2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing
(UBICOMP ’15). 2015, pp. 1293–1304. isbn: 9781450335744. doi: 10.1145/2750858.
2805845.
[18] Cardiogram: what’s your heart telling you? url: http://www.cardiogr.am/ (visited
on 05/26/2016).
[19] Francesco Carrino et al. “A self-paced BCI system to control an electric wheelchair:
Evaluation of a commercial, low-cost EEG device”. In: 2012 ISSNIP Biosignals and
Biorobotics Conference: Biosignals and Robotics for Better and Safer Living (BRC
2012). 2012, pp. 1–6. isbn: 9781467324762. doi: 10.1109/BRC.2012.6222185.
[20] David John Chalmers. The Conscious Mind: In Search of a Fundamental Theory.
Oxford University Press, 1996, pp. xvii, 414.
[21] Tianqi Chen and Carlos Guestrin. “XGBoost: A Scalable Tree Boosting System”. In:
arXiv:1603.02754 (2016), pp. 1–6. doi: 10.1145/2939672.2939785.
[22] C. Cheshire, A. Gerbasi, and K. S. Cook. “Trust and Transitions in Modes of Exchange”.
In: Social Psychology Quarterly 73.2 (2010), pp. 176–195. issn: 0190-2725. doi:
10.1177/0190272509359615.
[23] Eun Kyoung Choe et al. “Living in a Glass House: A Survey of Private Moments in
the Home”. In: UbiComp 2011 (2011), pp. 41–44. doi: 10.1145/2030112.2030118.
url: http://dl.acm.org/citation.cfm?doid=2030112.2030118.
[24] John Chuang et al. “I think, therefore I am: Usability and security of authentication
using brainwaves”. In: International Conference on Financial Cryptography and Data
Security. 2013, pp. 1–16. isbn: 9783642413193. doi: 10.1007/978-3-642-41320-9_1.
[25] Andy Clark. “Whatever next? Predictive brains, situated agents, and the future
of cognitive science”. In: Behavioral and Brain Sciences 36.3 (2013), pp. 181–204.
issn: 1469-1825. doi: 10.1017/S0140525X12000477. url:
http://www.ncbi.nlm.nih.gov/pubmed/23663408.
[26] Andy Clark and David Chalmers. “The Extended Mind”. In: Analysis 58.1 (1998),
pp. 7–19. issn: 0003-2638. doi: 10.1111/1467-8284.00096.
[27] Sunny Consolvo et al. “Location disclosure to social relations”. In: Proceedings
of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’05). ACM
Press, 2005, p. 81. isbn: 1581139985. doi: 10.1145/1054972.1054985. url:
http://portal.acm.org/citation.cfm?doid=1054972.1054985.
[28] K. S. Cook et al. “Trust Building via Risk Taking: A Cross-Societal Experiment”.
en. In: Social Psychology Quarterly 68.2 (2005), pp. 121–142. issn: 0190-2725. doi:
10.1177/019027250506800202. url: http://spq.sagepub.com/cgi/doi/10.1177/
019027250506800202.
[29] Tressie McMillan Cottom. “Black CyberFeminism: Ways Forward for Intersectionality
and Digital Sociology”. In: (2016).
[30] Sophie Day and Celia Lury. “Biosensing: Tracking Persons”. In: Quantified: Biosensing
Technologies in Everyday Life (2016), p. 43.
[31] Michael D. Decaria, Stewart Proctor, and Thomas E. Malloy. “The effect of false
heart rate feedback on self-reports of anxiety and on actual heart rate”. In:
Behaviour Research and Therapy 12.3 (1974), pp. 251–253. issn: 00057967. doi:
10.1016/0005-7967(74)90122-3. url:
http://linkinghub.elsevier.com/retrieve/pii/0005796774901223.
[32] Laura Devendorf et al. ““I don’t want to wear a screen”: Probing Perceptions of and
Possibilities for Dynamic Displays on Clothing”. In: Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems (CHI ’16). 2016, pp. 6028–6039.
isbn: 9781450333627. doi: 10.1145/2858036.2858192. url:
http://artfordorks.com/pubs/16_CHI_Ebb.pdf.
[33] Cory Doctorow. Rachel Kalmar’s datapunk quantified self sensor array 2, Institute
for the Future, Palo Alto, California, USA. 2014. url: https://www.flickr.com/
photos/doctorow/15659135172 (visited on 01/31/2018).
[34] Markéta Dolejšová and Denisa Kera. “Soylent Diet Self-Experimentation: Design
Challenges in Extreme Citizen Science Projects”. In: (2017), pp. 2112–2123. doi:
10.1145/2998181.2998365.
[35] Joseph Dumit. Picturing Personhood: Brain Scans and Biomedical Identity. Princeton
University Press, 2004.
In: Conference on Human Factors in Computing Systems - Proceedings. 2016, pp. 5015–
5027. isbn: 978-1-4503-3362-7. doi: 10.1145/2858036.2858181.
[37] Deborah Estrin and Ida Sim. “Health care delivery. Open mHealth architecture: an
engine for health care innovation”. In: Science 330.6005 (2010), pp. 759–760. issn:
0036-8075. doi: 10.1126/science.1196187.
[38] Feel.co. Feel: The world’s first emotion sensor & well-being advisor. 2018. url: https:
//www.myfeel.co/ (visited on 01/01/2018).
[39] Maridel A. Fredericksen et al. “Three-dimensional visualization and a deep-learning
model reveal complex fungal parasite networks in behaviorally manipulated ants”. In:
Proceedings of the National Academy of Sciences (2017), p. 201711673. issn: 0027-8424.
doi: 10.1073/pnas.1711673114.
[40] Jérémy Frey. “Remote Heart Rate Sensing and Projection to Renew Traditional Board
Games and Foster Social Interactions”. In: Proceedings of the extended abstracts of the
34th annual ACM conference on Human factors in computing systems (CHI EA ’16).
ACM, 2016, pp. 1865–1871. isbn: 9781450340823. doi: 10.1145/2851581.2892391.
arXiv: 1602.08358.
[52] Edwin Hutchins. “Distributed cognition”. In: Cognition, Technology & Work 7.1 (2005),
pp. 5–5. issn: 1435-5558. doi: 10.1007/s10111-004-0172-0.
[53] Hilary Hutchinson et al. “Technology probes: inspiring design for and with families”.
In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
(CHI ’03) 5 (2003), pp. 17–24. issn: 09501991. doi: 10.1145/642611.642616. url:
http://dl.acm.org/citation.cfm?id=642616.
[54] Interaxon. Muse: The brain sensing headband. 2017. url: http://www.choosemuse.
com/.
[55] Katherine Isbister et al. “The Sensual Evaluation Instrument: Developing an Affective
Evaluation Tool”. In: CHI 2006 Proceedings. 2006, pp. 1163–1172. isbn: 1595931783.
doi: 10.1145/1124772.1124946.
[56] Joris H. Janssen et al. “Intimate heartbeats: Opportunities for affective communication
technology”. In: IEEE Transactions on Affective Computing 1.2 (2010), pp. 72–80.
issn: 19493045. doi: 10.1109/T-AFFC.2010.13. url: http://ieeexplore.ieee.
org/lpdocs/epic03/wrapper.htm?arnumber=5611482.
[57] Jacob Kastrenakes. Apple Watch uses four sensors to detect your pulse. 2014. url:
http://www.theverge.com/2014/9/9/6126991/apple-watch-four-back-sensors-detect-activity.
[58] K. N. Kay et al. “Identifying natural images from human brain activity”. In:
Nature 452.7185 (2008), pp. 352–355. issn: 0028-0836. doi: 10.1038/nature06713.
[59] Taemie Kim et al. “Meeting mediator”. In: Proceedings of the ACM 2008 conference
on Computer supported cooperative work - CSCW ’08. ACM, 2008, p. 457. isbn:
9781605580074. doi: 10.1145/1460563.1460636. url: http://portal.acm.org/
citation.cfm?doid=1460563.1460636.
[60] Olave E. Krigolson et al. “Choosing MUSE: Validation of a low-cost, portable EEG sys-
tem for ERP research”. In: Frontiers in Neuroscience 11 (2017). issn: 1662453X.
doi: 10.3389/fnins.2017.00109.
[61] Gierad Laput, Yang Zhang, and Chris Harrison. “Synthetic Sensors: Towards General-
Purpose Sensing”. In: Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (2017), pp. 3986–3999.
[62] Antti Latvala et al. “A longitudinal study of resting heart rate and violent criminality
in more than 700000 men”. In: JAMA Psychiatry 72.10 (2015), pp. 971–978. issn:
2168622X. doi: 10.1001/jamapsychiatry.2015.1165.
[63] Lucian Leahu and Phoebe Sengers. “Freaky”. In: Proceedings of the 2014 conference
on Designing interactive systems (DIS ’14). ACM Press, 2014, pp. 607–616. isbn:
9781450329026. doi: 10.1145/2598510.2600879. url:
http://dl.acm.org/citation.cfm?doid=2598510.2600879.
[64] Steven Levy. Brain-Machine Interface Isn’t Sci-Fi Anymore. 2017. url: https://
www.wired.com/story/brain-machine-interface-isnt-sci-fi-anymore/.
[65] Andrew Liptak. “There Are Some Super Shady Things in Oculus Rift’s Terms of
Service”. In: Gizmodo (2016). url: http://gizmodo.com/there-are-some-super-
shady-things-in-oculus-rifts-terms-1768678169.
[66] Gilad Lotan and Christian Croft. “imPulse”. en. In: CHI ’07 extended abstracts on
Human factors in computing systems - CHI ’07 (2007), p. 1983. doi: 10.1145/1240866.
1240936. url: http://portal.acm.org/citation.cfm?doid=1240866.1240936.
[67] Joanne McNeil. “Who Sexts Thumbprints?” In: (2015). url: https://medium.com/
message/who-sexts-thumbprints-2138641c98c.
[68] Nick Merrill and Coye Cheshire. “Habits of the Heart(rate): Social Interpretation of
Biosignals in Two Interaction Contexts”. In: Proceedings of the 2016 conference on
Groupwork (GROUP ’16). 2016.
[69] Nick Merrill and Coye Cheshire. “Trust Your Heart: Assessing Cooperation and Trust
with Biosignals in Computer-Mediated Interactions”. In: Proceedings of the 2017 ACM
Conference on Computer Supported Cooperative Work (CSCW ’17). Portland, OR,
2017.
[70] Nick Merrill and John Chuang. “From Scanning Brains to Reading Minds: Talking
to Engineers about Brain-Computer Interface”. In: Proceedings of the 2018 ACM
Conference on Computer Human Interaction (CHI ’18). Montreal, QC, 2018.
[71] Nick Merrill, Max T Curran, and John Chuang. “Is the Future of Authenticity All
In Our Heads? Moving Passthoughts from the Lab to the World”. In: New Security
Paradigms Workshop (NSPW ’17) (2017).
[72] Cade Metz. “Facebook’s Race to Link Your Brain to a Computer Might Be Unwinnable”.
In: Wired (2017). url: https://www.wired.com/2017/04/facebooks-race-link-
brain-computer-might-unwinnable/.
[73] Marvin Minsky and Seymour Papert. “Perceptrons”. In: (1969).
[74] Dawn Nafus. Quantified: Biosensing technologies in everyday life. MIT Press, 2016.
[75] Neurable. Neurable: Power Brain-Controlled Virtual Reality. 2017. url: http://www.
neurable.com/.
[76] NeurotechX. The international neurotechnology network. url: http://neurotechx.
com/ (visited on 12/20/2017).
[77] Alva Noë and Evan Thompson. “Are There Neural Correlates of Consciousness?”
In: Journal of Consciousness Studies 11.1 (2004), pp. 3–28. issn: 13558250.
[78] Geoff Nunberg. “Feeling Watched? ’God View’ Is Geoff Nunberg’s Word Of The
Year”. In: All Tech Considered, NPR (2014). url: https://www.npr.org/sections/
alltechconsidered/2014/12/10/369740829/forget-creepy-nunbergs-word-of-
the-year-is-bigger-and-two-god-view.
[79] Daniel Olguín Olguín et al. “Sensible organizations: Technology and methodology for
automatically measuring organizational behavior”. In: IEEE Transactions on Systems,
Man, and Cybernetics, Part B: Cybernetics 39.1 (2009), pp. 43–55. issn: 10834419.
doi: 10.1109/TSMCB.2008.2006638.
[80] J. Kevin O’Regan and Alva Noë. “A sensorimotor account of vision and visual
consciousness”. In: Behavioral and Brain Sciences 24.05 (2001), pp. 939–973. issn:
0140-525X. doi: 10.1017/S0140525X01000115.
[81] B. Parkinson. “Emotional effects of false autonomic feedback”. In: Psychological
Bulletin 98.3 (1985), pp. 471–494. issn: 0033-2909. doi: 10.1037/0033-2909.98.3.471.
[82] Brian Parkinson. “Emotions in Interpersonal Life”. In: The Oxford Handbook of
Affective Computing. 2014, pp. 68–83. doi: 10.1093/oxfordhb/9780199942237.013.004.
[83] R. W. Picard and J. Healey. “Affective wearables”. In: Personal and Ubiquitous
Computing 1.4 (1997), pp. 231–240. issn: 16174909. doi: 10.1007/BF01682026.
[84] James Pierce. “Dark Clouds, Io$#!+, and ?[Crystal Ball Emoji]: Projecting
Network Anxieties with Alternative Design Metaphors”. In: (2017), pp. 1383–1393.
[85] Emilee Rader and Janine Slaker. “The Importance of Visibility for Folk Theories of
Sensor Data”. In: Symposium on Usable Privacy and Security (SOUPS ’17). 2017.
isbn: 9781931971393.
[86] Terry Regier and Paul Kay. Language, thought, and color: Whorf was half right. 2009.
doi: 10.1016/j.tics.2009.07.001. url:
http://www.sciencedirect.com/science/article/pii/S1364661309001454 (visited on 06/23/2016).
[87] Nikolas Rose. “Reading the human brain: How the mind became legible”. In: Body &
Society 22.2 (2016), pp. 140–177. issn: 1357-034X. doi: 10.1177/1357034X15623363.
[88] Matthew Sample. “Evaluating Neural Futures: Good Technoscience and the Challenge
of Co-Production”. PhD Dissertation. University of Washington, 2016.
[89] AnnaLee Saxenian. Regional advantage. Harvard University Press, 1996.
[90] Elaine Sedenberg, Richmond Y Wong, and John C.-I. Chuang. “A Window into the
Soul: Biosensing in Public”. In: CoRR abs/1702.0 (2017). arXiv: 1702.04235. url:
http://arxiv.org/abs/1702.04235.
[91] Phoebe Sengers and Bill Gaver. “Staying open to interpretation: engaging multiple
meanings in design and evaluation”. In: Proceedings of the 6th conference on Designing
. . . (2006), pp. 99–108. isbn: 1595933670. doi: 10.1145/1142405.1142422. url:
http://dl.acm.org/citation.cfm?id=1142422.
[92] Roger N Shepard and Jacqueline Metzler. “Mental rotation of three-dimensional
objects”. In: Science 171.3972 (1971), pp. 701–703.
[93] Petr Slovák, Joris Janssen, and Geraldine Fitzpatrick. “Understanding heart rate
sharing”. en. In: Proceedings of the 2012 ACM annual conference on Human Factors in
Computing Systems - CHI ’12 February (2012), p. 859. issn: 0022-3514. doi: 10.1145/
2207676.2208526. url: http://dl.acm.org/citation.cfm?id=2207676.2208526.
[94] Smith Optical. Lowdown Focus mpowered by Muse. 2017. url: http://smithoptics.
com/us/lowdownfocus.
[95] Jaime Snyder et al. “MoodLight”. en. In: Proceedings of the 18th ACM Conference
on Computer Supported Cooperative Work & Social Computing - CSCW ’15. ACM
Press, 2015, pp. 143–153. isbn: 9781450329224. doi: 10.1145/2675133.2675191.
url: http://dl.acm.org/citation.cfm?doid=2675133.2675191.
[96] Spire, Inc. Spire is the first wearable to track body, breath, and state of mind. (Visited
on 05/26/2016).
[97] James Stables. The best biometric and heart rate monitoring headphones. 2016. url:
http://www.wareable.com/headphones/best- sports- headphones (visited on
05/22/2016).
[98] Peter Tolmie et al. ““This has to be the cats”: Personal Data Legibility in Net-
worked Sensing Systems”. In: Proceedings of the 19th ACM Conference on Computer-
Supported Cooperative Work & Social Computing - CSCW ’16. 2016, pp. 490–501.
isbn: 9781450335928. doi: 10.1145/2818048.2819992. url: http://dl.acm.org/
citation.cfm?doid=2818048.2819992.
[99] Tim Urban. Neuralink and the Brain’s Magical Future. 2017. url: https://waitbutwhy.
com/2017/04/neuralink.html.
[100] S Valins. “Cognitive effects of false heart-rate feedback.” en. In: Journal of person-
ality and social psychology 4.4 (1966), pp. 400–408. issn: 0022-3514. doi: 10.1037/
h0023791. url: http://content.apa.org/journals/psp/4/4/400.
[101] Robert S Weiss. Learning from strangers: The art and method of qualitative interview
studies. Simon and Schuster, 1995.
[102] Chris Wilson and Joe Oeppen. “On reification in demography”. In: Population, projec-
tions and politics (2003), pp. 113–129.
[103] Terry Winograd and Fernando Flores. On understanding computers and cognition:
A new foundation for design. A response to the reviews. 1987. doi:
10.1016/0004-3702(87)90026-9.
[104] Richmond Y Wong and Deirdre K Mulligan. “When a Product Is Still Fictional:
Anticipating and Speculating Futures through Concept Videos”. In: Proceedings of the
2016 ACM Conference on Designing Interactive Systems. ACM. 2016, pp. 121–133.
[105] Lynn Wu et al. “Mining face-to-face interaction networks using sociometric badges:
predicting productivity in an IT configuration task”. In: ICIS 2008 Proceedings (2008), pp. 1–19.
[106] David Young, Richard Hirschman, and Michael Clark. “Nonveridical heart rate feedback
and emotional attribution”. In: Bulletin of the Psychonomic Society 20.6 (1982),
pp. 301–304. issn: 0090-5054. doi: 10.3758/BF03330108.