The Anchoring Effect in Decision-Making with Visual Analytics
Isaac Cho* Ryan Wesslen† Alireza Karduni‡ Sashank Santhanam§ Samira Shaikh¶
Wenwen Dou||

University of North Carolina at Charlotte

ABSTRACT

The anchoring effect is the tendency to focus too heavily on one piece of information when making decisions. In this paper, we present a novel, systematic study and resulting analyses that investigate the anchoring effect on human decision-making using visual analytic systems. Visual analytics interfaces typically contain multiple views that present various aspects of information, such as spatial, temporal, and categorical. These views are designed to present complex, heterogeneous data in accessible forms that aid decision-making. However, human decision-making is often hindered by the use of heuristics, or cognitive biases, such as the anchoring effect. The anchoring effect can be triggered by the order in which information is presented or by the magnitude of the information presented. Through carefully designed laboratory experiments, we present evidence of the anchoring effect in analysis with visual analytics interfaces when users are primed by representations of different pieces of information. We also describe detailed analyses of users' interaction logs, which reveal the impact of anchoring bias on the preferred visual representation and on paths of analysis. We discuss implications for future research to possibly detect and alleviate anchoring bias.

Keywords: Visual Analytics, Anchoring Effect, Sense Making, Cognitive Bias, Interaction Log Analysis

Index Terms: K.6.1 [Management of Computing and Information Systems]: Project and People Management—Life Cycle; K.7.m [The Computing Profession]: Miscellaneous—Ethics

* e-mail: [email protected]
† e-mail: [email protected]
‡ e-mail: [email protected]
§ e-mail: [email protected]
¶ e-mail: [email protected]
|| e-mail: [email protected]

1 INTRODUCTION

Researchers in multiple fields, including psychology, economics and medicine, have extensively studied the effect of cognitive biases on decision making [7, 8, 17]. Cognitive biases are rules of thumb, or heuristics, that help us make sense of the world and reach decisions with relative speed [28]. Decision making, the process of identifying solutions to complex problems by evaluating multiple alternatives [46], has become increasingly difficult with the explosion of big data [44]. To facilitate human decision-making on large and complex datasets, Visual Analytics (VA) combines automated analysis techniques with interactive visualizations to increase the amount of data users can effectively work with [31]. Evidently, the effectiveness of VA in supporting decision making is an area that warrants study. Our goal in this work is therefore to conduct a study that incorporates three complementary strands of research, given the premises that VA supports decision making and that decision making is impacted by cognitive biases. Specifically, we investigate how users' decision-making processes are impacted by cognitive biases when using VA systems to analyze large and complex datasets. Moreover, we explore if and how cognitive biases are reflected in the way that users interact with visual analytic interfaces.

In the context of VA research, many recent VA systems [9, 16, 21, 30, 34] designed to facilitate decision making on large and complex datasets contain coordinated and multiple views (CMV). By presenting different visual representations that show various aspects of the underlying data and automatically coordinating operations between views, multiple coordinated views support exploratory analysis to enable insight and knowledge discovery [39]. In visual interfaces that employ a CMV design, users often have choices about which views serve as primary vs. supporting views for their analysis and about the strategies for switching between views.

The flexibility of visual interfaces with coordinated and multiple views makes cognitive biases such as anchoring bias particularly relevant to study. People find cognitive biases to be useful heuristics when sorting through large amounts of information, when task constraints or instructions prime them to focus on specific types of information, or when asked to make quick decisions and analyses. This has been demonstrated for several biases, and biases have been shown to affect decision-making processes in predictably faulty ways that can result in decision-making failures when information is discounted, misinterpreted, or ignored [29]. Additionally, these biases affect not only regular users but also expert users when thinking intuitively [29]. One type of bias, the anchoring effect, describes the human tendency to rely too heavily on the first piece of information offered (the "anchor") when making decisions [17]. Research has demonstrated that individuals anchor on a readily accessible value and adjust from it to estimate the true value, often with insufficient adjustments. For instance, if a person is asked to estimate the length of the Mississippi River after a question on whether the length is longer or shorter than 500 miles, their answer will be adjusted from the "anchor" value of 500 miles and will underestimate the true length. The effect of such anchors has been extensively studied in multiple tasks in the laboratory and in the field (for a detailed review see [19]). However, the effect of anchoring in visual analytics interfaces has not been systematically studied. More importantly, the effect of anchoring bias on the strategies that users deploy to interact with the visual interface, and on their analysis outcomes, remains an open question.

In this paper, we study the effect of anchoring on users' exploration processes and outcomes. When interacting with visual interfaces employing a CMV design, there is a possibility that users rely too heavily on one particular view. The reasons for such reliance include, but are not limited to, prior experience, familiarity with certain visualizations, and the different ways users were trained to use the visual interface. The significance and impact of such anchoring is the subject of our study.

Prior work in the VA community provides empirical data on the cognitive costs of visual comparisons and context switching in coordinated-multiple-view visual interfaces [11, 38]. Findings from these experiments inform design guidelines for CMVs. However, there is little research on how cognitive biases transfer to visualizations, in particular to visual interfaces with coordinated multiple views. MacEachren [33] argues that prior efforts in the visualization of uncertainty deal with the representation of data uncertainty but do not
address the reasoning that takes place under these conditions. We therefore aim to investigate the impact of the anchoring effect on human decision-making processes when using VA systems, because it has been shown to overwhelmingly affect decision-making [17]. Our experiment design addresses several challenging requirements that are necessary to derive meaningful implications: first, the experiments need to be conducted using a VA system with tasks relevant to decision-making based on large and complex datasets; second, measures and experiment data that reflect users' decision-making processes (beyond task completion time and accuracy) need to be collected; third, novel analysis methods need to be developed to tease out the effect of anchoring bias on decision making with VA systems. Accordingly, our work makes the following original contributions:

• To situate our study in complex decision-making tasks with visual interfaces, the experiments are conducted with a sophisticated visual analytics system [10] with multiple coordinated views. The design of the visual analytics system enables a visual anchor on either the geo- or time-related representation through tutorial/training.

• In order to study the effect of anchoring bias on decision-making processes with greater nuance and granularity, we collect not only quantitative measures of users' performance, including questionnaire responses, but also detailed interaction logs within the visual interface. The interaction logs capture the decision-making process at an action level. Significant differences in actions were found between subjects assigned to different visual anchors.

• In addition to running statistical tests on the quantitative measures collected through pre- and post-questionnaires, we apply two novel methods of analysis - graph analysis and structural topic modeling - to analyze the paths and patterns of user interactions and identify the effect of anchoring bias. Our analysis revealed that visual anchors impact users' decision-making processes while numerical anchors affect the analysis outcomes.

2 BACKGROUND AND RELATED WORK

In this section, we describe literature in the areas relevant to our study.

2.1 Background on Anchoring Effect

Humans have the tendency to rely on heuristics to make judgments, which can lead to efficient and accurate decisions [22]; however, these heuristics may also lead to systematic errors known as cognitive biases [29]. Psychologists have long studied the presence of cognitive biases in the human decision-making process [29, 45]. The anchoring and adjustment bias, defined as the inability of people to make sufficient adjustments starting from an initial value to yield a final answer [45], is one of the most studied cognitive biases that can lead individuals to make sub-optimal decisions. In the classic study by Tversky and Kahneman [45], the authors found evidence that when individuals are asked to form estimates, they typically start with an easily accessible value or reference point and make adjustments from this value. While such an approach may not always lead to sub-optimal decisions, research has demonstrated that individuals typically fail to adjust their estimates away from their initial starting point, the anchor. Research has shown that anchoring affects decision making in various contexts, including judicial sentencing [3], negotiations [32] and medical diagnoses [7]. Given this documented prevalence of anchoring bias in various contexts of decision-making activities, we hypothesize that such effects may also be present when individuals interact with data while using visual analytics.

2.2 Visual Analytics and Cognitive Biases

Sacha et al. [43] investigate how uncertainties can propagate through visual analytics systems, examine the role of cognitive biases in understanding uncertainties, and suggest guidelines for the design of VA systems that may further facilitate human decision-making. Similarly, research on the detection of biased decision making with VA software is in its early stages [36]. Harrison et al. found through a crowd-sourcing experiment that affective priming can influence accuracy in common graphical perception tasks [23]. George et al. [20] examined the robustness of the anchoring and adjustment effect in the context of decision support systems. Although their study revealed the presence of anchoring bias in the user's decision-making task of estimating the price of a house, their decision support system did not contain a highly complex visual interface consisting of coordinated multiple views. Researchers have also investigated the role of various other biases, such as confirmation bias [12] and the attraction effect [14], in the context of visual analytics. Dimara et al. [14] studied the attraction effect using crowdsourcing experiments to determine that attraction bias did in fact generalize to information visualization and that irrelevant alternatives may influence users' choices in scatterplots. Their findings provide implications for future research on how to possibly alleviate the attraction effect when designing information visualization plots, but no study to date has explored the anchoring bias in visual interfaces. Additionally, no research to date has examined the interaction patterns and activities of users in decision-making while these users are explicitly anchored under controlled experimental conditions.

In the next section, we describe a novel approach to analyzing users' interaction patterns which is grounded in the analysis of web log data.

2.3 Use of Topic Models for Analyzing Web Logs

For our analysis of the interaction logs, we employ a variant of topic models, structural topic modeling (STM), that facilitates testing the effect of document-level variables on topic proportions. By characterizing the temporal sequence of actions taken by the user during their interactions with the interface as a 'text document' and characterizing actions as 'topics', we are able to test the effects of several factors, which include not only demographic variables such as age and gender, but also the effects of anchoring bias on the user's actions, and hence their decision-making processes. Although topic models have been used to analyze web logs previously, our application of STM to user interaction logs is novel in providing a mechanism to test the effect of independent variables on actions (topic proportions). Early applications of topic models [13, 26] to analyze web log behavior used probabilistic latent semantic indexing (pLSI) [24], a predecessor of LDA-based topic models [5]. In the case of analyzing web log data, the pLSI model has been helpful in capturing meaningful clusters of users' actions, and was found to surpass state-of-the-art methods in generating user recommendations for Google News [13].

One limitation of this method was that it did not consider time or user-level attributes (independent variables) within the model. To address the issue of time, Iwata et al. [25] created an LDA-based topic model (Topic Tracking Model) to identify trends in individuals' web logs on two large consumer purchase behavior datasets. In their model, they created a time component to identify the dynamic and temporal nature of topics. As we will discuss in Section 5, we address the same concern by creating a time component in our STM model. Further, we employ STM's flexible causal inference framework as a mechanism to test anchoring bias by treating each anchor group as an additional independent variable.

3 USER EXPERIMENT

In this section, we first describe our research questions and provide a detailed description of the visual analytic system used in the experiment. We then describe the experiment design rationale and the tasks designed to elicit and test anchoring bias, and provide details about the experimental procedures and participants.
Figure 1: CrystalBall interface. The interface has 4 main views: (A) calendar view, (B) map view, (C) word cloud view, and (D) social network view. The calendar view shows the future event overview (a) by default. The event list (b) is shown when the user selects a subset of future events. The tweet panel (E) is shown when the user clicks the Twitter icon.

Figure 2: Event list. The flower glyph shows 5 measures of the future event and the number of tweets in the center (A). The three bar charts in the center show the hourly distribution of tweet posting times (B), the number of tweets pointing to the event in the last 30 days (C), and averages of the emotion scores of the tweets (D). A list of keywords that summarize the tweets is displayed next to the bar charts (E). The user can bookmark the event by clicking the star icon (F).

3.1 Research Questions

Given that our research lies at the intersection of anchoring bias, decision-making processes, and visual analytics systems, we designed two types of anchors, namely visual and numerical, to evaluate their effects in the context of visual analytics systems. The numerical anchor is based on many psychology studies and tests whether participants can adjust away from the numerical anchor in their final answers. The visual anchor is designed specifically to prime people with different views in visual analytics interfaces with a CMV design. The design of the numerical anchors is to evaluate whether users are subject to anchoring bias when using visual analytics interfaces to aid decision making, in a way similar to what has been found by previous experiments conducted without a visual analytics interface; the design of the visual anchors is to test specifically whether users can be anchored visually and how that affects the analysis process and outcome. More specifically, we seek to answer the following research questions with respect to the impact of anchoring on decision-making activities using visual analytics systems:

• RQ1 - Visual Anchor: Can individuals be anchored on a specific view in a CMV?

• RQ2 - Numerical Anchor: Are the effects of numerical priming transferable to VA?

• RQ3 - Interaction Patterns I: How does anchoring influence the paths of interactions?

• RQ4 - Interaction Patterns II: Are there systematic differences in the interaction patterns and information-seeking activities of individuals primed by different anchors?

To answer these questions, we designed and conducted a controlled experiment using a custom VA system, which is described next.

3.2 CrystalBall - a visual analytics system used for the experiment

To study the anchoring effect during complex decision-making tasks performed in a visual analytics system, we conduct the experiment with CrystalBall, a visual analytics system that facilitates users in identifying future events from Twitter streams [10]. Detecting future events from tweets is a challenging problem, as the signals of future events are often overwhelmed by the discussion of ongoing events. CrystalBall is designed to detect possible future events from streaming tweets by extracting multiple features, and it enables users to identify potentially impactful events.

3.2.1 Analyzing large and noisy Twitter data

On average, around 500 million tweets are posted on Twitter per day by more than 300 million Twitter users [1]. However, many of them discuss past and ongoing events, and news headlines. To find, identify and characterize possible future events, the CrystalBall system pipeline contains multiple components, including entity extraction, event identification and a visual interface.

The pipeline first extracts a location and a date from each tweet. If the extracted date refers to a future time and the extracted location is valid, then the tweet goes to the event identification component. Even if a tweet mentions a future time and a valid location, it is possible that the tweet does not contain any informative content. Thus, in order to determine the quality of tweets as indicators of future events, we employ 7 measures: Normalized Pointwise Mutual Information (NPMI) of time-location pairs, link ratio, hashtag ratio, user credibility, user diversity, degree centrality and tweet similarity.
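For illustration, the NPMI of a time-location pair can be computed from simple co-occurrence counts. The following is a minimal R sketch under assumed count data; the function and variable names are ours and do not reflect CrystalBall's actual implementation:

# Normalized Pointwise Mutual Information (NPMI) of a time-location pair.
# n_xy: tweets mentioning both the date and the location;
# n_x, n_y: tweets mentioning each individually; n_total: all candidate tweets.
npmi <- function(n_xy, n_x, n_y, n_total) {
  p_xy <- n_xy / n_total
  p_x  <- n_x / n_total
  p_y  <- n_y / n_total
  pmi  <- log(p_xy / (p_x * p_y))
  pmi / -log(p_xy)  # normalization maps PMI into [-1, 1]
}

# Hypothetical example: 25 of 10,000 tweets pair "Nov 19" with "California".
npmi(n_xy = 25, n_x = 120, n_y = 300, n_total = 10000)

A pair that co-occurs more often than its marginal frequencies would predict scores close to 1, flagging the tweet as a stronger future-event indicator.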
3.2.2 Multiple Coordinated Views in the CrystalBall Interface and User Interactions

Figure 1 shows the CrystalBall interface. The interface has 4 main views: a calendar view, a map view, a word cloud view and a social network view. The calendar view displays a list of future events (Figure 1A). By default, it shows an overview of future events (event overview, Figure 1a). The event overview shows all identified events and the connections among them. Circles represent identified future events. The circles are grouped by dates. Events that have the same location are connected with a solid line, and events that have the same keywords are connected with a dotted line.

The event view shows detailed event information (event list) when the user selects a subset of the future events, as shown in Figure 1b. Figure 2 shows an enlarged image of Figure 1b. A flower glyph visualizes 5 of the 7 measures of a future event, with the number of tweets in the center (Figure 2A). The 5 measures are the link and hashtag ratios, NPMI, user diversity and tweet similarity. Two timeline bar charts visualize the distribution of tweet posting times (Figure 2B) and the number of tweets in the last 30 days (Figure 2C). The bottom bar chart shows average emotion scores of the tweets (Figure 2D). Keywords that summarize the tweets of the event are displayed on the right side of the view (Figure 2E). The event can be bookmarked as a favorite by clicking the star icon (Figure 2F). The bookmarked events are stored in a database so that the user can review them anytime.

The map view shows identified events on the map to indicate where they will occur (Figure 1B). Events are aggregated based on the zoom level and are shown as rings. The colors of the ring proportions represent event dates, ranging from tomorrow (dark blue) to more than a month away (light blue). Clicking a ring will show its tweets as circles. Clicking a circle will show a tooltip with the tweet.

There are two facilitating views to help users explore and further analyze the future events: the word cloud view and the social network view (Figure 1 C and D). The word cloud view shows keywords extracted from the tweets of the identified events. The sizes of the keywords represent their frequencies. The word cloud view is updated when the selected events change. The social network view represents relationships between future events and Twitter users. Clusters in the view represent future events in the same location. In many cases, a cluster has several future events in the same location.

User Interactions: The highly interactive and exploratory nature of the CrystalBall interface enables users to start their exploration and analysis of future events from any of the four main views present in the interface.

A user can start with the calendar view in order to learn when an event will occur. Hovering the mouse over a circle in the event overview will highlight the corresponding events in the map, word cloud and social network views. The user can find events that share the same location or keywords by examining links. The user can select a particular date, and the event list will then be shown in the calendar view, listing all events of that date with detailed information. The other views will be automatically updated to show the corresponding events.

Alternatively, the user can start the analysis from the map view to first make sense of where an event will occur. The map view shows detailed events when zooming into a region of interest. When the zoom factor is lower than a zoom threshold, the calendar, word cloud and social network views are updated to show the events in the current map extent. The user can open the event list to show all the events in the map extent by clicking the "show events" button on the bottom right of the map view.

The interactions implemented in CrystalBall allow users to perform exploratory analysis to support decision-making tasks. Consequently, the decision-making process is reflected in the actions participants take within CrystalBall. In our experiment, in order to analyze the effect of anchoring bias on a decision-making task conducted in CrystalBall, we defined and logged 39 unique user interactions. Figure 3 lists 36 user interactions, situated in their corresponding views. The remaining 3 interactions were used only rarely by our participants during their interactions with CrystalBall. The interface records all user interaction logs with a timestamp and the user's name to a database; these logs are analyzed in Section 5 in order to reveal users' decision-making processes.

Figure 3: User interaction logs. The figure displays the main user interactions of the CrystalBall interface, grouped by view (Interface, Calendar Overview, Map View, Event List View, Favorite View, Tweet View, Word Cloud View and Social Network View). Each view has different interaction logs based on its visual elements.

3.3 Design Rationale

The anchoring effect has been replicated in numerous studies in the laboratory and in the field [17]. Our experiment design is thoroughly grounded in these best practices of controlled experimental studies in that we use priming to elicit the anchoring bias. First, we focused the participants' experience around a well-defined, cognitively engaging decision-making task: we asked participants to estimate the number of protest events in a given period of time and in a given location. We conducted our experiment with the CrystalBall interface to predict and detect protest events from Twitter data. The Calendar View and the Map View, as described in Section 3.2, serve as the time and geo (visual) anchors. In order to test our hypotheses, we followed a 2×2 between-subjects factorial design with two factors (numerical and visual), each with two levels, as described below.

3.4 Experimental Stimuli

The visual and numerical anchors for the experiments were devised in order to prime the participants in two ways. The numerical anchor primed participants on a number (High or Low) and the visual anchor primed participants on a specific view in the CrystalBall interface (the map view, representing the geo anchor, or the calendar view, which represented the time anchor). The decision-making task presented to each participant is one of the four choices presented below:

Geo + high/low anchor: Do you think that the number of protest events in the state of California <geo anchor first> was higher or lower than 152 (or 8) <high (or low) numerical anchor> between November 10, 2016 and November 24, 2016 <time anchor>?

Time + high/low anchor: Do you think that the number of protest events between November 10, 2016 and November 24, 2016 <time anchor first> was higher or lower than 152 (or 8) <high (or low) numerical anchor> in the state of California <geo anchor>?

As can be noted, the magnitude of the numerical anchor, either high or low, is subject to the experimental condition. These high and low numerical anchors were chosen based on the actual number of protest events present in the data (as determined by trained annotators). Additionally, the order of presentation of the visual anchors varies in the two questions. The visual anchors were further reinforced through custom training videos orienting the participants to the use of the CrystalBall interface.¹ The two training videos reinforce the visual anchors by starting and driving the analysis from either the map view (geo) or the calendar view (time).

¹ Please refer to supplemental materials for the two training videos.
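The crossed design can be made concrete with a short sketch (hypothetical R code, not the authors' recruitment tooling; the seed and assignment mechanics are illustrative only):

# The four experimental conditions from crossing the two factors.
conditions <- expand.grid(
  numerical = c("High", "Low"),  # anchor of 152 vs. 8 protest events
  visual    = c("Geo", "Time")   # map-view vs. calendar-view priming video
)
conditions
#   numerical visual
# 1      High    Geo
# 2       Low    Geo
# 3      High   Time
# 4       Low   Time

# Each participant code is randomly mapped to one of the four cells.
set.seed(42)
assigned <- conditions[sample(nrow(conditions), 85, replace = TRUE), ]
table(assigned$numerical, assigned$visual)

In the actual study the assignment was tied to each participant's unique identification code, yielding the roughly balanced cell counts reported in Table 1.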
Table 1: Distribution of participants in the 4 conditions. Rows: numerical anchor; columns: visual anchor.

        Geo   Time   Grand Total
High     20     21            41
Low      22     18            40
Total    42     39            81

Figure 4: Demographics. A summary of the demographic information of the participants based on gender (43 male, 38 female), age (18-22: 22; 23-27: 36; >=28: 23), education (undergraduate: 25; Masters: 45; Ph.D.: 7; other: 4) and major (non-computing: 35; computing: 46).

3.5 Experiment Design

3.5.1 Procedures

The data collection for this study involved in-person laboratory participation. Participants were recruited via in-class recruitment, email to listservs and the psychology research pool at our university. Sessions were conducted between February 10th, 2017 and March 15th, 2017. After signing up for the study, participants were assigned a unique code for secure identification. Associated with this code was the random assignment to one of four experiment conditions (High/Geo, Low/Geo, High/Time, Low/Time). Participants were asked to come to the lab for the duration of one hour. The experimenter would first elicit their responses to informed consent. Next, the participants would view two training videos specifically designed for this experiment. The first video was a general training video (duration 5 minutes) which oriented them to the use of the CrystalBall interface and its basic functionality (e.g., primary and supporting interactions). This video was shown to all the participants, regardless of experimental condition. Next, the participants were shown a priming video (duration 3 minutes) based on their visual anchoring group. The priming video was designed to guide the users through a case scenario using either the Geo or the Time visual anchor.

Following the training, the participants were asked to complete a pre-test questionnaire. The pre-test questionnaire consisted of questions related to participants' demographics (age, gender, education), their familiarity with visual analytics systems and social media, and Big-5 personality questions [27]. The informed consent, training videos and pre-test questionnaire typically took around 20 minutes to complete. The participants were then assigned the task and asked to interact with CrystalBall for about 25 minutes. We designed and implemented interaction logging within the CrystalBall interface to capture their timestamped actions as they proceeded through the task. The interaction logging is transparent to the participants. At the end of their interaction, participants were asked to estimate the number of protest events based on their analyses within CrystalBall. Next, participants were asked to complete a post-test questionnaire. The completion of the post-test questionnaire ended their participation in the study. The post-test contained questions regarding the usability of the system (ease, attention, stimulation, likability), the level of engagement during the task, and questions to gauge their susceptibility to bias. The bias questions consisted of eight questions designed to measure the level of bias. Participants were compensated with either a $5 gift card or class credit assigned at the discretion of the class instructor willing to assign extra credit.

3.5.2 Participants

A total of 85 participants completed the study. We discarded the data for four participants due to usage of incorrect identification codes during the experiment. The distribution of participants across experiment conditions was relatively even and is represented in Table 1. Figure 4 shows a summary of the demographic characteristics of participants across factors including age, gender, education and major. We note that there is an even balance of participants across various demographic characteristics such as gender (male vs. female), age (different age ranges) and education background (computing vs. other majors), although there is some skewness in the data towards students pursuing Masters degrees. Participant demographic characteristics were also balanced across the four experiment conditions due to the random assignment of participants to experiment conditions. Male and female participants were uniformly distributed across experiment conditions (25% in each condition, SD = 10% for males, SD = 6% for females). The average ratio of males to females was 1.02 in each experiment condition. The average proportion of undergraduate, Masters and PhD education levels in each experiment condition was also 25% (SD = 13%, 10% and 24%, respectively).

4 EXPERIMENT RESULTS: ANALYZING QUANTITATIVE MEASURES

Two types of quantitative analyses were conducted to answer research questions RQ1 and RQ2 introduced in Section 3.1.

4.1 RQ1 - Visual Anchor: Can individuals be anchored on a specific view?

To quantitatively evaluate whether participants can be anchored on a view in CrystalBall, we conducted two types of statistical analysis: two-way ANOVAs and Bonferroni-corrected pairwise t-tests. We extracted the overall time duration each participant spent in geo- or time-oriented views from the interaction logs by taking into account the time stamp of each action that occurred in a particular view.

A two-way ANOVA was conducted on the influence of the two independent variables (numerical and visual anchor) on the amount of time spent in the different views (map vs. calendar). The main effect for the visual anchor was statistically significant, with an F ratio of F(1, 78) = 11.57, p < .001. The main effect for the numerical anchor indicated that numerical anchoring did not significantly affect the time spent in the map vs. calendar view (p > 0.05). The interaction effect was not significant, F(1, 77) = 0.12, p > 0.05.

Bonferroni-corrected pairwise t-tests (α = 0.05/4) were conducted to compare the duration of time spent in the different conditions at the α = 0.0125 level of significance. We found that the visual anchor had a significant effect on the time spent in the map view vs. the calendar view across both conditions (p < 0.01 in both cases), whereas the numerical anchor did not (p > 0.05 in both cases).

In Figure 5, we show the duration of time spent in each view by each participant across all four experimental conditions. The x-axis represents the time in minutes, with the blue bars representing duration in the calendar view and the black bars representing duration in the map view; the y-axis shows the participant IDs in each condition. We see from the charts labeled High/Geo and Low/Geo that participants spent significantly more time in the map view vs. the calendar view in the geo anchoring conditions. The charts labeled High/Time and Low/Time reveal that in the time priming conditions, the time spent in each view was variable and no statistical trends can be observed. We have included four separate charts in Figure 5 to provide sufficient comparative detail across the experiment conditions.

4.2 RQ2 - Numerical Anchor: Are the effects of numerical priming transferable to VA?

The effect of the numerical anchor on time spent within CrystalBall. As reported in Section 4.1, the two-way ANOVAs conducted to determine the effects of numerical anchoring indicated that the main effect for the numerical anchor did not significantly affect the time spent in the map vs. calendar view (p > 0.05). The interaction effect was also not significant, F(1, 77) = 0.12, p > 0.05.

These findings indicate that being primed by a numerical anchor did not have an effect on the amount of time spent in the map view compared to the calendar view. We discuss the implications of these findings further in the discussion section, and suggest that more investigation is needed to determine the cause of these effects.

The effect of the numerical anchor on the decision-making outcome. To further assess the impact of numerical anchoring, we analyzed the responses given by participants in the pre- and post-tests. The participants were asked to estimate the number of protest events before and after their interactions with the data and the interface. The findings are shown in Figure 6. On the x-axis we show the two groups in the numerical anchoring condition, High and Low. On the y-axis are each participant's estimates regarding the number of protest events, before the interaction (in orange) and after the interaction (in red). Mean post-test responses are in black. Our findings indicate that participants were consistently anchored on the initial number presented to them in the framing of the questions (p < 0.05). Our findings suggest that the effects of the classic anchoring bias elicited by priming with numerical anchors in previous laboratory studies can be replicated in VA. We did not find any effects of the visual anchoring (geo vs. time) on the final outcome (t(40) = 2.02, p > 0.05), suggesting that the effects of the visual anchor may be more subtle than can be determined via post-test questionnaire responses. We conducted detailed analyses to capture these effects, which we shall describe in the next section.
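The tests in this section map onto standard R calls. The following is a sketch assuming a data frame views with one row per participant and columns of our own naming (map_min, calendar_min, visual, numerical), not the authors' released analysis script:

# Per-participant allocation of time between the two anchored views.
views$diff <- views$map_min - views$calendar_min

# Two-way ANOVA: main effects of both anchors plus their interaction.
summary(aov(diff ~ visual * numerical, data = views))

# Bonferroni-corrected pairwise t-tests across the four conditions
# (alpha = 0.05/4 = 0.0125).
views$condition <- interaction(views$numerical, views$visual)
pairwise.t.test(views$diff, views$condition, p.adjust.method = "bonferroni")

A parallel call on the post-test event estimates (in place of diff) corresponds to the numerical-anchor outcome test reported above.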
Figure 5: A summary of the amount of time spent in the calendar view (blue) and map view (black), in minutes, by each participant in each of the four conditions (High/Geo, Low/Geo, High/Time, Low/Time). The red dashed line is the mean of the amount of time spent in the calendar and map views (calendar/map means of 3.87/8.94, 1.96/7.23, 6.27/4.88 and 4.32/4.46 minutes, respectively).
Figure 6: The estimated number of geo-political events reported by each participant (red dots) in the two numerical anchoring conditions (Low and High). The orange line represents the anchor value (8 and 152, respectively) and the black line is the mean of the estimated number of events (21.1 and 52.7, respectively).

5 EXPERIMENT RESULTS: ANALYZING INTERACTION LOGS FOR USER ACTIVITY PATTERNS

To test the hypothesis that user interaction logs reflect the participants' decision-making processes, we applied two additional types of analyses to the logs in order to evaluate the impact of the anchoring effect on the patterns of user interactions. The two analyses address research questions RQ3 and RQ4 (Section 3.1).

5.1 RQ3: How does the anchoring effect influence the paths of interactions?

To analyze the paths users take during their analysis with CrystalBall and the effect of anchors on these paths, we developed a novel method to study the sequences of user interactions as a network of interaction nodes. We constructed the interaction network as follows. Each interaction is logged with five attributes, including the time stamp of the interaction, the view it took place in, the type of interaction, and a detailed description of the interaction (e.g., 12:56:35.56, Calendar view, Click, Zoom 89.55 36.00). As shown in Figure 3, there are 36 main interactions as well as 3 secondary interactions that span multiple views. Each of these interactions forms a node in the network. The edges in the network are chronological pairs of interactions. For example, if a user has zoomed on the map and then hovered on a particular location in the calendar view, this would add an edge between the Map Zoom and Calendar Location Hover nodes. The edge weight is incremented for each additional observed pair. For visualization purposes, we disregarded self-loops (i.e., repeated actions) because we are more interested in the relationships between different interactions and the paths of interactions taken by our users. This method yields a weighted directed graph which enables us to cluster interactions through community detection, rank each action by multiple centrality measures, and compare aggregate user path differences controlling for each anchor. The network visualizations in this section were created with Gephi [2].

In this section, we first take the actions of all users into account to get a complete picture of users' paths of interactions. We then analyze differences in users' paths to detect different user strategies, controlling for the two anchors (visual and numerical). We studied the network of all user logs, as well as networks for our four experiment anchors. We did not find significant differences in the networks created from the logs of users primed on the two numerical anchors. Hence, in this discussion we will focus on the three remaining networks: a full network (AllNetwork), a Geo-anchored network (GeoNetwork) and a Time-anchored network (TimeNetwork).

5.1.1 Analyzing the network of all interactions

Adopting an exploratory data analysis approach, we started by analyzing different features of AllNetwork (39 nodes and 640 edges) (Figure 7). We first utilized the community detection algorithm developed by Blondel et al. [6], which resulted in 5 different communities of interactions. Most of these communities are comprised of interactions that occur within the same view or have a close semantic relationship to each other. The community detection results allowed us to categorize the nodes in our network into three main groups that were in line with our initial system design strategies: preliminary interactions, primary interactions, and supporting interactions. The primary interactions include those that users have to go through in order to find the events of interest. The supporting interactions are those that users perform in order to find supporting information to confirm the previously found events. The preliminary interactions, such as logging in and clicking on the menu bar, were used infrequently, as they are not critical to the analysis process. Figure 7 shows the network colored and annotated based on the community detection results.

In order to measure the importance of different interactions, we utilized the PageRank algorithm [37]. Since PageRank takes into account the weights of edges between interactions, it is much more powerful than simply calculating the frequency of each interaction. PageRank assigns a probability to each node denoting the importance of the node. These probabilities are appropriate metrics for the importance of the interactions in our system, as they express the likelihood that a random surfer in the network traverses to a specific end node. The top-ranked interactions in our interface are all from the primary action communities, with the exception of Word Hover. As seen in Figure 7, edge weights between important nodes in the same community are higher than those between different communities. Furthermore, we can observe mutual, highly weighted paths between these higher-ranking interactions. Some exceptions to this observation occur when users are moving from one view to another to conduct more in-depth analysis. For example, in AllNetwork, the edge from Events Location Hover to Events Location Click is weighted very strongly, but the path in the opposite direction is not. We can interpret Events Location Click as an interaction that drives users out of this community to others, such as Word Cloud and Social Network, to get complementary information about an event (see Figure 7). The AllNetwork and the analyses resulting from it serve as a reference for the comparisons we wish to make across the GeoNetwork and TimeNetwork.
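A minimal version of this construction in R with the igraph package (a sketch: the data frame logs with columns user and action, and all object names, are our assumptions; the paper's own visualizations were produced with Gephi):

library(igraph)

# Chronological action pairs per user become directed edges.
edges <- do.call(rbind, lapply(split(logs$action, logs$user), function(a) {
  data.frame(from = head(a, -1), to = tail(a, -1))
}))
edges <- subset(edges, from != to)  # disregard self-loops (repeated actions)

# Collapse duplicate pairs into weighted edges and build the graph.
wedges <- aggregate(list(weight = rep(1, nrow(edges))), edges, sum)
g <- graph_from_data_frame(wedges, directed = TRUE)

# Louvain (Blondel et al.) community detection on the undirected projection.
comm <- cluster_louvain(as.undirected(g, mode = "collapse",
                                      edge.attr.comb = list(weight = "sum")))
sizes(comm)

# Weighted PageRank to rank interactions by importance.
head(sort(page_rank(g)$vector, decreasing = TRUE), 5)

Because page_rank uses the edge weights by default, frequently traversed paths (e.g., Events Location Hover followed by Events Location Click) pull probability mass toward the primary-action nodes, matching the ranking behavior described above.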
Figure 7: The directed network of all interactions. Nodes are interactions and edges connect interactions that occur after each other. The size of a node is proportional to its PageRank value and the width of an edge is proportional to its weight. Note: if a line is drawn between a start node and an end node, the outgoing edge from the start node is on the relative left side of that line.

5.1.2 Comparing interaction networks of Geo- vs. Time-anchored users

To answer our research question of whether the visual anchor has an effect on the way different groups of participants interact with the visual interface, we constructed two networks based on the actions of the geo- and time-anchored groups. These networks consist of the same 39 action nodes but have different edges and weights, allowing us to compare the interactions of participants primed on the two visual anchors through the lens of their respective networks.

Similar to our analysis of AllNetwork, we started by detecting communities within GeoNetwork and TimeNetwork. Interestingly, the results show community structures similar to AllNetwork. However, there are subtle differences that point to differences in the usage of CrystalBall between the two groups. For example, the action Favorite Icon Click (through which users can save an event to view later in the Favorites Menu) is part of the preliminary actions community in GeoNetwork, but part of the time-related primary actions community in TimeNetwork. This subtle change could indicate that the time-primed users moved more often between saving an event as a favorite and then viewing the list of favorites, in comparison to our geo-primed users.

We calculated PageRank for the interaction nodes in both of these networks. Comparing these values allows us to understand the important interactions within each network and how they are affected by the visual anchor. Figure 8 illustrates significant differences between the two networks. In the GeoNetwork, the top nodes are a mixture of interactions from the Map and Event views, with the highest-ranked interaction from the Map view. This pattern is consistent with the strategies shown in the Geo priming video. In contrast, in the TimeNetwork, the top-ranked nodes are interactions within the Events view and the Calendar view, which is also consistent with the strategy shown in the time-anchor video. Other important but lower-ranking actions in TimeNetwork are from the Map view. Furthermore, by observing the paths within the Events community (colored purple in Figure 8) in both GeoNetwork and TimeNetwork, we see that the weight of the edge between Events Location Hover and Events Location Click is relatively higher in the GeoNetwork than in the TimeNetwork. This could indicate that our geo-primed users use maps to explore and primarily use hovering and clicking on a location together to view more details in the map, word cloud, and social network views. Our time-primed users, on the other hand, utilize hovering on locations to explore events. These differences reveal interesting behavioral variations in the sequences of interactions between our two groups, and show that time-primed users are more likely to use the Calendar and Events view actions as their primary exploratory tool. Figure 8 shows the comparison between these two networks and two bar charts comparing the top 5 ranked nodes in each network.

Analyzing the interactions of our participants as a network has many benefits. It allows us to take into account the sequence of interactions, as well as the paths taken by users to arrive at their conclusions. The paths taken reflect the strategies users employ during the decision-making process. Furthermore, we can take an overview of all interactions within the CrystalBall interface and analyze which strategies need to be improved to make the interface more effective.
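The per-group comparison amounts to building one graph per visual-anchor group and contrasting the rankings. This sketch continues the hypothetical igraph code above (the visual column on logs is our assumption):

# Build an interaction network from one group's logs.
build_net <- function(df) {
  e <- do.call(rbind, lapply(split(df$action, df$user), function(a)
    data.frame(from = head(a, -1), to = tail(a, -1))))
  e <- subset(e, from != to)
  graph_from_data_frame(aggregate(list(weight = rep(1, nrow(e))), e, sum))
}

geo_net  <- build_net(subset(logs, visual == "Geo"))
time_net <- build_net(subset(logs, visual == "Time"))

# Top 5 interactions per network by weighted PageRank (cf. Figure 8).
top5 <- function(g) head(sort(page_rank(g)$vector, decreasing = TRUE), 5)
top5(geo_net)   # map-view actions dominate for geo-primed users
top5(time_net)  # calendar/event-list actions dominate for time-primed users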
Figure 8: Side-by-side visualization of GeoNetwork and TimeNetwork. The size of a node is proportional to its PageRank value within each graph, the color of a node corresponds to its detected community, and the width of each edge corresponds to its weight. The bar charts show the top 5 nodes by PageRank value, color-coded by each node's community.

5.2 RQ4: Estimating the effects of anchoring bias on interaction patterns and information-seeking activities

One drawback of the network analysis is that the estimated impact of each anchor is measured without standard errors with which to calculate the statistical significance of each result. To address this problem, we use structural topic modeling (STM) to measure the impact of the anchors and user-level attributes on users' actions [40]. Originally built for text summarization, STM is a generalized topic model framework for testing the impact of document-level variables. For our model, topics are clusters of interactions measured as probability distributions over the action space. We test our hypotheses about the effect of anchoring on the topic proportions through an embedded regression component. STM is a consolidation of three predecessor models: the correlated topic model (CTM) [4], the sparse additive generative model (SAGE) [15] and the Dirichlet multinomial regression model (DMR) [35]. The CTM model introduces a correlation structure for topic distributions, while the DMR and SAGE models provide mechanisms to estimate the effect of independent variables on either topic proportions (via the DMR model) or word distributions for each topic (via the SAGE model).²

² We used the stm R package [41] for our analysis. This package includes additional tools for topic modeling, including a spectral initialization process that aids in addressing the multi-modality problem (stability of the results).

However, as with most topic models, STM is built on the bag-of-words (BoW) assumption, which provides a key advantage and disadvantage in our analysis. The advantage is that it yields statistical properties (exchangeability) that identify topics as clusters of co-occurring interactions and facilitate statistical testing through the DMR (GLM regression) component. On the other hand, a disadvantage of the BoW assumption is that it ignores the order of interactions. To address this issue, we made two modifications: extracting bi-/tri-grams and creating a session time variable based on interaction deciles. First, we extracted every bi- and tri-gram as chronological action pairs and triplets from the interaction logs. Including the bi- and tri-grams and the single actions, we had 237 unique features after removing sparse features. Second, we created a time variable that divided each user's session into ten evenly distributed groups (interaction deciles). Given that each user's session averaged nearly 800 individual actions, each decile maintained sufficient interactions to facilitate topic inference. Additionally, the inclusion of the time variable had the advantage of increasing our sample size (number of documents) from 81 to 810, as the document level went from each user to a user's interaction deciles (e.g., the first 10% of user X's interactions).

Table 2: Independent Variables Tested

Type         Independent Variable   Levels
Condition    Visual Anchor          Time / Geo
Condition    Numerical Anchor       High / Low
Condition    Time                   Action deciles (b-spline)
Attribute    Gender                 Male / Female
Attribute    Major                  Computing / Non-Computing
Attribute    Age                    Under 23 / Over 23
Attribute    Education              Undergraduate / Graduate
Personality  Extroversion           High / Low
Personality  Agreeableness          High / Low
Personality  Conscientiousness      High / Low
Personality  Openness               High / Low
Personality  Neuroticism            High / Low

Table 3: The seven actions with the highest probabilities for three sample topics: Map View, Calendar View and Event List (all tools). Action combinations (bi- or tri-grams) are denoted by the plus sign.

Figure 9: The figure on the left provides the expected topic (action-cluster) proportions with judgmental labels to aid in interpretation. The figures on the right provide the estimated effect of the visual and numerical anchors on each of the eight topics' proportions. The dot is the point estimate and the line represents a 95 percent confidence interval. The red dots/lines are topics that are significant with 95% confidence.

To test the effect of anchoring bias on users' interactions, our baseline model for explaining topic proportions (the dependent variable) incorporates three independent variables: the visual anchor, the numerical anchor, and time as interaction deciles. After analyzing the model, we tested other demographic attributes, including gender, major, age, education level, and the Big-5 personality traits. Table 2 provides a list of the independent variables tested and their categorical levels. We binned the user attributes into binary levels. Similarly, we converted the Big-5 personality results into binary levels in which users who scored above the mean were categorized as High while users who scored below the mean were categorized as Low.
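Fitting such a model with the stm R package [41] looks roughly as follows. This is a sketch with assumed object names (a data frame docs of whitespace-joined action tokens per user-decile, and a covariate data frame meta with columns visual, numerical and decile); s() is stm's b-spline helper for smooth covariates:

library(stm)

# Tokenize the pseudo-documents without any text normalization, since
# "words" here are action names and n-grams, not natural language.
proc <- textProcessor(docs$tokens, metadata = meta,
                      lowercase = FALSE, removestopwords = FALSE,
                      removenumbers = FALSE, removepunctuation = FALSE,
                      stem = FALSE, wordLengths = c(1, Inf))
prep <- prepDocuments(proc$documents, proc$vocab, proc$meta)

# Eight action-cluster topics; both anchors and session time (deciles,
# smoothed with a b-spline) as prevalence covariates.
fit <- stm(prep$documents, prep$vocab, K = 8,
           prevalence = ~ visual + numerical + s(decile),
           data = prep$meta, init.type = "Spectral")
labelTopics(fit, n = 7)  # top actions per topic (cf. Table 3)

# Regression of topic proportions on the covariates, with standard errors.
eff <- estimateEffect(1:8 ~ visual + numerical + s(decile),
                      fit, metadata = prep$meta)

# Geo-vs-Time difference in expected proportions per topic (cf. Figure 9),
# and expected proportions across the session for selected topics (Figure 10).
plot(eff, covariate = "visual", method = "difference",
     cov.value1 = "Geo", cov.value2 = "Time")
plot(eff, covariate = "decile", method = "continuous", topics = c(6, 8))

The estimateEffect step is what supplies the confidence intervals that the network analysis lacks; the spectral initialization helps stabilize the topics across runs.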

5.2.1 The effect of the visual anchor on interaction patterns estimated by topic proportions

We find that the visual anchor has a significant effect on the proportions of users' interactions, with topics clustering automatically into view-based groups (e.g., map, calendar, events). Table 3 provides the top seven interactions for three sample topics. We observe that the interactions tend to cluster into groups related to each interaction's associated view hierarchy as shown in Figure 3. For example, topic 8 includes interactions related to the map view, including Map Zoom, Map Pan, Map Circle Click and Map Click Cluster. Therefore, we gave topic 8 the manual label Map View, since its interactions are all related to that view. Following this approach, we created manual labels for the other seven topics. Further, we find that the topics tend to cluster in groups consistent with the network communities found in Section 5.1. For instance, the four most prominent interactions of the Map View topic (by probability) have the strongest connections as well as the highest PageRank in the Map View community cluster (green nodes) in Figure 7.

Second, we find in Figure 9 that the Map View and Flower Glyph topics had the largest topic proportions. Alternatively, the social network and word cloud topics were the smallest. To test our number of topics, we followed the procedure recommended by [42], considering multiple topic scenarios (5, 8, 10, 15, 20, 25, 30, 40) and comparing each model's held-out likelihood and average semantic coherence. We decided on an eight-topic model given its high average semantic coherence and parsimony of topics (see supplemental materials).

We observe that the visual anchor had a significant effect on the Map View and Calendar View topics. Figure 9 provides the effect the anchors had on the topic proportions. In this plot, each dot is the estimated topic proportion difference for each topic between the two levels of each anchor. The line represents a 95% confidence interval around each estimate. From these figures, we find that the Map View and the Calendar View topic proportions show the most significant differences between the two groups. Consistent with our findings in Sections 4.1 and 5.1, geo-primed users are anchored more to the view they were primed on, while we see less of an effect in time-primed users. On the other hand, we found that the visual anchor had an unexpected effect on the Event List: All Tools topic (topic 3): geo-primed users tended to use tools like the Keyword More and Emotion Bar more than time-primed users. Alternatively, we find that the numerical anchor had only a marginal effect on two topics (Calendar View (topic 6) and Event List: All Tools (topic 3)). These results imply that the visual anchor had a more significant impact on the proportions of users' interactions than the numerical anchor. This is important, as we observed the opposite effect (the numerical anchor was significant while the visual anchor was not) in users' estimations of the event outcome.

5.2.2 The effect of the visual anchor on interactions used over time estimated by topic proportions

We find evidence of a temporal effect on the topic proportions. To measure this effect, we divided each user's interaction path into interaction deciles (see Section 5.1). To aid estimation, we used a b-spline to smooth the values. Figure 10 shows the effect of the visual anchor (line color) and time (x-axis) for the Map View and Calendar View topic proportions. We observe a significant impact of the time within the user's session on these topic proportions. For example, the Map View topic proportion is nearly twice as large during the user's first twenty percent of interactions as in the remaining 80 percent of interactions. Moreover, we see this distinct drop for both visual anchor groups. This observation implies that users tended to use the Map View more at the beginning of the session as they were getting acclimated to the interface. Alternatively, the Calendar View topic trended downward, resulting in much lower use by session end (15% for Time, single digits for Geo). We found marginal effects of time for the other six topics, with most nearly flat given their already low topic proportions (less than 10%).

5.2.3 The effect of demographic variables on interaction patterns estimated by topic proportions

To test other possible variables, we ran five additional model scenarios replacing the numerical anchor variable (as it showed only marginal significance) with the demographic variables (gender, major, student level and Big-5 personality). We found that none of the variables produced significant (95%) changes in topic proportions, although some produced marginally significant effects (see supplemental materials). For example, most of the variation occurs in the secondary view topics (Word Cloud, Social Network and an interaction topic).
5.2.2 The effect of visual anchor on interactions used over time estimated by topic proportions

We find evidence of a temporal effect on the topic proportions. To measure this effect, we divided each user's interaction path into interaction deciles (see Section 5.1) and, to aid estimation, used a b-spline to smooth the values. Figure 10 provides the effect of the visual anchor (line color) and time (x-axis) on the Map View and Calendar View topic proportions. We observe a significant impact of the time within a user's session on these topic proportions. For example, the Map View topic proportion during a user's first twenty percent of interactions is nearly twice that of the remaining eighty percent, and this distinct drop appears in both visual anchor groups. This observation implies that users tended to use the Map View more at the beginning of the session, while getting acclimated to the interface. The Calendar View topic likewise trended down, reaching much lower use by session end (about 15% for the Time anchor group, single digits for the Geo anchor group). We found only marginal effects of time for the other six topics, most of which remained nearly flat given their already low topic proportions (less than 10%).
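The smoothed temporal trends of Figure 10 can be estimated by adding the interaction decile to the prevalence formula via stm's b-spline helper s(). Again, this is a sketch under assumed variable names (a decile column in meta), not our verbatim code.

```r
# Let topic prevalence vary smoothly over the session: s() expands
# the decile variable (1-10) into a b-spline basis.
stm_time <- stm(documents = interaction_docs,
                vocab     = interaction_vocab,
                K = 8, prevalence = ~ visual_anchor + s(decile),
                data = meta)

time_fx <- estimateEffect(1:8 ~ visual_anchor + s(decile),
                          stmobj = stm_time, metadata = meta)

# Smoothed topic proportion across the ten deciles; an anchor-by-time
# interaction term would yield one curve per visual anchor group.
plot(time_fx, covariate = "decile", method = "continuous")
```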
Figure 10: Two charts showing the effect of the visual anchors (line color) and time, measured in interaction deciles (x-axis), for two topics (Map View and Calendar View). Each line is the estimated topic proportion across the session, controlling for the visual anchor; the solid line is the point estimate and the dotted lines mark a 95 percent confidence interval. For the interaction deciles (time), we divided users' sessions into ten evenly distributed groups, and a b-spline was used to smooth the curve across the ten points.
5.2.3 The effect of demographic variables on interaction patterns estimated by topic proportions

To test other possible variables, we ran five additional model scenarios, replacing the numerical anchor variable (as it showed only marginal significance) with the demographic variables (gender, major, student level and Big-5 personality). We found that none of these variables produced significant (95%) changes in topic proportions, although some produced marginally significant effects (see supplemental materials). For example, most of the variation occurs in the secondary view topics (Word Cloud, Social Network, and interaction topics).
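Each of these model scenarios amounts to refitting the topic model with one covariate swapped in the prevalence formula. A minimal sketch, assuming placeholder column names in meta, is shown below.

```r
# Refit the model once per demographic variable, replacing the
# numerical anchor in the prevalence formula (placeholder names).
demo_vars <- c("gender", "major", "student_level", "big5_openness")
demo_fits <- lapply(demo_vars, function(v) {
  f <- as.formula(paste("~ visual_anchor +", v))
  stm(documents = interaction_docs, vocab = interaction_vocab,
      K = 8, prevalence = f, data = meta)
})
```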
6 DISCUSSION AND LIMITATIONS

In this section, we discuss the implications of our experimental results for the anchoring effect in visual analytics, and point out possible limitations related to the study design and analysis.
Experiment implications. As shown by our data analysis (Sections 4 and 5), our experimental results indicate that anchoring bias does transfer to visual analytics. The most interesting finding is that the visual anchor seems to significantly impact the decision-making process, while the numerical anchor has a significant effect on the decision-making outcome. The decision-making process reflects the way that users interact with CrystalBall; the outcome is the final answer that the participants provided at the end of the decision-making process.

Such findings have implications for user training on visual analytics systems with CMV, as well as for how decision-making tasks are framed. With respect to training and tutorials, the development team of a visual analytics system should provide multiple scenarios employing strategies that use different views as the primary visualization driving the analysis. As for decision-making task framing, one should avoid accidentally anchoring participants on an expected outcome or, when possible, employ measures of cognitive bias (such as our post-test) to evaluate the inherent cognitive biases of the users. As noted in Section 2.1, the human tendency to rely on heuristics when making judgments does often lead to efficient and accurate decisions. However, we need to determine when such heuristic decision-making is being applied, in order to ensure that the resulting decisions are optimal.
Experiment sample size limitation. As can be expected with any laboratory experiment, this research has limitations. One such limitation is the sample size of 81 participants. However, ensuring the diversity of our sample with respect to gender, age, educational background and personality factors is a step we took to strengthen the validity of our results. Moreover, our findings replicate anchoring effects that have long been studied in the literature, further attesting to the validity of the experiment.
Experiment control limitation. Another limitation of our experiment is that we did not include a control group, that is, participants who engage in the decision-making task without being primed by any anchors. While the initial study reported here focused on determining whether the effects of anchoring are present at all and can be elicited in such experiments, our future work will aim to replicate these findings in more extensive experiments with larger sample sizes and control groups for comparison.
STM analysis limitations. Fong and Grimmer [18] note that topic models are susceptible to problems in estimating marginal effects due to the zero-sum property of topic proportions. Further, topic models cluster interactions based only on their counts and ignore interaction duration (time spent). To address this limitation, the quantitative analysis in Section 4 explicitly accounted for the duration of each interaction.
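As an illustration only (this is not the method used in Section 4), one simple way to make a count-based topic model sensitive to duration is to repeat each interaction token in proportion to its dwell time before building the documents:

```r
# Illustrative helper: repeat each interaction token once per started
# second of dwell time, so longer interactions carry more weight.
weight_by_duration <- function(events, durations_sec) {
  rep(events, times = pmax(1L, ceiling(durations_sec)))
}

weight_by_duration(c("MapZoom", "CalendarClick"), c(2.7, 0.4))
#> "MapZoom" "MapZoom" "MapZoom" "CalendarClick"
```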
7 CONCLUSION

In this paper, we presented a systematic study and the resulting analyses investigating the effect of anchoring bias on decision-making processes and outcomes when using visual analytic systems. Our experimental results provide evidence that the anchoring effect transfers to visual analytics, in that visual and numerical anchors affect the decision-making process and the outcome, respectively. The present study is a first step in an overarching research agenda: determining the use of heuristics in decision-making from user interactions and, if these decision-making processes can be reliably inferred, automatically suggesting ways to improve them.
REFERENCES

[1] Twitter usage statistics. http://www.internetlivestats.com/twitter-statistics/. Accessed: 2017-03-31.
[2] M. Bastian, S. Heymann, and M. Jacomy. Gephi: An open source software for exploring and manipulating networks, 2009.
[3] M. W. Bennett. Confronting cognitive anchoring effect and blind spot biases in federal sentencing: A modest solution for reforming a fundamental flaw. J. Crim. L. & Criminology, 104:489, 2014.
[4] D. M. Blei and J. D. Lafferty. A correlated topic model of science. The Annals of Applied Statistics, pp. 17–35, 2007.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003.
[6] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008, 2008.
[7] J. S. Blumenthal-Barby and H. Krieger. Cognitive biases and heuristics in medical decision making: a critical review using a systematic search strategy. Medical Decision Making, 35(4):539–557, 2015.
[8] L. Cen, G. Hilary, and K. J. Wei. The role of anchoring bias in the equity market: Evidence from analysts' earnings forecasts and stock returns. Journal of Financial and Quantitative Analysis, 48(01):47–76, 2013.
[9] I. Cho, W. Dou, D. X. Wang, E. Sauda, and W. Ribarsky. VAiRoma: A visual analytics system for making sense of places, times, and events in Roman history. IEEE Transactions on Visualization and Computer Graphics, 22(1):210–219, 2016.
[10] I. Cho, R. Wesslen, S. Volkova, W. Ribarsky, and W. Dou. CrystalBall: A visual analytic system for future event discovery and analysis from social media data. In Visual Analytics Science and Technology (VAST), 2017 IEEE Conference on, 2017.
[11] G. Convertino, J. Chen, B. Yost, Y. S. Ryu, and C. North. Exploring context switching and cognition in dual-view coordinated visualizations. In Proceedings of the International Conference on Coordinated and Multiple Views in Exploratory Visualization (CMV 2003), pp. 55–62, July 2003. doi: 10.1109/CMV.2003.1215003
[12] M. B. Cook and H. S. Smallman. Human factors of the confirmation bias in intelligence analysis: Decision support from graphical evidence landscapes. Human Factors, 50(5):745–754, 2008.
[13] A. S. Das, M. Datar, A. Garg, and S. Rajaram. Google news personalization: scalable online collaborative filtering. In Proceedings of the 16th International Conference on World Wide Web, pp. 271–280. ACM, 2007.
[14] E. Dimara, A. Bezerianos, and P. Dragicevic. The attraction effect in information visualization. IEEE Transactions on Visualization and Computer Graphics, 23(1):471–480, 2017.
[15] J. Eisenstein, A. Ahmed, and E. P. Xing. Sparse additive generative models of text. 2011.
[16] D. M. Eler, F. V. Paulovich, M. C. F. d. Oliveira, and R. Minghim. Coordinated and multiple views for visualizing text collections. In 2008 12th International Conference Information Visualisation, pp. 246–251, July 2008. doi: 10.1109/IV.2008.39
[17] M. Englich. Anchoring effect. Cognitive Illusions: Intriguing Phenomena in Judgement, Thinking and Memory, p. 223, 2016.
[18] C. Fong and J. Grimmer. Discovery of treatments from text corpora. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2016.
[19] A. Furnham and H. C. Boo. A literature review of the anchoring effect. The Journal of Socio-Economics, 40(1):35–42, 2011.
[20] J. F. George, K. Duffy, and M. Ahuja. Countering the anchoring and adjustment bias with decision support systems. Decision Support Systems, 29(2):195–206, 2000.
[21] N. A. Giacobe and S. Xu. Geovisual analytics for cyber security: Adopting the GeoViz Toolkit. In Visual Analytics Science and Technology (VAST), 2011 IEEE Conference on, pp. 315–316. IEEE, 2011.
[22] G. Gigerenzer and H. Brighton. Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1):107–143, 2009.
[23] L. Harrison, D. Skau, S. Franconeri, A. Lu, and R. Chang. Influencing visual judgment through affective priming. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pp. 2949–2958. ACM, New York, NY, USA, 2013. doi: 10.1145/2470654.2481410
[24] T. Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 50–57. ACM, 1999.
[25] T. Iwata, S. Watanabe, T. Yamada, and N. Ueda. Topic tracking model for analyzing consumer purchase behavior. In IJCAI, vol. 9, pp. 1427–1432, 2009.
[26] X. Jin, Y. Zhou, and B. Mobasher. A unified approach to personalization based on probabilistic latent semantic models of web usage and content. In Proceedings of the AAAI 2004 Workshop on Semantic Web Personalization (SWP'04), 2004.
[27] O. P. John, E. M. Donahue, and R. L. Kentle. The Big Five Inventory: versions 4a and 54, 1991.
[28] D. Kahneman. A perspective on judgment and choice: mapping bounded rationality. American Psychologist, 58(9):697, 2003.
[29] D. Kahneman. Thinking, Fast and Slow. Macmillan, 2011.
[30] D. Keefe, M. Ewert, W. Ribarsky, and R. Chang. Interactive coordinated multiple-view visualization of biomechanical motion data. IEEE Transactions on Visualization and Computer Graphics, 15(6):1383–1390, 2009.
[31] D. Keim, G. Andrienko, J.-D. Fekete, C. Görg, J. Kohlhammer, and G. Melançon. Visual Analytics: Definition, Process, and Challenges, pp. 154–175. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008. doi: 10.1007/978-3-540-70956-5_7
[32] D. D. Loschelder, R. Trötschel, R. I. Swaab, M. Friese, and A. D. Galinsky. The information-anchoring model of first offers: When moving first helps versus hurts negotiators. Journal of Applied Psychology, 101(7):995, 2016.
[33] A. M. MacEachren. Visual analytics and uncertainty: It's not about the data. 2015.
[34] H. A. Meier, M. Schlemmer, C. Wagner, A. Kerren, H. Hagen, E. Kuhl, and P. Steinmann. Visualization of particle interactions in granular media. IEEE Transactions on Visualization and Computer Graphics, 14(5):1110–1125, 2008.
[35] D. Mimno and A. McCallum. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. arXiv preprint arXiv:1206.3278, 2012.
[36] A. Nussbaumer, K. Verbert, E.-C. Hillemann, M. A. Bedek, and A. Dietrich. A framework for cognitive bias detection and feedback in a visual analytics environment. In Proceedings of the European Intelligence and Security Informatics Conference, pp. 1–4. IEEE, 2016.
[37] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
[38] M. D. Plumlee and C. Ware. Zooming versus multiple window interfaces: Cognitive costs of visual comparisons. ACM Trans. Comput.-Hum. Interact., 13(2):179–209, June 2006. doi: 10.1145/1165734.1165736
[39] J. C. Roberts. State of the art: Coordinated multiple views in exploratory visualization. In Fifth International Conference on Coordinated and Multiple Views in Exploratory Visualization (CMV 2007), pp. 61–71, July 2007. doi: 10.1109/CMV.2007.20
[40] M. E. Roberts, B. M. Stewart, and E. M. Airoldi. A model of text for experimentation in the social sciences. Journal of the American Statistical Association, 111(515):988–1003, 2016.
[41] M. E. Roberts, B. M. Stewart, and D. Tingley. stm: R package for structural topic models. R package version 1.2.1, 2017.
[42] M. E. Roberts, B. M. Stewart, D. Tingley, C. Lucas, J. Leder-Luis, S. K. Gadarian, B. Albertson, and D. G. Rand. Structural topic models for open-ended survey responses. American Journal of Political Science, 58(4):1064–1082, 2014.
[43] D. Sacha, H. Senaratne, B. C. Kwon, G. Ellis, and D. A. Keim. The role of uncertainty, awareness, and trust in visual analytics. IEEE Transactions on Visualization and Computer Graphics, 22(1):240–249, 2016.
[44] J. M. Tien. Big data: Unleashing information. Journal of Systems Science and Systems Engineering, 22(2):127–151, 2013.
[45] A. Tversky and D. Kahneman. Judgment under uncertainty: Heuristics and biases. In Utility, Probability, and Human Decision Making, pp. 141–162. Springer, 1975.
[46] P.-L. Yu. Multiple-Criteria Decision Making: Concepts, Techniques, and Extensions, vol. 30. Springer Science & Business Media, 2013.