Abstract
Neuroscience, by its nature, seems to hold considerable promise for understanding the fundamen-
tal mechanisms of decision making. In recent years, several studies in the domain of “neuroeco-
nomics” or “decision neuroscience” have provided important insights into brain function. Yet, the
apparent success and value of each of these domains are frequently called into question by
researchers in economics and behavioral decision making. Critics often charge that knowledge
about the brain is unnecessary for understanding decision preferences. In this chapter, I contend
that knowledge about underlying brain mechanisms helps in the development of biologically plau-
sible models of behavior, which can then help elucidate the mechanisms underlying individual
choice biases and strategic preferences. Using a novel risky choice paradigm, I will demonstrate
that people vary in whether they adopt compensatory or noncompensatory rules in economic
decision making. Importantly, neuroimaging studies using functional magnetic resonance imag-
ing reveal that distinct neural mechanisms support variability in choices and variability in strategic
preferences. Converging evidence from a study involving decisions between hypothetical stocks
illustrates how knowledge about the underlying mechanisms can help inform neuroanatomical
models of cognitive control. Last, I will demonstrate how knowledge about these underlying neu-
ral mechanisms can provide novel insights into the effects of decision states like sleep deprivation
on decision preferences. Together, these findings suggest that neuroscience can play a critical role
in creating robust and flexible models of real-world decision behavior.
Keywords
decision making, decision strategies, risky choice, cognitive control, dorsomedial prefrontal
cortex, sleep deprivation
1 INTRODUCTION
Research in judgment and decision making over the past several decades can be broadly classified into four distinct phases. The first phase, in the early 1950s, was primarily concerned with the mathematical modeling of human judgment and decision preferences (Hammond, 1955). The second phase involved the adoption of a cognitive science and information-processing approach to decision making. Though the origins of this phase can be traced back to Simon (1955), it became popular and began to accelerate the field of decision research in the mid-1970s (Payne, 1976a,b). The third phase, beginning in the late 1990s, brought about an “emotions revolution” that sought to integrate how incidental and task-based emotions affect the content and process of decision making (Lerner and Keltner, 2000; Loewenstein et al., 2001). The fourth and most recent
phase involves the integration of neuroscience techniques to understand preferences
and individual variability in decision making, a field of research titled “decision neu-
roscience” or “neuroeconomics” (Glimcher and Rustichini, 2004). The enthusiasm
for decision neuroscience stems from the exponential increase in knowledge about
brain systems and neural mechanisms since the late 1990s as well as the increased
availability of neuroscientific methods like functional magnetic resonance imaging
(fMRI), electroencephalography, and more recently, transcranial magnetic stimula-
tion to investigate decision-making phenomena. These phases are complementary in nature, with many researchers still emphasizing formal modeling of decision behavior (Glockner and Betsch, 2008) or the trade-off between cognition and emotion within the same model of risk preference
(Mukherjee, 2010). In this chapter, I will focus primarily on the fourth phase involv-
ing decision neuroscience and highlight how it has complemented the other phases in
improving our understanding of decision making and preferences.
The field of decision neuroscience has proliferated in recent years and has pro-
vided important insights into mechanisms that underlie decision preferences and eco-
nomic and social phenomena (Camerer et al., 2004; Glimcher, 2003; Loewenstein
et al., 2008; Ochsner and Lieberman, 2001; Platt and Huettel, 2008; Sanfey et al.,
2006). For instance, decision neuroscience studies have elucidated the potential neu-
ral underpinnings of nearly all of the core variables present in standard economic
models, including value of monetary rewards (Rangel et al., 2008; Yacubian
et al., 2007) and other rewards (Berns et al., 2001; Smith et al., 2010), risk
(Huettel et al., 2006; Preuschoff et al., 2006), ambiguity (Hsu et al., 2005), proba-
bility weighting (Hsu et al., 2009), and intertemporal choice (Kable and
Glimcher, 2007; McClure et al., 2004; Prevost et al., 2010). Neuroscience has also
provided important insights into variables like loss aversion as defined in prospect
theory (Kahneman and Tversky, 1979). It has been argued based on neuroscience
evidence that loss aversion may reflect unequal responses to gains and losses within
the same region in the ventral striatum, with activation increasing for potential gains
and decreasing more steeply for potential losses (Tom et al., 2007). More recent stud-
ies have focused on effects of more complex variables implied by particular frame-
works for decision making, such as framing strength (De Martino et al., 2006), regret
(Coricelli et al., 2005) and other fictive signals (Hayden et al., 2009; Lohrenz et al.,
2007), and even constructs like altruism (Tankersley et al., 2007) and social coop-
eration (Rilling et al., 2002).
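To make the loss-aversion pattern described above concrete, the short sketch below uses purely hypothetical regression slopes (not values from Tom et al., 2007) to show one common way a neural analogue of loss aversion can be summarized: if activation falls more steeply per dollar of potential loss than it rises per dollar of potential gain, the ratio of the two slopes exceeds 1, mirroring behavioral loss aversion.

```python
# Illustrative only: hypothetical slopes, not data from Tom et al. (2007).
# Suppose a regression of ventral striatum activation on the potential gain
# and potential loss of a mixed gamble yields these per-dollar slopes.
beta_gain = 0.04    # assumed increase in signal per dollar of potential gain
beta_loss = -0.09   # assumed (steeper) decrease per dollar of potential loss

# "Neural loss aversion" summarized as the ratio of loss to gain sensitivity.
neural_lambda = -beta_loss / beta_gain
print(f"neural loss aversion ~ {neural_lambda:.2f}")  # ~2.25: losses loom larger
```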
Decision neuroscience, therefore, goes beyond the common practice of econo-
mists to use psychological insights as inspiration for economic modeling or to take
into account experimental evidence that challenges behavioral assumptions of eco-
nomic models. Yet, critics still charge that the field of decision neuroscience and
neuroeconomics has few implications for economics and decision making (or other
social sciences) (Bernheim, 2008). Specifically, opponents of decision neuroscience
have argued that neuroscience methods are fundamentally incompatible with tradi-
tional streams of research, and that knowledge about underlying brain mechanisms has no relevance for understanding economic and decision-making phenomena (Bernheim, 2008; Gul and Pesendorfer, 2008; Harrison, 2008). In
the words of Gul and Pesendorfer (2008), “Neuroscience evidence cannot refute eco-
nomic models because the latter make no assumptions and draw no conclusions
about physiology of the brain. Conversely, brain science cannot revolutionize eco-
nomics because it has no vehicle for addressing the concerns of the latter.”
Clithero and colleagues summarized criticisms of decision neuroscience and neu-
roeconomics into two broad arguments: Behavioral Sufficiency—the claim that behavioral data are necessary and sufficient to evaluate the validity of economic and other social science models—and Emergent Phenomenon—the denial that an understanding of mechanism has relevance for predicting aggregate phenomena (Clithero et al.,
2008). In other words, the foundation of decision neuroscience and neuroeconomics
rests on two core principles that directly address the two fundamental criticisms
above. The first labeled Mechanistic Convergence argues that neuroscience data will
not replace traditional sources of data in the social sciences (i.e., Behavioral Suffi-
ciency holds) but will help identify good and novel avenues for behavioral experi-
ments. The second, termed Biological Plausibility, argues that though measures of
choice behavior are essential for validating or falsifying economic models, neurosci-
ence can allow us to identify broad classes of models that are likely to be robust,
parsimonious, and predictive (Clithero et al., 2008).
In this chapter, I provide empirical evidence for how knowledge about underlying
brain mechanisms helps in the development of biologically plausible models of behavior, using a series of three experiments. It is well known that decision
making differs across contexts, states, and individuals. Different individuals will of-
ten respond differently to the same problem, while the same individual may respond
differently to what appear to be subtle changes in the problem description, decision environment, or his or her current state. Therefore, sources of variability in decision making can be broadly classified into the decision context (e.g., problem type, presentation format, decision frame, time pressure), the state of the decision maker (e.g., emotions, sleep deprivation, and cognitive depletion), and
individual differences (e.g., demographic factors like age and gender, traits like
impulsivity and genetic and hormonal factors). Understanding individual variability
in both behavior and the information acquisition process is an ongoing challenge, and is one area where decision neuroscience can make a significant contribution to decision
research (Payne and Venkatraman, 2011).
In the first part of this chapter, I will demonstrate the neural mechanisms under-
lying decision preferences and individual variability in the use of decision strategies
using a complex economic decision-making task. In the next section, I will discuss
how we can aggregate across different decision-making tasks to better inform bio-
logical and neuroanatomical models of cognitive control. Finally, I will demonstrate
how changes in an individual’s state, like sleep deprivation (SD), can influence decision
preferences. Importantly, I will demonstrate how knowledge about underlying neural
mechanisms can be used to adjudicate between competing theories about the effects
of SD on decision preferences. A key goal of decision neuroscience in the future will
be the development of robust biological models that can predict behavior across a
variety of states and decision contexts.
[Figure 1 graphic. Panel (A): an example five-outcome mixed gamble ($80, p = 0.20; $40, p = 0.25; $0, p = 0.20; −$25, p = 0.15; −$70, p = 0.20) shown before and after $15 is added to one outcome (e.g., $0 + $15 = $15 or −$70 + $15 = −$55), with display durations of 4–6 s and 6 s. Panel (B): three stocks rated on attributes a1 and a2 (A: 70/70, B: 80/60, C: 60/80).]
FIGURE 1
Schematic of decision-making tasks. (A) Multi-outcome risky choice task. Subjects were
presented with a series of five-outcome complex mixed gambles. They then improved each
gamble by adding money in one of three ways: increasing the magnitude of the highest gain
(Gmax), decreasing the magnitude of the worst loss (Lmin), or improving the overall
probability of winning (Pmax). (B) Attribute-balancing task. Subjects were first shown, for
4–6 s, three anonymized stocks (A, B, and C) with percentile ratings on two attributes. Then,
two stocks were highlighted in red, whereupon subjects had 6 s to decide which they
preferred. Here, stock A represents a balanced option (with equal ratings on both attributes)
while stock B represents an extreme option (with a good rating on one attribute but a
poor rating on the other). In this trial, both stocks A and B have equal expected values
(equal) while in other trials, the sum of the two attributes may differ between these stocks
(congruent or incongruent).
Reproduced with permission from Venkatraman and Huettel (2012).
associated with reduced expected value. These results provided strong evidence that
many, but not all, individuals incorporate information about the overall probabilities
of positive and negative outcomes into their decision making, consistent with both
older (Lopes and Oden, 1999) and recent frameworks that include aspiration levels in
utility calculations (Diecidue and van de Ven, 2008).
In a set of two behavioral studies, we explored the boundary conditions for the use
of the simplifying Pmax strategy (Venkatraman et al., 2011b). In the first study
(N = 128), subjects chose the Pmax option in about 69% of the trials. Even in trials
where this choice was associated with lower expected value, subjects still chose this
option in 59% of the trials. In a second study (N = 71), we replicated these basic find-
ings. Additionally, in some problems, the middle option (x3) was subtly modified
such that the subject’s decision could not change its valence (e.g., on some trials,
it started at $5 instead of $0). Here, subjects chose the middle option in only 39% of trials, a highly significant decrease from the base condition. Note that this subtle change in the magnitude of one of the outcomes does not affect the predictions of most economic models, and yet we found a large shift in preferences across individuals. Finally, we hypothesized that people would be particularly attracted to changes
in overall probabilities that involve moving from an uncertain gain to a certain gain
or from a certain loss to an uncertain loss (Payne et al., 1980, 1981). So, we translated
all values from select gambles by adding the magnitude of the largest loss (i.e., the
worst outcome became $0) or subtracting the magnitude of the largest gain (i.e., the
best outcome became $0). When faced with such gambles, subjects indeed showed a
significantly increased tendency to choose the Pmax heuristic (82%).
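As a concrete illustration of the three improvement strategies, the sketch below (my own illustrative code, not the experiment's software) applies each $15 allocation to the example gamble from Fig. 1A and reports its expected value and overall probability of winning. In this particular example every allocation raises expected value by the same amount and only the Pmax allocation raises the probability of winning; on other trials the outcome probabilities differed, so the Pmax option could carry a lower expected value, as described above.

```python
# A minimal sketch (not the authors' task code) of the three ways a
# five-outcome mixed gamble could be "improved" in the risky choice task.

gamble = [(80, 0.20), (40, 0.25), (0, 0.20), (-25, 0.15), (-70, 0.20)]  # Fig. 1A
ALLOCATION = 15  # dollars added to a single outcome on each trial


def expected_value(g):
    return sum(x * p for x, p in g)


def p_win(g):
    """Overall probability of ending with a strictly positive outcome."""
    return sum(p for x, p in g if x > 0)


def improve(g, index, amount=ALLOCATION):
    """Add `amount` to the outcome at `index`; probabilities stay fixed."""
    g = list(g)
    x, p = g[index]
    g[index] = (x + amount, p)
    return g


strategies = {
    "Gmax (raise the best gain)":   improve(gamble, 0),  # $80 -> $95
    "Pmax (turn $0 into a gain)":   improve(gamble, 2),  # $0  -> $15
    "Lmin (shrink the worst loss)": improve(gamble, 4),  # -$70 -> -$55
}

print(f"original: EV = {expected_value(gamble):.2f}, p(win) = {p_win(gamble):.2f}")
for name, g in strategies.items():
    print(f"{name}: EV = {expected_value(g):.2f}, p(win) = {p_win(g):.2f}")
# Here each allocation adds $15 * 0.20 = $3 to EV, but only Pmax lifts the
# overall probability of winning (from 0.45 to 0.65) -- the simplifying
# heuristic that most subjects favored.
```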
An important aspect of both these experiments was the vast individual variability
in preferences across subjects. While the majority of subjects showed a strong pref-
erence for the Pmax heuristic across trials, the magnitude of these preferences varied
as a function of decision context. There were still a few individuals whose choices
were consistent with more traditional decision models. Interestingly, the bias toward
the overall probability-maximizing choices was correlated with a trait measure of
satisficing (Schwartz et al., 2002), consistent with the notion that these choices rep-
resent a simplifying heuristic strategy that emphasizes only certain aspects of the
decision problem while ignoring the rest.
Using fMRI, we sought to understand the mechanisms underlying decision pref-
erences in this task (Venkatraman et al., 2009a). Twenty-three subjects completed a series of 120 value allocation problems. Based on the canonical dual-system account, we hypothesized that economically rational, value-maximizing choices in this task would be associated with increased activation in the nominally cognitive regions of the brain, while the more heuristic and simplifying choices would be driven by increased activation in the emotional regions of the brain. However, somewhat counterintuitively, we found that increased activation in emotional regions predicted that
a subject would make choices consistent with economic models: activation in ante-
rior insula (aINS) predicted Lmin choices, whereas activation in ventromedial pre-
frontal cortex (vmPFC) predicted Gmax choices (Fig. 2A). These data support an
interpretation in terms of the specific consequences of choices in this task: anterior
[Figure 2 and Figure 3 graphics: bar plots of normalized signal change and normalized gaze duration for win, intermediate, and loss attributes (insula, dmPFC); a schematic of strategy control (dmPFC) routing to choice-related control regions for compensatory versus simplifying choices; and bar plots of dmPFC connectivity strength (a.u.) with insula and dlPFC for compensatory and simplifying choices.]
FIGURE 3
Dorsomedial prefrontal cortex plays a role in strategic control during decision making.
Activation in dmPFC was greater when individuals made a decision opposite their typical
strategic bias (upper left panel). Moreover, psychophysiological interaction analyses revealed
a double dissociation in the connectivity of dmPFC with different choice-related regions (lower
left panel). When people made compensatory (Lmin) choices, changes in dmPFC signal over
time were positively correlated with regions like the insular cortex that showed greater overall
activation to those choices (upper right panel). Conversely, when people made choices
consistent with a simplifying strategy (Pmax), the dmPFC signal was positively correlated with
regions like the dlPFC that exhibited increased overall activation on those trials (lower right
panel).
Reproduced with permission from Venkatraman and Huettel (2012).
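For readers unfamiliar with the psychophysiological interaction (PPI) approach mentioned in the caption above, the simplified sketch below illustrates its regression logic with simulated data. It is not the study's actual pipeline, and real PPI analyses additionally deconvolve the seed time course to the neural level before forming the interaction term; the point here is only to show how context-dependent coupling enters the model.

```python
# Simplified PPI logic with simulated data (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
n = 200
seed = rng.standard_normal(n)              # dmPFC "seed" time course
context = np.repeat([1.0, -1.0], n // 2)   # +1 simplifying, -1 compensatory trials
ppi = seed * context                       # psychophysiological interaction term

# Simulate a target region (e.g., dlPFC) whose coupling with the seed is
# stronger during simplifying trials, plus noise.
target = 0.2 * seed + 0.5 * ppi + 0.1 * context + rng.standard_normal(n)

# Recover the effects with ordinary least squares; a reliably positive PPI
# weight indicates context-dependent connectivity with the seed region.
X = np.column_stack([np.ones(n), seed, context, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(dict(zip(["intercept", "seed", "context", "ppi"], np.round(beta, 2))))
```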
computation. Accordingly, the activation of a given brain system (e.g., dlPFC) may
sometimes lead to behavior consistent with economic theories of rationality (Sanfey
et al., 2003) and in other circumstances, such as here, predict a nonnormative choice
consistent with a simplifying or heuristic strategy. In other words, decision making
reflects an interaction among brain systems coding for different sorts of computa-
tions, with some regions (e.g., aINS, vmPFC) coding for specific behaviors and
others (e.g., dmPFC) for preferred strategies. In the next section, I will discuss
how the proposed role of dmPFC for strategic control can be integrated with the
broader functional specialization of this region for cognitive control.
word like “cat” or an incongruent number like “one” appeared on the screen). Only
regions that showed significant covariation across individuals with a response-time-
based incongruency measure were analyzed. For decision- and strategy-related con-
trol, we used an attribute-balancing task where individuals had to choose between
different stocks (Fig. 1B). Specifically, they were asked to choose between two
stocks that were rated on two independent attributes. Subjects in this task could again
choose adaptively between two different strategies: invest in the stock with highest
expected value (as calculated by the sum of ratings on the two attributes) or choose
the stock that is more balanced on the two attributes. The latter choice is consistent
with an attribute-balancing heuristic, where subjects prefer the more balanced option
and avoid options that are extreme on the two attributes (Fig. 1B) (Chernev, 2004).
Consistent with the first study, the magnitude of strategy control was defined based
on the degree of bias toward one of the two available decision strategies across sub-
jects. The difficulty of the decision or decision-related control was manipulated by
increasing or decreasing the relative values of the attributes for the two stocks. In the
easiest congruent trials (mean RT = 0.72 s), the balanced choice also had higher expected value and was chosen in 89% of the trials. In incongruent trials (mean RT = 0.87 s), the balanced choice had lower expected value and hence was chosen less often (23%). The equal trials were the hardest (mean RT = 1.13 s) since both
options had equal expected value. Here, the balanced choice was still the preferred
choice (65%). We took care to address several potential confounds when designing
the experiment (Venkatraman et al., 2009b). For instance, the decision and response
phases were explicitly separated for the attribute-balancing task to prevent activations related to decision control from being confounded by motor preparation and
response selection. Also, data for all types of control were acquired in the same
subjects within the same session and were associated with unique and independent
behavioral covariates.
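To make the trial structure concrete, the sketch below (an illustrative helper of my own, not the published task code) classifies a pair of stocks as congruent, incongruent, or equal, treating each stock's "expected value" as the sum of its two percentile ratings and the balanced option as the one with the smaller spread between ratings.

```python
# Illustrative classification of attribute-balancing trials (hypothetical helper).

def classify_trial(stock_x, stock_y):
    """Each stock is a (rating_a1, rating_a2) tuple of percentile ratings."""
    total = lambda s: s[0] + s[1]          # "expected value" = sum of ratings
    spread = lambda s: abs(s[0] - s[1])    # how unbalanced the two ratings are

    balanced, extreme = sorted([stock_x, stock_y], key=spread)
    if total(balanced) == total(extreme):
        trial_type = "equal"        # same sum: the hardest trials
    elif total(balanced) > total(extreme):
        trial_type = "congruent"    # balanced option also has the higher sum
    else:
        trial_type = "incongruent"  # the two strategies favor different stocks
    return trial_type, balanced, extreme

# Fig. 1B example: stock A (70, 70) is balanced, stock B (80, 60) is extreme.
print(classify_trial((70, 70), (80, 60)))   # ('equal', (70, 70), (80, 60))
print(classify_trial((75, 70), (80, 60)))   # ('congruent', (75, 70), (80, 60))
print(classify_trial((70, 65), (80, 60)))   # ('incongruent', (70, 65), (80, 60))
```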
Consistent with the risky choice paradigm, we again showed that a region in the
anterior dmPFC predicted strategic variability across subjects. In other words, acti-
vation in this region was greatest when subjects made choices that ran counter to their
preferred strategy, validating the hypothesis that the dmPFC codes for preferences at
a strategic level (Venkatraman et al., 2009b). Importantly, we found strong evidence
for an anterior-to-posterior topography within the dmPFC, based on varying control
demands (Fig. 4A). Using the Stroop task, we showed that the more posterior regions
were associated with response-related control (Fig. 4B). The middle regions within
the dmPFC were associated with decision-related control. Activation in this subre-
gion varied parametrically with increasing difficulty in making decisions (Fig. 4D).
Finally, the more anterior regions were associated with strategy-related control de-
mands when subjects had to choose counter to their preferred strategy (Fig. 4C).
Therefore, our results provided strong evidence in favor of a functional organization
within the dmPFC, similar to the hierarchical organization found in the lateral PFC.
A popular integrative model for cognitive control holds that the dmPFC monitors
and detects the need for control, while the lateral PFC regions help in the implemen-
tation of necessary changes (Kerns et al., 2004). Consistent with such a model, Taren
[Figure 4 graphics. Panel (A): schematic of strategy, decision, and response control regions arranged along the dmPFC. Panel (B): posterior dmPFC signal change plotted against response time variability (ms) for incongruent trials, r = 0.60. Panel (C): anterior dmPFC signal change (extreme − balanced) plotted against strategic variability, r = 0.61. Panel (D): middle dmPFC decision-phase signal change (%) for congruent, incongruent, and equal trials.]
FIGURE 4
Evidence for functional topography in dmPFC. (A) Using tasks that evoke different kinds of
control demands, we found an anterior-to-posterior functional topography within the dmPFC
with three separate regions predicting strategy, decision, and response-related control
(Venkatraman et al., 2009a). (B) Activation in posterior dmPFC significantly covaried with
increases in response times for incongruent over neutral trials. (C) Activation in the anterior dmPFC indexed strategy conflict, such that the difference in activation between extreme and balanced choices was significantly correlated with individual variability in the preference for the balancing strategy. (D) Activation in the middle dmPFC during the decision phase increased with increasing task difficulty (equal > incongruent > congruent conditions).
Adapted from Venkatraman et al. (2009b).
control (Taren et al., 2011). Such a functional gradient in connectivity could reflect a
dynamic mechanism for identifying and responding adaptively to contextual changes
in behavior (Kouneiher et al., 2009; Venkatraman and Huettel, 2012). In the final
section, I will discuss how the knowledge about underlying mechanisms obtained
from the two experiments above can help characterize the effects of SD on decision
making.
of state (RW, SD) or choice (Gmax vs. Pmax, Lmin vs. Pmax), there was a significant
state-by-choice interaction. Sleep-deprived subjects exhibited an increased prefer-
ence for Gmax choices in gain-focus trials but a decreased preference for Lmin
choices in loss-focus trials (Venkatraman et al., 2011a). Importantly, subjects
remained sensitive to the expected-value relationship between the two alternatives
in both states, indicating that SD led to a change in preferences, not a simple increase
in decision variability. Therefore, there was sufficient evidence for a behavioral
shift in preferences following SD whereby the same individual moved from defend-
ing against losses to chasing large gains in the absence of explicit posttrial feedback.
Such a behavioral shift can, however, be explained by two broad underlying mechanisms, which are often indistinguishable with behavioral experiments alone. In the first scenario, SD may simply lead to an overall reduction in the processing of information through its well-known effects on cognition and selective attention. Therefore, sleep-
deprived subjects may be simplifying the problem and focusing only on a subset of
information in making their decisions. Specifically, they choose the option with the
higher-ranked outcome (Gmax for gain-focus and Pmax for loss-focus trials, respec-
tively). In the second alternative scenario, SD could bias the computations underly-
ing risky choice. Here, rather than leading to an overall reduction in processing, SD
leads to increased weighting of gain information and diminished weighting of loss
information. Therefore, the higher ranked choices appear more attractive following
SD. Distinguishing between these mechanisms is critical from a treatment perspec-
tive, because the use of stimulants like caffeine may have a significant impact on performance in the first scenario, but not the latter. We can make clear predictions about
underlying neural mechanisms in each of these scenarios, based on our knowledge
from previous studies using a similar paradigm. In the first scenario, there will be an
overall reduction in activation in all brain systems following SD, reflecting reduced
processing. Critically, there will be reduced activation in the dmPFC, since subjects
are no longer capable of adapting their decisions to subtle changes in decision con-
text. In the second scenario, sleep deprivation will be associated with distinct
changes for loss- and gain-focus trials. Specifically, gain-focus trials will lead to in-
creased activation in the vmPFC, reflecting an increased sensitivity to gains, while loss-focus trials will lead to decreased aINS activation, reflecting reduced sensitivity to losses
following SD. Our findings were consistent with the latter scenario.
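One way to see why the two accounts make different predictions is to formalize the second scenario as a re-weighted valuation, as in the toy sketch below (my own illustration, not a fitted model from the study): if SD up-weights gains and down-weights losses, a Gmax-improved gamble and an Lmin-improved gamble that are equally attractive when well rested are no longer equally attractive when sleep deprived.

```python
# Toy re-weighting model of the second scenario (illustrative assumptions only).

def subjective_value(gamble, w_gain=1.0, w_loss=1.0):
    """Probability-weighted value with separate attention weights on gains/losses."""
    return sum(p * (w_gain * x if x > 0 else w_loss * x) for x, p in gamble)

# Example gamble from Fig. 1A, improved by $15 via Gmax or Lmin.
gmax = [(95, 0.20), (40, 0.25), (0, 0.20), (-25, 0.15), (-70, 0.20)]
lmin = [(80, 0.20), (40, 0.25), (0, 0.20), (-25, 0.15), (-55, 0.20)]

for state, (wg, wl) in {"well rested": (1.0, 1.0), "sleep deprived": (1.3, 0.7)}.items():
    print(f"{state}: Gmax = {subjective_value(gmax, wg, wl):.2f}, "
          f"Lmin = {subjective_value(lmin, wg, wl):.2f}")
# Well rested, both improvements are worth the same (+$3 in expected value);
# with gains up-weighted and losses down-weighted the Gmax option pulls ahead,
# matching the observed shift toward gain-maximizing choices after SD.
```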
Consistent with the first study in normal adults, we found that vmPFC activation
correlated with the proportion of Gmax choices in gain-focus trials, while right an-
terior insula (aINS) activation correlated with the proportion of Lmin choices in loss-
focus trials in the well-rested state. Importantly, SD led to increased activation in
the vmPFC, consistent with a behavioral shift toward gain-maximizing choices fol-
lowing SD (Fig. 5A). SD also resulted in reduced activation in the right anterior
insula during loss-focus trials (Fig. 5B). Notably, these SD-induced changes in
activation correlated with SD-induced changes in behavior. A reduced propensity
to make Lmin choices when sleep deprived correlated with reduced right anterior
insula activation during these trials. Strikingly, SD did not affect dlPFC activation
in contrast to prior expectations emerging from behavioral studies (Harrison and
Horne, 2000; Killgore et al., 2006). Finally, we did find that the SD-related decrease in anterior insula activation during loss-focus trials correlated with the SD-related increase in vmPFC activation during gain-focus trials.
[Figure 5 graphics: signal change in the ventromedial PFC (x = −4) and right anterior insula (y = 20) for gain-focus (GF) and loss-focus (LF) trials.]
FIGURE 5
Sleep deprivation biases neural mechanisms underlying economic preferences. (A) SD resulted in increased activation (in this case, reduced deactivation) in the ventromedial prefrontal cortex for both gain- and loss-focus trials. (B) SD was also associated with reduced activation in the right anterior insula only for loss-focus trials.
Adapted from Venkatraman et al. (2011a).
Therefore, SD appeared to create an optimism bias whereby subjects behaved as
if positive consequences were more likely (or more valuable) and negative conse-
quences less likely (or less harmful). As activation in the vmPFC and anterior insula is typically associated with the salience of positive and negative outcomes, respectively (Kuhnen and Knutson, 2005; Preuschoff et al., 2008; Venkatraman et al., 2009a),
our findings are more consistent with SD biasing the valuation of mixed gambles by
bringing about an increased attentional bias toward higher-ranked positive outcomes
while concurrently reducing concern for losses.
During the outcome phase, where subjects passively viewed gambles being
resolved to an actual monetary gain or loss, there was increased activity in the ventral
striatum (vStr) and vmPFC for gains relative to losses following SD. SD was also
associated with marked attenuation of loss-related activation within the left anterior
insula. Finally, the decrease in activation of the left anterior insula for losses corre-
lated with the increase in activation in ventral striatum for gains. These findings are
also consistent with the hypothesis that lack of adequate sleep leads to increased sen-
sitivity to positive reward outcomes with a corresponding diminished response to
losses and negative consequences (Venkatraman et al., 2011a).
Strikingly, the shifts in economic preferences in the multiple-outcome gambling
experiment were independent of the effects of SD on psychomotor vigilance (Van
Dongen et al., 2004). Together, these findings suggest that SD affects decision pref-
erences independent of its more general effects on cognition, by biasing the attention
paid to gains relative to losses. This point is relevant for the increasing number of
persons seeking to maintain performance when sleep deprived by taking stimulants.
Stimulants may improve vigilance but may have minimal influence (or even negative
effects) on other aspects of cognition, such as decision making (Gottselig et al., 2006;
Huck et al., 2008; Killgore et al., 2007, 2008). Our findings that SD shapes decision
preferences independently of its effects on vigilance caution that traditional coun-
termeasures may be ineffective in ameliorating the decision biases engendered by
limited sleep.
5 CONCLUSIONS
A common criticism of decision neuroscience is that neuroscience data is simply
irrelevant for core models in economics and decision making. This criticism has been
countered on several grounds over the years (Clithero et al., 2008). For example, eco-
nomic models of behavior are often disconnected from the substantial psychological
and neuroscience literature on individual differences. Because of this disconnect, a
model may well describe the behavior of healthy, well-rested adults making decisions in a relaxed setting, but nevertheless have little predictive validity when applied to the same adults making decisions under time pressure or following 24 h of SD. Similarly, a model may well describe behavior in a particular decision context but have little predictive validity when subtle changes are made to the decision environment, leading to a complete change in strategies within individuals. In this chapter, I have
argued, with supporting evidence, that neuroscience may become critical for creating robust
and flexible models of real-world decision behavior by illuminating the mechanisms
underlying individual choice biases and strategic preferences.
Despite its brain-centered focus, decision neuroscience has the potential to
help shape the course of theories and models in economics and decision sciences.
Acknowledgments
I would like to thank my collaborators Scott Huettel, John Payne, and Michael Chee
for their contributions to the various projects discussed here.
References
Badre, D., 2008. Cognitive control, hierarchy, and the rostro-caudal organization of the frontal
lobes. Trends Cogn. Sci. 12 (5), 193–200.
Bechara, A., Tranel, D., Damasio, H., 2000. Characterization of the decision-making deficit of
patients with ventromedial prefrontal cortex lesions. Brain 123 (Pt 11), 2189–2202.
Behrens, T.E., Woolrich, M.W., Walton, M.E., Rushworth, M.F., 2007. Learning the value of
information in an uncertain world. Nat. Neurosci. 10 (9), 1214–1221.
Bernheim, B.D., 2008. Neuroeconomics: A Sober (but Hopeful) Appraisal. National Bureau of
Economic Research, Cambridge, MA.
Berns, G.S., McClure, S.M., Pagnoni, G., Montague, P.R., 2001. Predictability modulates
human brain response to reward. J. Neurosci. 21 (8), 2793–2798.
Botvinick, M., Nystrom, L.E., Fissell, K., Carter, C.S., Cohen, J.D., 1999. Conflict monitoring
versus selection-for-action in anterior cingulate cortex. Nature 402 (6758), 179–181.
Camerer, C.F., Loewenstein, G., Prelec, D., 2004. Neuroeconomics: why economics needs
brains. Scand. J. Econ. 106 (3), 555–579.
Carter, C.S., Braver, T.S., Barch, D.M., Botvinick, M., Noll, D., Cohen, J.D., 1998. Anterior
cingulate cortex, error detection, and the online monitoring of performance. Science 280
(5364), 747–749.
Centers for Disease Control and Prevention, 2009. Perceived insufficient rest or sleep among
adults—United States, 2008. MMWR Morb. Mortal. Wkly Rep. 58 (42), 1175–1179.
Chee, M.W.L., Choo, W.C., 2004. Functional imaging of working memory after 24 hr of total
sleep deprivation. J. Neurosci. 24 (19), 4560–4567.
Chee, M.W.L., Tan, J.C., Zheng, H., Parimal, S., Weissman, D.H., Zagorodnov, V., et al.,
2008. Lapsing during sleep deprivation is associated with distributed changes in brain
activation. J. Neurosci. 28 (21), 5519–5528.
Chernev, A., 2004. Extremeness aversion and attribute-balance effects in choice. J. Consum.
Res. 31, 249–263.
Clithero, J.A., Tankersley, D.T., Huettel, S.A., 2008. Foundation of neuroeconomics: from
philosophy to practice. PLoS Biol. 6 (11), 2348–2353.
Coricelli, G., Critchley, H.D., Joffily, M., O’Doherty, J.P., Sirigu, A., Dolan, R.J., 2005. Regret
and its avoidance: a neuroimaging study of choice behavior. Nat. Neurosci. 8 (9), 1255–1262.
Coutlee, C.G., Huettel, S.A., 2012. The functional neuroanatomy of decision making: prefron-
tal control of thought and action. Brain Res. 1428, 3–12.
De Martino, B., Kumaran, D., Seymour, B., Dolan, R.J., 2006. Frames, biases, and rational
decision-making in the human brain. Science 313 (5787), 684–687.
Diecidue, E., van de Ven, J., 2008. Aspiration level, probability of success and failure, and
expected utility. Int. Econ. Rev. 49 (2), 683–700.
Dreher, J.C., Tremblay, L., 2009. Handbook of Reward and Decision Making. Academic
Press, London.
Drummond, S.P., Meloy, M.J., Yanagi, M.A., Orff, H.J., Brown, G.G., 2005. Compensatory
recruitment after sleep deprivation and the relationship with performance. Psychiatry Res.
140 (3), 211–223.
Glimcher, P.W., 2003. Decisions, Uncertainty, and the Brain: the Science of Neuroeconomics.
MIT Press, Cambridge, MA.
Glimcher, P.W., Rustichini, A., 2004. Neuroeconomics: the consilience of brain and decision.
Science 306 (5695), 447–452.
Glockner, A., Betsch, T., 2008. Modeling option and strategy choices with connectionist net-
works: towards an integrative model of automatic and deliberate decision making. Judgm.
Decis. Mak. 3 (3), 215–228.
Gottselig, J.M., Adam, M., Retey, J.V., Khatami, R., Achermann, P., Landolt, H.P., 2006. Ran-
dom number generation during sleep deprivation: effects of caffeine on response mainte-
nance and stereotypy. J. Sleep Res. 15 (1), 31–40.
Gul, F., Pesendorfer, W., 2008. The case for mindless economics. In: Caplin, A., Schotter, A.
(Eds.), Foundations of Positive and Normative Economics, Methodologies of Modern
Economics. Oxford University Press, Oxford, pp. 3–39.
Habeck, C., Rakitin, B.C., Moeller, J., Scarmeas, N., Zarahn, E., Brown, T., et al., 2004. An
event-related fMRI study of the neurobehavioral impact of sleep deprivation on perfor-
mance of a delayed-match-to-sample task. Brain Res. Cogn. Brain Res. 18 (3), 306–321.
Hammond, K.R., 1955. Probabilistic functionalism and clinical method. Psychol. Rev. 62,
255–262.
Harrison, G.W., 2008. Neuroeconomics: a critical reconsideration. Econ. Phil. 24 (3), 303–344.
Harrison, Y., Horne, J.A., 1999. One night of sleep loss impairs innovative thinking and flex-
ible decision making. Organ. Behav. Hum. Decis. Process. 78 (2), 128–145.
Harrison, Y., Horne, J.A., 2000. The impact of sleep deprivation on decision making: a review.
J. Exp. Psychol. Appl. 6 (3), 236–249.
Hayden, B.Y., Pearson, J.M., Platt, M.L., 2009. Fictive reward signals in the anterior cingulate
cortex. Science 324 (5929), 948–950.
Hsu, M., Bhatt, M., Adolphs, R., Tranel, D., Camerer, C.F., 2005. Neural systems responding
to degrees of uncertainty in human decision-making. Science 310 (5754), 1680–1683.
Hsu, M., Krajbich, I., Zhao, C., Camerer, C.F., 2009. Neural response to reward anticipation
under risk is nonlinear in probabilities. J. Neurosci. 29 (7), 2231–2237.
Huck, N.O., McBride, S.A., Kendall, A.P., Grugle, N.L., Killgore, W.D., 2008. The effects
of modafinil, caffeine, and dextroamphetamine on judgments of simple versus
complex emotional expressions following sleep deprivation. Int. J. Neurosci. 118
(4), 487–502.
Huettel, S.A., 2010. Ten challenges for decision neuroscience. Front. Neurosci. 4, 171.
Huettel, S.A., Payne, J.W., 2009. Integrating neural and decision sciences: convergence and
constraints. J. Mark. Res. 46 (1), 14–17.
Huettel, S.A., Stowe, C.J., Gordon, E.M., Warner, B.T., Platt, M.L., 2006. Neural signatures of
economic preferences for risk and ambiguity. Neuron 49 (5), 765–775.
Institute of Medicine, 2006. Sleep Disorders and Sleep Deprivation: An Unmet Public Health
Problem. The National Academies Press, Washington, DC.
Kable, J.W., Glimcher, P.W., 2007. The neural correlates of subjective value during intertem-
poral choice. Nat. Neurosci. 10 (12), 1625–1633.
Kahneman, D., Tversky, A., 1979. Prospect theory: an analysis of decision under risk. Econ-
ometrica 47 (2), 263–291.
Kerns, J.G., Cohen, J.D., MacDonald 3rd, A.W., Cho, R.Y., Stenger, V.A., Carter, C.S., 2004.
Anterior cingulate conflict monitoring and adjustments in control. Science 303 (5660),
1023–1026.
Killgore, W.D., Balkin, T.J., Wesensten, N.J., 2006. Impaired decision making following 49 h
of sleep deprivation. J. Sleep Res. 15 (1), 7–13.
Killgore, W.D., Lipizzi, E.L., Kamimori, G.H., Balkin, T.J., 2007. Caffeine effects on risky de-
cision making after 75 hours of sleep deprivation. Aviat. Space Environ. Med. 78 (10),
957–962.
Killgore, W.D., Grugle, N.L., Killgore, D.B., Leavitt, B.P., Watlington, G.I., McNair, S., et al.,
2008. Restoration of risk-propensity during sleep deprivation: caffeine, dextroamphet-
amine, and modafinil. Aviat. Space Environ. Med. 79 (9), 867–874.
Koechlin, E., Ody, C., Kouneiher, F., 2003. The architecture of cognitive control in the human
prefrontal cortex. Science 302 (5648), 1181–1185.
Kouneiher, F., Charron, S., Koechlin, E., 2009. Motivation and cognitive control in the human
prefrontal cortex. Nat. Neurosci. 12 (7), 939–947.
Kuhnen, C.M., Knutson, B., 2005. The neural basis of financial risk taking. Neuron 47 (5),
763–770.
Lerner, J.S., Keltner, D., 2000. Beyond valence: toward a model of emotion-specific influ-
ences on judgment and choice. Cogn. Emot. 14 (4), 473–493.
Linde, L., Edland, A., Bergstrom, M., 1999. Auditory attention and multiattribute decision-
making during a 33 h sleep-deprivation period: mean performance and between-subject
dispersions. Ergonomics 33 (5), 696–713.
Loewenstein, G.F., Weber, E.U., Hsee, C.K., Welch, N., 2001. Risk as feelings. Psychol. Bull.
127 (2), 267–286.
Loewenstein, G.F., Rick, S., Cohen, J.D., 2008. Neuroeconomics. Annu. Rev. Psychol. 59,
647–672.
Lohrenz, T., McCabe, K., Camerer, C.F., Montague, P.R., 2007. Neural signature of fictive
learning signals in a sequential investment task. Proc. Natl. Acad. Sci. U.S.A. 104 (22),
9493–9498.
Lopes, L.L., Oden, G.C., 1999. The role of aspiration level in risky choice: a comparison of
cumulative prospect theory and SP/A theory. J. Math. Psychol. 43 (2), 286–313.
McClure, S.M., Laibson, D.I., Loewenstein, G., Cohen, J.D., 2004. Separate neural systems
value immediate and delayed monetary rewards. Science 306 (5695), 503–507.
McKenna, B.S., Dickinson, D.L., Orff, H.J., Drummond, S.P., 2007. The effects of one night of
sleep deprivation on known-risk and ambiguous-risk decisions. J. Sleep Res. 16 (3),
245–252.
Mu, Q., Mishory, A., Johnson, K.A., Nahas, Z., Kozel, F.A., Yamanaka, K., et al., 2005.
Decreased brain activation during a working memory task at rested baseline is associated
with vulnerability to sleep deprivation. Sleep 28 (4), 433–446.
Mukherjee, K., 2010. A dual system model of preferences under risk. Psychol. Rev. 117,
243–255.
Ochsner, K.N., Lieberman, M.D., 2001. The emergence of social cognitive neuroscience. Am.
Psychol. 56 (9), 717–734.
Payne, J.W., 1976a. Human Judgment and Decision-Processes. M.F. Kaplan and S. Schwartz
(Eds.). Contemp. Psychol. 21 (10), 728–729.
Payne, J.W., 1976b. Task complexity and contingent processing in decision-making—infor-
mation search and protocol analysis. Organ. Behav. Hum. Decis. Process. 16 (2), 366–387.
Payne, J.W., 2005. It is whether you win or lose: the importance of the overall probabilities of
winning or losing in risky choice. J. Risk Uncertain. 30 (1), 5–19.
Payne, J.W., Venkatraman, V., 2011. Opening the black box. In: Schulte-Mecklenbeck, M.,
Kuhberger, A., Ranyard, R. (Eds.), A Handbook of Process Tracking Methods for Decision
Research. Psychology Press, New York, NY.
Payne, J.W., Laughhunn, D.J., Crum, R., 1980. Translation of gambles and aspiration level
effects in risky choice behavior. Manag. Sci. 26 (10), 1039–1060.
Payne, J.W., Laughhunn, D.J., Crum, R., 1981. Further tests of aspiration level effects in risky
choice behavior. Manag. Sci. 27 (8), 953–958.
Platt, M.L., Glimcher, P.W., 1999. Neural correlates of decision variables in parietal cortex.
Nature 400 (6741), 233–238.
Platt, M.L., Huettel, S.A., 2008. Risky business: the neuroeconomics of decision making under
uncertainty. Nat. Neurosci. 11 (4), 398–403.
Pochon, J.B., Riis, J., Sanfey, A.G., Nystrom, L.E., Cohen, J.D., 2008. Functional imaging of
decision conflict. J. Neurosci. 28 (13), 3468–3473.
Poldrack, R.A., 2011. Inferring mental states from neuroimaging data: from reverse inference
to large-scale decoding. Neuron 72 (5), 692–697.
Preuschoff, K., Bossaerts, P., Quartz, S.R., 2006. Neural differentiation of expected reward
and risk in human subcortical structures. Neuron 51 (3), 381–390.
Preuschoff, K., Quartz, S.R., Bossaerts, P., 2008. Human insula activation reflects risk predic-
tion errors as well as risk. J. Neurosci. 28 (11), 2745–2752.
Prevost, C., Pessiglione, M., Metereau, E., Clery-Melin, M.L., Dreher, J.C., 2010. Separate
valuation subsystems for delay and effort decision costs. J. Neurosci. 30 (42),
14080–14090.
Rangel, A., Camerer, C., Montague, P.R., 2008. A framework for studying the neurobiology of
value-based decision making. Nat. Rev. Neurosci. 9 (7), 545–556.
Rilling, J., Gutman, D., Zeh, T., Pagnoni, G., Berns, G., Kilts, C., 2002. A neural basis for
social cooperation. Neuron 35 (2), 395–405.
Rushworth, M.F., Kennerley, S.W., Walton, M.E., 2005. Cognitive neuroscience: resolving
conflict in and over the medial frontal cortex. Curr. Biol. 15 (2), R54–R56.
Sanfey, A.G., Rilling, J.K., Aronson, J.A., Nystrom, L.E., Cohen, J.D., 2003. The neural
basis of economic decision-making in the Ultimatum Game. Science 300 (5626),
1755–1758.
Sanfey, A.G., Loewenstein, G., McClure, S.M., Cohen, J.D., 2006. Neuroeconomics: cross-
currents in research on decision-making. Trends Cogn. Sci. 10 (3), 108–116.
Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., Lehman, D.R., 2002.
Maximizing versus satisficing: happiness is a matter of choice. J. Pers. Soc. Psychol.
83 (5), 1178–1197.
Simon, H.A., 1955. A behavioral model of rational choice. Q. J. Econ. 69, 99–118.
Smith, D.V., Hayden, B.Y., Truong, T.K., Song, A.W., Platt, M.L., Huettel, S.A., 2010.
Distinct value signals in anterior and posterior ventromedial prefrontal cortex. J. Neurosci.
30 (7), 2490–2495.
Sterpenich, V., Albouy, G., Darsaud, A., Schmidt, C., Vandewalle, G., Dang Vu, T.T., et al.,
2009. Sleep promotes the neural reorganization of remote emotional memory. J. Neurosci.
29 (16), 5143–5152.
Tankersley, D.T., Stowe, C.J., Huettel, S.A., 2007. Altruism is associated with an increased
neural response to agency. Nat. Neurosci. 10 (2), 150–151.
Taren, A.A., Venkatraman, V., Huettel, S.A., 2011. A parallel functional topography between
medial and lateral prefrontal cortex: evidence and implications for cognitive control.
J. Neurosci. 31 (13), 5026–5031.
Tom, S.M., Fox, C.R., Trepel, C., Poldrack, R.A., 2007. The neural basis of loss aversion in
decision-making under risk. Science 315 (5811), 515–518.
Tomasi, D., Wang, R.L., Telang, F., Boronikolas, V., Jayne, M.C., Wang, G.J., et al., 2009.
Impairment of attentional networks after 1 night of sleep deprivation. Cereb. Cortex 19 (1),
233–240.
Tversky, A., Kahneman, D., 1992. Advances in prospect theory: cumulative representation of
uncertainty. J. Risk Uncertain. 5, 297–323.
Van Dongen, H.P., Baynard, M.D., Maislin, G., Dinges, D.F., 2004. Systematic interindividual
differences in neurobehavioral impairment from sleep loss: evidence of trait-like differen-
tial vulnerability. Sleep 27 (3), 423–433.
Venkatraman, V., Huettel, S.A., 2012. Strategic control in decision-making under uncertainty.
Eur. J. Neurosci. 35 (7), 1075–1082.
Venkatraman, V., Ansari, D., Chee, M.W.L., 2005. Neural correlates of symbolic and non-
symbolic arithmetic. Neuropsychologia 43 (5), 744–753.
Venkatraman, V., Payne, J.W., Bettman, J.R., Luce, M.F., Huettel, S.A., 2009a. Separate neu-
ral mechanisms underlie choices and strategic preferences in risky decision making. Neu-
ron 62 (4), 593–602.
Venkatraman, V., Rosati, A.G., Taren, A.A., Huettel, S.A., 2009b. Resolving response, deci-
sion, and strategic control: evidence for a functional topography in dorsomedial prefrontal
cortex. J. Neurosci. 29 (42), 13158–13164.
Venkatraman, V., Huettel, S.A., Chuah, L.Y., Payne, J.W., Chee, M.W.L., 2011a. Sleep depri-
vation biases the neural mechanisms underlying economic preferences. J. Neurosci. 31 (10),
3712–3718.
Venkatraman, V., Payne, J.W., Huettel, S.A., 2011b. Neuroeconomics of risky decisions: from
variables to strategies. In: Delgado, M.R., Phelps, E.A., Robbins, T.W. (Eds.), Decision
Making, Affect and Learning. Oxford University Press, USA.
Venkatraman, V., Payne, J. W., Huettel, S. A., in review. An overall probability of winning
heuristic for complex risky decisions: choice and eye fixation evidence. Organ. Behav.
Hum. Dec.
Yacubian, J., Sommer, T., Schroeder, K., Glascher, J., Braus, D.F., Buchel, C., 2007. Subregions
of the ventral striatum show preferential coding of reward magnitude and probability.
Neuroimage 38 (3), 557–563.