Journal of Biomedical Informatics 35 (2002) 52–75
www.academicpress.com
Methodological Review
Emerging paradigms of cognition in medical decision-making
Vimla L. Patel,ᵃ,* David R. Kaufman,ᵃ and Jose F. Arochaᵇ
ᵃ Laboratory for Decision Making and Cognition, Departments of Medical Informatics and Psychiatry, Columbia University, Vanderbilt Clinic Bldg., 5th Floor, 622 West 168th Street, New York 1003, USA
ᵇ Department of Health Studies and Gerontology, University of Waterloo, Waterloo, Ont., Canada
Received 7 January 2002
Abstract
The limitations of the classical or traditional paradigm of decision research are increasingly apparent, even though there has been a substantial body of empirical research on medical decision-making over the past 40 years. As decision-support technology continues to proliferate in medical settings, it is imperative that "basic science" decision research develop a broader-based and more valid foundation for the study of medical decision-making as it occurs in the natural setting. This paper critically reviews both traditional and recent approaches to medical decision-making, considering the integration of problem-solving and decision-making research paradigms, the role of conceptual knowledge in decision-making, and the emerging paradigm of naturalistic decision-making. We also provide an examination of technology-mediated decision-making. Expanding the scope of decision research will better enable us to understand optimal decision processes, suitable coping mechanisms under suboptimal conditions, the development of expertise in decision-making, and ways in which decision-support technology can successfully mediate decision processes. © 2002 Elsevier Science (USA). All rights reserved.
Keywords: Medical decision-making; Cognition; Naturalistic problem solving; Conceptual knowledge; Diagnostic reasoning; Heuristics; Biases;
Distributed cognition; Research paradigms
1. Introduction
In light of an increasing awareness of errors in medicine and of the importance of decision support in clinical
systems, the study of medical decision-making¹ is an increasingly influential area of research in medical informatics. There is a growing awareness that physicians'
decisions too often result in suboptimal outcomes, which
sometimes lead to adverse consequences for a patient. In
general, decision research in medicine has focused on two
sets of interdependent objectives: (1) understanding how
physicians, other healthcare professionals, and patients
make decisions in experimental and ‘‘real-world’’ settings
and (2) devising ways to facilitate the decision process,
including the development of technologies ranging from
paper-based guidelines to computer-assisted decision-support technologies and training in decision methods. In an important sense, understanding decision processes can provide a meaningful framework for ameliorating or facilitating decision-making in practice. This paper is principally concerned with empirical decision research devoted to characterizing decision processes in medicine. Although medical decision-making research has been an active area of inquiry for many years, we believe progress has been less than satisfactory, both in understanding the decision process and in conceiving of methods, instructional and technologic, for improving this process.

* Corresponding author. Fax: +212-305-3302. E-mail address: [email protected] (V.L. Patel).
¹ We use the terms decision-making and medical decision-making both to characterize a cognitive process involving decisions and to name a field of research with particular theoretical and methodological approaches.
This paper presents a review of new directions in decision-making research. These relatively new approaches,
as exemplified by naturalistic decision-making research,
have expanded the scope of "traditional decision research."²

² The term "traditional" is used to designate a large and heterogeneous body of descriptive and prescriptive research that uses normative models (e.g., subjective expected utility, Bayesian) as points of reference. This category greatly simplifies some significant differences in perspectives and approaches. "Classical decision theory" is sometimes used to reference the same set of ideas and bodies of work.

1532-0464/02/$ - see front matter © 2002 Elsevier Science (USA). All rights reserved.
PII: S1532-0464(02)00009-6

Following Beach and Lipshitz [1], the traditional or "classical" decision theory refers to the collection of "axiomatic models of uncertainty and risk
(probability theory, including Bayesian theory) and
utility (including multi-attribute utility theory), that
prescribe the optimal choice of an option from an array
of options, where optimality is defined by underlying
models and the choice is dictated by an explicit rule,
usually some variation of maximization of (subjective)
expected utility’’ (p. 21).
Some have argued that the new approaches are inconsistent with traditional decision research, suggesting
that such investigation necessitates a new and more
ecologically valid³ paradigm for decision research [2].
Although we view some of the criticisms as compelling,
we also acknowledge that conventional decision research has much to offer. There is a need to find common ground such that we can better understand and
modify suboptimal decision practices. The objective of
this paper is to review critically both traditional and
recent approaches to medical decision-making. Our
primary focus is on descriptive research concerned with
the characterization of decision-making behavior rather
than prescriptive research designed to inform clinicians'
decision practices. To foreshadow our conclusions, we
argue for a need to reconstitute the ‘‘basic science’’
framework underlying decision-making research and
applications.
Although we focus on cognitive issues in the study of
medical decision-making, this paper is not intended to
be a comprehensive review of the area. The paper largely
focuses on understanding the decision-making processes
of physicians and other participants in the healthcare
process. In addition, the emphasis is on conceptual issues rather than methodological ones.
The paper is organized into six sections. First, we
present some fundamental ideas underlying traditional
decision-making research, followed by a brief review of
empirical research on medical decision-making. In subsequent sections, we discuss developments in cognitive
perspectives on decision research, which includes an
integration of problem-solving and decision-making research paradigms, the role of conceptual knowledge in
decision-making, the emerging paradigm of naturalistic
decision-making, and finally, an examination of technology-mediated decision-making. The last four
perspectives challenge some of the fundamental
presuppositions of the "traditional" view of decision-making. This paper represents an elaboration or extension of our published primer on medical cognition [3]
and is similarly built around a set of claims. Claims are
hypotheses about the decision-making process that have
substantial support in the literature. The central argument in this paper is that the empirical/descriptive research program in medical decision-making has been too narrowly conceived. As currently constituted, the traditional program in decision research cannot adequately inform the development and implementation of effective decision-support systems and the practice of evidence-based medicine. In our view, some of the newer directions in decision research can contribute to a more robust framework for understanding and modifying the medical decision-making process.

³ "Ecological validity" refers to whether an experimental approach or construct adequately mirrors conditions in naturalistic settings.
2. Decision research
Decision-making is central to all human intellectual
activity. It is not excessive to suggest that decision-making is nearly synonymous with thinking. Decision-making has been an active subject of psychological inquiry since the beginning of experimental psychology.
There have been thousands of experiments, journal articles, and countless anthologies devoted to the subject.
However, this subject is not uniquely the province of
psychology. Economics, law, political science, organizational science, and medical informatics (to name but a
few disciplines) are focally involved in the study of decision-making, each with a voluminous bibliography
devoted to the topic. The "science" of decision-making
has given rise to cottage industries and numerous related
endeavors designed to influence our decision choices
towards both noble pursuits (e.g., improving therapy for
diabetic patients) and less virtuous ends (e.g., selling of
tobacco products). There are clearly many people who
have vested interests in understanding how human beings make decisions. Although experts across domains,
including healthcare professionals, are generally highly
proficient decision-makers, their erroneous decisions
have become the source of considerable public scrutiny.
In industries such as aviation and aerospace, faulty decisions by both designers and practitioners have been
singled out as the cause of several errors. The Institute
of Medicine's recent report [4] also acknowledges that
certain errors are a result of flawed decision processes
either by a single individual or by a team of healthcare
workers.
Given that the landscape is vast, how can we possibly
make progress on this complex endeavor? We may start
by provisionally putting forth a set of propositions that
are elaborated further in subsequent sections. Decisions
involve choosing a course of action among a set of options with the intent of achieving a goal. According to
Hastie [5], a decision involves three components: (a)
choice options and courses of action; (b) beliefs about
objective states, processes, and events in the world, including outcome states and means to achieve them; and
(c) "desires, values or utilities that describe the consequences associated with the outcomes of each action–event combination." Good decisions are those that
effectively choose the means available in a given situation to achieve the individual's goals as well as possible.
The question of what constitutes a good decision
suggests that we can specify criteria or evaluative standards for decision-making [6]. This is a framing assumption of the normative programs of research [7], also
known as rational choice theories or rational decision-making. Prescriptive models flow from normative standards in specifying how decisions ought to be made.
Descriptive models are characterizations of how physicians (or others) actually make decisions. Like most
cognitive scientists, we undertake research that largely
adheres to a descriptive approach in that we are interested in understanding how healthcare professionals,
patients, and lay people make health-related decisions.
However, this is not an ivory tower pursuit. We believe
that research on understanding decisions and decision
makers can contribute substantially to the development
and implementation of clinical-practice guidelines, the
design of electronic medical records, and clinical training.
Medical decision-making has been the subject of
formal research and related applications for nearly half
a century. There is a professional and academic society
(the Society for Medical Decision Making), annual
meetings, and a dedicated journal (Sage Publications:
Medical Decision Making) devoted to understanding
and improving decision practices. There is also a reasonably well-established paradigm for studying medical
decision-making processes grounded in the normative
comparative approach (see next section). The stereotypical version of the medical decision maker suggests a
coolly dispassionate, hyper-rational physician systematically considering well-defined options (i.e., therapeutic choices or diagnostic alternatives) on the basis of a
careful weighing of the evidence. Equally common is his
or her decidedly less competent colleague—a fallible
reasoner—subject to biases and particularly deficient in
the application of probability theory to decision problems. These shortcomings frequently result in faulty
decision practices.
Traditional empirical research on decision-making
focuses on an individualÕs decisions in a controlled
laboratory setting. There have been several research
approaches to the psychological study of decisions.
Problem-solving research [8,9] has also influenced the
study of decision-making. As discussed in a subsequent
section, problem-solving and decision-making research
constitute two distinct paradigms and employ different
theoretical assumptions and methodologies. Problem-solving research emphasizes the sequential process of
searching for a solution path, whereas decision research
focuses more on the nature of the decision outcome and
how it may deviate from an acceptable normative
standard. In addition, there have been several distinct
approaches to decision-making research [10]. One such
approach is "social judgment theory" [11], based on the
pioneering work of Brunswik [12]; a second approach is
"information integration theory" [13]. The most widely
known and influential research program in the psychology of decision-making was Tversky and Kahneman's work on judgment under uncertainty [14]. This
program, exemplified by work on heuristics and biases,
had a profound influence on decision research in many
disciplines.
In a "typical" decision-making study, a subject is presented with a brief decision scenario or clinical vignette (e.g., a description of a medical problem) and is required to select a course of action from a set of fixed alternatives. The response is then contrasted with a normative model, based on expected utility theory or probability theory, that indicates "optimal" choices under conditions of uncertainty [15]. Uncertainty reflects a judgment of the likelihood that a given event will occur in a particular situation (e.g., a patient's adverse reaction to medication) and is often expressed in terms of probabilities. Most studies emphasize how the decision maker deviates from normative standards. This contrasts with a descriptive approach to decision-making, in which the objective is to characterize a decision process and in which expert performance serves as the gold standard. These approaches are discussed in subsequent sections of this paper.
Decision research has greatly expanded in the last 20
years. Consider the following scenarios:
(a) A 36-year-old woman has just been informed that
she has breast cancer. She must decide whether to
undergo a radical surgical intervention that is associated with a very good survival rate or a less appearance-altering operation that carries a greater concomitant risk of mortality.
(b) An HIV-positive patient who had previously strictly
adhered to a complex combination anti-retroviral
treatment regimen and schedule has decided to stop
taking his medications for a short while because the
side effects are adversely affecting his lifestyle.
(c) ICD-10 specifies two categories of causes for heart
failure (congestive heart failure and left ventricular
heart failure) as well as a residual unspecified category.
The listing expressly excludes heart failure secondary
to obstetric surgery or secondary to renal disease.
(d) A physician studies a decision flowchart embedded
in a clinical practice guideline to prescribe antihypertensive medications to a patient with a complex medical history.
(e) An electronic medical record system, which employs
an elaborated medical vocabulary and a highly structured interface, is found to systematically affect the
diagnostic and therapeutic practices in a diabetic
clinic.
(f) A team of healthcare professionals in an intensive
care unit engages in an animated debate about
whether to wean a patient off a ventilator. The patient, who had been previously progressing satisfactorily, had a difficult night and her respiratory
status was hard to gauge.
(g) A laparoscopic gallbladder removal is proceeding normally until the surgeon has difficulty locating the bile duct. Continuing
the procedure runs the risk of serious injury to the
patient. The surgeon must decide whether to proceed or to convert the operation to open surgery.
Scenarios (a) and (b) highlight the role of patients as
decision makers. They also serve to illustrate that decision-making is not merely an analytic or dispassionate
process, but often involves an affective component [16].
It is reasonable to assert that, to varying extents, all
healthcare decisions, whether made by physicians or
patients, have an affective quality to them. It is of course
possible to model emotionally laden beliefs using subjective utilities. However, decisions carry meaning for
the participants in ways that are not easily expressible in
the common currency of utility [17]. The third scenario
(c) indicates that developing a nomenclature can be
construed as a particular kind of medical decision. ICD-10 is designed both to reflect the decision practices of physicians and to shape them. Scenarios (d) and (e)
implicate technology in the decision process. Many, if
not most, medical decisions are mediated by technology.⁴ It is our view that technology does not merely support or enhance the decision process but fundamentally transforms it. The teamwork scenario (f)
demonstrates that many medical decisions are distributed over a number of individuals with different spheres
of expertise. Even if a single individual ultimately decides on a final course of action, others are critically
involved in the process. The ICU example also
illustrates that in everyday situations, decisions are
embedded in a broader context and are part of a decision-action cycle that is affected by monitoring and
feedback rather than a single judgment [18]. In addition,
the cognitive properties of a group may differ from those
of individuals [19]. We expand on these themes in the
section on naturalistic decision-making below. Scenario
(g) shows that uncertainty changes over time and that
complex perceptual-motor judgment is not easily reducible to simple principles.
3. Psychological dimensions of decision-making
Empirical research on decision-making can be traced
back to the 1940s and 50s [20]. Much of this work was
inspired by von Neumann and Morgenstern's theory of games [7], within which social scientists in several disciplines advanced the systematic study of decision-making by developing abstract theoretical models and conducting empirical studies. Scholars in many social science disciplines, including economics, business, psychology, sociology, and political science, devoted considerable effort to applying such models and refining them to investigate diverse phenomena and to develop related applications.

⁴ Technology is used in this paper to refer to both computer-mediated technology and other artifacts (e.g., charts, paper guidelines) that are used to achieve similar ends.
Arguably, most of this research was influenced in some way by the normative or rational decision approaches, whose normative character is predicated on the use of various mathematical formalisms that are supposed to represent a standard of rationality and rational decisions. Typically, normative theories of decision-making are based on two main types of models.
The first type makes use of expected utility (EU) and subjective expected utility (SEU) as criteria for "rationality." The idea behind these models is that in making decisions one should maximize one's expected gain, calculated by weighting the payoff of each possible outcome by the chance of obtaining it. The second type makes use of the notion of conditional probability, as expressed in the subjectivist, personalist, or Bayesian perspective (the Laplace–Bayes theorem). Two attractive aspects of these
approaches are that they offer a standard to compare or
to improve actual human decision-making and that they
provide apparently well-defined mathematical models of
rational decisions. Most subsequent decision-making
research has been influenced by these approaches, especially in psychology and the behavioral sciences.
3.1. The heuristics and biases program
Claim 1. Heuristics and biases significantly impact the
process of decision-making and have been well documented in the context of health-related decisions.
In keeping with the rational choice approaches, much
of the research on the psychology of decision-making
contrasts observed decision-making to a normative
standard (e.g., SEU or Bayesian models). Systematic
deviations from normative standards are seen as decision biases. According to Chapman and Elstein [21],
biases are important for two reasons: (1) they offer
insights into the cognitive processes underlying decision-making and (2) they may be suggestive of areas where
improvement is needed. Improved decision processes
can result in better patient care and health outcomes.
By the late 1960s, psychologists had amassed a considerable body of research documenting numerous
decision-making and reasoning anomalies in individuals
[20]. It was apparent that people are not skilled Bayesians and that their probability judgments deviated from
the normative standards in systematic ways. However, a
psychologically adequate explanation for individuals'
biased judgments was entirely lacking. Kahneman and Tversky's (e.g., [22]) seminal studies and theories revolutionized the field of decision research. Their research is
exemplified by the following quotation:
How do people assess the probability of an uncertain event or
the value of an uncertain quantity? This paper shows that people rely on a limited number of heuristic principles, which reduce the complex tasks of assessing probabilities and
predicting values to simpler judgmental operations. In general,
these heuristics are quite useful, but sometimes they lead to severe and systematic errors [14].
Sometimes judgments are erroneous because we attend to variables that we should ignore and, alternatively, ignore variables that are worthy of our attention
[6]. Misleading heuristics that subjects use to generate inferences, together with biases, contribute to faulty judgments. Tversky and Kahneman argued that subjects typically fail to base judgments on perceptions of likelihood, but instead estimate the probability of an event from a population according to the representativeness of the sample. The salient features of the problem and its similarity to salient aspects of the population appear to be a guiding force in people's estimation. This is best illustrated in the context of an example. Consider the following classic problem [23]:
A cab was involved in a hit and run accident at night. Two cab
companies, the Green and Blue, operate in the city. You are given the following data.
(a) 85% of the cabs in the city are Green and 15% are
Blue.
(b) A witness identified the cab as Blue. The court tested
the reliability of the witness under the same circumstances that existed on the night of the accident and
concluded that the witness correctly identified each
of the two colors 80% of the time and failed 20%
of the time.
What is the probability that the cab involved in the
accident was Blue rather than Green?
Bayes' theorem tells us:

p(H|D) = p(D|H) p(H) / [p(D|H) p(H) + p(D|¬H) p(¬H)]

In this formulation, p(H|D) is the probability of H (the hypothesis that the cab was Blue) given D (the datum that the witness reported the car was Blue). Thus, p(H|D) is the posterior probability of H after D is known; p(H) is the prior probability that the car is Blue (stated to be 15%); p(D|H) is the probability of D given H (that is, the chance the witness will report the car to be Blue if it is really Blue, stated to be 0.8); p(D|¬H) is the chance the witness will report the car to be Blue if it is really Green (stated to be 0.2); and p(¬H) is the prior probability of not-H (in this case, the proportion of cabs that are Green rather than Blue, or 85%). Substituting into Bayes' theorem, we calculate the probability that the cab is Blue rather than Green, when the witness says it is Blue, as:

p(H|D) = (0.80 × 0.15) / (0.80 × 0.15 + 0.20 × 0.85) = 0.41
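The substitution is easy to verify in a few lines of code; the helper below is our own sketch of the two-hypothesis form of Bayes' theorem used in the example, not part of the original analysis:

```python
def posterior(prior, p_d_given_h, p_d_given_not_h):
    """Posterior p(H|D) for a binary hypothesis via Bayes' theorem."""
    numerator = p_d_given_h * prior
    return numerator / (numerator + p_d_given_not_h * (1.0 - prior))

# Cab problem: p(Blue) = 0.15, and the witness is right 80% of the time.
print(round(posterior(0.15, 0.80, 0.20), 2))  # 0.41
```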
Experience with this and many other similar examples shows that many respondents, when presented with this question, are heavily swayed by the information on the witness's accuracy (the most salient aspect of the event) and suggest that the probability that the cab is
Blue is 80%. Almost all respondents provide estimates in
excess of 50%, even though the correct answer is 41%.
They fail to consider adequately the base rate of Blue
cabs in the city (15%). There have been similar findings
in the domain of medical decision-making, which is
striking, given the frequency with which similar issues
arise in the interpretation of test results. Clinicians often
overestimate the impact of a positive test, failing to
appreciate the importance of the base rate (prevalence)
of the disease they are considering. Consider the following example [24]:
Estimate the probability that a woman has breast
cancer given that she has a positive mammogram on the
basis of the following information:
(a) The probability that a patient has breast cancer is
1%. (This provides the prior probability.)
(b) If the patient has breast cancer, the probability that
the radiologist will correctly diagnose it is 80%.
(This provides the sensitivity or hit rate.)
(c) If the patient has a benign lesion (no breast cancer),
the probability that the radiologist will misdiagnose
it is 9.6%. (This provides the false positive rate.)
According to Bayes' rule, the probability that this
patient has breast cancer is about 8%. Eddy found that
95 out of 100 physicians estimated the probability of
breast cancer after a positive mammogram to be around
75%. The test result and its sensitivity seem to be the
most salient feature and the base rate is largely ignored.
The results are analogous to those found in the accident
scenario and many other similarly documented decision
situations [25].
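The same Bayesian calculation reproduces the roughly 8% figure in the mammogram example; a quick check (the variable names are ours):

```python
# Mammogram example: prior 1%, sensitivity 80%, false-positive rate 9.6%.
prior, sensitivity, false_pos = 0.01, 0.80, 0.096
p_cancer = (sensitivity * prior) / (sensitivity * prior + false_pos * (1 - prior))
print(f"{p_cancer:.1%}")  # 7.8%
```

The posterior is dominated by the false-positive term because the disease is rare: most positive mammograms in this population come from the 99% of women without cancer.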
The two above examples illustrate the judgment under uncertainty approach pioneered by Tversky and
Kahneman and widely used by the decision-making and
judgment community. First, subjects are presented with
a problem and their responses are compared with normative responses. The deviation between individuals'
responses and the normative response is explained by
heuristics in which the reasoner selectively attends to
certain variables or exhibits particular kinds of biases
such as ignoring base rates. Biases are, more generally,
violations of consistency constraints imposed by probability theory. Representativeness and availability are
perhaps the most widely studied heuristics. Availability
is the tendency to assess the frequency, probability,
or likely causes of an event by the degree to which the
instances or occurrences of the event are readily available in memory. For example, an event that is distinct,
easily imagined, and specific will be more available than
will an event that is unemotional in nature [17]. A
physician may readily recall a vivid encounter with a
patient when treating a patient whose symptoms are
superficially similar, but different in important respects.
This may cause a range of errors in physicians' decision-making. For example, they may lend greater credibility to the likelihood of a rare disorder without proper consideration of base rates.
Researchers have documented numerous heuristics
including anchoring and adjustment, and simulation
(i.e., if an individual can readily envision an event, she is
more likely to assign it a higher likelihood of occurrence
than an alternative) and many biases such as insensitivity
to sample size, illusory correlations, overconfidence, and
hindsight bias. Studies have similarly documented a wide
range of biases in physicians' decision-making. In a recent review of the literature, Chapman and Elstein [21]
consider 12 distinct biases. We briefly review some of them
here. They fall into two broad classes: (1) biases that
emerge when judging the probability of events such as
possible diagnoses and treatment outcomes and (2) biases that occur in evaluating the utility of an outcome.
Hindsight bias occurs when decision makers inflate the probability that they would have correctly diagnosed a patient. Arkes and Harkness [26] presented a
clinical case to one group of physicians along with four
diagnostic hypotheses. They were asked to estimate the
probability of each diagnosis. In a second experiment,
physicians were presented with the same case and diagnoses. However, they were also told of the correct
diagnosis before estimating the probabilities. The
groups differed systematically in their judgments. The
group which had been told the correct diagnosis inflated
its probability. The bias results in a non-normative
judgment, since knowing the outcome should in theory
have no effect on probability judgments. Several studies
similarly documented that knowledge of an outcome
focuses the decision maker's attention on case information supportive of the known hypothesis and causes them to ignore information that made alternative
diagnoses plausible. Dawson and Arkes [27] argue that
hindsight bias may hinder learning from cases if physicians assume that the clinical outcome is predictable.
Postmortem evaluations of clinical cases are a crucial
component of continuing medical education (e.g.,
medical grand rounds). Hindsight bias has also been
cited as a reason for the misattribution of error in
quality analysis [4]. Outcome bias is similar to hindsight
bias in that knowledge of the outcome skews the physician's perception of a problem. Decisions are evaluated
more favorably if they lead to good outcomes rather
than poor ones, even when both are based on equally sound
clinical judgments [21].
Hypothesis testing has been widely studied in many
spheres of decision-making including medicine. Confirmation bias is perhaps the most widely documented
deviation from Bayes' theorem.⁵ The bias is evidenced
by the generation of a hypothesis and the subsequent
search for evidence consistent with the hypothesis, often leading to the failure to consider adequately the
alternative diagnostic possibilities. This may result in a
less than thorough investigation with possibly adverse
consequences for the patient. A desire to confirm one's preferred hypothesis may moreover contribute to increased inefficiency and costs through the ordering of additional laboratory tests that do little to revise one's opinion, providing largely redundant data⁶ [21]. Furthermore, such laboratory tests may increase one's
confidence in the hypothesis without increasing the
accuracy.
The framing effect, a robust finding in the decision
literature, suggests that alternate representations of a
problem can give rise to different judgments and preferences. This form of bias has received considerable
attention in decision-making research. The preference
for a particular course of action is different when a
problem is posed in terms of potential gain rather than
potential loss, even though the underlying situation is
identical. McNeil et al. [28] presented a hypothetical
lung cancer decision scenario to physicians and patients.
In one framing of the problem, the treatments were
described in terms of survival rates, whereas in the other
they were described as mortality rates. The treatment
options were surgery and radiation therapy, the latter of
which had an immediate higher survival (lower mortality) rate, but a lower 5-year survival rate. In the survival
frame there was a clear preference for surgery, whereas
in the mortality frame, the two choices were preferred
equally. One possible explanation is that the positive
framing leads to more risk-averse choices, while the
negative framing increases risk-seeking decision-making
[21].
Research on heuristics and biases has yielded a substantial body of knowledge on medical decision-making, with a particular emphasis on how decision makers systematically deviate from a certain standard of rationality and coherence as defined by probability theory [31].

5 Confirmation bias has also been criticized on logical grounds [29]. The crux of the argument is that it is an example of the logical fallacy of affirming the consequent (e.g., if it rains, then it will be wet; it is wet, therefore it rained). Because of this, falsification, rather than confirmation, has been proposed as the rational and only valid type of reasoning in science [30].

6 It has been demonstrated in many different domains, including medicine, that experts often pursue a single hypothesis to the exclusion of alternatives. In the overwhelming majority of cases, such a strategy is well justified. However, it is a more problematic strategy when used by less than expert subjects. For a more thorough discussion of this issue, see Patel et al. [3].

The prescriptive program in medical decision-making, as embodied in decision analysis techniques and to some extent in evidence-based medicine,
draws on similar notions of rational decision-making. In addition, training programs have endeavored to improve clinicians' decision-making acumen, even though the prescriptive program has not been an unequivocal success thus far. There are undoubtedly numerous reasons why many decision-support systems lead to suboptimal outcomes. However, we believe that the framing assumptions about the decision-making process are at best seriously incomplete. In the next section, we consider some criticisms of traditional decision-making research.
3.2. The critique of the classical approach to decision-making
Claim 2. The classical approach, as exemplified by heuristics and biases research, does not adequately characterize the decision-making process.
In the previous section, we selectively surveyed a fraction of the voluminous body of research in the ‘‘heuristics and biases’’ approach to medical decision-making and judgment. This remarkably productive area of study has been subject to substantial criticism, both conceptual and empirical. In this section, we discuss several of these criticisms, which challenge fundamental assumptions of the traditional decision-making paradigm. The first set of criticisms is philosophical in scope, in that they question the particular interpretations used to circumscribe decision problems, including whether the SEU or Bayesian frameworks constitute appropriate frames of reference for assessing human rationality. The second set of critiques questions the ecological validity of traditional decision research, focusing on the limitations of the classical approach in meaningfully informing or influencing ‘‘real-world’’ decision-making.
A first philosophical criticism takes issue with whether principles from SEU or Bayesian probability theory provide the best gold standard for evaluating people's decisions. Recall that because SEU and Bayesian models are considered standards of optimal decision-making, decisions that deviate from these standards are seen as suboptimal or biased. However, critics argue that the term ‘‘rationality’’ covers many different meanings [32], only one of which (rationality as maximizing gain, or economic rationality) is captured by normative decision theories. It can therefore be argued that someone can behave rationally in other ways (e.g., ethically) while behaving irrationally in decision-theoretic terms. The implication is that individuals cannot be faulted (and their judgments are not necessarily biased or non-rational) if they make decisions in ways that contradict SEU principles. Such critics accordingly ask whether classical decision theory's use of normative decision principles is an appropriate standard for evaluating (or facilitating) decision-making. We raise these criticisms principally because they opened the door for alternate proposals on decision-making that are arguably based on a more empirically adequate foundation [33,34].
A second philosophical criticism [35,36] has been raised against the use of Bayesian probability, specifically against the definition by some workers of probability as the strength of belief in a hypothesis. First, it can be argued that it is very restrictive (and conceptually wrong) to define probability as measuring strength of belief, because probability theory, being a branch of mathematics [37], has no factual interpretation. Similarly, in number theory, numbers do not refer to anything extra-mathematical. Those who define probabilities as measures of strength of belief make Bayesian probability a psychological theory, for the simple reason that ‘‘belief’’ is a psychological construct (not a mathematical one). Second, even if it is maintained that the definition of probability as degree of belief holds only for the application of Bayesian probability to factual problems, such an interpretation is problematic. For instance, what does a physician mean when he or she says that there is a probability of 0.75 that a patient has cancer? What does the probability refer to? The common-sense interpretation is that this probability value refers to the chance that the patient has cancer, which is an actual or possible objective event in the world, not a belief. From a Bayesian perspective, it is unclear to what this probability refers (to the chance of such a belief popping into the physician's head?). This is further complicated because there are no scientifically acceptable procedures for assigning probabilities to beliefs, except arbitrarily. Furthermore, there is evidence to suggest that most people are not Bayesians and that their decision-making improves when probability is presented in objective terms [38,39].
Empirically based criticisms have arisen from the experimental study of decision-making in a variety of domains. As described above, psychological research has compared how people actually make decisions with how they would do so under the principles of rational decision theory, and found that people do not meet the normative principles. Second, researchers in economics, organizational science, and management [40–43] have criticized the normative theory based on their own theoretical and empirical studies of organizational decision-making. Two early critics of the normative approach were Maurice Allais and Herbert Simon (both Nobel laureates in Economics). Allais [40] proposed what has become known as the Allais ‘‘paradox,’’ which showed that utility-maximization principles are contradicted by effective real-life decisions (most decision makers tend to be risk averse). Simon [43] showed that people are ‘‘satisficers’’ rather than ‘‘maximizers’’ [44]; that is, decisions are determined by opportunity, availability, uncertainty about the consequences of action, and personal preferences [42], and are constrained by cognitive limitations. Simon introduced the concept of ‘‘bounded rationality’’ [43] as a challenge to the prevailing economic models of the rational decision maker. Human beings have significant information-processing limitations (e.g., attention, memory, and perceptual constraints) and need to rely on simplifying heuristics. However, these heuristics, instead of being viewed as erroneous, can be seen as powerful and effective strategies for making many everyday decisions. In this regard, the normative decision model, characterized by the systematic evaluation of multiple options or the simultaneous consideration of several hypotheses, is simply untenable as a psychological theory. In recent years, there have been proposals of alternative normative theories and approaches (e.g., [45–48]) that do not impose the stringent requirements of standard rational choice theory (i.e., utility maximization and knowledge of all available options).
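The maximizing/satisficing contrast can be made concrete with a small sketch. The option labels, values, and aspiration level below are invented for illustration and are not drawn from Simon's work; the point is only the difference in search behavior:

```python
# A minimal contrast between classical maximizing and Simon's
# "satisficing," under the usual textbook reading of the distinction.
# Option values and the aspiration level are invented.

OPTIONS = [("A", 0.62), ("B", 0.71), ("C", 0.93), ("D", 0.88)]

def maximize(options):
    """Classical rational choice: examine every option, pick the best."""
    return max(options, key=lambda o: o[1])[0]

def satisfice(options, aspiration=0.7):
    """Simon: stop at the first option that is good enough."""
    for name, value in options:
        if value >= aspiration:
            return name
    return options[-1][0]  # settle for the last option seen

print(maximize(OPTIONS))   # C (requires scanning all options)
print(satisfice(OPTIONS))  # B (stops early; C and D never examined)
```

Note that the satisficer commits to B without ever learning that C exists, which is exactly why it is cheap under the information-processing limitations described above.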
Third, when alternative interpretations of probability [36,49] are used in psychological studies of decision-making [38,39], different results are obtained. Gigerenzer [38] and Cosmides and Tooby [39] have conducted research suggesting that the Bayesian interpretation typically used by classical decision theorists may be inappropriate from both prescriptive and descriptive viewpoints. For example, in the frequentist interpretation, probabilities refer to relative frequencies over time. They argue that the frequentist interpretation more naturally reflects the way human beings reason under conditions of uncertainty. In particular, studies using tasks similar to those used by Tversky and Kahneman, but expressed in frequentist terms, show that people make use of probabilistic information while exhibiting fewer heuristics and biases [39].
Fourth, descriptive decision research (as reflected in heuristics and biases) portrays decision makers as fallible reasoners whose judgments are often substantially at variance with the normative standard of rationality. A semantic issue raised by Gigerenzer [50] poses another methodological problem for this research paradigm. If there are different meanings of probability, then how are they cued in everyday language, and how does this affect judgment? The findings from this work are partially predicated on how individuals interpret the meaning of probability. Gigerenzer argues that humans are evolutionarily adapted to acquire information about risks in their environment through the natural sampling of event frequencies, rather than in terms of sets of probabilities or percentages. He argues further that one can make perfectly rational inferences and decisions without overtly paying attention to base rates. Human beings may appear less competent or rational if they are asked to render judgments based on explicit statistical information. He and his colleagues have conducted several studies that juxtapose decision problems formatted in probability terms and as natural frequencies. In one of these studies, he modified Eddy's [24] mammography problem in the following way to represent natural frequencies:
• Ten out of every 1000 women have breast cancer.
• Out of these 10 women with breast cancer, 8 will have
a positive mammogram.
• Of the remaining 990 women without breast cancer,
99 will still have a positive mammogram.
• Imagine a sample of 100 women (age 40–50, no symptoms), who have positive mammograms in your
breast cancer screening. How many do actually have
breast cancer?
In the probability-formatted version, Gigerenzer essentially replicated Eddy's finding, with only 8% of the physicians producing the correct Bayesian answer and a median estimate of a 70% probability. With the natural-frequency format, however, 46% of the physicians produced the correct response. The findings were similar across three other diagnostic problems. From our vantage point, the critical finding is that the representation of problem information can have a rather dramatic effect on performance and on judgments of competence or incompetence. It also shows that human beings are sensitive to statistical information in their environment and that this influences their judgments under uncertainty. Decision makers are attuned to probabilities, even if they cannot articulate an accurate probability estimate.
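The arithmetic behind the correct Bayesian answer can be checked directly. The sketch below uses the figures from the list above (10 in 1000 with cancer, 8 of 10 true positives, 99 of 990 false positives), with the probability-format parameters taken as their straightforward conversions (1% prevalence, 80% sensitivity, 10% false-positive rate):

```python
# A worked check of the mammography problem above. Both formats yield
# the same posterior of roughly 7.5%, i.e., 8 true positives among the
# 8 + 99 = 107 women per 1000 who test positive.

def bayes_posterior(prior, sensitivity, false_pos_rate):
    """P(disease | positive test) via Bayes' Theorem."""
    p_positive = sensitivity * prior + false_pos_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Probability format: requires explicit application of Bayes' Theorem
posterior = bayes_posterior(prior=0.01, sensitivity=0.8, false_pos_rate=0.1)

# Natural-frequency format: read the answer off the counts directly
freq_answer = 8 / (8 + 99)

print(round(posterior, 3), round(freq_answer, 3))  # 0.075 0.075
```

The frequency format makes the answer visible by simple counting, whereas the probability format demands the full Bayesian computation, which is one reading of why performance differed so sharply across formats.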
Gigerenzer's research is not without controversy [51], and an extended discussion of his theories is beyond the scope of this paper. His work serves to highlight the adaptive (rather than the suboptimal) character of decision-making, a point that is strongly emphasized in problem-solving and naturalistic decision-making research (reviewed in subsequent sections). From a methodological point of view, his research also serves to emphasize how the cuing of language can immensely influence the understanding of probabilistic information.
Medin and Bazerman [17] argue that the heuristics and biases approach has been overly constrained by a focus on how people make mistakes at the point of decision. Research on heuristics and biases has implicitly assumed that the goal is known and that the details of implementing decisions are not part of the problem. Gigerenzer [50] challenges the belief that there is a single normative standard and that most individuals make decisions substantially at variance with such norms.
There have been several educational initiatives intended to train physicians and other healthcare personnel in formal decision-analytic techniques, to reduce the effects of biases and to improve clinical decisions [52]. As discussed previously, it is widely recognized that formal decision-making techniques have not achieved widespread acceptance in medical practice or in other professions [53]. In addition, teaching decision-analytic techniques to professionals has yielded mixed results [31]. The use of formal decision methods has often not resulted in sustained and generalizable improvements in physicians' diagnostic decisions [54] or in their therapeutic interventions [55]. There are several reasons why decision-analytic techniques may yield suboptimal results in naturalistic decision situations [56]. For example, in many cases the decision maker may not view the task as one of choice, and even when the task does involve a selection, much effort may be required to identify the available alternatives. In addition, much of the information may resist the quantification required for implementing formal decision models.
We should point out that decision analysis and decision-support systems are by no means fossilized or static enterprises. Rather, they continue to be vibrant fields of research, and significant advances have been made in several areas of study, including formal methods for modeling uncertainty in reasoning and decision-making [57]. We do not doubt that emerging systems can contribute to improving the process of clinical decision-making. Our contention, however, is that basic theoretical assumptions about decision makers and decision situations need to be reconsidered.
One of the primary reasons that training with normative decision-making models does not endure in practice is that such models are not readily applicable when decisions must be made under the kinds of constraints (e.g., stress, time pressure, and limited resources) found in many natural settings [57]. It is apparent that we need a better understanding of the process of decision-making in real-world situations. Decision-making research in dynamic ‘‘real-world’’ environments has investigated domains ranging from fire fighting to air traffic control [58], and healthcare domains such as anesthesiology [59], emergency nursing telephone triage [60] and, most recently, intensive care medicine [61]. We find the naturalistic critique quite compelling. One can argue that the impoverished situations presented to physicians in judgment studies may have no true analog in the world of clinical medicine. Although we were critical of traditional decision-making research in an earlier publication [62], we have reappraised our views on the matter and see value in the approach despite its limitations and lack of ecological validity. We concur with Medin and Bazerman [17], who suggest that the heuristics and biases literature has yielded a fascinating catalog of human decision errors that is important for both theoretical and practical reasons.
We have reviewed a number of conceptual and empirical criticisms pertaining to classical decision research. In particular, critics have addressed the privileged and narrow view of rationality that permeates much of the decision research. In addition, this approach is somewhat lacking in ecological validity, which may compromise the extent to which it can inform applications in decision support. In the next section, we focus on several alternative approaches to the study of decision-making, ones that both expand the scope of decision research and offer a different view of decision-making competency.
4. Expanding the scope of decision research
This section considers areas of research that intersect with the study of decision-making but have not been widely considered in traditional decision research. These areas have developed in parallel with the traditional investigations and have collectively contributed to a broadening of the framework of how individuals make decisions. A newly emerging framework, as discussed in this section, needs: (1) to develop a more adequate descriptive account of the decision-making process; (2) to explain the adaptive as well as the suboptimal characteristics of decision makers; and (3) to recognize that decision makers are not solitary thinkers, but live in a social world thick with artifacts and populated by other agents who jointly determine decision processes and outcomes.
There are several innovative research programs that could be discussed in this section. We have limited the discussion to issues that we view as particularly important in refining the basic science framework for decision-making. In addition, the areas discussed in this paper reflect problems that have been central to our own work in decision-making. The one caveat is that the reviewed research is somewhat skewed towards our own interests, and our own contributions are heavily represented. The research discussed in the next two sections places emphasis on the acquisition of conceptual knowledge and on understanding the growth of decision-making acumen as a function of expertise. This work is grounded in ‘‘mainstream’’ information-processing theories of cognition. That research serves to emphasize dimensions of an individual's competency (e.g., domain knowledge) that influence decisions in critical ways. We also draw parallels with early research in medical artificial intelligence. These researchers were confronted with similar issues, most notably the intractability of pure Bayesian approaches to decision-making and the brittleness of systems that lack a certain conceptual depth. In important respects, the final two sections focus on the need for a richer understanding of how decision-making occurs in real-world settings, as mediated by teamwork and technologies. Although much of this work is steeped in older intellectual traditions, it is just beginning to recast decision-making in a new light, one that is substantially at variance with the classical tradition.
4.1. Problem solving and decision-making
Claim 3. Medical decision-making research and problem-solving research employ distinct theoretical and methodological approaches, drawing on diverse historical traditions to study the same phenomena and arriving at substantially different conclusions.

Claim 4. Decision heuristics and biases often form the basis of robust reasoning strategies by expert clinicians.
The term medical cognition refers to studies of cognitive processes, such as perception, comprehension, reasoning, decision-making, and problem solving, in medical practice itself or in tasks representative of medical practice. Much of the research in this area can be subsumed under one of two distinct theoretical and methodological approaches: a decision-making and judgment tradition, as exemplified by the work previously described, and a problem-solving and expertise approach [62–64]. Several differences exist between these two traditions. First, problem-solving research emphasizes the characterization of cognitive processes in reasoning tasks, the use of protocol-analytic techniques [65], and the development of cognitive models of performance [66], whereas decision-making research focuses on how and why decisions deviate from a certain standard of rationality. Second, problem-solving research views expert performance as a gold standard, whereas decision research views expert performance as fallible and subject to the same biases and faulty decision practices as the layperson's. Third, problem-solving research focuses on the role of expert knowledge organization in performance, whereas traditional decision research places less emphasis on the role of domain-specific knowledge.7 Further differences between these two approaches are discussed in more detail in Patel et al. [3,62].
Despite these differences, there is much common ground between the two approaches, such as a focus on diagnostic and therapeutic tasks. In addition, decision-making and problem-solving research have common intellectual roots, in that both were influenced by the seminal ideas of Simon's conception of bounded rationality [43] and were given a certain impetus by the emergence of cognitive science in the 1950s [66]. However, there have been numerous points of convergence and divergence in the arenas of medical cognition and medical artificial intelligence over the course of the last several decades. For the most part, researchers in medical cognition have worked either in the problem-solving or the decision-making tradition, although there have been investigators (e.g., Elstein) who have made substantial contributions to both endeavors.

7 In the traditional laboratory-based decision approach, subjects are often placed in situations where prior knowledge becomes (almost) irrelevant to the kind of decision choices that one must make.
The guiding metaphor for decision research has been rational choice among alternatives. In problem-solving research, a key concept is search in the problem space, in which a problem solver is viewed as selecting an operation (either an inference or an action) from a space of possible operations in moving toward a solution or goal state (e.g., a diagnosis or treatment plan) [8]. The problem space places a greater emphasis on an evolving process rather than a fixed selection process. This conceptualization had an enormous impact on both cognitive science and artificial intelligence research, enabling researchers both to study search strategies in human problem solvers and to develop computational models that embody them. Elstein et al. [9] were the first to employ problem-solving methods and theories to study clinical competency. Their seminal research led to the development of an elaborated model of hypothetico-deductive reasoning, which proposed that physicians reason by first generating and then systematically testing a set of hypotheses to account for clinical data (i.e., reasoning from hypothesis to data). This model of problem solving has had a substantial influence on studies of both medical cognition and medical education, although its generality is the subject of some controversy [67].
Medical artificial intelligence (AI), and in particular research on knowledge-based systems, seeded important ideas that guided work on medical problem solving. Although AI in medicine has more openly embraced decision-analytic methods in recent years (constituting significant advances in representational and modeling techniques), most of the early work emphasized symbolic computation rather than numeric information, and relied on heuristic search methods rather than Bayesian methods [68]. Purely Bayesian methods of analysis were not viewed as tractable for real-world problems for a number of reasons, most notably their inordinate demands for data and knowledge of conditional probabilities [69] and the inability to define and adequately account for conditional dependencies.
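The data demands of a purely Bayesian approach can be illustrated with a back-of-the-envelope count. The finding counts below are hypothetical and not taken from any particular system:

```python
# Why purely Bayesian diagnosis was viewed as intractable: with n binary
# findings and no independence assumptions, specifying the full joint
# distribution requires 2**n - 1 probabilities, which must somehow be
# estimated from data or elicited from experts.

def joint_table_size(n_findings):
    """Free parameters in a full joint over n binary findings."""
    return 2 ** n_findings - 1

for n in (10, 20, 30):
    print(n, joint_table_size(n))
# 10 findings need 1023 parameters; 20 need about a million;
# 30 need about a billion.
```

Heuristic, symbolic approaches sidestepped this explosion by encoding only the finding/hypothesis links that clinicians actually use, at the cost of the formal guarantees that a full probabilistic model provides.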
Empirical methods (e.g., protocol analysis) and theories from cognitive science were used to develop cognitive models that shaped the development of medical AI systems. For example, Gorry [70] examined the ways in which a computational model of medical problem solving compared with the actual problem-solving behavior of physicians. This analysis provided a basis for characterizing a sequential process of medical decision-making, one that differs in important respects from early diagnostic computational systems based on Bayes' theorem. Pauker et al. [71] drew on Gorry's work and developed PIP, a program designed to take the present illness of a patient with renal disease. Several of the questions guiding this research, including the nature and organization of expert knowledge, were of central concern both to developers of medical expert systems and to researchers in medical cognition. The development and refinement of the program were partially based on studies of medical problem solving. In addition, the program embodies a cognitive model of patient history taking and architectural assumptions about human memory systems. PIP employs categorical methods to derive hypotheses, and both probabilistic and categorical methods to evaluate them.
Clancey's research on intelligent tutoring systems (extending and then reconfiguring MYCIN [72]) led to a highly influential cognitive model of medical problem solving [73]. In particular, he introduced key epistemological distinctions that were to have a substantial influence on the conceptualization of medical expertise. Clancey distinguished between findings, hypotheses, evidence (finding/hypothesis links), justifications (why a finding/hypothesis link is true), structure (how findings and hypotheses are related among themselves), and strategy (why a finding request comes to mind). These distinctions provided a more refined basis for characterizing clinical reasoning strategies and medical explanation [74].
In general, researchers in medical problem solving and decision-making have employed different methodologies and largely addressed different issues. However, there have been notable attempts to reconcile the two traditions. Towards this end, Joseph and Patel [75] asked experts and subexperts (seasoned physicians working outside their own specialty area) to think aloud and explain clinical data on an endocrine problem presented in a sequential form, one sentence at a time (mirroring to a certain extent the interactive clinical information-gathering process). Joseph and Patel found that experts generated the correct hypothesis early in the problem and devoted the rest of the time to confirming and refining the diagnosis by explaining the rest of the patient data. Although the subexperts also generated the correct diagnosis, they took longer to reach the final decision. The main difference appeared to be the subexperts' difficulty in evaluating hypotheses, which resulted in an inability to eliminate incorrect alternatives. Previous research [76] showed that when experts include the correct diagnostic hypothesis in the initial hypothesis set, subsequent processing is directed at the confirmation of that hypothesis rather than at the generation of new hypotheses. However, if the correct hypothesis is not included, then further processing includes the generation of alternate hypotheses as new data are presented.
A subsequent study, using the same experimental paradigm, compared the diagnostic decision-making processes of senior physicians, cardiologists, and endocrinologists on a cardiac problem [62]. The problem described the case of a 62-year-old man who was diagnosed as having pericardial effusion with pre-tamponade, a condition in which the heart is compressed by an accumulation of fluid in the pericardial sac, thereby preventing its normal expansion. The results showed that experts interpreted problem data from the first few segments in terms of diagnostic hypotheses and that, once the experts generated these hypotheses, they used them as a basis for evaluating subsequently presented data, without introducing any new hypotheses. In contrast, subexperts continued to generate new hypotheses, even after producing the correct diagnosis. From a decision-making perspective, the experts in the studies by Joseph and Patel [75] and Patel et al. [3,62] may have been guilty of a confirmation bias. As this study and many others illustrate, however, this bias is highly productive in most situations.
Lesgold et al. [77] documented similar findings in the domain of radiological diagnosis. They investigated the abilities of radiologists at different levels of expertise in the interpretation of chest X-ray pictures. The results revealed that the experts were able to initially detect a general pattern of disease, which resulted in a gross anatomical localization and served to constrain the possible interpretations. Novices had greater difficulty in focusing on the important structures and were more likely to maintain inappropriate interpretations despite discrepant findings in the patient history. This rapid pattern-recognition process is characteristic of experts in both perceptual and richly symbolic domains [78].
Kushniruk et al. [79] studied the process of intensive-care decision-making with cases that showed varying lung-scan results as well as varying clinical evidence for pulmonary embolism. For example, a complex case might include a low-probability lung scan and high-probability clinical evidence. The subjects were asked to provide a differential diagnosis and a therapeutic and management plan. The most salient differences emerged in the complex cases, where the intermediates (residents) proposed a treatment strategy based on the evaluation of the available evidence, even when it was clearly equivocal or contradictory. In contrast, experts (critical care specialists) first stabilized the patient and deferred their decision pending the results of further tests and investigations. Experts engaged in a process of situation assessment (for example, a careful consideration of the patient's history and events in the hospital), whereas the intermediates were much more proactive in considering treatment options and ordering further investigations.
These studies illustrate a view of heuristics that contrasts with the view in the classical decision-making paradigm. In this research, expert strategies, which include a range of heuristics, are associated with high levels of accuracy. Experts represent the problem in such a way that recognizable patterns emerge from the data, thereby minimizing extraneous search through a myriad of irrelevant information and extraneous hypotheses. Interestingly, some of the expert heuristics are suggestive of biases that would be labeled as problematic according to the standards of decision research. It is certainly conceivable that a confirmation bias may occasionally prejudice the most seasoned practitioner and lead him or her to misdiagnose a problem. In the cases reported in many studies of expertise, however, heuristics serve to generate the correct decision in an economical manner. In this sense, expert strategies are immensely adaptive. There is a substantial body of research on medical problem solving [62], and more generally in other domains, that illuminates the ways in which solution strategies are instantiated in diverse contexts. This research also suggests a continuum of skill acquisition that could serve as a benchmark for instruction and training. In general, problem-solving studies are more ‘‘diagnostic’’ in specifying potential sources of error as well as in characterizing the productive roots of expert performance.
4.2. Conceptual knowledge and decision strategies in
medicine
Claim 5. Conceptual knowledge differs in important respects from procedural knowledge and has a qualitatively
distinct and predictable effect on decision practices.
Decision-making research typically focuses on decision strategies and characterizes acumen in probabilistic
judgment. Although decision strategies are equally important in problem-solving research, the explanatory
focus is often on differences in content knowledge. Surprisingly, differences in content or conceptual knowledge
have received scant attention in the decision literature.
As discussed in the last section, our own research on
diagnostic reasoning has been strongly influenced by the
problem-solving approach. One of our primary analytic
foci is the differential role of basic science knowledge
(e.g., physiology and biochemistry) in solving problems
of varying complexities and differences between subjects
at different levels of expertise (see [62] for a more extended treatment of these issues). This has been a source
of controversy in the study of medical cognition [80,81]
as well as in medical education and artificial intelligence.
As expertise develops, the disease knowledge of a clinician becomes more dependent on clinical experience, and
clinical problem solving is increasingly guided by the use
of exemplars and analogy, becoming less dependent on a
functional understanding of the system in question.
However, an in-depth conceptual understanding of basic
science plays a central role in reasoning about complex
problems and is also important in generating explanations and justifications for decisions.
AI researchers were confronted with similar problems
in extending the utility of systems beyond their immediate knowledge base. The problem with most first-generation systems was that they were inherently brittle
in that they exhibited a sudden performance degradation when the problem at hand was near or beyond the
limits of their domain knowledge. Many subsequent
medical expert systems have attempted to overcome the
brittleness problem by explicitly incorporating knowledge of the underlying pathophysiological mechanisms.
Biomedical knowledge can serve different functional
roles depending on the goals of the system [82]. Most
cases of diagnostic reasoning could be construed as a
process of classification involving the subsumption of
clinical findings under a malfunction hypothesis. However, it is sometimes necessary to identify the structural
fault that has led to the aberrant behavior. To engage in
this type of causal reasoning, an agent needs knowledge
of the space of possible malfunctions and knowledge
that relates observations to malfunctions. This necessitates a certain understanding of how behavior, structure, and function interrelate.
Certain expert systems explicitly encoded biomedical
knowledge in a multi-level causal network. This approach was exemplified by ABEL, a consultation system
for electrolyte and acid–base disorders [83]. ABEL attempted to identify the disease process causing a patient's illness. Knowledge was encoded in a hierarchical
semantic network and could explain pathophysiological
states in varying degrees of granularity, for example,
from clinical levels to specific biochemical processes.
ABEL attempted to account for the clinical findings by
developing a multi-level explanation of the problem,
known as a patient-specific model [83]. The program
constructed this explanation by navigating between
levels via processes such as aggregation (summarizes the
description to the next more aggregate level) and elaboration (elaborates the description to the next more
detailed level). The pathophysiological description provided the ability to solve complex clinical situations with
multiple etiologies and to organize large amounts of
information into a coherent causal explanation [83].
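The aggregation and elaboration operations can be pictured with a toy hierarchical causal network. This is an illustrative sketch only, not ABEL's actual representation; the node labels and the acid–base fragment are invented for exposition.

```python
# Sketch of a multi-level causal network with ABEL-style navigation
# operations. Hypothetical illustration, not ABEL's implementation.

class CausalNode:
    def __init__(self, label, level):
        self.label = label
        self.level = level        # e.g., "clinical", "physiological", "biochemical"
        self.detail = []          # finer-grained nodes (elaboration targets)
        self.summary = None       # coarser node (aggregation target)

    def add_detail(self, node):
        node.summary = self
        self.detail.append(node)

def elaborate(node):
    """Move the description to the next more detailed level."""
    return node.detail

def aggregate(node):
    """Summarize the description at the next more aggregate level."""
    return node.summary

# A fragment of a patient-specific model for metabolic acidosis.
clinical = CausalNode("metabolic acidosis", "clinical")
physio = CausalNode("bicarbonate loss", "physiological")
biochem = CausalNode("buffering of excess acid consumes HCO3-", "biochemical")
clinical.add_detail(physio)
physio.add_detail(biochem)
```

Navigating down via `elaborate` and back up via `aggregate` is what lets such a program explain the same finding at clinical or biochemical granularity.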
ABEL was but one such example of how ‘‘deep
knowledge’’ is needed to confront complex clinical decisions employing causal reasoning and symbolic
knowledge. Systems such as MDX-2 [82] or QSIM [84]
had an explicit representation of structural components
and their relations, the functions of these components
(in essence their purpose), and their relationship to behavioral states. The causal and diagnostic knowledge
could be generated by ‘‘running’’ or simulating the
system and qualitatively deriving behavioral sequences
that could identify and explain the malfunction. The
knowledge was not precompiled as in ABEL, but could
be generated in real time to find fault in a system. This
principled knowledge could theoretically be used to
generate the widest range of possible diagnostic hypotheses and to explain multi-system conditions that the
program had never previously encountered.
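The flavor of deriving behavioral sequences by simulation, rather than retrieving precompiled knowledge, can be conveyed with a deliberately minimal sketch. The landmark values and transition rules below are simplifications invented here, not QSIM's actual algorithm.

```python
# Minimal qualitative-simulation sketch: a variable has a qualitative
# magnitude and a direction of change, and successor states are derived
# by rules rather than numeric integration. Hypothetical illustration.

LANDMARKS = ["low", "normal", "high"]

def successors(state):
    """Enumerate qualitatively possible next states for one variable."""
    mag, direction = state
    i = LANDMARKS.index(mag)
    out = []
    if direction == "inc" and i < len(LANDMARKS) - 1:
        out.append((LANDMARKS[i + 1], "inc"))
    if direction == "dec" and i > 0:
        out.append((LANDMARKS[i - 1], "dec"))
    out.append((mag, "std"))   # the change may level off at any point
    return out

# Behavioral sequence: a quantity rising from normal, following the
# first branch at each step until it reaches a steady state.
trace = [("normal", "inc")]
while trace[-1][1] != "std":
    trace.append(successors(trace[-1])[0])
```

Because the sequence is generated at run time, such a system can in principle explain behaviors it has never stored explicitly.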
The kinds of conceptual knowledge needed to reason
productively are an important issue in both medical AI
and medical cognition. How to effectively impart this
knowledge has also been a subject of much debate in
medical education [62]. We now consider studies that
examine the relationship between conceptual knowledge
and medical decision-making.
Kaufman et al. [85] sketch a cognitive framework for
characterizing medical decision-making in patients as
well as physicians. The objective of this framework was
to describe different kinds of knowledge and reasoning
strategies that support healthcare decision practices. It is
useful to distinguish among three kinds of knowledge:
factual, conceptual, and procedural. Factual knowledge
merely reflects knowing a fact or a set of facts (e.g., risk
factors for coronary artery disease) without any in-depth understanding. Facts are routinely disseminated
through a wide range of sources such as pamphlets and
pharmaceutical labels. The acquisition of factual
knowledge alone would not necessarily lead to any increase in understanding or to behavioral change. We
are particularly skeptical of continuing medical or patient education initiatives that promote blind adherence to a course of action without targeting understanding in any meaningful sense.
The acquisition of conceptual knowledge involves the
integration of new information with prior knowledge
and necessitates a deeper level of understanding. For
example, risk factors may be associated in the physician's
mind with physiological and biochemical mechanisms
and typical patient presentations. Conceptual understanding can support the explanation and may result in
appropriate actions. Procedural knowledge is a kind of
knowing related to how to perform various activities. It is
knowledge that is more instrumentally connected to
immediate action. Decision rules, as represented in
clinical guidelines, embody a kind of procedural knowledge. In the absence of conceptual knowledge, procedural knowledge has a rather limited range of
applicability. Similarly, conceptual knowledge alone
may not readily translate into action or appropriate decision choices. This phenomenon, known as inert
knowledge, is commonly observed in students who may
have an in-depth textbook understanding of disease, but
cannot instantiate this knowledge in practical situations.
The integration of conceptual and procedural knowledge results
in more generative and robust knowledge that is more
readily transferable across a range of (superficially dissimilar) clinical situations. However, they are often not
tightly integrated and may in fact be in conflict. For
example, a physician may demonstrate a certain understanding of specific concepts, but may use decision
strategies that are inconsistent with this knowledge.
Conversely, a physician may take the appropriate actions or decisions without conceptual understanding.
This dissociation may reflect correct performance without articulated knowledge or, alternatively, accurate knowledge followed by inappropriate action. This decoupling of knowledge and action has been documented
in several studies of medical decision-making. Poses et
al. [55] found that teaching physicians how to improve
their estimates of disease probabilities in regard to
streptococcal pharyngitis did not affect their treatment
decisions. Elstein et al. [86] found that physicians' beliefs and understanding of hormonal replacement therapy were not predictive of the kinds of decisions they made when presented with related clinical cases.
Kaufman et al. [85] examined physicians' understanding of concepts and decision-making strategies in
problems pertaining to hypercholesterolemia and coronary heart disease. The study was carried out in two
phases: (1) a simulated clinical interview in which two
clinical problems were presented and (2) a session in
which subjects responded to a series of questions. The
questions were related to the analysis of risk factors, diagnostic criteria for determining elevated lipid values,
and differential diagnoses for lipid disorders. The results
indicate that all subjects exhibited gaps in their conceptual understanding. In particular, most physicians demonstrated a lack of knowledge of the primary genetic
disorders that contribute to coronary heart disease, as
well as deficiencies in understanding the secondary causes
of hypercholesterolemia. The majority of subjects tended
to overestimate the lipid value intervals for determining
patients at high risk. Physicians had no difficulty in diagnosing the first patient problem of familial hypercholesterolemia (a relatively straightforward decision rule of
family history coupled with a particular lipoprotein
profile), but failed to identify the problem of elevated
lipids secondary to hypothyroidism. This necessitates
either empirical knowledge of the co-occurrence of hypercholesterolemia and thyroid disease or a mechanistic
understanding of the way in which the two are associated.
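The contrast between the two cases can be made concrete in code. The sketch below treats a guideline-style decision rule as procedural knowledge; the thresholds and branch logic are hypothetical illustrations, not an actual clinical guideline. The point is that the hypothyroidism branch exists only if the conceptual (or empirical) link between thyroid disease and elevated lipids is known.

```python
# A procedural decision rule of the kind embodied in clinical
# guidelines, sketched in code. All cutoffs are invented for
# illustration and are not clinical recommendations.

def lipid_rule(family_history, ldl, tsh=None):
    """Return a hypothetical disposition for a lipid work-up.

    ldl in mg/dL; tsh in mIU/L (None if thyroid function untested).
    """
    if family_history and ldl >= 190:
        # The straightforward rule: family history plus a particular
        # lipoprotein profile suggests familial hypercholesterolemia.
        return "suspect familial hypercholesterolemia"
    if tsh is not None and tsh > 4.5 and ldl >= 160:
        # This branch requires knowing that hypothyroidism can
        # secondarily elevate lipids -- the conceptual link most
        # physicians in the study lacked.
        return "evaluate for hypothyroidism-related hypercholesterolemia"
    if ldl >= 160:
        return "elevated LDL: assess other risk factors"
    return "no lipid-specific action"
```

A physician applying only the first branch would, like the study subjects, diagnose the familial case correctly yet miss the secondary cause entirely.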
Procedural and conceptual knowledge are fostered
via different learning experiences. For example, continued adherence to a set of clinical guidelines would likely
lead to a change in procedural knowledge. After a period of using these guidelines, a physician would be able
to internalize the decision alternatives and follow appropriate decision strategies without explicit use of the
guidelines. The acquisition of conceptual knowledge
necessitates mindful engagement involving reflection
and discourse with peers through various forums such as
seminars, medical rounds, and continuing medical education. In addition, many computer-based learning
environments are developed with the objective of fostering conceptual knowledge. Increasingly, clinical
guidelines endeavor to straddle the boundaries between
more elaborate explanations intended to foster conceptual knowledge and algorithmic representations (e.g.,
decision trees) designed to facilitate procedural knowledge. Diverse forms of knowledge have specific effects
on medical decision-making and their characterization is
necessary for explaining variations in clinical practice.
Furthermore, these considerations are important when
developing decision support and instructional interventions to enhance decision-making performance.
4.3. Naturalistic decision-making
Claim 6. Decision making in ‘‘real-world’’ situations imposes unique demands (e.g. time pressure and stress) on
the decision process and these demands are not adequately
captured in most laboratory decision studies.
Naturalistic decision-making (NDM) has emerged as
an active area of decision research over the last two
decades. This research concerns investigations of cognition in ‘‘real-world’’ work environments that often
have a dynamic (e.g., rapidly changing) quality to them
[87]. This research was born out of frustration with efforts to apply methods and findings from traditional
decision research in these complex settings. What has
emerged is something akin to a new paradigm. Decision-making research in naturalistic settings differs substantively from typical decision-making research, which
most often focuses on a single decision event, and a fixed
set of alternatives in a stable environment [88]. One can
argue that traditional and NDM researchers are in fact
studying markedly different phenomena [89]. However,
both groups are focally concerned with understanding
and ameliorating the decision process. In a recent edited
volume, Salas and Klein [90] describe NDM as:
. . . the effort to understand and improve decision-making in
field settings, particularly by helping people more quickly to develop expertise and apply it to challenges they face. One of the
significant features of NDM is that it seeks explicitly to understand how people handle complex tasks and environments. We
have found that we cannot study decision-making in isolation
from other processes, such as situation awareness, problem
solving, planning, uncertainty management, and the development of expertise. (p. 3)
The majority of this research combines conventional
protocol analytic methods with innovative methods designed to investigate cognition and behavior in realistic
settings [91–93]. The study of decision-making in the work
context necessitates an extended cognitive science
framework beyond typical characterizations of knowledge structures, processes, and skills to include modulating variables such as stress, time pressure, and fatigue
as well as communication patterns in team performance.
Claim 7. Decision making in realistic settings is often
characterized by a serial assessment of a single option
rather than the evaluation of a fixed set of alternatives.
Systematic weighing of discrete pieces of evidence is the
exception rather than the rule.
Claim 8. Decisions in high stress situations necessitate
immediate response behavior and perceptual cues may
play a more prominent role in the decision process.
Klein and colleagues have undertaken seminal research in the area of dynamic decision-making, working
with fire commanders and platoon leaders [94]. The
methods employed include field observations and retrospective accounts of actual emergency events. The
types of decisions fire commanders were required to
make included whether to initiate a search and rescue,
whether to initiate an offensive attack on the fire, or
whether to use a more precautionary defensive strategy.
Commanders acted on the basis of prior experience,
immediate feedback, and careful monitoring and assessment of the situation. They used a process of serial
evaluation of options rather than systematically selecting between alternatives or weighing probabilities (either
subjectively or explicitly). The results indicated that
expert commanders relied more extensively on strategies
of situation recognition, using minimal deliberation,
whereas less experienced or novice commanders tended
to employ a more deliberative decision-making
approach. This kind of recognition-primed decision-making appears to be characteristic of dynamic decision-making environments [95]. Expert decision makers
often recognize a situation as being similar to ones they
have previously encountered and negotiated successfully. This provides a basis for effective solution strategies. Skilled decision makers are also said to engage in a
process of situation assessment. This is characterized by
a concerted effort to understand or ‘‘size up’’ the situation rather than to generate decision options. This is
somewhat similar to the finding that expert problem
solvers devote considerably more time to representing
the problem, whereas novices are much more likely to
jump right in and attempt to rapidly generate hypotheses [76] or implement a solution [96]. In the hands of a
skilled problem solver, the solution sometimes appears
to emerge from the representation. Similarly, a forward-directed reasoning strategy is contingent on a coherent
problem representation.
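A schematic rendering of this contrast may help. The cues, schemas, and actions below are invented illustrations of the expert (recognition) path and the novice (serial deliberation) path; they are not a formal statement of Klein's model.

```python
# Sketch of recognition-primed decision-making: try to recognize the
# situation as an instance of a stored schema; only on failure fall
# back to serial evaluation of options. All cues and actions are
# hypothetical illustrations from the firefighting examples above.

SCHEMAS = [
    ({"smoke_color": "black", "structure": "residential"}, "offensive attack"),
    ({"occupants": "trapped"}, "search and rescue"),
]

def recognize(cues):
    """Return the action of the first schema whose cues all match."""
    for pattern, action in SCHEMAS:
        if all(cues.get(k) == v for k, v in pattern.items()):
            return action
    return None

def decide(cues, options, acceptable):
    action = recognize(cues)
    if action is not None:           # expert path: minimal deliberation
        return action
    for option in options:           # serial evaluation, one option at a
        if acceptable(option, cues): # time, not weighing a fixed set
            return option
    return "defensive strategy"      # precautionary default
```

Note that even the fallback path evaluates options serially, accepting the first workable one, rather than comparing a fixed set of alternatives on weighted attributes.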
NDM researchers8 have studied a wide range of problems in different domains such as air traffic control, nuclear power plant management, software design, military command and control, financial planning, and forensic science. In this section, our focus is predominantly on NDM research conducted in healthcare settings, with particular attention to research conducted in our own laboratory.

8 NDM research is not a monolithic entity. Rather, NDM researchers embrace diverging theoretical and methodological approaches. Grouping them under one rubric runs the same risk as categorizing decision-making researchers as adherents to a traditional or classical approach.
Leprohon and Patel [60] studied the decision-making
strategies used by nurses in emergency telephone triage
settings. In this context, nurses are required to respond
to public emergency calls for medical help (exemplified
by 911 telephone service). The study analyzed transcripts of nurse-patient caller telephone conversations at different levels of urgency and complexity and interviewed the nurses immediately following their conversations. In decision-making situations such as emergency
telephone triage, there is a chronic sense of time urgency—decisions often have to be made in seconds. This
may involve the immediate mobilization and allocation
of resources. Decisions are always made on the basis of
partial and sometimes unreliable information.
The results were consistent with three patterns of
decision-making that reflect the perceived urgency of the
situation. The first pattern corresponds to immediate
response behavior as reflected in situations of high urgency. In these circumstances, decisions are made with
great rapidity. Actions are typically triggered by symptoms or the unknown urgency level in a forward-directed manner. The nurses in this study responded with
perfect accuracy (i.e., allocating the proper resources to
meet the demands) in these situations. The second pattern involves limited problem solving and typically
corresponds to a situation of moderate urgency and to
cases that are of some complexity. The behavior is
characterized by information seeking and clarification
exchanges over a more extended period of time. These
circumstances resulted in the highest percentage of decision errors (mostly false positives). The third pattern
involves deliberate problem solving and planning and
typically corresponds to low urgency situations. These
situations involved evaluating the whole situation and
exploring options and alternative solutions, such as
identifying the basic needs of a patient and referring the
patient to an appropriate clinic. The nurses made fewer
errors than in situations of moderate urgency and more
errors than in situations requiring immediate response
behavior. They could accurately perceive a situation as
not being of high urgency. Decision-making accuracy
was significantly higher in nurses with 10 years or more
of experience than in nurses with less experience, which is
consistent with the acquisition of expertise in other domains.
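The three patterns can be caricatured as a dispatch routine keyed on perceived urgency. The cues and actions are invented illustrations, not the protocol the nurses actually followed.

```python
# Sketch of the three triage decision patterns described above.
# Urgency labels and actions are hypothetical illustrations.

def triage_pattern(urgency):
    if urgency == "high":
        # Pattern 1: immediate response, triggered directly by symptoms
        # in a forward-directed manner (perfect accuracy in the study).
        return "dispatch resources immediately"
    if urgency == "moderate":
        # Pattern 2: limited problem solving -- information seeking and
        # clarification exchanges (highest error rate, mostly false
        # positives, in the study).
        return "seek and clarify information, then decide"
    # Pattern 3: deliberate problem solving and planning for low
    # urgency -- evaluate the whole situation and explore options.
    return "evaluate needs and refer to an appropriate clinic"
```

The inverse relation between deliberation and accuracy at the high-urgency end is what makes this pattern noteworthy for classical decision theory.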
Most decisions were based on symptoms rather than
on diagnostic hypotheses, especially in urgent situations.
These decisions rely on prior instances that, on the basis of minimal information, facilitate rapid schema access and enable nurses to represent the situation, gather information, and make decisions. Nurses learn to recognize
critical symptoms that evoke decision heuristics. This
finding is consistent with the research by Benner and
Tanner [97] who found that nurses respond on the basis
of prior experiences in memory and do not decompose
decisions into sets of alternatives or attempt to understand the underlying pathophysiology of a patient
problem. Nurses' training, which focuses on observational skills and detection of abnormal and urgent
symptoms, would contribute to the acquisition of this
type of decision-making process. Benner also suggests
that experience-based knowledge forms the basis of
much of nurses' intuitive clinical judgments.
Crandall and Calderwood [98] studied nurses' decision-making about patients with sepsis in a neonatal
intensive care unit. They employed an interview methodology known as the critical decision method, which
involves asking individuals about particularly challenging incidents and probing for cues that resulted in particular decisions. The findings indicated that experienced
nurses rely heavily on perceptually based indicators and
findings not documented in the medical literature. The
nurses were very sensitive to subtle changes in an infant's condition, were able to detect trends early on in
the clinical course, and predicted potentially adverse
outcomes (a worsening septic shock). The researchers
elicited much of this information through probes, since
nurses had difficulty in verbally explaining the perceptual cues.
Claim 9. Team decision-making is characterized by
emergent properties that cannot be captured by merely
studying individual decision makers.
We have been engaged for the last eight years in the
study of decision-making in critical care settings
(emergency departments and intensive care settings).
Our objectives are to understand: (1) how decisions are jointly negotiated9 and updated by participants differing substantially in their areas of expertise (e.g., pharmacology, respiratory medicine); (2) the complex communication process that is routine in these settings; (3) the role of technology in mediating decisions; (4) the sources of error in the decision-making process; and (5) how pedagogy and the apprenticeship process are integrated into the work settings.

9 Jointly negotiated decisions do not necessarily suggest a democratic decision process or that each participant has an equal voice.

Patel et al. [61] studied decision-making in a medical intensive care unit (ICU). The principal sources of data included audiotape recordings of medical morning rounds, complete patient charts and records, and interviews with the participants. The goals of the ICU are: (a) first to stabilize the patient and then identify and treat the underlying problem and (b) to coordinate the collection, analysis, and management of data from the various sources, which involves coordination and distribution of
workloads to various participants, namely residents,
nurses, laboratory technicians, pharmacists, and nutritionists. The team's decision-making involves: (a) management of multiple streams of information and (b)
communication and coordination among individuals
and from different data sources. Team leaders, at the top of the hierarchy, ultimately control most decisions and actions,
but expertise is distributed among individuals and responsibility is allocated among the team to maximize
efficiency.
The medical team's goals continuously shift as the patient's condition changes and new problems or complications arise. These complications often result in
rapidly shifting goals and changing priorities. In the
course of treatment, physicians constantly encounter
new data that lead them to make changes in the patient-management regimen. The data sources include laboratory tests, nurse reports, expert consultations, output
from various patient monitoring devices, and new findings revealed in the examination of a patient. In realistic
settings, cognition can appropriately be construed as
distributed [99]. The idea is that other individuals and
external artifacts (such as computers and instruments)
do not merely add to the cognitive process, but transform it in significant ways [100]. The combined products
of a cognitively distributed system (e.g., multiple team
members) cannot be accounted for by operation of its
isolated components. However, each of the entities or
individuals can still be seen as having personal attributes
such as knowledge and skills, some of which are an integral part of the ‘‘distributed partnership’’ and others
which are not [99]. This issue is discussed further in the
context of technology-mediated decision-making in the
next section.
Intensive care decision-making is characterized by a
rapid serial evaluation of options, leading to immediate
action. In this real-time decision-making, the reasoning
is schema-driven in a forward direction towards action
with minimal inference or justification. The results of the
action (as measured by the patient's response) feed back
into the decision-action cycle and the proper course of
further action ensues. When the circumstance is ambiguous or the patient does not respond in a manner
consistent with the original hypothesis, then the original
decision comes under scrutiny. This can result in a
brainstorming session where the team retrospectively
evaluates and reconsiders the decisions that had been
made and considers several possible alternative future
courses of action. We have observed several such distinct patterns of decision-making. The goals of these
reflective sessions are: (a) to critically evaluate decisions
that are made, (b) to rationalize and debate decisions
and actions that are taken, and (c) to discuss future
plans of action. The multiple kinds of reasoning used to
evaluate alternatives include probabilistic reasoning,
diagnostic reasoning, and biomedical causal reasoning.
An investigation of the cognitive processes involved in
these reflective sessions reveals a number of important
mechanisms related to coordination of theory and evidence [101].
This kind of retrospective evaluation is exemplified in
a weekly session where all attending physicians and staff
meet to discuss various patient problems. The discussion
focused on the treatment of an ICU patient who had
suffered a cardiac arrest. He was treated with streptokinase, a potent clot-dissolving (thrombolytic) agent, on his arrival at
the emergency room. The participants in the evaluation
session included cardiologists, respiratory therapists,
residents, and students. The positive evidence in favor of
the use of streptokinase is that it is the usual treatment
strategy for patients showing signs of myocardial infarction (abnormal ECG patterns) and it can reduce
morbidity and mortality. The ICU patient was stabilized
when he was treated with streptokinase. However, the
patient suffered subsequent bleeding, which is a common
side effect of this medication. The critical question was
whether the ECG provided conclusive evidence of
myocardial infarction, and, since only a small percentage of the people benefit from this treatment, whether
the decision for this particular patient was valid. In this
particular session, two respiratory therapists argued that
the patient had received the appropriate therapy and
two cardiologists argued to the contrary. They collectively constructed the sequence of events, debated the interpretation of specific evidence such as the results of the electrocardiogram, and discussed a priori probabilities and their interpretation in this context.
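The probabilistic side of this debate can be made explicit with Bayes' rule and a simple expected-benefit calculation. All numbers below are hypothetical values chosen for exposition, not clinical data.

```python
# Sketch of the probabilistic argument in the streptokinase debate:
# how strongly does an abnormal ECG support myocardial infarction,
# and does the expected benefit of treatment outweigh the bleeding
# risk? All parameters are invented illustrations.

def posterior(prior, sens, spec):
    """P(MI | abnormal ECG) by Bayes' rule from prior prevalence
    and the test's sensitivity and specificity."""
    p_abnormal = sens * prior + (1 - spec) * (1 - prior)
    return sens * prior / p_abnormal

def net_benefit(p_mi, benefit_if_mi, harm_bleeding):
    """Expected mortality benefit of thrombolysis minus the
    expected harm from bleeding complications."""
    return p_mi * benefit_if_mi - harm_bleeding

# Equivocal evidence: moderate prior, imperfect ECG characteristics.
p_mi = posterior(prior=0.3, sens=0.9, spec=0.8)
favors_treatment = net_benefit(p_mi, benefit_if_mi=0.05,
                               harm_bleeding=0.01) > 0
```

Laying the argument out this way shows why the group's disagreement hinged on the prior probability and on whether the ECG evidence was truly conclusive: small changes in those inputs can flip the sign of the expected benefit.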
These sessions serve a valuable pedagogical role in
that they help articulate assumptions that would not
normally be discussed during clinical rounds. In actual
practice, there is little time available for engaging in the
deliberative weighing of multiple decision alternatives or
extended causal reasoning. Nevertheless, the underlying
causal models are sometimes critical to supporting real-time decision-making. These types of learning environments help foster the development of such knowledge.
Such learning is often crystallized (i.e., made explicit to
the learner) after the fact in grand rounds and other less
formal discussion formats.
More recently, we have extended our research to a
range of critical care environments, including surgical intensive care and pediatric and psychiatric emergency departments [102]. These are all distinct clinical settings
that have unique organizational structures and deal with
patient problems involving very different forms of decision-making. An interesting point of difference is that
medical intensive care units have a greater diffusion of
responsibility and heterarchical decision processes,
whereas surgical ICUs appear to be characterized by a
stronger centralized chain of decision authority and hierarchical decision processes. The workflow process also
critically shapes the decision process. For example, the
processing of a patient in an emergency department
follows a relatively linear and orderly process from the
reception area into nursing triage and so forth. On the
other hand, such patterns are decidedly less linear and
predictable in medical ICU settings. Perrow [103] introduces a useful distinction between complex and linear
systems. Complex systems are characterized by a high
degree of specialization and interdependency among
agents or components (including technologies) and
multiple feedback loops. For example, patient management in an ICU setting is characterized by continuous
cycles of administering aggressive therapies and monitoring and countering their side effects [61]. The process
of patient care in emergency departments is somewhat
more modular and less interdependent than in an ICU,
although it is important to point out that the difference
is a matter of degree rather than kind. For example, the
interdependency in an emergency department is evidenced by the triage process, which provides a direction
and commits a set of resources for the care of a patient.
Each stage of triage critically shapes the decisions and actions that follow in subsequent stages of
the patient care process. However, the ICU appears to
be characterized by a more complexly entangled set of
dependencies thereby creating more possibilities for errors. These distinctions provide a basis for characterizing differences in workflow, communication patterns,
technology use, and opportunities for different kinds of
learning. In our view, it is vitally important to understand better how these factors differentially shape the
decision process.
Supporting decision-making in clinical settings necessitates an understanding of both workflow and
communication patterns. Several studies have documented that communication between individuals is both
the primary source for addressing information needs
(e.g., [104]) and a significant source of medical error.
Coiera [105] makes a compelling argument that the
computational view of decision support as ‘‘acquiring
and presenting data’’ is too narrowly conceived. He espouses a view in which conversations are best characterized by ‘‘the fluid and interactive notions of asking
and telling, inquiring and explaining’’ (p. 278). He argues
that the communication space is the largest part of the
health system's information space, constituting the bulk of information transactions and clinicians' time. There
are two broad implications to this view. The first is that
technology that directly supports communication
among clinicians should greatly improve how organizations acquire, present, and use information. The second is that developing a richer
understanding of communication tasks should enable us
to employ communication and information technologies
more productively to address information needs and
decision processes. Coiera's perspective is generally
consonant with the viewpoint of NDM. His work is also
compatible with the view that all technologies differentially mediate decision processes, which is the subject of
the next section.
In general, NDM research has provided a fresh perspective for understanding decision-making as it occurs
in the real-world environment. The approach offers
unique theoretical and methodological insights. However, it is not without its detractors (see Yates [89] for a
balanced discussion). The approach is largely a descriptive one and does not offer a clear gold standard for
evaluating the quality of decisions. In addition, it is
difficult to draw conclusive generalizations from any one study, given that it is conducted in relatively few settings (often only one). Moreover, like traditional
decision research, NDM does not have a sterling track
record in developing effective models for training and
decision support (although this is an active area of research). Part of the problem is that the descriptions are
couched in high-level abstractions (e.g., hierarchical decision
processes) and the precise implications are difficult to
discern or adopt. Paradoxically, there is a need for both
fine-grained analysis of decision and communication
processes and quantitative studies that compare settings
on a few measures. However, NDM has considerably
broadened the landscape for decision research and has
focused our attention on the immensely adaptive nature
of skilled decision makers.
4.4. Technology-mediated decision-making
Claim 10. Technologies mediate the decision-making
process in distinct and often counterintuitive ways that can
produce unintended consequences.
Claim 11. Decision technology does not merely facilitate or augment decision-making; rather, it reorganizes decision-making practices.
All technologies mediate human performance. What
do we mean by mediate? Technologies, whether they be
computer-based or in some other form, transform individuals' and groups' cognition and the ways that they
work. They do not merely augment, enhance or expedite
performance, although a given technology may do all of
these things. The difference is not one of quantitative but
rather of qualitative change. The mediating role of
technology has long been recognized in human factors.
Recent reports on errors in medicine have clearly delineated that the interface between humans and technology is a primary source of error [4]. In some respects,
this is a non-contentious or even an obvious point.
However, it is one that has received inadequate
attention in the decision-making community. In this
section, we endeavor to clarify the idea of mediating
and illustrate it in contexts of both high and low (e.g.,
paper-based guidelines) technologies. First, let us briefly
consider a strong form of the mediation argument put
forth by proponents of the situated cognition and/or
sociotechnical perspective.10
The distributed view of cognition represents a shift in
the study of cognition from being the sole property of
the individual to being stretched across groups, material
artifacts, and cultures [19]. The situated and sociotechnical schools of thought acknowledge that much of everyday cognition is embedded in social practices and
interwoven with the use of artifacts [106,107]. This has
theoretical, methodological, and practical consequences
for the design and implementation of technologies. Cole
and Engestrom [108] argue that the natural unit of
analysis for the study of human behavior is an activity
system, comprising relations among individuals and
their proximal, ‘‘culturally organized environments.’’ A
system consisting of individuals, groups of individuals,
and technologies can be construed as a single indivisible
unit of analysis. Berg [109] is a leading proponent of the
sociotechnical point of view within the world of medical
informatics. He argues that:
Work practices are conceptualized as networks of people, tools,
organizational routines, documents and so forth. An emergency
ward, outpatient clinic or inpatient department is seen as an interrelated assembly of humans and things whose functioning is
primarily geared to the delivery of patient care... [A few paragraphs later] The elements that constitute these networks
should then not be seen as discrete, well-circumscribed entities
with pre-fixed characteristics. (p 89)
In his view, the study of information systems must
eschew an approach that fractionates individual and
collective, human and machine, as well as the social and
technical dimensions of information technology. Berg
also argues for a participant observation methodology
that will enable deep empirical insight into the work
practices in which any technology is used. He also argues against the use of rigid formal models, standards,
or algorithmic approaches to reducing variability in
practice. Although Berg is not in principle opposed to
decision-support technologies, he views them as having
a transformative and unpredictable effect on medical
practice [109]. Bowker and Star [110] draw on similar
theoretical notions in their penetrating analysis of the
social construction of classification systems and standards and their unintended consequences.
10 It is once again necessary to point out that lumping schools of thought together under a single rubric inevitably simplifies distinct perspectives and points of view. The situated and sociotechnical categories reflect a wide range of methodological and theoretical approaches to understanding technology, drawing predominantly on anthropological and sociological schools of thought.

We find this perspective to be particularly compelling and have learned much from this approach, particularly in our research on decision-making in naturalistic settings [61] and computer-mediated collaborative design [111]. However, our perspective differs in important respects from a sociotechnical or situated point of view in
that the role of the individual figures prominently in our
analysis [112]. Additionally, we advocate a methodological pluralism that focuses on different levels of
analysis [113]. For example, at the most basic level, our
analyses focus on the technology itself and study its
design, affordances, and the kinds of cognitive challenges it is likely to present to a population of users
[113]. Towards the other end of the continuum, we study
the process of distributed decision-making in various
clinical settings. The task-analytic information-processing approach offers a robust set of methodological and
theoretical tools to understand cognition, whereas the
situated approach helps us to understand how social
entities jointly make (or distribute) decisions and cognitive resources [114]. The situated/distributed approach
also provides a basis for understanding how communication is grounded through the use of various mediating
communication technologies (e.g., email) and across
geographic distances, how groups jointly learn to attain
a satisfactory level of team performance, and the ways in
which organizational entities are constituted to produce
(and sometimes obstruct) work.
The mediating role of technology can be evaluated at
several levels of analysis. For example, electronic medical records alter the practice of individual clinicians in
significant ways as discussed below. Changes to an information system substantially impact organizational
and institutional practices from research to billing to
quality assurance. Even the introduction of patient-centered medical records early in the twentieth century
necessitated changes in hospital architecture and considerably affected work practices in clinical settings
[109]. Salomon et al. [115] introduce a useful distinction in considering the mediating role of technology on individual performance: the effects with technology and the effects of technology. The former is concerned with
the changes in performance displayed by users while
being equipped with the technology. For example, when
using an effective medical information system, physicians should be able to gather information more systematically and efficiently. In this capacity, medical
information technologies may alleviate some of the
cognitive load associated with a given task and permit
them to focus on higher-order thinking skills, such as
hypothesis generation and evaluation. The effects of
technology refer to enduring changes in general cognitive capacities (knowledge and skills) as a consequence
of interaction with a technology. For example, frequent
use of information technologies may result in lasting
changes in medical decision-making practices, even in
the absence of the system.
The mediating role of technology on clinical decisions
has been studied in a wide range of situations and
technologies from laparoscopy [116] to airway management [117]. Technologies often effectively mediate
clinical decisions, resulting in improved performance.
However, they sometimes result in predictable patterns
of error [118] and at other times have surprising effects
whose consequences are difficult to gauge. We have
conducted several studies evaluating the effects of electronic medical records (EMRs) on information gathering, data representation, and decision practices. EMRs
differ substantially in terms of their functionality, use of
controlled medical vocabulary, and graphical user interface. All of these have differential effects on decision
practices.
The particular system used in one set of studies was a
pen-based EMR system [119]. Using the pen or computer keyboard, physicians could directly enter information into the EMR, such as the patient's chief
complaint, past history, history of present illness, laboratory tests, and differential diagnoses. The system incorporated an extended version of the ICD-9
vocabulary standard. The EMR allowed the physician
to record information about the patient's differential
diagnosis, the ordering of tests, and the prescription of
medication. The system also provided supporting reference information in the form of an integrated electronic version of the Merck Manual, drug monographs
for medications, and information on laboratory tests.
The graphical interface provided a highly structured set
of resources for representing a clinical problem. To enter
the medical findings, the physician determined the context of the visit. For example, if the patient presented
with a specific complaint such as abdominal pain, the
physician would select a clinical note template (CNT)
for abdominal pain, which displayed a selection of the
medical findings and observations on the computer
screen.
We studied the use of this EMR in both laboratory-based research [119] and actual clinical settings [120].
We observed two global patterns of EMR usage in the
interactive condition: one in which the subject pursues
information from the patient predicated on a hypothesis
they have formulated and a second involving the use of
the EMR display to provide the subject with guidance
for requesting information. All experienced users of this
system appeared to use both strategies. There appears to
be a point in the process of skill acquisition at which the second, screen-driven strategy is incorporated into the user's repertoire.
In general, a screen-driven strategy can enhance performance by alleviating the cognitive load imposed
by information gathering goals and allowing the physician to allocate more cognitive resources for discriminating among hypotheses and making complex
decisions. On the other hand, this strategy can induce a
certain sense of complacency and perhaps imbue the
device with a certain intelligence that it does not really
possess. We observed both effective and counterproductive uses of this screen-driven strategy. A more
experienced user deliberately used the strategy and was
able to exploit the affordances offered by the system to
structure a (simulated) doctor-patient encounter,
whereas a novice user was demonstrably less successful
in her pursuit. The novice used the structured list of
findings on the screen to prompt her to ask questions. In
employing this screen-driven strategy, she elicited almost all of the relevant findings in a simulated patient
encounter. However, she also elicited numerous irrelevant findings and pursued incorrect hypotheses. In this
particular case, the strategy seemed to induce a certain
cognitive complacency and the subject had difficulty in
imposing her own set of working hypotheses to guide
the information-gathering and diagnostic-reasoning
processes.
The differential use of strategies is evidence of the
mediating effects with technology. We extended this line
of research to study the cognitive consequences of using
the same EMR system in a diabetes clinic [120]. The
research combined qualitative and quantitative analyses
focusing on the levels of physicians' interactions with the
system as well as doctor-patient interaction. The study
considered the following questions ([120], p 571): (1) How do physicians manage information flow when using an EMR system? (2) What are the differences in the way physicians organize and represent this information using paper-based and EMR systems? (3) Are there long-term, enduring effects of the use of EMR systems on knowledge representations and clinical reasoning? We
have reported several interrelated studies with the first
focusing on an in-depth characterization of changes in
knowledge organization in a single subject as a function
of using the system. The study first compared the contents and structure of ten patient records, matched for
variables such as age and problem type, produced by the
physician using the EMR system as well as paper-based
patient records. After having used the system for six
months, the physician was asked to conduct his next five
patient interviews using only hand-written paper records.
The results indicated that the EMRs contained more
information relevant to the diagnostic hypotheses. In
addition, the structure and content of information were
found to correspond to the structured representation of
the particular medium. For example, EMRs were found
to contain more information about the patient's past
medical history, reflecting the query structure of the
interface. The paper-based records appeared to preserve
the integrity of the time course of the evolution of the
patient problem, whereas this was notably absent from
the EMR. Perhaps the most striking finding is that, after having used the system for six months, the structure and content of the physician's paper-based records bore a closer resemblance to the organization of information in the EMR than did the paper-based records produced by the physician prior to exposure to the system. This finding is consistent with the enduring effects of technology, even in the absence of the particular system.
We also conducted a series of related studies with
physicians in the same diabetes clinic [120]. The results
of one replicated and extended the results of the single
subject study (reported above) regarding the differential
effects of EMRs and paper-based records on represented
(recorded) patient information. For example, physicians
entered significantly more information about the patients' chief complaint using the EMR. In contrast,
physicians represented significantly more information
about the history of present illness and review of systems
using paper-based records. It is reasonable to assert that
such differences are likely to have an effect on clinical
decision-making. The authors also video-recorded and
analyzed 20 doctor-patient computer interactions by
two physicians varying in their level of expertise. One of
the physicians was an intermediate-level user of the
EMR and the other was an expert user. The analysis of
the physician-patient interactions revealed that the less-expert subject was more strongly influenced by the
structure and content of the interface. In particular, he
was guided by the order of information on the screen
when asking the patient questions and recording the
responses. This screen-driven strategy is similar to what
we documented in a previous study [119]. Although the
expert user similarly used the EMR system to structure
his questions, he was much less bound to the order and
sequence of presented information on the EMR screen.
This body of research documented both effects with and
effects of technology in the context of EMR use. These
include effects on knowledge organization and information-gathering strategies. The authors conclude that
given these potentially enduring effects, the use of a
particular EMR will almost certainly have a direct effect
on medical decision making.
The mediating role of technology can also be seen in
the use of clinical practice guidelines, where different
effects can be observed on different types of users. Two
of the major aims in developing clinical practice guidelines are to standardize and improve the quality of care.
However, their implementation in healthcare environments has produced less than desired results [121,122].
For clinical practice guidelines to be widely adopted,
they need to become part of the normal clinical environment with which the physician interacts. Guidelines
must mediate decisions in a manner consistent with the
temporal flow of clinical reasoning [123]. To ensure
optimal usability, guidelines need to be tuned to specific
types of users. Studies of guideline utilization [114], in
which expert physicians and non-expert physicians used
algorithmic and text-based guidelines, show that
whereas clinical practice guidelines are used as reminders by both expert and non-expert physicians, they
also serve other functions depending on the user's level of expertise.
Guidelines serve to constrain the problem-solving process of experts (by focusing the physician's attention on
important aspects) but serve as educational devices for
the non-experts (by suggesting disease alternatives that
were not originally thought about). The different ways in which guidelines mediate decisions among different populations of users constitute an area in need of further research.
5. Conclusions
The traditional or classical approach has served as a
wellspring for both theoretical and applied decision
research for more than half a century. It has also served
to advance the practice of decision-making in healthcare
settings. By most accounts, it is an immensely successful
research enterprise. However, there are several significant limitations to the traditional decision research
program, some of which have been enumerated in the
paper. To recapitulate, normative models have yielded
disappointing returns as reliable guides to decision-making. Also, empirical work based on the normative
approach has proved to be somewhat simplistic and
limited as a model of actual, real-world decision-making, notwithstanding its accomplishments. We argue for
a decision science that broadens the boundary of traditional decision-making research. In particular, there is a
need for an expanded empirical scope employing a
greater range of methodologies and a greater emphasis
on ecological validity (understanding the dynamics of
real-world settings).
This broadening can be understood along several
dimensions. First, in our view, the contrast between
decision-making and problem-solving research is rather
illuminating. Problem-solving research emphasizes: (1)
understanding skilled performance as realized by experts, (2) understanding the decision-making process,
and (3) recognizing conceptual knowledge as both a
resource to aid decisions and a contributor to patterns
of misunderstanding, leading to suboptimal decisions.
The view is that expert decision makers (or problem
solvers) have immensely adaptive, albeit imperfect
strategies for overcoming information-processing limitations. Heuristics and biases are rooted in productive
decision-making.
Second, the naturalistic decision-making approach
focuses on real-world settings and places a premium on
the descriptive adequacy of models. This necessitates a
commitment to in-depth qualitative methodologies.11
11 This does not in any way preclude quantitative analyses or models. In many contexts, they are complementary methods and provide a necessary measure of convergence.
The characterization of performance in these settings is
often at variance with the classical rational, deliberative,
and calculating decision maker. Stress, time pressure,
and risk (among other factors) necessitate the development of adaptive strategies that correspond to the constraints of a particular situation. In addition, these
strategies may be the product of individuals or teams.
The social and collaborative dimensions of decision-making warrant closer scrutiny if decision-support
technology is to infiltrate the ebb and flow of daily work.
Third, the differential role of technology cannot be
ignored. Technologies are mediators of performance
and serve to reorganize practice rather than merely to
produce a quantitative gain in performance. This
changes the way we think about decision-support technologies. These need to be understood in the context of
practice, where there is a need for a deeper understanding of (a) performance in actual settings; (b) effects
of technology propagating through the different layers
of an organization; and (c) the adaptability of health
professionals (and consumers of healthcare services) to
an increasingly technologically mediated world.
In this paper, we endeavored to characterize the
strengths and weaknesses of traditional decision-making
research and suggested some promising new directions.
However, we did not put forth a grand synthesis in
which the two approaches are seamlessly melded into a
coherent whole. At minimum, this synthesis would need
to account for the 11 claims articulated in this paper.
There are numerous remaining methodological and
epistemological challenges in erecting a comprehensive
decision-making framework. In our view, we know a
great deal more about decision-making and decision
makers than we have realized. A reconstituted basic
science framework could make the rather impressive
accomplishments of decision research more transparent.
We remain optimistic that we can more fully exploit that
knowledge in designing and implementing technologies
that can facilitate decision processes in real-world clinical settings.
Acknowledgments
We owe our thanks to many researchers and graduate
students who contributed to this paper by providing
valuable discussions. We especially thank Ted Shortliffe
and two anonymous reviewers for their critical comments on the earlier drafts of the manuscript. Preparation of the manuscript was supported in part by the
National Library of Medicine under Grant LM06594,
with additional support from the Department of the
Army and the Agency for Healthcare Research and
Quality, and by Grant MRC-MA 13439 from the
Canadian Institute for Health Research awarded to
Dr. Patel.
References
[1] Beach LR, Lipshitz R. Why classical decision theory is an
inappropriate standard for evaluating and aiding most human
decision making. In: Klein GA, Orasanu J, Calderwood R,
Zsambok CE, editors. Decision making in action: models and
methods. Norwood, NJ: Ablex Publishing Corporation; 1993.
p. 21–35.
[2] Cohen MS. Three paradigms for viewing decision biases. In:
Klein GA, Orasanu J, Calderwood R, Zsambok CE, editors.
Decision making in action: models and methods. Norwood, NJ:
Ablex Publishing Corporation; 1993.
[3] Patel VL, Arocha JF, Kaufman DR. A primer on aspects of
cognition for medical informatics. J Am Med Inform Assoc
2001;8:324–43.
[4] Kohn LT, Corrigan JM, Donaldson MS. To err is human:
building a safer health system. Institute of Medicine. Washington, DC: National Academy Press; 2000.
[5] Hastie R. Problems for judgment and decision making. Ann Rev
Psychol 2001;52:653–83.
[6] Baron J. Thinking and deciding. 3rd ed. Cambridge: Cambridge
University Press; 2000.
[7] von Neumann J, Morgenstern O. Theory of games and economic
behavior. Princeton: Princeton University Press; 1944.
[8] Newell A, Simon HA. Human problem solving. Englewood
Cliffs, NJ: Prentice-Hall; 1972.
[9] Elstein AS, Shulman LS, Sprafka SA. Medical problem solving:
an analysis of clinical reasoning. Cambridge, MA: Harvard
University Press; 1978.
[10] Pitz GF, Sachs NJ. Judgment and decision: theory and application. Ann Rev Psychol 1984;35:139–63.
[11] Hammond KR, McClelland GH, Mumpower J. Human judgment and decision making: theories, decisions, and procedures.
New York: Praeger; 1981.
[12] Brunswik E. The conceptual framework of psychology. Chicago:
University of Chicago Press; 1952.
[13] Anderson NH. Foundations of information integration theory.
New York: Academic Press; 1981.
[14] Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974;185:1124–31.
[15] Camerer CF, Johnson EJ. The process-performance paradox in
expert judgment: how can experts know so much and predict so
badly? In: Ericsson A, Smith J, editors. Toward a general theory
of expertise. New York: Cambridge University Press; 1991.
p. 195–217.
[16] Bechara A, Damasio H, Tranel D, Damasio AR. Deciding
advantageously before knowing the advantageous strategy.
Science 1997;275:1293–5.
[17] Medin DL, Bazerman MH. Broadening behavioral decision
research: multiple levels of processing. Psychol Bull Rev
1999;6:533–46.
[18] Orasanu J, Connolly T. The reinvention of decision making. In:
Klein GA, Orasanu J, Calderwood R, Zsambok CE, editors.
Decision making in action: models and methods. Norwood, NJ:
Ablex Publishing Corporation; 1993.
[19] Hutchins E. Cognition in the wild. Cambridge: MIT Press; 1995.
[20] Goldstein WM, Hogarth RM. Judgment and decision making:
some historical context. In: Goldstein WM, Hogarth RM,
editors. Research on judgment and decision making: currents,
connections, and controversies. Cambridge: Cambridge University Press; 1997.
[21] Chapman GB, Elstein AS. Cognitive processes and biases in
medical decision making. In: Chapman GB, Sonnenberg FS,
editors. Decision making in health care: theory, psychology and
applications. Cambridge: Cambridge University Press; 2000.
p. 183–210.
[22] Kahneman D, Tversky A. On the psychology of prediction.
Psychol Rev 1973;80:237–51.
[23] Tversky A, Kahneman D. Causal schemas in judgments under
uncertainty. In: Fishbein M, editor. Progress in social psychology. Hillsdale, NJ: Lawrence Erlbaum; 1980. p. 49–72.
[24] Eddy DM. Probabilistic reasoning in clinical medicine: problems
and opportunities. In: Kahneman D, Slovic P, Tversky A,
editors. Judgment under uncertainty: heuristics and biases.
Cambridge: Cambridge University Press; 1982.
[25] Kahneman D, Slovic P, Tversky A. Judgment under uncertainty: heuristics and biases. Cambridge: Cambridge University
Press; 1982.
[26] Arkes HR, Harkness AR. Effect of making a diagnosis on
subsequent recognition of symptoms. J Exp Psychol Hum
Percept Perform 1980;6:566–75.
[27] Dawson NV, Arkes HR. Systematic errors in medical decision
making: judgment limitations. J Gen Intern Med 1987;2:183–7.
[28] McNeil BJ, Pauker S, Sox Jr. H, Tversky A. On the elicitation
of preferences for alternative therapies. N Engl J Med
1982;306:1259–62.
[29] Meehl P. Theoretical risks and tabular asterisks: Sir Karl, Sir
Ronald, and the slow progress of soft psychology. J Consult Clin
Psychol 1978;46:806–34.
[30] Popper K. The logic of scientific discovery. New York: Harper;
1959.
[31] Bornstein BH, Emler AC. Rationality in medical decision making: a review of the literature on doctors' decision-making biases. J
Eval Clin Pract 2001;7:97–107.
[32] Shafir E, LeBoeuf R. Rationality. Ann Rev Psychol
2002;53:491–517.
[33] Kahneman D, Tversky A. Prospect theory: an analysis of
decision under risk. Econometrica 1979;47:263–91.
[34] Shafir E, Simonson I, Tversky A. Reason-based choice. Cognition 1993;49:11–36.
[35] Feinstein AR. The haze of Bayes, the aerial palaces of decision
analysis, and the computerized Ouija board. Clin Biostatis
1977;21:482–96.
[36] Bunge M. Four concepts of probability. Appl Math Mod
1981;5:306–12.
[37] Kolmogorov AN. Foundations of the theory of probability.
New York: Chelsea; 1950.
[38] Gigerenzer G. From tools to theories: a heuristic of discovery in
cognitive psychology. Psychol Rev 1991;98:254–67.
[39] Cosmides L, Tooby J. Beyond intuition and instinct blindness:
toward an evolutionarily rigorous cognitive science. Cognition
1994;50:41–77.
[40] Allais M. The foundations of a positive theory of choice
involving risk and a criticism of the postulates and axioms of the
American school. In: Allais M, Hagen O, editors. The expected
utility hypothesis and the Allais paradox. Dordrecht, Netherlands: Reidel; 1979.
[41] Janis IL, Mann L. Decision making: a psychological analysis of
conflict, choice, and commitment. New York: Free Press; 1977.
[42] Hammond JS. Better decision with preference theory. Harv Bus
Rev 1967;45:123–41.
[43] Simon HA. A behavioral model of rational choice. Q J Econ
1955;69:99–118.
[44] Berk JB, Hughson E, Vandezande K. The price is right but are
the bids? An investigation of rational choice theory. Am Econ
Rev 1996;86:954–70.
[45] Dawes RM. Rational choice in an uncertain world. New York:
Harcourt Brace Janovich; 1988.
[46] Hagen O. Risk in utility theory, in business, and in the world of
fear and hope. In: Gotschl J, editor. Revolutionary changes:
understanding man and society. Dordrecht: Kluwer; 1995.
[47] Machina ML, Munier B. Models and experiments on risk and
rationality. Dordrecht: Kluwer; 1994.
[48] Tversky A, Simonson I. Context-dependent preferences. Manage
Sci 1993;10:1179–89.
[49] Popper KR. Conjectures and refutations. London: Routledge
and Kegan Paul Limited; 1957.
[50] Gigerenzer G. Adaptive thinking: rationality in the real world.
New York: Oxford University Press; 2000.
[51] Vranas PBM. Gigerenzer's normative critique of Kahneman and
Tversky. Cognition 2000;76:179–93.
[52] Swets JA, Getty DJ, Pickett RM, D'Orsi CJ, Seltzer SE, McNeil
BJ. Enhancing and evaluating diagnostic accuracy. Med Decis
Making 1991;11:9–18.
[53] Bokenholt U, Weber EU. Use of formal methods in medical
decision making: A survey and analysis. Med Decis Making
1992;12:298–306.
[54] Plasencia CM, Alderman BW, Baron AE, Rolfs RT, Boyko EJ.
A method to describe physician decision thresholds and its
application in examining the diagnosis of coronary artery disease
based on exercise treadmill testing. Med Decis Making
1992;12:204–12.
[55] Poses RM, Cebul CD, Wigton RS. You can lead a horse to
water–improving physicians' knowledge of probabilities may not
affect their decisions. Med Decis Making 1995;15:65–75.
[56] Means B, Salas E, Crandall B, Jacobs TO. Training decision
makers for the real world. In: Klein GA, Orasanu J, Calderwood
R, Zsambok CE, editors. Decision making in action: models and
methods. Norwood, NJ: Ablex Publishing Corporation; 1993.
[57] Heckerman DE, Shortliffe EH. From certainty factors to belief
networks. Artif Intell Med 1992;4:35–52.
[58] Orasanu J, Connolly T. The reinvention of decision making. In:
Klein GA, Orasanu J, Calderwood R, Zsambok CE, editors.
Decision making in action: models and methods. Norwood, NJ:
Ablex Publishing Corporation; 1993. p. 3–20.
[59] Gaba D. Dynamic decision-making in anesthesiology: cognitive
models and training approaches. In: Evans DA, Patel VL,
editors. Advanced models of cognition for medical training and
practice. Heidelberg, Germany: Springer; 1992. p. 123–48.
[60] Leprohon J, Patel VL. Decision making strategies for telephone
triage in emergency medical services. Med Decis Making
1995;15:240–53.
[61] Patel VL, Kaufman DR, Magder SA. The acquisition of medical
expertise in complex dynamic environments. In: Ericsson KA,
editor. The road to excellence: the acquisition of expert
performance in the arts and sciences, sports and games.
Hillsdale, NJ: Lawrence Erlbaum; 1996. p. 127–65.
[62] Patel VL, Arocha JF, Kaufman DR. Diagnostic reasoning and
medical expertise. Psychol Learn Motiv 1994;31:187–252.
[63] Elstein AS, Rovner DR, Holzman GB, Ravitch MM, Rothert
ML, Holmes MM. Psychological approaches to medical decision
making. Am Behav Sci 1982;25:557–84.
[64] Kassirer JP, Kuipers BJ, Gorry GA. Toward a theory of clinical
expertise. In: Dowie J, Elstein AS, editors. Professional judgment: a reader in clinical decision making. New York: Cambridge University Press; 1988. p. 212–25.
[65] Ericsson KA, Simon HA. Protocol analysis: verbal reports as data. Revised edition. Cambridge, MA: MIT Press; 1993.
[66] Bechtel W, Abrahamsen A, Graham G. The life of cognitive
science. In: Bechtel W, Graham G, et al., editors. A companion to cognitive science. Malden, MA: Blackwell; 1998. p.
2–104.
[67] Patel VL, Groen GJ. Knowledge-based solution strategies in medical reasoning. Cognit Sci 1986;10:91–116.
[68] Horvitz EJ, Breese JS, Henrion M. Decision theory in expert
systems and artificial intelligence. J Approximate Reasoning
1988;2:247–302, Special Issue on Uncertainty in Artificial
Intelligence.
[69] Szolovits P, Pauker SG. Categorical and probabilistic reasoning
in medical diagnosis. Artif Intell 1978;11:115–44.
[70] Gorry GA. Computer-assisted clinical decision making. Meth
Inform Med 1973;12:45–51.
[71] Pauker SG, Gorry GA, Schwartz WB, Kassirer JP. Towards the
simulation of clinical cognition. Am J Med 1976;60:981–96.
[72] Shortliffe EH. Computer-based medical consultations: MYCIN.
New York: American Elsevier; 1976.
[73] Clancey WJ. Acquiring, representing and evaluating a competence model of diagnostic strategy. In: Chi MTH, Glaser R, Farr
MJ, editors. The nature of expertise. Hillsdale, NJ: Lawrence
Erlbaum; 1988. p. 343–418.
[74] Patel VL, Groen GJ, Arocha JF. Medical expertise as a function
of task difficulty. Mem Cognit 1990;18:394–406.
[75] Joseph G-M, Patel VL. Domain knowledge and hypothesis
generation in diagnostic reasoning. Med Decis Making
1990;10:31–46.
[76] Patel VL, Evans DA, Kaufman DR. Cognitive framework for
doctor–patient interaction. In: Evans DA, Patel VL, editors.
Cognitive science in medicine: biomedical modeling. Cambridge,
MA: MIT Press; 1989. p. 253–308.
[77] Lesgold A, Rubinson H, Feltovich P, Glaser R, Klopfer D,
Wang Y. Expertise in a complex skill: diagnosing X-ray
pictures. In: Chi MTH, Glaser R, Farr MJ, editors. The
nature of expertise. Hillsdale, NJ: Lawrence Erlbaum; 1988. p.
311–42.
[78] Ericsson KA, Smith J. Prospects and limits of the empirical
study of expertise: an introduction. In: Ericsson KA, Smith
J, editors. Toward a general theory of expertise: prospects
and limits. New York: Cambridge University Press; 1991. p.
1–38.
[79] Kushniruk AW, Patel VL, Fleiszer DM. Complex decision
making in providing surgical intensive care. In: Proceedings of
the Seventeenth Annual Conference of the Cognitive Science
Society. Hillsdale, NJ: Lawrence Erlbaum; 1995. p. 287–92.
[80] Boshuizen HPA, Schmidt HG. On the role of biomedical
knowledge in clinical reasoning by experts, intermediates and
novices. Cognit Sci 1992;16:153–84.
[81] Patel VL, Kaufman DR. Clinical reasoning and biomedical knowledge: implications for teaching. In: Higgs J, Jones M, editors. Clinical reasoning in the health professions. Oxford, UK: Butterworth-Heinemann; 1995. p. 117–28.
[82] Chandrasekaran B, Smith JW, Sticklen J. Deep models and their relation to diagnosis. Artif Intell Med 1989;1:29–40.
[83] Patil RS, Szolovits P, Schwartz WB. Causal understanding of
patient illness in medical diagnosis. In: Clancey WJ, Shortliffe
EH, editors. Readings in medical artificial intelligence. Reading,
MA: Addison-Wesley; 1984. p. 339–60.
[84] Kuipers BJ. Qualitative simulation as causal explanation. IEEE
Trans Syst Man Cybernet 1987;17:432–44.
[85] Kaufman DR, Kushniruk AW, Yale JF, Patel VL. Conceptual
knowledge and decision strategies in relation to hypercholesterolemia and coronary heart disease. Int J Med Inform
1999;55:159–77.
[86] Elstein AS, Holzman GB, Belzer LJ, Ellis RD. Hormonal replacement therapy: analysis of clinical strategies used by residents. Med Decis Making 1992;12:165–73.
[87] Klein GA, Orasanu J, Calderwood R, Zsambok CE, editors. Decision making in action: models and methods. Norwood, NJ: Ablex Publishing Corporation; 1993.
[88] Klein GA, Calderwood R, McGregor D. Critical decision method for eliciting knowledge. IEEE Trans Syst Man Cybernet 1989;19:462–72.
[89] Yates JF. Outside: impressions of naturalistic decision making.
In: Salas E, Klein G, editors. Linking expertise and naturalistic
decision making. Hillsdale, NJ: Lawrence Erlbaum; 2001. p. 9–
33.
[90] Salas E, Klein G. Expertise and naturalistic decision making: an
overview. In: Salas E, Klein G, editors. Linking expertise and
naturalistic decision making. Hillsdale, NJ: Lawrence Erlbaum; 2001. p. 3–8.
[91] Woods DD. Observations from studying cognitive systems in context. In: Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Lawrence Erlbaum; 1994. p. 961–4.
[92] Woods DD. Process-tracing methods for the study of cognition outside of the experimental psychology laboratory. In: Klein GA, Orasanu J, Calderwood R, Zsambok CE, editors. Decision making in action: models and methods. Norwood, NJ: Ablex Publishing Corporation; 1993. p. 228–51.
[93] Rasmussen J, Pejtersen AM, Goodstein LP. Cognitive systems engineering. New York: John Wiley & Sons; 1994.
[94] Klein GA, Calderwood R. Decision models: some lessons from the field. IEEE Trans Syst Man Cybernet 1991;21:1018–26.
[95] Klein GA. A recognition-primed decision (RPD) model of rapid decision making. In: Klein GA, Orasanu J, Calderwood R, Zsambok CE, editors. Decision making in action: models and methods. Norwood, NJ: Ablex Publishing Corporation; 1993. p. 138–47.
[96] Larkin JH, McDermott J, Simon DP, Simon HA. Expert and novice performance in solving physics problems. Science 1980;208:1335–42.
[97] Benner P, Tanner C. Clinical judgment: how expert nurses use intuition. Am J Nurs 1987:23–31.
[98] Crandall B, Calderwood R. Clinical assessment skills of experienced neonatal intensive care nurses. National Center for Nursing Research, National Institutes of Health; 1989.
[99] Salomon G. No distribution without individuals' cognition: a dynamic interactional view. In: Salomon G, editor. Distributed cognitions: psychological and educational considerations. Cambridge, MA: Cambridge University Press; 1993. p. 111–38.
[100] Perkins DN. Person-plus: a distributed view of thinking and learning. In: Salomon G, editor. Distributed cognitions: psychological and educational considerations. Cambridge, MA: Cambridge University Press; 1993. p. 88–110.
[101] Patel VL, Dunbar K, Kaufman DR. Goal-constrained distributed reasoning in medicine and science. In: Proceedings of the 36th Annual Meeting of the Psychonomic Society; Cambridge, MA; 1995. p. 2.
[102] Patel VL, Arocha JF. The nature of constraints on collaborative decision making in healthcare settings. In: Salas E, Klein G, editors. Linking expertise and naturalistic decision making. Mahwah, NJ: Lawrence Erlbaum; 2001. p. 385–407.
[103] Perrow C. Normal accidents: living with high risk technologies. New York: Basic Books; 1984.
[104] Covell DG, Uman GC, Manning PR. Information needs in office practice: are they being met? Ann Intern Med 1985;103:596–9.
[105] Coiera E. When conversation is better than computation. J Am Med Inform Assoc 2000;7:277–86.
[106] Lave J. Cognition in practice: mind, mathematics, and culture in everyday life. Cambridge: Cambridge University Press; 1988.
[107] Suchman LA. Plans and situated actions: the problem of human–machine communication. Cambridge: Cambridge University Press; 1987.
[108] Cole M, Engestrom Y. A cultural-historical approach to distributed cognition. In: Salomon G, editor. Distributed cognitions: psychological and educational considerations. Cambridge, MA: Cambridge University Press; 1993. p. 1–46.
[109] Berg M. Patient care information systems and healthcare work: a sociotechnical approach. Int J Med Inform 1999;55:87–101.
[110] Bowker GC, Star SL. Sorting things out: classification and its consequences. Cambridge, MA: MIT Press; 1999.
[111] Patel VL, Arocha JF, Kaufman DR. Medical cognition. In: Durso FT, Nickerson RS, Schvaneveldt RW, Dumais ST, Lindsay DS, Chi MTH, editors. Handbook of applied cognition. Chichester, UK: Wiley; 1999. p. 75–99.
[112] Kushniruk AW, Patel VL. Cognitive evaluation of decision making processes and assessment of information technology in medicine. Int J Med Inform 1998;51:83–90.
[113] Patel VL, Kaufman DR. Science and practice: a case for medical informatics as a local science of design. J Am Med Inform Assoc 1998;5:489–92.
[114] Patel VL, Arocha JF, Diermeier M, Mottur-Pilson C, How J. Cognitive psychological studies of representations and use of clinical practice guidelines. Int J Med Inform 2001;63:147–67.
[115] Salomon G, Perkins DN, Globerson T. Partners in cognition: extending human intelligence with intelligent technologies. Educ Res 1991;20:2–9.
[116] Woods DD, Cook RI. A tale of two stories: contrasting views of patient safety. National Health Care Safety Council of the National Patient Safety Foundation at the American Medical Association; 1998.
[117] Mackenzie CF, Jeffries NJ, Hunter A, Bernhard W, Xiao Y, LOTAS Group. Comparison of self-reporting of deficiencies in airway management with video analyses of actual performance. Hum Factors 1996;38:623–35.
[118] Lin L, Isla R, Doniz K, Harkness H, Vicente KJ, Doyle DJ.
Applying human factors to the design of medical equipment:
patient-controlled analgesia. J Clin Monitor Comp 1998;14:253–
63.
[119] Kushniruk AW, Kaufman DR, Patel VL, Levesque Y, Lottin P. Assessment of a computerized patient record system: a cognitive approach to evaluation of an emerging medical technology. MD Comput 1996;13:406–15.
[120] Patel VL, Kushniruk AW, Yang S, Yale JF. Impact of a computerized patient record system on medical data collection, organization and reasoning. J Am Med Inform Assoc 2000;7:569–85.
[121] Grimshaw JM, Russell IT. Effect of clinical guidelines on
medical practice: a systematic review of rigorous evaluations.
Lancet 1993;342:1317–22.
[122] Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud P-A, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. J Am Med Assoc 1999;282:1458–65.
[123] Boxwala AA, Tu S, Peleg M, Zeng Q, Ogunyemi O, Greenes
RA, et al. Toward a representation format for sharable clinical
guidelines. J Biomed Inform 2001;34:157–69.