Structured Analytic Techniques for Intelligence Analysis
Figures
Foreword by John McLaughlin
Preface
1. Introduction and Overview
1.1 Our Vision
1.2 Two Types of Thinking
1.3 Dealing with Bias
1.4 Role of Structured Analytic Techniques
1.5 Value of Team Analysis
1.6 History of Structured Analytic Techniques
1.7 Selection of Techniques for This Book
1.8 Quick Overview of Chapters
2. Building a System 2 Taxonomy
2.1 Taxonomy of System 2 Methods
2.2 Taxonomy of Structured Analytic Techniques
3. Choosing the Right Technique
3.1 Core Techniques
3.2 Making a Habit of Using Structured Techniques
3.3 One Project, Multiple Techniques
3.4 Common Errors in Selecting Techniques
3.5 Structured Technique Selection Guide
4. Decomposition and Visualization
4.1 Getting Started Checklist
4.2 AIMS (Audience, Issue, Message, and Storyline)
4.3 Customer Checklist
4.4 Issue Redefinition
4.5 Chronologies and Timelines
4.6 Sorting
4.7 Ranking, Scoring, and Prioritizing
4.7.1 The Method: Ranked Voting
4.7.2 The Method: Paired Comparison
4.7.3 The Method: Weighted Ranking
4.8 Matrices
4.9 Venn Analysis
4.10 Network Analysis
4.11 Mind Maps and Concept Maps
4.12 Process Maps and Gantt Charts
5. Idea Generation
5.1 Structured Brainstorming
5.2 Virtual Brainstorming
5.3 Nominal Group Technique
5.4 Starbursting
5.5 Cross-Impact Matrix
5.6 Morphological Analysis
5.7 Quadrant Crunching™
6. Scenarios and Indicators
6.1 Scenarios Analysis
6.1.1 The Method: Simple Scenarios
6.1.2 The Method: Cone of Plausibility
6.1.3 The Method: Alternative Futures Analysis
6.1.4 The Method: Multiple Scenarios Generation
6.2 Indicators
6.3 Indicators Validator™
7. Hypothesis Generation and Testing
7.1 Hypothesis Generation
7.1.1 The Method: Simple Hypotheses
7.1.2 The Method: Multiple Hypotheses Generator™
7.1.3 The Method: Quadrant Hypothesis Generation
7.2 Diagnostic Reasoning
7.3 Analysis of Competing Hypotheses
7.4 Argument Mapping
7.5 Deception Detection
8. Assessment of Cause and Effect
8.1 Key Assumptions Check
8.2 Structured Analogies
8.3 Role Playing
8.4 Red Hat Analysis
8.5 Outside-In Thinking
9. Challenge Analysis
9.1 Premortem Analysis
9.2 Structured Self-Critique
9.3 What If? Analysis
9.4 High Impact/Low Probability Analysis
9.5 Devil’s Advocacy
9.6 Red Team Analysis
9.7 Delphi Method
10. Conflict Management
10.1 Adversarial Collaboration
10.2 Structured Debate
11. Decision Support
11.1 Decision Trees
11.2 Decision Matrix
11.3 Pros-Cons-Faults-and-Fixes
11.4 Force Field Analysis
11.5 SWOT Analysis
11.6 Impact Matrix
11.7 Complexity Manager
12. Practitioner’s Guide to Collaboration
12.1 Social Networks and Analytic Teams
12.2 Dividing the Work
12.3 Common Pitfalls with Small Groups
12.4 Benefiting from Diversity
12.5 Advocacy versus Objective Inquiry
12.6 Leadership and Training
13. Validation of Structured Analytic Techniques
13.1 Limits of Empirical Analysis
13.2 Establishing Face Validity
13.3 A Program for Empirical Validation
13.4 Recommended Research Program
14. The Future of Structured Analytic Techniques
14.1 Structuring the Data
14.2 Key Drivers
14.3 Imagining the Future: 2020
More Advance Praise for Structured Analytic Techniques for Intelligence Analysis
—Kathrin Brockmann
—Alan More
“Heuer and Pherson have written a book that provides law enforcement
intelligence and crime analysts with numerous techniques to assist in
homeland security and crime prevention. The book is a must read for
analysts in the law enforcement community responsible for analyzing
intelligence and crime data. Analysis of Competing Hypotheses is but one
non-traditional example of a tool that helps them challenge assumptions,
identify investigative leads and trends, and anticipate future
developments.”
—Morten Hansen
“Heuer and Pherson are the leading practitioners, innovators, and teachers
of the rigorous use of structured analytic techniques. Their work stands out
above all others in explaining and evaluating the utility of such methods
that can appreciably raise the standards of analysis. The methods they
present stimulate the imagination, enhance the rigor, and apply to hard
intelligence problems as well as other areas requiring solid analysis. This
new, expanded edition is a must-have resource for any serious analyst’s
daily use as well as one’s professional bookshelf.”
“The science of reasoning has grown considerably over the past 40-odd
years. Among the many fascinating aspects of the human intellect is the
ability to amplify our own capabilities by creating analytic tools. The tools
in this book are for those whose profession often requires making
judgments based on incomplete and ambiguous information. You hold in
your hands the toolkit for systematic analytic methods and critical
thinking. This is a book you can read and then actually apply to
accomplish something. Like any good toolkit, it has some simple tools that
explain themselves, some that need explanation and guidance, and some
that require considerable practice. This book helps us in our quest to enrich
our expertise and expand our reasoning skill.”
—Robert R. Hoffman
CQ Press, an imprint of SAGE, is the leading publisher of books,
periodicals, and electronic products on American government and
international affairs. CQ Press consistently ranks among the top
commercial publishers in terms of quality, as evidenced by the numerous
awards its products have won over the years. CQ Press owes its existence
to Nelson Poynter, former publisher of the St. Petersburg Times, and his
wife Henrietta, with whom he founded Congressional Quarterly in 1945.
Poynter established CQ with the mission of promoting democracy through
education and in 1975 founded the Modern Media Institute, renamed The
Poynter Institute for Media Studies after his death. The Poynter Institute
(www.poynter.org) is a nonprofit organization dedicated to training
journalists and media leaders.
Structured Analytic Techniques for Intelligence Analysis, Second Edition
SAGE: Los Angeles | London | New Delhi | Singapore | Washington DC
Heuer, Richards J.
ISBN 978-1-4522-4151-7
JK468.I6H478 2015
327.12—dc23 2014000255
Typesetter: C&M Digitals (P) Ltd.
Figures
2.0 System 1 and System 2 Thinking 20
2.2 Eight Families of Structured Analytic Techniques 25
3.0 Value of Using Structured Techniques to Perform Key Tasks 30
3.2 The Five Habits of the Master Thinker 34
4.4 Issue Redefinition Example 55
4.5 Timeline Estimate of Missile Launch Date 59
4.7a Paired Comparison Matrix 65
4.7b Weighted Ranking Matrix 66
4.8 Rethinking the Concept of National Security: A New Ecology 70
4.9a Venn Diagram of Components of Critical Thinking 72
4.9b Venn Diagram of Invalid and Valid Arguments 73
4.9c Venn Diagram of Zambrian Corporations 75
4.9d Zambrian Investments in Global Port Infrastructure Projects 77
4.10a Social Network Analysis: The September 11 Hijackers 80
4.10b Social Network Analysis: September 11 Hijacker Key Nodes 83
4.10c Social Network Analysis 84
4.11a Concept Map of Concept Mapping 87
4.11b Mind Map of Mind Mapping 88
4.12 Gantt Chart of Terrorist Attack Planning 95
5.1 Picture of Structured Brainstorming 104
5.4 Starbursting Diagram of a Lethal Biological Event at a Subway Station 114
5.5 Cross-Impact Matrix 117
5.6 Morphological Analysis: Terrorist Attack Options 121
5.7a Classic Quadrant Crunching™: Creating a Set of Stories 124
5.7b Terrorist Attacks on Water Systems: Flipping Assumptions 125
5.7c Terrorist Attacks on Water Systems: Sample Matrices 126
5.7d Selecting Attack Plans 127
6.1.1 Simple Scenarios 140
6.1.2 Cone of Plausibility 142
6.1.3 Alternative Futures Analysis: Cuba 144
6.1.4a Multiple Scenarios Generation: Future of the Iraq Insurgency 146
6.1.4b Future of the Iraq Insurgency: Using Spectrums to Define Potential Outcomes 147
6.1.4c Selecting Attention-Deserving and Nightmare Scenarios 147
6.2a Descriptive Indicators of a Clandestine Drug Laboratory 150
6.2b Using Indicators to Track Emerging Scenarios in Zambria 153
6.2c Zambria Political Instability Indicators 155
6.3a Indicators Validator™ Model 157
6.3b Indicators Validator™ Process 159
7.1.1 Simple Hypotheses 172
7.1.2 Multiple Hypotheses Generator™: Generating Permutations 174
7.1.3 Quadrant Hypothesis Generation: Four Hypotheses on the Future of Iraq 177
7.3a Creating an ACH Matrix 186
7.3b Coding Relevant Information in ACH 187
7.3c Evaluating Levels of Disagreement in ACH 188
7.4 Argument Mapping: Does North Korea Have Nuclear Weapons? 195
8.1 Key Assumptions Check: The Case of Wen Ho Lee 213
8.4 Using Red Hat Analysis to Catch Bank Robbers 225
8.5 Inside-Out Analysis versus Outside-In Approach 229
9.0 Mount Brain: Creating Mental Ruts 237
9.1 Structured Self-Critique: Key Questions 244
9.3 What If? Scenario: India Makes Surprising Gains from the Global Financial Crisis 252
9.4 High Impact/Low Probability Scenario: Conflict in the Arctic 258
9.7 Delphi Technique 268
11.2 Decision Matrix 299
11.3 Pros-Cons-Faults-and-Fixes Analysis 301
11.4 Force Field Analysis: Removing Abandoned Cars from City Streets 306
11.5 SWOT Analysis 309
11.6 Impact Matrix: Identifying Key Actors, Interests, and Impact 312
11.7 Variables Affecting the Future Use of Structured Analysis 318
12.1a Traditional Analytic Team 325
12.1b Special Project Team 326
12.2 Wikis as Collaboration Enablers 328
12.5 Advocacy versus Inquiry in Small-Group Processes 334
12.6 Effective Small-Group Roles and Interactions 336
13.3 Three Approaches to Evaluation 347
14.1 Variables Affecting the Future Use of Structured Analysis 357
Foreword
John McLaughlin
And yet, analysis has probably never been a more important part of the
profession—or more needed by policymakers. In contrast to the bipolar
dynamics of the Cold War, this new world is strewn with failing states,
proliferation dangers, regional crises, rising powers, and dangerous
nonstate actors—all at play against a backdrop of exponential change in
fields as diverse as population and technology.
To be sure, there are still precious secrets that intelligence collection must
uncover—things that are knowable and discoverable. But this world is
equally rich in mysteries having to do more with the future direction of
events and the intentions of key actors. Such things are rarely illuminated
by a single piece of secret intelligence data; they are necessarily subjects
for analysis.
Analysts charged with interpreting this world would be wise to absorb the
thinking in this book by Richards Heuer and Randy Pherson and in
Heuer’s earlier work The Psychology of Intelligence Analysis. The reasons
are apparent if one considers the ways in which intelligence analysis
differs from similar fields of intellectual endeavor.
others have left off; in most cases the questions they get are about
what happens next, not about what is known.
✶ Second, they cannot be deterred by lack of evidence. As Heuer
pointed out in his earlier work, the essence of the analysts’ challenge
is having to deal with ambiguous situations in which information is
never complete and arrives only incrementally—but with constant
pressure to arrive at conclusions.
✶ Third, analysts must frequently deal with an adversary that
actively seeks to deny them the information they need and is often
working hard to deceive them.
✶ Finally, for all of these reasons, analysts live with a high degree of
risk—essentially the risk of being wrong and thereby contributing to
ill-informed policy decisions.
The risks inherent in intelligence analysis can never be eliminated, but one
way to minimize them is through more structured and disciplined thinking
about thinking. On that score, I tell my students at the Johns Hopkins
School of Advanced International Studies that the Heuer book is probably
the most important reading I give them, whether they are heading into the
government or the private sector. Intelligence analysts should reread it
frequently. In addition, Randy Pherson’s work over the past six years to
develop and refine a suite of structured analytic techniques offers
invaluable assistance by providing analysts with specific techniques they
can use to combat mindsets, groupthink, and all the other potential pitfalls
of dealing with ambiguous data in circumstances that require clear and
consequential conclusions.
The book you now hold augments Heuer’s pioneering work by offering a
clear and more comprehensive menu of more than fifty techniques to build
on the strategies he earlier developed for combating perceptual errors. The
techniques range from fairly simple exercises that a busy analyst can use
while working alone—the Key Assumptions Check, Indicators
Validator™, or What If? Analysis—to more complex techniques that work
best in a group setting—Structured Brainstorming, Analysis of Competing
Hypotheses, or Premortem Analysis.
The key point is that all analysts should do something to test the
conclusions they advance. To be sure, expert judgment and intuition have
their place—and are often the foundational elements of sound analysis—
but analysts are likely to minimize error to the degree they can make their
underlying logic explicit in the ways these techniques demand.
Just as intelligence analysis has seldom been more important, the stakes in
the policy process it informs have rarely been higher. Intelligence analysts
these days therefore have a special calling, and they owe it to themselves
and to those they serve to do everything possible to challenge their own
thinking and to rigorously test their conclusions. The strategies offered by
Richards Heuer and Randy Pherson in this book provide the means to do
precisely that.
Preface
This second edition of the book includes five new techniques—AIMS
(Audience, Issue, Message, and Storyline) and Venn Analysis in chapter 4,
on Decomposition and Visualization; Cone of Plausibility in chapter 6, on
Scenarios and Indicators; and Decision Trees and Impact Matrix in chapter
11, on Decision Support. We have also split the Quadrant Crunching™
technique into two parts—Classic Quadrant Crunching™ and Foresight
Quadrant Crunching™, as described in chapter 5, on Idea Generation—
and made significant revisions to four other techniques: Getting Started
Checklist, Customer Checklist, Red Hat Analysis, and Indicators
Validator™.
well as in academia, business, medicine, and the private sector. Managers,
policymakers, corporate executives, strategic planners, action officers, and
operators who depend on input from analysts to help them achieve their
goals will also find it useful. Academics and consulting companies who
specialize in qualitative methods for dealing with unstructured data will be
interested in this pathbreaking book as well.
We designed the book for ease of use and quick reference. The spiral
binding allows analysts to have the book open while they follow step-by-
step instructions for each technique. We grouped the techniques into
logical categories based on a taxonomy we devised. Tabs separating each
chapter contain a table of contents for the selected chapter. Each technique
chapter starts with a description of that technique category and then
provides a brief summary of each technique covered in that chapter.
The Authors
Richards J. Heuer Jr. is best known for his book Psychology of
Intelligence Analysis and for developing and then guiding automation of
the Analysis of Competing Hypotheses (ACH) technique. Both are being
used to teach and train intelligence analysts throughout the Intelligence
Community and in a growing number of academic programs on
intelligence or national security. Long retired from the Central Intelligence
Agency (CIA), Mr. Heuer has nevertheless been associated with the
Intelligence Community in various roles for more than five decades and
has written extensively on personnel security, counterintelligence,
deception, and intelligence analysis. He has a B.A. in philosophy from
Williams College and an M.A. in international relations from the
University of Southern California, and has pursued other graduate studies
at the University of California at Berkeley and the University of Michigan.
Acknowledgments
The authors greatly appreciate the contributions made by Mary Boardman,
Kathrin Brockmann and her colleagues at the Stiftung Neue
Verantwortung, Nick Hare and his colleagues at the UK Cabinet Office,
Mary O’Sullivan, Kathy Pherson, John Pyrik, Todd Sears, and Cynthia
Storer in expanding and improving the chapters on analytic techniques, as well
as the graphics design and editing support provided by Adriana Gonzalez
and Richard Pherson.
Both authors also recognize the large contributions many individuals made
to the first edition, reviewing all or large portions of the draft text. These
include J. Scott Armstrong, editor of Principles of Forecasting: A
Handbook for Researchers and Practitioners and professor at the Wharton
School, University of Pennsylvania; Sarah Miller Beebe, a Russian
specialist who previously served as a CIA analyst and on the National
Security Council staff; Jack Davis, noted teacher and writer on intelligence
analysis, a retired senior CIA officer, and now an independent contractor
with the CIA; Robert R. Hoffman, noted author of books on naturalistic
decision making, Institute for Human & Machine Cognition; Marilyn B.
Peterson, senior instructor at the Defense Intelligence Agency, former
president of the International Association of Law Enforcement Intelligence
Analysts, and former chair of the International Association for Intelligence
Education; and Cynthia Storer, a counterterrorism specialist and former
CIA analyst now associated with Pherson Associates, LLC. Their
thoughtful critiques, recommendations, and edits as they reviewed this
book were invaluable.
D. Dietz, American Public University System; Bob Duval, West Virginia
University; Chaka Ferguson, Florida International University; Joseph
Gordon, National Intelligence University; Kurt Jensen, Carleton
University; and Doug Watson, George Mason University.
The ideas, interest, and efforts of all the above contributors to this book are
greatly appreciated, but the responsibility for any weaknesses or errors
rests solely on the shoulders of the authors.
Disclaimer
All statements of fact, opinion, or analysis expressed in this book are those
of the authors and do not reflect the official positions of the Office of the
Director of National Intelligence (ODNI), the Central Intelligence Agency
(CIA), or any other U.S. government agency. Nothing in the contents
should be construed as asserting or implying U.S. government
authentication of information or agency endorsement of the authors’
views. This material has been reviewed by the ODNI and the CIA only to
prevent the disclosure of classified information.
1 Introduction and Overview
dealing with the kinds of incomplete, ambiguous, and sometimes deceptive
information with which analysts must work. Structured analysis is a
mechanism by which internal thought processes are externalized in a
systematic and transparent manner so that they can be shared, built on, and
easily critiqued by others. Each technique leaves a trail that other analysts
and managers can follow to see the basis for an analytic judgment. These
techniques are used by individual analysts but are perhaps best utilized in a
collaborative team or group effort in which each step of the analytic
process exposes participants to divergent or conflicting perspectives. This
transparency helps ensure that differences of opinion among analysts are
heard and seriously considered early in the analytic process. Analysts tell
us that this is one of the most valuable benefits of any structured
technique.
These are called “techniques” because they usually guide the analyst in
thinking about a problem rather than provide the analyst with a definitive
answer, as one might expect from a method. Structured analytic techniques
in general, however, do form a methodology—a set of principles and
procedures for qualitative analysis of the kinds of uncertainties that many
analysts must deal with on a daily basis.
System 1 is intuitive, fast, efficient, and often unconscious. It draws
naturally on available knowledge, past experience, and often a long-
established mental model of how people or things work in a specific
environment. System 1 thinking is very alluring, as it requires little effort,
and it allows people to solve problems and make judgments quickly and
efficiently. It is often accurate, but intuitive thinking is also a common
source of cognitive biases and other intuitive mistakes that lead to faulty
analysis. Cognitive biases are discussed in the next section of this chapter.
All biases, except perhaps the personal self-interest bias, are the result of
fast, unconscious, and intuitive thinking (System 1)—not the result of
thoughtful reasoning (System 2). System 1 thinking is usually correct, but
frequently influenced by various biases as well as insufficient knowledge
and the inherent unknowability of the future. Structured analytic
techniques are a type of System 2 thinking designed to help identify and
overcome the analytic biases inherent in System 1 thinking.
in the early 1970s.4 Richards Heuer’s work for the CIA in the late 1970s
and the 1980s, subsequently followed by his book Psychology of
Intelligence Analysis, first published in 1999, applied Tversky and
Kahneman’s insights to problems encountered by intelligence analysts.5
Since the publication of Psychology of Intelligence Analysis, other authors
associated with the U.S. Intelligence Community (including Jeffrey
Cooper and Rob Johnston) have identified cognitive biases as a major
cause of analytic failure at the CIA.6
identifying a wider range of options for analysts to consider. For example,
a Key Assumptions Check requires the identification and consideration of
additional assumptions. Analysis of Competing Hypotheses requires
identification of alternative hypotheses, a focus on refuting rather than
confirming hypotheses, and a more systematic analysis of the evidence.
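The refutation-centered bookkeeping behind Analysis of Competing Hypotheses can be shown in a short sketch. This Python example is ours, not the authors'; the hypotheses, evidence items, and consistency codes are invented purely for illustration:

```python
# Illustrative ACH matrix (invented example). Hypotheses are columns and
# evidence items are rows; each cell codes the evidence as consistent
# ("C"), inconsistent ("I"), or neutral ("N") with that hypothesis.
# In keeping with ACH's focus on refuting rather than confirming,
# hypotheses are ranked by how little evidence is inconsistent with them.

hypotheses = ["H1: planned attack", "H2: routine exercise", "H3: deception"]

evidence = {
    "Increased radio traffic": ["C", "C", "C"],  # consistent with all: no diagnostic value
    "Units moved to border":   ["C", "I", "C"],
    "No logistics buildup":    ["I", "C", "C"],
}

def rank_hypotheses(hypotheses, evidence):
    """Sort hypotheses by their inconsistency count, fewest first."""
    scores = []
    for col, hyp in enumerate(hypotheses):
        inconsistent = sum(1 for codes in evidence.values() if codes[col] == "I")
        scores.append((inconsistent, hyp))
    return sorted(scores)

for inconsistent, hyp in rank_hypotheses(hypotheses, evidence):
    print(f"{hyp}: {inconsistent} inconsistent evidence item(s)")
```

Note that the first evidence row is consistent with every hypothesis and so discriminates among none of them; what the ranking captures is the ACH discipline of weighing inconsistency rather than counting confirmations.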
All structured techniques described in this book have a Value Added
section that describes how this technique contributes to better analysis and
helps mitigate the cognitive biases and intuitive traps that intelligence
analysts often fall into and that are associated with System 1 thinking. For many
techniques, the benefit is self-evident. None purports to always give the
correct answer. They identify alternatives that merit serious consideration.
No formula exists, of course, for always getting it right, but the use of
structured techniques can reduce the frequency and severity of error. These
techniques can help analysts mitigate the proven cognitive limitations,
sidestep some of the known analytic biases, and explicitly confront the
problems associated with unquestioned mental models or mindsets. They
help analysts think more rigorously about an analytic problem and ensure
that preconceptions and assumptions are not taken for granted but are
explicitly examined and, when possible, tested.10
Analytic methods are important, but method alone is far from sufficient to
ensure analytic accuracy or value. Method must be combined with
substantive expertise and an inquiring and imaginative mind. And these, in
turn, must be supported and motivated by the organizational environment
in which the analysis is done.
1.6 History of Structured Analytic Techniques
The first use of the term “structured analytic techniques” in the U.S.
Intelligence Community was in 2005. However, the origin of the concept
goes back to the 1980s, when the eminent teacher of intelligence analysis,
Jack Davis, first began teaching and writing about what he called
“alternative analysis.”13 The term referred to the evaluation of alternative
explanations or hypotheses, better understanding of other cultures, and
analysis of events from the other country’s point of view rather than by
mirror imaging. In the mid-1980s some initial efforts were made to promote
the use of more alternative analytic techniques in the CIA’s Directorate of
Intelligence. Under the direction of Robert Gates, then CIA Deputy
Director for Intelligence, analysts employed several new techniques to
generate scenarios of dramatic political change, track political instability,
and anticipate military coups. Douglas MacEachin, Deputy Director for
Intelligence from 1993 to 1996, supported new standards for systematic
and transparent analysis that helped pave the path to further change.14
The term “alternative analysis” became widely used in the late 1990s after
Adm. David Jeremiah’s postmortem analysis of the U.S. Intelligence
Community’s failure to foresee India’s 1998 nuclear test, a U.S.
congressional commission’s review of the Intelligence Community’s
global missile forecast in 1998, and a report from the CIA Inspector
General that focused higher-level attention on the state of the Directorate
of Intelligence’s analytic tradecraft. The Jeremiah report specifically
encouraged increased use of what it called “red-team” analysis.
When the Sherman Kent School for Intelligence Analysis at the CIA was
created in 2000 to improve the effectiveness of intelligence analysis, John
McLaughlin, then Deputy Director for Intelligence, tasked the school to
consolidate techniques for doing what was then referred to as “alternative
analysis.” In response to McLaughlin’s tasking, the Kent School
developed a compilation of techniques, and the CIA’s Directorate of
Intelligence started teaching these techniques in a class that later evolved
into the Advanced Analytic Tools and Techniques Workshop. The course
was subsequently expanded to include analysts from the Defense
Intelligence Agency and other elements of the U.S. Intelligence
Community.
attacks of September 11, 2001, and then the erroneous analysis of Iraq’s
possession of weapons of mass destruction, cranked up the pressure for
more alternative approaches to intelligence analysis. For example, the
Intelligence Reform Act of 2004 assigned to the Director of National
Intelligence “responsibility for ensuring that, as appropriate, elements of
the intelligence community conduct alternative analysis (commonly
referred to as ‘red-team’ analysis) of the information and conclusions in
intelligence analysis.”
In 2004, when the Kent School decided to update its training materials
based on lessons learned during the previous several years and publish A
Tradecraft Primer,15 Randolph H. Pherson and Roger Z. George were
among the drafters. “There was a sense that the name ‘alternative analysis’
was too limiting and not descriptive enough. At least a dozen different
analytic techniques were all rolled into one term, so we decided to find a
name that was more encompassing and suited this broad array of
approaches to analysis.”16 Kathy Pherson is credited with coming up with
the name “structured analytic techniques” during a dinner table
conversation with her husband, Randy. Roger George organized the
techniques into three categories: diagnostic techniques, contrarian
techniques, and imagination techniques. The term “structured analytic
techniques” became official in June 2005, when updated training materials
were formally approved.
One thing cannot be changed, however, in the absence of new legislation.
The Director of National Intelligence (DNI) is still responsible under the
Intelligence Reform Act of 2004 for ensuring that elements of the U.S.
Intelligence Community conduct alternative analysis, which it now
describes as the inclusion of alternative outcomes and hypotheses in
analytic products. We view “alternative analysis” as covering only a part
of what is now regarded as structured analytic techniques and recommend
against using the term “alternative analysis” to prevent confusion.
From the several hundred techniques that might have been included here,
we identified a core group of fifty-five techniques that appear to be most
useful for the intelligence profession, but also useful for those engaged in
related analytic pursuits in academia, business, law enforcement, finance,
and medicine. Techniques that tend to be used exclusively for a single type
of analysis in fields such as law enforcement or business consulting,
however, have not been included. This list is not static. It is expected to
increase or decrease as new techniques are identified and others are tested
and found wanting. In fact, we have dropped two techniques from the first
edition and added five new ones to the second edition.
Some training programs may have a need to boil down their list of
techniques to the essentials required for one particular type of analysis. No
one list will meet everyone’s needs. However, we hope that having one
fairly comprehensive list and common terminology available to the
growing community of analysts now employing structured analytic
techniques will help to facilitate the discussion and use of these techniques
in projects involving collaboration across organizational boundaries.
The names of some techniques are normally capitalized, while many are
not. For consistency and to make them stand out, the names of all
techniques described in this book are capitalized.
✶ Chapter 4 (“Decomposition and Visualization”) covers the basics
such as checklists, sorting, ranking, classification, several types of
mapping, matrices, and networks. It includes two new techniques,
Venn Analysis and AIMS (Audience, Issue, Message, and Storyline).
✶ Chapter 5 (“Idea Generation”) presents several types of
brainstorming. That includes Nominal Group Technique, a form of
brainstorming that rarely has been used in the U.S. Intelligence
Community but should be used when there is concern that a
brainstorming session might be dominated by a particularly
aggressive analyst or constrained by the presence of a senior officer.
A Cross-Impact Matrix supports a group learning exercise about the
relationships in a complex system.
✶ Chapter 6 (“Scenarios and Indicators”) covers four scenario
techniques and the indicators used to monitor which scenario seems
to be developing. The Indicators Validator™ developed by Randy
Pherson assesses the diagnostic value of the indicators. The chapter
includes a new technique, Cone of Plausibility.
✶ Chapter 7 (“Hypothesis Generation and Testing”) includes techniques
for generating hypotheses as well as for testing them: Diagnostic
Reasoning, Analysis of Competing Hypotheses, Argument Mapping,
and Deception Detection.
✶ Chapter 8 (“Assessment of Cause and Effect”) includes the widely
used Key Assumptions Check and Structured Analogies, which
comes from the literature on forecasting the future. Other techniques
include Role Playing, Red Hat Analysis, and Outside-In Thinking.
✶ Chapter 9 (“Challenge Analysis”) helps analysts break away from
an established mental model to imagine a situation or problem from a
different perspective. Two important techniques developed by the
authors, Premortem Analysis and Structured Self-Critique, give
analytic teams viable ways to imagine how their own analysis might
be wrong. What If? Analysis and High Impact/Low Probability
Analysis are tactful ways to suggest that the conventional wisdom
could be wrong. Devil’s Advocacy, Red Team Analysis, and the
Delphi Method can be used by management to actively seek
alternative answers.
✶ Chapter 10 (“Conflict Management”) explains that confrontation
between conflicting opinions is to be encouraged, but it must be
managed so that it becomes a learning experience rather than an
emotional battle. It describes a family of techniques grouped under
the umbrella of Adversarial Collaboration and an original approach to
Structured Debate in which debaters refute the opposing argument
rather than support their own.
✶ Chapter 11 (“Decision Support”) includes several techniques,
including Decision Matrix, that help managers, commanders,
planners, and policymakers make choices or trade-offs between
competing goals, values, or preferences. This chapter describes the
Complexity Manager developed by Richards Heuer and two new
techniques, Decision Trees and the Impact Matrix.
How can we know that the use of structured analytic techniques does, in
fact, improve the overall quality of the analytic product? As we discuss in
chapter 13 (“Validation of Structured Analytic Techniques”), there are two
approaches to answering this question—logical reasoning and empirical
research. The logical reasoning approach starts with the large body of
psychological research on the limitations of human memory and
perception and pitfalls in human thought processes. If a structured analytic
technique is specifically intended to mitigate or avoid one of the proven
problems in human thought processes, and that technique appears to be
successful in doing so, that technique can be said to have “face validity.”
The growing popularity of many of these techniques suggests that they
are perceived by analysts—and their customers—as providing distinct
added value in a number of different ways.
Empirical research on the effectiveness of these techniques has so far been
limited, however, and experiments have rarely been conducted under
conditions that simulate those in which the same techniques are used by
most intelligence analysts. Chapter
13 proposes a broader approach to the validation of structured analytic
techniques. It calls for structured interviews, observation, and surveys in
addition to experiments conducted under conditions that closely simulate
how these techniques are used by intelligence analysts. Chapter 13 also
recommends formation of a separate organizational unit to conduct such
research as well as other tasks to support the use of structured analytic
techniques.
Intelligence Analysis (Washington, DC: CIA Center for the Study of
Intelligence, 2005); and Rob Johnston, Analytic Culture in the U.S.
Intelligence Community: An Ethnographic Study (Washington, DC: CIA
Center for the Study of Intelligence, 2005).
8. Ibid., 112.
9. Emily Pronin, Daniel Y. Lin, and Lee L. Ross, “The Bias Blind Spot:
Perceptions of Bias in Self versus Others,” Personality and Social
Psychology Bulletin 28, no. 3 (2002): 369–381.
10. Judgments in this and the next sections are based on our personal
experience and anecdotal evidence gained in work with or discussion with
other experienced analysts. As we will discuss in chapter 13, there is a
need for systematic research on these and other benefits believed to be
gained through the use of structured analytic techniques.
updates/tradecraft-primer-may-4-2009.html.
2 Building a System 2 Taxonomy
Structured analysis is one of four broad categories of System 2 methods for
intelligence analysis. The others are critical thinking, empirical analysis,
and quasi-quantitative analysis. As discussed in section 2.2, structured analysis
consists of eight different categories of structured analytic techniques. This
chapter describes the rationale for these four broad categories and
identifies the eight families of structured analytic techniques.
The word “taxonomy” comes from the Greek taxis, meaning arrangement,
division, or order, and nomos, meaning law. Classic examples of a
taxonomy are Carolus Linnaeus’s hierarchical classification of all living
organisms by kingdom, phylum, class, order, family, genus, and species
that is widely used in the biological sciences, and the periodic table of
elements used by chemists. A library catalogue is also considered a
taxonomy, as it starts with a list of related categories that are then
progressively broken down into finer categories.
Development of a taxonomy is an important step in organizing knowledge
and furthering the development of any particular discipline. Rob Johnston
developed a taxonomy of variables that influence intelligence analysis but
did not go into any depth on analytic techniques or methods. He noted that
“a taxonomy differentiates domains by specifying the scope of inquiry,
codifying naming conventions, identifying areas of interest, helping to set
research priorities, and often leading to new theories. Taxonomies are
signposts, indicating what is known and what has yet to be discovered.”2
Our goal is to gain a better understanding of the domain of structured analytic
techniques, investigate how these techniques contribute to providing a
better analytic product, and consider how they relate to the needs of
analysts. The objective has been to identify various techniques that are
currently available, identify or develop additional potentially useful
techniques, and help analysts compare and select the best technique for
solving any specific analytic problem. Standardization of terminology for
structured analytic techniques will facilitate collaboration across agency
and international boundaries during the use of these techniques.
✶ Critical thinking: As defined by intelligence methodologist and
practitioner Jack Davis, critical thinking is the
application of the processes and values of scientific inquiry to the
special circumstances of strategic intelligence.11 Good critical
thinkers will stop and reflect on who is the key customer, what is the
question, where can they find the best information, how can they
make a compelling case, and what is required to convey their message
effectively. They recognize that this process requires checking key
assumptions, looking for disconfirming data, and entertaining
multiple explanations as long as possible. Most students are exposed
to critical thinking techniques at some point in their education—from
grade school to university—but few colleges or universities offer
specific courses to develop critical thinking and writing skills.
✶ Structured analysis: Structured analytic techniques involve a
step-by-step process that externalizes the analyst’s thinking in a
manner that makes it readily apparent to others, thereby enabling it to
be reviewed, discussed, and critiqued piece by piece, or step by step.
For this reason, structured analysis usually becomes a collaborative
effort in which the transparency of the analytic process exposes
participating analysts to divergent or conflicting perspectives. This
type of analysis is believed to mitigate some of the adverse impacts of
a single analyst’s cognitive limitations, an ingrained mindset, and the
whole range of cognitive and other analytic biases. Frequently used
techniques include Structured Brainstorming, Scenarios Analysis,
Indicators, Analysis of Competing Hypotheses, and Key Assumptions
Check. Structured techniques are taught at the college and graduate
school levels and can be used by analysts who have not been trained
in statistics, advanced mathematics, or the hard sciences.
✶ Quasi-quantitative analysis using expert-generated data:
Analysts often lack the empirical data needed to analyze an
intelligence problem. In the absence of empirical data, many methods
have been designed that rely on experts to fill the gaps by rating key
variables as High, Medium, Low, or Not Present, or by assigning a
subjective probability judgment. Special procedures are used to elicit
these judgments, and the ratings usually are integrated into a larger
model that describes that particular phenomenon, such as the
vulnerability of a civilian leader to a military coup, the level of
political instability, or the likely outcome of a legislative debate. This
category includes methods such as Bayesian inference, dynamic
modeling, and simulation. Training in the use of these methods is
provided through graduate education in fields such as mathematics,
information science, operations research, business, or the sciences.
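Bayesian inference, one of the quasi-quantitative methods named above, can be sketched in a few lines. The following is a minimal illustration, not a method from the book: the prior probability and the two likelihood ratings are invented expert judgments about a hypothetical coup.

```python
# Hypothetical sketch of a Bayesian update of an expert's subjective
# probability. All numbers are invented ratings, not data from the book.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from a prior P(H) and the two likelihoods of E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# An expert rates the prior chance of a coup as low (0.2). New evidence
# (unusual troop movements) is judged far more likely if a coup is brewing
# (0.7) than if it is not (0.1).
posterior = bayes_update(prior=0.2, p_e_given_h=0.7, p_e_given_not_h=0.1)
print(round(posterior, 2))  # the evidence raises the estimate to about 0.64
```

The value of the procedure is less the arithmetic than the discipline: it forces the expert to state the prior and the likelihoods explicitly, where they can be reviewed and challenged.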
✶ Empirical analysis using quantitative data: Quantifiable empirical
data are so different from expert-generated data that the methods and
types of problems the data are used to analyze are also quite different.
Econometric modeling is one common example of this method.
Empirical data are collected by various types of sensors and are used,
for example, in analysis of weapons systems. Training is generally
obtained through graduate education in statistics, economics, or the
hard sciences.
None of these four methods is better or more effective than another. All
are needed in various circumstances to optimize the odds of finding the
right answer. The use of multiple methods during the course of a single
analytic project should be the norm, not the exception. For example, even
a highly quantitative technical analysis may entail assumptions about
motivation, intent, or capability that are best handled with critical thinking
approaches and/or structured analysis. One of the structured techniques for
idea generation might be used to identify the variables to be included in a
dynamic model that uses expert-generated data to quantify these variables.
Of these four methods, structured analysis is the new kid on the block, so
to speak, so it is useful to consider how it relates to System 1 thinking.
System 1 thinking combines subject-matter expertise and intuitive
judgment in an activity that takes place largely in an analyst’s head.
Although the analyst may gain input from others, the analytic product is
frequently perceived as the product of a single analyst, and the analyst
tends to feel “ownership” of his or her analytic product. The work of a
single analyst is particularly susceptible to the wide range of cognitive
pitfalls described in Psychology of Intelligence Analysis and throughout
this book.12
A structured process that identifies and assesses alternative perspectives can also help to
avoid “groupthink,” the most common problem of small-group processes.
“The first step of science is to know one thing from another. This
knowledge consists in their specific distinctions; but in order that it
may be fixed and permanent, distinct names must be given to
different things, and those names must be recorded and
remembered.”
Figure 2.2). Structured analytic techniques can mitigate some of the human
cognitive limitations, sidestep some of the well-known analytic pitfalls,
and explicitly confront the problems associated with unquestioned
assumptions and mental models. They can ensure that assumptions,
preconceptions, and mental models are not taken for granted but are
explicitly examined and tested. They can support the decision-making
process, and the use and documentation of these techniques can facilitate
information sharing and collaboration.
A secondary goal when categorizing structured techniques is to correlate
categories with different types of common analytic tasks. This makes it
possible to match specific techniques to individual analysts’ needs, as will
be discussed in chapter 3. There are, however, quite a few techniques that
fit comfortably in several categories because they serve multiple analytic
functions.
3. Robert M. Clark, Intelligence Analysis: A Target-Centric Approach, 2nd
ed. (Washington, DC: CQ Press, 2007), 84.
3 Choosing the Right Technique
These pitfalls can be mitigated by learning and applying the Five Habits of the Master Thinker,
discussed in section 3.2.
3.1 Core Techniques
The average analyst is not expected to know how to use every technique in
this book. All analysts should, however, understand the functions
performed by various types of techniques and recognize the analytic
circumstances in which it is advisable to use them. An analyst can gain this
knowledge by reading the introductions to each of the technique chapters
and the overviews of each technique. Tradecraft or methodology
specialists should be available to assist when needed in the actual
implementation of many of these techniques. In the U.S. Intelligence
Community, for example, the CIA and the Department of Homeland
Security have made good progress supporting the use of these techniques
through the creation of analytic tradecraft support cells. Similar units have
been established by other intelligence analysis services and have proven
effective.
All analysts should be trained to use the core techniques discussed here
because they are needed so frequently and are widely applicable across the
various types of analysis—strategic and tactical, intelligence and law
enforcement, and cyber and business. They are identified and described
briefly in the following paragraphs.
Structured Brainstorming (chapter 5):
Structured Brainstorming is a simple exercise often employed at the beginning of an analytic project to
elicit relevant information or insight from a small group of knowledgeable
analysts. The group’s goal might be to identify a list of such things as
relevant variables, driving forces, a full range of hypotheses, key players
or stakeholders, available evidence or sources of information, potential
solutions to a problem, potential outcomes or scenarios, or potential
responses by an adversary or competitor to some action or situation; or, for
law enforcement, potential suspects or avenues of investigation. Analysts
should be aware of Nominal Group Technique as an alternative to
Structured Brainstorming when there is concern that a regular
brainstorming session may be dominated by a senior officer or that junior
personnel may be reluctant to speak up.
Indicators (chapter 6):
Indicators are observable or potentially observable actions or events that
are monitored to detect or evaluate change over time. For example, they
might be used to measure changes toward an undesirable condition such as
political instability, a pending financial crisis, or a coming attack.
Indicators can also point toward a desirable condition such as economic or
democratic reform. The special value of Indicators is that they create an
awareness that prepares an analyst’s mind to recognize the earliest signs of
significant change that might otherwise be overlooked. Developing an
effective set of Indicators is more difficult than it might seem. The
Indicators Validator™ helps analysts assess the diagnosticity of their
Indicators.
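The monitoring step described above can be sketched mechanically: maintain a list of indicators, record which have been observed, and flag when enough have fired. This is an illustrative sketch only; the indicator names and the warning threshold are invented for the example.

```python
# Illustrative sketch (not from the book): tracking which indicators of an
# undesirable condition have been observed. Names and threshold are invented.

INDICATORS = [
    "opposition leaders arrested",
    "troops deployed to capital",
    "state media attacks judiciary",
    "foreign journalists expelled",
]

def warning_level(observed, threshold=2):
    """List the indicators that have fired; warn once the threshold is met."""
    fired = [i for i in INDICATORS if i in observed]
    status = "WARNING" if len(fired) >= threshold else "watch"
    return status, fired

status, fired = warning_level({"troops deployed to capital",
                               "foreign journalists expelled"})
print(status)  # prints WARNING once two indicators have fired
```

A real indicator list would also record the date each indicator fired and how diagnostic it is, which is exactly what the Indicators Validator™ is designed to test.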
force analysts to stop and reflect on how to be more efficient over
time.
✶ Premortem Analysis and Structured Self-Critique usually take
more time but offer major rewards if errors in the original analysis are
discovered and remedied.
The best response to this valid observation is to practice using the core
techniques when deadlines are less pressing. In so doing, analysts will
ingrain new habits of thinking critically in their minds. If they and their
colleagues practice how to apply the concepts embedded in the structured
techniques when they have time, they will be more capable of applying
these critical thinking skills instinctively when under pressure. The Five
Habits of the Master Thinker are described in Figure 3.2.1 Each habit can
be mapped to one or more structured analytic techniques.
Key Assumptions:
In a healthy work environment, challenging assumptions should be
commonplace, ranging from “Why do you assume we all want pepperoni
pizza?” to “Won’t increased oil prices force them to reconsider their
export strategy?” If you expect your colleagues to challenge your key
assumptions on a regular basis, you will become more sensitive to your
own assumptions, and you will increasingly ask yourself if they are well
founded.
Alternative Explanations:
When confronted with a new development, the first instinct of a good
analyst is to develop a hypothesis to explain what has occurred based on
the available evidence and logic. A master thinker goes one step further
and immediately asks whether any alternative explanations should be
considered. If envisioning one or more alternative explanations is difficult,
then a master thinker will simply posit a single alternative: that the initial or
lead hypothesis is not true. While at first glance these alternatives may
appear much less likely, over time as new evidence surfaces they may
evolve into the lead hypothesis. Analysts who do not generate a set of
alternative explanations at the start and lock on to a preferred explanation
will often fall into the trap of Confirmation Bias—focusing on the data that
are consistent with their explanation and ignoring or rejecting other data
that are inconsistent.
Inconsistent Data:
Looking for inconsistent data is probably the hardest habit to master of the
five, but it is the one that can reap the most benefits in terms of time saved
when conducting an investigation or researching an issue. The best way to
train your brain to look for inconsistent data is to conduct a series of
Analysis of Competing Hypotheses (ACH) exercises. Such practice helps
the analyst learn how to more readily identify what constitutes compelling
contrary evidence. If an analyst encounters an item of data that is
inconsistent with one of the hypotheses in a compelling fashion (for
example, a solid alibi), then that hypothesis can be quickly discarded,
saving the analyst time by redirecting his or her attention to more likely
solutions.
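The bookkeeping behind an ACH exercise can be shown in a few lines. In this sketch, which uses invented hypotheses and ratings, each item of evidence is rated Consistent ("C"), Inconsistent ("I"), or Not applicable ("N") against each hypothesis, and hypotheses are ranked by how much evidence contradicts them.

```python
# Minimal ACH-style matrix with invented hypotheses and ratings. ACH focuses
# on disconfirmation, so the score is simply the count of "I" ratings.

matrix = {
    "H1: suspect acted alone":    ["C", "I", "C"],
    "H2: suspect had accomplice": ["C", "C", "C"],
    "H3: suspect not involved":   ["I", "I", "I"],  # e.g., the alibi fails
}

def inconsistency_scores(matrix):
    """Count the inconsistent ratings for each hypothesis."""
    return {h: ratings.count("I") for h, ratings in matrix.items()}

scores = inconsistency_scores(matrix)
# The hypothesis with the most inconsistent evidence is the first candidate
# for rejection; the one with the least survives for further testing.
print(max(scores, key=scores.get))  # prints the most-contradicted hypothesis
```

Counting inconsistencies this way is deliberately crude, but it captures the habit the paragraph describes: a single item of compelling contrary evidence, such as a solid alibi, is enough to set a hypothesis aside.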
Key Drivers:
Asking at the outset what key drivers best explain what has occurred or
will foretell what is about to happen is a key attribute of a master thinker.
If key drivers are quickly identified, the chance of surprise will be
diminished. An experienced analyst should know how to vary the weights
of these key drivers (either instinctively or by using such techniques as
Multiple Scenarios Generation or Quadrant Crunching™) to generate a set
of credible alternative scenarios that capture the range of possible
outcomes. A master thinker will take this one step further, developing a list
of Indicators to identify which scenario is emerging.
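The scenario step described above has a simple combinatorial core: let each key driver take its plausible values and enumerate every combination. The drivers and values below are invented for illustration; techniques such as Multiple Scenarios Generation add substantial craft on top of this skeleton.

```python
# Hedged sketch: two invented key drivers, each with two plausible states,
# generate four candidate scenarios (the skeleton of Multiple Scenarios
# Generation). Each scenario would then get its own list of Indicators.

from itertools import product

drivers = {
    "economy":  ["grows", "contracts"],
    "military": ["stays loyal", "splits"],
}

scenarios = [dict(zip(drivers, combo)) for combo in product(*drivers.values())]
for s in scenarios:
    print(f"economy {s['economy']}, military {s['military']}")
```

With three drivers of three values each the space grows to twenty-seven combinations, which is why the techniques in chapter 6 also provide ways to prune the list down to a handful of credible, distinct scenarios.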
Context:
Analysts often get so engaged in collecting and sorting data that they miss
the forest for the trees. Learning to stop and reflect on the overarching
context for the analysis is a key habit to learn. Most analysis is done under
considerable time pressure, and the tendency is to plunge in as soon as a
task is assigned. If the analyst does not take time to reflect on what the
client is really seeking, the resulting analysis could prove inadequate and
much of the research a waste of time. Ask yourself: “What do they need
from me?” “How can I help them frame the issue?” and “Do I need to place
their question in a broader context?” Failing to do this at the outset can
easily lead the analyst down blind alleys or require reconceptualizing an
entire paper after it has been drafted. Key structured techniques for
developing context include Starbursting, Mind Mapping, and Structured
Brainstorming.
Learning how to internalize the five habits will take a determined effort.
Applying each core technique to three to five real problems should implant
the basic concepts firmly in any analyst’s mind. With every repetition, the
habits will become more ingrained and, over time, will become instinctive.
Analysts could hardly ask for more: if they master the habits, they will both
increase their impact and save themselves time.
3.3 One Project, Multiple Techniques
Many projects require the use of multiple techniques, which is why this
book includes fifty-five different techniques. Each technique may provide
only one piece of a complex puzzle, and knowing how to put these pieces
together for any specific project is part of the art of structured analysis.
Separate techniques might be used for generating ideas, evaluating ideas,
identifying assumptions, drawing conclusions, and challenging previously
drawn conclusions. Chapter 12, “Practitioner’s Guide to Collaboration,”
discusses stages in the collaborative process and the various techniques
applicable at each stage.
Multiple techniques can also be used to check the accuracy and increase
the confidence in an analytic conclusion. Research shows that forecasting
accuracy is increased by combining “forecasts derived from methods that
differ substantially and draw from different sources of information.”2 This
is a particularly appropriate function for the Delphi Method (chapter 9),
which is a structured process for eliciting judgments from a panel of
outside experts. If a Delphi panel produces results similar to the internal
analysis, one can have significantly greater confidence in those results. If
the results differ, further research may be appropriate to understand why
and evaluate the differences. A key lesson learned from mentoring analysts
in the use of structured techniques is that major benefits can result—and
major mistakes avoided—if two different techniques (for example,
Structured Brainstorming and Diagnostic Reasoning) are used to attack the
same problem or two groups work the same problem independently and
then compare their results.
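The comparison step described above can be made concrete. In this illustrative sketch (all numbers invented), an internal probability estimate is set against the judgments of an independent panel; agreement within a tolerance raises confidence, while a large gap flags the need for further research.

```python
# Illustrative sketch: comparing an internal estimate with independent
# panel estimates. Numbers and the tolerance are invented for the example.

def compare(internal, panel, tolerance=0.15):
    """Average the panel's estimates and report convergence or divergence."""
    panel_avg = sum(panel) / len(panel)
    gap = abs(internal - panel_avg)
    verdict = "converges" if gap <= tolerance else "diverges: investigate why"
    return panel_avg, verdict

panel_avg, verdict = compare(internal=0.7, panel=[0.6, 0.75, 0.65])
print(round(panel_avg, 2), verdict)  # prints 0.67 converges
```

Averaging is only the simplest way to combine independent judgments, but it illustrates the research finding cited above: estimates drawn from different methods and sources, when combined, tend to beat any single one.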
The back cover of this book opens up to a graphic that shows eight
families of techniques and traces the links among the families and the
individual techniques within the families. The families of techniques
include Decomposition and Visualization, Idea Generation, Scenarios and
Indicators, Hypothesis Generation and Testing, Assessment of Cause and
Effect, Challenge Analysis, Conflict Management, and Decision Support.
for doing the analysis. Unfortunately, it is easy for analysts to go astray
when selecting the best method. Lacking effective guidance, analysts are
vulnerable to various influences:3
3.5 Structured Technique Selection Guide
To find structured techniques that would be most applicable for a task,
analysts pick the statement(s) that best describe what they want to do and
then look up the relevant chapter(s). Each chapter starts with a brief
discussion of that category of analysis followed by a short description of
each technique in the chapter. Analysts should read the introductory
discussion and the paragraph describing each technique before reading the
full description of the techniques that are most applicable to the specific
issue. The description of each technique explains when, why, and how to
use it. For many techniques, the information provided is sufficient for
analysts to use the technique. For more complex techniques, however,
training in the technique or assistance by an experienced user of the
technique is strongly recommended.
The structured analytic techniques listed under each question not only help
analysts perform that particular analytic function but also help them overcome,
avoid, or at least mitigate the influence of several intuitive traps or System
1 errors commonly encountered by intelligence analysts. Most of these
traps are manifestations of more basic cognitive biases.
evolving situation?
Chapter 7: Hypothesis Generation and Testing
Chapter 8: Assessment of Cause and Effect
Chapter 9: Challenge Analysis
5. Monitor a situation to gain early warning of events or changes
that may affect critical interests; avoid surprise?
Chapter 6: Scenarios and Indicators
Chapter 9: Challenge Analysis
6. Generate and test hypotheses?
Chapter 7: Hypothesis Generation and Testing
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check)
7. Assess the possibility of deception?
Chapter 7: Hypothesis Generation and Testing (Analysis of
Competing Hypotheses, Deception Detection)
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check, Role Playing, Red Hat Analysis)
8. Foresee the future?
Chapter 6: Scenarios and Indicators
Chapter 7: Hypothesis Generation and Testing (Analysis of
Competing Hypotheses)
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check, Structured Analogies)
Chapter 9: Challenge Analysis
Chapter 11: Decision Support (Complexity Manager)
9. Challenge your own mental model?
Chapter 9: Challenge Analysis
Chapter 5: Idea Generation
Chapter 7: Hypothesis Generation and Testing (Diagnostic
Reasoning, Analysis of Competing Hypotheses)
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check)
10. See events from the perspective of the adversary or other
players?
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check, Role Playing, Red Hat Analysis)
Chapter 9: Challenge Analysis (Red Team Analysis, Delphi
Method)
Chapter 10: Conflict Management
Chapter 11: Decision Support (Impact Matrix)
11. Manage conflicting mental models or opinions?
Chapter 10: Conflict Management
Chapter 7: Hypothesis Generation and Testing (Analysis of
Competing Hypotheses, Argument Mapping)
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check)
12. Support a manager, commander, action officer, planner, or
policymaker in deciding between alternative courses of action;
draw actionable conclusions?
Chapter 11: Decision Support
Chapter 10: Conflict Management
Chapter 7: Hypothesis Generation and Testing (Analysis of
Competing Hypotheses)
3. The first three items here are from Craig S. Fleisher and Babette E.
Bensoussan, Strategic and Competitive Analysis: Methods and Techniques
for Analyzing Business Competition (Upper Saddle River, NJ: Prentice
Hall, 2003), 22–23.
4 Decomposition and Visualization
One of the most obvious constraints that analysts face in their work is the
limit on how much information most people can keep at the forefront of
their minds and think about at the same time. Imagine that you have to
make a difficult decision. You make a list of pros and cons. When it comes
time to make the decision, however, the lists may be so long that you cannot
hold them all in mind at once to weigh the pros against the cons. This
often means that when you think about the decision, you will vacillate,
focusing first on the pros and then on the cons, favoring first one decision
and then the other. Now imagine how much more difficult it would be to
think through an intelligence problem with many interacting variables. The
limitations of human thought make it difficult, if not impossible, to do
error-free analysis without the support of some external representation of
the parts of the problem being addressed.
Two common approaches for coping with this limitation of our working
memory are decomposition—that is, breaking down the problem or issue
into its component parts so that each part can be considered separately—
and visualization—placing all the parts on paper or on a computer screen
in some organized manner designed to facilitate understanding how the
various parts interrelate. Actually, all structured analytic techniques
employ these approaches, as the externalization of one’s thinking is part of
the definition of structured analysis. For some of the basic techniques,
however, decomposing an issue to present the data in an organized manner
is the principal contribution they make to more effective analysis. These
are the basic techniques that will be described in this chapter.
Any technique that gets a complex thought process out of the analyst’s
head and onto paper or the computer screen can be helpful. The use of
even a simple technique such as a checklist can be extremely productive.
Consider, for example, the work of Dr. Peter Pronovost, who is well
known for his research documenting that a simple checklist of precautions
reduces infections, deaths, and costs in hospital intensive care units
(ICUs). Dr. Pronovost developed a checklist of five standard precautions
against infections that can occur when the ICU staff uses catheter lines to
connect machines to a patient’s body.
ICUs in the United States insert five million such lines into patients each
year, and national statistics show that, after ten days, 4 percent of those
lines become infected. Infections occur in 80,000 patients each year, and
between 5 percent and 28 percent die, depending on how sick the patient is
at the start. A month of observation showed that doctors skipped at least
one of the precautions in more than a third of all patients.
—Richards J. Heuer Jr., Psychology of Intelligence Analysis (1999)
Overview of Techniques
Getting Started Checklist, AIMS, Customer Checklist, and
Issue Redefinition
are four techniques that can be combined to help analysts conceptualize
and launch a new project. If an analyst can start off in the right direction
and avoid having to change course later, a lot of time can be saved.
However, analysts should still be prepared to change course should their
research so dictate. As Albert Einstein said, “If we knew what we were
doing, it would not be called research.”
Sorting
is a basic technique for organizing data in a manner that often yields new
insights. Sorting is effective when information elements can be broken out
into categories or subcategories for comparison by using a computer
program, such as a spreadsheet. It is particularly effective during initial
data gathering and hypothesis generation.
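Sorting as described above amounts to grouping records by a chosen field so that categories can be compared side by side. The sketch below uses invented incident records; a spreadsheet does the same thing with a sort or filter.

```python
# Minimal sketch of Sorting: break records out into categories for
# comparison. The incident data are invented for the example.

from collections import defaultdict

incidents = [
    {"city": "Alpha", "type": "arson"},
    {"city": "Beta",  "type": "theft"},
    {"city": "Alpha", "type": "theft"},
]

def sort_by(records, key):
    """Group records into buckets keyed by the chosen field."""
    buckets = defaultdict(list)
    for record in records:
        buckets[record[key]].append(record)
    return dict(buckets)

by_city = sort_by(incidents, "city")
print({city: len(cases) for city, cases in by_city.items()})
```

Re-sorting the same records by a different field, here "type" instead of "city", is often what produces the new insight the technique promises.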
Ranking, Scoring, and Prioritizing
covers three techniques for prioritizing lists of items, such as alternatives to
be considered, indicators, possible scenarios, important players, historical
precedents, sources of information, questions to be answered, and so forth.
Such lists are even more useful once they are ranked, scored, or prioritized
to determine which items are most important, most useful, most likely, or
should be at the top of the priority list.
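One of the three methods in section 4.7, Weighted Ranking, can be sketched directly: score each item against a set of weighted criteria and sort by the totals. The criteria, weights, and scores below are invented for illustration.

```python
# Hedged sketch of Weighted Ranking: invented criteria weights and scores.

weights = {"likelihood": 0.5, "impact": 0.3, "timeliness": 0.2}

items = {
    "Scenario A": {"likelihood": 3, "impact": 5, "timeliness": 2},
    "Scenario B": {"likelihood": 4, "impact": 2, "timeliness": 3},
}

def weighted_rank(items, weights):
    """Total each item's weighted scores and sort from highest to lowest."""
    totals = {name: sum(weights[c] * score for c, score in scores.items())
              for name, scores in items.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, total in weighted_rank(items, weights):
    print(name, round(total, 2))
```

The arithmetic is trivial; the analytic value lies in forcing the group to agree on the criteria and their weights before arguing about the items.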
Matrices
are generic analytic tools for sorting and organizing data in a manner that
facilitates comparison and analysis. They are used to analyze the
relationships between any two sets of variables or the interrelationships
among a single set of variables. A matrix consists of a grid with as many
cells as needed for whatever problem is being analyzed. Some analytic
topics or problems that use a matrix occur so frequently that they are
described in this book as separate techniques.
Venn Analysis
is a visual technique that can be used to explore the logic of arguments.
Venn diagrams are commonly used to teach set theory in mathematics;
they can also be adapted to illustrate simple set relationships in analytic
arguments, reveal flaws in reasoning, and identify missing data.
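The set logic behind a Venn diagram is easy to make explicit. In this illustrative sketch, with invented names and membership lists, the intersection and difference operations surface the same overlaps and gaps a drawn diagram would.

```python
# Illustrative sketch of Venn-style set logic with invented membership lists.

attended_meeting = {"Ali", "Boris", "Chen"}
wired_funds      = {"Boris", "Chen", "Dana"}

both       = attended_meeting & wired_funds   # the overlap of the two circles
only_funds = wired_funds - attended_meeting   # outside the meeting circle

print(sorted(both))        # who is in both sets
print(sorted(only_funds))  # a gap worth investigating
```

Writing the argument this way also exposes flawed reasoning of the form "all who wired funds attended the meeting," which the data above contradict.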
Network Analysis
is used extensively by counterterrorism, counternarcotics,
counterproliferation, law enforcement, and military analysts to identify and
monitor individuals who may be involved in illegal activity. Social
Network Analysis is used to map and analyze relationships among people,
groups, organizations, computers, websites, and any other information
processing entities. The terms Network Analysis, Association Analysis,
Link Analysis, and Social Network Analysis are often used
interchangeably.
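At its simplest, a network can be represented as an adjacency list, and the number of direct links a person has offers a first, rough cue to centrality. This is a sketch only, with invented names and links; real Social Network Analysis tools compute far richer centrality measures.

```python
# Minimal sketch of Network Analysis: an adjacency list with invented names.
# Degree (number of direct links) is used as a crude centrality cue.

network = {
    "Ali":   ["Boris", "Chen", "Dana"],
    "Boris": ["Ali"],
    "Chen":  ["Ali", "Dana"],
    "Dana":  ["Ali", "Chen"],
}

def degree_ranking(network):
    """Rank nodes by how many direct connections they have."""
    return sorted(network, key=lambda node: len(network[node]), reverse=True)

print(degree_ranking(network)[0])  # the node with the most direct links
```

Degree is only one signal: a person with few links can still be the sole bridge between two clusters, which is why analysts also look at measures such as betweenness.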
Mind Maps and Concept Maps
are diagrams of ideas connected by lines that show and briefly describe the connections among these ideas. Mind
Maps and Concept Maps are used by an individual or a group to help sort
out their own thinking or to facilitate the communication of a complex set
of relationships to others, as in a briefing or an intelligence report. Mind
Maps can also be used to identify gaps or stimulate new thinking about a
topic.
4.1 Getting Started Checklist
When to Use It
Analysts should view the Getting Started Checklist in much the same way
a pilot and copilot regard the checklist they always review carefully before
flying their airplane. It should be an automatic first step in preparing an
analysis and essential protection against future unwanted surprises. Even if
the task is to prepare a briefing or a quick-turnaround article, taking a
minute to review the checklist can save valuable time, for example, by
reminding the analyst of a key source, prompting her or him to identify the
key customer, or challenging the analyst to consider alternative
explanations before coming to closure.
Value Added
By getting the fundamentals right at the start of a project, analysts can
avoid having to change course later on. This groundwork can save the
analyst—and reviewers of the draft—a lot of time and greatly improve the
quality of the final product. Seasoned analysts will continuously adapt the
checklist to their work environment and the needs of their particular
customers.
The Method
Analysts should answer several questions at the beginning of a new
project. The following is our list of suggested starter questions, but there is
no single best way to begin. Other lists can be equally effective.
✶ What has prompted the need for the analysis? For example, was it
a news report, a new intelligence report, a new development, a
perception of change, or a customer request?
✶ What is the key intelligence, policy, or business question that
needs to be answered?
✶ Why is this issue important, and how can analysis make a unique
and meaningful contribution?
✶ Has this question or a similar question already been answered by
you or someone else, and what was said? To whom was that analysis
delivered, and what has changed since then?
✶ Who are the principal customers? Are their needs well
understood? If not, try to gain a better understanding of their needs
and the style of reporting they like.
✶ Are there any other stakeholders who would have an interest in the
answer to the question? Would any of them prefer that a different
question be answered? Consider meeting with others who see the
question from a different perspective.
✶ What are all the possible answers to this question? What
alternative explanations or outcomes should be considered before
making an analytic judgment on the issue?
✶ Would the analysis benefit from the use of structured analytic
techniques?
✶ What potential sources or streams of information would be most
useful—and efficient—to exploit for learning more about this topic or
question?
✶ Where should we reach out for expertise, information, or
assistance within our organization or outside our unit?
✶ Should we convene an initial brainstorming session to identify and
challenge key assumptions, examine key information, identify key
drivers and important players, explore alternative explanations, and/or
generate alternative hypotheses?
✶ What is the best way to present my information and analysis?
Which parts of my response should be represented by graphics,
tables, or in matrix format?
through the AIMS of their product.3 AIMS is a mnemonic that stands for
Audience, Issue or Intelligence Question, Message, and Storyline. Its
purpose is to prompt the analyst to consider up front who the paper is
being written for, what key question or questions it should address, what is
the key message the reader should take away, and how best to present the
analysis in a compelling way. Once these four questions are answered in a
crisp and direct way, the process of drafting the actual paper becomes
much easier.
When to Use It
A seasoned analyst knows that much time can be saved over the long run if
the analyst takes an hour or so to define the AIMS of the paper. This can
be done working alone or, preferably, with a small group. It is helpful to
include one’s supervisor in the process either as part of a short
brainstorming session or by asking the supervisor to review the resulting
plan. For major papers, consideration should be given to addressing the
AIMS of a paper in a more formal Concept Paper or Terms of Reference
(TOR).
Value Added
Conceptualizing a product before it is drafted is usually the greatest
determinant of whether the final product will meet the needs of key
consumers. Focusing on these four elements—Audience, Issue or
Intelligence Question, Message, and Storyline—helps ensure that the
customer will quickly absorb and benefit from the analysis. If the AIMS of
the article or assessment are not considered before drafting, the paper is
likely to lack focus and frustrate the reader. Moreover, chances are that it
will take more time to be processed through the editing and coordination
process, which could reduce its timeliness and relevance to the customer.
The Method
✶ Audience: The first step in the process is to identify the primary
audience for the product. Are you writing a short, tightly focused
article for a senior customer, or a longer piece with more detail that
will serve a less strategic customer? If you identify more than one key
customer for the product, we recommend that you draft multiple
versions of the paper while tailoring each to a different key customer.
Usually the best strategy is to outline both papers before you begin
researching and writing. Then when you begin to draft the first paper
you will know what information should be saved and later
incorporated into the second paper.
✶ Issue or intelligence question: Ask yourself what is the key issue
or question with which your targeted audience is struggling or will
have to struggle in the future. What is their greatest concern or
greatest need at the moment? Make sure that the key question is
tightly focused, actionable, and answerable in more than one way.
✶ Message: What is the bottom line that you want to convey to your
key customer or customers? What is the “elevator speech” or key
point you would express to that customer if you had a minute with her
or him between floors on the elevator? The message should be
formulated as a short, clear, and direct statement before starting to
draft your article. Sometimes it is easier to discern the message if you
talk through your paper with a peer or your supervisor and then write
down the key theme or conclusion that emerges from the
conversation.
✶ Storyline: With your bottom-line message in mind, can you
present that message in a clear, direct, and persuasive way to the
customer? Do you have a succinct line of argumentation that flows
easily and logically throughout the paper and tells a compelling story?
Can you illustrate this storyline with equally compelling pictures,
videos, or other graphics?
4.3 Customer Checklist
The Customer Checklist helps an analyst tailor the product to the needs of
the principal customer for the analysis.4 When used appropriately, it
ensures that the product is of maximum possible value to this customer. If
the product—whether a paper, briefing, or web posting—is intended to
serve many different kinds of customers, it is important to focus on the
customer or customers who constitute the main audience and meet their
specific concerns or requirements.
When to Use It
When first starting on a project, the analyst can use the Customer Checklist,
a set of twelve questions, to focus on the customer for whom the analytic
product is being prepared. Ideally, an analytic product
should address the needs of the principal customer, and it is most efficient
to identify that individual at the very start. It also can be helpful to review
the Customer Checklist later in the drafting process to ensure that the
product continues to be tightly focused on the needs of the principal
recipient.
Value Added
The analysis will be more compelling if the needs and preferences of the
principal customer are kept in mind at each step in the drafting process.
Using the checklist also helps focus attention on what matters most and
generate a rigorous response to the task at hand.
The Method
Before preparing an outline or drafting a paper, ask the following
questions:
✶ Who is the key person for whom the product is being developed?
✶ Will this product answer the question the customer asked?
✶ Did the customer ask the right question? If not, do you need to
place your answer in a broader context to better frame the issue
before addressing his or her particular concern?
✶ What is the most important message to give this customer?
✶ What value added are you providing with your response?
✶ How is the customer expected to use this information?
✶ How much time does the customer have to digest your product?
✶ What format would convey the information most effectively?
✶ Is it possible to capture the essence in one or a few key graphics?
✶ Does distribution of this document need to be restricted? What
classification is most appropriate? Should you prepare different
products at different levels of restriction?
✶ What is the customer’s level of interest in or tolerance for
technical language and detail? Can you provide details in appendices
or backup materials, graphics, or an annex?
✶ Would the customer expect you to reach out to other experts to tap
their expertise in drafting this paper? If so, how would you flag the
contribution of other experts in the product?
✶ To whom might the customer turn for other views on this topic?
What data or analysis might others provide that could influence how
the customer would react to what you will be preparing?
✶ What perspectives do other interested parties have on this issue?
What are the responsibilities of the other parties?
4.4 Issue Redefinition
Many analytic projects start with an issue statement: What is the issue,
why is it an issue, and how will it be addressed? Issue Redefinition is a
technique for experimenting with different ways to define an issue. This is
important, because seemingly small differences in how an issue is defined
can have significant effects on the direction of the research.
When to Use It
Using Issue Redefinition at the beginning of a project can get you started
off on the right foot. It may also be used at any point during the analytic
process when a new hypothesis or critical new evidence is introduced.
Issue Redefinition is particularly helpful in preventing “mission creep,”
which results when analysts unwittingly take the direction of analysis
away from the core intelligence question or issue at hand, often as a result
of the complexity of the problem or a perceived lack of information.
Analysts have found Issue Redefinition useful when their thinking is stuck
in a rut, and they need help to get out of it.
Value Added
Proper issue identification can save a great deal of time and effort by
forestalling unnecessary research and analysis on a poorly stated issue.
Issues are often poorly presented when they are
✶ Too narrow or misdirected (Will college students purchase this
product?)
The Method
The following tactics can be used to stimulate new or different thinking
about the best way to state an issue or problem. (See Figure 4.4 for an
example.) These tactics may be used in any order:
“How much of the ground capability of China’s People’s Liberation
Army would not be involved in the initial Taiwan assault?”
4.5 Chronologies and Timelines
A Chronology is a list that places events or actions in the order in which
they occurred; a Timeline is a graphic depiction of those events put in
context of the time of the events and the time between events. Both are
used to identify trends or relationships among the events or actions and, in
the case of a Timeline, among the events and actions as well as other
developments in the context of the overarching intelligence problem.
When to Use It
Chronologies and Timelines aid in organizing events or actions. Whenever
it is important to understand the timing and sequence of relevant events or
to identify key events and gaps, these techniques can be useful. The events
may or may not have a cause-and-effect relationship.
Value Added
Chronologies and Timelines aid in the identification of patterns and
correlations among events. These techniques also allow you to relate
seemingly disconnected events to the big picture; to highlight or identify
significant changes; or to assist in the discovery of trends, developing
issues, or anomalies. They can serve as a catch-all for raw data when the
meaning of the data has not yet been identified. Multiple-level Timelines
allow analysts to track concurrent events that may have an effect on one
another. Although Chronologies and Timelines may be developed at the
onset of an analytic task to ascertain the context of the activity to be
analyzed, they also may be used in postmortems to break down the stream
of reporting, find the causes for analytic failures, and highlight significant
events after an intelligence or business surprise.
The Method
Chronologies and Timelines are effective yet simple ways for you to order
incoming information as you go through your daily message traffic. An
Excel spreadsheet or even a Word document can be used to log the results
of research and marshal evidence. You can use tools such as the Excel
drawing function or the Analysts’ Notebook to draw the Timeline. Follow
these steps:
✶ Review the Chronology or Timeline by asking the following
questions:
– What are the temporal distances between key events? If
“lengthy,” what caused the delay? Are there missing pieces of
data that should be collected to fill those gaps?
– Did the analyst overlook pieces of information that may
have had an impact on, or been related to, the events?
– Conversely, if events seem to have happened more rapidly
than expected, or if not all events appear to be related, is it
possible that the analyst has information related to multiple
event Timelines?
– Does the Timeline have all the critical events that are
necessary for the outcome to occur?
– When did the information become known to the analyst or a
key player?
– What are the information or intelligence gaps?
– Are there any points along the Timeline when the target is
particularly vulnerable to the collection of intelligence or
information or countermeasures?
– What events outside this Timeline could have influenced the
activities?
✶ If preparing a Timeline, synopsize the data along a line, usually
horizontal or vertical. Use the space on both sides of the line to
highlight important analytic points. For example, place facts above
the line and points of analysis or commentary below the line.
Alternatively, contrast the activities of different groups,
organizations, or streams of information by placement above or below
the line. If multiple actors are involved, you can use multiple lines,
showing how and where they converge.
✶ Look for relationships and patterns in the data connecting persons,
places, organizations, and other activities. Identify gaps or
unexplained time periods, and consider the implications of the
absence of evidence. Prepare a summary chart detailing key events
and key analytic points in an annotated Timeline.
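A lightweight way to apply these steps is to let a short script do the ordering and gap-spotting. The sketch below is illustrative only; the events and dates are hypothetical, and the 21-day gap threshold is an arbitrary choice an analyst would tune to the problem at hand:

```python
from datetime import date, timedelta

# Hypothetical event log: (date, description) pairs, in arbitrary order.
events = [
    (date(2023, 2, 14), "Missile moved to test stand"),
    (date(2023, 1, 5), "Increased activity at assembly building"),
    (date(2023, 3, 2), "Telemetry vans observed at launch site"),
    (date(2023, 3, 30), "Fueling equipment deployed"),
]

# A Chronology is simply the events sorted by date.
chronology = sorted(events)

# Flag gaps longer than a chosen threshold; these may indicate
# missing reporting or a collection gap worth tasking.
threshold = timedelta(days=21)
for (d1, e1), (d2, e2) in zip(chronology, chronology[1:]):
    gap = d2 - d1
    if gap > threshold:
        print(f"{gap.days}-day gap between '{e1}' and '{e2}'")
```

The flagged gaps then feed directly into the review questions above, such as whether missing pieces of data should be collected to fill them.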
Potential Pitfalls
In using Timelines, analysts may incorrectly assume that because one event
followed another, the later event was caused by the earlier one. Also, the value
of this technique may be reduced if the analyst lacks imagination in
identifying contextual events that relate to the information in the
Chronology or Timeline.
Example
A team of analysts working on strategic missile forces knows what steps
are necessary to prepare for and launch a nuclear missile. (See Figure 4.5.)
The analysts have been monitoring a country they believe is close to
testing a new variant of its medium-range surface-to-surface ballistic
missile. They have seen the initial steps of a test launch in mid-February
and decide to initiate a concentrated watch of the primary and secondary
test launch facilities. Observed and expected activities are placed into a
Timeline to gauge the potential dates of a test launch. The analysts can
thus estimate when a possible missile launch may occur and make decision
makers aware of indicators of possible activity.
4.6 Sorting
Sorting is a basic technique for organizing a large body of data in a manner
that often yields new insights.
When to Use It
Sorting is effective when information elements can be broken out into
categories or subcategories for comparison with one another, most often
by using a computer program such as a spreadsheet. This technique is
particularly effective during the initial data-gathering and hypothesis-
generation phases of analysis, but you may also find sorting useful at other
times.
Value Added
Sorting large amounts of data into relevant categories that are compared
with one another can provide analysts with insights into trends,
similarities, differences, or abnormalities of intelligence interest that
otherwise would go unnoticed. When you are dealing with transactions
data in particular (for example, communications intercepts or transfers of
goods or money), it is very helpful to sort the data first.
The Method
Follow these steps:
Examples
Example 1:
Are a foreign adversary’s military leaders pro-U.S., anti-U.S., or neutral on
their attitudes toward U.S. policy in the Middle East? To answer this
question, analysts sort the leaders by various factors determined to give
insight into the issue, such as birthplace, native language, religion, level of
professional education, foreign military or civilian/university exchange
training (where/when), field/command assignments by parent service,
political influences in life, and political decisions made. Then they review
the information to see if any parallels exist among the categories.
Example 2:
Analysts review the data from cell-phone communications among five
conspirators to determine the frequency of calls, patterns that show who is
calling whom, changes in patterns of frequency of calls prior to a planned
activity, dates and times of calls, subjects discussed, and so forth.
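The kind of frequency sorting described in Example 2 can be sketched in a few lines of Python. The call records below are hypothetical; the point is simply that sorting and counting by caller-recipient pair surfaces the strongest communication links:

```python
from collections import Counter

# Hypothetical call records: (caller, recipient, date) tuples.
calls = [
    ("A", "B", "2023-05-01"),
    ("A", "B", "2023-05-02"),
    ("B", "C", "2023-05-02"),
    ("A", "B", "2023-05-03"),
    ("D", "E", "2023-05-04"),
]

# Sort the data by caller-recipient pair and count call frequency.
frequency = Counter((caller, recipient) for caller, recipient, _ in calls)

# The most common pairs surface the strongest communication links.
for pair, count in frequency.most_common():
    print(pair, count)
```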
Example 3:
Analysts are reviewing all information related to an adversary’s weapons
of mass destruction (WMD) program. Electronic intelligence reporting
shows more than 300,000 emitter collections over the past year alone. The
analysts’ sorting of the data by type of emitter, dates of emission, and
location shows varying increases and decreases of emitter activity with
some minor trends identifiable. The analysts filter out all collections
except those related to air defense. The filtered information is sorted by
type of air defense system, location, and dates of activity. Of note is a
period when there is an unexpectedly large increase of activity in the air
defense surveillance and early warning systems. The analysts review
relevant external events and find that a major opposition movement
outside the country held a news conference where it detailed the
adversary’s WMD activities, including locations of the activity within the
country. The air defense emitters for all suspected locations of WMD
activity, including several not included in the press conference, increased
to a war level of surveillance within four hours of the press conference.
The analysts review all air defense activity locations that show the
increase assumed to be related to the press conference and the WMD
program and find two locations with increased activity that were not
previously listed as WMD-related. These new locations were added to
collection planning to determine what relationship, if any, they had to the
WMD program.
Potential Pitfalls
Improper sorting can hide valuable insights as easily as it can illuminate
them. Standardizing the data being sorted is imperative. Working with an
analyst who has experience in sorting can help you avoid this pitfall in
most cases.
4.7 Ranking, Scoring, and Prioritizing
When to Use It
Use of a ranking technique is often the next step after using an idea-
generation technique such as Structured Brainstorming, Virtual
Brainstorming, or Nominal Group Technique (see chapter 5). A ranking
technique is appropriate whenever there are too many items to rank easily
just by looking at the list, the ranking has significant consequences and
must be done as accurately as possible, or it is useful to aggregate the
opinions of a group of analysts.
Value Added
Combining an idea-generation technique with a ranking technique is an
excellent way for an analyst to start a new project or to provide a
foundation for intraoffice or interagency collaboration. An idea-generation
technique is often used to develop lists of such things as driving forces,
variables to be considered, or important players. Such lists are more useful
once they are ranked, scored, or prioritized. For example, you might
determine which items are most important, most useful, most likely, or that
need to be done first.
The Method
Of the three methods discussed here, Ranked Voting is the easiest and
quickest to use, and it is often sufficient. However, it is not as accurate
after you get past the top two or three ranked items, because the group
usually has not thought as much (and may not care as much) about the
lower-ranked items. Ranked Voting also provides less information than
either Paired Comparison or Weighted Ranking. Ranked Voting shows
only that one item is ranked higher or lower than another; it does not show
how much higher or lower. Paired Comparison does provide this
information, and Weighted Ranking provides even more information. It
specifies the criteria that are used in making the ranking, and weights are
assigned to those criteria for each of the items in the list.
dimension, such as importance or preference, using an interval-type scale.
A simple version of Weighted Ranking has been selected for presentation
here because intelligence analysts are normally making subjective
judgments rather than dealing with hard numbers. As you read the
following steps, refer to Figure 4.7b:
✶ Create a table with one column for each item. At the head of each
column, write the name of an item or assign it a letter to save space.
✶ Add two more blank columns on the left side of this table. Count
the number of selection criteria, and then adjust the table so that it has
that number of rows plus three more, one at the top to list the items
and two at the bottom to show the raw scores and percentages for
each item. In the first column on the left side, starting with the second
row, write in all the selection criteria down the left side of the table.
There is some value in listing the criteria roughly in order of
importance, but that is not critical. Leave the bottom two rows blank
for the scores and percentages.
✶ Now work down the far-left column assigning weights to the
selection criteria based on their relative importance for judging the
ranking of the items. Depending upon how many criteria there are,
take either 10 points or 100 points and divide these points between
the selection criteria based on what is believed to be their relative
importance in ranking the items. In other words, decide what
percentage of the decision should be based on each of these criteria.
Be sure that the weights for all the selection criteria combined add up
to either 10 or 100, whichever is selected. Also be sure that all the
criteria are phrased in such a way that a higher weight is more
desirable.
✶ Work across the rows to write the criterion weight in the left side
of each cell.
✶ Next, work across the matrix one row (selection criterion) at a
time to evaluate the relative ability of each of the items to satisfy that
selection criterion. Use a ten-point rating scale, where 1 = low and 10 =
high, to rate each item separately. (Do not spread the ten points
proportionately across all the items as was done to assign weights to
the criteria.) Write this rating number after the criterion weight in the
cell for each item.
✶ Again, work across the matrix one row at a time to multiply the
criterion weight by the item rating for that criterion, and enter this
number for each cell, as shown in Figure 4.7b.
✶ Now add the columns for all the items. The result will be a ranking
of the items from highest to lowest score. To gain a better
understanding of the relative ranking of one item as compared with
another, convert these raw scores to percentages. To do this, first add
together all the scores in the “Totals” row to get a total number. Then
divide the score for each item by this total score to get a percentage
ranking for each item. All the percentages together must add up to
100 percent. In Figure 4.7b it is apparent that item B has the number
one ranking (with 20.3 percent), while item E has the lowest (with
13.2 percent).
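The arithmetic in these steps is easy to automate. The sketch below is a minimal illustration of the Weighted Ranking calculation, not a substitute for the worked matrix in Figure 4.7b; the criteria, weights, and ratings are hypothetical:

```python
# Hypothetical criteria weights (must sum to 100) and item ratings (1-10).
weights = {"cost": 40, "feasibility": 35, "timeliness": 25}

ratings = {
    "A": {"cost": 6, "feasibility": 7, "timeliness": 5},
    "B": {"cost": 9, "feasibility": 8, "timeliness": 7},
    "C": {"cost": 4, "feasibility": 6, "timeliness": 9},
}

# Multiply each criterion weight by the item's rating and sum per item.
raw = {item: sum(weights[c] * r[c] for c in weights) for item, r in ratings.items()}

# Convert raw scores to percentages of the grand total, so the relative
# standing of each item is easier to see. Percentages sum to 100.
total = sum(raw.values())
percentages = {item: 100 * score / total for item, score in raw.items()}

for item, pct in sorted(percentages.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {pct:.1f}%")
```

Note the distinction the steps draw: the weights are divided proportionally across the criteria, while each item is rated independently on the ten-point scale.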
Potential Pitfalls
When any of these techniques is used to aggregate the opinions of a group
of analysts, the rankings provided by each group member are added
together and averaged. This means that the opinions of the outliers, whose
views are quite different from the others, are blended into the average. As
a result, the ranking does not show the range of different opinions that
might be present in a group. In some cases the identification of outliers
with a minority opinion can be of great value. Further research might show
that the outliers are correct.
Origins of This Technique
Ranking, Scoring, and Prioritizing are common analytic processes in many
fields. All three forms of ranking described here are based largely on
Internet sources. For Ranked Voting, we referred to
http://en.wikipedia.org/wiki/Voting_system; for Paired Comparison,
http://www.mindtools.com; and for Weighted Ranking,
www.ifm.eng.cam.ac.uk/dstools/choosing/criter.html. We also reviewed
the Weighted Ranking process described in Morgan Jones’s The Thinker’s
Toolkit. This method is taught at some government agencies, but we found
it to be more complicated than necessary for intelligence applications that
typically use fuzzy, expert-generated data rather than hard numbers.
4.8 Matrices
A matrix is an analytic tool for sorting and organizing data in a manner
that facilitates comparison and analysis. It consists of a simple grid with as
many cells as needed for whatever problem is being analyzed.
Some analytic topics or problems that use a matrix occur so frequently that
they are handled in this book as separate techniques. For example:
When to Use It
Matrices are used to analyze the relationships between any two sets of
variables or the interrelationships between a single set of variables. Among
other things, they enable analysts to
A matrix is such an easy and flexible tool to use that it should be one of
the first tools analysts think of when dealing with a large body of data.
One limiting factor in the use of matrices is that information must be
organized along only two dimensions.
Value Added
Matrices provide a visual representation of a complex set of data. By
presenting information visually, a matrix enables analysts to deal
effectively with more data than they could manage by juggling various
pieces of information in their head. The analytic problem is broken down
to component parts so that each part (that is, each cell in the matrix) can be
analyzed separately, while ideally maintaining the context of the problem
as a whole.
The Method
A matrix is a tool that can be used in many different ways and for many
different purposes. What matrices have in common is that each has a grid
with sufficient columns and rows for you to enter two sets of data that you
want to compare. Organize the category headings for each set of data in
some logical sequence before entering the headings for one set of data in
the top row and the headings for the other set in the far-left column. Then
enter the data in the appropriate cells.
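A matrix of this kind can be held in something as simple as a nested dictionary. The sketch below is illustrative only; the row and column headings and the cell entries are hypothetical:

```python
# Hypothetical comparison: attributes (rows) versus entities (columns).
rows = ["Funding", "Leadership", "Capability"]
cols = ["Group X", "Group Y", "Group Z"]

# Each cell holds the analyst's data for one row/column pair;
# empty cells make gaps in the available information visible.
matrix = {r: {c: "" for c in cols} for r in rows}
matrix["Funding"]["Group X"] = "state-sponsored"
matrix["Capability"]["Group Z"] = "limited"

# Print the grid with row and column headings.
print("".ljust(12) + "".join(c.ljust(16) for c in cols))
for r in rows:
    print(r.ljust(12) + "".join(matrix[r][c].ljust(16) for c in cols))
```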
Source: 2009 Pherson Associates, LLC.
from unclassified sources and the imperative is to disseminate it to
everyone as soon as possible. This dynamic can create serious tensions at
the midpoint, for example, when those working in the homeland security
arena must find ways to share sensitive national security information with
state and local law enforcement officers.
Figure 4.9b Venn Diagram of Invalid and Valid Arguments
When to Use It
The technique has also been used by intelligence analysts to organize their
thinking, look for gaps in logic or data, and examine the quality of an
argument. Analysts can use it to determine how to satisfy a narrow set of
conditions when multiple variables must be considered, for example, in
determining the best time to launch a satellite. It can also be used any time
an argument involves a portion of something that is being compared with
other portions or subsets.
Value Added
Venn Analysis helps analysts determine if they have put like things in the
right groups and correctly identified what belongs in each subset. The
technique makes it easier to visualize arguments, often revealing flaws in
reasoning or spurring analysts to examine their assumptions by making
them explicit when constructing the diagram. Examining the relationships
between the overlapping areas of a Venn diagram helps put things in
perspective and often prompts new research or deeper inquiry. Care should
be taken, however, not to make the diagrams too complex by adding levels
of precision that may not be justified.
The Method
Venn Analysis is an agile tool that can be applied in various ways. It is a
simple process but can spark prolonged debate. The following steps show
how Venn Analysis can be used to examine the validity of an argument:
Example
Consider this analytic judgment: “The substantial investments state-owned
companies in the fictional country of Zambria are making in major U.S.
port infrastructure projects pose a threat to U.S. national security.”
Using Venn Analysis, the analyst would first draw a large circle to
represent the autocratic state of Zambria, a smaller circle within it
representing state-owned enterprises, and small circles within that circle to
represent individual state-owned companies. A mapping of all Zambrian
corporations would include more circles to represent other private-sector
companies as well as a few that are partially state-owned (see Figure 4.9c).
Simply constructing this diagram raises several questions, such as: What is
the percentage of companies that are state-owned, partially state-owned,
and in the private sector? How does this impact the size of the circles?
What is the definition of “state-owned”? What does “state-owned” imply
politically and economically in a nondemocratic state like Zambria? Do
these distinctions even matter?
The diagram should also prompt the analyst to examine several
assumptions implicit in the questions:
Each of these assumptions, made explicit by the Venn diagram, can now
be challenged. For example, the original statement may have
oversimplified the problem. Perhaps the threat is broader than just
state-owned enterprises. Could Zambria exert influence through its private
companies or private-public partnerships?
circle would show investments by Zambrian businesses in U.S. domestic
and foreign port infrastructure projects, as shown in Figure 4.9d.
Figure 4.9d Zambrian Investments in Global Port Infrastructure Projects
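The set logic behind a Venn diagram can also be checked programmatically. The sketch below uses hypothetical company names for the fictional country of Zambria; each set operation corresponds to an overlap or exclusion region in the diagram:

```python
# Hypothetical sets of fictional Zambrian companies.
all_companies = {"Alpha", "Beta", "Gamma", "Delta", "Epsilon"}
state_owned = {"Alpha", "Beta"}
partly_state = {"Gamma"}
us_port_invest = {"Beta", "Gamma", "Delta"}

# Venn-style question as a set operation: which investors in U.S.
# ports are fully or partly state-owned (the overlapping region)?
state_linked_investors = us_port_invest & (state_owned | partly_state)

# Which investors fall entirely outside state ownership?
private_investors = us_port_invest - state_owned - partly_state

print(state_linked_investors)
print(private_investors)
```

Expressing the circles as explicit sets forces the same definitional questions the diagram raises, such as where a partially state-owned company belongs.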
Origins of This Technique
This application of Venn diagrams as a tool for intelligence analysis was
developed by John Pyrik for the Government of Canada. The materials are
used with the permission of the Canadian government. For more
discussion of this concept, see Peter Suber, Earlham College, “Symbolic
Logic,” at http://legacy.earlham.edu/~peters/courses/log/loghome.htm; and
Lee Archie, Lander University, “Introduction to Logic,” at
http://philosophy.lander.edu/logic.
enforcement and national security, information used in Network Analysis
usually comes from informants or from physical or technical surveillance.
These networks are most often clandestine and therefore not visible to
open source collectors. Although software has been developed to help
collect, sort, and map data, it is not essential to many of these analytic
tasks. Social Network Analysis, which involves measuring associations,
does require software.
Analysis of networks is broken down into three stages, and analysts can
stop at the stage that answers their questions:
When to Use It
Network Analysis is used extensively in law enforcement,
counterterrorism analysis, and analysis of transnational issues such as
narcotics and weapons proliferation to identify and monitor individuals
who may be involved in illegal activity. Network Charting (or Link
Charting) is used literally to “connect the dots” between people, groups, or
other entities of intelligence or criminal interest. Network Analysis puts
these dots in context, and Social Network Analysis helps identify hidden
associations and degrees of influence between the dots.
Value Added
Network Analysis has proved to be highly effective in helping analysts
identify and understand patterns of organization, authority,
communication, travel, financial transactions, or other interactions among
people or groups that are not apparent from isolated pieces of information.
It often identifies key leaders, information brokers, or sources of funding.
It can identify additional individuals or groups who need to be
investigated. If done over time, it can help spot change within the network.
Indicators monitored over time may signal preparations for offensive
action by the network or may reveal opportunities for disrupting the
network.
The Method
Analysis of networks attempts to answer the question, “Who is related to
whom and what is the nature of their relationship and role in the network?”
The basic network analysis software identifies key nodes and shows the links between them. SNA software measures the frequency of flow across links and explores the significance of key attributes of the nodes.
We know of no software that does the intermediate task of grouping nodes
into meaningful clusters, though algorithms do exist and are used by
individual analysts. In all cases, however, you must interpret what is
represented, looking at the chart to see how it reflects organizational
structure, modes of operation, and patterns of behavior.
There are tried and true methods for making good charts that allow the
analyst to save time, avoid unnecessary confusion, and arrive more quickly
at insights. Network charting usually involves the following steps:
Figure 4.10a Social Network Analysis: The September 11 Hijackers
Source: Valdis Krebs, Figure 3, “Connecting the Dots: Tracking Two
Identified Terrorists,” Orgnet.com; www.orgnet.com/tnet.html.
Reproduced with permission of the author.
elements on the same link or node. The need for additional elements
often happens when the intelligence question is murky (for example,
when “I know something bad is going on, but I don’t know what”);
when the chart is being used to answer multiple questions; or when a
chart is maintained over a long period of time.
✶ Work out from the central nodes, adding links and nodes until you
run out of information from the good sources.
✶ Add nodes and links from other sources, constantly checking them
against the information you already have. Follow all leads, whether
they are people, groups, things, or events, and regardless of source.
Make note of the sources.
✶ Stop in these cases: when you run out of information, when all of the new links are dead ends, when all of the new links begin to turn in on one another like a spider’s web, or when you run out of time.
✶ Update the chart and supporting documents regularly as new
information becomes available, or as you have time. Just a few
minutes a day will pay enormous dividends.
✶ Rearrange the nodes and links so that the links cross over one
another as little as possible. This is easier to accomplish if you are
using software. Many software packages can rearrange the nodes and
links in various ways.
✶ Cluster the nodes. Do this by looking for “dense” areas of the chart
and relatively “empty” areas. Draw shapes around the dense areas.
Use a variety of shapes, colors, and line styles to denote different
types of clusters, your relative confidence in the cluster, or any other
criterion you deem important.
✶ Cluster the clusters, if you can, using the same method.
✶ Label each cluster according to the common denominator among
the nodes it contains. In doing this you will identify groups, events,
activities, and/or key locations. If you have in mind a model for
groups or activities, you may be able to identify gaps in the chart by
what is or is not present that relates to the model.
✶ Look for “cliques”—a group of nodes in which every node is
connected to every other node, though not to many nodes outside the
group. These groupings often look like stars or pentagons. In the
intelligence world, they often turn out to be clandestine cells.
✶ Look in the empty spaces for nodes or links that connect two
clusters. Highlight these nodes with shapes or colors. These nodes are
brokers, facilitators, leaders, advisers, media, or some other key
connection that bears watching. They are also points where the
network is susceptible to disruption.
✶ Chart the flow of activities between nodes and clusters. You may
want to use arrows and time stamps. Some software applications will
allow you to display dynamically how the chart has changed over
time.
✶ Analyze this flow. Does it always go in one direction or in
multiple directions? Are the same or different nodes involved? How
many different flows are there? What are the pathways? By asking
these questions, you can often identify activities, including
indications of preparation for offensive action and lines of authority.
You can also use this knowledge to assess the resiliency of the
network. If one node or pathway were removed, would there be
alternatives already built in?
✶ Continually update and revise as nodes or links change.
Source: Based on Valdis Krebs, Figure 3, “Connecting the Dots:
Tracking Two Identified Terrorists,” Orgnet.com;
www.orgnet.com/tnet.html. Reproduced with permission of the
author. With changes by Cynthia Storer.
Source: 2009 Pherson Associates, LLC.
✶ Betweenness centrality: Helen has fewer direct connections than
does Deborah, but she plays a vital role as a “broker” in the network.
Without her, Ira and Janice would be cut off from the rest of the
network. A node with high betweenness has great influence over what
flows—or does not flow—through the network.
✶ Closeness centrality: Frank and Gary have fewer connections
than does Deborah, yet the pattern of their direct and indirect ties
allows them to access all the nodes in the network more quickly than
anyone else. They are in the best position to monitor all the
information that flows through the network.
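The centrality concepts above can be sketched in plain Python. This is a minimal illustration, not analytic software: the network below is invented so that it matches the roles described in the text (only Deborah, Frank, Gary, Helen, Ira, and Janice come from the example; Alice, Beth, Carol, and Ed are placeholder names), and Helen's broker role is shown with a simple connectivity check rather than a full betweenness computation.

```python
from collections import deque

# Hypothetical network mirroring the example: Deborah has the most direct
# connections, Helen brokers the only path to Ira and Janice, and Frank and
# Gary sit closest to everyone else.
edges = [
    ("Alice", "Beth"), ("Alice", "Carol"), ("Alice", "Deborah"), ("Alice", "Frank"),
    ("Beth", "Deborah"), ("Beth", "Ed"), ("Beth", "Gary"),
    ("Carol", "Deborah"), ("Carol", "Frank"),
    ("Deborah", "Ed"), ("Deborah", "Frank"), ("Deborah", "Gary"),
    ("Ed", "Gary"), ("Frank", "Gary"),
    ("Frank", "Helen"), ("Gary", "Helen"),
    ("Helen", "Ira"), ("Ira", "Janice"),
]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def distances(start, adj):
    """Breadth-first search: shortest hop count from start to each reachable node."""
    dist, queue = {start: 0}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

# Degree centrality: who has the most direct connections?
degree = {node: len(nbrs) for node, nbrs in graph.items()}

# Closeness centrality: who can reach everyone in the fewest hops?
n = len(graph)
closeness = {node: (n - 1) / sum(distances(node, graph).values()) for node in graph}

# Broker (betweenness) check: remove Helen and see who becomes unreachable.
without_helen = {node: nbrs - {"Helen"} for node, nbrs in graph.items() if node != "Helen"}
cut_off = set(without_helen) - set(distances("Deborah", without_helen))
```

Run on this toy network, `degree` peaks at Deborah, `closeness` peaks at Frank and Gary, and `cut_off` contains Ira and Janice, confirming Helen as the broker whose removal disrupts the network.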
Potential Pitfalls
This method is extremely dependent upon having at least one good source
of information. It is hard to know when information may be missing, and
the boundaries of the network may be fuzzy and constantly changing, in
which case it is difficult to determine whom to include. The constantly
changing nature of networks over time can cause information to become
outdated. You can be misled if you do not constantly question the data
being entered, update the chart regularly, and look for gaps and consider
their potential significance.
You should never rely blindly on the SNA software but strive to
understand how the application being used works. As is the case with any
software, different applications measure different things in different ways,
and the devil is always in the details.
http://faculty.ucr.edu/~hanneman/nettext/C1_Social_Network_Data.html#Populations
Marilyn B. Peterson, Defense Intelligence Agency, “Association
Analysis,” undated draft, used with permission of the author; Cynthia
Storer and Averill Farrelly, Pherson Associates, LLC; Pherson Associates
training materials.
When to Use It
Whenever you think about a problem, develop a plan, or consider making
even a very simple decision, you are putting a series of thoughts together.
That series of thoughts can be represented visually with words or images
connected by lines that represent the nature of the relationships among
them. Any thinking for any purpose, whether about a personal decision or
analysis of an intelligence issue, can be diagrammed in this manner. Such
mapping is usually done for either of two purposes:
in a jury trial.
Mind Maps can also be used to help analysts brainstorm the various
elements or key players involved in an issue and how they might be
connected. The technique stimulates analysts to ask questions such as: Are
there additional categories or “branches on the tree” that we have not
considered? Are there elements of this process or applications of this
technique that we have failed to capture? Does the Mind Map suggest a
different context for understanding the problem?
Source: R. R. Hoffman and J. D. Novak (Pensacola, FL: Institute for Human and Machine Cognition, 2009). Reproduced with permission of the authors.
Figure 4.11b Mind Map of Mind Mapping
Source: Illumine Training, “Mind Map,” www.mind-mapping.co.uk.
Reproduced with permission of Illumine Training. With changes by
Randolph H. Pherson.
Value Added
Mapping facilitates the presentation or discussion of a complex body of information. It is useful because it presents a considerable amount of information in a form that can generally be taken in at a glance. Creating a visual picture of the basic structure of a complex problem helps analysts be as clear as possible in stating precisely what they want to express. Diagramming skills enable analysts to stretch their analytic capabilities.
Mind Maps and Concept Maps vary greatly in size and complexity
depending on how and why they are being used. When used for structured
analysis, a Mind Map or a Concept Map is typically larger, sometimes
much larger, than the examples shown in this chapter. Many are of modest
size and complexity. Like any model, such a map is a simplification of
reality. It does not necessarily try to capture all the nuances of a complex
system of relationships. Instead, it provides, for example, an outline
picture of the overall structure of a system of variables, showing which
variables are relevant to a given problem or issue and how they are related
to one another. Once you have this information, you are well on the way
toward knowing what further research needs to be done and perhaps even
how to organize the written report. For some projects, the diagram can be
the analytical product or a key part of it.
After participating in this group process to define the problem, the group should be better able to identify what further research needs to be done and to parcel out additional work among its best-qualified members. It should also be better positioned to prepare a report that represents, as fully as possible, the collective wisdom of the group as a whole.
Analysts and students also find that Mind Map and Concept Map software
products are useful tools for taking notes during an oral briefing or lecture.
By developing a map as the lecture proceeds, the analyst or student can
chart the train of logic and capture all the data presented in a coherent map
that includes all the key elements of the subject.
The Method
Start a Mind Map or Concept Map with a focal question that defines what
is to be included. Then follow these steps:
Mind Mapping and Concept Mapping can be done manually, but mapping
software is strongly recommended; it is much easier and faster to move
concepts and links around on a computer screen than it is to do so
manually. There are many different software programs for various types of mapping, and each has strengths and weaknesses. These products are usually variations of the two main approaches, Mind Mapping and Concept Mapping, which differ in the following ways:
✶ Mind Mapping has only one main or central idea, and all other
ideas branch off from it radially in all directions. The central idea is
preferably shown as an image rather than in words, and images are
used throughout the map. “Around the central word you draw the 5 or
10 main ideas that relate to that word. You then take each of those
child words and again draw the 5 or 10 main ideas that relate to each
of those words.”9
✶ A Concept Map has a more flexible form. It can have multiple
hubs and clusters. It can also be designed around a central idea, but it
does not have to be and often is not designed that way. It does not
normally use images. A Concept Map is usually shown as a network,
although it too can be shown as a hierarchical structure like Mind
Mapping when that is appropriate. Concept Maps can be very
complex and are often meant to be viewed on a large-format screen.
✶ Mind Mapping was originally developed as a fast and efficient
way to take notes during briefings and lectures. Concept Mapping
was originally developed as a means of mapping students’ emerging
knowledge about science; it has a foundation in the constructivist
theory of learning, which emphasizes that “meaningful learning
involves the assimilation of new concepts and propositions into
existing cognitive structures.”10 Concept Maps are frequently used as
teaching tools. Most recently, they have come to be used to develop
“knowledge models,” in which large sets of complex Concept Maps
are created and hyperlinked together to represent analyses of complex
domains or problems.
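The structural difference can be sketched in code: a Concept Map reduces to a set of propositions, that is, two concepts joined by a linking phrase, while a Mind Map radiates from a single central idea. A minimal sketch with invented content:

```python
# A Concept Map stored as propositions: (concept, linking phrase, concept).
# The topic and phrasing here are invented for illustration.
propositions = [
    ("Concept Maps", "are built from", "propositions"),
    ("propositions", "join", "concepts"),
    ("concepts", "are linked by", "linking phrases"),
    ("Concept Maps", "can have", "multiple hubs"),
    ("Mind Maps", "radiate from", "one central idea"),
]

def neighbors(concept):
    """Every proposition in which the concept appears, rendered as a sentence."""
    return [f"{a} {link} {b}" for a, link, b in propositions if concept in (a, b)]
```

Because each link carries its own phrase, a Concept Map can be read as a set of declarative statements, which is what makes it useful for representing emerging knowledge.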
When to Use It
Process Maps, including Gantt Charts, are used by intelligence analysts to
track, understand, and monitor the progress of activities of intelligence
interest being undertaken by a foreign government, a criminal or terrorist
group, or any other nonstate actor. For example, a Process Map can be
used to:
Value Added
The process of constructing a Process Map or a Gantt Chart helps analysts
think clearly about what someone else needs to do to complete a complex
project. When a complex plan or process is understood well enough to be
diagrammed or charted, analysts can then answer questions such as the
following: What are they doing? How far along are they? What do they
still need to do? What resources will they need to do it? How much time
do we have before they have this capability? Is there any vulnerable point
in this process where they can be stopped or slowed down?
The Process Map or Gantt Chart is a visual aid for communicating this
information to the customer. If sufficient information can be obtained, the
analyst’s understanding of the process will lead to a set of indicators that
can be used to monitor the status of an ongoing plan or project.
The Method
A Process Map and a Gantt Chart differ substantially in their appearance.
In a Process Map, the steps in the process are diagrammed sequentially
with various symbols representing starting and end points, decisions, and
actions connected with arrows. Diagrams can be created with readily
available software such as Microsoft Visio.
Example
The Intelligence Community has considerable experience monitoring
terrorist groups. This example describes how an analyst would go about
creating a Gantt Chart of a generic terrorist attack–planning process (see
Figure 4.12). The analyst starts by making a list of all the tasks that
terrorists must complete, estimating the schedule for when each task will
be started and finished, and determining what resources are needed for
each task. Some tasks need to be completed in a sequence, with each task
being more or less completed before the next activity can begin. These are
called sequential, or linear, activities. Other activities are not dependent
upon completion of any other tasks. These may be done at any time before
or after a particular stage is reached. These are called nondependent, or
parallel, tasks.
Source: Based on Gantt Chart by Robert Damelio, The Basics of Process Mapping (Florence, KY: Productivity Press, 2006). www.ganttchart.com.
When entering tasks into the Gantt Chart, enter the sequential tasks first in
the required sequence. Ensure that they don’t start until the tasks on which
they depend have been completed. Then enter the parallel tasks in an
appropriate time frame toward the bottom of the matrix so that they do not
interfere with the sequential tasks on the critical path to completion of the
project.
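The sequential-versus-parallel scheduling logic described above can be sketched programmatically: a sequential task's earliest start is the latest finish time among its prerequisites, while a parallel task with no prerequisites can start at any time. This is a simplified illustration with invented task names and durations, not the contents of Figure 4.12.

```python
# name: (duration in weeks, list of tasks that must finish first)
tasks = {
    "select target":      (2, []),
    "surveil target":     (4, ["select target"]),
    "plan attack":        (3, ["surveil target"]),
    "rehearse":           (2, ["plan attack"]),
    "recruit operatives": (5, []),   # parallel: independent of the sequence
    "raise funds":        (6, []),   # parallel
}

def earliest_start(name, tasks, memo=None):
    """Earliest start = latest finish time among the task's prerequisites."""
    memo = {} if memo is None else memo
    if name not in memo:
        _, deps = tasks[name]
        memo[name] = max(
            (earliest_start(d, tasks, memo) + tasks[d][0] for d in deps),
            default=0,
        )
    return memo[name]

schedule = {name: earliest_start(name, tasks) for name in tasks}
```

Here "rehearse" cannot begin until week 9 (2 + 4 + 3 weeks of sequential predecessors), while the parallel tasks can begin at week 0, which is exactly the layering an analyst reads off a Gantt Chart.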
Gantt Charts that map a generic process can also be used to track data
about a more specific process as it is received. For example, the Gantt
Chart depicted in Figure 4.12 can be used as a template over which new
information about a specific group’s activities could be layered by using a
different color or line type. Layering in the specific data allows an analyst
to compare what is expected with the actual data. The chart can then be
used to identify and narrow gaps or anomalies in the data and even to
identify and challenge assumptions about what is expected or what is
happening. The analytic significance of considering such possibilities can
mean the difference between anticipating an attack and wrongly assuming
that a lack of activity means a lack of intent. The matrix illuminates the
gap and prompts the analyst to consider various explanations.
2. For a discussion of how best to get started when writing a paper, see
“How Should I Conceptualize My Product?” in Katherine Hibbs Pherson
and Randolph H. Pherson, Critical Thinking for Strategic Intelligence
(Washington, DC: CQ Press, 2013), 35–41.
Marilyn B. Peterson, Defense Intelligence Agency.
9. Tony Buzan, The Mind Map Book, 2nd ed. (London: BBC Books,
1995).
5 Idea Generation
New ideas, and the combination of old ideas in new ways, are essential
elements of effective analysis. Some structured techniques are specifically
intended for the purpose of eliciting or generating ideas at the very early
stage of a project, and they are the topic of this chapter.
Overview of Techniques
Structured Brainstorming
is not a group of colleagues just sitting around talking about a problem.
Rather, it is a group process that follows specific rules and procedures. It is
often used at the beginning of a project to identify a list of relevant
variables, driving forces, a full range of hypotheses, key players or
stakeholders, available evidence or sources of information, potential
solutions to a problem, or potential outcomes or scenarios—or, in law
enforcement, potential suspects or avenues of investigation. It requires
little training, and it is one of the most frequently used structured
techniques in the U.S. Intelligence Community. It is most helpful when
paired with a mechanism such as a wiki that allows analysts to capture and
record the results of their brainstorming. The wiki also allows participants
to refine or make additions to the brainstorming results even after the face-
to-face session has ended.
Virtual Brainstorming
is a way to use Structured Brainstorming when participants are in different
geographic locations. The absence of face-to-face contact has
disadvantages, but it also has advantages. The remote process can help
relieve some of the pressure analysts may feel from bosses or peers in a
face-to-face format. It can also increase productivity, because participants
can make their inputs on a wiki at their convenience without having to
read others’ ideas quickly while thinking about their own ideas and
waiting for their turn to speak. The wiki format—including the ability to
upload documents, graphics, photos, or videos, and for all participants to
share ideas on their own computer screens—allows analysts to capture and
track brainstorming ideas and return to them at a later date.
member of the group may dominate the meeting, that junior members may
be reluctant to speak up, or that the meeting may lead to heated debate.
Nominal Group Technique encourages equal participation by requiring
participants to present ideas one at a time in round-robin fashion until all
participants feel that they have run out of ideas.
Starbursting
is a form of brainstorming that focuses on generating questions rather than
answers. To help in defining the parameters of a research project, use
Starbursting to identify the questions that need to be answered. Questions
start with the words Who, What, How, When, Where, and Why.
Brainstorm each of these words, one at a time, to generate as many
questions as possible about the research topic.
Cross-Impact Matrix
is a technique that can be used after any form of brainstorming session that
identifies a list of variables relevant to a particular analytic project. The
results of the brainstorming session are put into a matrix, which is used to
guide a group discussion that systematically examines how each variable
influences all other variables to which it is judged to be related in a
particular problem context. The group discussion is often a valuable
learning experience that provides a foundation for further collaboration.
Results of cross-impact discussions can be maintained in a wiki for
continuing reference.
Morphological Analysis
is useful for dealing with complex, nonquantifiable problems for which
little data are available and the chances for surprise are significant. It is a
generic method for systematically identifying and considering all possible
relationships in a multidimensional, highly complex, usually
nonquantifiable problem space. It helps prevent surprises by generating a
large number of outcomes for any complex situation, thus reducing the
chance that events will play out in a way that the analyst has not
previously imagined and has not at least considered. Training and practice
are required before this method is used, and a facilitator experienced in
Morphological Analysis may be necessary.
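The core of the method, enumerating every combination of values across a problem's dimensions, can be sketched in a few lines. The dimensions and values below are invented for illustration:

```python
from itertools import product

# Each dimension of the problem space and its possible values (hypothetical).
dimensions = {
    "actor":  ["lone individual", "small cell", "large organization"],
    "method": ["cyber", "physical", "financial"],
    "target": ["infrastructure", "people", "information"],
}

# The Cartesian product yields every outcome in the morphological space.
combinations = [dict(zip(dimensions, values)) for values in product(*dimensions.values())]
# 3 x 3 x 3 = 27 distinct outcomes to review for plausibility and surprise
```

In practice the analytic work lies in defining the dimensions well and then pruning implausible combinations; the enumeration itself simply guarantees that no combination goes unexamined.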
Quadrant Crunching™
is an application of Morphological Analysis that uses key assumptions and
their opposites as a starting point for systematically generating a large
number of alternative outcomes. Two versions of the technique have been
developed: Classic Quadrant Crunching™ to avoid surprise, and Foresight
Quadrant Crunching™ to develop a comprehensive set of potential
alternative futures. For example, an analyst might use Classic Quadrant
Crunching™ to identify the many different ways that a terrorist might
attack a water supply. An analyst would use Foresight Quadrant
Crunching™ to generate multiple scenarios of how the conflict in Syria
might evolve over the next five years.
People sometimes say they are brainstorming when they sit with a few colleagues or even by themselves and try to come up with relevant ideas. That is not what true Brainstorming or Structured Brainstorming is about. To be successful, brainstorming must be focused on a specific issue and must generate a final written product. The Structured Brainstorming technique requires a facilitator, in part because participants cannot talk during the brainstorming session.
The advent of collaborative tools such as wikis has helped bring structure
to the brainstorming process. Whether brainstorming begins with a face-to-
face session or is done entirely online, the collaborative features of wikis
can facilitate the analytic process by obliging analysts to capture results in
a sharable format that can be posted, understood by others, and refined for
future use. In addition, wikis amplify and extend the brainstorming process
—potentially improving the results—because each edit and the reasoning
for it is tracked, and disagreements or refinements can be openly discussed
and explicitly summarized on the wiki.
When to Use It
Structured Brainstorming is one of the most widely used analytic
techniques. Analysts use it at the beginning of a project to identify a list of
relevant variables, key driving forces, a full range of alternative
hypotheses, key players or stakeholders, available evidence or sources of
information, potential solutions to a problem, potential outcomes or
scenarios, or all the forces and factors that may come into play in a given
situation. For law enforcement, brainstorming is used to help identify
potential suspects or avenues of investigation. Brainstorming techniques
can be used again later in the analytic process to pull the team out of an
analytic rut, stimulate new investigative leads, or stimulate more creative
thinking.
Value Added
The stimulus for creativity comes from two or more analysts bouncing
ideas off each other. A brainstorming session usually exposes an analyst to
a greater range of ideas and perspectives than the analyst could generate
alone, and this broadening of views typically results in a better analytic
product. Intelligence analysts have found it particularly effective in
helping them overcome the intuitive traps of giving too much weight to
first impressions, allowing first-hand information to have too much
impact, expecting only marginal or incremental change, and ignoring the
significance of the absence of information.
The Method
The facilitator or group leader should present the focal question, explain
and enforce the ground rules, keep the meeting on track, stimulate
discussion by asking questions, record the ideas, and summarize the key
findings. Participants should be encouraged to express every idea that pops
into their heads. Even ideas that are outside the realm of the possible may
stimulate other ideas that are more feasible. The group should have at least
four and no more than twelve participants. Five to seven is an optimal
number; if there are more than twelve people, divide into two groups.
supervisors and those who could not attend) either by e-mail or,
preferably, a wiki. If there is a need to capture the initial
brainstorming results as a “snapshot in time,” simply upload the
results as a pdf or other word-processing document, but still allow the
brainstorming discussion to continue on the wiki.
✶ Write the question you are brainstorming on a whiteboard or easel.
The objective is to make the question as open ended as possible. For
example, Structured Brainstorming focal questions often begin with:
“What are all the (things/forces and factors/circumstances) that would
help explain . . . ?”
✶ Ask the participants to write down their responses to the question
on a sticky note and give it to the facilitator, who reads them out loud.
Participants are asked to capture the concept with a few key words
that will fit on the sticky note. Use Sharpie-type pens so everyone can
easily see what is written.
✶ The facilitator sticks all the sticky notes on the wall or a
whiteboard in random order as they are called out. All ideas are
treated the same. Participants are urged to build on one another’s
ideas. Usually there is an initial spurt of ideas followed by pauses as
participants contemplate the question. After five or ten minutes
expect a long pause of a minute or so. This slowing down suggests
that the group has “emptied the barrel of the obvious” and is now on
the verge of coming up with some fresh insights and ideas.
Facilitators should not talk during this pause, even if the silence is
uncomfortable.
✶ After a couple of two-minute pauses, facilitators conclude this
divergent stage of the brainstorming process. They then ask the group
or a subset of the group to go up to the board and arrange the sticky
notes into affinity groups (basically grouping the ideas by like
concept). The group should arrange the sticky notes in clusters, not in
rows or columns. The group should avoid putting the sticky notes into
obvious groups like “economic” or “political.” Group members
cannot talk while they are doing this. If one sticky note seems to
“belong” in more than one group, make a copy and place one sticky
note in each affinity group.
✶ If the group has many members, those who are not involved in
arranging the sticky notes should be asked to perform a different task.
The facilitator should make copies of several of the sticky notes that are considered outliers, either because they do not fit into any obvious group or because they evoked a lot of laughter when read aloud. One of these outliers is
given to each table or to each smaller group of participants, who then
are asked to explain how that outlier relates to the primary task.
✶ If the topic is sufficiently complex, ask a second small group to go
to the board and review how the first group arranged the sticky notes.
The second group cannot speak, but members are encouraged to
continue to rearrange the notes into a more coherent pattern.
✶ When all the sticky notes have been arranged, members of the
group at the board are allowed to converse among themselves to pick
a word or phrase that best describes each affinity grouping.
✶ Pay particular attention to outliers or sticky notes that do not
belong in a particular group. Such an outlier could either be useless
noise or, more likely, contain a gem of an idea that deserves further
elaboration as a theme. If outlier sticky notes were distributed earlier
in the exercise, ask each group to explain how that outlier is relevant
to the issue.
✶ To identify the potentially most useful ideas, the facilitator or
group leader should establish up to five criteria for judging the value
or importance of the ideas. If so desired, then use the Ranking,
Scoring, and Prioritizing technique, described in chapter 4, for voting
on or ranking or prioritizing ideas. Another option is to give each
participant ten votes. Tell them to come up to the whiteboard or easel
with a marker and place their votes. They can place ten votes on a
single sticky note or affinity group label, or one vote each on ten
sticky notes or affinity group labels, or any combination in between.
✶ Assess what you have accomplished, and what areas will need
more work or more brainstorming. Ask yourself, “What do I see now
that I did not see before?”
✶ Set priorities and decide on a work plan for the next steps in the
analysis.
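The ten-votes-per-participant tally described above is easy to sketch. The participant names, affinity-group labels, and vote allocations below are invented for illustration:

```python
from collections import Counter

# Each participant distributes exactly ten votes across sticky notes or
# affinity-group labels, in any combination.
votes = {
    "analyst_1": {"funding": 6, "logistics": 4},
    "analyst_2": {"funding": 3, "recruitment": 3, "logistics": 4},
    "analyst_3": {"recruitment": 10},
}

tally = Counter()
for allocation in votes.values():
    assert sum(allocation.values()) == 10  # enforce the ten-vote rule
    tally.update(allocation)

ranked = tally.most_common()  # affinity groups ordered by total votes
```

The resulting ranking gives the group an immediate, transparent reading of which ideas it judges most worth pursuing in the next phase of analysis.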
Origins of This Technique
Brainstorming was a creativity technique used by advertising agencies in
the 1940s. It was popularized in a book by advertising manager Alex
Osborn, Applied Imagination: Principles and Procedures of Creative
Problem Solving (New York: Scribner’s, 1953). There are many versions
of brainstorming. The description here is a combination of information
from Randolph H. Pherson, “Structured Brainstorming,” in Handbook of
Analytic Tools and Techniques (Reston, VA: Pherson Associates, LLC,
2007), and training materials used throughout the global intelligence
community.
When to Use It
Virtual Brainstorming is an appropriate technique to use for a panel of
outside experts or a group of personnel from the same organization who
are working in different locations. It is also appropriate for a group of
analysts working at several locations within a large, congested
metropolitan area, such as Washington, D.C., where distances and traffic
can cause a two-hour meeting to consume most of the day for some
participants.
Value Added
Virtual Brainstorming can be as productive as face-to-face Structured
Brainstorming. The productivity of face-to-face brainstorming is
constrained by what researchers on the subject call “production blocking.”
Participants have to wait their turn to speak. And they have to think about
what they want to say while trying to pay attention to what others are
saying. Something always suffers. What suffers depends upon how the
brainstorming session is organized. Often, it is the synergy that can be
gained when participants react to one another’s ideas. Synergy occurs
when an idea from one participant triggers a new idea in another
participant, an idea that otherwise may not have arisen. For many analysts,
synergistic thinking is the most fundamental source of gain from
brainstorming.
The Method
Virtual Brainstorming is usually a two-phase process. It begins with the divergent process of creating as many relevant ideas as possible.
The second phase is a process of convergence, when the ideas are sorted
into categories, weeded out, prioritized, or combined and molded into a
conclusion or plan of action. Software is available for performing these
common functions online. The nature of this second step will vary
depending on the specific topic and the goal of the brainstorming session.
Relationship to Other Techniques
See the discussions of Structured Brainstorming and Nominal Group
Technique and compare the relative advantages and possible weaknesses
of each.
When to Use It
NGT prevents the domination of a discussion by a single person. Use it
whenever there is concern that a senior officer or executive or an
outspoken member of the group will control the direction of the meeting
by speaking before anyone else. It is also appropriate to use NGT rather
than Structured Brainstorming if there is concern that some members may
not speak up, the session is likely to be dominated by the presence of one
or two well-known experts in the field, or the issue under discussion is
controversial and may provoke a heated debate. NGT can be used to
coordinate the initial conceptualization of a problem before the research
and writing stages begin. Like brainstorming, NGT is commonly used to
identify ideas (assumptions, hypotheses, drivers, causes, variables,
important players) that can then be incorporated into other methods.
Value Added
NGT can be used both to generate ideas and to provide backup support in a
decision-making process where all participants are asked to rank or
prioritize the ideas that are generated. If it seems desirable, all ideas and
votes can be kept anonymous. Compared with Structured Brainstorming,
which usually seeks to generate the greatest possible number of ideas—no
matter how far out they may be—NGT may focus on a limited list of
carefully considered opinions.
The Method
An NGT session starts with the facilitator asking an open-ended question
such as, “What factors will influence . . . ?” “How can we learn if . . . ?”
“In what circumstances might . . . happen?” “What should be included or
not included in this research project?” The facilitator answers any
questions about what is expected of participants and then gives participants
five to ten minutes to work privately to jot down on note cards their initial
ideas in response to the focal question. This part of the process is followed
by these steps:
✶ The facilitator calls on participants one at a time, each presenting a
single idea, continuing around the group until all ideas are
exhausted. If individuals have run out of ideas, they pass when called
upon for an idea, but they can participate again later if they have
another idea when their turn comes up again. The facilitator can also
be an active participant, writing down his or her own ideas. There is
no discussion until all ideas have been presented; however, the
facilitator can clarify ideas to avoid duplication.
✶ When no new ideas are forthcoming, the facilitator initiates a
group discussion to ensure that a common understanding exists of
what each idea means. The facilitator asks about each idea, one at a
time, in the order presented, but no argument for or against any idea
is allowed. At this time, the participants can expand or combine ideas,
but no change can be made to an idea without the approval of the
original presenter of the idea.
✶ Voting to rank or prioritize the ideas, as discussed in chapter 4, is
optional, depending upon the purpose of the meeting. When voting is
done, it is usually by secret ballot, although various voting procedures
may be used depending in part on the number of ideas and the
number of participants. It usually works best to employ a ratio of one
vote for every three ideas presented. For example, if the facilitator
lists twelve ideas, each participant is allowed to cast four votes. The
group can also decide to let participants give an idea more than one
vote. In this case, someone could give one idea three votes and
another idea only one vote. An alternative procedure is for each
participant to write what he or she considers the five best ideas on a 3
× 5 card. One might rank the ideas on a scale of 1 to 5, with five
points for the best idea, four points for the next best, down to one
point for the least favored idea. The cards are then passed to the
facilitator for tabulation and announcement of the scores. In such
circumstances, a second round of voting may be needed to rank the
top three or five ideas.
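The vote-allocation rule and tally described above can be sketched in a few lines of Python. This is only an illustration; the function names and sample ballots are our own, not part of the technique's formal definition.

```python
from collections import Counter

def votes_per_participant(num_ideas):
    """Rule of thumb from the text: one vote for every three ideas."""
    return num_ideas // 3

def tally_ngt_votes(ideas, ballots):
    """Tally secret-ballot votes from an NGT session.

    ideas   -- list of idea labels presented to the group
    ballots -- one list per participant naming the ideas voted for
               (repeats allowed if the group permits multiple votes
               for a single idea)
    """
    counts = Counter()
    for ballot in ballots:
        counts.update(ballot)
    # Rank ideas by vote count, highest first
    return sorted(ideas, key=lambda idea: -counts[idea])

# Twelve ideas listed -> each participant casts four votes
assert votes_per_participant(12) == 4
```

A second round restricted to the top-scoring ideas can reuse the same tally on a shorter list.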
Origins of This Technique
Nominal Group Technique was developed by A. L. Delbecq and A. H.
Van de Ven and first described in “A Group Process Model for Problem
Identification and Program Planning,” Journal of Applied Behavioral
Science VII (July–August, 1971): 466–491. The discussion of NGT here is
a synthesis of several sources: James M. Higgins, 101 Creative Problem
Solving Techniques: The Handbook of New Ideas for Business, rev. ed.
(Winter Park, FL: New Management Publishing Company, 2006);
American Society for Quality, http://www.asq.org/learn-about-quality/idea-creation-tools/overview/nominal-group.html; “Nominal
Group Technique,” http://syque.com/quality_tools/toolbook/NGT/ngt.htm;
and “Nominal Group Technique,”
www.mycoted.com/Nominal_Group_Technique.
5.4 Starbursting
Starbursting is a form of brainstorming that focuses on generating
questions rather than eliciting ideas or answers. It uses the six questions
commonly asked by journalists: Who? What? How? When? Where? and
Why?
When to Use It
Use Starbursting to help define your research project. After deciding on
the idea, topic, or issue to be analyzed, brainstorm to identify the questions
that need to be answered by the research. Asking the right questions is a
common prerequisite to finding the right answer.
The Method
The term “Starbursting” comes from the image of a six-pointed star. To
create a Starburst diagram, begin by writing one of the following six words
at each point of the star: Who, What, How, When, Where, and Why. Then
start the brainstorming session, using one of these words at a time to
generate questions about the topic. Usually it is best to discuss each
question in the order provided, in part because the order also approximates
how English sentences are constructed. Often only three or four of the
words are directly relevant to the intelligence question. For some words
(often When or Where) the answer may be a given and not require further
exploration.
Do not try to answer the questions as they are identified; just focus on
developing as many questions as possible. After generating questions that
start with each of the six words, ask the group either to prioritize the
questions to be answered or to sort the questions into logical categories.
Figure 5.4 is an example of a Starbursting diagram. It identifies questions
to be asked about a biological attack in a subway.
Source: A basic Starbursting diagram can be found at the MindTools
website:
www.mindtools.com/pages/article/worksheets/Starbursting.pdf. This
version was created by the authors.
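The grouping step of Starbursting can be sketched very simply. In this Python fragment the function name and sample question fragments are our own illustration; the code merely organizes brainstormed fragments under the six star points, skipping any word treated as a given.

```python
STAR_POINTS = ["Who", "What", "How", "When", "Where", "Why"]

def starburst(prompts):
    """Group brainstormed question fragments under each star point.

    prompts -- dict mapping a star word to question fragments; words
    with no entries are treated as a given and skipped, as the text
    suggests is often the case for When or Where.
    """
    diagram = {}
    for word in STAR_POINTS:
        questions = [f"{word} {fragment}?" for fragment in prompts.get(word, [])]
        if questions:
            diagram[word] = questions
    return diagram

# Illustrative fragments echoing the subway-attack example in Figure 5.4
questions = starburst({
    "Who": ["would launch such an attack"],
    "What": ["agent would be used"],
})
```

The resulting dictionary can then be prioritized or sorted into categories, as the method prescribes.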
5.5 Cross-Impact Matrix
When to Use It
The Cross-Impact Matrix is useful early in a project when a group is still
in a learning mode trying to sort out a complex situation. Whenever a
brainstorming session or other meeting is held to identify all the variables,
drivers, or players that may influence the outcome of a situation, the next
logical step is to use a Cross-Impact Matrix to examine the
interrelationships among each of these variables. A group discussion of
how each pair of variables interacts can be an enlightening learning
experience and a good basis on which to build ongoing collaboration. How
far one goes in actually filling in the matrix and writing up a description of
the impacts associated with each variable may vary depending upon the
nature and significance of the project. At times, just the discussion is
sufficient. Writing up the interactions with the summary for each pair of
variables can be done effectively in a wiki.
Value Added
When analysts are estimating or forecasting future events, they consider
the dominant forces and potential future events that might influence an
outcome. They then weigh the relative influence or likelihood of these
forces or events, often considering them individually without regard to
sometimes significant interactions that might occur. The Cross-Impact
Matrix provides a context for the discussion of these interactions. This
discussion often reveals that variables or issues once believed to be simple
and independent are, in reality, interrelated. The information sharing that
occurs during a small-group discussion of each potential cross-impact can
be an invaluable learning experience. For this reason alone, the Cross-
Impact Matrix is a useful tool that can be applied at some point in almost
any study that seeks to explain current events or forecast future outcomes.
The Method
Assemble a group of analysts knowledgeable on various aspects of the
subject. The group brainstorms a list of variables or events that would
likely have some effect on the issue being studied. The project coordinator
then creates a matrix and puts the list of variables or events down the left
side of the matrix and the same variables or events across the top.
The matrix is then used to consider and record the relationship between
each variable or event and every other variable or event. For example, does
the presence of Variable 1 increase or diminish the influence of Variables
2, 3, 4, etc.? Or does the occurrence of Event 1 increase or decrease the
likelihood of Events 2, 3, 4, etc.? If one variable does affect the other, the
positive or negative magnitude of this effect can be recorded in the matrix
by entering a large or small + or a large or small – in the appropriate cell
(or by making no marking at all if there is no significant effect). The
terminology used to describe each relationship between a pair of variables
or events is that it is “enhancing,” “inhibiting,” or “unrelated.”
The matrix shown in Figure 5.5 has six variables, with thirty possible
interactions. Note that the relationship between each pair of variables is
assessed twice, as the relationship may not be symmetric. That is, the
influence of Variable 1 on Variable 2 may not be the same as the impact of
Variable 2 on Variable 1. It is not unusual for a Cross-Impact Matrix to
have substantially more than thirty possible interactions, in which case
careful consideration of each interaction can be time consuming.
A cluster of variables that reinforce one another can lead to surprisingly rapid
changes in a predictable direction. On the other hand, for some problems it
may be sufficient simply to recognize that there is a relationship and the
direction of that relationship.
The depth of discussion and the method used for recording the results are
discretionary. Each depends upon how much you are learning from the
discussion, and that will vary from one application of this matrix to
another. If the group discussion of the likelihood of these variables or
events and their relationships to one another is a productive learning
experience, keep it going. If key relationships are identified that are likely
to influence the analytic judgment, fill in all cells in the matrix and take
good notes. If the group does not seem to be learning much, cut the
discussion short.
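The matrix bookkeeping described above might be coded as a rough sketch like the following; the variable labels and helper names are illustrative only.

```python
# Each cell records whether the row variable enhances (+), inhibits (-),
# or is unrelated to (blank) the column variable.
def make_cross_impact_matrix(variables):
    """Create an empty matrix; diagonal cells (a variable's effect on
    itself) are excluded."""
    return {(a, b): "" for a in variables for b in variables if a != b}

def record_impact(matrix, source, target, effect):
    """effect: '++', '+', '-', '--', or '' for no significant effect."""
    matrix[(source, target)] = effect

variables = ["V1", "V2", "V3", "V4", "V5", "V6"]
matrix = make_cross_impact_matrix(variables)
record_impact(matrix, "V1", "V2", "+")   # V1 mildly enhances V2
record_impact(matrix, "V2", "V1", "--")  # but V2 strongly inhibits V1

# Six variables give thirty possible interactions, as in Figure 5.5:
# each pair is assessed twice because influence may not be symmetric.
assert len(matrix) == 30
```

Filling in the asymmetric pair above shows why the relationship between each pair of variables must be assessed in both directions.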
Relationship to Other Techniques
Matrices as a generic technique with many types of applications are
discussed in chapter 4. The use of a Cross-Impact Matrix as described here
frequently follows some form of brainstorming at the start of an analytic
project to elicit the further assistance of other knowledgeable analysts in
exploring all the relationships among the relevant factors identified by the
brainstorming. It can be a good idea to build on the discussion of the
Cross-Impact Matrix by developing a visual Mind Map or Concept Map of
all the relationships.
5.6 Morphological Analysis
Morphological Analysis underlies several other techniques described in
this book, including Quadrant Crunching™ (this chapter), Multiple
Scenarios Generation (chapter 6), and Quadrant Hypothesis
Generation (chapter 7). Training and practice are required before using this
method, and the availability of a facilitator experienced in morphological
analysis is highly desirable.
When to Use It
Morphological Analysis is most useful for dealing with complex,
nonquantifiable problems for which little information is available and the
chances for surprise are great. It can be used, for example, to identify
possible variations of a threat, possible ways a crisis might occur between
two countries, possible ways a set of driving forces might interact, or the
full range of potential outcomes in any ambiguous situation.
Morphological Analysis is generally used early in an analytic project, as it
is intended to identify all the possibilities, not to drill deeply into any
specific possibility.
Value Added
By generating a comprehensive list of possible outcomes, analysts are in a
better position to identify and select those outcomes that seem most
credible or that most deserve attention. This list helps analysts and
decision makers focus on what actions need to be undertaken today to
prepare for events that could occur in the future. They can then take the
actions necessary to prevent or mitigate the effect of bad outcomes and
help foster better outcomes. The technique can also sensitize analysts to
High Impact/Low Probability developments, or “nightmare scenarios,”
which could have significant adverse implications for influencing policy or
allocation of resources.
The Method
Morphological analysis works through two common principles of
creativity techniques: decomposition and forced association. Start by
defining a set of key parameters or dimensions of the problem, and then
break down each of those dimensions further into relevant forms or states
or values that the dimension can assume—as in the example described
later in this section. Two dimensions can be visualized as a matrix and
three dimensions as a cube. In more complicated cases, multiple linked
matrices or cubes may be needed to break the problem down into all its
parts.
Example
Analysts are asked to assess how a terrorist attack on the water supply
might unfold. In the absence of direct information about specific terrorist
planning for such an attack, a group of analysts uses Structured
Brainstorming to identify the following key dimensions of the problem:
group, type of attack, target, and intended impact. For each dimension, the
analysts identify as many elements as possible. For example, the group
could be an outsider, an insider, or a visitor to a facility, while the target
could be drinking water, wastewater, or storm sewer runoff.
The analysts then array this data into a matrix, illustrated in Figure 5.6, and
begin to create as many permutations as possible using different
combinations of the matrix boxes. These permutations allow the analysts
to identify and consider multiple combinations for further exploration. One
possible scenario, shown in the matrix, is an outsider who carries out
multiple attacks on a treatment plant to cause economic disruption.
Another possible scenario is an insider who carries out a single attack on
drinking water to terrorize the population.
Source: 2009 Pherson Associates, LLC.
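The enumeration step of Morphological Analysis amounts to taking the Cartesian product of the dimension values. The sketch below paraphrases the water-supply example; the dimension values are illustrative, not an authoritative decomposition.

```python
from itertools import product

# Illustrative dimensions and values, loosely following Figure 5.6
dimensions = {
    "group": ["outsider", "insider", "visitor"],
    "type of attack": ["single", "multiple"],
    "target": ["drinking water", "wastewater", "storm runoff", "treatment plant"],
    "intended impact": ["mass casualties", "economic disruption",
                        "terrorize population"],
}

def enumerate_permutations(dims):
    """List every combination of one value per dimension."""
    names = list(dims)
    return [dict(zip(names, combo)) for combo in product(*dims.values())]

permutations = enumerate_permutations(dimensions)
# 3 x 2 x 4 x 3 = 72 candidate scenarios for the analysts to sift
assert len(permutations) == 72
```

Exhaustive enumeration is the point of the technique: the analysts then winnow this full list down to the combinations most deserving of attention.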
5.7 Quadrant Crunching™
Foresight Quadrant Crunching™ was developed in 2013 by Randy
Pherson. It adopts the same initial approach of flipping assumptions
as Classic Quadrant Crunching™ and then applies Multiple Scenarios
Generation to generate a wide range of comprehensive and mutually
exclusive future scenarios or outcomes of any type—many of which
have not previously been contemplated.
When to Use It
Both Quadrant Crunching™ techniques are useful for dealing with highly
complex and ambiguous situations for which few data are available and
the chances for surprise are great. Training and practice are required before
analysts use either technique, and it is highly recommended to engage an
experienced facilitator, especially when this technique is used for the first
time.
Value Added
These techniques reduce the potential for surprise by providing a
structured framework with which the analyst can generate an array of
alternative options or ministories. Classic Quadrant Crunching™ requires
analysts to identify and systematically challenge all their key assumptions
about how a terrorist attack might be launched or any other specific
situation might evolve. By critically examining each assumption and how
a contrary assumption might play out, analysts can better assess their level
of confidence in their predictions and the strength of their lead hypothesis.
Foresight Quadrant Crunching™ is a new technique for conducting
scenarios analysis; it is most effective when there is a strong consensus
that only a single future outcome is likely.
The Method
Classic Quadrant Crunching™
is sometimes described as a Key Assumptions Check on steroids. It is most
useful when there is a well-established lead hypothesis that can be
articulated clearly. Classic Quadrant Crunching™ calls on the analyst to
break down the lead hypothesis into its component parts, identifying the
key assumptions that underlie the lead hypothesis, or dimensions that
focus on Who, What, How, When, Where, and Why. Once the key
dimensions of the lead hypothesis are articulated, the analyst generates two
or four examples of contrary dimensions. For example, two contrary
dimensions for a single attack would be simultaneous attacks and
cascading attacks. The various contrary dimensions are then arrayed in sets
of 2 × 2 matrices. If four dimensions are identified for a particular topic,
the technique would generate six different 2 × 2 combinations of these four
dimensions (AB, AC, AD, BC, BD, and CD). Each of these pairs would be
presented as a 2 × 2 matrix with four quadrants. Different stories or
alternatives would be generated for each quadrant in each matrix. If 2
stories are imagined for each quadrant in each of these 2 × 2 matrices, a
total of 48 different ways the situation could evolve will have been
generated. Similarly, if six drivers are identified, the technique will
generate as many as 120 different stories to consider (see Figure 5.7a).
Once a rich array of potential alternatives has been generated, the analyst’s
task is to identify which of the various alternative stories are the most
deserving of attention. The last step in the process is to develop lists of
indicators for each story in order to track whether a particular story is
beginning to emerge.
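The story arithmetic above follows directly from counting pairs of dimensions; a minimal Python sketch, with an illustrative function name:

```python
from itertools import combinations

def count_stories(num_dimensions, stories_per_quadrant=2):
    """Count stories generated by pairing contrary dimensions.

    Each pair of dimensions forms one 2 x 2 matrix with four
    quadrants, and a few stories are imagined for each quadrant.
    """
    num_matrices = len(list(combinations(range(num_dimensions), 2)))
    return num_matrices * 4 * stories_per_quadrant

# Four dimensions -> six 2 x 2 matrices (AB, AC, AD, BC, BD, CD)
# -> 48 stories; six dimensions -> fifteen matrices -> 120 stories.
assert count_stories(4) == 48
assert count_stories(6) == 120
```

The rapid growth in story count is why the last steps of the method focus on winnowing the alternatives with preestablished criteria.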
✶ State the conventional wisdom for the most likely way this
terrorist attack might be launched. For example, “Al-Qaeda or its
affiliates will contaminate the water supply for a large metropolitan
area, causing mass casualties.”
✶ Break down this statement into its component parts or key
assumptions. For example, the statement makes five key assumptions:
(1) a single attack, (2) involving drinking water, (3) conducted by an
outside attacker, (4) against a major metropolitan area, (5) that causes
large numbers of casualties.
✶ Posit a contrary assumption for each key assumption. For example,
what if there are multiple attacks instead of a single attack?
✶ Identify two or four dimensions of that contrary assumption. For
example, what are different ways a multiple attack could be
launched? Two possibilities would be simultaneous attacks (as in the
September 2001 attacks on the World Trade Center and the Pentagon
or the London bombings in 2005) or cascading attacks (as in the
sniper killings in the Washington, D.C., area in October 2002).
✶ Repeat this process for each of the key dimensions. Develop two
or four contrary dimensions for each contrary assumption. (See
Figure 5.7b.)
✶ Array pairs of contrary dimensions into sets of 2 × 2 matrices. In
this case, ten different 2 × 2 matrices would be created. Two of the
ten matrices are shown in Figure 5.7c.
✶ For each cell in each matrix generate one to three examples of how
terrorists might launch an attack. In some cases, such an attack might
already have been imagined. In other quadrants there may be no
credible attack concept. But several of the quadrants will usually
stretch the analysts’ thinking, pushing them to think about the
dynamic in new and different ways.
✶ Review all the attack plans generated; using a preestablished set of
criteria, select those most deserving of attention. In this example,
possible criteria might be those plans that are most likely to:
– Cause the most damage; have the most impact.
– Be the hardest to detect or prevent.
– Pose the greatest challenge for consequence management.
✶ This process is illustrated in Figure 5.7d. In this case, three attack
plans were selected as the most likely. Attack plan 1 became Story A,
attack plans 4 and 7 were combined to form Story B, and attack plan
16 became Story C. It may also be desirable to select one or two
additional attack plans that might be described as “wild cards” or
“nightmare scenarios.” These are attack plans that have a low
probability of being tried but are worthy of attention because their
impact would be substantial if they did occur. The figure shows attack
plan 11 as a nightmare scenario.
✶ Consider what decision makers might do to prevent bad stories
from happening, mitigate their impact, and deal with their
consequences.
✶ Generate a list of key indicators to help assess which, if any, of
these attack plans is beginning to emerge.
Figure 5.7b Terrorist Attacks on Water Systems: Flipping Assumptions
Source: 2013 Pherson Associates, LLC.
lead scenario. By including the lead scenario, the final set of alternative
scenarios or futures should be comprehensive and mutually exclusive. The
specific steps for Foresight Quadrant Crunching™ are:
✶ State what most analysts believe is the most likely future scenario.
✶ Break down this statement into its component parts or key
assumptions.
✶ Posit a contrary assumption for each key assumption.
✶ Identify one or three contrary dimensions of that contrary
assumption.
✶ Repeat this process for each of the contrary assumptions—a
process similar to that shown in Figure 5.7b.
✶ Add the key assumption to the list of contrary dimensions, creating
either one or two pairs.
✶ Repeat this process for each row, creating one or two pairs
including a key assumption and one or three contrary dimensions.
✶ Array these pairs into sets of 2 × 2 matrices, a process similar to
that shown in Figure 5.7c.
✶ For each cell in each matrix generate one to three credible
scenarios. In some cases, such a scenario may already have been
imagined. In other quadrants, there may be no scenario that makes
sense. But several of the quadrants will usually stretch the analysts’
thinking, often generating counterintuitive scenarios.
✶ Review all the scenarios generated—a process similar to that
shown in Figure 5.7d; using a preestablished set of criteria, select
those scenarios most deserving of attention. The difference is that
with Classic Quadrant Crunching™, analysts are seeking to develop a
set of credible alternative attack plans to avoid surprise. In Foresight
Quadrant Crunching™, analysts are engaging in a new version of
multiple scenarios analysis.
Origins of This Technique
Classic Quadrant Crunching™ was developed by Randy Pherson and Alan
Schwartz to meet a specific analytic need in the counterterrorism arena. It
was first published in Randolph H. Pherson, Handbook of Analytic Tools
and Techniques (Reston, VA: Pherson Associates, LLC, 2008). Foresight
Quadrant Crunching™ was developed by Randy Pherson in 2013 as a new
method for conducting multiple scenarios analysis.
1. Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus
and Giroux, 2011), 95–96.
6 Scenarios and Indicators
warning of the direction in which the future is heading, but these early
signs are not obvious. Indicators take on meaning only in the context of a
specific scenario with which they have been identified. The prior
identification of a scenario and associated indicators can create an
awareness that prepares the mind to recognize early signs of significant
change. Indicators are particularly useful in helping overcome Hindsight
Bias because they generate objective, preestablished lists that can be used
to capture an analyst’s actual thought process at an earlier stage of the
analysis. Similarly, indicators can mitigate the cognitive bias of assuming
something is inevitable if the indicators that the analyst had expected to
emerge are not actually realized.
Overview of Techniques
Scenarios Analysis
identifies the multiple ways in which a situation might evolve. This form
of analysis can help decision makers develop plans to exploit whatever
opportunities might arise or avoid whatever risks the future may hold. Four
different techniques for generating scenarios are described in this chapter,
and a fifth was described previously in chapter 5, along with guidance on
when and how to use each technique. Simple Scenarios is a quick and easy
way for either an individual analyst or a small group of analysts to
generate scenarios. It starts with the current analytic line and then explores
other alternatives. The Cone of Plausibility works well with a small group
of experts who define a set of key drivers, establish a baseline scenario,
and then modify the drivers to create plausible alternative scenarios and
wild cards. Alternative Futures Analysis is a more systematic and
imaginative procedure that uses a group of experts, often including
decision makers and a trained facilitator. With some additional effort,
Multiple Scenarios Generation can handle a much larger number of
scenarios than Alternative Futures Analysis. It also requires a facilitator,
but the use of this technique can greatly reduce the chance that events
could play out in a way that was not at least foreseen as a possibility. A
fifth technique, Foresight Quadrant Crunching™, was developed in 2013
by Randy Pherson as a variation on Multiple Scenarios Generation and the
Key Assumptions Check. It adopts a different approach to generating
scenarios by flipping assumptions, and it is described along with its
companion technique, Classic Quadrant Crunching™, in chapter 5.
Indicators
are a classic technique used to provide early warning of some future event
or validate what is being observed. Indicators are often paired with
scenarios to identify which of several possible scenarios is developing.
They are also used to measure change toward an undesirable condition,
such as political instability, or a desirable condition, such as economic
reform. Use Indicators whenever you need to track a specific situation to
monitor, detect, or evaluate change over time. The indicator list becomes
the basis for directing collection efforts and for routing relevant
information to all interested parties. It can also serve as the basis for your
filing system to track emerging developments.
Indicators Validation
is a process that helps analysts assess the diagnostic power of an indicator.
An indicator is most diagnostic when it clearly points to the likelihood of
only one scenario or hypothesis and suggests that the others are not likely.
Too frequently indicators are of limited value because they may be
consistent with several different outcomes or hypotheses.
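A simple way to think about diagnosticity is to ask how many scenarios an indicator is consistent with. The following sketch is our own illustration of that idea, not the Indicators Validator™ procedure itself.

```python
def diagnosticity(indicator_scores):
    """Rough diagnosticity check for a single indicator.

    indicator_scores -- dict mapping scenario name to whether the
    indicator is consistent with that scenario (True/False). An
    indicator consistent with only one scenario is highly diagnostic;
    one consistent with every scenario has no discriminating value.
    """
    consistent = [s for s, ok in indicator_scores.items() if ok]
    if len(consistent) == 1:
        return "diagnostic"
    if len(consistent) == len(indicator_scores):
        return "non-diagnostic"
    return "partially diagnostic"

assert diagnosticity({"A": True, "B": False, "C": False}) == "diagnostic"
assert diagnosticity({"A": True, "B": True, "C": True}) == "non-diagnostic"
```

Running every candidate indicator through such a check quickly surfaces the ones that are too ambiguous to be worth tracking.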
6.1 Scenarios Analysis
When to Use It
Scenarios Analysis is most useful when a situation is complex or when the
outcomes are too uncertain to trust a single prediction. When decision
makers and analysts first come to grips with a new situation or challenge, a
degree of uncertainty always exists about how events will unfold. At this
point when national policies or long-term corporate strategies are in the
initial stages of formulation, Scenarios Analysis can have a strong impact
on decision makers’ thinking.
Scenarios do not predict the future, but a good set of scenarios bounds the
range of possible futures for which a decision maker may need to be
prepared. Scenarios Analysis can also be used as a strategic planning tool
that brings decision makers and stakeholders together with experts to
envisage the alternative futures for which they must plan.2
The amount of time and effort required depends upon the specific
technique used. Simple Scenarios and Cone of Plausibility can be used by
an analyst working alone without any technical or methodological support,
although a group effort is preferred for most structured techniques. The
time required for Alternative Futures Analysis and Multiple Scenarios
Generation varies, but it can require a team of experts spending several
days working together on a project. Engaging a facilitator who is
knowledgeable about scenarios analysis is highly recommended, as this
will definitely save time and produce better results.
Value Added
When analysts are thinking about scenarios, they are rehearsing the future
so that decision makers can be prepared for whatever direction that future
takes. Instead of trying to estimate the most likely outcome and being
wrong more often than not, scenarios provide a framework for considering
multiple plausible futures. Trying to divine or predict a single outcome can
be a disservice to senior policy officials, decision makers, and other valued
clients. Generating several scenarios helps focus attention on the key
underlying forces and factors most likely to influence how a situation will
develop. Scenarios also can be used to examine assumptions and deliver
useful warning messages when high impact/low probability scenarios are
included in the exercise.
assumptions and consider plausible “wild card” scenarios or
discontinuous events.
✶ Produce an analytic framework for calculating the costs, risks, and
opportunities represented by different outcomes.
✶ Provide a means for weighing multiple unknown or unknowable
factors and presenting a set of plausible outcomes.
✶ Stimulate thinking about opportunities that can be exploited.
✶ Bound a problem by identifying plausible combinations of
uncertain factors.
When analysts are required to look well into the future, they usually find it
extremely difficult to do a simple, straight-line projection given the high
number of variables that need to be considered. By changing the “analytic
lens” through which the future is to be foretold, analysts are also forced to
reevaluate their assumptions about the priority order of key factors driving
the issue. By pairing the key drivers to create sets of mutually exclusive
scenarios, the technique helps analysts think about the situation from
sometimes counterintuitive perspectives, often generating several
unexpected and dramatically different potential future worlds. By
engaging in a multifaceted and systematic examination of an issue,
analysts also create a more comprehensive set of alternative futures. This
enables them to maintain records about each alternative and track the
potential for change, thus gaining greater confidence in their overall
assessment.
Potential Pitfalls
Scenarios Analysis exercises will most likely fail if the group conducting
the exercise is not highly diverse, including representatives from a variety
of disciplines, organizations, and even cultures to avoid the trap of
groupthink. Another frequent pitfall is when participants in a scenarios
exercise have difficulty thinking outside of their comfort zone, resisting
instructions to look far into the future or to explore or suggest concepts
that do not fall within their area of expertise. Techniques that have worked
well to pry exercise participants out of such analytic ruts are (1) to define a
time period for the estimate (such as five or ten years) that one cannot
easily extrapolate to from current events, or (2) to post a list of concepts or
categories (such as social, economic, environmental, political, and
technical) to stimulate thinking about an issue from different perspectives.
The analysts involved in the process should also have a thorough
understanding of the subject matter and possess the conceptual skills
necessary to select the key drivers and assumptions that are likely to
remain valid throughout the period of the assessment.
The Method: Simple Scenarios
✶ Clearly define the focal issue and the specific goals of the futures
exercise.
✶ Make a list of forces, factors, and events that are likely to
influence the future.
✶ Organize the forces, factors, and events that are related to one
another into five to ten affinity groups that are expected to be the
driving forces in how the focal issue will evolve.
✶ Label each of these drivers and write a brief description of each.
For example, one training exercise for this technique is to forecast the
future of the fictional country of Caldonia by identifying and
describing six drivers. Generate a matrix, as shown in Figure 6.1.1,
with a list of drivers down the left side. The columns of the matrix are
used to describe scenarios. Each scenario is assigned a value for each
driver. The values are strong or positive (+), weak or negative (–),
and blank if neutral or no change.
– Government effectiveness: To what extent does the
government exert control over all populated regions of the
country and effectively deliver services?
– Economy: Does the economy sustain a positive growth rate?
– Civil society: Can nongovernmental and local institutions
provide appropriate services and security to the population?
– Insurgency: Does the insurgency pose a viable threat to the
government? Is it able to extend its dominion over greater
portions of the country?
– Drug trade: Is there a robust drug-trafficking economy?
– Foreign influence: Do foreign governments, international
financial organizations, or nongovernmental organizations
provide military or economic assistance to the government?
✶ Generate at least four different scenarios (a best case, a worst
case, a mainline, and at least one other) by assigning different values
(+, –, or blank) to each driver.
✶ Reconsider the list of drivers and the selected scenarios. Is there a
better way to conceptualize and describe the drivers? Are there
important forces that have not been included? Look across the matrix
to see the extent to which each driver discriminates among the
scenarios. If a driver has the same value across all scenarios, it is not
discriminating and should be deleted. To stimulate thinking about
other possible scenarios, consider the key assumptions that were
made in deciding on the most likely scenario. What if some of these
assumptions turn out to be invalid? If they are invalid, how might that
affect the outcome, and are such outcomes included within the
available set of scenarios?
✶ For each scenario, write a one-page story to describe what that
future looks like and/or how it might come about. The story should
illustrate the interplay of the drivers.
✶ For each scenario, describe the implications for the decision
maker.
✶ Generate and validate a list of indicators, or “observables,” for
each scenario that would help you discover that events are starting to
play out in a way envisioned by that scenario.
✶ Monitor the list of indicators on a regular basis.
✶ Report periodically on which scenario appears to be emerging and
why.
Source: 2009 Pherson Associates, LLC.
The steps in using the technique are as follows (see Figure 6.1.2):
analysis of key drivers. One of the most common is PEST, which
signifies political, economic, social, or technological variables. Other
analysts have added legal, military, environmental, psychological, or
demographic factors to form abbreviations such as STEEP,
STEEPLE, STEEPLED, PESTLE, or STEEP + 2.4
✶ The drivers should be written as neutral statements and should be
expected to generally remain valid throughout the period of the
assessment. For example, write “the economy,” not “declining
economic growth.” Be sure you are listing true drivers and not just
describing important players or factors relevant to the situation. The
technique works best when four to seven drivers are generated.
✶ Make assumptions about how the drivers are most likely to play
out over the time frame of the assessment. Be as specific as possible;
for example, say “the economy will grow 2 to 4 percent annually over
the next five years,” not simply that “the economy will improve.”
Generate only one assumption per driver.
✶ Generate a baseline scenario based on the list of key drivers and
key assumptions. This is often a projection from the current situation
forward adjusted by the assumptions you are making about future
behavior. The scenario assumes that the drivers and their descriptions
will remain valid throughout the period. Write the scenario as a future
that has come to pass and describe how it came about. Construct one
to three alternative scenarios by changing an assumption or several of
the assumptions that you made in your initial list. Often it is best to
start by looking at those assumptions that appear least likely to
remain true. Consider the impact that change is likely to have on the
baseline scenario and describe this new end point and how it came
about. Also consider what impact changing one assumption would
have on the other assumptions on the list.
✶ We would recommend making at least one of these alternative
scenarios an opportunities scenario, illustrating how a positive
outcome that is significantly better than the current situation could
plausibly be achieved. Often it is also desirable to develop a scenario
that captures the full extent of the downside risk.
✶ Generate a possible wild card scenario by radically changing the
assumption that you judge as the least likely to change. This should
produce a High Impact/Low Probability scenario (see chapter 9) that
might not otherwise have been considered.
6.1.3 The Method: Alternative Futures Analysis
Alternative Futures Analysis and Multiple Scenarios Generation (the next
technique described below) differ from the first two techniques in that they
are usually larger projects that rely on a group of experts, often including
decision makers, academics, and other outside experts. They use a more
systematic process, and usually require the assistance of a knowledgeable
facilitator.
✶ Clearly define the focal issue and the specific goals of the futures
exercise.
✶ Brainstorm to identify the key forces, factors, or events that are
most likely to influence how the issue will develop over a specified
time period.
✶ If possible, group these various forces, factors, or events to form
two critical drivers that are expected to determine the future outcome.
In the example on the future of Cuba (Figure 6.1.3), the two key
drivers are Effectiveness of Government and Strength of Civil
Society. If there are more than two critical drivers, do not use this
technique—use the Multiple Scenarios Generation technique, which
can handle a larger number of scenarios.
✶ As in the Cuba example, define the two ends of the spectrum for
each driver.
✶ Draw a 2 × 2 matrix. Label the two ends of the spectrum for each
driver.
✶ Note that the square is now divided into four quadrants. Each
quadrant represents a scenario generated by a combination of the two
drivers. Now give a name to each scenario, and write it in the relevant
quadrant.
✶ Generate a narrative story of how each hypothetical scenario might
come into existence. Include a hypothetical chronology of key dates
and events for each scenario.
✶ Describe the implications of each scenario, should it be what
actually develops.
✶ Generate and validate a list of indicators, or “observables,” for
each scenario that would help determine whether events are starting
to play out in a way envisioned by that scenario.
✶ Monitor the list of indicators on a regular basis.
✶ Report periodically on which scenario appears to be emerging and
why.
Source: 2009 Pherson Associates, LLC.
✶ Clearly define the focal issue and the specific goals of the futures
exercise.
✶ Brainstorm to identify the key forces, factors, or events that are
most likely to influence how the issue will develop over a specified
time period.
✶ Define the two ends of the spectrum for each driver.
✶ Pair the drivers in a series of 2 × 2 matrices.
✶ Develop a story or two for each quadrant of each 2 × 2 matrix.
✶ From all the scenarios generated, select those most deserving of
attention because they illustrate compelling and challenging futures
not yet being considered.
✶ Develop and validate indicators for each scenario that could be
tracked to determine whether or not the scenario is developing.
✶ Report periodically on which scenario appears to be emerging and
why.
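The driver-pairing step above can be sketched programmatically. The drivers and spectrum endpoints below are hypothetical examples; the point is only to show how n drivers yield n-choose-2 matrices of four candidate scenarios each.

```python
# Sketch of the driver-pairing step in Multiple Scenarios Generation.
# Driver names and spectrum endpoints are hypothetical examples.
from itertools import combinations, product

# Each driver is defined by the two ends of its spectrum.
DRIVERS = {
    "Government effectiveness": ("effective", "ineffective"),
    "Strength of civil society": ("strong", "weak"),
    "Foreign influence": ("high", "low"),
}

def pairwise_matrices(drivers):
    """Pair the drivers into 2x2 matrices; each matrix yields four
    candidate scenarios, one per quadrant."""
    matrices = []
    for (d1, ends1), (d2, ends2) in combinations(drivers.items(), 2):
        quadrants = [{d1: e1, d2: e2} for e1, e2 in product(ends1, ends2)]
        matrices.append(((d1, d2), quadrants))
    return matrices

for (d1, d2), quadrants in pairwise_matrices(DRIVERS):
    print(f"{d1} x {d2}:")
    for quadrant in quadrants:
        print("  ", ", ".join(f"{k}: {v}" for k, v in quadrant.items()))
```

With three drivers this produces three 2 × 2 matrices and twelve candidate scenarios, which is why the method then asks the group to select only those futures most deserving of attention.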
neighboring states but ineffective internal security capabilities. (See
Figure 6.1.4b.) In this “world,” one might imagine a regional defense
umbrella that would help to secure the borders. Another possibility is
that the neighboring states would have the Shiites and Kurds under
control, with the only remaining insurgents being Sunnis, who continue
to harass the Shia-led central government.
✶ Review all the stories generated, and select those most deserving
of attention. For example, which scenario
– Presents the greatest challenges to Iraqi and U.S. decision
makers?
– Raises particular concerns that have not been anticipated?
– Surfaces new dynamics that should be addressed?
– Suggests new collection needs?
✶ Select a few scenarios that might be described as “wild cards”
(low probability/high impact developments) or “nightmare scenarios”
(see Figure 6.1.4c).
✶ Consider what decision makers might do to prevent bad scenarios
from occurring or enable good scenarios to develop.
✶ Generate and validate a list of key indicators to help monitor
which scenario story best describes how events are beginning to play
out.
✶ Report periodically on which scenario appears to be emerging and
why.
Figure 6.1.4b Future of the Iraq Insurgency: Using Spectrums to Define
Potential Outcomes
flipping assumptions. It and its companion technique, Classic Quadrant
Crunching™, are described in detail in chapter 5. All three techniques are
specific applications of Morphological Analysis, also described in chapter
5.
expected to the wildcard, in forms that are analytically coherent
and imaginatively engaging. A good scenario grabs us by the collar
and says, ‘Take a good look at this future. This could be your
future. Are you going to be ready?’”
6.2 Indicators
Indicators are observable phenomena that can be periodically reviewed to
help track events, spot emerging trends, and warn of unanticipated
changes. An indicator list is a preestablished set of observable or
potentially observable actions, conditions, facts, or events whose
simultaneous occurrence would argue strongly that a phenomenon is
present or is highly likely to occur. Indicators can be monitored to obtain
tactical, operational, or strategic warnings of some future development
that, if it were to occur, would have a major impact.
When to Use It
Indicators provide an objective baseline for tracking events, instilling rigor
into the analytic process, and enhancing the credibility of the final product.
The indicator list can become the basis for conducting an investigation or
directing collection efforts and routing relevant information to all
interested parties. It can also serve as the basis for the analyst’s filing
system to track how events are developing.
happen. This is particularly appropriate when conducting a What If?
Analysis or a High Impact/Low Probability Analysis.
Descriptive indicators are best used to help the analyst assess whether
there are sufficient grounds to believe that a specific action is taking place.
They provide a systematic way to validate a hypothesis or help
substantiate an emerging viewpoint. Figure 6.2a is an example of a list of
descriptive indicators, in this case pointing to a clandestine drug
laboratory.
Source: Pamphlet from ALERT Unit, New Jersey State Police, 1990;
republished in The Community Model, Counterdrug Intelligence
Coordinating Group, 2003.
Value Added
The human mind sometimes sees what it expects to see and can overlook
the unexpected. Identification of indicators creates an awareness that
prepares the mind to recognize early signs of significant change. Change
often happens so gradually that analysts do not see it, or they rationalize it
as not being of fundamental importance until it is too obvious to ignore.
Once analysts take a position on an issue, they can be reluctant to change
their minds in response to new evidence. By specifying in advance the
threshold for what actions or events would be significant and might cause
them to change their minds, analysts can seek to avoid this type of
rationalization.
Defining explicit criteria for tracking and judging the course of events
makes the analytic process more visible and available for scrutiny by
others, thus enhancing the credibility of analytic judgments. Including an
indicator list in the finished product helps decision makers track future
developments and builds a more concrete case for the analytic conclusions.
When analysts or decision makers are sharply divided over (1) the
interpretation of events (for example, political dynamics in Iran or how the
conflict in Syria is progressing), (2) the guilt or innocence of a “person of
interest,” or (3) the culpability of a counterintelligence suspect, indicators
can help depersonalize the debate by shifting attention away from personal
viewpoints to more objective criteria. Emotions often can be diffused and
substantive disagreements clarified if all parties agree in advance on a set
of criteria that would demonstrate that developments are—or are not—
moving in a particular direction or that a person’s behavior suggests that
he or she is guilty as suspected or is indeed a spy.
The Method
✶ The first step in using this technique is to create a list of indicators.
Developing the indicator list can range from a simple process to a
sophisticated team effort. For example, with minimum effort you
could jot down a list of things you would expect to see if a particular
situation were to develop as feared or foreseen. Or you could join
with others to define multiple variables that would influence a
situation and then rank the value of each variable based on incoming
information about relevant events, activities, or official statements. In
both cases, some form of brainstorming, hypothesis generation, or
scenario development is often used to identify the indicators.
✶ Review and refine the list, discarding any indicators that are
duplicative and combining those that are similar. (See Figure 6.2b for
a sample indicator list.)
✶ Examine each indicator to determine whether it meets the following
five criteria, and discard those that are found wanting. The criteria
are arranged along a continuum from essential to highly desirable:
– Observable and collectible: There must be some reasonable
expectation that, if present, the indicator will be observed and
reported by a reliable source. If an indicator is to monitor change
over time, it must be collectible over time.
– Valid: An indicator must be clearly relevant to the end state
the analyst is trying to predict or assess, and it must be
inconsistent with all or at least some of the alternative
explanations or outcomes. It must accurately measure the
concept or phenomenon at issue.
– Reliable: Data collection must be consistent when
comparable methods are used. Those observing and collecting
data must observe the same things. Reliability requires precise
definition of the indicators.
– Stable: An indicator must be useful over time to allow
comparisons and to track events. Ideally, the indicator should be
observable early in the evolution of a development so that
analysts and decision makers have time to react accordingly.
– Unique: An indicator should measure only one thing and, in
combination with other indicators, point only to the phenomenon
being studied. Valuable indicators are those that are not only
consistent with a specified scenario or hypothesis but are also
inconsistent with alternative scenarios or hypotheses. The
Indicators Validator™ tool, described later in this chapter, can
be used to check the diagnosticity of indicators.
✶ Monitor the indicator lists regularly to chart trends and detect
signs of change.
Source: 2009 Pherson Associates, LLC.
consistent with two or more scenarios or hypotheses. Therefore, an analyst
should prepare separate lists of indicators for each scenario or hypothesis.
For example, consider indicators of an opponent’s preparations for a
military attack where there may be three hypotheses—no attack, attack,
and feigned intent to attack with the goal of forcing a favorable negotiated
solution. Almost all indicators of an imminent attack are also consistent
with the hypothesis of a feigned attack. The analyst must identify
indicators capable of diagnosing the difference between true intent to
attack and feigned intent to attack. The mobilization of reserves is such a
diagnostic indicator. It is so costly that it is not usually undertaken unless
there is a strong presumption that the reserves will be needed.
After creating the indicator list or lists, the analyst or analytic team should
regularly review incoming reporting and note any changes in the
indicators. To the extent possible, the analyst or the team should decide
well in advance which critical indicators, if observed, will serve as early-
warning decision points. In other words, if a certain indicator or set of
indicators is observed, it will trigger a report advising of some
modification in the analysts’ appraisal of the situation.
Potential Pitfalls
The quality of indicators is critical, as poor indicators lead to analytic
failure. For these reasons, analysts must periodically review the validity
and relevance of an indicator list. Narrowly conceived or outdated
indicators can reinforce analytic bias, encourage analysts to discard new
evidence, and lull consumers of the information into complacency. Indicators
can also prove to be invalid over time, or they may turn out to be poor
“pointers” to what they were supposed to show. By regularly checking the
validity of the indicators, analysts may also discover that their original
assumptions were flawed. Finally, if an opponent learns what indicators
are on your list, the opponent may make operational changes to conceal
what you are looking for or arrange for you to see contrary indicators.
Relationship to Other Techniques
Indicators are closely related to a number of other techniques. Some form
of brainstorming is commonly used to draw on the expertise of multiple
analysts with different perspectives and specialties when creating
indicators. The development of alternative scenarios should always
involve the development and monitoring of indicators that point toward
which scenario is evolving. What If? Analysis and High Impact/Low
Probability Analysis depend upon the development and use of indicators.
Indicators are often entered as items of relevant information in
Te@mACH®, as discussed in chapter 7 on Hypothesis Generation and
Testing.
Techniques (Reston, VA: Pherson Associates, LLC, 2008); and Pherson,
The Indicators Handbook (Reston, VA: Pherson Associates, LLC, 2008).
Cynthia M. Grabo’s book Anticipating Surprise: Analysis for Strategic
Warning (Lanham, MD: University Press of America, 2004) is a classic
text on the development and use of indicators.
When to Use It
The Indicators Validator™ is an essential tool to use when developing
indicators for competing hypotheses or alternative scenarios. (See Figure
6.3a.) Once an analyst has developed a set of alternative scenarios or
competing hypotheses, the next step is to generate indicators for each
scenario (or hypothesis) that would appear if that particular world or
hypothesis were beginning to emerge or prove accurate. A critical question
that is not often asked is whether a given indicator would appear only in
the scenario to which it is assigned or also in one or more alternative
scenarios or hypotheses. Indicators that could appear in several scenarios
or hypotheses are not considered diagnostic, suggesting that they may not
be particularly useful in determining whether a specific scenario or a
particular hypothesis is true. The ideal indicator is highly likely or
consistent for the scenario or hypothesis to which it is assigned and highly
unlikely or inconsistent for all other alternatives.
Source: 2008 Pherson Associates, LLC.
Value Added
Employing the Indicators Validator™ to identify and dismiss
nondiagnostic indicators can significantly increase the credibility of an
analysis. By applying the tool, analysts can rank order their indicators
from most to least diagnostic and decide how far up the list they want to
draw the line in selecting the indicators that will be used in the analysis. In
some circumstances, analysts might discover that most or all the indicators
for a given scenario have been eliminated because they are also consistent
with other scenarios, forcing them to brainstorm a new and better set of
indicators. If analysts find it difficult to generate independent lists of
diagnostic indicators for two scenarios, it may be that the scenarios are not
sufficiently dissimilar, suggesting that they should be combined.
one(s) are actually useful and diagnostic instead of simply supporting a
particular scenario. Indicators Validation also helps guard against
premature closure by requiring an analyst to track whether the indicators
previously developed for a particular scenario are actually emerging.
The Method
The first step is to fill out a matrix similar to that used for Analysis of
Competing Hypotheses. This can be done manually or by using the
Indicators Validator™ software (see Figure 6.3b). The matrix should:
✶ List the alternative scenarios along the top of the matrix (as is done for
hypotheses in Analysis of Competing Hypotheses)
✶ List indicators that have already been generated for all the
scenarios down the left side of the matrix (as is done with relevant
information in Analysis of Competing Hypotheses)
✶ In each cell of the matrix, assess whether the indicator for that
particular scenario is
– Highly Likely to appear
– Likely to appear
– Could appear
– Unlikely to appear
– Highly Unlikely to appear
✶ Indicators developed for a particular scenario, their home
scenario, should be rated either Highly Likely or Likely. If the software is
unavailable, you can do your own scoring. If the indicator is Highly
Likely in the home scenario, then in the other scenarios:
– Highly Likely is 0 points
– Likely is 1 point
– Could is 2 points
– Unlikely is 4 points
– Highly Unlikely is 6 points
✶ If the indicator is Likely in the home scenario, then in the other
scenarios:
– Highly Likely is 0 points
– Likely is 0 points
– Could is 1 point
– Unlikely is 3 points
– Highly Unlikely is 5 points
✶ Tally up the scores across each row.
✶ Once this process is complete, re-sort the indicators so that the
most discriminating indicators are displayed at the top of the matrix
and the least discriminating indicators at the bottom.
– The most discriminating indicator is “Highly Likely” to
emerge in one scenario and “Highly Unlikely” to emerge in all
other scenarios.
– The least discriminating indicator is “Highly Likely” to
appear in all scenarios.
– Most indicators will fall somewhere in between.
✶ The indicators with the most Highly Unlikely and Unlikely ratings
are the most discriminating.
✶ Review where analysts differ in their assessments and decide if
adjustments are needed in their ratings. Often, differences in how
analysts rate a particular indicator can be traced back to different
assumptions they made about the scenario while doing the
ratings.
✶ Use your judgment as to whether you should retain or discard
indicators that have no Unlikely or Highly Unlikely ratings. In some
cases, an indicator may be worth keeping if it is useful when viewed
in combination with a cluster of indicators.
✶ Once nonhelpful indicators have been eliminated, regroup the
indicators under their home scenario.
✶ If a large number of diagnostic indicators for a particular scenario
have been eliminated, develop additional—and more diagnostic—
indicators for that scenario.
✶ Recheck the diagnostic value of any new indicators by applying
the Indicators Validator™ to them as well.
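The tallying logic in the steps above can be captured in a short sketch. The scenario names and ratings below are hypothetical; only the point tables come from the text. This is not the Indicators Validator™ software itself, merely an illustration of its scoring rules.

```python
# Sketch of the Indicators Validator(tm) scoring rules. Points awarded
# for a rating in a NON-home scenario, keyed by the indicator's rating
# in its home scenario (which must be Highly Likely or Likely).
POINTS = {
    "Highly Likely": {"Highly Likely": 0, "Likely": 1, "Could": 2,
                      "Unlikely": 4, "Highly Unlikely": 6},
    "Likely":        {"Highly Likely": 0, "Likely": 0, "Could": 1,
                      "Unlikely": 3, "Highly Unlikely": 5},
}

def score(indicator_ratings, home):
    """Tally discrimination points across one row of the matrix.

    indicator_ratings maps scenario name -> rating; `home` names the
    scenario the indicator was developed for. Higher totals mean a
    more discriminating indicator."""
    table = POINTS[indicator_ratings[home]]
    return sum(table[rating]
               for scenario, rating in indicator_ratings.items()
               if scenario != home)

# Hypothetical example: indicator developed for the "Attack" scenario.
ratings = {"Attack": "Highly Likely",
           "Feint": "Likely",
           "No attack": "Highly Unlikely"}
print(score(ratings, home="Attack"))  # prints 7 (1 for Feint + 6 for No attack)
```

Sorting indicators by this score, highest first, reproduces the re-sort step: the most discriminating indicators rise to the top of the matrix and the least discriminating sink to the bottom.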
Source: 2013 Globalytica, LLC.
Potential Pitfalls
Serious thought should be given before discarding indicators that are
determined to be nondiagnostic. For example, an indicator might not have
diagnostic value on its own but be helpful when viewed as part of a cluster
of indicators. An indicator that a terrorist group “purchased guns” would
not be diagnostic in determining which of the following scenarios were
likely to happen: armed attack, hostage taking, or kidnapping; but knowing
that guns had been purchased could be critical in pointing to an intent to
commit an act of violence or even to warn of the imminence of the event.
published to alert other analysts and collectors to the possibility that they
might make the same mistake.
1. Peter Schwartz, The Art of the Long View: Planning for the Future in an
Uncertain World (New York: Doubleday, 1996).
3. The description of the Cone of Plausibility is taken from Quick Wins for
Busy Analysts, DI Futures and Analytic Methods (DI FAM), Professional
Head of Defence Intelligence Analysis, UK Ministry of Defence; and
Gudmund Thompson, Aide Memoire on Intelligence Analysis Tradecraft,
Chief of Defence Intelligence, Director General of Intelligence Production,
Canada, Version 4.02, and is used with the permission of the UK and
Canadian governments.
7 Hypothesis Generation and Testing
The generation and testing of hypotheses is a skill, and its subtleties do not
come naturally. It is a form of reasoning that people can learn to use for
dealing with high-stakes situations. What does come naturally is drawing
on our existing body of knowledge and experience (mental model) to make
an intuitive judgment.1 In most circumstances in our daily lives, this is an
efficient approach that works most of the time. For intelligence analysis,
however, it is not sufficient, because intelligence issues are generally so
complex, and the risk and cost of error are too great. Also, the situations
are often novel, so the intuitive judgment shaped by past knowledge and
experience may well be wrong.
This chapter describes three techniques that are intended to be used
specifically for hypothesis generation. Other chapters include techniques
that can be used to generate hypotheses but also have a variety of other
purposes. These include Venn Analysis (chapter 4); Structured
Brainstorming, Nominal Group Technique, and Quadrant Crunching™
(chapter 5); Scenarios Analysis (chapter 6); the Delphi Method (chapter 9);
and Decision Trees (chapter 11).
Overview of Techniques
Hypothesis Generation
is a category that includes three specific techniques—Simple Hypotheses,
Multiple Hypotheses Generator™, and Quadrant Hypothesis Generation.
Simple Hypotheses is the easiest to use but not always the best selection.
Use the Multiple Hypotheses Generator™ to identify a large set of all
possible hypotheses. Quadrant Hypothesis Generation is used to identify a
set of hypotheses when the outcome is likely to be determined by just two
driving forces. The latter two techniques are particularly useful in
identifying a set of mutually exclusive and collectively exhaustive
(MECE) hypotheses.
Diagnostic Reasoning
applies hypothesis testing to the evaluation of significant new information.
Such information is evaluated in the context of all plausible explanations
of that information, not just in the context of the analyst’s well-established
mental model. The use of Diagnostic Reasoning reduces the risk of
surprise, as it ensures that an analyst will have given at least some
consideration to alternative conclusions. Diagnostic Reasoning differs
from the Analysis of Competing Hypotheses (ACH) technique in that it is
used to evaluate a single item of evidence, while ACH deals with an entire
issue involving multiple pieces of evidence and a more complex analytic
process.
Argument Mapping
is a method that can be used to put a single hypothesis to a rigorous logical
test. The structured visual representation of the arguments and evidence
makes it easier to evaluate any analytic judgment. Argument Mapping is a
logical follow-on to an ACH analysis. It is a detailed presentation of the
arguments for and against a single hypothesis, while ACH is a more
general analysis of multiple hypotheses. The successful application of
Argument Mapping to the hypothesis favored by the ACH analysis would
increase confidence in the results of both analyses.
Deception Detection
is discussed in this chapter because the possibility of deception by a
foreign intelligence service, economic competitor, or other adversary
organization is a distinctive type of hypothesis that analysts must
frequently consider. The possibility of deception can be included as a
hypothesis in any ACH analysis. Information identified through the
Deception Detection technique can then be entered as relevant information
in the ACH matrix.
to be tested by collecting and presenting evidence. It is a declarative
statement that has not been established as true—an “educated guess” based
on observation that needs to be supported or refuted by more observation
or through experimentation.
When to Use It
Analysts should use some structured procedure to develop multiple
hypotheses at the start of a project when:
✶ Many variables are involved in the analysis.
✶ There is uncertainty about the outcome.
✶ Analysts or decision makers hold competing views.
Value Added
Hypothesis Generation provides a structured way to generate a
comprehensive set of mutually exclusive hypotheses. This can increase
confidence that an important hypothesis has not been overlooked and can
also help to reduce bias. When the techniques are used properly, choosing
a lead hypothesis becomes much less critical than making sure that all the
possible explanations have been considered.
have “bins” for the use of a commercial airliner as a large
combustible missile or hijacking an airplane with no intent to land it.
✶ Expecting marginal change: Another frequent trap is focusing on
a narrow range of alternatives and projecting only marginal, not
radical, change. An example would be assuming that the conflict in
Syria is a battle between the Assad regime and united opposition and
not considering scenarios of more dramatic change such as partition
into several states, collapse into failed-state status, or a takeover by
radical Islamist forces.
to identify key forces and factors.
✶ Aggregate the hypotheses into affinity groups and label each
group.
✶ Use problem restatement and consideration of the opposite to
develop new ideas.
✶ Update the list of alternative hypotheses. If the hypotheses will be
used in ACH, strive to keep them mutually exclusive—that is, if one
hypothesis is true all others must be false.
✶ Have the group clarify each hypothesis by asking the journalist’s
classic list of questions: Who, What, How, When, Where, and Why?
✶ Select the most promising hypotheses for further exploration.
Source: 2013 Globalytica, LLC.
✶ Define the issue, activity, or behavior that is subject to
examination. Often, it is useful to ask the question in the following
ways:
– What variations could be developed to challenge the lead
hypothesis that . . . ?
– What are the possible permutations that would flip the
assumptions contained in the lead hypothesis that . . . ?
✶ Break up the lead hypothesis into its component parts following
the order of Who, What, How, When, Where, and Why? Some of
these questions may not be appropriate, or they may be givens for the
particular issue, activity, or behavior you are examining. Usually,
only three of these components are directly relevant to the issue at
hand; moreover, the method works best when the number of
hypotheses generated remains manageable.
– In a case in which you are seeking to challenge a favored
hypothesis, identify the Who, What, How, When, Where, and
Why for the given hypothesis. Then generate plausible
alternatives for each relevant key component.
✶ Review the lists of alternatives for each of the key components;
strive to keep the alternatives on each list mutually exclusive.
✶ Generate a list of all possible permutations, as shown in Figure
7.1.2.
✶ Discard any permutation that simply makes no sense.
✶ Evaluate the credibility of the remaining permutations by
challenging the key assumptions of each component. Some of these
assumptions may be testable themselves. Assign a “credibility score”
to each permutation using a 1-to-5-point scale, where 1 is low
credibility and 5 is high credibility.
✶ Re-sort the remaining permutations, listing them from most
credible to least credible.
✶ Restate the permutations as hypotheses, ensuring that each meets
the criteria of a good hypothesis.
✶ Select from the top of the list those hypotheses most deserving of
attention.
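The permutation step above amounts to a Cartesian product over the component alternatives. The components, alternatives, and placeholder credibility scores below are hypothetical examples, not drawn from any case in the book.

```python
# Sketch of the permutation step in the Multiple Hypotheses Generator(tm).
# Components and alternatives are hypothetical; usually only about three
# components are directly relevant, which keeps the list manageable.
from itertools import product

COMPONENTS = {
    "Who": ["insider", "outside group"],
    "How": ["cyber intrusion", "physical theft"],
    "Why": ["financial gain", "sabotage"],
}

def permutations(components):
    """Generate every combination of component alternatives."""
    keys = list(components)
    return [dict(zip(keys, combo)) for combo in product(*components.values())]

perms = permutations(COMPONENTS)
print(len(perms))  # 2 x 2 x 2 = 8 candidate permutations

# Discard permutations that make no sense, then assign each survivor a
# credibility score (1 = low, 5 = high) and re-sort most to least
# credible. Scores here are placeholders for the analyst's judgment.
scored = sorted(((p, 3) for p in perms),
                key=lambda pair: pair[1], reverse=True)
```

Each surviving permutation is then restated as a full hypothesis and checked against the criteria of a good hypothesis before the most deserving are selected.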
Source: 2013 Globalytica, LLC.
Use the quadrant technique to identify a basic set of hypotheses when two
easily identified key driving forces can be identified that are likely to
determine the outcome of an issue. The technique identifies four potential
scenarios that represent the extreme conditions for each of the two major
drivers. It spans the logical possibilities inherent in the relationship and
interaction of the two driving forces, thereby generating options that
analysts otherwise may overlook.
The four hypotheses derived from the quad chart can be stated as follows:
1. The final state of the Iraq government will be a centralized state and a
secularized society.
2. The final state of the Iraq government will be a centralized state and a
religious society.
3. The final state of the Iraq government will be a decentralized state
and a secularized society.
4. The final state of the Iraq government will be a decentralized state
and a religious society.
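The quad-chart logic can be expressed compactly: pairing the two extremes of each driver yields exactly the four hypotheses listed above. A sketch using the Iraq example from the text (the function name and driver labels are illustrative):

```python
from itertools import product

def quadrant_hypotheses(driver1, driver2, template):
    """Combine the two extreme conditions of each driving force into
    four hypotheses, one per quadrant of the chart."""
    _name1, extremes1 = driver1
    _name2, extremes2 = driver2
    return [template.format(a, b) for a, b in product(extremes1, extremes2)]

# The two drivers and wording follow the Iraq example in the text.
hypotheses = quadrant_hypotheses(
    ("governance", ("centralized", "decentralized")),
    ("society", ("secularized", "religious")),
    "The final state of the Iraq government will be a {} state and a {} society.",
)
```

The output reproduces the four statements enumerated above, in the same order.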
Potential Pitfalls
The value of this technique is limited by the ability of analysts to generate a
robust set of alternative explanations. If group dynamics are flawed, the
outcomes will be flawed. Whether the correct hypothesis or “idea”
emerges from this process and is identified as such by the analyst or
analysts cannot be guaranteed, but the prospect of the correct hypothesis
being included in the set of hypotheses under consideration is greatly
increased.
Relationship to Other Techniques
The product of any Scenarios Analysis can be thought of as a set of
alternative hypotheses. Quadrant Hypothesis Generation is a specific
application of the generic method called Morphological Analysis,
described in chapter 5. Alternative Futures Analysis uses a similar
quadrant chart approach to define four potential outcomes.
When to Use It
Analysts should use Diagnostic Reasoning if they find themselves making
a snap intuitive judgment while assessing the meaning of a new
development, the significance of a new report, or the reliability of a stream
of reporting from a new source. Often, much of the information used to
support one’s lead hypothesis turns out to be consistent with alternative
hypotheses as well. In such cases, the new information should not—and
cannot—be used as evidence to support the prevailing view or lead
hypothesis.
The technique also helps reduce the chances of being caught by surprise,
as it ensures that the analyst or decision maker will have given at least
some consideration to alternative explanations. It is especially important to
use when an analyst—or decision maker—is looking for evidence to
confirm an existing mental model or policy position. It helps the analyst
assess whether the same information is consistent with other reasonable
conclusions or with alternative hypotheses.
Value Added
The value of Diagnostic Reasoning is that it helps analysts balance their
natural tendency to interpret new information as consistent with their
existing understanding of what is happening—that is, the analyst’s mental
model. The technique prompts analysts to ask themselves whether this
same information is consistent with other reasonable conclusions or
alternative hypotheses. It is a common experience to discover that much of
the information supporting what one believes is the most likely conclusion
is really of limited value in confirming one’s existing view, because that
same information is also consistent with alternative conclusions. One
needs to evaluate new information in the context of all possible
explanations of that information, not just in the context of a well-
established mental model.
The Diagnostic Reasoning technique helps the analyst identify and focus
on the information that is most needed to make a decision and avoid the
mistake of discarding or ignoring information that is inconsistent with
what the analyst expects to see. When evaluating evidence, analysts tend
to assimilate new information into what they currently perceive. Gradual
change is difficult to notice. Sometimes, past experience can be a
handicap, in that the expert has much to unlearn and a fresh perspective
can be helpful.
Diagnostic Reasoning also helps analysts avoid the classic intuitive traps
of assuming the same dynamic is in play when something seems to accord
with an analyst’s past experiences and continuing to hold to an analytic
judgment when confronted with a mounting list of evidence that
contradicts the initial conclusion. It also helps reduce attention to
information that is not diagnostic by identifying information that is
consistent with all possible alternatives.
The Method
Diagnostic Reasoning is a process by which you try to refute alternative
judgments rather than confirm what you already believe to be true. Here
are the steps to follow:
Potential Pitfalls
When new information is received, analysts need to validate that the new
information is accurate and not deceptive or intentionally misleading. It is
also possible that none of the key information turns out to be diagnostic, or
that not all of the relevant information will be identified.
trying to refute as many reasonable hypotheses as possible rather than to
confirm what initially appears to be the most likely hypothesis. The most
likely hypothesis is then the one with the least relevant information against
it, not the one with the most relevant information for it.
When to Use It
ACH is appropriate for almost any analysis where there are alternative
explanations for what has happened, is happening, or is likely to happen.
Use it when the judgment or decision is so important that you cannot
afford to be wrong. Use it when your gut feelings are not good enough,
and when you need a systematic approach to prevent being surprised by an
unforeseen outcome. Use it on controversial issues when it is desirable to
identify precise areas of disagreement and to leave an audit trail to show
what relevant information was considered and how different analysts
arrived at their different judgments.
The technique can be used by a single analyst, but it is most effective with
a small team whose members can question one another’s evaluation of the
relevant information. It structures and facilitates the exchange of
information and ideas with colleagues in other offices or agencies. An
ACH analysis requires a modest commitment of time; it may take a day or
more to build the ACH matrix once all the relevant information has been
collected, and it may require another day to work through all the stages of
the analytic process before writing up any conclusions. A facilitator or a
colleague previously schooled in the use of the technique is often needed
to help guide analysts through the process, especially if it is the first time
they have used the methodology.
Value Added
Analysts are commonly required to work with incomplete, ambiguous,
anomalous, and sometimes deceptive data. In addition, strict time
constraints and the need to “make a call” often conspire with natural
human cognitive biases to cause inaccurate or incomplete judgments. If the
analyst is already generally knowledgeable on the topic, a common
procedure is to develop a favored hypothesis and then search for relevant
information to confirm it. This is called a “satisficing” approach—going
with the first answer that seems to be supported by the evidence.
assess the support for and against each hypothesis. This can be done
manually, but it is much easier and better to use ACH software designed
for this purpose. Various ACH software packages can be used to sort and analyze
the data by type of source and date of information, as well as by degree of
support for or against each hypothesis.
ACH Software
ACH started as a manual method at the CIA in the mid-1980s. The first
professionally developed and tested ACH software was created in 2005 by
the Palo Alto Research Center (PARC), with federal government funding
and technical assistance from Richards Heuer and Randy Pherson. Randy
Pherson managed its introduction into the U.S. Intelligence Community. It
was developed with the understanding that PARC would make it available
for public use at no cost, and it may still be downloaded at no cost from
the PARC website: http://www2.parc.com/istl/projects/ach/ach.html.
The PARC version was designed for use by an individual analyst, but in
practice it is commonly used by a collocated team of analysts. Members of
such groups report that:
✶ Review of the ACH matrix provides a systematic basis for
identification and discussion of differences between participating
analysts.
✶ Reference to the matrix helps depersonalize the argumentation
when there are differences of opinion.
The Method
To retain three or five or seven hypotheses in working memory and note
how each item of information fits into each hypothesis is beyond the
capabilities of most people. It takes far greater mental agility than the
common practice of seeking evidence to support a single hypothesis that is
already believed to be the most likely answer. ACH can be accomplished,
however, with the help of the following nine-step process:
3. Create a matrix with all hypotheses across the top and all items of
relevant information down the left side. See Figure 7.3a for an
example. Analyze each input by asking, “Is this input Consistent with
the hypothesis, is it Inconsistent with the hypothesis, or is it Not
Applicable or not relevant?” This can be done by either filling in each
cell of the matrix row by row or using the survey method, which
randomly selects a cell in the matrix for the analyst to rate (see Figure
7.3b for an example). If it is Consistent, put a “C” in the appropriate
matrix box; if it is Inconsistent, put an “I”; if it is Not Applicable to
that hypothesis, put an “NA.” If a specific item of evidence,
argument, or assumption is particularly compelling, put two “Cs” in
the box; if it strongly undercuts the hypothesis, put two “Is.”
assumption—whatever assumption the rating “depends on.” You
should record all such assumptions. After completing the matrix, look
for any pattern in those assumptions—such as, the same assumption
being made when ranking multiple items of information. After the
relevant information has been sorted for diagnosticity, note how many
of the highly diagnostic Inconsistency ratings are based on
assumptions. Consider how much confidence you should have in
those assumptions and then adjust the confidence in the ACH
Inconsistency Scores accordingly.
4. Review where analysts differ in their assessments and decide if
adjustments are needed in the ratings (see Figure 7.3c). Often,
differences in how analysts rate a particular item of information can
be traced back to different assumptions about the hypotheses when
doing the ratings.
5. Refine the matrix by reconsidering the hypotheses. Does it make
sense to combine two hypotheses into one, or to add a new hypothesis
that was not considered at the start? If a new hypothesis is added, go
back and evaluate all the relevant information for this hypothesis.
Additional relevant information can be added at any time.
interpretation of a few critical items of relevant information that the
software automatically moves to the top of the matrix. Consider the
consequences for your analysis if one or more of these critical items
of relevant information were wrong or deceptive or subject to a
different interpretation. If a different interpretation would be
sufficient to change your conclusion, go back and do everything that
is reasonably possible to double-check the accuracy of your
interpretation.
8. Report the conclusions. Consider the relative likelihood of all the
hypotheses, not just the most likely one. State which items of relevant
information were the most diagnostic, and how compelling a case
they make in identifying the most likely hypothesis.
9. Identify indicators or milestones for future observation. Generate two
lists: the first focusing on future events or what might be developed
through additional research that would help prove the validity of your
analytic judgment; the second, a list of indicators that would suggest
that your judgment is less likely to be correct or that the situation has
changed. Validate the indicators and monitor both lists on a regular
basis, remaining alert to whether new information strengthens or
weakens your case.
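The matrix coding from step 3 and the Inconsistency Scores mentioned in the caveats can be sketched as follows. The weighting shown (double letters count twice, and only inconsistent ratings contribute to the score) is a plausible reading of the method, not the exact scheme of any particular ACH software package:

```python
# Weights for ACH ratings: double letters mark particularly compelling
# or strongly undercutting items of relevant information.
WEIGHTS = {"CC": 2, "C": 1, "NA": 0, "I": -1, "II": -2}

def inconsistency_scores(matrix):
    """Sum only the negative (inconsistent) weights per hypothesis,
    since ACH ranks hypotheses by how much information argues against
    them, not how much argues for them."""
    hypotheses = {h for row in matrix.values() for h in row}
    return {
        h: sum(min(WEIGHTS[row[h]], 0) for row in matrix.values() if h in row)
        for h in hypotheses
    }

# Hypothetical matrix: rows are items of relevant information, columns
# are hypotheses (illustrative values only).
matrix = {
    "report 1": {"H1": "C",  "H2": "I"},
    "report 2": {"H1": "II", "H2": "C"},
    "report 3": {"H1": "NA", "H2": "C"},
}
```

With these ratings, H2 accumulates less information against it than H1 and would be ranked the more likely hypothesis.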
Figure 7.3b Coding Relevant Information in ACH
Figure 7.3c Evaluating Levels of Disagreement in ACH
Potential Pitfalls
A word of caution: ACH only works when all the analysts approach an
issue with a relatively open mind. An analyst who is already committed to
a belief in what the right answer is will often find a way to interpret the
relevant information as consistent with that belief. In other words, as an
antidote to Confirmation Bias, ACH is similar to a flu shot. Taking the flu
shot will usually keep you from getting the flu, but it won’t make you well
if you already have the flu.
offsetting advantage of focusing your attention on the few items of critical
information that cause the uncertainty or, if they were available, would
alleviate it. ACH can guide future collection, research, and analysis to
resolve the uncertainty and produce a more accurate judgment.
having high credibility, but this is probably not enough to reflect the
conclusiveness of such relevant information and the impact it should
have on an analyst’s thinking about the hypotheses. In other words, in
some circumstances one or two highly authoritative reports from a
trusted source in a position to know may support one hypothesis so
strongly that they refute all other hypotheses regardless of what other
less reliable or less definitive relevant information may show.
✶ Unbalanced set of evidence: Evidence and arguments must be
representative of the problem as a whole. If there is considerable
evidence on a related but peripheral issue and comparatively few
items of evidence on the core issue, the Inconsistency Score may be
misleading.
✶ Diminishing returns: As evidence accumulates, each new item of
Inconsistent relevant information or argument has less impact on the
Inconsistency Scores than does the earlier relevant information. For
example, the impact of any single item is less when there are fifty
items than when there are only ten items. To understand this, consider
what happens when you calculate the average of fifty numbers. Each
number has equal weight, but adding a fifty-first number will have
less impact on the average than if you start with only ten numbers and
add one more. Stated differently, the accumulation of relevant
information over time slows down the rate at which the Inconsistency
Score changes in response to new relevant information. Therefore,
these numbers may not reflect the actual amount of change in the
situation you are analyzing. When you are evaluating change over
time, it is desirable to delete the older relevant information
periodically, or to partition the relevant information and analyze the
older and newer relevant information separately.
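The averaging arithmetic behind this diminishing-returns caveat is easy to verify: the same new item moves the average of ten values far more than the average of fifty. A small sketch:

```python
def impact_of_new_item(values, new_value):
    """Absolute change in the average when one more item is added."""
    old_avg = sum(values) / len(values)
    new_avg = (sum(values) + new_value) / (len(values) + 1)
    return abs(new_avg - old_avg)

ten = [1.0] * 10
fifty = [1.0] * 50
# Adding a single discordant item (0.0) shifts the ten-item average by
# 1/11, but the fifty-item average by only 1/51.
```

This is why an Inconsistency Score responds ever more sluggishly to new relevant information as the matrix grows.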
Some other caveats to watch out for when using ACH include:
Relationship to Other Techniques
ACH is often used in conjunction with other techniques. For example,
Structured Brainstorming, Nominal Group Technique, the Multiple
Hypotheses Generator™, or the Delphi Method may be used to identify
hypotheses or relevant information to be included in the ACH analysis or
to evaluate the significance of relevant information. Deception Detection
may identify an opponent’s motive, opportunity, or means to conduct
deception, or past deception practices; information about these factors
should be included in the list of ACH-relevant information. The
Diagnostic Reasoning technique is incorporated within the ACH method.
The final step in the ACH method identifies indicators for monitoring
future developments.
ACH and Argument Mapping (described in the next section) are both used
on the same types of complex analytic problems. They are both systematic
methods for organizing relevant information, but they work in
fundamentally different ways and are best used at different stages in the
analytic process. ACH is used during an early stage to analyze a range of
hypotheses in order to determine which is most consistent with the broad
body of relevant information. At a later stage, when the focus is on
developing, evaluating, or presenting the case for a specific conclusion,
Argument Mapping is the appropriate method. Each method has strengths
and weaknesses, and the optimal solution is to use both.
An Argument Map makes it easier for both the analysts and the recipients
of the analysis to clarify and organize their thoughts and evaluate the
soundness of any conclusion. It shows the logical relationships between
various thoughts in a systematic way and allows one to assess quickly in a
visual way the strength of the overall argument. The technique also helps
the analysts and recipients of the report to focus on key issues and
arguments rather than focusing too much attention on minor points.
When to Use It
When making an intuitive judgment, use Argument Mapping to test your
own reasoning. Creating a visual map of your reasoning and the evidence
that supports this reasoning helps you better understand the strengths,
weaknesses, and gaps in your argument. It is best to use this technique
before you write your product to ensure the quality of the argument and
refine the line of argument.
Value Added
An Argument Map organizes one’s thinking by showing the logical
relationships between the various thoughts, both pro and con. An
Argument Map also helps the analyst recognize assumptions and identify
gaps in the available knowledge. The visualization of these relationships
makes it easier to think about a complex issue and serves as a guide for
clearly presenting to others the rationale for the conclusions. Having this
rationale available in a visual form helps both the analyst and recipients of
the report focus on the key points rather than meandering aimlessly or
going off on nonsubstantive tangents.
argument also makes it easier to recognize weaknesses in opposing
arguments. It pinpoints the location of any disagreement, and it can also
serve as an objective basis for mediating a disagreement.
An Argument Map is an ideal tool for dealing with issues of cause and
effect—and for avoiding the cognitive trap of assuming that correlation
implies causation. By laying out all the arguments for and against a lead
hypothesis—and all the supporting evidence and logic—for each
argument, it is easy to evaluate the soundness of the overall argument. The
process also mitigates the intuitive trap of ignoring base-rate
probabilities by encouraging the analyst to seek out and record all the
relevant facts that support each supposition. Similarly, the focus on
seeking out and recording all data that support or rebut the key points of
the argument makes it difficult for the analyst to make the mistake of
overdrawing conclusions from a small sample of data or continuing to hold
to an analytic judgment when confronted with a mounting list of evidence
that contradicts the initial conclusion.
The Method
An Argument Map starts with a hypothesis—a single-sentence statement,
judgment, or claim about which the analyst can, in subsequent statements,
present general arguments and detailed evidence, both pro and con. Boxes
with arguments are arrayed hierarchically below this statement, and these
boxes are connected with arrows. The arrows signify that a statement in
one box is a reason to believe, or not to believe, the statement in the box to
which the arrow is pointing. Different types of boxes serve different
functions in the reasoning process, and boxes use some combination of
color-coding, icons, shapes, and labels so that one can quickly distinguish
arguments supporting a hypothesis from arguments opposing it. Figure 7.4
is a very simple example of Argument Mapping, showing some of the
arguments bearing on the assessment that North Korea has nuclear
weapons.
Source: Diagram produced using the bCisive Argument Mapping
software from Austhink, www.austhink.com. Reproduced with
permission.
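An argument map's box-and-arrow structure is naturally a tree of claims, each marked as a reason for or against its parent. A minimal sketch of such a structure, loosely modeled on the North Korea example (the specific claims shown are illustrative assumptions, not taken from Figure 7.4):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One box in an argument map: a claim plus its supporting and
    opposing child arguments."""
    claim: str
    supports: bool = True          # True = reason for, False = reason against
    children: list = field(default_factory=list)

def count_sides(node):
    """Tally all supporting and opposing arguments beneath a hypothesis,
    giving a quick visual-map-style sense of the balance of argument."""
    pro = con = 0
    for child in node.children:
        if child.supports:
            pro += 1
        else:
            con += 1
        p, c = count_sides(child)
        pro, con = pro + p, con + c
    return pro, con

# Illustrative hypothesis with one supporting argument and one objection.
root = Node("North Korea has nuclear weapons")
root.children = [
    Node("It conducted underground nuclear tests", supports=True),
    Node("No weapon has been directly observed", supports=False),
]
```

A rebuttal is simply a supporting child attached beneath an objection; an objection with no such child flags a potential flaw in the argument.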
or challenges it supports.
✶ Specify rebuttals, if any, with orange lines. An objection,
challenge, or counterevidence that does not have an orange-line
rebuttal suggests a flaw in the argument.
✶ Evaluate the argument for clarity and completeness, ensuring that
red-lined opposing claims and evidence have orange-line rebuttals. If
all the reasons can be rebutted, then the argument is without merit.
Potential Pitfalls
Argument Mapping is a challenging skill. Training and practice are
required to use the technique properly and so gain its benefits. Detailed
instructions for effective use of this technique are available at the website
listed below under “Origins of This Technique.” Assistance by someone
experienced in using the technique is necessary for first-time users.
Commercial software and freeware are available for various types of
Argument Mapping. In the absence of software, using a sticky note to
represent each box in an Argument Map drawn on a whiteboard can be
very helpful, as it is easy to move the sticky notes around as the map
evolves and changes.
deceived. “The accurate perception of deception in counterintelligence
analysis is extraordinarily difficult. If deception is done well, the analyst
should not expect to see any evidence of it. If, on the other hand, deception
is expected, the analyst often will find evidence of deception even when it
is not there.”7
When to Use It
Analysts should be concerned about the possibility of deception when:
Value Added
Most intelligence analysts know they cannot assume that everything that
arrives in their in-box is valid, but few know how to factor such concerns
effectively into their daily work practices. If an analyst accepts the
possibility that some of the information received may be deliberately
deceptive, this puts a significant cognitive burden on the analyst. All the
evidence is open then to some question, and it becomes difficult to draw
any valid inferences from the reporting. This fundamental dilemma can
paralyze analysis unless practical tools are available to guide the analyst in
determining when it is appropriate to worry about deception, how best to
detect deception in the reporting, and what to do in the future to guard
against being deceived.
The measure of a good deception operation is how well it exploits the
cognitive biases of its target audience. The deceiver’s strategy usually is to
provide some intelligence or information of value to the person being
deceived in the hopes that he or she will conclude the “take” is good
enough and should be disseminated. As additional information is collected,
the satisficing bias is reinforced and the recipient’s confidence in the
information or the source usually grows, further blinding the recipient to
the possibility that he or she is being deceived. The deceiver knows that
the information being provided is highly valued, although over time some
people will begin to question the bona fides of the source. Often, this puts
the person who developed the source or acquired the information on the
defensive, and the natural reaction is to reject any and all criticism. This
cycle is usually broken only by applying structured techniques such as
Deception Detection to force a critical examination of the true quality of
the information and the potential for deception.
It is very hard to deal with deception when you are really just
trying to get a sense of what is going on, and there is so much noise
in the system, so much overload, and so much ambiguity. When
you layer deception schemes on top of that, it erodes your ability to
act.
The Method
Analysts should routinely consider the possibility that opponents or
competitors are attempting to mislead them or to hide important
information. The possibility of deception cannot be rejected simply
because there is no evidence of it; because, if the deception is well done,
one should not expect to see evidence of it. Some circumstances in which
deception is most likely to occur are listed in the “When to Use It” section.
When such circumstances occur, the analyst, or preferably a small group
of analysts, should assess the situation using four checklists that are
commonly referred to by their acronyms: MOM, POP, MOSES, and EVE.
(See box on p. 201.)
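The four checklists can be organized as simple structured data. The acronym expansions below are the standard ones from Pherson's handbook; the sample questions and the flagging rule are illustrative assumptions, not the book's full checklists:

```python
# Deception Detection checklists by acronym. The sample question under
# each expansion is an illustrative placeholder.
CHECKLISTS = {
    "MOM": ("Motive, Opportunity, and Means",
            "Does the potential deceiver have a motive to deceive?"),
    "POP": ("Past Opposition Practices",
            "Has this adversary practiced deception before?"),
    "MOSES": ("Manipulability of Sources",
              "Could the source be controlled or fed information?"),
    "EVE": ("Evaluation of Evidence",
            "Is the evidence internally consistent and corroborated?"),
}

def flag_deception_risk(answers, threshold=2):
    """Flag a case for closer scrutiny when 'yes' answers accumulate
    across checklists (threshold is an illustrative choice)."""
    hits = sum(1 for yes in answers.values() if yes)
    return hits >= threshold
```

In practice the checklists prompt discussion within a small group of analysts rather than a mechanical score, but recording the answers creates the audit trail the technique is meant to provide.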
Analysts have also found the following “rules of the road” helpful in
dealing with deception:8
Deception Detection in this book was previously published in Randolph H.
Pherson, Handbook of Analytic Tools and Techniques (Reston, VA:
Pherson Associates, LLC, 2008).
3. Karl Popper, The Logic of Scientific Discovery (New York: Basic Books, 1959).
8 Assessment of Cause and Effect
Attempts to explain the past and forecast the future are based on an
understanding of cause and effect. Such understanding is difficult, because
the kinds of variables and relationships studied by the intelligence analyst
are, in most cases, not amenable to the kinds of empirical analysis and
theory development that are common in academic research. The best the
analyst can do is to make an informed judgment, but such judgments
depend upon the analyst’s subject-matter expertise and reasoning ability,
and are vulnerable to various cognitive pitfalls and fallacies of reasoning.
People also make several common errors in logic. When two occurrences
often happen at the same time, or when one follows the other, they are said
to be correlated. A frequent assumption is that one occurrence causes the
other, but correlation should not be confused with causation. It often
happens that a third variable causes both. “A classic example of this is ice
cream sales and the number of drownings in the summer. The fact that ice
cream sales increase as drownings increase does not mean that either one
caused the other. Rather, a third factor, such as hot weather, could have
caused both.”1
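The ice cream example can be reproduced with a small simulation in which a confounding variable (hot weather) drives both series, producing a strong correlation with no causal link between them:

```python
import random

random.seed(0)

# Hot weather (the confounder) drives both ice cream sales and
# drownings; neither causes the other. Coefficients are illustrative.
temps = [random.uniform(15, 35) for _ in range(200)]
ice_cream = [t * 2.0 + random.gauss(0, 3) for t in temps]
drownings = [t * 0.5 + random.gauss(0, 2) for t in temps]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

The two series correlate strongly even though removing the shared driver would remove the relationship entirely, which is the error the text warns against.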
are too generalized to be applicable to the unique characteristics of
most intelligence problems. Many others involve quantitative analysis
that is beyond the domain of structured analytic techniques as defined
in this book. However, a conceptual model that simply identifies
relevant variables and the diverse ways they might combine to cause
specific outcomes can be a useful template for guiding collection and
analysis of some common types of problems. Outside-In Thinking
can be used to explain current events or forecast the future in this
way.
Overview of Techniques
Key Assumptions Check
is one of the most important and frequently used techniques. Analytic
judgment is always based on a combination of evidence and assumptions
—or preconceptions—that influence how the evidence is interpreted. The
Key Assumptions Check is a systematic effort to make explicit and
question the assumptions (i.e., mental model) that guide an analyst’s
thinking.
Structured Analogies
applies analytic rigor to reasoning by analogy. This technique requires that
the analyst systematically compare the issue of concern with multiple
potential analogies before selecting the one for which the circumstances
are most similar to the issue of concern. It seems natural to use analogies
when making decisions or forecasts as, by definition, they contain
information about what has happened in similar situations in the past.
People often recognize patterns and then consciously take actions that
were successful in a previous experience or avoid actions that previously
were unsuccessful. However, analysts need to avoid the strong tendency to
fasten onto the first analogy that comes to mind and supports their prior
view about an issue.
Role Playing
, as described here, starts with the current situation, perhaps with a real or
hypothetical new development that has just happened and to which the
players must react. This type of Role Playing, unlike much military
gaming, can often be completed in one or two days, with little advance
preparation. Even simple Role Playing exercises will stimulate creative
and systematic thinking about how a current complex situation might play
out. A Role Playing game is also an effective mechanism for bringing
together analysts who, although they work on a common problem, may
have little opportunity to meet and discuss their perspectives on that
problem. When major decisions are needed and time is short, this
technique can be used to bring analysts and decision makers together in the
same room to engage in dynamic problem solving.
Outside-In Thinking
broadens an analyst’s thinking about the forces that can influence a
particular issue of concern. This technique requires the analyst to reach
beyond his or her specialty area to consider broader social, organizational,
economic, environmental, technological, political, legal, and global forces
or trends that can affect the topic being analyzed.
8.1 Key Assumptions Check
Analytic judgment is always based on a combination of evidence and
assumptions, or preconceptions, which influences how the evidence is
interpreted.2 The Key Assumptions Check is a systematic effort to make
explicit and question the assumptions (the mental model) that guide an
analyst’s interpretation of evidence and reasoning about any particular
problem. Such assumptions are usually necessary and unavoidable as a
means of filling gaps in the incomplete, ambiguous, and sometimes
deceptive information with which the analyst must work. They are driven
by the analyst’s education, training, and experience, plus the
organizational context in which the analyst works.
When to Use It
Any explanation of current events or estimate of future developments
requires the interpretation of evidence. If the available evidence is
incomplete or ambiguous, this interpretation is influenced by assumptions
about how things normally work in the country of interest. These
assumptions should be made explicit early in the analytic process.
should be modified.
Value Added
Preparing a written list of one’s working assumptions at the beginning of
any project helps the analyst do the following:
The Method
The process of conducting a Key Assumptions Check is relatively
straightforward in concept but often challenging to put into practice. One
challenge is that participating analysts must be open to the possibility that
they could be wrong. It helps to involve in this process several well-
regarded analysts who are generally familiar with the topic but have no
prior commitment to any set of assumptions about the issue at hand. Keep
in mind that many “key assumptions” turn out to be “key uncertainties.”
Randy Pherson’s extensive experience as a facilitator of analytic projects
indicates that approximately one in every four key assumptions collapses
on careful examination.
Here are the steps in conducting a Key Assumptions Check:
– How much confidence do I have that this assumption is
valid?
– If it turns out to be invalid, how much impact would this
have on the analysis?
✶ Place each assumption in one of three categories:
– Basically solid.
– Correct with some caveats.
– Unsupported or questionable—the “key uncertainties.”
✶ Refine the list, deleting those assumptions that do not hold up to
scrutiny and adding new ones that emerge from the discussion. Above
all, emphasize those assumptions that would, if wrong, lead to
changing the analytic conclusions.
✶ Consider whether key uncertainties should be converted into
intelligence collection requirements or research topics.
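The three-category sorting above can be sketched as a small triage function. The 1-to-5 confidence scale, the thresholds, and the sample assumptions are illustrative assumptions, not from the text:

```python
def categorize(confidence):
    """Map a 1-to-5 confidence rating onto the three categories named
    in the text (the cutoffs are an illustrative choice)."""
    if confidence >= 4:
        return "basically solid"
    if confidence == 3:
        return "correct with some caveats"
    return "unsupported or questionable"

# Hypothetical working assumptions rated (confidence, impact-if-wrong).
assumptions = {
    "The regime controls its border forces": (5, 2),
    "Sanctions will remain in place": (3, 4),
    "The opposition cannot coordinate": (2, 5),
}

# The "key uncertainties" are candidates for collection requirements,
# especially those whose impact-if-wrong rating is high.
key_uncertainties = [a for a, (conf, impact) in assumptions.items()
                     if categorize(conf) == "unsupported or questionable"]
```

Recording both ratings keeps attention on the assumptions that would, if wrong, change the analytic conclusions.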
Figure 8.1 shows apparently flawed assumptions made in the Wen Ho Lee
espionage case during the 1990s and what further investigation showed
about these assumptions. A Key Assumptions Check could have identified
weaknesses in the case against Lee much earlier.
present) in which the assumption is wrong. What could have happened to
make it wrong, how could that have happened, and what are the
consequences?
Source: 2009 Pherson Associates, LLC.
Quadrant Crunching™ (chapter 5) and Simple Scenarios (chapter 6) both
use assumptions and their opposites to generate multiple explanations or
outcomes.
When to Use It
It seems natural to use analogies when making judgments or forecasts
because, by definition, they contain information about what has happened
in similar situations in the past. People do this in their daily lives, and
analysts do it in their role as intelligence analysts. People recognize similar
situations or patterns and then consciously take actions that were
successful in a previous experience or avoid actions that previously were
unsuccessful. People often turn to analogical reasoning in unfamiliar or
uncertain situations where the available information is inadequate for any
other approach.
When one is making any analogy, it is important to think about more than
just the similarities. It is also necessary to consider those conditions,
qualities, or circumstances that are dissimilar between the two phenomena.
This should be standard practice in all reasoning by analogy and especially
in those cases when one cannot afford to be wrong.
Many analogies are used loosely and have a broad impact on the thinking
of both decision makers and the public at large. One role for analysis is to
take analogies that are already being used by others, and that are having an
impact, and then subject these analogies to rigorous examination.
—Jerome K. Clauser and Sandra M. Weir, Intelligence Research
Methodology, Defense Intelligence School (1975)
Value Added
Reasoning by analogy helps achieve understanding by reducing the
unfamiliar to the familiar. In the absence of data required for a full
understanding of the current situation, reasoning by analogy may be the
only alternative. If this approach is taken, however, one should be aware of
the significant potential for error, and the analyst should reduce the
potential for error to the extent possible through the use of the Structured
Analogies technique. Use of the technique helps protect analysts from
making the mental mistakes of overdrawing conclusions from a small set
of data and assuming the same dynamic is in play when, at first glance,
something seems to accord with their past experiences.
—Ernest R. May, Lessons of the Past: The Use and Misuse of
History in American Foreign Policy (1975)
The Method
Training in this technique is recommended prior to using it. Such a
training course is available at
http://www.academia.edu/1070109/Structured_Analogies_for_Forecasting.
communication and information, and actions in disputed area.
✶ Write up an account of each selected analogy, with equal focus on
those aspects of the analogy that are similar and those that are
different. The task of writing accounts of all the analogies should be
divided up among the experts. Each account can be posted on a wiki
where each member of the group can read and comment on them.
✶ Review the tentative list of categories for comparing the analogous
situations to make sure they are still appropriate. Then ask each
expert to rate the similarity of each analogy to the issue of concern.
The experts should do the rating in private using a scale from 0 to 10,
where 0 = not at all similar, 5 = somewhat similar, and 10 = very
similar.
✶ After combining the ratings to calculate an average rating for each
analogy, discuss the results and make a forecast for the current issue
of concern. This will usually be the same as the outcome of the most
similar analogy. Alternatively, identify several possible outcomes, or
scenarios, based on the diverse outcomes of analogous situations.
Then use the analogous cases to identify drivers or policy actions that
might influence the outcome of the current situation.
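The rating and averaging step lends itself to a simple numeric sketch. The code below assumes a handful of analogies, each with the experts' private 0–10 similarity ratings and its historical outcome; it averages the ratings and takes the outcome of the most similar analogy as the default forecast, exactly as the steps above describe. The analogies, ratings, and outcomes are invented examples.

```python
# Illustrative sketch of the rating step in Structured Analogies: each
# expert privately rates each analogy's similarity to the current issue
# on a 0-10 scale; ratings are averaged, and the outcome of the most
# similar analogy serves as the default forecast. Data are invented.

from statistics import mean

# analogy -> (list of expert similarity ratings, historical outcome)
ratings = {
    "1962 Cuba blockade":   ([8, 7, 9], "negotiated settlement"),
    "1990 Kuwait invasion": ([4, 5, 3], "armed intervention"),
    "1999 Kosovo campaign": ([6, 6, 7], "limited air campaign"),
}

averages = {name: mean(scores) for name, (scores, _) in ratings.items()}
most_similar = max(averages, key=averages.get)
forecast = ratings[most_similar][1]

print(f"Most similar analogy: {most_similar} ({averages[most_similar]:.1f}/10)")
print(f"Default forecast: {forecast}")
```

In practice, as the text notes, the diverse outcomes across analogies can instead seed multiple scenarios rather than a single forecast.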
When to Use It
Role Playing is often used to improve understanding of what might happen
when two or more people, organizations, or countries interact, especially
in conflict situations or negotiations. It shows how each side might react to
statements or actions from the other side. Many years ago Richards Heuer
participated in several Role Playing exercises, including one with analysts
of the Soviet Union from throughout the Intelligence Community playing
the role of Politburo members deciding on the successor to Soviet leader
Leonid Brezhnev. Randy Pherson has also organized several Role Playing
games on Latin America involving intelligence analysts and senior policy
officials. Role Playing has a desirable by-product that might be part of the
rationale for using this technique. It is a useful mechanism for bringing
together people who, although they work on a common problem, may have
little opportunity to meet and discuss their perspectives on this problem. A
Role Playing game may lead to the long-term benefits that come with
mutual understanding and ongoing collaboration. To maximize this
benefit, the organizer of the game should allow for participants to have
informal time together.
Value Added
Role Playing is a good way to see a problem from another person’s
perspective, to gain insight into how others think, or to gain insight into
how other people might react to U.S. actions. Playing a role gives one
license to think and act in a different manner. Simply trying to imagine
how another leader or country will think and react, which analysts do
frequently, is not Role Playing and is probably less effective than Role
Playing. One must actually act out the role and become, in a sense, the
person whose role is assumed. It is “living” the role that opens one’s
mental model and makes it possible to relate facts and ideas to one another
in ways that differ from habitual patterns.
interaction” (Role Playing) to act out the conflicts.3
Role Playing does not necessarily give a “right” answer, but it typically
enables the players to see some things in a new light. Players become more
conscious that “where you stand depends on where you sit.” By changing
roles, the participants can see the problem in a different context. Most
participants view their experiences as useful in providing new information
and insights.4 Bringing together analysts from various offices and agencies
offers each participant a modest reality test of his or her views.
Participants are forced to confront in a fairly direct fashion the fact that the
assumptions they make about the problem are not inevitably shared by
others. The technique helps them avoid the traps of giving too much
weight to first impressions and seeing patterns where they do not exist.
The Method
Most of the gaming done in the Department of Defense and in the
academic world is rather elaborate, so it requires substantial preparatory
work. It does not have to be that way. The preparatory work (such as
writing scripts) can be avoided by starting the game with the current
situation as already known to analysts and decision makers, rather than
with a notional scenario that participants have to learn. Just one notional
news or intelligence report is sufficient to start the action in the game. In
the authors’ experience, it is possible to have a useful political game in just
one day with only a modest investment in preparatory work.
The analyst who plans and organizes the game leads a control team. This
team monitors time to keep the game on track, serves as the
communication channel to pass messages between teams, leads the after-
action review, and helps write the after-action report to summarize what
happened and lessons learned. The control team also plays any role that
becomes necessary but was not foreseen—for example, a United Nations
mediator. If necessary to keep the game on track or lead it in a desired
direction, the control team may introduce new events, such as a terrorist
attack that inflames emotions or a new policy statement on the issue by the
U.S. president.
Potential Pitfalls
One limitation of Role Playing is the difficulty of generalizing from the
game to the real world. Just because something happens in a Role Playing
game does not necessarily mean the future will turn out that way. This
observation seems obvious, but it can actually be a problem. Because of
the immediacy of the experience and the personal impression made by the
simulation, the outcome may have a stronger impact on the participants’
thinking than is warranted by the known facts of the case. As we shall
discuss, this response needs to be addressed in the after-action review.
studies, Role Playing predicted the correct outcome for 56 percent of 143
predictions, while unaided expert opinion was correct for 16 percent of
172 predictions. (See “Origins of This Technique.”) Outcomes of conflict
situations are difficult to predict because they involve a series of actions
and reactions, with each action depending upon the other party’s previous
reaction to an earlier action, and so forth. People do not have the cognitive
capacity to play out such a complex situation in their heads. Acting out
situations and responses is helpful because it simulates this type of
sequential interaction.
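The arithmetic behind the comparison above can be restated in a few lines. The snippet simply converts the reported accuracy rates into approximate counts of correct predictions; the percentages and sample sizes come from the text, and everything else is presentation.

```python
# Restating the comparison from Armstrong's review: Role Playing was
# correct on 56 percent of 143 predictions versus 16 percent of 172
# for unaided expert opinion. The counts below derive from those figures.

role_playing = {"correct_rate": 0.56, "n": 143}
expert_opinion = {"correct_rate": 0.16, "n": 172}

rp_correct = round(role_playing["correct_rate"] * role_playing["n"])
eo_correct = round(expert_opinion["correct_rate"] * expert_opinion["n"])

print(f"Role Playing: ~{rp_correct} of {role_playing['n']} correct")
print(f"Unaided experts: ~{eo_correct} of {expert_opinion['n']} correct")
print(f"Accuracy ratio: {role_playing['correct_rate'] / expert_opinion['correct_rate']:.1f}x")
```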
Armstrong’s findings support the use of Role Playing as a tool for
intelligence analysis, but they do not justify treating the outcome as a
reliable prediction. Although Armstrong found Role Playing results to be
more accurate than expert opinion, the error rate for Role Playing was still
quite large. Moreover, the conditions in which Role Playing is used in
intelligence analysis are quite different from the studies that Armstrong
reviewed and conducted. When the technique is used for intelligence
analysis, the goal is not an explicit prediction but better understanding of
the situation and the possible outcomes. The method does not end with the
conclusion of the Role Playing. There must be an after-action review of
the key turning points and how the outcome might have been different if
different choices had been made at key points in the game.
and perceive the world in the same way we do. Red Hat Analysis5 is a
useful technique for trying to perceive threats and opportunities as others
see them, but this technique alone is of limited value without significant
cultural understanding of the other country and people involved.
When to Use It
The chances of a Red Hat Analysis being accurate are better when one is
trying to foresee the behavior of a specific person who has the authority to
make decisions. Authoritarian leaders as well as small, cohesive groups,
such as terrorist cells, are obvious candidates for this type of analysis. The
chances of making an accurate forecast about an adversary’s or a
competitor’s decision are significantly lower when the decision is
constrained by a legislature or influenced by conflicting interest groups. In
law enforcement, Red Hat Analysis can be used effectively to simulate the
likely behavior of a criminal or a drug lord.
Value Added
There is a great deal of truth to the maxim that “where you stand depends
on where you sit.” Red Hat Analysis is a reframing technique6 that
requires the analyst to adopt—and make decisions consonant with—the
culture of a foreign leader, cohesive group, criminal, or competitor. This
conscious effort to imagine the situation as the target perceives it helps the
analyst gain a different and usually more accurate perspective on a
problem or issue. Reframing the problem typically changes the analyst’s
perspective from that of an analyst observing and forecasting an
adversary’s behavior to that of a leader who must make a difficult decision
within that operational culture. This reframing process often introduces
new and different stimuli that might not have been factored into a
traditional analysis. For example, in a Red Hat exercise, participants might
ask themselves these questions: “What are my supporters expecting from
me?” “Do I really need to make this decision now?” “What are the
consequences of making a wrong decision?” “How will the United States
respond?”
The Method
✶ Gather a group of experts with in-depth knowledge of the target,
operating environment, and senior decision maker’s personality,
motives, and style of thinking. If at all possible, try to include people
who are well grounded in the adversary’s culture, who speak the same
language, share the same ethnic background, or have lived
extensively in the region.
✶ Present the experts with a situation or a stimulus and ask them
what they would do if confronted by this situation. For example, you
might ask for a response to this situation: “The United States has just
imposed sanctions on your country.” Or: “We are about to launch a
new product. How do you think our competitors will react?” The
reason for first asking the experts how they would react is to establish
a baseline for assessing whether the adversary is likely to react
differently.7
✶ Once the experts have articulated how they would have responded
or acted, ask them to explain why they think they would behave that
way. Ask the experts to list what core values or core assumptions
were motivating their behavior or actions. Again, this step establishes
a baseline for assessing why the adversary is likely to react
differently.
✶ Once they can explain in a convincing way why they chose to act
the way they did, ask the experts to put themselves in the adversary’s
or competitor’s shoes and simulate how their adversary or competitor
would respond. At this point, the experts should ask themselves,
“Does our adversary or competitor share our values or motives or
methods of operation?” If not, then how do those differences lead
them to act in ways we might not have anticipated before engaging in
this exercise?
✶ If trying to foresee the actions of a group or an organization,
consider using the Role Playing technique. To gain cultural expertise
that might otherwise be lacking, consider using the Delphi Method
(chapter 9) to elicit the expertise of geographically distributed
experts.
✶ In presenting the results, describe the alternatives that were
considered and the rationale for selecting the path the person or group
is believed most likely to take. Consider other less conventional
means of presenting the results of your analysis, such as the
following:
– Describing a hypothetical conversation in which the leader
and other players discuss the issue in the first person.
– Drafting a document (set of instructions, military orders,
policy paper, or directives) that the adversary or competitor
would likely generate.
Figure 8.4 shows how one might use the Red Hat Technique to catch bank
robbers.
Source: Eric Hess, Senior Biometric Product Manager, MorphoTrak,
Inc. From an unpublished paper, “Facial Recognition for Criminal
Investigations,” delivered at the International Association of Law
Enforcement Intelligence Analysts, Las Vegas, 2009. Reproduced
with permission.
Potential Pitfalls
Forecasting human decisions or the outcome of a complex organizational
process is difficult in the best of circumstances. For example, how
successful would you expect to be in forecasting the difficult decisions to
be made by the U.S. president or even your local mayor? It is even more
difficult when dealing with a foreign culture and significant gaps in the
available information. Mirror imaging is hard to avoid because, in the
absence of a thorough understanding of the foreign situation and culture,
your own perceptions appear to be the only reasonable way to look at the
problem.
A common error in our perceptions of the behavior of other people,
organizations, or governments of all types is likely to be even more
common when assessing the behavior of foreign leaders or groups. This is
the tendency to attribute the behavior of people, organizations, or
governments to the nature of the actor and to underestimate the influence
of situational factors. This error is especially easy to make when one
assumes that the actor has malevolent intentions but our understanding of
the pressures on that actor is limited. Conversely, people tend to see their
own behavior as conditioned almost entirely by the situation in which they
find themselves. We seldom see ourselves as bad people, but we often
see malevolent intent in others. This is known to cognitive psychologists
as the “fundamental attribution error.”8
Analysts should always try to see the situation from the other side’s
perspective, but if a sophisticated grounding in the culture and operating
environment of their subject is lacking, they will often be wrong and fall
victim to mirror imaging. Recognition of this uncertainty should prompt
analysts to consider using words such as “possibly” and “could happen”
rather than “likely” or “probably” when reporting the results of Red Hat
Analysis. A key first step in avoiding mirror imaging is to establish how
you would behave and the reasons why. Once this baseline is established,
the analyst then asks if the adversary would act differently and why this is
likely to occur. Is the adversary motivated by different stimuli, or does the
adversary hold different core values? The task of Red Hat Analysis then
becomes illustrating how these differences would result in different
policies or behaviors.
opposing team.
When to Use It
This technique is most useful in the early stages of an analytic process
when analysts need to identify all the critical factors that might explain an
event or could influence how a particular situation will develop. It should
be part of the standard process for any project that analyzes potential
future outcomes, for this approach covers the broader environmental
context from which surprises and unintended consequences often come.
Value Added
Most analysts focus on familiar factors within their field of specialty, but
we live in a complex, interrelated world where events in our little niche of
that world are often affected by forces in the broader environment over
which we have no control. The goal of Outside-In Thinking is to help
analysts see the entire picture, not just the part of the picture with which
they are already familiar.
Source: 2009 Pherson Associates, LLC.
The Method
✶ Generate a generic description of the problem or phenomenon to
be studied.
✶ Form a group to brainstorm the key forces and factors that could
have an impact on the topic but over which the subject can exert little
or no influence, such as globalization, the emergence of new
technologies, historical precedent, and the growing impact of social
media.
✶ Employ the mnemonic STEEP + 2 to trigger new ideas (Social,
Technical, Economic, Environmental, and Political, plus Military and
Psychological).
✶ Move down a level of analysis and list the key factors about which
some expertise is available.
✶ Assess specifically how each of these forces and factors could
have an impact on the problem.
✶ Ascertain whether these forces and factors actually do have an
impact on the issue at hand, basing your conclusion on the available
evidence.
✶ Generate new intelligence collection tasking or research priorities
to fill in information gaps.
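The steps above can be sketched as a small checklist exercise. The snippet organizes brainstormed forces under the STEEP + 2 categories and flags those not yet supported by evidence as collection or research gaps. The category names come from the mnemonic in the text; the example forces and the evidence flags are purely illustrative.

```python
# A minimal sketch of organizing an Outside-In Thinking session around
# the STEEP + 2 mnemonic. Category names come from the text; the example
# forces and the "evidence supports impact" flags are invented.

STEEP_PLUS_2 = ["Social", "Technical", "Economic", "Environmental",
                "Political", "Military", "Psychological"]

# force -> (category, does available evidence show an actual impact?)
forces = {
    "growing impact of social media":  ("Social", True),
    "emergence of new technologies":   ("Technical", True),
    "regional currency instability":   ("Economic", False),
}

# Group the brainstormed forces by category for review
for category in STEEP_PLUS_2:
    hits = [f for f, (c, _) in forces.items() if c == category]
    if hits:
        print(f"{category}: {', '.join(hits)}")

# Forces without evidentiary support become collection or research gaps
gaps = [f for f, (_, supported) in forces.items() if not supported]
print("Collection tasking needed for:", gaps)
```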
4. Ibid.
7. The description of how to conduct a Red Hat Analysis has been updated
since publication of the first edition to capture insights provided by Todd
Sears, a former DIA analyst, who noted that mirror imaging is unlikely to
be overcome simply by sensitizing analysts to the problem. The value of a
structured technique like Red Hat Analysis is that it requires analysts to
think first about what would motivate them to act before articulating why a
foreign adversary would act differently.
9 Challenge Analysis
As we noted in chapter 1, an analyst’s mental model can be regarded as a
distillation of everything the analyst knows about how things normally
work in a certain country or a specific scientific field. It tells the analyst,
sometimes subconsciously, what to look for, what is important, and how to
interpret what he or she sees. A mental model formed through education
and experience serves an essential function: it is what enables the analyst
to provide on a daily basis reasonably good intuitive assessments or
estimates about what is happening or likely to happen.
The problem is that a mental model that has previously provided accurate
assessments and estimates for many years can be slow to change. New
information received incrementally over time is easily assimilated into
one’s existing mental model, so the significance of gradual change over
time is easily missed. It is human nature to see the future as a continuation
of the past. As a general rule, major trends and events evolve slowly, and
the future is often foreseen by skilled intelligence analysts. However, life
does not always work this way. The most significant intelligence failures
have been failures to foresee historical discontinuities, when history pivots
and changes direction. Much the same can be said about economic experts
not anticipating the U.S. financial crisis of 2008.
Such surprising events are not foreseeable unless they are first imagined so
that one can start examining the world from a different perspective. That is
what this chapter is about—techniques that enable the analyst, and
eventually the intelligence consumer or business client, to evaluate events
from a different perspective—in other words, with a different mental
model.
likelihood of an assessment or estimate. Unfortunately, there is no
consensus within the Intelligence Community on what “probable” and
other verbal expressions of likelihood mean when they are converted to
numerical percentages. For discussion here, we accept Sherman Kent’s
definition of “probable” as meaning “75% plus or minus 12%.”3 This
means that analytic judgments described as “probable” are expected to be
correct roughly 75 percent of the time—and, therefore, incorrect or off
target about 25 percent of the time.
Logically, therefore, one might expect that one of every four judgments
that intelligence analysts describe as “probable” will turn out to be wrong.
This perspective broadens the scope of what challenge analysis might
accomplish. It should not be limited to questioning the dominant view to
be sure it’s right. Even if the challenge analysis confirms the initial
probability judgment, it should go further to seek a better understanding of
the other 25 percent. In what circumstances might there be a different
assessment or outcome, what would that be, what would constitute
evidence of events moving in that alternative direction, how likely is it,
and what would be the consequences? As we will discuss in the next
chapter, on conflict management, an understanding of these probabilities
should reduce the frequency of unproductive conflict between opposing
views. Analysts who recognize a one in four chance of being wrong should
at least be open to consideration of alternative assessments or estimates to
account for the other 25 percent.
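The arithmetic behind this "one in four" point is worth making explicit. The sketch below assumes a hypothetical count of "probable" judgments and applies Kent's definition; the 100-judgment figure is illustrative, not from the text.

```python
# Back-of-the-envelope arithmetic behind the "one in four" point: if
# "probable" means roughly a 75 percent chance of being right, then in
# any sizable set of "probable" judgments about a quarter should be
# expected to be wrong. The judgment count is illustrative.

p_correct = 0.75          # Kent's "probable": 75% plus or minus 12%
judgments = 100           # hypothetical count of "probable" judgments

expected_wrong = judgments * (1 - p_correct)
print(f"Expected wrong judgments: {expected_wrong:.0f} of {judgments}")

# Even at the optimistic end of Kent's band (87 percent), errors persist
optimistic_wrong = judgments * (1 - 0.87)
print(f"At 87% accuracy: {optimistic_wrong:.0f} wrong")
```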
that. The term “Red Team” is used to describe a group that is
assigned to take an adversarial perspective. The Delphi Method is a
structured process for eliciting opinions from a panel of outside
experts.
What gets us into trouble is not what we don’t know, it’s what we
know for sure that just ain’t so.
Reframing Techniques
Three of the techniques in this chapter work by a process called reframing.
A frame is any cognitive structure that guides the perception and
interpretation of what one sees. A mental model of how things normally
work can be thought of as a frame through which an analyst sees and
interprets evidence. An individual or a group of people can change their
frame of reference, and thus challenge their own thinking about a problem,
simply by changing the questions they ask or changing the perspective
from which they ask the questions. Analysts can use this reframing
technique when they need to generate new ideas, when they want to see
old ideas from a new perspective, or any other time when they sense a
need for fresh thinking.4
Once a person has started thinking about a problem one way, the same
mental circuits or pathways are activated and strengthened each time the
person thinks about it. The benefit of this is that it facilitates the retrieval
of information one wants to remember. The downside is that these
pathways become mental ruts that make it difficult to see the information
from a different perspective. When an analyst reaches a judgment or
decision, this thought process is embedded in the brain. Each time the
analyst thinks about it, the same synapses are triggered, and the analyst’s
thoughts tend to take the same well-worn pathway through the brain.
Getting the same answer each time one thinks about it builds confidence,
and often overconfidence, in that answer.
Fortunately, it is fairly easy to open the mind to think in different ways,
and the techniques described in this chapter are designed to serve that
function. The trick is to restate the question, task, or problem from a
different perspective that activates a different set of synapses in the brain.
Each of the three applications of reframing described in this chapter does
this in a different way. Premortem Analysis asks analysts to imagine
themselves at some future point in time, after having just learned that a
previous analysis turned out to be completely wrong. The task then is to
figure out how and why it might have gone wrong. Structured Self-
Critique asks a team of analysts to reverse its role from advocate to critic
in order to explore potential weaknesses in the previous analysis. This
change in role can empower analysts to express concerns about the
consensus view that might previously have been suppressed. What If?
Analysis asks the analyst to imagine that some unlikely event has
occurred, and then to explain how it could happen and the implications of
the event.
These techniques are generally more effective in a small group than with a
single analyst. Their effectiveness depends in large measure on how fully
and enthusiastically participants in the group embrace the imaginative or
alternative role they are playing. Just going through the motions is of
limited value.
Overview of Techniques
Premortem Analysis
reduces the risk of analytic failure by identifying and analyzing a potential
failure before it occurs. Imagine yourself several years in the future. You
suddenly learn from an unimpeachable source that your analysis or
estimate or strategic plan was wrong. Then imagine what could have
happened to cause it to be wrong. Looking back from the future to explain
something that has happened is much easier than looking into the future to
forecast what will happen, and this exercise helps identify problems one
has not foreseen.
Structured Self-Critique
is a procedure that a small team or group uses to identify weaknesses in its
own analysis. All team or group members don a hypothetical black hat and
become critics rather than supporters of their own analysis. From this
opposite perspective, they respond to a list of questions about sources of
uncertainty, the analytic processes that were used, critical assumptions,
diagnosticity of evidence, anomalous evidence, information gaps, changes
in the broad environment in which events are happening, alternative
decision models, availability of cultural expertise, and indicators of
possible deception. Looking at the responses to these questions, the team
reassesses its overall confidence in its own judgment.
What If? Analysis
is used to sensitize analysts and decision makers to the possibility that a
low-probability event might actually happen and stimulate them to think
about measures that could be taken to deal with the danger or to exploit the
opportunity if it does occur. The analyst assumes the event has occurred,
and then figures out how it could have happened and what the
consequences might be.
Devil’s Advocacy
is a technique in which a person who has been designated the Devil’s
Advocate, usually by a responsible authority, makes the best possible case
against a proposed analytic judgment, plan, or decision.
Delphi Method
is a procedure for obtaining ideas, judgments, or forecasts electronically
from a geographically dispersed panel of experts. It is a time-tested,
extremely flexible procedure that can be used on any topic for which
expert judgment can contribute. It is included in this chapter because it can
be used to identify divergent opinions that challenge conventional wisdom.
It can also be used as a double-check on any research finding. If two
analyses from different analysts who are using different techniques arrive
at the same conclusion, this is grounds for a significant increase in
confidence in that conclusion. If the two conclusions disagree, this is also
valuable information that may open a new avenue of research.
challenge effectively the accuracy of their own conclusions. It is a specific
application of the reframing method, in which restating the question, task,
or problem from a different perspective enables one to see the situation
differently and come up with different ideas.
When to Use It
Premortem Analysis should be used by analysts who can devote a few
hours to challenging their own analytic conclusions about the future to see
where they might be wrong. It may be used by a single analyst, but, like all
structured analytic techniques, it is most effective when used in a small
group.
Value Added
It is important to understand what it is about the Premortem Analysis
approach that helps analysts identify potential causes of error that
previously had been overlooked. Briefly, there are two creative processes
at work here. First, the questions are reframed—this exercise typically
elicits responses that are different from the original ones. Asking questions
about the same topic, but from a different perspective, opens new
pathways in the brain, as we noted in the introduction to this chapter.
Second, the Premortem approach legitimizes dissent. For various reasons,
many members of small groups suppress dissenting opinions, leading to
premature consensus. In a Premortem Analysis, all analysts are asked to
make a positive contribution to group goals by identifying weaknesses in
the previous analysis.
Group members tend to go along with the group leader, with the first
group member to stake out a position, or with an emerging majority
viewpoint for many reasons. Most benign is the common rule of thumb
that when we have no firm opinion, we take our cues from the opinions of
others. We follow others because we believe (often rightly) they know
what they are doing. Analysts may also be concerned that their views will
be critically evaluated by others, or that dissent will be perceived as
disloyalty or as an obstacle to progress that will just make the meeting last
longer.
because they lacked confidence are empowered by the technique to
express previously hidden divergent thoughts. If this change in perspective
is handled well, each team member will know that they add value to the
exercise for being critical of the previous judgment, not for supporting it.
The Method
The best time to conduct a Premortem Analysis is shortly after a group has
reached a conclusion on an action plan, but before any serious drafting of
the report has been done. If the group members are not already familiar
with the Premortem technique, the group leader, another group member, or
a facilitator steps up and makes a statement along the lines of the
following: “Okay, we now think we know the right answer, but we need to
double-check this. To free up our minds to consider other possibilities,
let’s imagine that we have made this judgment, our report has gone
forward and been accepted, and now, x months or years later, we gain
access to a crystal ball. Peering into this ball, we learn that our analysis
was wrong, and things turned out very differently from the way we had
expected. Now, working from that perspective in the future, let’s put our
imaginations to work and brainstorm what could have possibly happened
to cause our analysis to be spectacularly wrong.”
unsuccessful or have unintended consequences, or how difficult it is to see
things from the perspective of a foreign culture. This type of thinking may
bring a different part of analysts’ brains into play as they are mulling over
what could have gone wrong with their analysis. Outside-In Thinking
(chapter 8) can also be helpful for this purpose.
At the Premortem meeting, the group leader or a facilitator writes the ideas
on a whiteboard or flip chart. To ensure that no single person dominates
the presentation of ideas, the Nominal Group version of brainstorming
might be used. With that technique, the facilitator goes around the room in
round-robin fashion, taking one idea from each participant until all have
presented every idea on their lists. (See Nominal Group Technique in
chapter 5.) After all ideas are posted on the board and made visible to all,
the group discusses what it has learned from this exercise, and what action,
if any, the group should take. This generation and initial discussion of
ideas can often be accomplished in a single two-hour meeting, which is a
small investment of time to undertake a systematic challenge to the
group’s thinking.
If the Premortem Analysis leads the group to reconsider and revise its
analytic judgment, the questions shown in Figure 9.1 are a good starting
point. For a more thorough set of self-critique questions, see the discussion
of Structured Self-Critique, which involves changing one’s role from
advocate to critic of one’s previous analysis.
Relationship to Other Techniques
If the Premortem Analysis identifies a significant problem, the natural
follow-up technique for addressing this problem is Structured Self-
Critique, described in the next section.
Source: 2009 Pherson Associates, LLC.
When to Use It
You can use Structured Self-Critique productively to look for weaknesses
in any analytic explanation of events or estimate of the future. It is
specifically recommended for use in the following ways:
Value Added
When people are asked questions about the same topic but from a different
perspective, they often give different answers than the ones they gave
before. For example, if a team member is asked whether he or she supports
the team’s conclusions, the answer will usually be “yes.” However, if all
team members are asked to look for weaknesses in the team’s argument,
that same member may give quite a different response.
The Method
Start by reemphasizing that all analysts in the group are now wearing a
black hat. They are now critics, not advocates, and they will now be
judged by their ability to find weaknesses in the previous analysis, not on
the basis of their support for the previous analysis. Then work through the
following topics or questions:
– Are there a greater than usual number of assumptions
because of insufficient evidence or the complexity of the
situation?
– Is the team dealing with a relatively stable situation or with a
situation that is undergoing, or potentially about to undergo,
significant change?
✶ Analytic process: Review whether the team took the following
steps in its initial analysis. Did it identify alternative hypotheses and
seek out information on them? Did it identify key assumptions?
Did it seek a broad range of diverse opinions by including analysts
from other offices, agencies, academia, or the private sector in the
deliberations? If these steps were not taken, the odds of the team
having a faulty or incomplete analysis are increased. Either consider
doing some of these things now or lower the team’s level of
confidence in its judgment.
✶ Critical assumptions: Assuming that the team has already
identified key assumptions, the next step is to identify the one or two
assumptions that would have the greatest impact on the analytic
judgment if they turned out to be wrong. In other words, if the
assumption is wrong, the judgment will be wrong. How recent and
well documented is the evidence that supports each such assumption?
Brainstorm circumstances that could cause each of these assumptions
to be wrong, and assess the impact on the team’s analytic judgment if
the assumption is wrong. Would the reversal of any of these
assumptions support any alternative hypothesis? If the team has not
previously identified key assumptions, it should do a Key
Assumptions Check now.
✶ Diagnostic evidence: Identify alternative hypotheses and the
several most diagnostic items of evidence that enable the team to
reject alternative hypotheses. For each item, brainstorm for any
reasonable alternative interpretation of this evidence that could make
it consistent with an alternative hypothesis. See Diagnostic Reasoning
in chapter 7.
✶ Information gaps: Are there gaps in the available information, or
is some of the information so dated that it may no longer be valid? Is
the absence of information readily explainable? How should absence
of information affect the team’s confidence in its conclusions?
✶ Missing evidence: Is there any evidence that one would expect to
see in the regular flow of intelligence or open-source reporting if the
analytic judgment is correct, but that turns out not to be there?
✶ Anomalous evidence: Is there any anomalous item of evidence
that would have been important if it had been believed or if it could
have been related to the issue of concern, but that was rejected as
unimportant because it was not believed or its significance was not
known? If so, try to imagine how this item might be a key clue to an
emerging alternative hypothesis.
✶ Changes in the broad environment: Driven by technology and
globalization, the world as a whole seems to be experiencing social,
technical, economic, environmental, and political changes at a faster
rate than ever before in history. Might any of these changes play a
role in what is happening or will happen? More broadly, what key
forces, factors, or events could occur independently of the issue that
is the subject of analysis that could have a significant impact on
whether the analysis proves to be right or wrong?
✶ Alternative decision models: If the analysis deals with decision
making by a foreign government or nongovernmental organization
(NGO), was the group’s judgment about foreign behavior based on a
rational actor assumption? If so, consider the potential applicability of
other decision models, specifically that the action was or will be the
result of bargaining between political or bureaucratic forces, the result
of standard organizational processes, or the whim of an authoritarian
leader.10 If information for a more thorough analysis is lacking,
consider the implications of that for confidence in the team’s
judgment.
✶ Cultural expertise: If the topic being analyzed involves a foreign
or otherwise unfamiliar culture or subculture, does the team have, or
has it obtained, cultural expertise on thought processes in that
culture?11
✶ Deception: Does another country, NGO, or commercial
competitor about which the team is making judgments have a motive,
opportunity, or means for engaging in deception to influence U.S.
policy or to change your behavior? Does this country, NGO, or
competitor have a past history of engaging in denial, deception, or
influence operations?
After responding to these questions, the analysts take off the black hats
and reconsider the appropriate level of confidence in the team’s previous
judgment. Should the initial judgment be reaffirmed or modified?
Potential Pitfalls
The success of this technique depends in large measure on the team
members’ willingness and ability to make the transition from supporters to
critics of their own ideas. Some individuals lack the intellectual flexibility
to do this well. It must be very clear to all members that they are no longer
performing the same function as before. Their new task is to critique an
analytic position taken by some other group (actually themselves, but with
a different hat on).
Begin challenging your own assumptions. Your assumptions are
your windows on the world. Scrub them off every once in a while,
or the light won’t come in.
When to Use It
This technique should be in every analyst’s toolkit. It is an important
technique for alerting decision makers to an event that could happen, even
if it may seem unlikely at the present time. What If? Analysis serves a
function similar to Scenarios Analysis—it creates an awareness that
prepares the mind to recognize early signs of a significant change, and it
may enable the decision maker to plan ahead for that contingency. It is
most appropriate when any of the following conditions are present:
What If? Analysis is a logical follow-up after any Key Assumptions Check
that identifies an assumption that is critical to an important estimate but
about which there is some doubt. In that case, the What If? Analysis would
imagine that the opposite of this assumption is true. Analysis would then
focus on how this outcome could possibly occur and what the
consequences would be.
When analysts are too cautious in estimative judgments on threats,
they brook blame for failure to warn. When too aggressive in
issuing warnings, they brook criticism for ‘crying wolf.’
Value Added
Shifting the focus from asking whether an event will occur to imagining
that it has occurred and then explaining how it might have happened opens
the mind to think in different ways. What If? Analysis shifts the discussion
from “How likely is it?” to these questions:
The Method
✶ A What If? Analysis can be done by an individual or as a team
project. The time required is about the same as that for drafting a
short paper. It usually helps to initiate the process with a
brainstorming session and/or to interpose brainstorming sessions at
various stages of the process.
✶ Begin by assuming that what could happen has actually occurred.
Often it is best to pose the issue in the following way: “The New York
Times reported yesterday that. . . .” Be precise in defining both the
event and its impact. Sometimes it is useful to posit the new
contingency as the outcome of a specific triggering event, such as a
natural disaster, an economic crisis, a major political miscalculation,
or an unexpected new opportunity that vividly reveals that a key
analytic assumption is no longer valid.
✶ Develop at least one chain of argumentation—based on both
evidence and logic—to explain how this outcome could have come
about. In developing the scenario or scenarios, focus on what must
actually occur at each stage of the process. Work backwards from the
event to the present day. This is called “backwards thinking.” Try to
envision more than one scenario or chain of argument.
✶ Generate a list of indicators or “observables” for each scenario that
would help to detect whether events are starting to play out in a way
envisioned by that scenario.
✶ Assess the level of damage or disruption that would result from a
negative scenario and estimate how difficult it would be to overcome
or mitigate the damage incurred. For new opportunities, assess how
well developments could turn out and what can be done to ensure that
such a positive scenario might actually come about.
✶ Rank order the scenarios according to how much attention they
merit, taking into consideration both difficulty of implementation and
the potential significance of the impact.
✶ Develop and validate indicators or observables for the scenarios
and monitor them on a regular or periodic basis.
✶ Report periodically on whether any of the proposed scenarios may
be emerging and why.
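The rank-ordering step above lends itself to a simple numeric sketch. The Python below is illustrative only: the scenario names and the 1-to-5 scores are hypothetical, and the weights are a judgment call by the analyst, not part of the technique itself.

```python
# Rank What If? scenarios by the attention they merit, weighing both
# how easily the scenario could come about and how significant its
# impact would be. All names and scores here are hypothetical.

scenarios = {
    # name: (ease_of_occurrence 1-5, significance_of_impact 1-5)
    "Key assumption A fails": (3, 5),
    "Triggering economic crisis": (2, 4),
    "Political miscalculation": (4, 2),
}

def attention_score(ease, impact, w_ease=0.4, w_impact=0.6):
    """Weighted score; the weights are an analyst's judgment call."""
    return w_ease * ease + w_impact * impact

# Sort scenarios from most to least deserving of attention.
ranked = sorted(
    scenarios.items(),
    key=lambda item: attention_score(*item[1]),
    reverse=True,
)
for name, (ease, impact) in ranked:
    print(f"{attention_score(ease, impact):.1f}  {name}")
```

A fuller treatment of this kind of scoring appears under Weighted Ranking in chapter 4; the point here is only that making the two criteria explicit forces the team to defend why one scenario outranks another.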
Figure 9.3 What If? Scenario: India Makes Surprising Gains from the
Global Financial Crisis
Source: This example was developed by Ray Converse and Elizabeth
Manak, Pherson Associates, LLC.
Relationship to Other Techniques
What If? Analysis is sometimes confused with the High Impact/Low
Probability technique, as each deals with low-probability events. However,
only What If? Analysis uses the reframing technique of assuming that a
future event has happened and then thinking backwards in time to imagine
how it could have happened. High Impact/Low Probability requires new or
anomalous information as a trigger and then projects forward to what
might occur and the consequences if it does occur.
A Cautionary Note
Scenarios developed using both the What If? Analysis and the High
Impact/Low Probability techniques can often contain highly sensitive
data requiring a very limited distribution of the final product. Examples
are the following: How might a terrorist group launch a debilitating
attack on a vital segment of the U.S. infrastructure? How could a coup
be launched successfully against a friendly government? What could be
done to undermine or disrupt global financial networks? Obviously, if
an analyst identifies a major vulnerability that could be exploited by an
adversary, extreme care must be taken to prevent that detailed
description from falling into the hands of the adversary. An additional
concern is that the more “brilliant” and provocative the scenario, the
more likely it will attract attention, be shared with others, and possibly
leak.
early warning that a seemingly unlikely event with major policy and
resource repercussions might actually occur.
When to Use It
High Impact/Low Probability Analysis should be used when one wants to
alert decision makers to the possibility that a seemingly long-shot
development that would have a major policy or resource impact may be
more likely than previously anticipated. Events that would have merited
such treatment before they occurred include the reunification of Germany
in 1989, the collapse of the Soviet Union in 1991, and the devastation
caused by Hurricane Katrina to New Orleans in August 2005—the
costliest natural disaster in the history of the United States. A variation of
this technique, High Impact/Uncertain Probability Analysis, might also be
used to address the potential impact of an outbreak of H5N1 (avian
influenza) or applied to a terrorist attack when intent is well established
but there are multiple variations on how it might be carried out.
Value Added
The High Impact/Low Probability Analysis format allows analysts to
explore the consequences of an event—particularly one not deemed likely
by conventional wisdom—without having to challenge the mainline
judgment or to argue with others about how likely an event is to happen. In
other words, this technique provides a tactful way of communicating a
viewpoint that some recipients might prefer not to hear.
The analytic focus is not on whether something will happen; instead, the
technique takes it as a given that an event could happen that would have a
major and unanticipated impact. The objective is to explore whether an increasingly
credible case can be made for an unlikely event occurring that could pose a
major danger—or offer great opportunities. The more nuanced and
concrete the analyst’s depiction of the plausible paths to danger, the easier
it is for a decision maker to develop a package of policies to protect or
advance vital U.S. interests.
The Method
An effective High Impact/Low Probability Analysis involves these steps:
✶ Generate and validate a list of indicators that would help analysts
and decision makers recognize that events were beginning to unfold
in this way.
✶ Identify factors that would deflect a bad outcome or encourage a
positive outcome.
✶ Report periodically on whether any of the proposed scenarios may
be emerging and why.
Potential Pitfalls
Analysts need to be careful when communicating the likelihood of
unlikely events. The word “unlikely” can be interpreted to mean anywhere
from a 1 percent to a 25 percent probability, while “highly unlikely” may
mean from 1 percent to 10 percent.12 Customers receiving
an intelligence report that uses words of estimative probability such as
“very unlikely” will typically interpret the report as consistent with their
own prior thinking, if at all possible. If a report says a terrorist attack
against a specific U.S. embassy abroad within the next year is very
unlikely, the analyst may have in mind a probability of about 10 percent,
while a decision maker may read the same words as consistent with his or
her own view that the likelihood is less than 1 percent. Such a difference
can determine whether or not a decision maker pays for expensive
contingency planning or a proactive preventive countermeasure. When an
analyst is describing the
likelihood of an unlikely event, it is desirable to express the likelihood in
numeric terms, either as a range (such as less than 5 percent or 10 to 20
percent) or as bettor’s odds (such as one chance in ten).
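The translation from estimative words to numbers can be sketched in a few lines. In this illustrative Python, the ranges follow the examples in the text (the entry for “very unlikely” is an assumption, treated like “highly unlikely”), and the odds conversion simply approximates a probability as a simple fraction:

```python
# Express estimative-probability words as numeric ranges and bettor's
# odds, so analyst and decision maker share the same number. The ranges
# follow the chapter's examples; they are illustrative, not an agreed
# community standard.
from fractions import Fraction

WORD_RANGES = {
    "highly unlikely": (0.01, 0.10),
    "unlikely": (0.01, 0.25),
    "very unlikely": (0.01, 0.10),  # assumption: treated like "highly unlikely"
}

def as_range_text(word):
    """Render a word of estimative probability with its numeric range."""
    lo, hi = WORD_RANGES[word]
    return f"{word}: {lo:.0%} to {hi:.0%}"

def as_bettors_odds(probability):
    """Approximate a probability as bettor's odds, e.g., 0.10 -> 1 in 10."""
    chances = Fraction(probability).limit_denominator(100)
    return f"{chances.numerator} chance(s) in {chances.denominator}"

print(as_range_text("unlikely"))    # unlikely: 1% to 25%
print(as_bettors_odds(0.10))        # 1 chance(s) in 10
```

Attaching a range or odds statement to the word in the published report is what closes the 10-percent-versus-1-percent gap described above.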
Relationship to Other Techniques
High Impact/Low Probability Analysis is sometimes confused with What
If? Analysis. Both deal with low-probability or unlikely events. High
Impact/Low Probability Analysis is primarily a vehicle for warning
decision makers that recent, unanticipated developments suggest that an
event previously deemed highly unlikely might actually occur. Based on
recent evidence or information, it projects forward to discuss what could
occur and the consequences if the event does occur. It challenges the
conventional wisdom. What If? Analysis does not require new or
anomalous information to serve as a trigger. It reframes the question by
assuming that a surprise event has happened. It then looks backwards from
that surprise event to map several ways it could have come about. It also
tries to identify actions that, if taken in a timely manner, might have
prevented it.
(Reston, VA: Pherson Associates, LLC, 2008); and Department of
Homeland Security, Office of Intelligence and Analysis training materials.
When to Use It
Devil’s Advocacy is most effective when initiated by a manager as part of
a strategy to ensure that alternative solutions are thoroughly considered.
The following are examples of well-established uses of Devil’s Advocacy:
Value Added
The unique attribute and value of Devil’s Advocacy as described here is
that the critique is initiated at the discretion of management to test the
strength of the argument for a proposed analytic judgment, plan, or
decision. It looks for what could go wrong and often focuses attention on
questionable assumptions, evidence, or lines of argument that undercut the
generally accepted conclusion, or on the insufficiency of current evidence
on which to base such a conclusion.
The Method
The Devil’s Advocate is charged with challenging the proposed judgment
by building the strongest possible case against it. There is no prescribed
procedure. Devil’s Advocates may be selected because of their expertise
with a specific technique. Or they may decide to use a different technique
from that used in the original analysis or to use a different set of
assumptions. They may also simply review the evidence and the analytic
procedures, looking for weaknesses in the argument. In the latter case, an
ideal approach would be to ask the questions about how the analysis was
conducted that are listed in this chapter under Structured Self-Critique.
These questions cover sources of uncertainty, the analytic processes that
were used, critical assumptions, diagnosticity of evidence, anomalous
evidence, information gaps, changes in the broad environment in which
events are happening, alternative decision models, availability of cultural
expertise, and indicators of possible deception.
Potential Pitfalls
Devil’s Advocacy is sometimes used as a form of self-critique. Rather than
being done by an outsider, a member of the analytic team volunteers or is
asked to play the role of Devil’s Advocate. We do not believe this
application of the Devil’s Advocacy technique is effective because:
the true belief of one of their members who is courageous enough to
speak out, this exercise may actually enhance the majority’s original
belief—“a smugness that may occur because one assumes one has
considered alternatives though, in fact, there has been little serious
reflection on other possibilities.”14 What the team learns from the
Devil’s Advocate presentation may be only how to better defend the
team’s own entrenched position.
devil’s advocates, offering alternative interpretations (team B) and
otherwise challenging established thinking within an enterprise.”16 This is
red teaming as a management strategy rather than as a specific analytic
technique. It is in this context that red teaming is sometimes used as a
synonym for any form of challenge analysis or alternative analysis.
To accommodate these two different ways that red teaming is used, in this
book we identify two separate techniques called Red Team Analysis and
Red Hat Analysis.
When to Use It
Management should initiate a Red Team Analysis whenever there is a
perceived need to challenge the conventional wisdom on an important
issue or whenever the responsible line office is perceived as lacking the
level of cultural expertise required to fully understand an adversary’s or
competitor’s point of view.
Value Added
Red Team Analysis can help free analysts from their own well-developed
mental model—their own sense of rationality, cultural norms, and personal
values. When analyzing an adversary, the Red Team approach requires
that an analyst change his or her frame of reference from that of an
“observer” of the adversary or competitor, to that of an “actor” operating
within the adversary’s cultural and political milieu. This reframing or role
playing is particularly helpful when an analyst is trying to replicate the
mental model of authoritarian leaders, terrorist cells, or non-Western
groups that operate under very different codes of behavior or motivations
than those to which most Americans are accustomed.
The Method
The function of Red Team Analysis is to challenge a proposed or existing
judgment by building the strongest possible case against it. If the goal is to
understand the thinking of an adversary or a competitor, the method is
similar to that described in chapter 8 under Red Hat Analysis. The
difference is that additional resources, such as cultural and substantive
expertise and specialized analytic skills, may be available to implement
the Red Team Analysis.
The use of Role Playing or the Delphi Method to cross-check or
supplement the Red Team approach is encouraged.
survey in that there are two or more rounds of questioning. After the first
round of questions, a moderator distributes all the answers and
explanations of the answers to all participants, often anonymously. The
expert participants are then given an opportunity to modify or clarify their
previous responses, if so desired, on the basis of what they have seen in the
responses of the other participants. A second round of questions builds on
the results of the first round, drills down into greater detail, or moves to a
related topic. There is great flexibility in the nature and number of rounds
of questions that might be asked.
When to Use It
The Delphi Method was developed by the RAND Corporation at the
beginning of the Cold War in the 1950s to forecast the impact of new
technology on warfare. It was also used to assess the probability, intensity,
or frequency of future enemy attacks. In the 1960s and 1970s, Delphi
became widely known and used as a method for futures research,
especially forecasting long-range trends in science and technology. Futures
research is similar to intelligence analysis in that the uncertainties and
complexities one must deal with often preclude the use of traditional
statistical methods, so explanations and forecasts must be based on the
experience and informed judgments of experts.
Over the years, Delphi has been used in a wide variety of ways, and for an
equally wide variety of purposes. Although many Delphi projects have
focused on developing a consensus of expert judgment, a variant called
Policy Delphi is based on the premise that the decision maker is not
interested in having a group make a consensus decision, but rather in
having the experts identify alternative policy options and present all the
supporting evidence for and against each option. That is the rationale for
including Delphi in this chapter on challenge analysis. It can be used to
identify divergent opinions that may be worth exploring.
One group of Delphi scholars advises that the Delphi technique “can be
used for nearly any problem involving forecasting, estimation, or decision
making”—as long as the problem is not so complex or so new as to
preclude the use of expert judgment. These Delphi advocates report using
it for diverse purposes that range from “choosing between options for
regional development, to predicting election outcomes, to deciding which
applicants should be hired for academic positions, to predicting how many
meals to order for a conference luncheon.”17
Several software programs have been developed to handle these tasks, and
one of these is hosted for public use
(http://armstrong.wharton.upenn.edu/delphi2). The distributed decision
support systems now publicly available to support virtual teams include
some or all of the functions necessary for Delphi as part of a larger
package of analytic tools.
Value Added
Consultation with relevant experts in academia, business, and
nongovernmental organizations is encouraged by Intelligence Community
Directive No. 205, on Analytic Outreach, dated July 2008. We believe the
development of Delphi panels of experts on areas of critical concern
should be standard procedure for outreach to experts outside the analytic
unit of any organization, and particularly the Intelligence Community
because of its more insular work environment.
their responses makes it easy for experts to adjust their previous
judgments in response to new evidence.
✶ In many Delphi projects, the experts remain anonymous to other
panel members so that no one can use his or her position of authority,
reputation, or personality to influence others. Anonymity also
facilitates the expression of opinions that go against the conventional
wisdom and may not otherwise be expressed.
The Method
In a Delphi project, a moderator (analyst) sends a questionnaire to a panel
of experts who may be in different locations. The experts respond to these
questions and usually are asked to provide short explanations for their
responses. The moderator collates the results from this first questionnaire
and sends the collated responses back to all panel members, requesting
them to reconsider their responses based on what they see and learn from
the other experts’ responses and explanations. Panel members may also be
asked to answer another set of questions. This cycle of question, response,
and feedback continues through several rounds using the same or a related
set of questions. It is often desirable for panel members to remain
anonymous so that they are not unduly influenced by the responses of
senior members. This method is illustrated in Figure 9.7.
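The round-and-feedback cycle can also be sketched in code. The Python below is a toy model, not the method itself: the "pull toward the median" revision rule is a hypothetical stand-in for how real experts reconsider their answers after seeing the collated responses.

```python
# A toy Delphi round: anonymous expert probability estimates are
# collected, the collated feedback (the panel median) is returned to
# the panel, and each expert revises partway toward it. The revision
# rule is a hypothetical stand-in for real expert judgment.
from statistics import median, pstdev

def delphi_round(estimates, pull=0.5):
    """Return revised estimates after feedback of the panel median."""
    mid = median(estimates)
    return [e + pull * (mid - e) for e in estimates]

def spread(estimates):
    """Dispersion; a large value flags disagreement worth exploring."""
    return pstdev(estimates)

panel = [0.10, 0.30, 0.70]            # round-1 probability estimates
round2 = delphi_round(panel)
print(spread(panel), spread(round2))  # dispersion shrinks after feedback
```

Note that convergence is not always the goal: as with Policy Delphi, a spread that stays large across rounds is itself a finding, pointing to divergent opinions that merit further examination.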
Examples
To show how Delphi can be used for intelligence analysis, we have
developed three illustrative applications:
responses at any time as new events occur or as new information is
submitted by one of the participants.18 The probability estimates
provided by the Delphi panel can be aggregated to provide a measure
of the significance of change over time. They can also be used to
identify differences of opinion among the experts that warrant further
examination.
Potential Pitfalls
A Delphi project involves administrative work to identify the experts,
communicate with panel members, and collate and tabulate their responses
through several rounds of questioning. The Intelligence Community can
pose additional obstacles, such as ensuring that the experts have
appropriate security clearances or requiring them to meet with the analysts
in cleared office spaces. Another potential pitfall is that overenthusiastic
use of the technique can force consensus when it might be better to present
two competing hypotheses and the evidence supporting each position.
“When to Use It.” The following references were useful in researching this
topic: Murray Turoff and Starr Roxanne Hiltz, “Computer-Based Delphi
Processes,” 1996, http://web.njit.edu/~turoff/Papers/delphi3.html; and
Harold A. Linstone and Murray Turoff, The Delphi Method: Techniques
and Applications (Reading, MA: Addison-Wesley, 1975). A 2002 digital
version of Linstone and Turoff’s book is available online at
http://is.njit.edu/pubs/delphibook; see in particular the chapter by Turoff
on “The Policy Delphi” (http://is.njit.edu/pubs/delphibook/ch3b1.pdf). For
more current information on validity and optimal techniques for
implementing a Delphi project, see Gene Rowe and George Wright,
“Expert Opinions in Forecasting: The Role of the Delphi Technique,” in
Principles of Forecasting, ed. J. Scott Armstrong (New York: Springer
Science+Business Media, 2001).
6. Gary Klein, Intuition at Work: Why Developing Your Gut Instinct Will
Make You Better at What You Do (New York: Doubleday, 2002), 91.
Paul B. Paulus and Bernard A. Nijstad (New York: Oxford University
Press, 2003), 63.
16. Defense Science Board Task Force, The Role and Status of DoD Red
Teaming Activities (Washington, DC: Office of the Under Secretary of
Defense for Acquisition, Technology, and Logistics, September 2003).
10 Conflict Management
The most common procedures for dealing with differences of opinion have
been to force a consensus, water down the differences, or—in the U.S.
Intelligence Community—add a dissenting footnote to an estimate. We
believe these practices are suboptimal, at best, and hope they will become
increasingly rare as our various analytic communities embrace greater
collaboration early in the analytic process, rather than endure mandated
coordination at the end of the process after all parties are locked into their
positions. One of the principal benefits of using structured analytic
techniques for intraoffice and interagency collaboration is that these
techniques identify differences of opinion at the start of the analytic
process. This gives time for the differences to be at least understood, if not
resolved, at the working level before management becomes involved.
uncertain than puzzles, for which an answer does exist if one could
only find it.3
✶ The more assumptions that are made, the greater the uncertainty.
Assumptions about intent or capability, and whether or not they have
changed, are especially critical.
✶ Analysis of human behavior or decision making is far more
uncertain than analysis of technical data.
✶ The behavior of a complex dynamic system is more uncertain than
that of a simple system. The more variables and stakeholders
involved in a system, the more difficult it is to foresee what might
happen.
Overview of Techniques
Adversarial Collaboration
in essence is an agreement between opposing parties on how they will
work together to resolve their differences, gain a better understanding of
why they differ, or collaborate on a joint paper to explain the differences.
Six approaches to implementing Adversarial Collaboration are described.
Structured Debate
is a planned debate of opposing points of view on a specific issue in front
of a “jury of peers,” senior analysts, or managers. As a first step, each side
writes up the best possible argument for its position and passes this
summation to the opposing side. The next step is an oral debate that
focuses on refuting the other side’s arguments rather than further
supporting one’s own arguments. The goal is to elucidate and compare the
arguments against each side’s position. If neither position can be
refuted, perhaps both merit some consideration in the analytic report.
help of an impartial arbiter. A joint report describes the tests, states what
has been learned on which both sides agree, and provides interpretations of
the test results on which they disagree.5
When to Use It
Adversarial Collaboration should be used only if both sides are open to
discussion of an issue. If one side is fully locked into its position and has
repeatedly rejected the other side’s arguments, this technique is unlikely to
be successful. Structured Debate is more appropriate to use in these
situations because it includes an independent arbiter who listens to both
sides and then makes a decision.
Value Added
Adversarial Collaboration can help opposing analysts see the merit of
another group’s perspective. If successful, it will help both parties gain a
better understanding of what assumptions or evidence are behind their
opposing opinions on an issue and to explore the best way of dealing with
these differences. Can one side be shown to be wrong, or should both
positions be reflected in any report on the subject? Can there be agreement
on indicators to show the direction in which events seem to be moving?
The Method
Six approaches to Adversarial Collaboration are described here. What they
all have in common is the requirement to understand and address the other
side's position rather than simply dismiss it. Mutual understanding of the
other side's position is the bridge to productive collaboration. These six
techniques are not mutually exclusive; one might use several of them on a
single project.
Discussion should then focus on the rationale for each assumption and
suggestions for how the assumption might be either confirmed or refuted.
If the discussion focuses on the probability of Assumption A versus
Assumption B, it is often helpful to express probability as a numerical
range—for example, 65 percent to 85 percent for probable. When analysts
go through these steps, they sometimes discover they are not as far apart as
they thought. The discussion should focus on refuting the other side’s
assumptions rather than supporting one’s own.
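The numerical-range comparison suggested above can be sketched in a few lines. The helper function and the specific ranges below are illustrative assumptions, not part of the technique's formal definition; the point is simply that two ranges expressed numerically can be checked for common ground.

```python
# Illustrative sketch: each side states its probability for an assumption
# as a (low, high) percentage range; overlapping ranges suggest the sides
# are closer than their verbal labels made them appear.

def ranges_overlap(a, b):
    """True if two (low, high) percentage ranges share any common ground."""
    return max(a[0], b[0]) <= min(a[1], b[1])

side_1 = (65, 85)   # one side's range for "probable"
side_2 = (55, 70)   # the other side's range for the same assumption

print(ranges_overlap(side_1, side_2))   # the shared ground here is 65-70 percent
```

If the ranges do not overlap at all, the discussion can then focus on which evidence or assumptions account for the genuine gap.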
Analysis of Competing Hypotheses:
The use of ACH may not result in the elimination of all the differences of
opinion, but it can be a big step toward understanding these differences
and determining which might be reconcilable through further intelligence
collection or research. The analysts can then make a judgment about the
potential productivity of further efforts to resolve the differences. ACH
may not be helpful, however, if two sides are already locked into their
positions. It is all too easy in ACH for one side to interpret the evidence
and enter assumptions in a way that deliberately supports its preconceived
position. To challenge a well-established mental model, other challenge or
conflict management techniques may be more appropriate.
Argument Mapping:
Argument Mapping, which was described in chapter 7, maps the logical
relationship between each element of an argument. Two sides might agree
to work together to create a single Argument Map with the rationale both
for and against a given conclusion. Such an Argument Map will show
where the two sides agree, where they diverge, and why. The visual
representation of the argument makes it easier to recognize weaknesses in
opposing arguments. This technique pinpoints the location of any
disagreement and could serve as an objective basis for mediating a
disagreement.
Mutual Understanding:
When analysts in different offices or agencies disagree, the disagreement
is often exacerbated by the fact that they have a limited understanding of
the other side’s position and logical reasoning. The Mutual Understanding
approach addresses this problem directly.
and what it is based upon. Once all the analysts accurately understand each
side’s position, they can discuss their differences more rationally and with
less emotion. Experience shows that this technique normally prompts some
movement of the opposing parties toward a common ground.6
There are two ways to measure the health of a debate: the kinds of
questions being asked and the level of listening.
Joint Escalation:
When disagreement occurs within an analytic team, the disagreement is
often referred to a higher authority. This escalation often makes matters
worse. What typically happens is that a frustrated analyst takes the
problem up to his or her boss, briefly explaining the conflict in a manner
that is clearly supportive of the analyst’s own position. The analyst then
returns to the group armed with the boss’s support. However, the opposing
analyst(s) have also gone to their bosses and come back with support for
their solution. Each analyst is then locked into what has become “my
manager’s view” of the issue. An already thorny problem has become even
more intractable. If the managers engage each other directly, both will
quickly realize they lack a full understanding of the problem and must also
factor in what their counterparts know before trying to resolve the issue.
Just the need to prepare such a joint statement discourages escalation and
often leads to an agreement. The proponents of this approach report their
experience that “companies that require people to share responsibility for
the escalation of a conflict often see a decrease in the number of problems
that are pushed up the management chain. Joint escalation helps create the
kind of accountability that is lacking when people know they can provide
their side of an issue to their own manager and blame others when things
don’t work out.”8
The Nosenko Approach:
The interesting point here is the ground rule that the team was instructed to
follow. After reviewing the evidence, each officer identified those items of
evidence thought to be of critical importance in making a judgment on
Nosenko’s bona fides. Any item that one officer stipulated as critically
important then had to be addressed by the other two members.
It turned out that fourteen items were stipulated by at least one of the team
members and had to be addressed by both of the others. Each officer
prepared his own analysis, but they all had to address the same fourteen
issues. Their report became known as the “Wise Men” report.
the truth, rather than proving Nosenko’s guilt or innocence, the case that
Nosenko was a plant began to unravel. The officer who had always
believed that Nosenko was bona fide felt he could now prove the case. The
officer who was relatively new to the case changed his mind in favor of
Nosenko’s bona fides. The officer who had been one of the principal
analysts and advocates for the position that Nosenko was a plant became
substantially less confident in that conclusion. There were now sufficient
grounds for management to make the decision.
The ground rules used in the Nosenko case can be applied in any effort to
abate a long-standing analytic controversy. The key point that makes these
rules work is the requirement that each side must directly address the
issues that are important to the other side and thereby come to understand
the other’s perspective. This process guards against the common
propensity of analysts to make their own arguments and then simply
dismiss those of the other side as unworthy of consideration.9
When to Use It
Structured Debate is called for when a significant difference of opinion
exists within or between analytic units or within the decision-making
community. It can also be used effectively when Adversarial Collaboration
has been unsuccessful or is impractical, and a choice must be made
between two opposing opinions or a decision to go forward with a
comparative analysis of both. Structured Debate requires a significant
commitment of analytic time and resources. A long-standing policy issue,
a critical decision that has far-reaching implications, or a dispute within
the analytic community that is obstructing effective interagency
collaboration would be grounds for making this type of investment in time
and resources.
Value Added
In the method proposed here, each side presents its case in writing to the
opposing side; then, both cases are combined in a single paper presented to
the audience prior to the debate. The oral debate then focuses on refuting
the other side’s position. Glib and personable speakers can always make
arguments for their own position sound persuasive. Effectively refuting the
other side’s position is a different ball game, however. The requirement to
refute the other side’s position brings to the debate an important feature of
the scientific method: that the most likely hypothesis is actually the one
with the least evidence against it as well as good evidence for it. (The
concept of refuting hypotheses is discussed in chapter 7.)
The goal of the debate is to decide what to tell the customer. If neither side
can effectively refute the other, then arguments for and against both sides
should be included in the report. Customers of intelligence analysis gain
more benefit by weighing well-argued conflicting views than from reading
an assessment that masks substantive differences among analysts or drives
the analysis toward the lowest common denominator. If participants
routinely interrupt one another or pile on rebuttals before digesting the
preceding comment, the teams are engaged in emotional conflict rather
than constructive debate.
He who knows only his own side of the case, knows little of that.
His reasons may be good, and no one may have been able to refute
them. But if he is equally unable to refute the reasons on the
opposite side, if he does not so much as know what they are, he has
no ground for preferring either opinion.
(John Stuart Mill, On Liberty)
The Method
Start by defining the conflict to be debated. If possible, frame the conflict
in terms of competing and mutually exclusive hypotheses. Ensure that all
sides agree with the definition. Then follow these steps:
Next, conduct the debate phase in the presence of a jury of peers, senior
analysts, or managers who will provide guidance after listening to the
debate. If desired, an audience of interested observers might also watch the
debate.
✶ The debate starts with each side presenting a brief (maximum five
minutes) summary of the argument for its position. The jury and the
audience are expected to have read each side’s full argument.
✶ Each side then presents to the audience its rebuttal of the other
side's written position. The oral arguments should proceed by
systematically refuting the other side's hypotheses rather than by
presenting more evidence to support one's own. This is the best way to
evaluate the strengths of the opposing arguments.
✶ After each side has presented its rebuttal argument, the other side
is given an opportunity to refute the rebuttal.
✶ The jury asks questions to clarify the debaters’ positions or gain
additional insight needed to pass judgment on the debaters’ positions.
✶ The jury discusses the issue and passes judgment. The winner is
the side that makes the best argument refuting the other side’s
position, not the side that makes the best argument supporting its own
position. The jury may also recommend possible next steps for further
research or intelligence collection efforts. If neither side can refute
the other’s arguments, it may be because both sides have a valid
argument; if that is the case, both positions should be represented in
any subsequent analytic report.
referring him to Kahneman’s work on Adversarial Collaboration.
8. Ibid.
10. The term Team A/Team B is taken from a historic analytic experiment
conducted in 1976. A team of CIA Soviet analysts (Team A) and a team of
outside critics (Team B) prepared competing assessments of the Soviet
Union’s strategic military objectives. This exercise was characterized by
entrenched and public warfare between long-term adversaries. In other
words, the historic legacy of Team A/Team B is exactly the type of trench
warfare between opposing sides that we need to avoid. The 1976
experiment did not achieve its goals, and it is not a model that most
analysts who are familiar with it would want to follow. We recognize that
some recent Team A/Team B exercises have been quite fruitful, but we
believe other conflict management techniques described in this chapter are
a better way to proceed.
11 Decision Support
It is usually not the analyst’s job to make the choices or decide on the
trade-offs, but analysts can and should use decision-support techniques to
provide timely support to managers, commanders, planners, and decision
makers who do make these choices. To engage in this type of customer-
support analysis, analysts must understand the operating environment of
the decision maker and anticipate how the decision maker is likely to
approach an issue. They must understand the dynamics of the decision-
making process in order to recognize when and how they can be most
useful. Most of the decision-support techniques described here are used in
both government and industry. By using such techniques, analysts can see
a problem from the decision maker’s perspective. They can use these
techniques without overstepping the limits of their role as analysts because
the technique doesn’t make the decision; it just structures all the relevant
information in a format that makes it easier for the manager, commander,
planner, or other decision maker to make a choice.
Decision making and decision analysis are large and diverse fields of study
and research. The decision-support techniques described in this chapter are
only a small sample of what is available, but they do meet many of the
basic requirements for intelligence analysis.
Overview of Techniques
Decision Trees
are a simple way to chart the range of options available to a decision
maker, estimate the probability of each option, and show possible
outcomes. They provide a useful landscape to organize a discussion and
weigh alternatives but can also oversimplify a problem.
Decision Matrix
is a simple but powerful device for making trade-offs between conflicting
goals or preferences. An analyst lists the decision options or possible
choices, the criteria for judging the options, the weights assigned to each
of these criteria, and an evaluation of the extent to which each option
satisfies each of the criteria. This process will show the best choice—based
on the values the analyst or a decision maker puts into the matrix. By
studying the matrix, one can also analyze how the best choice would
change if the values assigned to the selection criteria were changed or if
the ability of an option to satisfy a specific criterion were changed. It is
almost impossible for an analyst to keep track of these factors effectively
without such a matrix, as one cannot keep all the pros and cons in working
memory at the same time. A Decision Matrix helps the analyst see the
whole picture.
Pros-Cons-Faults-and-Fixes
is a strategy for critiquing new policy ideas. It is intended to offset the
human tendency of analysts and decision makers to jump to conclusions
before conducting a full analysis of a problem, as often happens in group
meetings. The first step is for the analyst or the project team to make lists
of Pros and Cons. If the analyst or team is concerned that people are being
unduly negative about an idea, he or she looks for ways to “Fix” the Cons
—that is, to explain why the Cons are unimportant or even to transform
them into Pros. If concerned that people are jumping on the bandwagon
too quickly, the analyst tries to “Fault” the Pros by exploring how they
could go wrong. Usually, the analyst will either “Fix” the Cons or “Fault”
the Pros, but not do both. Of the various techniques described in this
chapter, this is one of the easiest and quickest to use.
Force Field Analysis
is a technique that analysts can use to help a decision maker decide how to
solve a problem or achieve a goal and determine whether it is possible to
do so. The analyst identifies and assigns weights to the relative importance
of all the factors or forces that are either a help or a hindrance in solving
the problem or achieving the goal. After organizing all these factors in two
lists, pro and con, with a weighted value for each factor, the analyst or
decision maker is in a better position to recommend strategies that would
be most effective in either strengthening the impact of the driving forces or
reducing the impact of the restraining forces.
SWOT Analysis
is used to develop a plan or strategy for achieving a specified goal. (SWOT
is an acronym for Strengths, Weaknesses, Opportunities, and Threats.) In
using this technique, the analyst first lists the Strengths and Weaknesses in
the organization’s ability to achieve a goal, and then lists Opportunities
and Threats in the external environment that would either help or hinder
the organization from reaching the goal.
Impact Matrix
can be used by analysts or managers to assess the impact of a decision on
their organization by evaluating what impact it is likely to have on all key
actors or participants in that decision. It also gives the analyst or decision
maker a better sense of how the issue is most likely to play out or be
resolved in the future.
Complexity Manager
is a simplified approach to understanding complex systems—the kind of
systems in which many variables are related to each other and may be
changing over time. Government policy decisions are often aimed at
changing a dynamically complex system. It is because of this dynamic
complexity that many policies fail to meet their goals or have unforeseen
and unintended consequences. Use Complexity Manager to assess the
chances for success or failure of a new or proposed policy, identify
opportunities for influencing the outcome of any situation, determine what
would need to change in order to achieve a specified goal, or recognize the
potential for unintended consequences from the pursuit of a policy goal.
11.1 Decision Trees
Decision Trees establish chains of decisions and/or events that illustrate a
comprehensive range of possible future actions. They paint a landscape for
the decision maker showing the range of options available, the estimated
value or probability of each option, and the likely implications or
outcomes of choosing each option.
When to Use It
Decision Trees can be used to do the following:
Value Added
Decision Trees are simple to understand and easy to use and interpret. A
Decision Tree can be generated by a single analyst—or, preferably, by a
group using brainstorming techniques discussed in chapter 5. Once the tree
has been built, it can be posted on a wall or in a wiki and adjusted over a
period of time as new information is gathered. When significant new data
are received that add new branches to the tree or substantially alter the
probabilities of the options, these changes can be inserted into the tree and
highlighted with color to show the decision maker what has changed and
how it may have changed the previous line of analysis.
The Method
Using a Decision Tree is a fairly simple process involving two steps: (1)
building the tree, and (2) calculating the value or probability of each
outcome represented on the tree. Follow these steps:
The most valuable or most probable outcome will have the highest
percentage assigned to it, and the least valuable or least probable outcome
will have the lowest percentage assigned to it.
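The value roll-up described in this method can be sketched as a recursive calculation over a small tree. The node structure, option names, probabilities, and outcome values below are all hypothetical; the sketch assumes chance nodes weight their branches by probability and decision nodes take the best available option.

```python
# Hypothetical Decision Tree: decision nodes list options, chance nodes
# list (probability, subtree) branches, and leaves carry outcome values.

def expected_value(node):
    """Roll up the tree: chance nodes weight branches by probability;
    decision nodes take the highest-valued option."""
    if isinstance(node, (int, float)):   # leaf: an outcome value
        return node
    if node["type"] == "chance":
        return sum(p * expected_value(child) for p, child in node["branches"])
    if node["type"] == "decision":
        return max(expected_value(child) for child in node["options"].values())
    raise ValueError("unknown node type")

tree = {
    "type": "decision",
    "options": {
        "negotiate": {"type": "chance",
                      "branches": [(0.6, 100), (0.4, -20)]},
        "wait":      {"type": "chance",
                      "branches": [(0.3, 150), (0.7, 0)]},
    },
}

# negotiate: 0.6*100 + 0.4*(-20) = 52; wait: 0.3*150 + 0.7*0 = 45
print(expected_value(tree))
```

When new information alters a probability or adds a branch, only the affected node needs to change; rerunning the roll-up shows how the preferred path shifts.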
Potential Pitfalls
A Decision Tree is only as good as the reliability of the data, completeness
of the range of options, and validity of the qualitative probabilities or
values assigned to each option. A detailed Decision Tree can present the
misleading impression that the authors have thought of all possible options
or outcomes. In intelligence analysis, options are often available to the
subjects of the analysis that were not imagined, just as there might be
unintended consequences that the subjects did not anticipate.
Relationship to Other Techniques
A Decision Tree is structurally similar to critical path analysis and to
Program Evaluation and Review Technique (PERT) charts. Both of these
techniques, however, only show the activities and connections that need to
be undertaken to complete a complex task. A timeline analysis as done in
support of a criminal investigation is essentially a Decision Tree drawn
after the fact, showing only the paths actually taken.
11.2 Decision Matrix
When to Use It
The Decision Matrix technique should be used when a decision maker has
multiple options from which to choose, has multiple criteria for judging
the desirability of each option, and/or needs to find the decision that
maximizes a specific set of goals or preferences. For example, it can be
used to help choose among various plans or strategies for improving
intelligence analysis, to select one of several IT systems one is considering
buying, to determine which of several job applicants is the right choice, or
to consider any personal decision, such as what to do after retiring.
support a Red Hat Analysis that examines decision options from an
adversary’s or competitor’s perspective.
Value Added
This technique deconstructs a decision problem into its component parts,
listing all the options or possible choices, the criteria for judging the
options, the weights assigned to each of these criteria, and an evaluation of
the extent to which each option satisfies each of these criteria. All these
judgments are apparent to anyone looking at the matrix. Because it is so
explicit, the matrix can play an important role in facilitating
communication between those who are involved in or affected by the
decision process. It can be easy to identify areas of disagreement and to
determine whether such disagreements have any material impact on the
decision. One can also see how sensitive the decision is to changes that
might be made in the values assigned to the selection criteria or to the
ability of an option to satisfy the criteria. If circumstances or preferences
change, it is easy to go back to the matrix, make changes, and calculate the
impact of the changes on the proposed decision.
The Method
Create a Decision Matrix table. To do this, break the decision problem
down into two main components by making two lists—a list of options or
alternatives for making a choice and a list of criteria to be used when
judging the desirability of the options. Then follow these steps:
✶ Create a matrix with one column for each option. Write the name
of each option at the head of one of the columns. Add two more blank
columns on the left side of the table.
✶ Count the number of selection criteria, and then adjust the table so
that it has that many rows plus two more, one at the top to list the
options and one at the bottom to show the scores for each option. In
the first column on the left side, starting with the second row, write in
all the selection criteria down the left side of the table. There is some
value in listing them roughly in order of importance, but doing so is
not critical. Leave the bottom row blank. (Note: Whether you enter
the options across the top row and the criteria down the far-left
column, or vice versa, depends on what fits best on the page. If one of
the lists is significantly longer than the other, it usually works best to
put the longer list in the left-side column.)
✶ Assign weights based on the importance of each of the selection
criteria. There are several ways to do this, but the preferred way is to
take 100 percent and divide these percentage points among the
selection criteria. Be sure that the weights for all the selection criteria
combined add to 100 percent. Also be sure that all the criteria are
phrased in such a way that a higher weight is more desirable. (Note: If
this technique is being used by an intelligence analyst to support
decision making, this step should not be done by the analyst. The
assignment of relative weights is up to the decision maker.)
✶ Work across the matrix one row at a time to evaluate the relative
ability of each of the options to satisfy each of the selection criteria.
For example, assign ten points to each row and divide these points
according to an assessment of the degree to which each of the options
satisfies each of the selection criteria. Then multiply this number by
the weight for that criterion. Figure 11.2 is an example of a Decision
Matrix with three options and six criteria.
✶ Add the columns for each of the options. If you accept the
judgments and preferences expressed in the matrix, the option with
the highest number will be the best choice.
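The steps above can be sketched as a short calculation. The options, criteria, weights, and scores below are hypothetical; the mechanics follow the steps as described, with weights summing to 100 percentage points, ten points divided across the options in each criterion row, and weighted column sums identifying the best choice.

```python
# Hypothetical Decision Matrix with two options and three criteria.

criteria_weights = {   # relative importance; must total 100
    "cost": 30, "speed": 20, "reliability": 50,
}
options = ["System A", "System B"]
row_scores = {         # 10 points divided across options per criterion
    "cost":        {"System A": 6, "System B": 4},
    "speed":       {"System A": 3, "System B": 7},
    "reliability": {"System A": 5, "System B": 5},
}

assert sum(criteria_weights.values()) == 100
for crit, scores in row_scores.items():
    assert sum(scores.values()) == 10

# Multiply each row score by its criterion weight, then sum each column.
totals = {
    opt: sum(criteria_weights[c] * row_scores[c][opt] for c in criteria_weights)
    for opt in options
}
best = max(totals, key=totals.get)
print(totals, "-> best choice:", best)
```

Sensitivity analysis is then a matter of changing a weight or a row score and recomputing the totals to see whether the preferred option changes.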
that plausible changes in some values would lead to a different choice. For
example, the analyst might think of a way to modify an option in a way
that makes it more desirable or might rethink the selection criteria in a way
that changes the preferred outcome. The numbers calculated in the matrix
do not make the decision. The matrix is just an aid to help the analyst and
the decision maker understand the trade-offs between multiple competing
preferences.
11.3 Pros-Cons-Faults-and-Fixes
Pros-Cons-Faults-and-Fixes is a strategy for critiquing new policy ideas. It
is intended to offset the human tendency of a group of analysts and
decision makers to jump to a conclusion before full analysis of the
problem has been completed.
When to Use It
Making lists of pros and cons for any action is a common approach to
decision making. The “Faults” and “Fixes” are what is new in this strategy.
Use this technique to make a quick appraisal of a new idea or a more
systematic analysis of a choice between two options.
others to solicit divergent input.
In the business world, the technique can also be used to discover potential
vulnerabilities in a proposed strategy to introduce a new product or acquire
a new company. By assessing how Pros can be “Faulted,” one can
anticipate how competitors might react to a new corporate initiative; by
assessing how Cons can be “Fixed,” potential vulnerabilities can be
addressed and major mistakes avoided early in the planning process.
Value Added
It is unusual for a new idea to meet instant approval. What often happens
in meetings is that a new idea is brought up, one or two people
immediately explain why they don’t like it or believe it won’t work, and
the idea is then dropped. On the other hand, there are occasions when just
the opposite happens. A new idea is immediately welcomed, and a
commitment to support it is made before the idea is critically evaluated.
The Pros-Cons-Faults-and-Fixes technique helps to offset this human
tendency to jump to conclusions.
The technique first requires a list of Pros and Cons about the new idea or
the choice between two alternatives. If there seems to be excessive
enthusiasm for an idea and a risk of acceptance without critical evaluation,
the next step is to look for “Faults.” A Fault is any argument that a Pro is
unrealistic, won’t work, or will have unacceptable side effects. On the
other hand, if there seems to be a bias toward negativity or a risk of the
idea being dropped too quickly without careful consideration, the next step
is to look for “Fixes.” A Fix is any argument or plan that would neutralize
or minimize a Con, or even change it into a Pro. In some cases, it may be
appropriate to look for both Faults and Fixes before comparing the two
lists and making a decision.
The Method
Start by clearly defining the proposed action or choice. Then follow these
steps:
✶ List the Pros in favor of the decision or choice. Think broadly and
creatively, and list as many benefits, advantages, or other positives as
possible.
✶ List the Cons, or arguments against what is proposed. There are
usually more Cons than Pros, as most humans are naturally critical. It
is easier to think of arguments against a new idea than to imagine
how the new idea might work. This is why it is often difficult to get
careful consideration of a new idea.
✶ Review and consolidate the list. If two Pros are similar or
overlapping, consider merging them to eliminate any redundancy. Do
the same for any overlapping Cons.
✶ If the choice is between two clearly defined options, go through
the previous steps for the second option. If there are more than two
options, a technique such as Decision Matrix may be more
appropriate than Pros-Cons-Faults-and-Fixes.
✶ At this point you must make a choice. If the goal is to challenge an
initial judgment that the idea won’t work, take the Cons, one at a
time, and see if they can be “Fixed.” That means trying to figure a
way to neutralize their adverse influence or even to convert them into
Pros. This exercise is intended to counter any unnecessary or biased
negativity about the idea. There are at least four ways an argument
listed as a Con might be Fixed:
– Propose a modification of the Con that would significantly
lower the risk of the Con being a problem.
– Identify a preventive measure that would significantly reduce
the chances of the Con being a problem.
– Do contingency planning that includes a change of course if
certain indicators are observed.
– Identify a need for further research or information gathering
to confirm or refute the assumption that the Con is a problem.
✶ If the goal is to challenge an initial optimistic assumption that the
idea will work and should be pursued, take the Pros, one at a time,
and see if they can be “Faulted.” That means to try and figure out
how the Pro might fail to materialize or have undesirable
consequences. This exercise is intended to counter any wishful
thinking or unjustified optimism about the idea. A Pro might be
Faulted in at least three ways:
– Identify a reason why the Pro would not work or why the
benefit would not be received.
– Identify an undesirable side effect that might accompany the
benefit.
– Identify a need for further research or information gathering
to confirm or refute the assumption that the Pro will work or be
beneficial.
✶ A third option is to combine both approaches, to Fault the Pros and
Fix the Cons.
✶ Compare the Pros, including any Faults, against the Cons,
including the Fixes. Weigh the balance of one against the other, and
make the choice. The choice is based on your professional judgment,
not on any numerical calculation of the number or value of Pros
versus Cons.
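As a minimal sketch of the bookkeeping the steps above imply, one might pair each Pro with its candidate Faults and each Con with its candidate Fixes. The proposal and the entries below are hypothetical, and, as the method stresses, no scores are summed; the structure simply keeps the side-by-side comparison explicit.

```python
# Hypothetical bookkeeping for Pros-Cons-Faults-and-Fixes. No numerical
# totals are computed; the lists support professional judgment only.

proposal = "Adopt a new collaboration platform"

pros = {  # Pro -> list of Faults (ways the Pro might fail to materialize)
    "Faster information sharing": ["Adoption may lag without training"],
    "Better audit trail": [],
}
cons = {  # Con -> list of Fixes (ways to neutralize the Con)
    "High licensing cost": ["Negotiate enterprise discount",
                            "Phase rollout to spread cost"],
    "Migration disruption": ["Pilot with one team first"],
}

# Flag where more work is needed before weighing the balance.
unfixed = [c for c, fixes in cons.items() if not fixes]
unfaulted = [p for p, faults in pros.items() if not faults]
print("Cons with no Fix yet:", unfixed)
print("Pros that survived Faulting:", unfaulted)
```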
Potential Pitfalls
Often when listing the Pros and Cons, analysts will assign weights to each
Pro and Con on the list and then re-sort the lists, with the Pros or Cons
receiving the most points at the top of the list and those receiving the
fewest points at the bottom. This can be a useful exercise, helping the
analyst weigh the balance of one side against the other, but the authors
strongly recommend against adding up the scores on each side and concluding
that the list with the most points is the right choice. Any numerical
calculation can easily be manipulated by adding more Pros or more Cons to
either list to increase its overall score. The best protection against this
practice is simply not to add up the points in either column.
11.4 Force Field Analysis
When to Use It
Force Field Analysis is useful in the early stages of a project or research
effort, when the analyst is defining the issue, gathering data, or developing
recommendations for action. It requires that the analyst clearly define the
problem in all its aspects. It can aid an analyst in structuring the data and
assessing the relative importance of each of the forces affecting the issue.
The technique can also help the analyst overcome the natural human
tendency to dwell on the aspects of the data that are most comfortable. The
technique can be used by an individual analyst or by a small team.
In the world of business and politics, the technique can also be used to
develop and refine strategies to promote a particular policy or ensure that a
desired outcome actually comes about. In such instances, it is often useful
to define the various forces in terms of key individuals who need to be
persuaded. For example, instead of listing budgetary restrictions as a key
factor, one would write down the name of the person who controls the
budget. Similarly, Force Field Analysis can be used to diagnose what
forces and individuals need to be constrained or marginalized in order to
prevent a policy from being adopted or an outcome from happening.
Value Added
The primary benefit of Force Field Analysis is that it requires an analyst to
consider the forces and factors (and, in some cases, individuals) that
influence a situation. It helps the analyst think through the ways various
forces affect the issue and fosters the recognition that forces can be divided
into two categories: the driving forces and the restraining forces. By
sorting the evidence into driving and restraining forces, the analyst must
delve deeply into the issue and consider the less obvious factors and
issues.
By weighing all the forces for and against an issue, the analyst can better
recommend strategies that would be most effective in reducing the impact
of the restraining forces and strengthening the effect of the driving forces.
Force Field Analysis also offers a powerful way to visualize the key
elements of the problem by providing a simple tally sheet for displaying
the different levels of intensity of the forces individually and as a whole.
With the data sorted into two lists, decision makers can more easily
identify which forces deserve the most attention and develop strategies to
overcome the negative elements while promoting the positive elements.
Figure 11.4 is an example of a Force Field diagram.
The diagram arrays the forces promoting change (driving forces) against
those attempting to maintain the status quo (restraining forces).
The Method
✶ Define the problem, goal, or change clearly and concisely.
✶ Brainstorm to identify the main forces that will influence the issue.
Consider such topics as needs, resources, costs, benefits,
organizations, relationships, attitudes, traditions, interests, social and
cultural trends, rules and regulations, policies, values, popular desires,
and leadership to develop the full range of forces promoting and
restraining change.
✶ Make one list showing the forces or people “driving” the change
and a second list showing the forces or people “restraining” the
change.
✶ Assign a value (the intensity score) to each driving or restraining
force to indicate its strength. Assign the weakest intensity scores a
value of 1 and the strongest a value of 5. The same intensity score can
be assigned to more than one force if you consider the factors equal in
strength. List the intensity scores in parentheses beside each item.
✶ Examine the two lists to determine if any of the driving forces
balance out the restraining forces.
✶ Devise a manageable course of action to strengthen those forces
that lead to the preferred outcome and weaken the forces that would
hinder the desired outcome.
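The listing and intensity-scoring steps above can be sketched in code. This is a minimal illustration; the force names and intensity scores are invented for the example, and, per the text, the scores are displayed for comparison rather than summed to pick a "winner."

```python
# Force Field tally sketch: each force is (description, intensity 1-5).

driving = [
    ("Public pressure for action", 4),
    ("New funding available", 3),
    ("Leadership support", 2),
]
restraining = [
    ("Bureaucratic inertia", 4),
    ("Legal challenges", 3),
]

def sort_by_intensity(forces):
    """Strongest forces first, as on the tally sheet."""
    return sorted(forces, key=lambda f: -f[1])

def tally(label, forces):
    """Render one column of the tally sheet as text."""
    lines = [label]
    for desc, score in sort_by_intensity(forces):
        lines.append(f"  {'*' * score} ({score}) {desc}")
    return "\n".join(lines)

print(tally("Driving forces:", driving))
print(tally("Restraining forces:", restraining))
```

The visual weight of the stars lets a reader compare individual forces at a glance without reducing the two lists to a single, easily gamed total.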
Figure 11.4 Force Field Analysis: Removing Abandoned Cars from City
Streets
Source: 2007 Pherson Associates, LLC.
You should keep in mind that the preferred outcome may be either
promoting a change or restraining a change. For example, if the problem is
increased drug use or criminal activity, you would focus the analysis on
the factors that would have the most impact on restraining criminal activity
or drug use. On the other hand, if the preferred outcome is improved
border security, you would highlight the driving forces that, if
strengthened, would be most likely to promote border security.
Potential Pitfalls
When assessing the balance between driving and restraining forces, the
authors recommend against adding up the scores on each side and
concluding that the side with the most points will win out. Any
numerical calculation can be easily manipulated by simply adding more
forces or factors to either list to increase its overall score.
When to Use It
After setting a goal or objective, use SWOT as a framework for collecting
and organizing information in support of strategic planning and decision
making to achieve the goal or objective. Information is collected to
analyze the plan’s Strengths and Weaknesses and the Opportunities and
Threats present in the external environment that might have an impact on
the ability to achieve the goal.
Value Added
SWOT can generate useful information with relatively little effort, and it
brings that information together in a framework that provides a good base
for further analysis. It often points to specific actions that can or should be
taken. Because the technique matches an organization’s or plan’s Strengths
and Weaknesses against the Opportunities and Threats in the environment
in which it operates, the plans or action recommendations that develop
from the use of this technique are often quite practical.
The Method
✶ Define the objective.
✶ Fill in the SWOT table by listing Strengths, Weaknesses,
Opportunities, and Threats that are expected to facilitate or hinder
achievement of the objective. (See Figure 11.5.) The significance of
the attributes’ and conditions’ impact on achievement of the objective
is far more important than the length of the list. It is often desirable to
list the items in each quadrant in order of their significance or to
assign them values on a scale of 1 to 5.
✶ Identify possible strategies for achieving the objective. This is
done by asking the following questions:
– How can we use each Strength?
– How can we improve each Weakness?
– How can we exploit each Opportunity?
– How can we mitigate each Threat?
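The steps above can be sketched as a short program. The entries and their 1-to-5 significance scores are hypothetical; only the four question stems come from the method itself.

```python
# SWOT sketch: a quadrant table plus the method's four strategy questions.
# Entries are illustrative, scored 1-5 by significance.

swot = {
    "Strengths":     [("Experienced staff", 5), ("Strong brand", 3)],
    "Weaknesses":    [("Limited budget", 4)],
    "Opportunities": [("New market opening", 4)],
    "Threats":       [("Aggressive competitor", 5)],
}

STEMS = {
    "Strengths": "How can we use",
    "Weaknesses": "How can we improve",
    "Opportunities": "How can we exploit",
    "Threats": "How can we mitigate",
}

def strategy_questions(table):
    """One strategy question per entry, most significant items first."""
    return [f"{STEMS[quadrant]} '{item}'?"
            for quadrant, items in table.items()
            for item, _score in sorted(items, key=lambda x: -x[1])]

for question in strategy_questions(swot):
    print(question)
```

Sorting within each quadrant keeps attention on significance rather than list length, which is the point the method emphasizes.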
Potential Pitfalls
SWOT is simple, easy, and widely used, but it has limitations. It focuses
on a single goal without weighing the costs and benefits of alternative
means of achieving the same goal. In other words, SWOT is a useful
technique as long as the analyst recognizes that it does not necessarily tell
the full story of what decision should or will be made. There may be other
equally good or better courses of action.
Another strategic planning technique, the TOWS Matrix, remedies one of
the limitations of SWOT. The factors listed under Threats, Opportunities,
Weaknesses, and Strengths are combined to identify multiple alternative
strategies that an organization might pursue.3
The Impact Matrix identifies the key actors involved in a decision, their
level of interest in the issue, and the impact of the decision on them. It is a
framing technique that can give analysts and managers a better sense of
how well or poorly a decision may be received, how it is most likely to
play out, and what would be the most effective strategies to resolve a
problem.
When to Use It
The best time for a manager to use this technique is when a major new
policy initiative is being contemplated or a mandated change is about to be
announced. The technique helps the manager identify where he or she is
most likely to encounter both resistance and support. Intelligence analysts
can also use the technique to assess how the public might react to a new
policy pronouncement by a foreign government or a new doctrine posted
on the Internet by a political movement. Invariably, the technique will
uncover new insights by focusing in a systematic way on all possible
dimensions of the issue.
The template matrix makes the technique fairly easy to use. Most often, an
individual manager will apply the technique to develop a strategy for how
he or she plans to implement a new policy or respond to a newly decreed
mandate from on high. Managers can also use the technique proactively
before they announce a new policy or procedure. The technique can
expose unanticipated pockets of resistance or support, as well as those it
might be smart to consult before the policy or procedure becomes public
knowledge. A single intelligence analyst can also use the technique,
although it is usually more effective if done as a group process.
Value Added
The technique provides the user with a comprehensive framework for
assessing whether a new policy or procedure will be met with resistance or
support. A key concern is to identify any actors who will be heavily
affected in a negative way. They should be engaged early, ideally before
the policy is announced, in case they have ideas on how to make the new
policy more digestible. At a minimum, they will appreciate that their
views, whether positive or negative, were sought out and considered.
Support can be enlisted from those who will be strongly impacted in a
positive way.
The Method
The impact matrix process involves the following steps (a template for
using the Impact Matrix is provided in Figure 11.6):
Figure 11.6 Impact Matrix: Identifying Key Actors, Interests, and Impact
Origins of This Technique
The Impact Matrix was developed by Mary O’Sullivan and Randy
Pherson, Pherson Associates, LLC, and is taught in courses for mid-level
managers in the government, law enforcement, and business.
When to Use It
As a policy support tool, Complexity Manager can be used to assess the
chances for success or failure of a new or proposed program or policy, and
opportunities for influencing the outcome of any situation. It also can be
used to identify what would have to change in order to achieve a specified
goal, as well as unintended consequences from the pursuit of a policy goal.
When trying to foresee future events, both the intelligence and business
communities have typically dealt with complexity by the following:
Value Added
We all know that we live in a complex world of interdependent political,
economic, social, and technological systems in which each event or change
has multiple impacts. These impacts then have additional impacts on other
elements of the system. Although we understand this, we usually do not
analyze the world in this way, because the multitude of potential
interactions is too difficult for the human brain to track simultaneously. As
a result, analysts often fail to foresee future problems or opportunities that
may be generated by current trends and developments. Or they fail to
foresee the undesirable side effects of well-intentioned policies.5
It takes time to work through the Complexity Manager process, but it may
save time in the long run. This structured approach helps analysts work
efficiently without getting mired in the complexity of the problem.
Because it produces a better and more carefully reasoned product, it also
saves time during the editing and coordination processes.
The Method
Complexity Manager requires the analyst to proceed through eight specific
steps:
number of variables increases. With 10 variables, as in Figure 11.7, there
are 90 possible interactions. With 15 variables, there are 210. Complexity
Manager may be impractical with more than 15 variables.
Analysts can record the nature and strength of impact that one variable has
on another in two different ways. Figure 11.7 uses plus and minus signs to
show whether the variable being analyzed has a positive or negative
impact on the paired variable. The size of the plus or minus sign signifies
the strength of the impact on a three-point scale. The small plus or minus
sign shows a weak impact, the medium size a medium impact, and the
large size a strong impact. If the variable being analyzed has no impact on
the paired variable, the cell is left empty. If a variable might change in a
way that could reverse the direction of its impact, from positive to negative
or vice versa, this is shown by using both a plus and a minus sign.
The completed matrix shown in Figure 11.7 is the same matrix you will
see in chapter 14, when the Complexity Manager technique is used to
forecast the future of structured analytic techniques. The plus and minus
signs work well for the finished matrix. When first populating the matrix,
however, it may be easier to use letters (P and M for plus and minus) to
show whether each variable has a positive or negative impact on the other
variable with which it is paired. Each P or M is then followed by a number
to show the strength of that impact. A three-point scale is used, with 3
indicating a Strong impact, 2 Medium, and 1 Weak.
After rating each pair of variables, and before doing further analysis,
consider pruning the matrix to eliminate variables that are unlikely to have
a significant effect on the outcome. It is possible to measure the relative
significance of each variable by adding up the weighted values in each row
and column. The sum of the weights in each row is a measure of each
variable’s impact on the system as a whole. The sum of the weights in
each column is a measure of how much each variable is affected by all the
other variables. Those variables most impacted by the other variables
should be monitored as potential indicators of the direction in which
events are moving or as potential sources of unintended consequences.
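The row-and-column arithmetic just described can be sketched as follows. The variables and weights are invented for illustration, and summing absolute weights is our reading of "adding up the weighted values," since a strong negative impact matters as much as a strong positive one.

```python
# Complexity Manager matrix sketch. impact[(a, b)] is the signed strength
# (+/- 1 to 3) of variable a's direct impact on variable b; absent pairs
# mean no impact (an empty cell in the matrix).

variables = ["A", "B", "C", "D"]
impact = {
    ("A", "B"):  3, ("A", "C"): -1,
    ("B", "A"):  2, ("B", "D"):  1,
    ("C", "D"): -3,
    ("D", "A"):  1,
}

def row_sum(var):
    """How strongly this variable impacts the system as a whole."""
    return sum(abs(w) for (a, _b), w in impact.items() if a == var)

def col_sum(var):
    """How strongly this variable is affected by all the others."""
    return sum(abs(w) for (_a, b), w in impact.items() if b == var)

# The most-affected variable is a candidate indicator of where events are
# heading, or a potential source of unintended consequences.
most_affected = max(variables, key=col_sum)
```

Variables whose row and column sums are both low are the natural candidates for pruning before the loop analysis in the next step.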
6. Analyze loops and indirect impacts: The matrix shows only the
direct impact of one variable on another. When you are analyzing the
direct impacts variable by variable, there are several things to look for and
make note of. One is feedback loops. For example, if variable A has a
positive impact on variable B, and variable B also has a positive impact on
variable A, this is a positive feedback loop. Or there may be a three-
variable loop, from A to B to C and back to A. The variables in a loop gain
strength from one another, and this boost may enhance their ability to
influence other variables. Another thing to look for is circumstances where
the causal relationship between variables A and B is necessary but not
sufficient for something to happen. For example, variable A has the
potential to influence variable B, and may even be trying to influence
variable B, but it can do so effectively only if variable C is also present. In
that case, variable C is an enabling variable and takes on greater
significance than it ordinarily would have.
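The two-variable feedback check described above is easy to mechanize. A sketch, with invented links:

```python
# Detect two-variable positive feedback loops in a signed impact matrix.
# links[(a, b)] is the signed strength of a's direct effect on b.

links = {
    ("A", "B"):  2, ("B", "A"):  3,   # mutual positive reinforcement
    ("B", "C"):  1, ("C", "B"): -2,   # mixed signs: not a reinforcing loop
    ("C", "D"):  2,                   # no return link from D
}

def positive_feedback_pairs(m):
    """Unordered pairs where each variable positively reinforces the other."""
    return sorted(
        {tuple(sorted((a, b)))
         for (a, b), w in m.items()
         if w > 0 and m.get((b, a), 0) > 0})
```

The same pattern extends to three-variable loops (A to B to C and back to A) by chaining one more lookup, though the number of candidate chains grows quickly with the variable count.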
All variables are either static or dynamic. Static variables are expected to
remain more or less unchanged during the period covered by the analysis.
Dynamic variables are changing or have the potential to change. The
analysis should focus on the dynamic variables, as these are the sources of
surprise in any complex system. Determining how these dynamic variables
interact with other variables and with each other is critical to any forecast
of future developments. Dynamic variables can be either predictable or
unpredictable. Predictable change includes established trends or
established policies that are in the process of being implemented.
Unpredictable change may be a change in leadership or an unexpected
change in policy or available resources.
8. Conduct an opportunity analysis: When appropriate, analyze what
actions could be taken to influence this system in a manner favorable to
the primary customer of the analysis.
definitions of complexity. See Seth Lloyd, Programming the Universe
(New York: Knopf, 2006).
5. Dietrich Dorner, The Logic of Failure (New York: Basic Books, 1996).
12 Practitioner’s Guide to Collaboration
This chapter starts with some practical guidance on how to take advantage
of the collaborative environment while preventing or avoiding the many
well-known problems associated with small-group processes. It then goes
on to describe how structured analytic techniques provide the process by
which collaboration becomes most effective. Many things change when
the internal thought process of analysts is externalized in a transparent
manner so that evidence is shared early and differences of opinion can be
surfaced, built on, and easily critiqued by others.
12.1 Social Networks and Analytic Teams
Teams and groups can be categorized in several ways. When the purpose
of the group is to generate an analytic product, it seems most useful to deal
with three types: the traditional analytic team, the special project team, and
teams supported by social networks. Traditional teams are usually
colocated and focused on a specific task. Special project teams are most
effective when their members are colocated or working in a synchronous
virtual world (similar to Second Life). Analytic teams supported by social
networks can operate effectively in colocated, geographically distributed,
and synchronous as well as asynchronous modes. These three types of
groups differ in the nature of their leadership, frequency of face-to-face
and virtual world meetings, breadth of analytic activity, and amount of
time pressure under which they work.1
✶ Special project teams: A crisis task force exemplifies this type of
team. Members typically are located in the
same physical office space or are connected by video
communications. There is strong team leadership, often with close
personal interaction among team members. Because the team is
created to deal with a specific situation, its work may have a narrower
focus than a social network or regular analytic team, and its duration
may be limited. There is usually intense time pressure, and around-
the-clock operation may be required. Figure 12.1b is a diagram of a
special project team.
✶ Social networks: Experienced analysts have always had their own
network of experts in their field or related fields with whom they
consult from time to time and whom they may recruit to work with
them on a specific analytic project. Social networks are critical to the
analytic business. They do the day-to-day monitoring of events,
produce routine products as needed, and may recommend the
formation of a more formal analytic team to handle a specific project.
The social network is the form of group activity that is now changing
dramatically with the growing ease of cross-agency secure
communications and the availability of collaborative software. Social
networks are expanding exponentially across organization
boundaries. The term “social network,” as used here, includes all
analysts working anywhere in the world on a particular country, such
as Brazil; on an issue, such as the development of chemical weapons;
or on a consulting project in the business world. It can be limited to a
small group with special clearances or comprise a broad array of
government, business, nongovernment organization (NGO), and
academic experts. The network can be located in the same office, in
different buildings in the same metropolitan area, or increasingly at
multiple locations around the globe.
Source: 2009 Pherson Associates, LLC.
Source: 2009 Pherson Associates, LLC.
The key problem that arises with social networks is the geographic
distribution of their members. Even within the Washington, D.C.
metropolitan area, distance is a factor that limits the frequency of face-to-
face meetings, particularly as traffic congestion becomes a growing
nightmare and air travel an unaffordable expense. From their study of
teams in diverse organizations, which included teams in the U.S.
Intelligence Community, Richard Hackman and Anita Woolley came to
this conclusion:
Using them well requires careful attention to team structure, a face-
to-face launch when members initially come together, and leadership
support throughout the life of the team to keep members engaged and
aligned with collective purposes.2
✶ Know and trust one another; this usually requires that they meet
face to face at least once.
✶ Feel a personal need to engage the group in order to perform a
critical task.
✶ Derive mutual benefits from working together.
✶ Connect with one another virtually on demand and easily add new
members.
✶ Perceive incentives for participating in the group, such as saving
time, gaining new insights from interaction with other knowledgeable
analysts, or increasing the impact of their contribution.
✶ Share a common understanding of the problem with agreed lists of
common terms and definitions.3
a wiki. This provides a solid foundation for the smaller analytic team to do
the subsequent convergent analysis. In other words, each type of group
performs the type of task for which it is best qualified. This process is
applicable to most analytic projects. Figure 12.2 shows how it can work.
Source: 2009 Pherson Associates, LLC.
✶ Putting this list into a Cross-Impact Matrix, as described in chapter
5, and then discussing and recording in the wiki the relationship, if
any, between each pair of driving forces, variables, or players in that
matrix.
✶ Developing a list of alternative explanations or outcomes
(hypotheses) to be considered, as described in chapter 7.
✶ Developing a list of relevant information to be considered when
evaluating these hypotheses, as described in chapter 7.
✶ Doing a Key Assumptions Check, as described in chapter 8. This
actually takes less time using a synchronous collaborative virtual
workspace than when done in a face-to-face meeting, and is highly
recommended to learn the network’s thinking about key assumptions.
Most of these steps involve making lists, which can be done quite
effectively in a virtual environment. Making such input online in a chat
room or on a wiki can be even more productive than a face-to-face
meeting, because analysts have more time to think about and write up their
thoughts. They can look at their contribution over several days and make
additions or changes as new ideas come to them.
The draft report is best done by a single person. That person can work
from other team members’ inputs, but the report usually reads better if it is
crafted in one voice. As noted earlier, the working draft should be
reviewed by those members of the social network who participated in the
first phase of the analysis.
Some group process problems are obvious to anyone who has participated
in trying to arrive at decisions or judgments in a group meeting. Guidelines
for how to run meetings effectively are widely available, but many group
leaders fail to follow them.6 Key individuals are absent or late, and
participants are unprepared. Meetings are often dominated by senior
members or strong personalities, while some participants are reluctant to
speak up or to express their true beliefs. Discussion can get stuck on
several salient aspects of a problem, rather than covering all aspects of the
subject. Decisions may not be reached, and if they are reached, may wind
up not being implemented. Such problems are often greatly magnified
when the meeting is conducted virtually, over telephones or computers.
Academic studies show that “the order in which people speak has a
profound effect on the course of a discussion. Earlier comments are more
influential, and they tend to provide a framework within which the
discussion occurs.”7 Once that framework is in place, discussion tends to
center on that framework, to the exclusion of other options.
preferred position is wrong.8 This phenomenon is what is commonly
called “groupthink.”
Other problems that are less obvious but no less significant have been
documented extensively by academic researchers. Often, some reasonably
satisfactory solution is proposed on which all members can agree, and the
discussion is ended without further search to see if there may be a better
answer. Such a decision often falls short of the optimum that might be
achieved with further inquiry. Another phenomenon, known as group
“polarization,” leads in certain predictable circumstances to a group
decision that is more extreme than the average group member’s view prior
to the discussion. “Social loafing” is the phenomenon that people working
in a group will often expend less effort than if they were working to
accomplish the same task on their own. In any of these situations, the
result is often an inferior product that suffers from a lack of analytic rigor.
If you had to identify, in one word, the reason that the human race
has not achieved, and never will achieve, its full potential, that
word would be meetings.
dissenter is correct. The dissent stimulates a reappraisal of the situation
and identification of options that otherwise would have gone
undetected.”10 To be effective, however, dissent must be genuine—not
generated artificially, as in some applications of the Devil’s Advocacy
technique.11
With any heterogeneous group, use of a structured technique reduces the
risk of premature closure, groupthink, and polarization. It also sets a clear
step-by-step agenda for any meeting where that technique is used. This
makes it easier for a group leader to keep a meeting on track to achieve its
goal.12
Distributed asynchronous collaboration followed by distributed
synchronous collaboration that uses some of the basic structured
techniques is one of the best ways to tap the expertise of a group of
knowledgeable individuals. The Delphi Method, discussed in chapter 9, is
one well-known method for accomplishing the first phase, and
collaborative, synchronous virtual worlds with avatars are showing great
promise for optimizing work done in phase two.
Figure 12.5 displays the differences between advocacy and the objective
inquiry expected from a team member or a colleague.16 When advocacy
leads to emotional conflict, it can lower team effectiveness by provoking
hostility, distrust, cynicism, and apathy among team members. On the
other hand, objective inquiry, which often leads to cognitive conflict, can
lead to new and creative solutions to problems, especially when it occurs
in an atmosphere of civility, collaboration, and common purpose. Several
effective methods for managing analytic differences are described in
chapter 10.
Source: 2009 Pherson Associates, LLC.
arguments being used and determining how these are interpreted as either
consistent or inconsistent with the various hypotheses. Review of this
matrix provides a systematic basis for identification and discussion of
differences between two or more analysts. CIA and FBI analysts also note
that referring to the matrix helps to depersonalize the argumentation when
there are differences of opinion.18 In other words, ACH can help analysts
learn from their differences rather than fight over them, and other
structured techniques do this as well.
timeline, and general focus of the project are agreed to with the leader in
advance. When roles and interactions are explicitly defined and
functioning, the group can more easily turn to the more challenging
analytic tasks at hand.
Source: 2009 Pherson Associates, LLC.
1. This chapter was inspired by and draws on the research done by the
Group Brain Project at Harvard University. That project was supported by
the National Science Foundation and the CIA Intelligence Technology
Innovation Center. See in particular J. Richard Hackman and Anita W.
Woolley, “Creating and Leading Analytic Teams,” Technical Report 5
(February 2007), http://groupbrain.wjh.harvard.edu/publications.html.
2. Ibid., 8.
11. Ibid., 76–78.
12. This paragraph and the previous paragraph express the authors’
professional judgment based on personal experience and anecdotal
evidence gained in discussion with other experienced analysts. As
discussed in chapter 13, there is a clear need for systematic research on
this topic and other variables related to the effectiveness of structured
analytic techniques.
14. Martha Lagace, “Four Questions for David Garvin and Michael
Roberto,” Working Knowledge: A First Look at Faculty Research, Harvard
Business School weekly newsletter, October 15, 2001,
http://hbswk.hbs.edu/item/3568.html.
15. Ibid.
16. The table is from David A. Garvin and Michael A. Roberto, “What
You Don’t Know About Making Decisions,” Working Knowledge: A First
Look at Faculty Research, Harvard Business School weekly newsletter,
October 15, 2001, http://hbswk.hbs.edu/item/2544.html.
article is no longer available online.
13 Validation of Structured Analytic
Techniques
The authors of this book are often asked for proof of the effectiveness of
structured analytic techniques. The testing of these techniques in the U.S.
Intelligence Community has been done largely through the experience of
using them. That experience has certainly been successful, but it has not
been sufficient to persuade skeptics reluctant to change their long-
ingrained habits or many academics accustomed to looking for hard,
empirical evidence. The question to be addressed in this chapter is how to
go about testing or demonstrating the effectiveness of structured analytic
techniques not just in the intelligence profession, but in other disciplines as
well. The same process should also be applicable to the use of structured
techniques in business, medicine, and many other fields that consistently
deal with probabilities and uncertainties rather than hard data.
Two factors raise questions about the practical feasibility of valid
empirical testing of structured analytic techniques as used in intelligence.
First, these techniques are commonly used as a group process. That would
require testing with groups of analysts rather than individual analysts.
Second, intelligence deals with issues of high uncertainty. Former CIA
director Michael Hayden wrote that because of the inherent uncertainties
in intelligence analysis, a record of 70 percent accuracy is a good
performance.1 If this is true, a single experiment in which the use of a
structured technique leads to a wrong answer is meaningless. Multiple
repetitions of the same experiment would be needed to evaluate how often
the analytic judgments were accurate.
major purpose of scenario development is to identify indicators and
milestones for each potential scenario. The indicators and milestones can
then be monitored to gain early warning of the direction in which events
seem to be heading. Tetlock’s experiments did not use scenarios in this
way.
Similarly, much empirical evidence suggests that the human mind tends to
see what it is looking for and often misses what it is not looking for. This
is why it is useful to develop scenarios and indicators of possible future
events for which intelligence needs to provide early warning. These
techniques can help collectors target needed information; and, for analysts,
they prepare the mind to recognize the early signs of significant change.
Given the necessary role that assumptions play when making intelligence
judgments based on incomplete and ambiguous information, it seems
likely that an analyst who uses the Key Assumptions Check will, on
average, do a better job than an analyst who makes no effort to identify
and validate assumptions. There is also extensive empirical evidence that
reframing a question helps to unblock the mind and enables one to see
other perspectives.
time well spent; they say it saves time in the long run once they have
learned this technique.
We believe the most feasible and effective approach for the empirical
evaluation of structured analytic techniques is to look at the purpose for
which a technique is being used, and then test whether or not it actually
achieves that purpose or if the purpose can be achieved in some better
way. This section lays out a program for empirical documentation of the
value gained by the application of structured analytic techniques. This
approach is embedded in the reality of how analysis is actually done in the
U.S. Intelligence Community. We then show how this approach might be
applied to the analysis of three specific techniques.
Step 1 is to agree on the most common causes of analytic error and which
structured techniques are most useful in preventing such errors.
product. It does this by:
provides the expected benefits. Acquisition of evidence for or against these
benefits is not limited to the results of empirical experiments. It includes
structured interviews of analysts, managers, and customers; observations
of meetings of analysts as they use these techniques; and surveys as well
as experiments.
Source: 2009 Pherson Associates, LLC.
(1) How successful is the technique in achieving this goal? and (2) How does
that improve the quality and perhaps accuracy of analysis? The following
is a list of studies that might answer these questions:
Cross-Impact Matrix
The Cross-Impact Matrix described in chapter 5 is an old technique that
Richards Heuer tested many years ago but that, as far as we know, is not
now being used in the U.S. Intelligence Community. Heuer
recommends that it become part of the standard drill early in a project,
when an analyst is still in a learning mode, trying to sort out a complex
situation. It is the next logical step after a brainstorming session is used to
identify all the relevant factors or variables that may influence a situation.
All the factors identified by the brainstorming are put into a matrix, which
is then used to guide discussion of how each factor influences all other
factors to which it is judged to be related in the context of a particular
problem. Such a group discussion of how each pair of variables interacts is
an enlightening learning experience and a good basis on which to build
further analysis and ongoing collaboration.
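The bookkeeping behind a Cross-Impact Matrix is simple enough to sketch in code. The following is a minimal illustration only, not part of the technique as published: the variable names, the -2 to +2 rating scale, and the example judgments are hypothetical stand-ins for what a group discussion would actually produce.

```python
# Minimal sketch of a Cross-Impact Matrix. For each ordered pair of
# brainstormed variables, record a judged influence of one on the other.
# Ratings are hypothetical: +2 strong positive, +1 weak positive,
# 0 none, -1 weak negative, -2 strong negative.

variables = ["political will", "economic stress", "media attention"]

# matrix[a][b] = judged impact of variable a ON variable b (a != b)
matrix = {a: {b: 0 for b in variables if b != a} for a in variables}

# Example judgments produced by a group discussion (illustrative only)
matrix["economic stress"]["political will"] = -2
matrix["media attention"]["political will"] = 1
matrix["political will"]["media attention"] = 1

def total_influence(var):
    """Sum of absolute impacts this variable exerts on all others."""
    return sum(abs(v) for v in matrix[var].values())

# Rank variables by how much they drive the rest of the system
ranked = sorted(variables, key=total_influence, reverse=True)
print(ranked[0])  # the most influential variable under these ratings
```

The ranking step is not part of the discussion-focused technique itself, but it suggests where the group's attention should go first: the variables that exert the most influence on the rest of the system.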
Indicators Validator™
The Indicators Validator™ described in chapter 6 is a relatively new
technique developed by Randy Pherson to test the power of a set of
indicators to provide early warning of future developments, such as which
of several potential scenarios seems to be developing. It uses a matrix
similar to an ACH matrix, with scenarios listed across the top and
indicators down the left side. For each combination of indicator and
scenario, the analyst uses a five-point scale to rate the likelihood that the
indicator will or will not be seen if that scenario is developing. These ratings
measure each indicator's diagnostic value, that is, its ability to distinguish
which scenario is becoming most likely.
Analysts often discover that indicators have little or no diagnostic value
because they are consistent with multiple scenarios. This happens because,
when identifying indicators, analysts typically look for indicators that are
consistent with the scenario they are concerned about identifying. They do
not consider that an indicator's value is diminished if it is also consistent
with other scenarios.
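The diagnosticity logic described above can be sketched as a small computation. This is one illustrative reading of the technique, not Pherson's implementation: the scenarios, indicators, and numeric encoding of the five-point scale are all hypothetical.

```python
# Sketch of the Indicators Validator(TM) idea: rate how likely each
# indicator is to be seen under each scenario (5 = highly likely ...
# 1 = highly unlikely). An indicator rated about the same under every
# scenario cannot discriminate between them.

scenarios = ["status quo", "gradual reform", "collapse"]

# Hypothetical ratings: indicator -> one rating per scenario, in the
# same order as the scenarios list above.
ratings = {
    "mass protests":        [2, 3, 5],
    "new trade agreement":  [3, 4, 2],
    "government statement": [4, 4, 4],   # likely under every scenario
}

def diagnosticity(scores):
    """Spread between best and worst rating; 0 means no diagnostic value."""
    return max(scores) - min(scores)

# Flag indicators that cannot distinguish between scenarios
non_diagnostic = [ind for ind, s in ratings.items() if diagnosticity(s) == 0]
print(non_diagnostic)  # ['government statement']
```

A spread of zero flags exactly the failure mode described in the text: an indicator consistent with every scenario tells the analyst nothing about which scenario is unfolding.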
Several years ago, Nancy Dixon conducted a study for the Defense
Intelligence Agency on the “lessons learned problem” across the U.S.
Intelligence Community and within the DIA specifically. This study
confirmed a disconnect between the so-called lessons learned and the
implementation of these lessons. Dixon found that “the most frequent
concern expressed across the Intelligence Community is the difficulty of
getting lessons implemented.” She attributes this to two major factors:
2. One of the best examples of research that does meet this comparability
standard is Master Sergeant Robert D. Folker Jr., Intelligence Analysis in
Theater Joint Intelligence Centers: An Experiment in Applying Structured
Methods (Washington, DC: Joint Military Intelligence College, 2000).
6. Charlan J. Nemeth and Brendan Nemeth-Brown, “Better Than
Individuals: The Potential Benefits of Dissent and Diversity for Group
Creativity,” in Group Creativity: Innovation through Collaboration, ed.
Paul B. Paulus and Bernard A. Nijstad (New York: Oxford University
Press, 2003), 63–64.
10. Nancy Dixon, “The Problem and the Fix for the U.S. Intelligence
Agencies’ Lessons Learned,” July 1, 2009,
http://www.nancydixonblog.com/2009/07/the-problem-and-the-fix-for-the-
us-intelligence-agencies-lessons-learned.html.
14 The Future of Structured Analytic
Techniques
In this chapter, we imagine that it is now the year 2020 and the
use of structured analytic techniques is widespread. We present our vision
of what has happened to make this a reality and how the use of structured
analytic techniques has transformed the way analysis is done, not only in
intelligence but across a broad range of disciplines.
14.1 Structuring the Data
The analysis for the case study starts with a brainstormed list of variables
that will influence, or be impacted by, the use of structured analytic
techniques over the next five years or so. The first variable listed is the
target variable, followed by nine other variables related to it.
The next step in Complexity Manager is to put these ten variables into a
Cross-Impact Matrix. This is a tool for the systematic description of the
two-way interaction between each pair of variables. Each pair is assessed
by asking the following question: Does this variable impact the paired
variable in a manner that will contribute to increased or decreased use of
structured analytic techniques in 2020? The completed matrix is shown in
Figure 14.1. This is the same matrix as that shown in chapter 11.
To fill in the matrix, the authors started with column A to assess the impact
of each of the variables listed down the left side of the matrix on the
frequency of use of structured analysis. This exercise provides an
overview of all the variables that will impact positively or negatively on
the use of structured analysis. Next, the authors completed row A across
the top of the matrix. This shows the reverse impact—the impact of
increased use of structured analysis on the other variables listed across the
top of the matrix. Here one identifies the second-tier effects. Does the
growing use of structured analytic techniques impact any of these other
variables in ways that one needs to be aware of?1
The remainder of the matrix was then completed one variable at a time,
while identifying and making notes on potentially significant secondary
effects. A secondary effect occurs when one variable strengthens or
weakens another variable, which in turn impacts on or is impacted by
structured analytic techniques.
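A secondary effect of the kind just described can be detected mechanically once the matrix is filled in. The sketch below is a hypothetical illustration, with "A" standing in for the target variable and the ratings invented for the example.

```python
# Sketch of spotting secondary effects in a cross-impact matrix:
# variable x affects variable y, which in turn affects the target.
# Ratings are hypothetical (-2 .. +2); "A" is the target variable.

impact = {
    "A": {"B": 1, "C": 0},
    "B": {"A": 2, "C": 1},
    "C": {"A": 0, "B": 2},
}

target = "A"
# A secondary effect on the target exists when the chain x -> y -> target
# is non-zero, even if x has little direct effect on the target itself.
secondary = [
    (x, y)
    for x in impact
    for y in impact[x]
    if x != target and y != target
    and impact[x][y] != 0 and impact[y][target] != 0
]
print(secondary)  # [('C', 'B')]
```

Here C has no direct impact on the target, but it strengthens B, which does; that is precisely the kind of note-worthy chain the matrix walk-through is meant to surface.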
The analysis then focuses on those variables that (1) are changing or have the
potential to change and (2) have the greatest impact on other significant variables.
The principal drivers of the system—and, indeed, the variables with the
most cross-impact on other variables as shown in the matrix—are the
extent to which (1) senior executives promote a culture of collaboration, (2)
the work environment supports the development of virtual collaborative
technologies, and (3) customers indicate they are seeking more rigorous
and collaborative analysis delivered on electronic devices such as tablets.
These three variables provide strong support to structured analysis through
their endorsement of and support for collaboration. Structured analysis
reinforces them in turn by providing an optimal process through which
collaboration occurs.
One assumption is that organizations will continue to invest in structured
techniques. This is critical to fund analytic tradecraft and collaboration
support cells, facilitation support, mentoring programs, research on the
effectiveness of structured analytic techniques, and other support for the
learning and use of structured analysis. A second assumption is that senior
executives will have the wisdom to allocate the necessary personnel,
funding, and training to support collaboration and will create the necessary
incentives to foster a broader use of structured techniques across their
organizations.
Analysts can put on headphones at their desks and, with just a click of a
mouse, find themselves sitting as avatars in a virtual meeting room
conferring with experts from multiple geographic locations who are also
represented by avatars in the same room. They can post their papers for
others in the virtual world to read and edit, call up an Internet site that
merits examination, or project what they see on their own desktop
computer screen so that others, for example, can view how they are using a
particular computer tool. The avatar platform also allows an analyst or a
group of analysts to be mentored “on demand” by a senior analyst on how
to use a particular technique or by an instructor to conduct an analytic
techniques workshop without requiring anyone to leave his or her desk.
Analysts in different offices and agencies now routinely work together
toward a common goal. There is a basic set of techniques and critical
thinking skills that collaborating teams or groups commonly use at the
beginning of most projects to establish a shared foundation for their
communication and work together. This includes Virtual Brainstorming to
identify a list of relevant variables to be tracked and considered; Cross-
Impact Matrix as a basis for discussion and learning from one another
about the relationships between key variables; and Key Assumptions
Check to discuss assumptions about how things normally work in the topic
area of common interest. Judgments about the cross-impacts between
variables and the key assumptions are drafted collaboratively and posted
electronically. This process establishes a common base of knowledge and
understanding about a topic of interest. It also identifies at an early stage of
the collaborative process any potential differences of opinion, ascertains
gaps in available information, and determines which analysts are most
knowledgeable about various aspects of the project.
The process for coordinating analytic papers and assessments has changed
dramatically. Formal coordination prior to publication is usually a mere
formality, because collaboration among interested parties takes place as
papers are first being conceptualized, and all relevant information and
intelligence is shared. AIMS (Audience, Issue, Message, and Storyline),
the Getting Started Checklist, Key Assumptions Check, and other basic
critical thinking and structured analytic techniques are being used
regularly, and differences of opinion are explored early in the preparation
of an analytic product. New analytic techniques, such as Premortem
Analysis, Structured Self-Critique, and Adversarial Collaboration, have
almost become a requirement to bullet-proof analytic products and resolve
disagreements as much as possible prior to the final coordination.
Exploitation of outside knowledge—especially cultural, environmental,
and technical expertise—has increased significantly. The Delphi Method is
used extensively as a flexible procedure for obtaining ideas, judgments, or
forecasts electronically on avatar-based collaborative systems from
geographically dispersed panels of experts.
By 2020, the use of structured analytic techniques has expanded across the
globe. All U.S. intelligence agencies, almost all intelligence services in
Europe, and a dozen other services in other parts of the world have
incorporated structured techniques into their analytic processes. Over
twenty Fortune 50 companies with competitive intelligence units—and a
growing number of hospitals—have incorporated selected structured
techniques, including the Key Assumptions Check, ACH, and Premortem
Analysis, into their analytic processes, concluding that they could no
longer afford multimillion-dollar mistakes that could have been avoided by
engaging in more rigorous analysis as part of their business processes.
One no longer hears the old claim that there is no proof that the use of
structured analytic techniques improves analysis. The widespread use of
structured techniques in 2020 is partially attributable to the debunking of
that claim. Canadian studies involving a sample of reports prepared with
the assistance of several structured techniques and a comparable sample of
reports where structured techniques had not been used showed that the use
of structured techniques had distinct value. Researchers interviewed the
authors of the reports, their managers, and the customers who received
these reports. The study confirmed that reports prepared with the
assistance of the selected structured techniques were more thorough,
provided better accounts of how the conclusions were reached, and
generated greater confidence in the conclusions than did reports for which
such techniques were not used. These findings were replicated by other
intelligence services that used the techniques, and this was sufficient to
quiet most of the doubters.
Techniques such as Quadrant Crunching™ and What If? Analysis are commonly discussed among
analysts and decision makers alike. In some cases, decision makers or their
aides even observe or participate in Role Playing exercises using
structured techniques. These interactions help both customers and analysts
understand the benefits and limitations of using collaborative processes
and tools to produce analysis that informs and augments policy
deliberations.
[Figure: structured analytic techniques mapped across six domains. Connections to techniques in other domains are shown by a thin gray line; techniques that often make use of indicators are marked with stars.]
1. For a more detailed explanation of how each variable was rated in the
Complexity Analysis matrix, send an e-mail requesting the data to
[email protected].
Strategic Multilayer Assessment (SMA), Multi-Agency/Multi-Disciplinary
White Papers in Support of Counter-Terrorism and Counter-WMD, Office
of Secretary of Defense/DDR&E/RTTO, accessible at
http://www.hsdl.org/?view&did=712792.