Structured Analytic Techniques for Intelligence Analysis


Contents

Figures
Foreword by John McLaughlin
Preface
1. Introduction and Overview
1.1 Our Vision
1.2 Two Types of Thinking
1.3 Dealing with Bias
1.4 Role of Structured Analytic Techniques
1.5 Value of Team Analysis
1.6 History of Structured Analytic Techniques
1.7 Selection of Techniques for This Book
1.8 Quick Overview of Chapters
2. Building a System 2 Taxonomy
2.1 Taxonomy of System 2 Methods
2.2 Taxonomy of Structured Analytic Techniques
3. Choosing the Right Technique
3.1 Core Techniques
3.2 Making a Habit of Using Structured Techniques
3.3 One Project, Multiple Techniques
3.4 Common Errors in Selecting Techniques
3.5 Structured Technique Selection Guide
4. Decomposition and Visualization
4.1 Getting Started Checklist
4.2 AIMS (Audience, Issue, Message, and Storyline)
4.3 Customer Checklist
4.4 Issue Redefinition
4.5 Chronologies and Timelines
4.6 Sorting
4.7 Ranking, Scoring, and Prioritizing
4.7.1 The Method: Ranked Voting
4.7.2 The Method: Paired Comparison
4.7.3 The Method: Weighted Ranking
4.8 Matrices
4.9 Venn Analysis
4.10 Network Analysis
4.11 Mind Maps and Concept Maps

4.12 Process Maps and Gantt Charts
5. Idea Generation
5.1 Structured Brainstorming
5.2 Virtual Brainstorming
5.3 Nominal Group Technique
5.4 Starbursting
5.5 Cross-Impact Matrix
5.6 Morphological Analysis
5.7 Quadrant Crunching™
6. Scenarios and Indicators
6.1 Scenarios Analysis
6.1.1 The Method: Simple Scenarios
6.1.2 The Method: Cone of Plausibility
6.1.3 The Method: Alternative Futures Analysis
6.1.4 The Method: Multiple Scenarios Generation
6.2 Indicators
6.3 Indicators Validator™
7. Hypothesis Generation and Testing
7.1 Hypothesis Generation
7.1.1 The Method: Simple Hypotheses
7.1.2 The Method: Multiple Hypotheses Generator™
7.1.3 The Method: Quadrant Hypothesis Generation
7.2 Diagnostic Reasoning
7.3 Analysis of Competing Hypotheses
7.4 Argument Mapping
7.5 Deception Detection
8. Assessment of Cause and Effect
8.1 Key Assumptions Check
8.2 Structured Analogies
8.3 Role Playing
8.4 Red Hat Analysis
8.5 Outside-In Thinking
9. Challenge Analysis
9.1 Premortem Analysis
9.2 Structured Self-Critique
9.3 What If? Analysis
9.4 High Impact/Low Probability Analysis
9.5 Devil’s Advocacy
9.6 Red Team Analysis
9.7 Delphi Method

10. Conflict Management
10.1 Adversarial Collaboration
10.2 Structured Debate
11. Decision Support
11.1 Decision Trees
11.2 Decision Matrix
11.3 Pros-Cons-Faults-and-Fixes
11.4 Force Field Analysis
11.5 SWOT Analysis
11.6 Impact Matrix
11.7 Complexity Manager
12. Practitioner’s Guide to Collaboration
12.1 Social Networks and Analytic Teams
12.2 Dividing the Work
12.3 Common Pitfalls with Small Groups
12.4 Benefiting from Diversity
12.5 Advocacy versus Objective Inquiry
12.6 Leadership and Training
13. Validation of Structured Analytic Techniques
13.1 Limits of Empirical Analysis
13.2 Establishing Face Validity
13.3 A Program for Empirical Validation
13.4 Recommended Research Program
14. The Future of Structured Analytic Techniques
14.1 Structuring the Data
14.2 Key Drivers
14.3 Imagining the Future: 2020

More Advance Praise for Structured
Analytic Techniques for Intelligence
Analysis

“Structured Analytic Techniques is an indispensable companion for all analysts and policy-makers who strive to instill transparency, rigor, and foresight into their everyday analytic routines, provide sound argumentation for policy decisions, and avoid surprise. As the authors so vividly point out: Our brain is not always our friend. Good analysis and foresight mean that we need to constantly keep questioning ourselves. This book tells us how to do this in a structured and systematic manner. It is easy to use, practice-oriented, and convinces even sceptics of the necessity of applying structured techniques. The 55 techniques in this book will undoubtedly contribute to advancing foresight and critical thinking skills in Germany’s policy community. Randy Pherson’s extraordinary experience in teaching SATs has already contributed significantly to the training aspects of our efforts to enhance strategic foresight at the federal level in Germany.”

—Kathrin Brockmann

Head of the Government Foresight Project at the Berlin-based stiftung neue verantwortung and Analyst at the Futures Analysis Section, German Armed Forces Planning Office

“A hearty thanks to Heuer and Pherson for writing—and updating—this key work for the aspiring intelligence analyst—or anyone interested in sound approaches to analyzing complex, ambiguous information. This was the only textbook we considered when developing our Intelligence Analysis Techniques course because it is comprehensive, well-organized, and practical. The second edition has several valuable improvements that will greatly help both instructors and students. Of particular note is the discussion of dual-process thinking and cognitive biases and how structured analytic techniques can help analysts avoid such biases. The second edition should provide an unsurpassed learning opportunity for our students, particularly when used in conjunction with Beebe and Pherson’s Cases in Intelligence Analysis.”

—Alan More

Adjunct Professor, Intelligence Studies, George Mason University

“Competitive Intelligence has been struggling with the Information Cycle for more than two decades. With this book, Richards J. Heuer Jr. and Randolph Pherson are releasing us from the traditional, and mostly intuitive, methods that go with it. They lay the foundation for a new approach to intelligence in business if we take off our blinders and investigate new methods in other fields. It provides an unprecedented and practical baseline for developing a new culture of information sharing in intelligence activities writ large.”

—Dr. Pascal Frion

President of Acrie Competitive Intelligence Network and 2013 recipient of the French Competitive Intelligence Academy Award

“Heuer and Pherson have written a book that provides law enforcement
intelligence and crime analysts with numerous techniques to assist in
homeland security and crime prevention. The book is a must read for
analysts in the law enforcement community responsible for analyzing
intelligence and crime data. Analysis of Competing Hypotheses is but one
non-traditional example of a tool that helps them challenge assumptions,
identify investigative leads and trends, and anticipate future
developments.”

—Major Jesse McLendon (ret.)

North Kansas City Police, North Kansas City, Missouri

“Heuer and Pherson’s Structured Analytic Techniques for Intelligence Analysis has become a classic in intelligence literature. Already a standard text in numerous universities and government agencies around the world, the 2nd edition will continue to be required reading for Denmark’s current and future intelligence analysts. Its techniques are taught at the University of Copenhagen and the book represents the core literature for analysis simulation exercises, in which graduate students at the Department of Political Science practice the art and science of intelligence analysis under the supervision of senior government intelligence analysts.”

—Morten Hansen

Lecturer in intelligence studies, Department of Political Science, University of Copenhagen

“Heuer and Pherson’s Structured Analytic Techniques is the standard text for learning how to conduct intelligence analysis. This handbook provides a panoply of critical thinking methodologies suitable to any issue that intelligence analysts may encounter. Used by both government practitioners and intelligence studies students throughout the world, the book’s techniques have redefined critical thinking.”

—Dr. Melissa Graves

Associate Director, Center for Intelligence and Security Studies, The University of Mississippi

“Heuer and Pherson are the leading practitioners, innovators, and teachers
of the rigorous use of structured analytic techniques. Their work stands out
above all others in explaining and evaluating the utility of such methods
that can appreciably raise the standards of analysis. The methods they
present stimulate the imagination, enhance the rigor, and apply to hard
intelligence problems as well as other areas requiring solid analysis. This
new, expanded edition is a must-have resource for any serious analyst’s
daily use as well as one’s professional bookshelf.”

—Roger Z. George and James B. Bruce

Adjunct professors, Center for Security Studies, Georgetown University, and co-editors, Analyzing Intelligence: National Security Practitioners’ Perspectives

“The science of reasoning has grown considerably over the past 40-odd years. Among the many fascinating aspects of the human intellect is the ability to amplify our own capabilities by creating analytic tools. The tools in this book are for those whose profession often requires making judgments based on incomplete and ambiguous information. You hold in your hands the toolkit for systematic analytic methods and critical thinking. This is a book you can read and then actually apply to accomplish something. Like any good toolkit, it has some simple tools that explain themselves, some that need explanation and guidance, and some that require considerable practice. This book helps us in our quest to enrich our expertise and expand our reasoning skill.”

—Robert R. Hoffman

Institute for Human & Machine Cognition

Structured Analytic Techniques for
Intelligence Analysis
Second Edition

CQ Press, an imprint of SAGE, is the leading publisher of books,
periodicals, and electronic products on American government and
international affairs. CQ Press consistently ranks among the top
commercial publishers in terms of quality, as evidenced by the numerous
awards its products have won over the years. CQ Press owes its existence
to Nelson Poynter, former publisher of the St. Petersburg Times, and his
wife Henrietta, with whom he founded Congressional Quarterly in 1945.
Poynter established CQ with the mission of promoting democracy through
education and in 1975 founded the Modern Media Institute, renamed The
Poynter Institute for Media Studies after his death. The Poynter Institute
(www.poynter.org) is a nonprofit organization dedicated to training
journalists and media leaders.

In 2008, CQ Press was acquired by SAGE, a leading international publisher of journals, books, and electronic media for academic, educational, and professional markets. Since 1965, SAGE has helped
inform and educate a global community of scholars, practitioners,
researchers, and students spanning a wide range of subject areas, including
business, humanities, social sciences, and science, technology, and
medicine. A privately owned corporation, SAGE has offices in Los
Angeles, London, New Delhi, and Singapore, in addition to the
Washington, D.C., office of CQ Press.

Structured Analytic Techniques for
Intelligence Analysis
Second Edition

Richards J. Heuer Jr.


Randolph H. Pherson

SAGE

Los Angeles
London
New Delhi
Singapore
Washington DC

FOR INFORMATION:

CQ Press

An Imprint of SAGE Publications, Inc.

2455 Teller Road

Thousand Oaks, California 91320

E-mail: [email protected]

SAGE Publications Ltd.

1 Oliver’s Yard

55 City Road

London EC1Y 1SP

United Kingdom

SAGE Publications India Pvt. Ltd.

B 1/I 1 Mohan Cooperative Industrial Area

Mathura Road, New Delhi 110 044

India

SAGE Publications Asia-Pacific Pte. Ltd.

3 Church Street

#10-04 Samsung Hub

Singapore 049483

Copyright © 2015 by CQ Press, an Imprint of SAGE Publications, Inc. CQ Press is a registered trademark of Congressional Quarterly Inc.

All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.

Printed in the United States of America

Library of Congress Cataloging-in-Publication Data

Heuer, Richards J.

Structured analytic techniques for intelligence analysis / by Richards J. Heuer Jr. and Randolph H. Pherson.—Second Edition.

pages cm

ISBN 978-1-4522-4151-7

1. Intelligence service—United States. 2. Intelligence service—Methodology. I. Pherson, Randolph H. II. Title.

JK468.I6H478 2015

327.12—dc23 2014000255

This book is printed on acid-free paper.

14 15 16 17 18 10 9 8 7 6 5 4 3 2 1

Acquisitions Editors: Sarah Calabi, Charisse Kiino

Editorial Assistant: Davia Grant

Production Editor: David C. Felts

Typesetter: C&M Digitals (P) Ltd.

Copy Editor: Talia Greenberg

Proofreader: Sally Jaskold

Cover Designer: Glenn Vogel

Interior Graphics Designer: Adriana M. Gonzalez

Marketing Manager: Amy Whitaker

Figures
2.0 System 1 and System 2 Thinking
2.2 Eight Families of Structured Analytic Techniques
3.0 Value of Using Structured Techniques to Perform Key Tasks
3.2 The Five Habits of the Master Thinker
4.4 Issue Redefinition Example
4.5 Timeline Estimate of Missile Launch Date
4.7a Paired Comparison Matrix
4.7b Weighted Ranking Matrix
4.8 Rethinking the Concept of National Security: A New Ecology
4.9a Venn Diagram of Components of Critical Thinking
4.9b Venn Diagram of Invalid and Valid Arguments
4.9c Venn Diagram of Zambrian Corporations
4.9d Zambrian Investments in Global Port Infrastructure Projects
4.10a Social Network Analysis: The September 11 Hijackers
4.10b Social Network Analysis: September 11 Hijacker Key Nodes
4.10c Social Network Analysis
4.11a Concept Map of Concept Mapping
4.11b Mind Map of Mind Mapping
4.12 Gantt Chart of Terrorist Attack Planning
5.1 Picture of Structured Brainstorming
5.4 Starbursting Diagram of a Lethal Biological Event at a Subway Station
5.5 Cross-Impact Matrix
5.6 Morphological Analysis: Terrorist Attack Options
5.7a Classic Quadrant Crunching™: Creating a Set of Stories
5.7b Terrorist Attacks on Water Systems: Flipping Assumptions
5.7c Terrorist Attacks on Water Systems: Sample Matrices
5.7d Selecting Attack Plans
6.1.1 Simple Scenarios
6.1.2 Cone of Plausibility
6.1.3 Alternative Futures Analysis: Cuba
6.1.4a Multiple Scenarios Generation: Future of the Iraq Insurgency
6.1.4b Future of the Iraq Insurgency: Using Spectrums to Define Potential Outcomes
6.1.4c Selecting Attention-Deserving and Nightmare Scenarios
6.2a Descriptive Indicators of a Clandestine Drug Laboratory
6.2b Using Indicators to Track Emerging Scenarios in Zambria
6.2c Zambria Political Instability Indicators
6.3a Indicators Validator™ Model
6.3b Indicators Validator™ Process
7.1.1 Simple Hypotheses
7.1.2 Multiple Hypothesis Generator™: Generating Permutations
7.1.3 Quadrant Hypothesis Generation: Four Hypotheses on the Future of Iraq
7.3a Creating an ACH Matrix
7.3b Coding Relevant Information in ACH
7.3c Evaluating Levels of Disagreement in ACH
7.4 Argument Mapping: Does North Korea Have Nuclear Weapons?
8.1 Key Assumptions Check: The Case of Wen Ho Lee
8.4 Using Red Hat Analysis to Catch Bank Robbers
8.5 Inside-Out Analysis versus Outside-In Approach
9.0 Mount Brain: Creating Mental Ruts
9.1 Structured Self-Critique: Key Questions
9.3 What If? Scenario: India Makes Surprising Gains from the Global Financial Crisis
9.4 High Impact/Low Probability Scenario: Conflict in the Arctic
9.7 Delphi Technique
11.2 Decision Matrix
11.3 Pros-Cons-Faults-and-Fixes Analysis
11.4 Force Field Analysis: Removing Abandoned Cars from City Streets
11.5 SWOT Analysis
11.6 Impact Matrix: Identifying Key Actors, Interests, and Impact
11.7 Variables Affecting the Future Use of Structured Analysis
12.1a Traditional Analytic Team
12.1b Special Project Team
12.2 Wikis as Collaboration Enablers
12.5 Advocacy versus Inquiry in Small-Group Processes
12.6 Effective Small-Group Roles and Interactions
13.3 Three Approaches to Evaluation
14.1 Variables Affecting the Future Use of Structured Analysis

Foreword

John McLaughlin

Senior Research Fellow, Paul H. Nitze School of Advanced International Studies, Johns Hopkins University

Former Deputy Director, Central Intelligence Agency and Acting Director of Central Intelligence

As intensively as America’s Intelligence Community has been studied and critiqued, little attention has typically been paid to intelligence analysis.
Most assessments focus on such issues as overseas clandestine operations
and covert action, perhaps because they accord more readily with popular
images of the intelligence world.

And yet, analysis has probably never been a more important part of the
profession—or more needed by policymakers. In contrast to the bipolar
dynamics of the Cold War, this new world is strewn with failing states,
proliferation dangers, regional crises, rising powers, and dangerous
nonstate actors—all at play against a backdrop of exponential change in
fields as diverse as population and technology.

To be sure, there are still precious secrets that intelligence collection must
uncover—things that are knowable and discoverable. But this world is
equally rich in mysteries having to do more with the future direction of
events and the intentions of key actors. Such things are rarely illuminated
by a single piece of secret intelligence data; they are necessarily subjects
for analysis.

Analysts charged with interpreting this world would be wise to absorb the
thinking in this book by Richards Heuer and Randy Pherson and in
Heuer’s earlier work The Psychology of Intelligence Analysis. The reasons
are apparent if one considers the ways in which intelligence analysis
differs from similar fields of intellectual endeavor.

Intelligence analysts must traverse a minefield of potential errors.

✶ First, they typically must begin addressing their subjects where others have left off; in most cases the questions they get are about
what happens next, not about what is known.
✶ Second, they cannot be deterred by lack of evidence. As Heuer
pointed out in his earlier work, the essence of the analysts’ challenge
is having to deal with ambiguous situations in which information is
never complete and arrives only incrementally—but with constant
pressure to arrive at conclusions.
✶ Third, analysts must frequently deal with an adversary that
actively seeks to deny them the information they need and is often
working hard to deceive them.
✶ Finally, for all of these reasons, analysts live with a high degree of
risk—essentially the risk of being wrong and thereby contributing to
ill-informed policy decisions.

The risks inherent in intelligence analysis can never be eliminated, but one
way to minimize them is through more structured and disciplined thinking
about thinking. On that score, I tell my students at the Johns Hopkins
School of Advanced International Studies that the Heuer book is probably
the most important reading I give them, whether they are heading into the
government or the private sector. Intelligence analysts should reread it
frequently. In addition, Randy Pherson’s work over the past six years to
develop and refine a suite of structured analytic techniques offers
invaluable assistance by providing analysts with specific techniques they
can use to combat mindsets, groupthink, and all the other potential pitfalls
of dealing with ambiguous data in circumstances that require clear and
consequential conclusions.

The book you now hold augments Heuer’s pioneering work by offering a
clear and more comprehensive menu of more than fifty techniques to build
on the strategies he earlier developed for combating perceptual errors. The
techniques range from fairly simple exercises that a busy analyst can use
while working alone—the Key Assumptions Check, Indicators
Validator™, or What If? Analysis—to more complex techniques that work
best in a group setting—Structured Brainstorming, Analysis of Competing
Hypotheses, or Premortem Analysis.

The key point is that all analysts should do something to test the
conclusions they advance. To be sure, expert judgment and intuition have
their place—and are often the foundational elements of sound analysis—
but analysts are likely to minimize error to the degree they can make their underlying logic explicit in the ways these techniques demand.

Just as intelligence analysis has seldom been more important, the stakes in
the policy process it informs have rarely been higher. Intelligence analysts
these days therefore have a special calling, and they owe it to themselves
and to those they serve to do everything possible to challenge their own
thinking and to rigorously test their conclusions. The strategies offered by
Richards Heuer and Randy Pherson in this book provide the means to do
precisely that.

Preface

Origin and Purpose


The investigative commissions that followed the terrorist attacks of 2001
and the erroneous 2002 National Intelligence Estimate on Iraq’s weapons
of mass destruction clearly documented the need for a new approach to
how analysis is conducted in the U.S. Intelligence Community. Attention
focused initially on the need for “alternative analysis”—techniques for
questioning conventional wisdom by identifying and analyzing alternative
explanations or outcomes. This approach was later subsumed by a broader
effort to transform the tradecraft of intelligence analysis by using what
have become known as structured analytic techniques. Structured analysis
involves a step-by-step process that externalizes an individual analyst’s
thinking in a manner that makes it readily apparent to others, thereby
enabling it to be shared, built on, and critiqued by others. When combined
with the intuitive judgment of subject-matter experts, such a structured and
transparent process can significantly reduce the risk of analytic error.

Our current high-tech, global environment increasingly requires collaboration among analysts with different areas of expertise and different
organizational perspectives. Structured analytic techniques are ideal for
this interaction. Each step in a technique prompts relevant discussion and,
typically, this generates more divergent information and more new ideas
than any unstructured group process. The step-by-step process of
structured analytic techniques organizes the interaction among analysts in
a small analytic group or team in a way that helps to avoid the multiple
pitfalls and pathologies that often degrade group or team performance.

Progress in the development and use of structured analytic techniques has been steady since the publication of the first edition of this book in 2011.
By defining the domain of structured analytic techniques, providing a
manual for using and testing these techniques, and outlining procedures for
evaluating and validating these techniques, the first edition laid the
groundwork for continuing improvement in how analysis is done within
the U.S. Intelligence Community and a growing number of foreign
intelligence services. In addition, the techniques have made significant
inroads into academic curricula and the business world.

This second edition of the book includes five new techniques—AIMS
(Audience, Issue, Message, and Storyline) and Venn Analysis in chapter 4,
on Decomposition and Visualization; Cone of Plausibility in chapter 6, on
Scenarios and Indicators; and Decision Trees and Impact Matrix in chapter
11, on Decision Support. We have also split the Quadrant Crunching™
technique into two parts—Classic Quadrant Crunching™ and Foresight
Quadrant Crunching™, as described in chapter 5, on Idea Generation—
and made significant revisions to four other techniques: Getting Started
Checklist, Customer Checklist, Red Hat Analysis, and Indicators
Validator™.

In the introductory chapters, we have included a discussion of System 1 and System 2 thinking (intuitive versus analytic approaches to thinking) as
they relate to structured analysis and have revised the taxonomy of
analytic procedures to show more clearly where structured analytic
techniques fit in. Chapter 3 includes a new section describing how to make
the use of structured analytic techniques a habit. We have also expanded
the discussion of how structured analytic techniques can be used to deal
with cognitive biases and intuitive traps most often encountered by
intelligence analysts. In addition, we substantially revised chapter 13, on
strategies to validate structured analytic techniques, and chapter 14, which
projects our vision of how structured techniques may be used in the future.

As the use of structured analytic techniques becomes more widespread, we anticipate that the ways these techniques are used will continue to change.
Our goal is to keep up with these changes in future editions, so we
welcome your suggestions, at any time, for updating this second edition or
otherwise enhancing its utility. To facilitate the use of these techniques,
CQ Press/SAGE published a companion book, Cases in Intelligence
Analysis: Structured Analytic Techniques in Action, with twelve case
studies and detailed exercises and lesson plans for learning how to use and
teach twenty-four of the structured analytic techniques. A second edition
of that book will be published simultaneously with this one, containing
five additional case studies and including new or updated exercises and
lesson plans for six structured techniques.

Audience for This Book


This book is for practitioners, managers, teachers, and students in the
intelligence, law enforcement, and homeland security communities, as
well as in academia, business, medicine, and the private sector. Managers,
policymakers, corporate executives, strategic planners, action officers, and
operators who depend on input from analysts to help them achieve their
goals will also find it useful. Academics and consulting companies who
specialize in qualitative methods for dealing with unstructured data will be
interested in this pathbreaking book as well.

Many of the techniques described here relate to strategic intelligence, but there is ample information on techniques of interest to law enforcement,
counterterrorism, and competitive intelligence analysis, as well as to
business consultants and financial planners with a global perspective.
Many techniques developed for these related fields have been adapted for
use in intelligence analysis, and now we are starting to see the transfer of
knowledge going in the other direction. Techniques such as Analysis of
Competing Hypotheses (ACH), Key Assumptions Check, Quadrant
Crunching™, and the Indicators Validator™ developed specifically for
intelligence analysis are now being adapted for use in other fields. New
techniques that the authors developed to fill gaps in what is currently
available for intelligence analysis are being published for the first time in
this book and have broad applicability.

Content and Design


The first three chapters describe structured analysis in general, how it fits
into the spectrum of methods used by analysts, and how to select which
techniques are most suitable for your analytic project. The next eight
chapters describe when, why, and how to use each of the techniques
contained in this volume. The final three chapters discuss the integration of
these techniques in a collaborative team project, validation strategies, and
a vision of how these techniques are likely to be used in the year 2020.

We designed the book for ease of use and quick reference. The spiral
binding allows analysts to have the book open while they follow step-by-
step instructions for each technique. We grouped the techniques into
logical categories based on a taxonomy we devised. Tabs separating each
chapter contain a table of contents for the selected chapter. Each technique
chapter starts with a description of that technique category and then
provides a brief summary of each technique covered in that chapter.

The Authors
Richards J. Heuer Jr. is best known for his book Psychology of
Intelligence Analysis and for developing and then guiding automation of
the Analysis of Competing Hypotheses (ACH) technique. Both are being
used to teach and train intelligence analysts throughout the Intelligence
Community and in a growing number of academic programs on
intelligence or national security. Long retired from the Central Intelligence
Agency (CIA), Mr. Heuer has nevertheless been associated with the
Intelligence Community in various roles for more than five decades and
has written extensively on personnel security, counterintelligence,
deception, and intelligence analysis. He has a B.A. in philosophy from
Williams College and an M.A. in international relations from the
University of Southern California, and has pursued other graduate studies
at the University of California at Berkeley and the University of Michigan.

Randolph H. Pherson is president of Pherson Associates, LLC; CEO of Globalytica, LLC; and a founding director of the nonprofit Forum
Foundation for Analytic Excellence. He teaches advanced analytic
techniques and critical thinking skills to analysts in the government and
private sector. Mr. Pherson collaborated with Richards Heuer in
developing and launching the Analysis of Competing Hypotheses software
tool, and he developed several analytic techniques for the CIA’s Sherman
Kent School, many of which were incorporated in his Handbook of
Analytic Tools and Techniques. He coauthored Critical Thinking for
Strategic Intelligence with Katherine Hibbs Pherson, Cases in Intelligence
Analysis: Structured Analytic Techniques in Action with Sarah Miller
Beebe, and the Analytic Writing Guide with Louis M. Kaiser. He has
developed a suite of collaborative web-based analytic tools, TH!NK
Suite®, including a collaborative version of ACH called Te@mACH®.
Mr. Pherson completed a twenty-eight-year career in the Intelligence
Community in 2000, last serving as National Intelligence Officer (NIO)
for Latin America. Previously at the CIA, Mr. Pherson managed the
production of intelligence analysis on topics ranging from global
instability to Latin America, served on the Inspector General’s staff, and
was chief of the CIA’s Strategic Planning and Management Staff. He is the
recipient of the Distinguished Intelligence Medal for his service as NIO
and the Distinguished Career Intelligence Medal. Mr. Pherson received his
B.A. from Dartmouth College and an M.A. in international relations from
Yale University.

Acknowledgments
The authors greatly appreciate the contributions made by Mary Boardman,
Kathrin Brockmann and her colleagues at the Stiftung Neue
Verantwortung, Nick Hare and his colleagues at the UK Cabinet Office,
Mary O’Sullivan, Kathy Pherson, John Pyrik, Todd Sears, and Cynthia
Storer to expand and improve the chapters on analytic techniques, as well
as the graphics design and editing support provided by Adriana Gonzalez
and Richard Pherson.

Both authors also recognize the large contributions many individuals made
to the first edition, reviewing all or large portions of the draft text. These
include J. Scott Armstrong, editor of Principles of Forecasting: A
Handbook for Researchers and Practitioners and professor at the Wharton
School, University of Pennsylvania; Sarah Miller Beebe, a Russian
specialist who previously served as a CIA analyst and on the National
Security Council staff; Jack Davis, noted teacher and writer on intelligence
analysis, a retired senior CIA officer, and now an independent contractor
with the CIA; Robert R. Hoffman, noted author of books on naturalistic
decision making, Institute for Human & Machine Cognition; Marilyn B.
Peterson, senior instructor at the Defense Intelligence Agency, former
president of the International Association of Law Enforcement Intelligence
Analysts, and former chair of the International Association for Intelligence
Education; and Cynthia Storer, a counterterrorism specialist and former
CIA analyst now associated with Pherson Associates, LLC. Their
thoughtful critiques, recommendations, and edits as they reviewed this
book were invaluable.

Valuable comments, suggestions, and assistance were also received from many others during the development of the first and second editions,
including Todd Bacastow, Michael Bannister, Aleksandra Bielska, Arne
Biering, Jim Bruce, Hriar Cabayan, Ray Converse, Steve Cook, John
Donelan, Averill Farrelly, Stanley Feder, Michael Fletcher, Roger George,
Jay Hillmer, Donald Kretz, Terri Lange, Darci Leonhart, Mark Lowenthal,
Elizabeth Manak, Stephen Marrin, William McGill, David Moore, Mary
O’Sullivan, Emily Patterson, Amanda Pherson, Kathy Pherson, Steve
Rieber, Grace Scarborough, Alan Schwartz, Marilyn Scott, Gudmund
Thompson, Kristan Wheaton, and Adrian “Zeke” Wolfberg. We also thank
Jonathan Benjamin-Alvarado, University of Nebraska–Omaha; Lawrence
D. Dietz, American Public University System; Bob Duval, West Virginia
University; Chaka Ferguson, Florida International University; Joseph
Gordon, National Intelligence University; Kurt Jensen, Carleton
University; and Doug Watson, George Mason University.

Richards Heuer is grateful to William Reynolds of Least Squares Software for pointing out the need for a taxonomy of analytic methods and
generating financial support through the ODNI/IARPA PAINT program
for the initial work on what subsequently evolved into chapters 1 and 2 of
the first edition. He is also grateful to the CIA’s Sherman Kent School for
Intelligence Analysis for partial funding of what evolved into parts of
chapters 3 and 12 of the first edition. This book as a whole, however, has
not been funded by the Intelligence Community.

The CQ Press team headed by editorial director Charisse Kiino did a marvelous job in managing the production of the second edition of this
book and getting it out on schedule. Copy editor Talia Greenberg, editorial
assistant Davia Grant, production editor David C. Felts, and designer
Glenn Vogel all deserve praise for the quality of their work.

The ideas, interest, and efforts of all the above contributors to this book are
greatly appreciated, but the responsibility for any weaknesses or errors
rests solely on the shoulders of the authors.

Disclaimer
All statements of fact, opinion, or analysis expressed in this book are those
of the authors and do not reflect the official positions of the Office of the
Director of National Intelligence (ODNI), the Central Intelligence Agency
(CIA), or any other U.S. government agency. Nothing in the contents
should be construed as asserting or implying U.S. government
authentication of information or agency endorsement of the authors’
views. This material has been reviewed by the ODNI and the CIA only to
prevent the disclosure of classified information.

1 Introduction and Overview

1.1 Our Vision
1.2 Two Types of Thinking
1.3 Dealing with Bias
1.4 Role of Structured Analytic Techniques
1.5 Value of Team Analysis
1.6 History of Structured Analytic Techniques
1.7 Selection of Techniques for This Book
1.8 Quick Overview of Chapters

Analysis as practiced in the intelligence, law enforcement, and business communities is steadily evolving from a mental activity done predominantly by a sole analyst to a collaborative team or group activity.1 The driving forces behind this transition include the following:

✶ The growing complexity of international issues and the consequent requirement for multidisciplinary input to most analytic products.2
✶ The need to share more information more quickly across
organizational boundaries.
✶ The dispersion of expertise, especially as the boundaries between
analysts, collectors, operators, and decision makers become blurred.
✶ The need to identify and evaluate the validity of alternative mental
models.

This transition is being enabled by advances in technology, such as new collaborative networks and communities of interest, and the mushrooming
growth of social networking practices among the upcoming generation of
analysts. The transition is being facilitated by the increasing use of
structured analytic techniques to guide the exchange of information and
reasoning among analysts in ways that identify and eliminate a wide range
of cognitive biases and other shortfalls of intuitive judgment.

1.1 Our Vision


This book defines the role and scope of structured analytic techniques as a
distinct analytic methodology that provides a step-by-step process for
dealing with the kinds of incomplete, ambiguous, and sometimes deceptive
information with which analysts must work. Structured analysis is a
mechanism by which internal thought processes are externalized in a
systematic and transparent manner so that they can be shared, built on, and
easily critiqued by others. Each technique leaves a trail that other analysts
and managers can follow to see the basis for an analytic judgment. These
techniques are used by individual analysts but are perhaps best utilized in a
collaborative team or group effort in which each step of the analytic
process exposes participants to divergent or conflicting perspectives. This
transparency helps ensure that differences of opinion among analysts are
heard and seriously considered early in the analytic process. Analysts tell
us that this is one of the most valuable benefits of any structured
technique.

Structured analysis helps analysts ensure that their analytic framework—the foundation upon which they form their analytic judgments—is as solid
as possible. By helping break down a specific analytic problem into its
component parts and specifying a step-by-step process for handling these
parts, structured analytic techniques help to organize the amorphous mass
of data with which most analysts must contend. Such techniques make our
thinking more open and available for review and critique by ourselves as
well as by others. This transparency enables the effective communication
at the working level that is essential for intraoffice and interagency
collaboration.
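
To make this concrete, here is a minimal sketch, in Python, of one way an analyst might externalize a problem's component parts so that each can be reviewed separately. The field names and example content are our own hypothetical illustration, not a format the techniques themselves prescribe.

from dataclasses import dataclass, field

@dataclass
class AnalyticProblem:
    """Externalized components of an analytic problem, open to critique."""
    question: str                                     # the issue as currently defined
    assumptions: list = field(default_factory=list)   # candidates for a Key Assumptions Check
    hypotheses: list = field(default_factory=list)    # alternative explanations to be tested
    evidence: list = field(default_factory=list)      # relevant information, with sourcing noted
    gaps: list = field(default_factory=list)          # what still needs to be collected

# Hypothetical example content for illustration only.
problem = AnalyticProblem(
    question="Will Country X's government survive the next twelve months?",
    assumptions=["The military remains loyal to the president"],
    hypotheses=["Status quo holds", "Negotiated transition", "Military coup"],
    evidence=["Reporting on officer defections (single source)"],
    gaps=["Reliable insight into mid-ranking officers' views"],
)

# Because each part is written down, reviewers can critique one component
# (for example, an unstated assumption) instead of the conclusion as a whole.
for label in ("assumptions", "hypotheses", "evidence", "gaps"):
    print(label + ":", getattr(problem, label))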

These are called “techniques” because they usually guide the analyst in
thinking about a problem rather than provide the analyst with a definitive
answer, as one might expect from a method. Structured analytic techniques
in general, however, do form a methodology—a set of principles and
procedures for qualitative analysis of the kinds of uncertainties that many
analysts must deal with on a daily basis.

1.2 Two Types of Thinking


In the last twenty years, important gains have been made in psychological
research on human judgment. Dual process theory has emerged as the
predominant approach, positing two systems of decision making called
System 1 and System 2.3 The basic distinction between System 1 and
System 2 is intuitive versus analytical thinking.

System 1 is intuitive, fast, efficient, and often unconscious. It draws
naturally on available knowledge, past experience, and often a long-
established mental model of how people or things work in a specific
environment. System 1 thinking is very alluring, as it requires little effort,
and it allows people to solve problems and make judgments quickly and
efficiently. It is often accurate, but intuitive thinking is also a common
source of cognitive biases and other intuitive mistakes that lead to faulty
analysis. Cognitive biases are discussed in the next section of this chapter.

System 2 thinking is analytic. It is slow, deliberate, conscious reasoning. It includes all types of analysis, such as critical thinking and structured
analytic techniques, as well as the whole range of empirical and
quantitative methods. The introductory section of each of this book’s eight
chapters on structured analytic techniques describes how the type of
analytic technique discussed in that chapter helps to counter one or more
types of cognitive bias and other common intuitive mistakes associated
with System 1 thinking.

1.3 Dealing with Bias


There are many types of bias, all of which might be considered cognitive
biases, as they are all formed and expressed through System 1 activity in
the brain. Potential causes of bias include professional experience leading
to an ingrained analytic mindset, training or education, the nature of one’s
upbringing, type of personality, a salient personal experience, or personal
equity in a particular decision.

All biases, except perhaps the personal self-interest bias, are the result of
fast, unconscious, and intuitive thinking (System 1)—not the result of
thoughtful reasoning (System 2). System 1 thinking is usually correct, but
frequently influenced by various biases as well as insufficient knowledge
and the inherent unknowability of the future. Structured analytic
techniques are a type of System 2 thinking designed to help identify and
overcome the analytic biases inherent in System 1 thinking.

Behavioral scientists have studied the impact of cognitive biases on analysis and decision making in many fields such as psychology, political
science, medicine, economics, business, and education ever since Amos
Tversky and Daniel Kahneman introduced the concept of cognitive biases
in the early 1970s.4 Richards Heuer’s work for the CIA in the late 1970s
and the 1980s, subsequently followed by his book Psychology of
Intelligence Analysis, first published in 1999, applied Tversky and
Kahneman’s insights to problems encountered by intelligence analysts.5
Since the publication of Psychology of Intelligence Analysis, other authors
associated with the U.S. Intelligence Community (including Jeffrey
Cooper and Rob Johnston) have identified cognitive biases as a major
cause of analytic failure at the CIA.6

This book is a logical follow-on to Psychology of Intelligence Analysis, which described in detail many of the biases that influence intelligence
analysis.7 Since then hundreds of cognitive biases have been described in
the academic literature using a wide variety of terms. As Heuer noted
many years ago, “Cognitive biases are similar to optical illusions in that
the error remains compelling even when one is fully aware of its nature.
Awareness of the bias, by itself, does not produce a more accurate
perception.”8 This is why cognitive biases are exceedingly difficult to
overcome. For example, Emily Pronin, Daniel Y. Lin, and Lee Ross
observed in three different studies that people see the existence and
operation of cognitive and motivational biases much more in others than in
themselves.9 This explains why so many analysts believe their own
intuitive thinking (System 1) is sufficient.

An extensive literature exists on cognitive biases, sometimes called “heuristics,” that explains how they affect a person’s thinking in many
fields. What is unique about our book is that it provides guidance on how
to overcome many of these biases. Each of the fifty-five structured analytic
techniques described in this book provides a roadmap for avoiding one or
more specific cognitive biases as well as other common intuitive pitfalls.
The introduction and overview in each of the eight chapters on structured
analytic techniques identifies and describes the diverse System 1 errors
that this category of structured analytic techniques is designed to avoid.
While these techniques are helpful, they too carry no guarantee.

1.4 Role of Structured Analytic Techniques


Structured analytic techniques are debiasing techniques. They do not
replace intuitive judgment. Their role is to question intuitive judgments by
identifying a wider range of options for analysts to consider. For example,
a Key Assumptions Check requires the identification and consideration of
additional assumptions. Analysis of Competing Hypotheses requires
identification of alternative hypotheses, a focus on refuting rather than
confirming hypotheses, and a more systematic analysis of the evidence.
All structured techniques described in this book have a Value Added
section that describes how this technique contributes to better analysis and
helps mitigate cognitive biases and intuitive traps often made by
intelligence analysts and associated with System 1 thinking. For many
techniques, the benefit is self-evident. None purports to always give the
correct answer. They identify alternatives that merit serious consideration.
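
As an illustration of this refutation focus, the following is a minimal sketch, in Python, of the scoring idea at the heart of Analysis of Competing Hypotheses. The hypotheses, evidence, credibility weights, and ratings are invented for illustration, and the full method adds steps (such as reviewing the diagnosticity of evidence) that are not shown here.

# Ratings record how consistent each item of evidence is with each hypothesis:
# "CC"/"C" = (very) consistent, "N" = neutral, "I"/"II" = (very) inconsistent.
# Only inconsistent ratings count against a hypothesis.
INCONSISTENCY = {"CC": 0.0, "C": 0.0, "N": 0.0, "I": 1.0, "II": 2.0}

hypotheses = ["H1: routine exercise", "H2: attack preparation"]

# Each entry: (evidence description, credibility weight, rating per hypothesis).
evidence = [
    ("Troops massing near the border", 1.0,
     {"H1: routine exercise": "C", "H2: attack preparation": "C"}),
    ("No logistics buildup observed", 0.8,
     {"H1: routine exercise": "C", "H2: attack preparation": "II"}),
    ("Annual exercise is on the calendar", 0.9,
     {"H1: routine exercise": "CC", "H2: attack preparation": "N"}),
]

def weighted_inconsistency(hypothesis):
    # Rank hypotheses by how much credible evidence argues AGAINST them;
    # the most viable hypothesis is the one with the least inconsistency.
    return sum(weight * INCONSISTENCY[ratings[hypothesis]]
               for _, weight, ratings in evidence)

for h in sorted(hypotheses, key=weighted_inconsistency):
    print(h, "-> weighted inconsistency =", round(weighted_inconsistency(h), 1))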

No formula exists, of course, for always getting it right, but the use of
structured techniques can reduce the frequency and severity of error. These
techniques can help analysts mitigate the proven cognitive limitations,
sidestep some of the known analytic biases, and explicitly confront the
problems associated with unquestioned mental models or mindsets. They
help analysts think more rigorously about an analytic problem and ensure
that preconceptions and assumptions are not taken for granted but are
explicitly examined and, when possible, tested.10

The most common criticism of structured analytic techniques is, “I don’t have enough time to use them.” The experience of many analysts shows
that this criticism is not justified. Many techniques take very little time.
Anything new does take some time to learn; but, once learned, the use of
structured analytic techniques saves analysts time. It can enable individual
analysts to work more efficiently, especially at the start of a project, when
the analyst may otherwise flounder a bit in trying to figure out how to
proceed. Structured techniques aid group processes by improving
communication as well as enhancing the collection and interpretation of
evidence. And, in the end, a structured technique produces a product in
which the reasoning behind the conclusions is more transparent and more
readily accepted than one derived from other methods. This saves time by
expediting review by supervisors and editors and thereby compressing the
coordination process.11

Analytic methods are important, but method alone is far from sufficient to
ensure analytic accuracy or value. Method must be combined with
substantive expertise and an inquiring and imaginative mind. And these, in
turn, must be supported and motivated by the organizational environment
in which the analysis is done.

1.5 Value of Team Analysis


Our vision for the future of intelligence analysis dovetails with that of the
Director of National Intelligence’s Vision 2015, in which intelligence
analysis increasingly becomes a collaborative enterprise, with the focus
shifting “away from coordination of draft products toward regular
discussion of data and hypotheses early in the research phase.”12 This is a
major change from the traditional concept of intelligence analysis as
largely an individual activity and coordination as the final step in the
process.

In a collaborative enterprise, structured analytic techniques are a process through which collaboration occurs. Just as these techniques provide
structure to our individual thought processes, they can also structure the
interaction of analysts within a small team or group. Because the thought
process in these techniques is transparent, each step in the technique
prompts discussion within the team. Such discussion can generate and
evaluate substantially more divergent information and new information
than can a group that does not use a structured process. When a team is
dealing with a complex issue, the synergy of multiple minds using
structured analysis is usually more effective than is the thinking of a lone
analyst. Structured analytic techniques when paired with collaborative
software can also provide a framework to guide interagency collaboration
and coordination, connecting team members in different offices, agencies,
parts of traffic-congested metropolitan areas, and even around the world.

Team-based analysis can, of course, bring with it a new set of challenges equivalent to the cognitive biases and other pitfalls faced by the individual
analyst. However, the well-known group process problems are minimized
by the use of structured techniques that guide the interaction among
members of a team or group. This helps to keep discussions from getting
sidetracked and facilitates the elicitation of alternative views from all team
members. Analysts have also found that use of a structured process helps
to depersonalize arguments when there are differences of opinion. This is
discussed further in chapter 12. Also, today’s technology and social
networking programs make structured collaboration much easier than it
has ever been in the past.

1.6 History of Structured Analytic Techniques
The first use of the term “structured analytic techniques” in the U.S.
Intelligence Community was in 2005. However, the origin of the concept
goes back to the 1980s, when the eminent teacher of intelligence analysis,
Jack Davis, first began teaching and writing about what he called
“alternative analysis.”13 The term referred to the evaluation of alternative
explanations or hypotheses, better understanding of other cultures, and
analysis of events from the other country’s point of view rather than by
mirror imaging. In the mid-1980s some initial efforts were made to initiate
the use of more alternative analytic techniques in the CIA’s Directorate of
Intelligence. Under the direction of Robert Gates, then CIA Deputy
Director for Intelligence, analysts employed several new techniques to
generate scenarios of dramatic political change, track political instability,
and anticipate military coups. Douglas MacEachin, Deputy Director for
Intelligence from 1993 to 1996, supported new standards for systematic
and transparent analysis that helped pave the path to further change.14

The term “alternative analysis” became widely used in the late 1990s after
Adm. David Jeremiah’s postmortem analysis of the U.S. Intelligence
Community’s failure to foresee India’s 1998 nuclear test, a U.S.
congressional commission’s review of the Intelligence Community’s
global missile forecast in 1998, and a report from the CIA Inspector
General that focused higher-level attention on the state of the Directorate
of Intelligence’s analytic tradecraft. The Jeremiah report specifically
encouraged increased use of what it called “red-team” analysis.

When the Sherman Kent School for Intelligence Analysis at the CIA was
created in 2000 to improve the effectiveness of intelligence analysis, John
McLaughlin, then Deputy Director for Intelligence, tasked the school to
consolidate techniques for doing what was then referred to as “alternative
analysis.” In response to McLaughlin’s tasking, the Kent School
developed a compilation of techniques, and the CIA’s Directorate of
Intelligence started teaching these techniques in a class that later evolved
into the Advanced Analytic Tools and Techniques Workshop. The course
was subsequently expanded to include analysts from the Defense
Intelligence Agency and other elements of the U.S. Intelligence
Community.

The various investigative commissions that followed the surprise terrorist
attacks of September 11, 2001, and then the erroneous analysis of Iraq’s
possession of weapons of mass destruction, cranked up the pressure for
more alternative approaches to intelligence analysis. For example, the
Intelligence Reform Act of 2004 assigned to the Director of National
Intelligence “responsibility for ensuring that, as appropriate, elements of
the intelligence community conduct alternative analysis (commonly
referred to as ‘red-team’ analysis) of the information and conclusions in
intelligence analysis.”

Over time, however, analysts who misunderstood or resisted this approach came to interpret alternative analysis as simply meaning an alternative to
the normal way that analysis is done, implying that these alternative
procedures are needed only occasionally in exceptional circumstances
when an analysis is of critical importance. Kent School instructors had to
explain that the techniques are not alternatives to traditional analysis, but
that they are central to good analysis and should be integrated into the
normal routine—instilling rigor and structure into the analysts’ everyday
work process.

In 2004, when the Kent School decided to update its training materials
based on lessons learned during the previous several years and publish A
Tradecraft Primer,15 Randolph H. Pherson and Roger Z. George were
among the drafters. “There was a sense that the name ‘alternative analysis’
was too limiting and not descriptive enough. At least a dozen different
analytic techniques were all rolled into one term, so we decided to find a
name that was more encompassing and suited this broad array of
approaches to analysis.”16 Kathy Pherson is credited with coming up with
the name “structured analytic techniques” during a dinner table
conversation with her husband, Randy. Roger George organized the
techniques into three categories: diagnostic techniques, contrarian
techniques, and imagination techniques. The term “structured analytic
techniques” became official in June 2005, when updated training materials
were formally approved.

The Directorate of Intelligence’s senior management became a strong supporter of structured analytic techniques and took active measures to
facilitate and promote this approach. The term is now used throughout the
U.S. Intelligence Community—and increasingly in academia and many
intelligence services around the globe.

One thing cannot be changed, however, in the absence of new legislation.
The Director of National Intelligence (DNI) is still responsible under the
Intelligence Reform Act of 2004 for ensuring that elements of the U.S.
Intelligence Community conduct alternative analysis, which it now
describes as the inclusion of alternative outcomes and hypotheses in
analytic products. We view “alternative analysis” as covering only a part of what is now regarded as structured analytic techniques, and we recommend avoiding the term altogether to prevent confusion.

“Wisdom begins with the definition of terms.”

—Socrates, Greek philosopher

1.7 Selection of Techniques for This Book


The techniques described in this book are limited to ones that meet our
definition of structured analytic techniques, as discussed in chapter 2.
Although the focus is on techniques for strategic intelligence analysis,
many of the techniques described in this book have wide applicability to
tactical military analysis, law enforcement intelligence analysis, homeland
security, business consulting, financial planning, and complex decision
making in any field. The book focuses on techniques that can be used by a
single analyst working alone or preferably with a small group or team of
analysts. Techniques that require sophisticated computing, or complex
projects of the type usually outsourced to an outside expert or company,
are not included. Several interesting techniques that were recommended to
us were not included for this reason.

From the several hundred techniques that might have been included here,
we identified a core group of fifty-five techniques that appear to be most
useful for the intelligence profession, but also useful for those engaged in
related analytic pursuits in academia, business, law enforcement, finance,
and medicine. Techniques that tend to be used exclusively for a single type
of analysis in fields such as law enforcement or business consulting,
however, have not been included. This list is not static. It is expected to
increase or decrease as new techniques are identified and others are tested
and found wanting. In fact, we have dropped two techniques from the first
edition and added five new ones to the second edition.

Some training programs may have a need to boil down their list of
techniques to the essentials required for one particular type of analysis. No
one list will meet everyone’s needs. However, we hope that having one
fairly comprehensive list and common terminology available to the
growing community of analysts now employing structured analytic
techniques will help to facilitate the discussion and use of these techniques
in projects involving collaboration across organizational boundaries.

In this collection of techniques we build on work previously done in the U.S. Intelligence Community, but also include a few techniques developed
and used by our British, Canadian, Spanish, and Australian colleagues. To
select the most appropriate additional techniques, Heuer reviewed a large
number of books and websites dealing with intelligence analysis
methodology, qualitative methods in general, decision making, problem
solving, competitive intelligence, law enforcement intelligence,
forecasting or futures research, and social science research in general.
Given the immensity of this literature, there can be no guarantee that
nothing was missed.

About half of the techniques described here were previously incorporated in training materials used by the Defense Intelligence Agency, the Office
of Intelligence and Analysis in the Department of Homeland Security, or
other intelligence agencies. We have revised or refined those techniques
for this book. Many of the techniques were originally developed or refined
by one of the authors, Randolph H. Pherson, when he was teaching
structured techniques to intelligence analysts, students, and practitioners in the private sector. Twenty-five techniques were newly created or adapted to
intelligence analyst needs by Richards Heuer or Randy Pherson to fill
perceived needs and gaps.

We provide specific guidance on how to use each technique, but this guidance is not written in stone. Many of these techniques can be
implemented in more than one way, and some techniques are known by
several different names. An experienced government analyst told one of
the authors that he seldom uses a technique the same way twice. He adapts
techniques to the requirements of the specific problem, and his ability to
do that effectively is a measure of his experience.

The names of some techniques are normally capitalized, while many are not. For consistency and to make them stand out, the names of all techniques described in this book are capitalized.

1.8 Quick Overview of Chapters


Chapter 2 (“Building a System 2 Taxonomy”) defines the domain of
structured analytic techniques by describing how it differs from three other
major categories of intelligence analysis methodology. It presents a
taxonomy with eight distinct categories of structured analytic techniques.
The categories are based on how each set of techniques contributes to
better intelligence analysis.

Chapter 3 (“Choosing the Right Technique”) describes the criteria we used for selecting techniques for inclusion in this book, discusses which
techniques might be learned first and used the most, and provides a guide
for matching techniques to analysts’ needs. Analysts using this guide answer twelve abbreviated questions about what they want or need to do. An affirmative answer to any question directs them to the appropriate chapter(s), where they can quickly zero in on the most appropriate technique(s).

Chapters 4 through 11 each describe a different category of technique; taken together, these chapters cover fifty-five different techniques. Each of these
chapters starts with a description of that particular category of technique
and how it helps to mitigate known cognitive biases or intuitive traps. It
then provides a brief overview of each technique. This is followed by a
discussion of each technique, including when to use it, the value added,
description of the method, potential pitfalls when noteworthy, relationship
to other techniques, and origins of the technique.

Readers who go through these eight chapters of techniques from start to finish may perceive some overlap. This repetition is for the convenience of
those who use this book as a reference guide and seek out individual
sections or chapters. The reader seeking only an overview of the
techniques as a whole can save time by reading the introduction to each
technique chapter, the brief overview of each technique, and the full
descriptions of only those specific techniques that pique the reader’s
interest.

Highlights of the eight chapters of techniques are as follows:

✶ Chapter 4 (“Decomposition and Visualization”) covers the basics
such as checklists, sorting, ranking, classification, several types of
mapping, matrices, and networks. It includes two new techniques,
Venn Analysis and AIMS (Audience, Issue, Message, and Storyline).
✶ Chapter 5 (“Idea Generation”) presents several types of
brainstorming. That includes Nominal Group Technique, a form of
brainstorming that rarely has been used in the U.S. Intelligence
Community but should be used when there is concern that a
brainstorming session might be dominated by a particularly
aggressive analyst or constrained by the presence of a senior officer.
A Cross-Impact Matrix supports a group learning exercise about the
relationships in a complex system.
✶ Chapter 6 (“Scenarios and Indicators”) covers four scenario
techniques and the indicators used to monitor which scenario seems
to be developing. The Indicators Validator™ developed by Randy
Pherson assesses the diagnostic value of the indicators. The chapter
includes a new technique, Cone of Plausibility.
✶ Chapter 7 (“Hypothesis Generation and Testing”) includes the
following techniques for hypothesis generation: Diagnostic
Reasoning, Analysis of Competing Hypotheses, Argument Mapping,
and Deception Detection.
✶ Chapter 8 (“Assessment of Cause and Effect”) includes the widely
used Key Assumptions Check and Structured Analogies, which
comes from the literature on forecasting the future. Other techniques
include Role Playing, Red Hat Analysis, and Outside-In Thinking.
✶ Chapter 9 (“Challenge Analysis”) helps analysts break away from
an established mental model to imagine a situation or problem from a
different perspective. Two important techniques developed by the
authors, Premortem Analysis and Structured Self-Critique, give
analytic teams viable ways to imagine how their own analysis might
be wrong. What If? Analysis and High Impact/Low Probability
Analysis are tactful ways to suggest that the conventional wisdom
could be wrong. Devil’s Advocacy, Red Team Analysis, and the
Delphi Method can be used by management to actively seek
alternative answers.
✶ Chapter 10 (“Conflict Management”) explains that confrontation
between conflicting opinions is to be encouraged, but it must be
managed so that it becomes a learning experience rather than an
emotional battle. It describes a family of techniques grouped under
the umbrella of Adversarial Collaboration and an original approach to Structured Debate in which debaters refute the opposing argument
rather than support their own.
✶ Chapter 11 (“Decision Support”) covers several techniques, including Decision Matrix, that help managers, commanders,
planners, and policymakers make choices or trade-offs between
competing goals, values, or preferences. This chapter describes the
Complexity Manager developed by Richards Heuer and two new
techniques, Decision Trees and the Impact Matrix.

As previously noted, analysis done across the global intelligence community is in a transitional stage from a mental activity performed
predominantly by a sole analyst to a collaborative team or group activity.
Chapter 12, entitled “Practitioner’s Guide to Collaboration,” discusses,
among other things, how to include in the analytic process the rapidly
growing social networks of area and functional specialists who often work
from several different geographic locations. It proposes that most analysis
be done in two phases: a divergent analysis or creative phase with broad
participation by a social network using a wiki, followed by a convergent
analysis phase and final report done by a small analytic team.

How can we know that the use of structured analytic techniques does, in
fact, improve the overall quality of the analytic product? As we discuss in
chapter 13 (“Validation of Structured Analytic Techniques”), there are two
approaches to answering this question—logical reasoning and empirical
research. The logical reasoning approach starts with the large body of
psychological research on the limitations of human memory and
perception and pitfalls in human thought processes. If a structured analytic
technique is specifically intended to mitigate or avoid one of the proven
problems in human thought processes, and that technique appears to be
successful in doing so, that technique can be said to have “face validity.”
The growing popularity of many of these techniques would imply that they
are perceived by analysts—and their customers—as providing distinct
added value in a number of different ways.

Another approach to evaluation of these techniques is empirical testing. This is often done by constructing experiments that compare analyses in
which a specific technique is used with comparable analyses in which the
technique is not used. Our research found that such testing done outside
the intelligence profession is generally of limited value, as the
experimental conditions varied significantly from the conditions under which the same techniques are used by most intelligence analysts. Chapter
13 proposes a broader approach to the validation of structured analytic
techniques. It calls for structured interviews, observation, and surveys in
addition to experiments conducted under conditions that closely simulate
how these techniques are used by intelligence analysts. Chapter 13 also
recommends formation of a separate organizational unit to conduct such
research as well as other tasks to support the use of structured analytic
techniques.

Chapter 14 (“The Future of Structured Analytic Techniques”) employs one of the techniques in this book, Complexity Manager, to assess the
prospects for continued growth in the use of structured analytic techniques.
It asks the reader to imagine it is 2020 and then answers the following questions
based on an analysis of ten variables that could support or hinder the
growth of structured analytic techniques during this time period: Will
structured analytic techniques gain traction and be used with greater
frequency by intelligence agencies, law enforcement, and the business
sector? What forces are spurring the increased use of structured analysis?
What obstacles are hindering its expansion?

2 Building a System 2 Taxonomy

2.1 Taxonomy of System 2 Methods
2.2 Taxonomy of Structured Analytic Techniques

A taxonomy is a classification of all elements of some domain of information or knowledge. It defines the domain by identifying, naming,
and categorizing all the various objects in this domain. The objects are
organized into related groups based on some factor common to each object
in the group. The previous chapter described the difference between the two types of thinking, System 1 and System 2.

✶ System 1 thinking is intuitive, fast, efficient, and often unconscious. Such intuitive thinking is often accurate, but it is also a
common source of cognitive biases and other intuitive mistakes that
lead to faulty analysis.
✶ System 2 thinking is analytic. It is slow, deliberate, and conscious,
the result of thoughtful reasoning. In addition to structured analytic
techniques, System 2 thinking encompasses critical thinking and the
whole range of empirical and quantitative analysis.

Intelligence analysts have largely relied on intuitive judgment—a System 1 process—in constructing their line of analysis. When done well, intuitive
judgment—sometimes referred to as traditional analysis—combines
subject-matter expertise with basic thinking skills. Evidentiary reasoning,
historical method, case study method, and reasoning by analogy are
examples of this category of analysis.1 The key characteristic that
distinguishes intuitive judgment from structured analysis is that it is
usually an individual effort in which the reasoning remains largely in the
mind of the individual analyst until it is written down in a draft report.
Training in this type of analysis is generally provided through postgraduate
education, especially in the social sciences and liberal arts, and often along
with some country or language expertise.

This chapter presents a taxonomy that defines the domain of System 2 thinking. Figure 2.0 distinguishes System 1 or intuitive thinking from the
four broad categories of analytic methods used in System 2 thinking. It
describes the nature of these four categories, one of which is structured analysis. The others are critical thinking, empirical analysis, and quasi-
quantitative analysis. As discussed in section 2.2, structured analysis
consists of eight different categories of structured analytic techniques. This
chapter describes the rationale for these four broad categories and
identifies the eight families of structured analytic techniques.

The word “taxonomy” comes from the Greek taxis, meaning arrangement,
division, or order, and nomos, meaning law. Classic examples of a
taxonomy are Carolus Linnaeus’s hierarchical classification of all living
organisms by kingdom, phylum, class, order, family, genus, and species
that is widely used in the biological sciences, and the periodic table of
elements used by chemists. A library catalogue is also considered a
taxonomy, as it starts with a list of related categories that are then
progressively broken down into finer categories.
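
By way of illustration only, the hierarchical structure that defines a taxonomy can be sketched in a few lines of Python; the Linnaean ranks below are the standard ones, but the code itself is simply our invented representation, not a tool drawn from the literature cited here.

```python
# A minimal sketch of a taxonomy as a hierarchy: each rank progressively
# narrows the classification, as in Linnaeus's ranks for our own species.
# The representation is invented for illustration.
linnaean_ranks = [
    ("kingdom", "Animalia"),
    ("phylum", "Chordata"),
    ("class", "Mammalia"),
    ("order", "Primates"),
    ("family", "Hominidae"),
    ("genus", "Homo"),
    ("species", "Homo sapiens"),
]

for depth, (rank, name) in enumerate(linnaean_ranks):
    print("  " * depth + f"{rank}: {name}")
```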

Figure 2.0 System 1 and System 2 Thinking

Development of a taxonomy is an important step in organizing knowledge
and furthering the development of any particular discipline. Rob Johnston
developed a taxonomy of variables that influence intelligence analysis but
did not go into any depth on analytic techniques or methods. He noted that
“a taxonomy differentiates domains by specifying the scope of inquiry,
codifying naming conventions, identifying areas of interest, helping to set
research priorities, and often leading to new theories. Taxonomies are
signposts, indicating what is known and what has yet to be discovered.”2

Robert Clark has described a taxonomy of intelligence sources.3 He also categorized some analytic techniques commonly used in intelligence
analysis, but not to the extent of creating a taxonomy. To the best of our
knowledge, a taxonomy of analytic methods for intelligence analysis has
not previously been developed, although taxonomies have been developed
to classify research methods used in forecasting,4 operations research,5
information systems,6 visualization tools,7 electronic commerce,8
knowledge elicitation,9 and cognitive task analysis.10

After examining taxonomies of methods used in other fields, we found that there is no single right way to organize a taxonomy—only different ways
that are more or less useful in achieving a specified goal. In this case, our goal is to gain a better understanding of the domain of structured analytic
techniques, investigate how these techniques contribute to providing a
better analytic product, and consider how they relate to the needs of
analysts. The objective has been to identify various techniques that are
currently available, identify or develop additional potentially useful
techniques, and help analysts compare and select the best technique for
solving any specific analytic problem. Standardization of terminology for
structured analytic techniques will facilitate collaboration across agency
and international boundaries during the use of these techniques.

2.1 Taxonomy of System 2 Methods


Intelligence analysts employ a wide range of methods to deal with an even
wider range of subjects. Although this book focuses on the field of
structured analysis, it is appropriate to identify some initial categorization
of all the methods in order to see where structured analysis fits in. Many
researchers write of only two general approaches to analysis, contrasting
qualitative with quantitative, intuitive with empirical, or intuitive with
scientific. Others might claim that there are three distinct approaches:
intuitive, structured, and scientific. In our taxonomy, we have sought to
address this confusion by describing two types of thinking (System 1 and
System 2) and defining four categories of System 2 thinking.

Whether intelligence analysis is, or should be, an art or science is one of the long-standing debates in the literature on intelligence analysis. As we
see it, intelligence analysis has aspects of both spheres. The range of
activities that fall under the rubric of intelligence analysis spans the entire
range of human cognitive abilities, and it is not possible to divide it into
just two categories—art and science—or to say that it is only one or the
other. The extent to which any part of intelligence analysis is either art or
science is entirely dependent upon how one defines “art” and “science.”

The taxonomy described here posits four functionally distinct methodological approaches to intelligence analysis. These approaches are
distinguished by the nature of the analytic methods used, the type of
quantification if any, the type of data that is available, and the type of
training that is expected or required. Although each method is distinct, the
borders between them can be blurry.

✶ Critical thinking: Critical thinking, as defined by longtime intelligence methodologist and practitioner Jack Davis, is the
application of the processes and values of scientific inquiry to the
special circumstances of strategic intelligence.11 Good critical
thinkers will stop and reflect on who is the key customer, what is the
question, where can they find the best information, how can they
make a compelling case, and what is required to convey their message
effectively. They recognize that this process requires checking key
assumptions, looking for disconfirming data, and entertaining
multiple explanations as long as possible. Most students are exposed
to critical thinking techniques at some point in their education—from
grade school to university—but few colleges or universities offer
specific courses to develop critical thinking and writing skills.
✶ Structured analysis: Structured analytic techniques involve a
step-by-step process that externalizes the analyst’s thinking in a
manner that makes it readily apparent to others, thereby enabling it to
be reviewed, discussed, and critiqued piece by piece, or step by step.
For this reason, structured analysis usually becomes a collaborative
effort in which the transparency of the analytic process exposes
participating analysts to divergent or conflicting perspectives. This
type of analysis is believed to mitigate some of the adverse impacts of
a single analyst’s cognitive limitations, an ingrained mindset, and the
whole range of cognitive and other analytic biases. Frequently used
techniques include Structured Brainstorming, Scenarios Analysis,
Indicators, Analysis of Competing Hypotheses, and Key Assumptions
Check. Structured techniques are taught at the college and graduate
school levels and can be used by analysts who have not been trained
in statistics, advanced mathematics, or the hard sciences.
✶ Quasi-quantitative analysis using expert-generated data:
Analysts often lack the empirical data needed to analyze an
intelligence problem. In the absence of empirical data, many methods have been designed that rely on experts to fill the gaps by rating key
variables as High, Medium, Low, or Not Present, or by assigning a
subjective probability judgment. Special procedures are used to elicit
these judgments, and the ratings usually are integrated into a larger
model that describes that particular phenomenon, such as the
vulnerability of a civilian leader to a military coup, the level of
political instability, or the likely outcome of a legislative debate. This
category includes methods such as Bayesian inference, dynamic
modeling, and simulation. Training in the use of these methods is
provided through graduate education in fields such as mathematics, information science, operations research, business, or the sciences.
✶ Empirical analysis using quantitative data: Quantifiable empirical
data are so different from expert-generated data that the methods and
types of problems the data are used to analyze are also quite different.
Econometric modeling is one common example of this method.
Empirical data are collected by various types of sensors and are used,
for example, in analysis of weapons systems. Training is generally
obtained through graduate education in statistics, economics, or the
hard sciences.

None of these four methods is better or more effective than another. All
are needed in various circumstances to optimize the odds of finding the
right answer. The use of multiple methods during the course of a single
analytic project should be the norm, not the exception. For example, even
a highly quantitative technical analysis may entail assumptions about
motivation, intent, or capability that are best handled with critical thinking
approaches and/or structured analysis. One of the structured techniques for
idea generation might be used to identify the variables to be included in a
dynamic model that uses expert-generated data to quantify these variables.

Of these four methods, structured analysis is the new kid on the block, so to speak, and it is useful to consider how it relates to System 1 thinking.
System 1 thinking combines subject-matter expertise and intuitive
judgment in an activity that takes place largely in an analyst’s head.
Although the analyst may gain input from others, the analytic product is
frequently perceived as the product of a single analyst, and the analyst
tends to feel “ownership” of his or her analytic product. The work of a
single analyst is particularly susceptible to the wide range of cognitive
pitfalls described in Psychology of Intelligence Analysis and throughout
this book.12

Structured analysis follows a step-by-step process that can be used by an individual analyst, but it is done more commonly as a group process, as
that is how the principal benefits are gained. As we discussed in the
previous chapter, structured techniques guide the dialogue between
analysts with common interests as they work step by step through an
analytic problem. The critical point is that this approach exposes
participants with various types and levels of expertise to alternative ideas,
evidence, or mental models early in the analytic process. It can help the
experts avoid some of the common cognitive pitfalls. The structured group process that identifies and assesses alternative perspectives can also help to
avoid “groupthink,” the most common problem of small-group processes.

When used by a group or a team, structured techniques can become a mechanism for information sharing and group learning that helps to
compensate for gaps or weaknesses in subject-matter expertise. This is
especially useful for complex projects that require a synthesis of multiple
types of expertise.

“The first step of science is to know one thing from another. This
knowledge consists in their specific distinctions; but in order that it
may be fixed and permanent, distinct names must be given to
different things, and those names must be recorded and
remembered.”

—Carolus Linnaeus, Systema Naturae (1738)

2.2 Taxonomy of Structured Analytic Techniques
Structured techniques have been used by U.S. Intelligence Community
methodology specialists and some analysts in selected specialties for many
years, but the broad and general use of these techniques by the average
analyst is a relatively new approach to intelligence analysis. The driving
forces behind the development and use of these techniques in the
intelligence profession are (1) an increased appreciation of cognitive
limitations and biases that make intelligence analysis so difficult, (2)
prominent intelligence failures that have prompted reexamination of how
intelligence analysis is generated, (3) policy support and technical support
for intraoffice and interagency collaboration, and (4) a desire by policymakers who receive analysis for greater transparency about how conclusions are reached.

Considering that the U.S. Intelligence Community started focusing on structured techniques in order to improve analysis, it is fitting to categorize these techniques by the various ways they can help achieve this goal (see
Figure 2.2). Structured analytic techniques can mitigate some of the human
cognitive limitations, sidestep some of the well-known analytic pitfalls,
and explicitly confront the problems associated with unquestioned
assumptions and mental models. They can ensure that assumptions,
preconceptions, and mental models are not taken for granted but are
explicitly examined and tested. They can support the decision-making
process, and the use and documentation of these techniques can facilitate
information sharing and collaboration.

Figure 2.2 Eight Families of Structured Analytic Techniques

A secondary goal when categorizing structured techniques is to correlate
categories with different types of common analytic tasks. This makes it
possible to match specific techniques to individual analysts’ needs, as will
be discussed in chapter 3. There are, however, quite a few techniques that
fit comfortably in several categories because they serve multiple analytic
functions.

The eight families of structured analytic techniques are described in detail in chapters 4–11. The introduction to each chapter describes how that
specific category of techniques helps to improve analysis.

1. Reasoning by analogy can also be a structured technique called Structured Analogies, as described in chapter 8.

2. Rob Johnston, Analytic Culture in the U.S. Intelligence Community (Washington, DC: CIA Center for the Study of Intelligence, 2005), 34.

3. Robert M. Clark, Intelligence Analysis: A Target-Centric Approach, 2nd
ed. (Washington, DC: CQ Press, 2007), 84.

4. Forecasting Principles website: www.forecastingprinciples.com/files/pdf/methodsselectionchart.pdf.

5. Russell W. Fenske, “A Taxonomy for Operations Research,” Operations Research 19, no. 1 (January–February 1971).

6. Kai R. T. Larsen, “A Taxonomy of Antecedents of Information Systems Success: Variable Analysis Studies,” Journal of Management Information Systems 20, no. 2 (Fall 2003).

7. Ralph Lengler and Martin J. Eppler, “A Periodic Table of Visualization Methods,” undated, www.visual-literacy.org/periodic_table/periodic_table.html.

8. Roger Clarke, Appropriate Research Methods for Electronic Commerce (Canberra, Australia: Xamax Consultancy Pty Ltd., 2000), http://anu.edu.au/people/Roger.Clarke/EC/ResMeth.html.

9. Robert R. Hoffman, Nigel R. Shadbolt, A. Mike Burton, and Gary Klein, “Eliciting Knowledge from Experts,” Organizational Behavior and Human Decision Processes 62 (May 1995): 129–158.

10. Robert R. Hoffman and Laura G. Militello, Perspectives on Cognitive Task Analysis: Historical Origins and Modern Communities of Practice (Boca Raton, FL: CRC Press/Taylor and Francis, 2008); and Beth Crandall, Gary Klein, and Robert R. Hoffman, Working Minds: A Practitioner’s Guide to Cognitive Task Analysis (Cambridge, MA: MIT Press, 2006).

11. See Katherine Hibbs Pherson and Randolph H. Pherson, Critical Thinking for Strategic Intelligence (Washington, DC: CQ Press, 2013), xxii.

12. Richards J. Heuer Jr., Psychology of Intelligence Analysis (Washington, DC: CIA Center for the Study of Intelligence, 1999; reprinted by Pherson Associates, LLC, 2007).

3 Choosing the Right Technique

3.1 Core Techniques
3.2 Making a Habit of Using Structured Techniques
3.3 One Project, Multiple Techniques
3.4 Common Errors in Selecting Techniques
3.5 Structured Technique Selection Guide

This chapter provides analysts with a practical guide to identifying the various techniques that are most likely to meet their needs. It also does the
following:

✶ Identifies a set of core techniques that are used or should be used most frequently and should be in every analyst’s toolkit. Instructors
may also want to review this list in deciding which techniques to
teach.
✶ Describes five habits of thinking that an analyst should draw upon
when under severe time pressure to deliver an analytic product.
✶ Discusses the value of using multiple techniques for a single
project.
✶ Reviews the importance of identifying which structured analytic
techniques are most effective in helping analysts overcome or at least
mitigate ingrained cognitive biases and intuitive traps that are
common to the intelligence profession.
✶ Lists common mistakes analysts make when deciding which
technique or techniques to use for a specific project.

A key step in considering which structured analytic technique to use to solve a particular problem is to ask: Which error or trap do I most need to
avoid or at least to mitigate when performing this task? Figure 3.0
highlights fifteen of the most common intuitive traps analysts are likely to
encounter depending on what task they are trying to do. As illustrated by
the chart, our experience is that intelligence analysts are particularly
susceptible to the following five traps: (1) failing to consider multiple
hypotheses or explanations, (2) ignoring inconsistent evidence, (3)
rejecting evidence that does not support the lead hypothesis, (4) lacking sufficient bins or alternative hypotheses for capturing key evidence, and
(5) improperly projecting past experience. Most of these traps are also mitigated by learning and applying the Five Habits of the Master Thinker,
discussed in section 3.2.

Figure 3.0 Value of Using Structured Techniques to Perform Key Tasks

3.1 Core Techniques
The average analyst is not expected to know how to use every technique in
this book. All analysts should, however, understand the functions
performed by various types of techniques and recognize the analytic
circumstances in which it is advisable to use them. An analyst can gain this
knowledge by reading the introductions to each of the technique chapters
and the overviews of each technique. Tradecraft or methodology
specialists should be available to assist when needed in the actual
implementation of many of these techniques. In the U.S. Intelligence
Community, for example, the CIA and the Department of Homeland
Security have made good progress supporting the use of these techniques
through the creation of analytic tradecraft support cells. Similar units have
been established by other intelligence analysis services and have proven effective.

All analysts should be trained to use the core techniques discussed here
because they are needed so frequently and are widely applicable across the
various types of analysis—strategic and tactical, intelligence and law
enforcement, and cyber and business. They are identified and described
briefly in the following paragraphs.

Structured Brainstorming (chapter 5):


Perhaps the most commonly used technique, Structured Brainstorming is a simple exercise often employed at the beginning of an analytic project to
elicit relevant information or insight from a small group of knowledgeable
analysts. The group’s goal might be to identify a list of such things as
relevant variables, driving forces, a full range of hypotheses, key players
or stakeholders, available evidence or sources of information, potential
solutions to a problem, potential outcomes or scenarios, or potential
responses by an adversary or competitor to some action or situation; or, for
law enforcement, potential suspects or avenues of investigation. Analysts
should be aware of Nominal Group Technique as an alternative to
Structured Brainstorming when there is concern that a regular
brainstorming session may be dominated by a senior officer or that junior
personnel may be reluctant to speak up.

Cross-Impact Matrix (chapter 5):


If the brainstorming identifies a list of relevant variables, driving forces, or
key players, the next step should be to create a Cross-Impact Matrix and
use it as an aid to help the group visualize and then discuss the relationship
between each pair of variables, driving forces, or players. This is a
learning exercise that enables a team or group to develop a common base
of knowledge about, for example, each variable and how it relates to each
other variable. It is a simple but effective exercise that will be new to most intelligence analysts.
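
A minimal sketch, with invented variable names, of how such a matrix can be laid out so the group can see at a glance which pairwise relationships have been discussed (the Python code is our illustration, not part of the technique itself):

```python
# Minimal sketch of a Cross-Impact Matrix: the same brainstormed variables
# label the rows and columns, and each cell records the group's judgment of
# how the row variable affects the column variable ("+" amplifies,
# "-" dampens, "0" no effect). Variable names are invented for illustration.
variables = ["oil price", "political stability", "export revenue"]

impact = {row: {col: "?" for col in variables if col != row}
          for row in variables}

# The group fills in one pair at a time during discussion, for example:
impact["oil price"]["export revenue"] = "+"
impact["oil price"]["political stability"] = "-"

# Cells still marked "?" show which relationships remain to be discussed.
for row in variables:
    for col in variables:
        if row != col:
            print(f"{row} -> {col}: {impact[row][col]}")
```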

Key Assumptions Check (chapter 8):


One of the most commonly used techniques is the Key Assumptions
Check, which requires analysts to explicitly list and question the most
important working assumptions underlying their analysis. Any explanation
of current events or estimate of future developments requires the
interpretation of incomplete, ambiguous, or potentially deceptive evidence.
To fill in the gaps, analysts typically make assumptions about such things
as the relative strength of political forces, another country’s intentions or
capabilities, the way governmental processes usually work in that country,
the trustworthiness of key sources, the validity of previous analyses on the
same subject, or the presence or absence of relevant changes in the context
in which the activity is occurring. It is important that such assumptions be
explicitly recognized and questioned.
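
As a purely illustrative sketch (invented content, with Python used only as a convenient notation), a Key Assumptions Check can be thought of as a small register in which each assumption is written down alongside its basis so that it can be challenged:

```python
# Minimal sketch of a Key Assumptions Check register: each working
# assumption is made explicit, along with why it is held and whether it
# survived questioning. All example content is invented.
assumptions = [
    {"assumption": "Source X reports reliably on the ministry",
     "basis": "accurate on three prior occasions",
     "status": "supported"},
    {"assumption": "The government controls the border region",
     "basis": "reporting that is now a year old",
     "status": "unsupported - revisit"},
]

for a in assumptions:
    print(f"- {a['assumption']} (basis: {a['basis']}; status: {a['status']})")
```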

Indicators (chapter 6):
Indicators are observable or potentially observable actions or events that
are monitored to detect or evaluate change over time. For example, they
might be used to measure changes toward an undesirable condition such as
political instability, a pending financial crisis, or a coming attack.
Indicators can also point toward a desirable condition such as economic or
democratic reform. The special value of Indicators is that they create an
awareness that prepares an analyst’s mind to recognize the earliest signs of
significant change that might otherwise be overlooked. Developing an
effective set of Indicators is more difficult than it might seem. The
Indicators Validator™ helps analysts assess the diagnosticity of their
Indicators.
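
A minimal sketch, with invented indicators, of how a monitored list might be tracked over time (our illustration; the technique itself prescribes no particular tooling):

```python
# Minimal sketch of an Indicators list: each indicator is an observable
# event, and periodic review records when it has been observed so that
# early signs of change stand out. Names and dates are invented.
indicators = {
    "reserve call-ups announced": [],
    "state media shifts to hostile rhetoric": [],
    "troop movements near the border": [],
}

indicators["troop movements near the border"].append("2014-03-01")

for name, sightings in indicators.items():
    status = f"observed {len(sightings)} time(s)" if sightings else "not observed"
    print(f"{name}: {status}")
```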

Analysis of Competing Hypotheses (chapter 7):


This technique requires analysts to start with a full set of plausible
hypotheses rather than with a single most likely hypothesis. Analysts then
take each item of relevant information, one at a time, and judge its
consistency or inconsistency with each hypothesis. The idea is to refute
hypotheses rather than confirm them. The most likely hypothesis is the one
with the least evidence arguing against it, not the one with the most evidence supporting it. This process applies a key element of
scientific method to intelligence analysis. Software recommended for
using this technique is discussed in chapter 7.
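
The scoring logic can be sketched in a few lines of Python (the hypotheses, evidence, and ratings below are invented; this is an illustration of the refutation idea, not the recommended software discussed in chapter 7):

```python
# Minimal sketch of ACH scoring: each item of relevant information is rated
# consistent ("C"), inconsistent ("I"), or neutral ("N") against every
# hypothesis, and hypotheses are ranked by how little evidence argues
# against them. All content is invented for illustration.
hypotheses = ["H1: accident", "H2: sabotage", "H3: deception"]
evidence = {
    "no forced entry":    {"H1: accident": "C", "H2: sabotage": "I", "H3: deception": "N"},
    "logs were altered":  {"H1: accident": "I", "H2: sabotage": "C", "H3: deception": "C"},
    "insider had access": {"H1: accident": "N", "H2: sabotage": "C", "H3: deception": "C"},
}

inconsistencies = {h: sum(ratings[h] == "I" for ratings in evidence.values())
                   for h in hypotheses}

# The most likely hypothesis is the one with the fewest inconsistencies.
for h in sorted(hypotheses, key=lambda h: inconsistencies[h]):
    print(f"{h}: {inconsistencies[h]} inconsistent item(s)")
```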

Premortem Analysis and Structured Self-Critique (chapter 9):
These two easy-to-use techniques enable a small team of analysts who
have been working together on any type of future-oriented analysis to
challenge effectively the accuracy of their own conclusions. Premortem
Analysis uses a form of reframing, in which restating the question or
problem from another perspective enables one to see it in a different way
and come up with different answers. Imagine yourself several years in the
future. You suddenly learn from an unimpeachable source that your
original estimate was wrong. Then imagine what could have happened to
cause your estimate to be wrong. Looking back to explain something that
has happened is much easier than looking into the future to forecast what will happen.

With the Structured Self-Critique, analysts respond to a list of questions about a variety of factors, including sources of uncertainty, analytic
processes that were used, critical assumptions, diagnosticity of evidence,
information gaps, and the potential for deception. Rigorous use of both of
these techniques can help prevent a future need for a postmortem.

What If? Analysis (chapter 9):


In conducting a What If? Analysis, one imagines that an unexpected event
has happened and then, with the benefit of “hindsight,” analyzes how it
could have happened and considers the potential consequences. This type
of exercise creates an awareness that prepares the analyst’s mind to
recognize early signs of a significant change, and can enable decision
makers to plan ahead for that contingency. A What If? Analysis can be a
tactful way of alerting decision makers to the possibility that they may be
wrong.

3.2 Making a Habit of Using Structured Techniques
Analysts sometimes express concern that they do not have enough time to
use structured analytic techniques. The experience of most analysts and
particularly managers of analysts is that this concern is unfounded. In fact,
if analysts stop to consider how much time it takes not just to research an
issue and draft a report, but also to coordinate the analysis, walk the paper
through the editing process, and get it approved, they will usually discover
that the use of structured techniques almost always speeds the process.

✶ Many of the techniques, such as a Key Assumptions Check, Indicators Validation, or Venn Analysis, take little time and
substantially improve the rigor of the analysis.
✶ Some take a little more time to learn, but once learned, often save
analysts considerable time over the long run. Analysis of Competing
Hypotheses and Red Hat Analysis are good examples of this
phenomenon.
✶ Techniques such as the Getting Started Checklist; AIMS
(Audience, Issue, Message, and Storyline); or Issue Redefinition force analysts to stop and reflect on how to be more efficient over
time.
✶ Premortem Analysis and Structured Self-Critique usually take
more time but offer major rewards if errors in the original analysis are
discovered and remedied.

Figure 3.2 The Five Habits of the Master Thinker

Source: Copyright 2013 Pherson Associates, LLC.

When working on quick-turnaround items such as a current situation report or an intelligence assessment that must be produced the same day, a
credible argument can be made that a structured analytic technique cannot
be applied properly in the available time. When deadlines are short,
gathering the right people in a small group to employ a structured
technique can prove impossible.

The best response to this valid observation is to practice using the core
techniques when deadlines are less pressing. In so doing, analysts will
ingrain new habits of thinking critically in their minds. If they and their
colleagues practice how to apply the concepts embedded in the structured
techniques when they have time, they will be more capable of applying
these critical thinking skills instinctively when under pressure. The Five Habits of the Master Thinker are described in Figure 3.2.1 Each habit can
be mapped to one or more structured analytic techniques.

Key Assumptions:
In a healthy work environment, challenging assumptions should be
commonplace, ranging from “Why do you assume we all want pepperoni
pizza?” to “Won’t increased oil prices force them to reconsider their
export strategy?” If you expect your colleagues to challenge your key
assumptions on a regular basis, you will become more sensitive to your
own assumptions, and you will increasingly ask yourself if they are well
founded.

Alternative Explanations:
When confronted with a new development, the first instinct of a good
analyst is to develop a hypothesis to explain what has occurred based on
the available evidence and logic. A master thinker goes one step further
and immediately asks whether any alternative explanations should be
considered. If envisioning one or more alternative explanations is difficult,
then a master thinker will simply posit a single alternative that the initial or
lead hypothesis is not true. While at first glance these alternatives may
appear much less likely, over time as new evidence surfaces they may
evolve into the lead hypothesis. Analysts who do not generate a set of
alternative explanations at the start and lock on to a preferred explanation
will often fall into the trap of Confirmation Bias—focusing on the data that
are consistent with their explanation and ignoring or rejecting other data
that are inconsistent.

Inconsistent Data:
Looking for inconsistent data is probably the hardest habit to master of the
five, but it is the one that can reap the most benefits in terms of time saved
when conducting an investigation or researching an issue. The best way to
train your brain to look for inconsistent data is to conduct a series of
Analysis of Competing Hypotheses (ACH) exercises. Such practice helps
the analyst learn how to more readily identify what constitutes compelling
contrary evidence. If an analyst encounters an item of data that is
inconsistent with one of the hypotheses in a compelling fashion (for example, a solid alibi), then that hypothesis can be quickly discarded,
saving the analyst time by redirecting his or her attention to more likely
solutions.

Key Drivers:
Asking at the outset what key drivers best explain what has occurred or
will foretell what is about to happen is a key attribute of a master thinker.
If key drivers are quickly identified, the chance of surprise will be
diminished. An experienced analyst should know how to vary the weights
of these key drivers (either instinctively or by using such techniques as
Multiple Scenarios Generation or Quadrant Crunching™) to generate a set
of credible alternative scenarios that capture the range of possible
outcomes. A master thinker will take this one step further, developing a list
of Indicators to identify which scenario is emerging.

Context:
Analysts often get so engaged in collecting and sorting data that they miss
the forest for the trees. Learning to stop and reflect on the overarching
context for the analysis is a key habit to learn. Most analysis is done under
considerable time pressure, and the tendency is to plunge in as soon as a
task is assigned. If the analyst does not take time to reflect on what the
client is really seeking, the resulting analysis could prove inadequate and
much of the research a waste of time. Ask yourself: “What do they need from me?” “How can I help them frame the issue?” and “Do I need to place their question in a broader context?” Failing to do this at the outset can
easily lead the analyst down blind alleys or require reconceptualizing an
entire paper after it has been drafted. Key structured techniques for
developing context include Starbursting, Mind Mapping, and Structured
Brainstorming.

Learning how to internalize the five habits will take a determined effort.
Applying each core technique to three to five real problems should implant
the basic concepts firmly in any analyst’s mind. With every repetition, the
habits will become more ingrained and, over time, will become instinctive.
Few analysts can wish for more; if they master the habits, they will both
increase their impact and save themselves time.

3.3 One Project, Multiple Techniques
Many projects require the use of multiple techniques, which is why this
book includes fifty-five different techniques. Each technique may provide
only one piece of a complex puzzle, and knowing how to put these pieces
together for any specific project is part of the art of structured analysis.
Separate techniques might be used for generating ideas, evaluating ideas,
identifying assumptions, drawing conclusions, and challenging previously
drawn conclusions. Chapter 12, “Practitioner’s Guide to Collaboration,”
discusses stages in the collaborative process and the various techniques
applicable at each stage.

Multiple techniques can also be used to check the accuracy and increase
the confidence in an analytic conclusion. Research shows that forecasting
accuracy is increased by combining “forecasts derived from methods that
differ substantially and draw from different sources of information.”2 This
is a particularly appropriate function for the Delphi Method (chapter 9),
which is a structured process for eliciting judgments from a panel of
outside experts. If a Delphi panel produces results similar to the internal
analysis, one can have significantly greater confidence in those results. If
the results differ, further research may be appropriate to understand why
and evaluate the differences. A key lesson learned from mentoring analysts
in the use of structured techniques is that major benefits can result—and
major mistakes avoided—if two different techniques (for example,
Structured Brainstorming and Diagnostic Reasoning) are used to attack the
same problem or two groups work the same problem independently and
then compare their results.
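
A minimal sketch, with invented numbers, of the comparison described above: when two independent methods converge, confidence rises; when they diverge, the gap itself is a finding worth investigating.

```python
# Minimal sketch of comparing independent estimates, e.g., an internal
# team's judgment against an outside Delphi panel's. The probabilities and
# the divergence threshold are invented for illustration.
internal_estimate = 0.70  # internal team's probability for the outcome
delphi_estimate = 0.40    # outside expert panel's probability

combined = (internal_estimate + delphi_estimate) / 2
gap = abs(internal_estimate - delphi_estimate)

print(f"combined estimate: {combined:.2f}")
if gap > 0.20:
    print(f"estimates differ by {gap:.2f}: investigate why before reporting")
else:
    print("estimates converge: greater confidence in the result")
```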

The back cover of this book opens up to a graphic that shows eight
families of techniques and traces the links among the families and the
individual techniques within the families. The families of techniques
include Decomposition and Visualization, Idea Generation, Scenarios and
Indicators, Hypothesis Generation and Testing, Assessment of Cause and
Effect, Challenge Analysis, Conflict Management, and Decision Support.

3.4 Common Errors in Selecting Techniques


The value and accuracy of an analytic product depend in part upon
selection of the most appropriate technique or combination of techniques for doing the analysis. Unfortunately, it is easy for analysts to go astray
when selecting the best method. Lacking effective guidance, analysts are
vulnerable to various influences:3

✶ College or graduate-school recipe: Analysts are inclined to use the tools they learned in college or graduate school whether or not those tools are well suited to the different context of intelligence analysis.
✶ Tool rut: Analysts are inclined to use whatever tool they already
know or have readily available. Psychologist Abraham Maslow
observed that “if the only tool you have is a hammer, it is tempting to
treat everything as if it were a nail.”4
✶ Convenience shopping: The analyst, guided by the evidence that
happens to be available, uses a method appropriate for that evidence,
rather than seeking out the evidence that is really needed to address
the intelligence issue. In other words, the evidence may sometimes
drive the technique selection instead of the analytic need driving the
evidence collection.
✶ Time constraints: Analysts can easily be overwhelmed by their
in-boxes and the myriad tasks they have to perform in addition to
their analytic workload. The temptation is to avoid techniques that
would “take too much time.” In reality, however, many useful
techniques take relatively little time, even as little as an hour or two.
And the time analysts spend upfront can save time in the long run by
keeping them from going off on a wrong track or by substantially
reducing the time required for editing and coordination—while
helping them to produce higher-quality and more compelling analysis
than might otherwise be possible.

3.5 Structured Technique Selection Guide


Analysts must be able, with minimal effort, to identify and learn how to
use the techniques that best meet their needs and fit their styles. The
following selection guide lists twelve short questions defining what
analysts may want or need to do and the techniques that can help them best
perform that task. (Note: If only the chapter number and title are given, all
or almost all the techniques listed for that chapter are potentially relevant.
If one or more techniques are shown in parentheses after the chapter
number and title, only the listed techniques in that chapter are relevant.)

To find structured techniques that would be most applicable for a task,
analysts pick the statement(s) that best describe what they want to do and
then look up the relevant chapter(s). Each chapter starts with a brief
discussion of that category of analysis followed by a short description of
each technique in the chapter. Analysts should read the introductory
discussion and the paragraph describing each technique before reading the
full description of the techniques that are most applicable to the specific
issue. The description of each technique explains when, why, and how to
use it. For many techniques, the information provided is sufficient for
analysts to use the technique. For more complex techniques, however,
training in the technique or assistance by an experienced user of the
technique is strongly recommended.

The structured analytic techniques listed under each question not only help
analysts perform that particular analytic function but also help them overcome,
avoid, or at least mitigate the influence of several intuitive traps or System
1 errors commonly encountered by intelligence analysts. Most of these
traps are manifestations of more basic cognitive biases.

SELECTING THE RIGHT TECHNIQUE


Choose What You Want to Do
1. Define the project?
Chapter 4: Decomposition and Visualization (Getting Started
Checklist, Customer Checklist, Issue Redefinition, Venn
Analysis)
Chapter 5: Idea Generation
2. Get started; generate a list, for example, of driving forces,
variables to be considered, indicators, important players,
historical precedents, sources of information, or questions to be
answered? Organize, rank, score, or prioritize the list?
Chapter 4: Decomposition and Visualization
Chapter 5: Idea Generation
3. Examine and make sense of the data; figure out what is going
on?
Chapter 4: Decomposition and Visualization (Chronologies
and Timelines, Sorting, Network Analysis, Mind Maps and
Concept Maps)
Chapter 5: Idea Generation (Cross-Impact Matrix)
4. Explain a recent event; assess the most likely outcome of an evolving situation?
Chapter 7: Hypothesis Generation and Testing
Chapter 8: Assessment of Cause and Effect
Chapter 9: Challenge Analysis
5. Monitor a situation to gain early warning of events or changes
that may affect critical interests; avoid surprise?
Chapter 6: Scenarios and Indicators
Chapter 9: Challenge Analysis
6. Generate and test hypotheses?
Chapter 7: Hypothesis Generation and Testing
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check)
7. Assess the possibility of deception?
Chapter 7: Hypothesis Generation and Testing (Analysis of
Competing Hypotheses, Deception Detection)
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check, Role Playing, Red Hat Analysis)
8. Foresee the future?
Chapter 6: Scenarios and Indicators
Chapter 7: Hypothesis Generation and Testing (Analysis of
Competing Hypotheses)
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check, Structured Analogies)
Chapter 9: Challenge Analysis
Chapter 11: Decision Support (Complexity Manager)
9. Challenge your own mental model?
Chapter 9: Challenge Analysis
Chapter 5: Idea Generation
Chapter 7: Hypothesis Generation and Testing (Diagnostic
Reasoning, Analysis of Competing Hypotheses)
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check)
10. See events from the perspective of the adversary or other
players?
Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check, Role Playing, Red Hat Analysis)
Chapter 9: Challenge Analysis (Red Team Analysis, Delphi
Method)
Chapter 10: Conflict Management
Chapter 11: Decision Support (Impact Matrix)
11. Manage conflicting mental models or opinions?
Chapter 10: Conflict Management
Chapter 7: Hypothesis Generation and Testing (Analysis of
Competing Hypotheses, Argument Mapping)

Chapter 8: Assessment of Cause and Effect (Key
Assumptions Check)
12. Support a manager, commander, action officer, planner, or
policymaker in deciding between alternative courses of action;
draw actionable conclusions?
Chapter 11: Decision Support
Chapter 10: Conflict Management
Chapter 7: Hypothesis Generation and Testing (Analysis of
Competing Hypotheses)
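
In spirit, the guide is a simple lookup table. Here is a minimal sketch, abbreviated to three of the twelve questions and written in Python purely for illustration:

```python
# Minimal sketch of the selection guide as a lookup table mapping the task
# an analyst wants to perform to the relevant chapters and techniques.
guide = {
    "define the project": [
        "Chapter 4 (Getting Started Checklist, Customer Checklist, "
        "Issue Redefinition, Venn Analysis)",
        "Chapter 5: Idea Generation",
    ],
    "assess the possibility of deception": [
        "Chapter 7 (Analysis of Competing Hypotheses, Deception Detection)",
        "Chapter 8 (Key Assumptions Check, Role Playing, Red Hat Analysis)",
    ],
    "foresee the future": [
        "Chapter 6: Scenarios and Indicators",
        "Chapter 7 (Analysis of Competing Hypotheses)",
        "Chapter 8 (Key Assumptions Check, Structured Analogies)",
        "Chapter 9: Challenge Analysis",
        "Chapter 11 (Complexity Manager)",
    ],
}

for reference in guide["assess the possibility of deception"]:
    print(reference)
```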

1. For a fuller discussion of this topic, see Randolph H. Pherson, “Five Habits of the Master Thinker,” Journal of Strategic Security 6, no. 3 (Fall 2013), http://scholarcommons.usf.edu/jss.

2. J. Scott Armstrong, “Combining Forecasts,” in Principles of Forecasting, ed. J. Scott Armstrong (New York: Springer Science+Business Media, 2001), 418–439.

3. The first three items here are from Craig S. Fleisher and Babette E.
Bensoussan, Strategic and Competitive Analysis: Methods and Techniques
for Analyzing Business Competition (Upper Saddle River, NJ: Prentice
Hall, 2003), 22–23.

4. Abraham Maslow, Psychology of Science (New York: Harper and Row, 1966). A similar quote is attributed to Abraham Kaplan: “Give a child a hammer and he suddenly discovers that everything he encounters needs pounding.”

4 Decomposition and Visualization

4.1 Getting Started Checklist
4.2 AIMS (Audience, Issue, Message, and Storyline)
4.3 Customer Checklist
4.4 Issue Redefinition
4.5 Chronologies and Timelines
4.6 Sorting
4.7 Ranking, Scoring, and Prioritizing
4.8 Matrices
4.9 Venn Analysis
4.10 Network Analysis
4.11 Mind Maps and Concept Maps
4.12 Process Maps and Gantt Charts

One of the most obvious constraints that analysts face in their work is the
limit on how much information most people can keep at the forefront of
their minds and think about at the same time. Imagine that you have to
make a difficult decision. You make a list of pros and cons. When it comes
time to make the decision, however, the lists may be so long that you can’t
think of them all at the same time to weigh off pros against cons. This
often means that when you think about the decision, you will vacillate,
focusing first on the pros and then on the cons, favoring first one decision
and then the other. Now imagine how much more difficult it would be to
think through an intelligence problem with many interacting variables. The
limitations of human thought make it difficult, if not impossible, to do
error-free analysis without the support of some external representation of
the parts of the problem that is being addressed.

Two common approaches for coping with this limitation of our working
memory are decomposition—that is, breaking down the problem or issue
into its component parts so that each part can be considered separately—
and visualization—placing all the parts on paper or on a computer screen
in some organized manner designed to facilitate understanding how the
various parts interrelate. Actually, all structured analytic techniques
employ these approaches, as the externalization of one’s thinking is part of
the definition of structured analysis. For some of the basic techniques,
however, decomposing an issue to present the data in an organized manner
is the principal contribution they make to more effective analysis. These
are the basic techniques that will be described in this chapter.

Any technique that gets a complex thought process out of the analyst’s
head and onto paper or the computer screen can be helpful. The use of
even a simple technique such as a checklist can be extremely productive.
Consider, for example, the work of Dr. Peter Pronovost, who is well
known for his research documenting that a simple checklist of precautions
reduces infections, deaths, and costs in hospital intensive care units
(ICUs). Dr. Pronovost developed a checklist of five standard precautions
against infections that can occur when the ICU staff uses catheter lines to
connect machines to a patient’s body in a hospital intensive care unit.
ICUs in the United States insert five million such lines into patients each
year, and national statistics show that, after ten days, 4 percent of those
lines become infected. Infections occur in 80,000 patients each year, and
between 5 percent and 28 percent die, depending on how sick the patient is
at the start. A month of observation showed that doctors skipped at least
one of the precautions in more than a third of all patients.

The Michigan Health and Hospital Association decided in 2003 to require


that its hospitals use three different checklists developed by Dr. Pronovost
for ICUs, and it made nurses responsible for documenting compliance.
Nurses were instructed to intervene if a doctor did not follow every step on
a checklist. Obviously, some doctors were offended by the idea that they
needed checklists and were being monitored by nurses, and none liked the
additional paperwork. However, during its first eighteen months, this
checklist program saved more than 1,500 lives and an estimated
$175,000,000, and these gains had been sustained for almost four years at
the time the referenced article was written.1 These results make one
wonder about the potential value of a checklist for intelligence analysts.

“Analysis is breaking information down into its component parts.


Anything that has parts also has a structure that relates these parts
to each other. One of the first steps in doing analysis is to
determine an appropriate structure for the analytic problem, so that
one can then identify the various parts and begin assembling
information on them. Because there are many different kinds of
analytic problems, there are also many different ways to structure
analysis.”

—Richards J. Heuer Jr., The Psychology of Intelligence Analysis
(1999)

Overview of Techniques
Getting Started Checklist, AIMS, Customer Checklist, and
Issue Redefinition
are four techniques that can be combined to help analysts conceptualize
and launch a new project. If an analyst can start off in the right direction
and avoid having to change course later, a lot of time can be saved.
However, analysts should still be prepared to change course should their
research so dictate. As Albert Einstein said, “If we knew what we were
doing, it would not be called research.”

Chronologies and Timelines
are used to organize data on events or actions. They are used whenever it
is important to understand the timing and sequence of relevant events or to
identify key events and gaps.

Sorting
is a basic technique for organizing data in a manner that often yields new
insights. Sorting is effective when information elements can be broken out
into categories or subcategories for comparison by using a computer
program, such as a spreadsheet. It is particularly effective during initial
data gathering and hypothesis generation.

Ranking, Scoring, and Prioritizing
provide how-to guidance on three different ranking techniques—Ranked
Voting, Paired Comparison, and Weighted Ranking. Combining an idea-
generation technique such as Structured Brainstorming with a ranking
technique is an effective way for an analyst to start a new project or to
provide a foundation for intraoffice or interagency collaboration. The idea-
generation technique is used to develop lists of driving forces, variables to
be considered, indicators, possible scenarios, important players, historical
precedents, sources of information, questions to be answered, and so forth.
Such lists are even more useful once they are ranked, scored, or prioritized
to determine which items are most important, most useful, most likely, or
should be at the top of the priority list.

Matrices
are generic analytic tools for sorting and organizing data in a manner that
facilitates comparison and analysis. They are used to analyze the
relationships among any two sets of variables or the interrelationships
among a single set of variables. A matrix consists of a grid with as many
cells as needed for whatever problem is being analyzed. Some analytic
topics or problems that use a matrix occur so frequently that they are
described in this book as separate techniques.

Venn Analysis
is a visual technique that can be used to explore the logic of arguments.
Venn diagrams are commonly used to teach set theory in mathematics;
they can also be adapted to illustrate simple set relationships in analytic
arguments, reveal flaws in reasoning, and identify missing data.

Network Analysis
is used extensively by counterterrorism, counternarcotics,
counterproliferation, law enforcement, and military analysts to identify and
monitor individuals who may be involved in illegal activity. Social
Network Analysis is used to map and analyze relationships among people,
groups, organizations, computers, websites, and any other information
processing entities. The terms Network Analysis, Association Analysis,
Link Analysis, and Social Network Analysis are often used
interchangeably.

Mind Maps and Concept Maps
are visual representations of how an individual or a group thinks about a
topic of interest. Such diagrams have two basic elements—the ideas that
are judged relevant to whatever topic one is thinking about, and the lines
that show and briefly describe the connections among these ideas. Mind
Maps and Concept Maps are used by an individual or a group to help sort
out their own thinking or to facilitate the communication of a complex set
of relationships to others, as in a briefing or an intelligence report. Mind
Maps can also be used to identify gaps or stimulate new thinking about a
topic.

Process Maps and Gantt Charts
were developed for use in business and the military, but they are also
useful to intelligence analysts. Process Mapping is a technique for
identifying and diagramming each step in a complex process; this includes
Event Flow Charts, Activity Flow Charts, and Commodity Flow Charts. A
Gantt Chart is a specific type of Process Map that uses a matrix to chart
the progression of a multifaceted process over a specific period of time.
Both techniques can be used to track the progress of plans or projects of
intelligence interest being undertaken by a foreign government, a criminal
or terrorist group, or any other nonstate actor—for example, tracking
development of a weapons system or preparations for a military, terrorist,
or criminal attack.

Other comparable techniques for organizing and presenting data include
various types of graphs, diagrams, and trees. They are not discussed in this
book because they are well covered in other works, and it was necessary to
draw a line on the number of techniques included here.

4.1 Getting Started Checklist


The Getting Started Checklist is a simple tool to help analysts launch a
new project.2 Past experience has shown that much time can be saved if an
analyst takes a few moments to reflect on the task at hand before plunging
in. Much analysis is done under time pressure, and this often works to the
detriment of the quality of the final product as well as the efficiency of the
research and drafting process.

When to Use It
Analysts should view the Getting Started Checklist in much the same way
a pilot and copilot regard the checklist they always review carefully before
flying their airplane. It should be an automatic first step in preparing an
analysis and essential protection against future unwanted surprises. Even if
the task is to prepare a briefing or a quick-turnaround article, taking a
minute to review the checklist can save valuable time, for example, by
reminding the analyst of a key source, prompting her or him to identify the
key customer, or challenging the analyst to consider alternative
explanations before coming to closure.

Value Added
By getting the fundamentals right at the start of a project, analysts can
avoid having to change course later on. This groundwork can save the
analyst—and reviewers of the draft—a lot of time and greatly improve the
quality of the final product. Seasoned analysts will continuously adapt the
checklist to their work environment and the needs of their particular
customers.

The Method
Analysts should answer several questions at the beginning of a new
project. The following is our list of suggested starter questions, but there is
no single best way to begin. Other lists can be equally effective.

✶ What has prompted the need for the analysis? For example, was it
a news report, a new intelligence report, a new development, a
perception of change, or a customer request?
✶ What is the key intelligence, policy, or business question that
needs to be answered?
✶ Why is this issue important, and how can analysis make a unique
and meaningful contribution?
✶ Has this question or a similar question already been answered by
you or someone else, and what was said? To whom was that analysis
delivered, and what has changed since then?
✶ Who are the principal customers? Are their needs well
understood? If not, try to gain a better understanding of their needs
and the style of reporting they like.
✶ Are there any other stakeholders who would have an interest in the
answer to the question? Would any of them prefer that a different
question be answered? Consider meeting with others who see the
question from a different perspective.
✶ What are all the possible answers to this question? What
alternative explanations or outcomes should be considered before
making an analytic judgment on the issue?
✶ Would the analysis benefit from the use of structured analytic
techniques?
✶ What potential sources or streams of information would be most
useful—and efficient—to exploit for learning more about this topic or
question?
✶ Where should we reach out for expertise, information, or
assistance within our organization or outside our unit?
✶ Should we convene an initial brainstorming session to identify and
challenge key assumptions, examine key information, identify key
drivers and important players, explore alternative explanations, and/or
generate alternative hypotheses?
✶ What is the best way to present my information and analysis?
Which parts of my response should be presented as graphics,
tables, or matrices?

Relationship to Other Techniques


Other techniques that help when first starting a project include the
Customer Checklist, Issue Redefinition, Structured Brainstorming,
Starbursting, Mind Mapping, Venn Analysis, Key Assumptions Check,
and Multiple Hypothesis Generation.

Origins of This Technique


This list of suggested starter questions was developed by Randy Pherson
and refined in multiple training sessions conducted with Department of
Homeland Security analysts.

4.2 AIMS (Audience, Issue, Message, and Storyline)
Before beginning to write a paper, analysts should avoid the temptation to
plunge in and start drafting. Instead, they need to take a little time to think
through the AIMS of their product.3 AIMS is a mnemonic that stands for
Audience, Issue or Intelligence Question, Message, and Storyline. Its
purpose is to prompt the analyst to consider up front who the paper is
being written for, what key question or questions it should address, what is
the key message the reader should take away, and how best to present the
analysis in a compelling way. Once these four questions are answered in a
crisp and direct way, the process of drafting the actual paper becomes
much easier.

When to Use It
A seasoned analyst knows that much time can be saved over the long run if
the analyst takes an hour or so to define the AIMS of the paper. This can
be done working alone or, preferably, with a small group. It is helpful to
include one’s supervisor in the process either as part of a short
brainstorming session or by asking the supervisor to review the resulting
plan. For major papers, consideration should be given to addressing the
AIMS of a paper in a more formal Concept Paper or Terms of Reference
(TOR).

Value Added
Conceptualizing a product before it is drafted is usually the greatest
determinant of whether the final product will meet the needs of key
consumers. Focusing on these four elements—Audience, Issue or
Intelligence Question, Message, and Storyline—helps ensure that the
customer will quickly absorb and benefit from the analysis. If the AIMS of
the article or assessment are not considered before drafting, the paper is
likely to lack focus and frustrate the reader. Moreover, chances are that it
will take more time to move through the editing and coordination
process, which could reduce its timeliness and relevance to the customer.

The Method
✶ Audience: The first step in the process is to identify the primary
audience for the product. Are you writing a short, tightly focused
article for a senior customer, or a longer piece with more detail that
will serve a less strategic customer? If you identify more than one key
customer for the product, we recommend that you draft multiple
versions of the paper while tailoring each to a different key customer.
Usually the best strategy is to outline each paper before you begin
researching and writing. Then when you begin to draft the first paper
you will know what information should be saved and later
incorporated into the second paper.
✶ Issue or intelligence question: Ask yourself what is the key issue
or question with which your targeted audience is struggling or will
have to struggle in the future. What is their greatest concern or
greatest need at the moment? Make sure that the key question is
tightly focused, actionable, and answerable in more than one way.
✶ Message: What is the bottom line that you want to convey to your
key customer or customers? What is the “elevator speech” or key
point you would express to that customer if you had a minute with her
or him between floors on the elevator? The message should be
formulated as a short, clear, and direct statement before starting to
draft your article. Sometimes it is easier to discern the message if you
talk through your paper with a peer or your supervisor and then write
down the key theme or conclusion that emerges from the
conversation.
✶ Storyline: With your bottom-line message in mind, can you
present that message in a clear, direct, and persuasive way to the
customer? Do you have a succinct line of argumentation that flows
easily and logically throughout the paper and tells a compelling story?
Can you illustrate this storyline with equally compelling pictures,
videos, or other graphics?

Relationship to Other Techniques


Other techniques that help when you conceptualize a paper when first
starting a project include the Getting Started Checklist, Customer
Checklist, Issue Redefinition, Venn Analysis, and Structured
Brainstorming.

Origins of This Technique


This technique was developed from training materials used in several U.S.
government agencies and further refined in multiple training sessions
conducted with Department of Homeland Security analysts.

4.3 Customer Checklist
The Customer Checklist helps an analyst tailor the product to the needs of
the principal customer for the analysis.4 When used appropriately, it
ensures that the product is of maximum possible value to this customer. If
the product—whether a paper, briefing, or web posting—is intended to
serve many different kinds of customers, it is important to focus on the
customer or customers who constitute the main audience and meet their
specific concerns or requirements.

When to Use It
The Customer Checklist helps the analyst focus on the customer for whom
the analytic product is being prepared when the analyst is first starting on a
project, by posing a set of questions. Ideally, an analytic product
should address the needs of the principal customer, and it is most efficient
to identify that individual at the very start. It also can be helpful to review
the Customer Checklist later in the drafting process to ensure that the
product continues to be tightly focused on the needs of the principal
recipient.

If more than one key customer is identified, then consider preparing
different drafts tailored to each key customer. Usually it is more effective
to outline each paper before beginning to draft. This will accomplish two
tasks: giving the analyst a good sense of all the types of information that
will be needed to write each paper, and providing a quick point of
reference as the analyst begins each new draft.

Value Added
The analysis will be more compelling if the needs and preferences of the
principal customer are kept in mind at each step in the drafting process.
Using the checklist also helps focus attention on what matters most and
generate a rigorous response to the task at hand.

The Method
Before preparing an outline or drafting a paper, ask the following
questions:

✶ Who is the key person for whom the product is being developed?
✶ Will this product answer the question the customer asked?
✶ Did the customer ask the right question? If not, do you need to
place your answer in a broader context to better frame the issue
before addressing his or her particular concern?
✶ What is the most important message to give this customer?
✶ What value added are you providing with your response?
✶ How is the customer expected to use this information?
✶ How much time does the customer have to digest your product?
✶ What format would convey the information most effectively?
✶ Is it possible to capture the essence in one or a few key graphics?
✶ Does distribution of this document need to be restricted? What
classification is most appropriate? Should you prepare different
products at different levels of restriction?
✶ What is the customer’s level of interest in or tolerance for
technical language and detail? Can you provide details in appendices
or backup materials, graphics, or an annex?
✶ Would the customer expect you to reach out to other experts to tap
their expertise in drafting this paper? If so, how would you flag the
contribution of other experts in the product?
✶ To whom might the customer turn for other views on this topic?
What data or analysis might others provide that could influence how
the customer would react to what you will be preparing?
✶ What perspectives do other interested parties have on this issue?
What are the responsibilities of the other parties?

Relationship to Other Techniques


Other techniques that help an analyst get over the hump when first starting
on a project include the Getting Started Checklist, Issue Redefinition,
Structured Brainstorming, Starbursting, and Multiple Hypotheses
Generation.

Origins of This Technique


This checklist was developed by Randy Pherson and refined in training
sessions with Department of Homeland Security analysts.

4.4 Issue Redefinition
Many analytic projects start with an issue statement: What is the issue,
why is it an issue, and how will it be addressed? Issue Redefinition is a
technique for experimenting with different ways to define an issue. This is
important, because seemingly small differences in how an issue is defined
can have significant effects on the direction of the research.

When to Use It
Using Issue Redefinition at the beginning of a project can get you started
off on the right foot. It may also be used at any point during the analytic
process when a new hypothesis or critical new evidence is introduced.
Issue Redefinition is particularly helpful in preventing “mission creep,”
which results when analysts unwittingly take the direction of analysis
away from the core intelligence question or issue at hand, often as a result
of the complexity of the problem or a perceived lack of information.
Analysts have found Issue Redefinition useful when their thinking is stuck
in a rut, and they need help to get out of it.

It is most effective when the definition process is collaboratively
developed and tracked in an open and sharable manner, such as on a wiki.
The dynamics of the wiki format—including the ability to view edits, nest
information, and link to other sources of information—allow analysts to
understand and explicitly share the reasoning behind the genesis of the
core issue and additional questions as they arise.

Value Added
Proper issue identification can save a great deal of time and effort by
forestalling unnecessary research and analysis on a poorly stated issue.
Issues are often poorly presented when they are

✶ Solution driven (Where are the weapons of mass destruction in
Iraq?)
✶ Assumption driven (When China launches rockets into Taiwan,
will the Taiwanese government collapse?)
✶ Too broad or ambiguous (What is the status of the political
opposition in Syria?)
✶ Too narrow or misdirected (Will college students purchase this
product?)

The Method
The following tactics can be used to stimulate new or different thinking
about the best way to state an issue or problem. (See Figure 4.4 for an
example.) These tactics may be used in any order:

✶ Rephrase: Redefine the issue without losing the original meaning.
Review the results to see if they provide a better foundation upon
which to conduct the research and assessment to gain the best answer.
Example: Rephrase the original question, “How much of a role does
Aung San Suu Kyi play in the ongoing unrest in Burma?” as, “How
active is the National League for Democracy, headed by Aung San
Suu Kyi, in the antigovernment riots in Burma?”
✶ Ask why: Ask a series of “why” or “how” questions about the
issue definition. After receiving the first response, ask “why” to do
that or “how” to do it. Keep asking such questions until you are
satisfied that the real problem has emerged. This process is especially
effective in generating possible alternative answers.
✶ Broaden the focus: Instead of focusing on only one piece of the
puzzle, step back and look at several pieces together. What is the
issue connected to? Example: The original question, “How corrupt is
the Pakistani president?” leads to the question, “How corrupt is the
Pakistani government as a whole?”
✶ Narrow the focus: Can you break down the issue further? Take
the question and ask about the components that make up the problem.
Example: The original question, “Will the European Union continue
to support the Euro?” can be broken down to, “How do individual
member states view their commitment to the Euro?”
✶ Redirect the focus: What outside forces impinge on this issue? Is
deception involved? Example: The original question, “Has the
terrorist threat posed by Al-Qaeda been significantly diminished?” is
revised to, “What Al-Qaeda affiliates now pose the greatest threat to
U.S. national security?”
✶ Turn 180 degrees: Turn the issue on its head. Is the issue the one
asked or the opposite of it? Example: The original question, “How
much of the ground capability of China’s People’s Liberation Army
would be involved in an initial assault on Taiwan?” is rephrased as,
“How much of the ground capability of China’s People’s Liberation
Army would not be involved in the initial Taiwan assault?”

Relationship to Other Techniques


Issue Redefinition is often used simultaneously with the Getting Started
Checklist and the Customer Checklist. The technique is also known as
Issue Development, Problem Restatement, and Reframing the Question.

Origins of This Technique


This is an edited version of Defense Intelligence Agency training
materials. It also draws on Morgan D. Jones, “Problem Restatement,” in
The Thinker’s Toolkit (New York: Three Rivers Press, 1998), chapter 3.

Figure 4.4 Issue Redefinition Example

4.5 Chronologies and Timelines
A Chronology is a list that places events or actions in the order in which
they occurred; a Timeline is a graphic depiction of those events set in the
context of the time of the events and the time between them. Both are
used to identify trends or relationships among the events or actions and, in
the case of a Timeline, among the events and actions as well as other
developments in the context of the overarching intelligence problem.

When to Use It
Chronologies and Timelines aid in organizing events or actions. Whenever
it is important to understand the timing and sequence of relevant events or
to identify key events and gaps, these techniques can be useful. The events
may or may not have a cause-and-effect relationship.

Value Added
Chronologies and Timelines aid in the identification of patterns and
correlations among events. These techniques also allow you to relate
seemingly disconnected events to the big picture; to highlight or identify
significant changes; or to assist in the discovery of trends, developing
issues, or anomalies. They can serve as a catch-all for raw data when the
meaning of the data has not yet been identified. Multiple-level Timelines
allow analysts to track concurrent events that may have an effect on one
another. Although Chronologies and Timelines may be developed at the
onset of an analytic task to ascertain the context of the activity to be
analyzed, they also may be used in postmortems to break down the stream
of reporting, find the causes for analytic failures, and highlight significant
events after an intelligence or business surprise.

The activities on a Timeline can lead an analyst to hypothesize the
existence of previously unknown events. In other words, the series of
known events may make sense only if other previously unknown events
had occurred. The analyst can then look for other indicators of those
missing events.

Chronologies and Timelines can be very useful for organizing information
in a format that can be readily understood in a briefing.

The Method
Chronologies and Timelines are effective yet simple ways for you to order
incoming information as you go through your daily message traffic. An
Excel spreadsheet or even a Word document can be used to log the results
of research and marshal evidence. You can use tools such as the Excel
drawing function or the Analysts’ Notebook to draw the Timeline. Follow
these steps:

✶ When researching the problem, ensure that the relevant
information is listed with the date or order in which it occurred. Make
sure the data are properly referenced.

✶ Review the Chronology or Timeline by asking the following
questions:
– What are the temporal distances between key events? If
“lengthy,” what caused the delay? Are there missing pieces of
data that should be collected to fill those gaps?
– Did the analyst overlook pieces of information that may
have had an impact on, or been related to, the events?
– Conversely, if events seem to have happened more rapidly
than expected, or if not all events appear to be related, is it
possible that the analyst has information related to multiple
event Timelines?
– Does the Timeline have all the critical events that are
necessary for the outcome to occur?
– When did the information become known to the analyst or a
key player?
– What are the information or intelligence gaps?
– Are there any points along the Timeline when the target is
particularly vulnerable to the collection of intelligence or
information or countermeasures?
– What events outside this Timeline could have influenced the
activities?
✶ If preparing a Timeline, synopsize the data along a line, usually
horizontal or vertical. Use the space on both sides of the line to
highlight important analytic points. For example, place facts above
the line and points of analysis or commentary below the line.
Alternatively, contrast the activities of different groups,
organizations, or streams of information by placement above or below
the line. If multiple actors are involved, you can use multiple lines,
showing how and where they converge.
✶ Look for relationships and patterns in the data connecting persons,
places, organizations, and other activities. Identify gaps or
unexplained time periods, and consider the implications of the
absence of evidence. Prepare a summary chart detailing key events
and key analytic points in an annotated Timeline.
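
Much of this bookkeeping can be automated. The following is a minimal
sketch in Python (the events and the fourteen-day gap threshold are
invented for illustration) showing how a Chronology can be ordered by
date and scanned for unusually long gaps that may signal missing
reporting or unknown intervening events:

    from datetime import date, timedelta

    # Invented events for illustration; in practice these would come
    # from a referenced log of reporting in a spreadsheet or database.
    events = [
        (date(2023, 2, 20), "Missile moved to primary test facility"),
        (date(2023, 2, 14), "Initial test-launch preparations observed"),
        (date(2023, 3, 18), "Telemetry equipment activated at launch pad"),
    ]

    # Build the Chronology: order the events by date.
    events.sort(key=lambda event: event[0])

    # Flag long gaps between consecutive events; a long gap may point
    # to data worth collecting or to previously unknown events.
    threshold = timedelta(days=14)
    for (day1, label1), (day2, _) in zip(events, events[1:]):
        print(day1, label1)
        gap = day2 - day1
        if gap > threshold:
            print(f"   ...{gap.days}-day gap: check for missing reporting")
    print(events[-1][0], events[-1][1])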

Potential Pitfalls
In using Timelines, analysts may assume, incorrectly, that events
following earlier events were caused by the earlier events. Also, the value
of this technique may be reduced if the analyst lacks imagination in
identifying contextual events that relate to the information in the
Chronology or Timeline.

Example
A team of analysts working on strategic missile forces knows what steps
are necessary to prepare for and launch a nuclear missile. (See Figure 4.5.)
The analysts have been monitoring a country they believe is close to
testing a new variant of its medium-range surface-to-surface ballistic
missile. They have seen the initial steps of a test launch in mid-February
and decide to initiate a concentrated watch of the primary and secondary
test launch facilities. Observed and expected activities are placed into a
Timeline to gauge the potential dates of a test launch. The analysts can
thus estimate when a possible missile launch may occur and make decision
makers aware of indicators of possible activity.

Origins of This Technique


Chronologies and Timelines are well-established techniques used in many
fields. The information here is from Defense Intelligence Agency training
materials and Jones, “Sorting, Chronologies, and Timelines,” The
Thinker’s Toolkit (New York: Three Rivers Press, 1998), chapter 6.

Figure 4.5 Timeline Estimate of Missile Launch Date

4.6 Sorting
Sorting is a basic technique for organizing a large body of data in a manner
that often yields new insights.

When to Use It
Sorting is effective when information elements can be broken out into
categories or subcategories for comparison with one another, most often
by using a computer program such as a spreadsheet. This technique is
particularly effective during the initial data-gathering and hypothesis-
generation phases of analysis, but you may also find sorting useful at other
times.

Value Added
Sorting large amounts of data into relevant categories that are compared
with one another can provide analysts with insights into trends,
similarities, differences, or abnormalities of intelligence interest that
otherwise would go unnoticed. When you are dealing with transactions
data in particular (for example, communications intercepts or transfers of
goods or money), it is very helpful to sort the data first.

The Method
Follow these steps:

✶ Review the categories of information to determine which category
or combination of categories might show trends or an abnormality
that would provide insight into the problem you are studying. Place
the data into a spreadsheet or a database using as many fields
(columns) as necessary to differentiate among the data types (dates,
times, locations, people, activities, amounts, etc.). List each of the
facts, pieces of information, or hypotheses involved in the problem
that are relevant to your sorting schema. (Use paper, whiteboard,
movable sticky notes, or other means for this.)
✶ Review the listed facts, information, or hypotheses in the database
or spreadsheet to identify key fields that may allow you to uncover
possible patterns or groupings. Those patterns or groupings then
illustrate the schema categories and can be listed as header categories.
For example, if an examination of terrorist activity shows that most
attacks occur in hotels and restaurants but that the times of the attacks
vary, “Location” is the main category, while “Date” and “Time” are
secondary categories.
✶ Group those items according to the sorting schema in the
categories that were defined in step 1.
✶ Choose a category and sort the data within that category. Look for
any insights, trends, or oddities. Good analysts notice trends; great
analysts notice anomalies.
✶ Review (or ask others to review) the sorted facts, information, or
hypotheses to see if there are alternative ways to sort them. List any
alternative sorting schema for your problem. One of the most useful
applications for this technique is to sort according to multiple
schemas and examine results for correlations between data and
categories. But remember that correlation is not the same as
causation.
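
The grouping at the heart of the method takes only a few lines of code.
Here is a minimal sketch in Python, using invented attack records and
"Location" as the main category, as in the hotel-and-restaurant example
above:

    from collections import defaultdict

    # Invented records for illustration: (location, date, time).
    attacks = [
        ("hotel",      "2023-05-01", "22:10"),
        ("restaurant", "2023-05-09", "13:45"),
        ("hotel",      "2023-06-02", "09:30"),
        ("market",     "2023-06-20", "18:05"),
        ("hotel",      "2023-07-11", "21:50"),
    ]

    # Group by the main category ("Location"), keeping the secondary
    # categories ("Date" and "Time") for sorting within each group.
    by_location = defaultdict(list)
    for location, day, time in attacks:
        by_location[location].append((day, time))

    # Review each group for trends and, especially, for anomalies.
    for location in sorted(by_location, key=lambda k: -len(by_location[k])):
        print(f"{location}: {len(by_location[location])} incident(s)")
        for day, time in sorted(by_location[location]):
            print(f"    {day} at {time}")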

Examples
Example 1:
Are a foreign adversary’s military leaders pro-U.S., anti-U.S., or neutral
in their attitudes toward U.S. policy in the Middle East? To answer this
question, analysts sort the leaders by various factors determined to give
insight into the issue, such as birthplace, native language, religion, level of
professional education, foreign military or civilian/university exchange
training (where/when), field/command assignments by parent service,
political influences in life, and political decisions made. Then they review
the information to see if any parallels exist among the categories.

Example 2:
Analysts review the data from cell-phone communications among five
conspirators to determine the frequency of calls, patterns that show who is
calling whom, changes in patterns of frequency of calls prior to a planned
activity, dates and times of calls, subjects discussed, and so forth.

Example 3:
Analysts are reviewing all information related to an adversary’s weapons
of mass destruction (WMD) program. Electronic intelligence reporting
shows more than 300,000 emitter collections over the past year alone. The
analysts’ sorting of the data by type of emitter, dates of emission, and
location shows varying increases and decreases of emitter activity with
some minor trends identifiable. The analysts filter out all collections
except those related to air defense. The remaining information is sorted by
type of air defense system, location, and dates of activity. Of note is a
period when there is an unexpectedly large increase of activity in the air
defense surveillance and early warning systems. The analysts review
relevant external events and find that a major opposition movement
outside the country held a news conference where it detailed the
adversary’s WMD activities, including locations of the activity within the
country. The air defense emitters for all suspected locations of WMD
activity, including several not included in the press conference, increased
to a war level of surveillance within four hours of the press conference.
The analysts reviewed all air defense activity locations that showed the
increase assumed to be related to the press conference and the WMD
programs and found two locations showing increased activity but not
previously listed as WMD-related. These new locations were added to
collection planning to determine what relationship, if any, they had to the
WMD program.
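
The filter-and-sort workflow in this example reduces to a few basic
operations. Below is a minimal sketch in Python, with a handful of
invented emitter records standing in for the 300,000 real collections:

    from collections import Counter

    # Invented records: (system, category, location, date of activity).
    records = [
        ("EW-radar",   "air defense", "site-1", "2023-04-02"),
        ("SAM-radar",  "air defense", "site-2", "2023-04-02"),
        ("nav-beacon", "navigation",  "site-3", "2023-04-02"),
        ("EW-radar",   "air defense", "site-1", "2023-04-03"),
        ("EW-radar",   "air defense", "site-4", "2023-04-03"),
    ]

    # Filter out all collections except those related to air defense.
    air_defense = [rec for rec in records if rec[1] == "air defense"]

    # Sort by type of air defense system, location, and date of activity.
    air_defense.sort(key=lambda rec: (rec[0], rec[2], rec[3]))

    # Tally activity by date to spot an unexpected surge in emissions.
    activity_by_date = Counter(rec[3] for rec in air_defense)
    for day, count in sorted(activity_by_date.items()):
        print(day, count)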

Potential Pitfalls
Improper sorting can hide valuable insights as easily as it can illuminate
them. Standardizing the data being sorted is imperative. Working with an
analyst who has experience in sorting can help you avoid this pitfall in
most cases.

Origins of This Technique


Sorting is a long-established procedure for organizing data. The
description here is from Defense Intelligence Agency training materials.

4.7 Ranking, Scoring, and Prioritizing


This section describes three techniques for ranking, scoring, or prioritizing
items on any list according to the item’s importance, desirability, priority,
value, probability, or any other criterion.

When to Use It
Use of a ranking technique is often the next step after using an idea-
generation technique such as Structured Brainstorming, Virtual
Brainstorming, or Nominal Group Technique (see chapter 5). A ranking
technique is appropriate whenever there are too many items to rank easily
just by looking at the list, the ranking has significant consequences and
must be done as accurately as possible, or it is useful to aggregate the
opinions of a group of analysts.

Value Added
Combining an idea-generation technique with a ranking technique is an
excellent way for an analyst to start a new project or to provide a
foundation for intraoffice or interagency collaboration. An idea-generation
technique is often used to develop lists of such things as driving forces,
variables to be considered, or important players. Such lists are more useful
once they are ranked, scored, or prioritized. For example, you might
determine which items are most important, most useful, most likely, or that
need to be done first.

The Method
Of the three methods discussed here, Ranked Voting is the easiest and
quickest to use, and it is often sufficient. However, it is not as accurate
after you get past the top two or three ranked items, because the group
usually has not thought as much (and may not care as much) about the
lower-ranked items. Ranked Voting also provides less information than
either Paired Comparison or Weighted Ranking. Ranked Voting shows
only that one item is ranked higher or lower than another; it does not show
how much higher or lower. Paired Comparison does provide this
information, and Weighted Ranking provides even more information. It
specifies the criteria that are used in making the ranking, and weights are
assigned to those criteria for each of the items in the list.

4.7.1 The Method: Ranked Voting


In a Ranked Voting exercise, members of the group individually rank each
item in order according to the member’s preference or what the member
regards as the item’s importance. Depending upon the number of items or
the specific analytic needs, the group can decide to rank all the items or
only the top three to five. The group leader or facilitator passes out simple
ballots listing all the items to be voted on. Each member votes his or her
order of preference. If a member views two items as being of equal
preference, the votes can be split between them. For example, if two items
are tied for second place, each receives a 2.5 ranking. Any items that are
not voted on are considered to be tied for last place. After members of the
group have voted, the votes are added up. The item with the lowest
number is ranked number 1.
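
The tallying just described is easy to sketch in code. In this minimal
Python illustration the ballots are invented; note how a tie for second
place is entered as 2.5 for both items and how items left unranked on a
ballot share the remaining last-place ranks:

    # Each ballot maps an item to the rank a member assigned it
    # (1 = most preferred); ties are entered as split ranks.
    ballots = [
        {"A": 1, "B": 2.5, "C": 2.5, "D": 4},
        {"A": 2, "B": 1, "C": 3, "D": 4},
        {"B": 1, "A": 2},  # this member ranked only the top two items
    ]
    items = {"A", "B", "C", "D"}

    totals = {item: 0.0 for item in items}
    for ballot in ballots:
        unranked = items - ballot.keys()
        # Unranked items are tied for last place: they split the
        # leftover ranks evenly (here, ranks 3 and 4 give 3.5 each).
        shared = (sum(range(len(ballot) + 1, len(items) + 1)) / len(unranked)
                  if unranked else 0.0)
        for item in items:
            totals[item] += ballot.get(item, shared)

    # The item with the lowest total is ranked number 1.
    for place, item in enumerate(sorted(totals, key=totals.get), start=1):
        print(place, item, totals[item])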

4.7.2 The Method: Paired Comparison


Paired Comparison compares each item against every other item, and the
analyst can assign a score to show how much more important or more
preferable or more probable one item is than the others. This technique
provides more than a simple ranking, as it shows the degree of importance
or preference for each item. The list of items can then be ordered along a
dimension, such as importance or preference, using an interval-type scale.

Follow these steps to use the technique:

✶ List the items to be compared. Assign a letter to each item.


✶ Create a table with the letters across the top and down the left side,
as in Figure 4.7a. The results of the comparison of each pair of items
are marked in the cells of this table. Note the diagonal line of darker-
colored cells. These cells are not used, as each item is never
compared with itself. The cells below this diagonal line are not used
because they would duplicate a comparison in the cells above the
diagonal line. If you are working in a group, distribute a blank copy
of this table to each participant.
✶ Looking at the cells above the diagonal row of gray cells, compare
the item in the row with the one in the column. For each cell, decide
which of the two items is more important (or more preferable or more
probable). Write the letter of the winner of this comparison in the cell,
and score the degree of difference on a scale from 0 (no difference) to
3 (major difference), as in Figure 4.7a.
✶ Consolidate the results by adding up the total of all the values for
each of the items and put this number in the “Score” column. For
example, in Figure 4.7a item B has one 3 in the first row plus one 2,
and two 1s in the second row, for a score of 7.
✶ Finally, it may be desirable to convert these values into a
percentage of the total score. To do this, divide the total number of
scores (20 in the example) by the score for each individual item. Item
B, with a score of 7, is ranked most important or most preferred. Item
B received a score of 35 percent (7 divided by 20), as compared with
25 percent for item D and only 5 percent each for items C and E,
which received only one vote each. This example shows how Paired
Comparison captures the degree of difference between each ranking.
✶ To aggregate rankings received from a group of analysts, simply
add the individual scores for each analyst.
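
The consolidation arithmetic is mechanical enough to script. In the
minimal Python sketch below, the pairwise judgments are invented for
illustration (they are not the values shown in Figure 4.7a); each entry
records which item won the comparison and the 0-to-3 degree of
difference:

    from itertools import combinations

    items = ["A", "B", "C", "D", "E"]

    # Invented upper-triangle judgments: pair -> (winner, degree 0-3).
    judgments = {
        ("A", "B"): ("B", 3), ("A", "C"): ("A", 1), ("A", "D"): ("D", 2),
        ("A", "E"): ("A", 1), ("B", "C"): ("B", 2), ("B", "D"): ("B", 1),
        ("B", "E"): ("B", 1), ("C", "D"): ("D", 2), ("C", "E"): ("C", 1),
        ("D", "E"): ("D", 1),
    }

    # Consolidate: total the degree-of-difference scores each item won.
    scores = {item: 0 for item in items}
    for pair in combinations(items, 2):  # cells above the diagonal only
        winner, degree = judgments[pair]
        scores[winner] += degree

    # Convert to percentages of the total score, which shows not just
    # the order of the items but how far apart they stand.
    total = sum(scores.values())
    for item in sorted(scores, key=scores.get, reverse=True):
        print(f"{item}: {scores[item]} ({100 * scores[item] / total:.0f}%)")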

4.7.3 The Method: Weighted Ranking


In Weighted Ranking, a specified set of criteria is used to rank items. The
analyst creates a table with items to be ranked listed across the top row and
criteria for ranking these items listed down the far-left column (see Figure
4.7b). There are a variety of valid ways to proceed with this ranking. A
simple version of Weighted Ranking has been selected for presentation
here because intelligence analysts are normally making subjective
judgments rather than dealing with hard numbers. As you read the
following steps, refer to Figure 4.7b:

✶ Create a table with one column for each item. At the head of each
column, write the name of an item or assign it a letter to save space.
✶ Add two more blank columns on the left side of this table. Count
the number of selection criteria, and then adjust the table so that it has
that number of rows plus three more, one at the top to list the items
and two at the bottom to show the raw scores and percentages for
each item. In the first column on the left side, starting with the second
row, write in all the selection criteria down the left side of the table.
There is some value in listing the criteria roughly in order of
importance, but that is not critical. Leave the bottom two rows blank
for the scores and percentages.
✶ Now work down the far-left column assigning weights to the
selection criteria based on their relative importance for judging the
ranking of the items. Depending upon how many criteria there are,
take either 10 points or 100 points and divide these points between
the selection criteria based on what is believed to be their relative
importance in ranking the items. In other words, decide what
percentage of the decision should be based on each of these criteria.
Be sure that the weights for all the selection criteria combined add up
to either 10 or 100, whichever is selected. Also be sure that all the
criteria are phrased in such a way that a higher weight is more
desirable.
✶ Work across the rows to write the criterion weight in the left side
of each cell.
✶ Next, work across the matrix one row (selection criterion) at a
time to evaluate the relative ability of each of the items to satisfy that
selection criteria. Use a ten-point rating scale, where 1 = low and 10 =
high, to rate each item separately. (Do not spread the ten points
proportionately across all the items as was done to assign weights to
the criteria.) Write this rating number after the criterion weight in the
cell for each item.
✶ Again, work across the matrix one row at a time to multiply the
criterion weight by the item rating for that criterion, and enter this
number for each cell, as shown in Figure 4.7b.
✶ Now add the columns for all the items. The result will be a ranking
of the items from highest to lowest score. To gain a better
understanding of the relative ranking of one item as compared with
another, convert these raw scores to percentages. To do this, first add
together all the scores in the “Totals” row to get a total number. Then
divide the score for each item by this total score to get a percentage
ranking for each item. All the percentages together must add up to
100 percent. In Figure 4.7b it is apparent that item B has the number
one ranking (with 20.3 percent), while item E has the lowest (with
13.2 percent).

Figure 4.7a Paired Comparison Matrix

Figure 4.7b Weighted Ranking Matrix
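
Because the weight-times-rating arithmetic is tedious to do by hand, a
short script can help. In the minimal Python sketch below, the criteria,
weights, and ratings are invented for illustration and are not those of
Figure 4.7b; the weights divide 100 points among the criteria, and each
item is rated against each criterion on a 1-to-10 scale:

    # Invented criterion weights: 100 points divided according to the
    # relative importance of each criterion in ranking the items.
    weights = {"strategic impact": 40, "feasibility": 35, "timeliness": 25}
    assert sum(weights.values()) == 100

    # Invented ratings of each item against each criterion (1-10 scale).
    ratings = {
        "A": {"strategic impact": 6, "feasibility": 8, "timeliness": 5},
        "B": {"strategic impact": 9, "feasibility": 7, "timeliness": 8},
        "C": {"strategic impact": 4, "feasibility": 5, "timeliness": 9},
    }

    # Multiply each criterion weight by the item's rating for that
    # criterion, then total the column for each item.
    raw = {item: sum(weights[c] * r[c] for c in weights)
           for item, r in ratings.items()}

    # Convert raw scores to percentages; together they total 100 percent.
    grand_total = sum(raw.values())
    for item in sorted(raw, key=raw.get, reverse=True):
        print(f"{item}: {raw[item]} ({100 * raw[item] / grand_total:.1f}%)")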

Potential Pitfalls
When any of these techniques is used to aggregate the opinions of a group
of analysts, the rankings provided by each group member are added
together and averaged. This means that the opinions of the outliers, whose
views are quite different from the others, are blended into the average. As
a result, the ranking does not show the range of different opinions that
might be present in a group. In some cases the identification of outliers
with a minority opinion can be of great value. Further research might show
that the outliers are correct.

Relationship to Other Techniques


Some form of ranking, scoring, or prioritizing is commonly used with
Structured Brainstorming, Virtual Brainstorming, Nominal Group
Technique, and the Decision Matrix, all of which generate ideas that
should be evaluated or prioritized. Applications of the Delphi Method may
also generate ideas from outside experts that need to be evaluated or
prioritized.

Origins of This Technique

Ranking, Scoring, and Prioritizing are common analytic processes in many
fields. All three forms of ranking described here are based largely on
Internet sources. For Ranked Voting, we referred to
http://en.wikipedia.org/wiki/Voting_system; for Paired Comparison,
http://www.mindtools.com; and for Weighted Ranking,
www.ifm.eng.cam.ac.uk/dstools/choosing/criter.html. We also reviewed
the Weighted Ranking process described in Morgan Jones’s The Thinker’s
Toolkit. This method is taught at some government agencies, but we found
it to be more complicated than necessary for intelligence applications that
typically use fuzzy, expert-generated data rather than hard numbers.

4.8 Matrices
A matrix is an analytic tool for sorting and organizing data in a manner
that facilitates comparison and analysis. It consists of a simple grid with as
many cells as needed for whatever problem is being analyzed.

Some analytic topics or problems that use a matrix occur so frequently that
they are handled in this book as separate techniques. For example:

✶ Analysis of Competing Hypotheses (chapter 7) uses a matrix to
analyze the relationships between relevant information and
hypotheses.
✶ Cross-Impact Matrix (chapter 5) uses a matrix to analyze the
interactions among variables or driving forces that will determine an
outcome. Such a Cross-Impact Matrix is part of Complexity Manager
(chapter 11).
✶ Gantt Charts (this chapter) use a matrix to analyze the
relationships between tasks to be accomplished and the time period
for those tasks.
✶ Decision Matrix (chapter 11) uses a matrix to analyze the
relationships between goals or preferences and decision options.
✶ The Impact Matrix (chapter 11) uses a matrix to chart the
impact a new policy or decision is likely to have on key players and
how best that can be managed.
✶ Ranking, Scoring, and Prioritizing (this chapter) uses a matrix
to record the relationships between pairs of items, and
Weighted Ranking uses a matrix to analyze the relationships between
items and criteria for judging those items.

When to Use It
Matrices are used to analyze the relationships between any two sets of
variables or the interrelationships between a single set of variables. Among
other things, they enable analysts to

✶ Compare one type of information with another.


✶ Compare pieces of information of the same type.
✶ Categorize information by type.
✶ Identify patterns in the information.
✶ Separate elements of a problem.

A matrix is such an easy and flexible tool to use that it should be one of
the first tools analysts think of when dealing with a large body of data.
One limiting factor in the use of matrices is that information must be
organized along only two dimensions.

Value Added
Matrices provide a visual representation of a complex set of data. By
presenting information visually, a matrix enables analysts to deal
effectively with more data than they could manage by juggling various
pieces of information in their head. The analytic problem is broken down
to component parts so that each part (that is, each cell in the matrix) can be
analyzed separately, while ideally maintaining the context of the problem
as a whole.

A matrix can also be used to establish an analytic framework for
understanding a problem, to suggest a more rigorous structure for
explaining a phenomenon, or to generate a more comprehensive set of
alternatives.

The Method
A matrix is a tool that can be used in many different ways and for many
different purposes. What matrices have in common is that each has a grid
with sufficient columns and rows for you to enter two sets of data that you
want to compare. Organize the category headings for each set of data in
some logical sequence before entering the headings for one set of data in
the top row and the headings for the other set in the far-left column. Then
enter the data in the appropriate cells.
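
In code, the same grid is simply a mapping keyed by a (row, column)
pair. A minimal sketch in Python, with invented headings and cell
entries as placeholders:

    # Invented category headings for the two sets of data to compare.
    rows = ["Group X", "Group Y"]
    cols = ["Capability", "Intent", "Recent activity"]

    # One cell for each pairing; enter data as it is collected.
    matrix = {(r, c): "-" for r in rows for c in cols}
    matrix[("Group X", "Capability")] = "high"
    matrix[("Group Y", "Intent")] = "hostile"

    # Print the grid so the two sets of variables can be compared.
    width = 17
    print(" " * 10 + "".join(c.ljust(width) for c in cols))
    for r in rows:
        print(r.ljust(10) + "".join(matrix[(r, c)].ljust(width) for c in cols))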

Figure 4.8, “Rethinking the Concept of National Security: A New
Ecology,” is an example of a complex matrix that not only organizes data
but also tells its own analytic story.5 It shows how the concept of national
security has evolved over recent decades—and suggests that the way we
define national security will continue to expand in the coming years. In
this matrix, threats to national security are arrayed along the vertical axis,
beginning at the top with the most traditional actor, the nation-state. At the
bottom end of the spectrum are systemic threats, such as infectious
diseases or threats that “have no face.” The top row of the matrix presents
the three primary mechanisms for dealing with threats: military force,
policing and monitoring, and collaboration. The cells in the matrix provide
historic examples of how the three different mechanisms of engagement
have been used to deal with the five different sources of threats. The top-
left cell (dark blue) presents the classic case of using military force to
resolve nation-state differences. In contrast, at the bottom-right corner
various actors are strongly motivated to collaborate with one another in
dealing with systemic threats, such as the outbreak of a pandemic disease.

Figure 4.8 Rethinking the Concept of National Security: A New Ecology

Source: 2009 Pherson Associates, LLC.

Classic definitions of national security focus on the potential for conflicts
involving nation-states. This is represented by the top-left cell, which lists
three military operations. In recent decades the threat has expanded to
include threats posed by subnational actors as well as terrorist and other
criminal organizations. Similarly, the use of peacekeeping and
international policing has become more common than in the past. This
shift to a broader use of the term “national security” is represented by the
other five cells (medium blue) in the top left of the matrix. The remaining
cells (light blue) to the right and at the bottom of the matrix represent how
the concept of national security is continuing to expand as the world
becomes increasingly globalized.

By using a matrix to present the expanding concept of national security in
this way, one sees that patterns relating to how key players collect
intelligence and share information vary along the two primary dimensions.
In the upper left of Figure 4.8, the practice of nation-states is to seek
intelligence on their adversaries, classify it, and protect it. As one moves
diagonally across the matrix to the lower right, however, this practice
reverses. In the lower right of this figure, information is usually available
from unclassified sources and the imperative is to disseminate it to
everyone as soon as possible. This dynamic can create serious tensions at
the midpoint, for example, when those working in the homeland security
arena must find ways to share sensitive national security information with
state and local law enforcement officers.

Origins of This Technique


The description of this basic and widely used technique is from Pherson
Associates, LLC, training materials. The national security matrix was
developed by Randy Pherson.

4.9 Venn Analysis
Venn Analysis is a visual technique that analysts can use to explore the
logic of arguments. Venn diagrams consist of overlapping circles and are
commonly used to teach set theory in mathematics. Each circle
encompasses a set of objects, and those objects that fall within the
intersection of the circles are members of both sets. A simple Venn
diagram typically shows the overlap between two sets. Circles can also be
nested within one another, showing that one thing is a subset of another in
a hierarchy. An example of a Venn diagram is provided in Figure 4.9a. It
describes the various components of critical thinking and how they
combine to produce synthesis and analysis.

Venn diagrams can be adapted to illustrate simple sets of relationships in
analytic arguments; this process is called Venn Analysis. When applied to
argumentation, it can reveal invalid reasoning. For example, in Figure
4.9b, the first diagram shows the flaw in the following argument: Cats are
mammals, dogs are mammals; therefore dogs are cats. As the diagram
shows, dogs are not a subset of cats, nor are cats a subset of dogs;
rather, the two are distinct subsets that both belong to the larger set of
mammals. Venn Analysis can also be used to validate the soundness of an
argument. For example, the second diagram in Figure 4.9b shows that the
argument that cats are mammals, tigers are cats, and therefore tigers are
mammals is valid.

Figure 4.9a Venn Diagram of Components of Critical Thinking

Figure 4.9b Venn Diagram of Invalid and Valid Arguments

When to Use It
The technique has also been used by intelligence analysts to organize their
thinking, look for gaps in logic or data, and examine the quality of an
argument. Analysts can use it to determine how to satisfy a narrow set of
conditions when multiple variables must be considered, for example, in
determining the best time to launch a satellite. It can also be used any time
an argument involves a portion of something that is being compared with
other portions or subsets.

One other application is to use Venn Analysis as a brainstorming tool to
see if new ideas or concepts are needed to occupy every subset in the
diagram. The technique can be done individually but works better when
done in groups.

Value Added
Venn Analysis helps analysts determine if they have put like things in the
right groups and correctly identified what belongs in each subset. The
technique makes it easier to visualize arguments, often revealing flaws in
reasoning or spurring analysts to examine their assumptions by making
them explicit when constructing the diagram. Examining the relationships
between the overlapping areas of a Venn diagram helps put things in
perspective and often prompts new research or deeper inquiry. Care should
be taken, however, not to make the diagrams too complex by adding levels
of precision that may not be justified.

The Method
Venn Analysis is an agile tool that can be applied in various ways. It is a
simple process but can spark prolonged debate. The following steps show
how Venn Analysis can be used to examine the validity of an argument:

✶ Represent the elements of a statement or argument as circles. Use
large circles for large concepts or quantities.
✶ Examine the boundaries of the circles. Are they well defined or
“fuzzy”? How are these things determined, measured, or counted?
✶ Consider the impact of time. Are the circles growing or shrinking?
This is especially important when looking at trends.
✶ Check the relative size and relationships between the circles. Are
any assumptions being made?
✶ Ask whether there are elements within the circles or larger circles
that should be added.
✶ Examine and compare overlapping areas. What is found in each
zone? What is significant about the size of each zone? Are these sizes
likely to change over time?
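
Set operations give a quick way to test such logic. A minimal sketch in
Python, reconstructing the cats, dogs, and tigers arguments of Figure
4.9b with toy membership sets:

    # Toy membership sets standing in for the circles of the diagram.
    mammals = {"cat", "tiger", "dog", "whale"}
    cats = {"cat", "tiger"}
    dogs = {"dog"}
    tigers = {"tiger"}

    # Valid argument: cats are mammals, tigers are cats, therefore
    # tigers are mammals (the subset relation chains through).
    assert cats <= mammals and tigers <= cats
    assert tigers <= mammals

    # Invalid argument: cats are mammals, dogs are mammals, therefore
    # dogs are cats. Sharing a superset proves no such thing.
    assert cats <= mammals and dogs <= mammals
    print("dogs are cats?", dogs <= cats)            # False: the flaw exposed
    print("overlap of dogs and cats:", dogs & cats)  # empty set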

Example
Consider this analytic judgment: “The substantial investments state-owned
companies in the fictional country of Zambria are making in major U.S.
port infrastructure projects pose a threat to U.S. national security.”

Using Venn Analysis, the analyst would first draw a large circle to
represent the autocratic state of Zambria, a smaller circle within it
representing state-owned enterprises, and small circles within that circle to
represent individual state-owned companies. A mapping of all Zambrian
corporations would include more circles to represent other private-sector
companies as well as a few that are partially state-owned (see Figure 4.9c).
Simply constructing this diagram raises several questions, such as: What is
the percentage of companies that are state-owned, partially state-owned,
and in the private sector? How does this impact the size of the circles?
What is the definition of “state-owned”? What does “state-owned” imply
politically and economically in a nondemocratic state like Zambria? Do
these distinctions even matter?

Figure 4.9c Venn Diagram of Zambrian Corporations

The diagram should also prompt the analyst to examine several
assumptions implicit in the questions:

✶ Does state-owned equate to state-controlled?


✶ Should we assume that state-owned enterprises would act contrary
to U.S. interests?
✶ Would private-sector companies be less hostile or even supportive
of free enterprise and U.S. interests?

Each of these assumptions, made explicit by the Venn diagram, can now
be challenged. For example, the original statement may have
oversimplified the problem. Perhaps the threat may be broader than just
state-owned enterprises. Could Zambria exert influence through its private
companies or private-public partnerships?

The original statement also refers to foreign investment in U.S. port
infrastructure projects. If this is represented by a circle, it would be
enveloped by a larger circle showing U.S. business involvement in port
infrastructure improvement projects in foreign countries. An overlapping
circle would show investments by Zambrian businesses in U.S. domestic
and foreign port infrastructure projects, as shown in Figure 4.9d.

Figure 4.9d, however, raises several additional questions:

✶ First, do we know the relative size of current port infrastructure
improvement projects in the United States as a proportion of all
investments being made globally? What percentage of U.S.
companies are working largely overseas, functioning as global
enterprises, or operating only in the United States?
✶ Second, can we estimate the overall amount of Zambrian
investment and where it is directed? If Zambrian state-owned
companies are investing in U.S. companies, are the investments
related only to companies doing work in the United States or also
overseas?
✶ Third, what would be the relative size of all the circles drawn in
the Venn diagram in terms of all port infrastructure improvement
projects globally? Can we assume Zambrian investors are the biggest
investors in U.S. port infrastructure projects? Do they invest the
most? Who would be their strongest competitors? How much
influence does their share of the investment pie give them?

Many analytic arguments highlight differences and trends, but it is


important to put these differences into perspective before becoming too
engaged in the argument. By using Venn Analysis to examine the
relationships between the overlapping areas of the Venn diagram, analysts
have a more rigorous base from which to organize and develop their
arguments. In this case, if the relative proportions are correct, the Venn
Analysis would reveal a more important national security concern:
Zambria’s dominant position as an investor in the bulk of U.S. port
infrastructure improvement projects across the globe could give it
significant economic as well as military advantage.

Relationship to Other Techniques


Venn Analysis is similar to other decomposition and visualization
techniques discussed in this chapter. It can also be used as a graphical way
to conduct a Key Assumptions Check or assess whether alternative
hypotheses are mutually exclusive.

Figure 4.9d Zambrian Investments in Global Port Infrastructure Projects

Origins of This Technique
This application of Venn diagrams as a tool for intelligence analysis was
developed by John Pyrik for the Government of Canada. The materials are
used with the permission of the Canadian government. For more
discussion of this concept, see Peter Suber, Earlham College, “Symbolic
Logic,” at http://legacy.earlham.edu/~peters/courses/log/loghome.htm; and
Lee Archie, Lander University, “Introduction to Logic,” at
http://philosophy.lander.edu/logic.

4.10 Network Analysis


Network Analysis is the review, compilation, and interpretation of data to
determine the presence of associations among individuals, groups,
businesses, or other entities; the meaning of those associations to the
people involved; and the degrees and ways in which those associations can
be strengthened or weakened.6 It is the best method available to help
analysts understand and identify opportunities to influence the behavior of
a set of actors about whom information is sparse. In the fields of law
enforcement and national security, information used in Network Analysis
usually comes from informants or from physical or technical surveillance.
These networks are most often clandestine and therefore not visible to
open source collectors. Although software has been developed to help
collect, sort, and map data, it is not essential to many of these analytic
tasks. Social Network Analysis, which involves measuring associations,
does require software.

Analysis of networks is broken down into three stages, and analysts can
stop at the stage that answers their questions:

✶ Network Charting is the process of and associated techniques for
identifying people, groups, things, places, and events of interest
(nodes) and drawing connecting lines (links) between them on the
basis of various types of association. The product is often referred to
as a Link Chart.
✶ Network Analysis is the process and techniques that take the chart
and strive to make sense of the data represented by the chart by
grouping associations (sorting) and identifying patterns in and among
those groups.
✶ Social Network Analysis (SNA) is the mathematical measuring of
variables related to the distance between nodes and the types of
associations in order to derive even more meaning from the chart,
especially about the degree and type of influence one node has on
another.

When to Use It
Network Analysis is used extensively in law enforcement,
counterterrorism analysis, and analysis of transnational issues such as
narcotics and weapons proliferation to identify and monitor individuals
who may be involved in illegal activity. Network Charting (or Link
Charting) is used literally to “connect the dots” between people, groups, or
other entities of intelligence or criminal interest. Network Analysis puts
these dots in context, and Social Network Analysis helps identify hidden
associations and degrees of influence between the dots.

Value Added

Network Analysis has proved to be highly effective in helping analysts
identify and understand patterns of organization, authority,
communication, travel, financial transactions, or other interactions among
people or groups that are not apparent from isolated pieces of information.
It often identifies key leaders, information brokers, or sources of funding.
It can identify additional individuals or groups who need to be
investigated. If done over time, it can help spot change within the network.
Indicators monitored over time may signal preparations for offensive
action by the network or may reveal opportunities for disrupting the
network.

SNA software helps analysts accomplish these tasks by facilitating the
retrieval, charting, and storage of large amounts of information. Software
is not necessary for this task, but it is enormously helpful. The SNA
software included in many network analysis packages is essential for
measuring associations.

The Method
Analysis of networks attempts to answer the question, “Who is related to
whom and what is the nature of their relationship and role in the network?”
The basic network analysis software identifies key nodes and shows the
links between them. SNA software measures the frequency of flow
between links and explores the significance of key attributes of the nodes.
We know of no software that does the intermediate task of grouping nodes
into meaningful clusters, though algorithms do exist and are used by
individual analysts. In all cases, however, you must interpret what is
represented, looking at the chart to see how it reflects organizational
structure, modes of operation, and patterns of behavior.

Network Charting: The key to good network analysis is to begin with a
good chart. An example of such a chart is Figure 4.10a, which shows the
terrorist network behind the attacks of September 11, 2001. It was
compiled by networks researcher Valdis E. Krebs using data available
from news sources on the Internet in early 2002.

There are tried and true methods for making good charts that allow the
analyst to save time, avoid unnecessary confusion, and arrive more quickly
at insights. Network charting usually involves the following steps:

Figure 4.10a Social Network Analysis: The September 11 Hijackers

Source: Valdis Krebs, Figure 3, “Connecting the Dots: Tracking Two
Identified Terrorists,” Orgnet.com; www.orgnet.com/tnet.html.
Reproduced with permission of the author.

✶ Identify at least one reliable source or stream of data to serve as a
beginning point.
✶ Identify, combine, or separate nodes within this reporting.
✶ List each node in a database, association matrix, or software
program.
✶ Identify interactions among individuals or groups.
✶ List interactions by type in a database, association matrix, or
software program.
✶ Identify each node and interaction by some criterion that is
meaningful to your analysis. These criteria often include frequency of
contact, type of contact, type of activity, and source of information.
✶ Draw the connections between nodes—connect the dots—on a
chart by hand, using a computer drawing tool, or using Network
Analysis software. If you are not using software, begin with the nodes
that are central to your intelligence question. Make the map more
informative by presenting each criterion in a different color or style or
by using icons or pictures. A very complex chart may use all of these
elements on the same link or node. The need for additional elements
often arises when the intelligence question is murky (for example,
when “I know something bad is going on, but I don’t know what”);
when the chart is being used to answer multiple questions; or when a
chart is maintained over a long period of time.
✶ Work out from the central nodes, adding links and nodes until you
run out of information from the good sources.
✶ Add nodes and links from other sources, constantly checking them
against the information you already have. Follow all leads, whether
they are people, groups, things, or events, and regardless of source.
Make note of the sources.
✶ Stop in these cases: When you run out of information, when all of
the new links are dead ends, when all of the new links begin to turn in
on one another like a spider’s web, or when you run out of time.
✶ Update the chart and supporting documents regularly as new
information becomes available, or as you have time. Just a few
minutes a day will pay enormous dividends.
✶ Rearrange the nodes and links so that the links cross over one
another as little as possible. This is easier to accomplish if you are
using software. Many software packages can rearrange the nodes and
links in various ways.
✶ Cluster the nodes. Do this by looking for “dense” areas of the chart
and relatively “empty” areas. Draw shapes around the dense areas.
Use a variety of shapes, colors, and line styles to denote different
types of clusters, your relative confidence in the cluster, or any other
criterion you deem important.
✶ Cluster the clusters, if you can, using the same method.
✶ Label each cluster according to the common denominator among
the nodes it contains. In doing this you will identify groups, events,
activities, and/or key locations. If you have in mind a model for
groups or activities, you may be able to identify gaps in the chart by
what is or is not present that relates to the model.
✶ Look for “cliques”—a group of nodes in which every node is
connected to every other node, though not to many nodes outside the
group. These groupings often look like stars or pentagons. In the
intelligence world, they often turn out to be clandestine cells (a code
sketch for finding such cliques follows this list).
✶ Look in the empty spaces for nodes or links that connect two
clusters. Highlight these nodes with shapes or colors. These nodes are
brokers, facilitators, leaders, advisers, media, or some other key
connection that bears watching. They are also points where the
network is susceptible to disruption.
✶ Chart the flow of activities between nodes and clusters. You may
want to use arrows and time stamps. Some software applications will
allow you to display dynamically how the chart has changed over
time.
✶ Analyze this flow. Does it always go in one direction or in
multiple directions? Are the same or different nodes involved? How
many different flows are there? What are the pathways? By asking
these questions, you can often identify activities, including
indications of preparation for offensive action and lines of authority.
You can also use this knowledge to assess the resiliency of the
network. If one node or pathway were removed, would there be
alternatives already built in?
✶ Continually update and revise as nodes or links change.
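
For charts kept in machine-readable form, the clique and clustering
steps above can be approximated with open-source tools. The sketch
below is illustrative rather than a prescribed part of the technique; it
uses the Python networkx library on an invented link chart.

    # A sketch on an invented link chart: networkx can surface the cliques
    # and dense clusters described in the charting steps above.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    chart = nx.Graph()
    chart.add_edges_from([
        ("A", "B"), ("A", "C"), ("B", "C"),  # one possible cell
        ("C", "D"),                          # D may broker between clusters
        ("D", "E"), ("D", "F"), ("E", "F"),  # a second possible cell
    ])

    # Cliques: groups in which every node connects to every other node.
    print([c for c in nx.find_cliques(chart) if len(c) >= 3])

    # Clusters: dense areas of the chart, grouped automatically.
    print([sorted(g) for g in greedy_modularity_communities(chart)])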

Figure 4.10b is a modified version of the 9/11 hijacker network depicted in
Figure 4.10a. It has been marked to identify the different types of clusters
and nodes discussed under Network Analysis. Cells are seen as star-like or
pentagon-like shapes, potential cells are circled, and the large diamond
surrounds the cluster of cells. Brokers are shown as nodes surrounded by
small pentagons. Note the broker in the center. This node has connections
to all of the other brokers. This is a senior leader: Al-Qaeda’s former head
of operations in Europe, Imad Eddin Barakat Yarkas.

Figure 4.10b Social Network Analysis: September 11 Hijacker Key Nodes

Source: Based on Valdis Krebs, Figure 3, “Connecting the Dots:
Tracking Two Identified Terrorists,” Orgnet.com;
www.orgnet.com/tnet.html. Reproduced with permission of the
author. With changes by Cynthia Storer.

Figure 4.10c Social Network Analysis

Source: 2009 Pherson Associates, LLC.

Social Network Analysis requires a specialized software application. It is
important, however, for analysts to familiarize themselves with the basic
process and measures and the specialized vocabulary used to describe
position and function within the network. The following three types of
centrality are illustrated in Figure 4.10c:

✶ Degree centrality: This is measured by the number of direct
connections that a node has with other nodes. In the network depicted
in Figure 4.10c, Deborah has the most direct connections. She is a
“connector” or a “hub” in this network.

✶ Betweenness centrality: Helen has fewer direct connections than
does Deborah, but she plays a vital role as a “broker” in the network.
Without her, Ira and Janice would be cut off from the rest of the
network. A node with high betweenness has great influence over what
flows—or does not flow—through the network.
✶ Closeness centrality: Frank and Gary have fewer connections
than does Deborah, yet the pattern of their direct and indirect ties
allows them to access all the nodes in the network more quickly than
anyone else. They are in the best position to monitor all the
information that flows through the network.
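
SNA packages report these measures automatically, but the calculations
are easy to reproduce. The sketch below uses the Python networkx
library on an invented network; the edges are for demonstration only and
are not the actual network shown in Figure 4.10c.

    # A sketch on an invented network (not the actual Figure 4.10c data):
    # computing the three centrality measures described above.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("Deborah", "Andre"), ("Deborah", "Carol"), ("Deborah", "Frank"),
        ("Deborah", "Gary"), ("Deborah", "Helen"), ("Frank", "Gary"),
        ("Helen", "Ira"), ("Ira", "Janice"),
    ])

    print(nx.degree_centrality(G))       # hubs or "connectors"
    print(nx.betweenness_centrality(G))  # "brokers" such as Helen
    print(nx.closeness_centrality(G))    # who reaches the rest most quickly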

Potential Pitfalls
This method is extremely dependent upon having at least one good source
of information. It is hard to know when information is missing, and the
boundaries of a network may be fuzzy and constantly shifting, which
makes it difficult to determine whom to include. Because networks change
over time, information can quickly become outdated. You can be misled if
you do not continually question the data being entered, update the chart
regularly, look for gaps, and consider their potential significance.

You should never rely blindly on the SNA software but strive to
understand how the application being used works. As is the case with any
software, different applications measure different things in different ways,
and the devil is always in the details.

Origins of This Technique


This is an old technique that has been transformed by the development of
sophisticated software programs for organizing and analyzing large
databases. Each of the following sources has made significant
contributions to the description of this technique: Valdis E. Krebs, “Social
Network Analysis, A Brief Introduction,” www.orgnet.com/sna.html;
Krebs, “Uncloaking Terrorist Networks,” First Monday, 7, no. 4 (April 1,
2002),
http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/941/863
Robert A. Hanneman, “Introduction to Social Network Methods,”
Department of Sociology, University of California Riverside,
http://faculty.ucr.edu/~hanneman/nettext/C1_Social_Network_Data.html#Populations;
Marilyn B. Peterson, Defense Intelligence Agency, “Association
Analysis,” undated draft, used with permission of the author; Cynthia
Storer and Averill Farrelly, Pherson Associates, LLC; Pherson Associates
training materials.

4.11 Mind Maps and Concept Maps


Mind Maps and Concept Maps are visual representations of how an
individual or a group thinks about a topic of interest. Such a diagram has
two basic elements: the ideas that are judged relevant to whatever topic
one is thinking about, and the lines that show and briefly describe the
connections among these ideas. The two dominant approaches to creating
such diagrams are Mind Mapping and Concept Mapping (see Figures
4.11a and 4.11b). Other approaches include cognitive, causal, and
influence mapping as well as idea mapping. There are many commercial
and freely available software products that support this mapping function
and that are known by many different names.7 Diverse groups within the
intelligence community are using various methods for creating meaningful
diagrams.

When to Use It
Whenever you think about a problem, develop a plan, or consider making
even a very simple decision, you are putting a series of thoughts together.
That series of thoughts can be represented visually with words or images
connected by lines that represent the nature of the relationships among
them. Any thinking for any purpose, whether about a personal decision or
analysis of an intelligence issue, can be diagrammed in this manner. Such
mapping is usually done for either of two purposes:

✶ By an individual or a group to help sort out their own thinking and
achieve a shared understanding of key concepts. By getting the ideas
out of their heads and down on paper or a computer screen, the
individual or group is better able to remember, critique, and modify
the ideas.
✶ To facilitate the communication to others of a complex set of
relationships. Examples are an intelligence report, a briefing, a school
classroom, or a graphic prepared by an analyst for prosecutors to use
in a jury trial.

Mind Maps can also be used to help analysts brainstorm the various
elements or key players involved in an issue and how they might be
connected. The technique stimulates analysts to ask questions such as: Are
there additional categories or “branches on the tree” that we have not
considered? Are there elements of this process or applications of this
technique that we have failed to capture? Does the Mind Map suggest a
different context for understanding the problem?

Figure 4.11a Concept Map of Concept Mapping

Source: R. R. Hoffman and J. D. Novak (Pensacola, FL: Institute for
Human and Machine Cognition, 2009). Reproduced with permission
of the author.

Figure 4.11b Mind Map of Mind Mapping

Source: Illumine Training, “Mind Map,” www.mind-mapping.co.uk.
Reproduced with permission of Illumine Training. With changes by
Randolph H. Pherson.

Value Added
Mapping facilitates the presentation or discussion of a complex body of
information. It is useful because it condenses a considerable amount of
information into a form that can be taken in at a glance.
picture of the basic structure of a complex problem helps analysts be as
clear as possible in stating precisely what they want to express.
Diagramming skills enable analysts to stretch their analytic capabilities.

Mind Maps and Concept Maps vary greatly in size and complexity
depending on how and why they are being used. When used for structured
analysis, a Mind Map or a Concept Map is typically larger, sometimes
much larger, than the examples shown in this chapter. Many are of modest
size and complexity. Like any model, such a map is a simplification of
reality. It does not necessarily try to capture all the nuances of a complex
system of relationships. Instead, it provides, for example, an outline
picture of the overall structure of a system of variables, showing which
variables are relevant to a given problem or issue and how they are related
to one another. Once you have this information, you are well on the way
toward knowing what further research needs to be done and perhaps even
how to organize the written report. For some projects, the diagram can be
the analytical product or a key part of it.

When a Mind Map or Concept Map is created as a group project, its
principal value may be the process the group went through to construct the
map, not the map itself. When a group gets together to identify all the parts
of a complex system and figure out how they relate to one another, the
process often elicits new ideas, clarifies concepts, identifies relevant
bodies of knowledge, and brings to the surface—and often resolves—
differences of opinion at an early stage of a project before anything is put
in writing. Although such a map may be a bare skeleton, the discussion
will have revealed a great deal more information than can be shown in a
single map. The process also gives the group a shared experience and a
common basis for further discussion. It ensures that this initial effort is
truly a group effort at defining the problem, not something done by one
member of the group and then presented after the fact for coordination by
the others. Some mapping software supports virtual collaboration so that
analysts at different locations can work on a map simultaneously and see
one another’s work as it is done.8

After having participated in this group process to define the problem, the
group should be better able to identify what further research needs to be
done and able to parcel out additional work among the best-qualified
members of the group. The group should also be better able to prepare a
report that represents as fully as possible the collective wisdom of the
group as a whole.

Analysts and students also find that Mind Map and Concept Map software
products are useful tools for taking notes during an oral briefing or lecture.
By developing a map as the lecture proceeds, the analyst or student can
chart the train of logic and capture all the data presented in a coherent map
that includes all the key elements of the subject.

The Method
Start a Mind Map or Concept Map with a focal question that defines what
is to be included. Then follow these steps:

✶ Make a list of concepts that relate in some way to the focal
question.
✶ Starting with the first dozen or so concepts, sort them into
groupings within the diagram space in some logical manner. These
groups may be based on things they have in common or on their
status as either direct or indirect causes of the matter being analyzed.
✶ Begin making links among related concepts, starting with the most
general concepts. Use lines with arrows to show the direction of the
relationship. The arrows may go in either direction or in both
directions.
✶ Choose the most appropriate words for describing the nature of
each relationship. The lines might be labeled with words such as
“causes,” “influences,” “leads to,” “results in,” “is required by,” or
“contributes to.” Selecting good linking phrases is often the most
difficult step.
✶ While building all the links among the concepts and the focal
question, look for and enter crosslinks among concepts.
✶ Don’t be surprised if, as the map develops, you discover that you
are now diagramming on a different focus question from the one with
which you started. This can be a good thing. The purpose of a focus
question is not to lock down the topic but to get the process going.
✶ Finally, reposition, refine, and expand the map structure as
appropriate.
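
When mapping software is not at hand, even a simple labeled directed
graph captures the essentials of these steps. The sketch below is an
illustration rather than part of the technique itself; the concepts and
linking phrases are invented, and it uses the Python networkx library.

    # A sketch with invented concepts: a Concept Map stored as a directed
    # graph whose edge labels are the linking phrases chosen above.
    import networkx as nx

    cmap = nx.DiGraph()
    cmap.add_edge("economic sanctions", "inflation", label="contributes to")
    cmap.add_edge("inflation", "public unrest", label="leads to")
    cmap.add_edge("public unrest", "regime crackdown", label="results in")
    cmap.add_edge("regime crackdown", "public unrest", label="influences")  # a crosslink

    for a, b, data in cmap.edges(data=True):
        print(f'{a} --[{data["label"]}]--> {b}')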

Mind Mapping and Concept Mapping can be done manually, but mapping
software is strongly recommended; it is much easier and faster to move
concepts and links around on a computer screen than it is to do so
manually. There are many different software programs for various types of
mapping, and each has strengths and weaknesses. These products are
usually variations of the main contenders, Mind Mapping and Concept
Mapping. The two leading techniques differ in the following ways:

✶ Mind Mapping has only one main or central idea, and all other
ideas branch off from it radially in all directions. The central idea is
preferably shown as an image rather than in words, and images are
used throughout the map. “Around the central word you draw the 5 or
10 main ideas that relate to that word. You then take each of those
child words and again draw the 5 or 10 main ideas that relate to each
of those words.”9
✶ A Concept Map has a more flexible form. It can have multiple
hubs and clusters. It can also be designed around a central idea, but it
does not have to be and often is not designed that way. It does not
normally use images. A Concept Map is usually shown as a network,
although it too can be shown as a hierarchical structure like Mind
Mapping when that is appropriate. Concept Maps can be very
complex and are often meant to be viewed on a large-format screen.
✶ Mind Mapping was originally developed as a fast and efficient
way to take notes during briefings and lectures. Concept Mapping
was originally developed as a means of mapping students’ emerging
knowledge about science; it has a foundation in the constructivist
theory of learning, which emphasizes that “meaningful learning
involves the assimilation of new concepts and propositions into
existing cognitive structures.”10 Concept Maps are frequently used as
teaching tools. Most recently, they have come to be used to develop
“knowledge models,” in which large sets of complex Concept Maps
are created and hyperlinked together to represent analyses of complex
domains or problems.

Relationship to Other Techniques


Mind and Concept Mapping can be used to present visually the results
generated by a number of other techniques, especially the various types of
brainstorming and/or Cross-Impact Analysis, both described in chapter 5.

Origins of This Technique


Mapping is an old technique that has been given new life by the
development of software that makes it both more useful and easier to use.
Information on Concept Mapping is available at
http://cmap.ihmc.us/conceptmap.html. For information on Mind Mapping,
see Tony and Barry Buzan, The Mind Map Book (Essex, England: BBC
Active, 2006). For information on mapping in general, see Eric Hanson,
“A Survey of Concept Mapping Tools,”
http://datalab.cs.pdx.edu/sidewalk/pub/survey.of.concept.maps; and
Banxia Software, “What’s in a Name? Cognitive Mapping, Mind
Mapping, Concept Mapping,”
www.banxia.com/dexplore/whatsinaname.html. (No longer available
online)

4.12 Process Maps and Gantt Charts


Process Mapping is an umbrella term that covers a variety of procedures
for identifying and depicting visually each step in a complex procedure. It
includes flow charts of various types (Activity Flow Charts, Commodity
Flow Charts, Causal Flow Charts), Relationship Maps, and Value Stream
Maps commonly used to assess and plan improvements for business and
industrial processes. A Gantt Chart is a specific type of Process Map that
was developed to facilitate the planning, scheduling, and management of
complex industrial projects.

When to Use It
Process Maps, including Gantt Charts, are used by intelligence analysts to
track, understand, and monitor the progress of activities of intelligence
interest being undertaken by a foreign government, a criminal or terrorist
group, or any other nonstate actor. For example, a Process Map can be
used to:

✶ Monitor progress in developing a new weapons system,


preparations for a major military action, or the execution of any other
major plan that involves a sequence of observable steps.
✶ Identify and describe the modus operandi of a criminal or terrorist
group, including the preparatory steps that such a group typically
takes prior to a major action.
✶ Describe and monitor the process of radicalization by which a
normal youth may be transformed over time into a terrorist.
✶ Describe a sourcing chain that traces how information or a report
has flowed through an intelligence community’s collection and
processing system until it arrived at the desk of the analyst—and how
it might have been corrupted by a bad translation or an
overgeneralization when the actual event or initial statement was
inaccurately summarized along the way.

Value Added

The process of constructing a Process Map or a Gantt Chart helps analysts
think clearly about what someone else needs to do to complete a complex
project. When a complex plan or process is understood well enough to be
diagrammed or charted, analysts can then answer questions such as the
following: What are they doing? How far along are they? What do they
still need to do? What resources will they need to do it? How much time
do we have before they have this capability? Is there any vulnerable point
in this process where they can be stopped or slowed down?

The Process Map or Gantt Chart is a visual aid for communicating this
information to the customer. If sufficient information can be obtained, the
analyst’s understanding of the process will lead to a set of indicators that
can be used to monitor the status of an ongoing plan or project.

The Method
A Process Map and a Gantt Chart differ substantially in their appearance.
In a Process Map, the steps in the process are diagrammed sequentially
with various symbols representing starting and end points, decisions, and
actions connected with arrows. Diagrams can be created with readily
available software such as Microsoft Visio.

A Gantt Chart is a matrix that lists tasks in a project or steps in a process
down the far-left column, with the estimated time period for
accomplishing these tasks or steps in weeks, months, or years across the
top row. For each task or step, a horizontal line or bar shows the beginning
and ending of the time period for that task or step. Professionals working
with Gantt Charts use tools such as Microsoft Project to draw the chart.
Gantt Charts can also be made with Microsoft Excel or by hand on graph
paper.

Detailed guidance on creating a Process Map or Gantt Chart is readily
available from the sources described under “Origins of This Technique.”

Example
The Intelligence Community has considerable experience monitoring
terrorist groups. This example describes how an analyst would go about
creating a Gantt Chart of a generic terrorist attack–planning process (see
Figure 4.12). The analyst starts by making a list of all the tasks that
terrorists must complete, estimating the schedule for when each task will
be started and finished, and determining what resources are needed for
each task. Some tasks need to be completed in a sequence, with each task
being more or less completed before the next activity can begin. These are
called sequential, or linear, activities. Other activities are not dependent
upon completion of any other tasks. These may be done at any time before
or after a particular stage is reached. These are called nondependent, or
parallel, tasks.

Note whether each terrorist task to be performed is sequential or parallel. It
is this sequencing of dependent and nondependent activities that is critical
in determining how long any particular project or process will take. The
more activities that can be worked in parallel, the greater the chances of a
project being completed on time. The more tasks that must be done
sequentially, the greater the chances of a single bottleneck delaying the
entire project.
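
The same sequencing logic can be computed directly. The sketch below
uses invented task names and durations, not data from any real case, to
show how the earliest completion time that a Gantt Chart displays
graphically falls out of the dependency structure.

    # A sketch with invented tasks: the earliest finish time of a plan is
    # the length of its critical path through the sequential tasks.
    durations = {"recruit": 4, "train": 3, "raise funds": 2,
                 "surveil target": 5, "attack": 1}
    depends_on = {"recruit": [], "raise funds": [], "surveil target": [],
                  "train": ["recruit"],  # sequential: must follow recruiting
                  "attack": ["train", "raise funds", "surveil target"]}

    finish = {}
    def earliest_finish(task):
        # Finish = own duration plus the latest-finishing prerequisite;
        # nondependent (parallel) tasks contribute only their own time.
        if task not in finish:
            finish[task] = durations[task] + max(
                (earliest_finish(d) for d in depends_on[task]), default=0)
        return finish[task]

    print(earliest_finish("attack"))  # 8 time units along the critical path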

Figure 4.12 Gantt Chart of Terrorist Attack Planning

Source: Based on Gantt Chart by Richard Damelio, The Basics of
Process Mapping (Florence, KY: Productivity Press, 2006).
www.ganttchart.com.

When entering tasks into the Gantt Chart, enter the sequential tasks first in
the required sequence. Ensure that they don’t start until the tasks on which
they depend have been completed. Then enter the parallel tasks in an
appropriate time frame toward the bottom of the matrix so that they do not
interfere with the sequential tasks on the critical path to completion of the
project.

Gantt Charts that map a generic process can also be used to track data
about a more specific process as it is received. For example, the Gantt
Chart depicted in Figure 4.12 can be used as a template over which new
information about a specific group’s activities could be layered by using a
different color or line type. Layering in the specific data allows an analyst
to compare what is expected with the actual data. The chart can then be
used to identify and narrow gaps or anomalies in the data and even to
identify and challenge assumptions about what is expected or what is
happening. The analytic significance of considering such possibilities can
mean the difference between anticipating an attack and wrongly assuming
that a lack of activity means a lack of intent. The matrix illuminates the
gap and prompts the analyst to consider various explanations.

Origins of This Technique


Development of Gantt Charts was considered a revolutionary advance in
the early 1900s, when they were first used to plan industrial processes;
they are still in common use today.
Information on how to create and use Gantt Charts is readily available at
www.ganttchart.com. Information on how to use other types of Process
Mapping is available in Richard Damelio, The Basics of Process Mapping
(Florence, KY: Productivity Press, 2006).

1. Atul Gawande, “The Checklist,” New Yorker, December 10, 2007,
www.newyorker.com/reporting/2007/12/10/071210fa_fact_gawande. Also
see Marshall Goldsmith, “Preparing Your Professional Checklist,”
Business Week, January 15, 2008,
www.businessweek.com/managing/content/jan2008/ca20080115_768325.htm?
campaign_id=rss_topStories.

2. For a discussion of how best to get started when writing a paper, see
“How Should I Conceptualize My Product?” in Katherine Hibbs Pherson
and Randolph H. Pherson, Critical Thinking for Strategic Intelligence
(Washington, DC: CQ Press, 2013), 35–41.

3. For a more detailed discussion of conceptualization and the AIMS
process, see Katherine Hibbs Pherson and Randolph H. Pherson, Critical
Thinking for Strategic Intelligence (Washington, DC: CQ Press, 2013),
35–41.

4. For a more detailed discussion of customer needs, see Katherine Hibbs
Pherson and Randolph H. Pherson, Critical Thinking for Strategic
Intelligence (Washington, DC: CQ Press, 2013), 3–12.

5. A fuller discussion of the matrix can be found in Randolph H. Pherson,
“Rethinking National Security in a Globalizing World: A New Ecology,”
Revista Romănă De Studii De Intelligence (Romanian Journal of
Intelligence Studies) 1–2 (December 2009).

6. “Association Analysis,” undated draft provided to the authors by
Marilyn B. Peterson, Defense Intelligence Agency.

7. See www.mind-mapping.org for a comprehensive compendium of
information on all types of software that supports knowledge management
and information organization in graphic form. Many of these software
products are available at no cost.

8. Tanja Keller, Sigmar-Olaf Tergan, and John Coffey, “Concept Maps
Used as a ‘Knowledge and Information Awareness’ Tool for Supporting
Collaborative Problem Solving in Distributed Groups,” Proceedings of the
Second International Conference on Concept Mapping, San José, Costa
Rica, September 5–8, 2006.

9. Tony Buzan, The Mind Map Book, 2nd ed. (London: BBC Books,
1995).

10. Joseph D. Novak and Alberto J. Canas, The Theory Underlying
Concept Maps and How to Construct and Use Them, Technical Report
IHMC Cmap Tools 2006–01 (Pensacola, FL: Florida Institute for Human
and Machine Cognition, 2006).

5 Idea Generation

5.1 Structured Brainstorming
5.2 Virtual Brainstorming
5.3 Nominal Group Technique
5.4 Starbursting
5.5 Cross-Impact Matrix
5.6 Morphological Analysis
5.7 Quadrant Crunching™

New ideas, and the combination of old ideas in new ways, are essential
elements of effective analysis. Some structured techniques are specifically
intended for the purpose of eliciting or generating ideas at the very early
stage of a project, and they are the topic of this chapter.

In one sense, all structured analytic techniques are idea generation
techniques when used in a collaborative group process. The structured
process helps identify differences in perspective and different assumptions
among team or group members, and thus stimulates learning and new
ideas. Idea generation techniques are effective in combating many
cognitive biases, particularly groupthink, premature closure, and mental
shotgun, which Daniel Kahneman says occurs when analysts lack
precision while making continuous assessments to supply quick and easy
answers to difficult problems.1 In some instances, the techniques can be
used to check the tendency to assume an event is more certain to occur
than is actually the case, and to continue focusing attention on things that
were initially deemed significant but do not remain so as more information
becomes available. A group or team using a structured analytic technique
is usually more effective than a single individual in generating new ideas
and in synthesizing divergent ideas.

“Imagination is more important than knowledge. For while
knowledge defines all we currently know and understand,
imagination points to all we might yet discover and create.”

—Albert Einstein

Overview of Techniques
Structured Brainstorming
is not a group of colleagues just sitting around talking about a problem.
Rather, it is a group process that follows specific rules and procedures. It is
often used at the beginning of a project to identify a list of relevant
variables, driving forces, a full range of hypotheses, key players or
stakeholders, available evidence or sources of information, potential
solutions to a problem, or potential outcomes or scenarios—or, in law
enforcement, potential suspects or avenues of investigation. It requires
little training, and it is one of the most frequently used structured
techniques in the U.S. Intelligence Community. It is most helpful when
paired with a mechanism such as a wiki that allows analysts to capture and
record the results of their brainstorming. The wiki also allows participants
to refine or make additions to the brainstorming results even after the face-
to-face session has ended.

Virtual Brainstorming
is a way to use Structured Brainstorming when participants are in different
geographic locations. The absence of face-to-face contact has
disadvantages, but it also has advantages. The remote process can help
relieve some of the pressure analysts may feel from bosses or peers in a
face-to-face format. It can also increase productivity, because participants
can make their inputs on a wiki at their convenience without having to
read others’ ideas quickly while thinking about their own ideas and
waiting for their turn to speak. The wiki format—including the ability to
upload documents, graphics, photos, or videos, and for all participants to
share ideas on their own computer screens—allows analysts to capture and
track brainstorming ideas and return to them at a later date.

Nominal Group Technique,
often abbreviated NGT, serves much the same function as Structured
Brainstorming, but it uses a quite different approach. It is the preferred
technique when there is concern that a senior member or outspoken
member of the group may dominate the meeting, that junior members may
be reluctant to speak up, or that the meeting may lead to heated debate.
Nominal Group Technique encourages equal participation by requiring
participants to present ideas one at a time in round-robin fashion until all
participants feel that they have run out of ideas.

Starbursting
is a form of brainstorming that focuses on generating questions rather than
answers. To help in defining the parameters of a research project, use
Starbursting to identify the questions that need to be answered. Questions
start with the words Who, What, How, When, Where, and Why.
Brainstorm each of these words, one at a time, to generate as many
questions as possible about the research topic.

Cross-Impact Matrix
is a technique that can be used after any form of brainstorming session that
identifies a list of variables relevant to a particular analytic project. The
results of the brainstorming session are put into a matrix, which is used to
guide a group discussion that systematically examines how each variable
influences all other variables to which it is judged to be related in a
particular problem context. The group discussion is often a valuable
learning experience that provides a foundation for further collaboration.
Results of cross-impact discussions can be maintained in a wiki for
continuing reference.
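
As a simple illustration, with invented variables rather than any case
from this book, such a matrix can be stored as a table whose cell (a, b)
records the judged influence of variable a on variable b:

    # A sketch with invented variables: a Cross-Impact Matrix as a nested
    # table; positive scores reinforce, negative scores dampen.
    variables = ["economic growth", "public unrest", "regime cohesion"]
    impact = {a: {b: 0 for b in variables if b != a} for a in variables}

    impact["economic growth"]["public unrest"] = -2   # growth dampens unrest
    impact["public unrest"]["regime cohesion"] = -1
    impact["regime cohesion"]["public unrest"] = -1

    for a in variables:
        for b, score in impact[a].items():
            if score:
                print(f"{a} -> {b}: {score:+d}")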

Morphological Analysis
is useful for dealing with complex, nonquantifiable problems for which
little data are available and the chances for surprise are significant. It is a
generic method for systematically identifying and considering all possible
relationships in a multidimensional, highly complex, usually
nonquantifiable problem space. It helps prevent surprises by generating a
large number of outcomes for any complex situation, thus reducing the
chance that events will play out in a way that the analyst has not
previously imagined and has not at least considered. Training and practice
are required before this method is used, and a facilitator experienced in
Morphological Analysis may be necessary.

Quadrant Crunching™
is an application of Morphological Analysis that uses key assumptions and
their opposites as a starting point for systematically generating a large
number of alternative outcomes. Two versions of the technique have been
developed: Classic Quadrant Crunching™ to avoid surprise, and Foresight
Quadrant Crunching™ to develop a comprehensive set of potential
alternative futures. For example, an analyst might use Classic Quadrant
Crunching™ to identify the many different ways that a terrorist might
attack a water supply. An analyst would use Foresight Quadrant
Crunching™ to generate multiple scenarios of how the conflict in Syria
might evolve over the next five years.

Quadrant Crunching™ forces analysts to rethink an issue from a broad


range of perspectives and systematically question all the assumptions that
underlie their lead hypothesis. It is most useful for ambiguous situations
for which little information is available.

5.1 Structured Brainstorming


Brainstorming is a group process that follows specific rules and procedures
designed for generating new ideas and concepts. Structured Brainstorming,
one of several forms of brainstorming described in this chapter, is a
systematic, multistep process for conducting group brainstorming that
employs silent brainstorming and sticky notes.

People sometimes say they are brainstorming when they sit with a few
colleagues or even by themselves and try to come up with relevant ideas.
That is not what true Brainstorming or Structured Brainstorming is about.
To be successful, brainstorming must be focused on a specific issue and generate
a final written product. The Structured Brainstorming technique requires a
facilitator, in part because participants cannot talk during the
brainstorming session.

The advent of collaborative tools such as wikis has helped bring structure
to the brainstorming process. Whether brainstorming begins with a face-to-
face session or is done entirely online, the collaborative features of wikis
can facilitate the analytic process by obliging analysts to capture results in
a sharable format that can be posted, understood by others, and refined for
future use. In addition, wikis amplify and extend the brainstorming process
—potentially improving the results—because each edit and the reasoning
for it is tracked, and disagreements or refinements can be openly discussed
and explicitly summarized on the wiki.

When to Use It
Structured Brainstorming is one of the most widely used analytic
techniques. Analysts use it at the beginning of a project to identify a list of
relevant variables, key driving forces, a full range of alternative
hypotheses, key players or stakeholders, available evidence or sources of
information, potential solutions to a problem, potential outcomes or
scenarios, or all the forces and factors that may come into play in a given
situation. For law enforcement, brainstorming is used to help identify
potential suspects or avenues of investigation. Brainstorming techniques
can be used again later in the analytic process to pull the team out of an
analytic rut, stimulate new investigative leads, or stimulate more creative
thinking.

In most instances a brainstorming session can be followed with Cross-
Impact Analysis to examine the relationship between each of the variables,
players, or other factors identified by the brainstorming.

Value Added
The stimulus for creativity comes from two or more analysts bouncing
ideas off each other. A brainstorming session usually exposes an analyst to
a greater range of ideas and perspectives than the analyst could generate
alone, and this broadening of views typically results in a better analytic
product. Intelligence analysts have found it particularly effective in
helping them overcome the intuitive traps of giving too much weight to
first impressions, allowing first-hand information to have too much
impact, expecting only marginal or incremental change, and ignoring the
significance of the absence of information.

The Method

The facilitator or group leader should present the focal question, explain
and enforce the ground rules, keep the meeting on track, stimulate
discussion by asking questions, record the ideas, and summarize the key
findings. Participants should be encouraged to express every idea that pops
into their heads. Even ideas that are outside the realm of the possible may
stimulate other ideas that are more feasible. The group should have at least
four and no more than twelve participants. Five to seven is an optimal
number; if there are more than twelve people, divide into two groups.

When conducting brainstorming sessions, experience has shown that
analysts should follow these eight basic rules:

✶ Be specific about the purpose and the topic of the brainstorming
session. Announce the topic beforehand, and ask participants to come
to the session with some ideas or to forward them to the facilitator
before the session.
✶ Never criticize an idea, no matter how weird, unconventional, or
improbable it might sound. Instead, try to figure out how the idea
might be applied to the task at hand.
✶ Allow only one conversation at a time, and ensure that everyone
has an opportunity to speak.
✶ Allocate enough time to complete the brainstorming session. It
often takes one hour to set the rules of the game, get the group
comfortable, and exhaust the conventional wisdom on the topic. Only
then do truly creative ideas begin to emerge.
✶ Engage all participants in the discussion; sometimes this might
require “silent brainstorming” techniques such as asking everyone to
be quiet for five minutes and write down their key ideas on a 3 × 5
card and then discussing what everyone wrote down on their cards.
✶ Include one or more “outsiders” in the group to avoid groupthink
and stimulate divergent thinking. Try to recruit astute thinkers who do
not share the same body of knowledge or perspective as other group
members but have some familiarity with the topic.
✶ Write it down! Track the discussion by using a whiteboard, an
easel, or sticky notes (see Figure 5.1).
✶ Summarize the key findings at the end of the session. Ask the
participants to write down their key takeaway or most important thing
they learned on a 3 × 5 card as they depart the session. Then prepare a
short summary and distribute the list to the participants (who may add
items to the list) and to others interested in the topic (including
supervisors and those who could not attend) either by e-mail or,
preferably, a wiki. If there is a need to capture the initial
brainstorming results as a “snapshot in time,” simply upload the
results as a pdf or other word-processing document, but still allow the
brainstorming discussion to continue on the wiki.

Figure 5.1 Picture of Structured Brainstorming

Source: 2009 Pherson Associates, LLC.

Many versions of brainstorming are in common usage. The following
twelve-step process has worked well in intelligence and law enforcement
communities to generate key drivers for Multiple Scenarios Generation, a
set of alternative hypotheses when conducting an investigation, or a list of
key factors to explain a particular behavior. The process is divided into
two phases: a divergent thinking (creative) phase when ideas are
presented, and a convergent thinking (analytic) phase when these ideas are
evaluated.

✶ Pass out “sticky” notes and Sharpie-type pens or markers to all
participants. A group of five to ten works best. Tell the participants
that no one can speak except the facilitator during the initial
collection phase of the exercise.

✶ Write the question you are brainstorming on a whiteboard or easel.
The objective is to make the question as open ended as possible. For
example, Structured Brainstorming focal questions often begin with:
“What are all the (things/forces and factors/circumstances) that would
help explain . . . ?”
✶ Ask the participants to write down their responses to the question
on a sticky note and give it to the facilitator, who reads them out loud.
Participants are asked to capture the concept with a few key words
that will fit on the sticky note. Use Sharpie-type pens so everyone can
easily see what is written.
✶ The facilitator sticks all the sticky notes on the wall or a
whiteboard in random order as they are called out. All ideas are
treated the same. Participants are urged to build on one another’s
ideas. Usually there is an initial spurt of ideas followed by pauses as
participants contemplate the question. After five or ten minutes
expect a long pause of a minute or so. This slowing down suggests
that the group has “emptied the barrel of the obvious” and is now on
the verge of coming up with some fresh insights and ideas.
Facilitators should not talk during this pause, even if the silence is
uncomfortable.
✶ After a couple of two-minute pauses, facilitators conclude this
divergent stage of the brainstorming process. They then ask the group
or a subset of the group to go up to the board and arrange the sticky
notes into affinity groups (basically grouping the ideas by like
concept). The group should arrange the sticky notes in clusters, not in
rows or columns. The group should avoid putting the sticky notes into
obvious groups like “economic” or “political.” Group members
cannot talk while they are doing this. If one sticky note seems to
“belong” in more than one group, make a copy and place one sticky
note in each affinity group.
✶ If the group has many members, those who are not involved in
arranging the sticky notes should be asked to perform a different task.
The facilitator should make copies of several of the sticky notes that
are considered outliers because they do not fit into any obvious group
or evoked a lot of laughter when read aloud. One of these outliers is
given to each table or to each smaller group of participants, who then
are asked to explain how that outlier relates to the primary task.
✶ If the topic is sufficiently complex, ask a second small group to go
to the board and review how the first group arranged the sticky notes.
The second group cannot speak, but members are encouraged to
continue to rearrange the notes into a more coherent pattern.
✶ When all the sticky notes have been arranged, members of the
group at the board are allowed to converse among themselves to pick
a word or phrase that best describes each affinity grouping.
✶ Pay particular attention to outliers or sticky notes that do not
belong in a particular group. Such an outlier could either be useless
noise or, more likely, contain a gem of an idea that deserves further
elaboration as a theme. If outlier sticky notes were distributed earlier
in the exercise, ask each group to explain how that outlier is relevant
to the issue.
✶ To identify the potentially most useful ideas, the facilitator or
group leader should establish up to five criteria for judging the value
or importance of the ideas. If so desired, then use the Ranking,
Scoring, and Prioritizing technique, described in chapter 4, for voting
on or ranking or prioritizing ideas. Another option is to give each
participant ten votes. Tell them to come up to the whiteboard or easel
with a marker and place their votes. They can place ten votes on a
single sticky note or affinity group label, or one vote each on ten
sticky notes or affinity group labels, or any combination in between.
✶ Assess what you have accomplished, and what areas will need
more work or more brainstorming. Ask yourself, “What do I see now
that I did not see before?”
✶ Set priorities and decide on a work plan for the next steps in the
analysis.
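
The ten-vote tally mentioned above is simple arithmetic, and a short
script can rank the affinity groups as the votes come in. The ballots in
this sketch are invented for illustration; it uses only the Python
standard library.

    # A sketch with invented ballots: tallying the ten-votes-per-participant
    # option to rank affinity groups.
    from collections import Counter

    ballots = {
        "analyst 1": {"funding channels": 6, "recruitment": 4},
        "analyst 2": {"recruitment": 10},
        "analyst 3": {"funding channels": 5, "logistics": 5},
    }

    votes = Counter()
    for allocation in ballots.values():
        assert sum(allocation.values()) == 10  # each participant casts ten votes
        votes.update(allocation)

    for group, total in votes.most_common():
        print(group, total)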

Relationship to Other Techniques


As discussed under “When to Use It,” any form of brainstorming is
commonly combined with a wide variety of other techniques. It is often an
early step in many analytic projects used to identify ideas, variables,
evidence, possible outcomes, suspects, or hypotheses that are then
processed by using other structured techniques.

Structured Brainstorming is also called Divergent/Convergent Thinking.
Other forms of brainstorming described in this chapter include Nominal
Group Technique and Virtual Brainstorming. If there is any concern that a
brainstorming session may be dominated by a senior officer or that junior
personnel may be reluctant to speak up, Nominal Group Technique may be
the best choice.

Origins of This Technique
Brainstorming was a creativity technique used by advertising agencies in
the 1940s. It was popularized in a book by advertising manager Alex
Osborn, Applied Imagination: Principles and Procedures of Creative
Problem Solving (New York: Scribner’s, 1953). There are many versions
of brainstorming. The description here is a combination of information
from Randolph H. Pherson, “Structured Brainstorming,” in Handbook of
Analytic Tools and Techniques (Reston, VA: Pherson Associates, LLC,
2007), and training materials used throughout the global intelligence
community.

5.2 Virtual Brainstorming


Virtual Brainstorming is the same as Structured Brainstorming except that
it is done online, with participants who are geographically dispersed or
unable to meet in person. The advantages and disadvantages of Virtual
Brainstorming as compared with Structured Brainstorming are discussed in
the “Value Added” section.

When to Use It
Virtual Brainstorming is an appropriate technique to use for a panel of
outside experts or a group of personnel from the same organization who
are working in different locations. It is also appropriate for a group of
analysts working at several locations within a large, congested
metropolitan area, such as Washington, D.C., where distances and traffic
can cause a two-hour meeting to consume most of the day for some
participants.

Value Added
Virtual Brainstorming can be as productive as face-to-face Structured
Brainstorming. The productivity of face-to-face brainstorming is
constrained by what researchers on the subject call “production blocking.”
Participants have to wait their turn to speak. And they have to think about
what they want to say while trying to pay attention to what others are
saying. Something always suffers. What suffers depends upon how the
brainstorming session is organized. Often, it is the synergy that can be
gained when participants react to one another’s ideas. Synergy occurs
when an idea from one participant triggers a new idea in another
participant, an idea that otherwise may not have arisen. For many analysts,
synergistic thinking is the most fundamental source of gain from
brainstorming.

In synchronous Virtual Brainstorming, all participants are engaged at the same time. In asynchronous Virtual Brainstorming, participants can make
their inputs and read the inputs of others at their convenience. This means
that nothing is competing for a participant’s attention when he or she is
suggesting ideas or reading the input of others. If the brainstorming session
is spread over two or three days, participants can occasionally revisit their
inputs and the inputs of others with a fresh mind; this process usually
generates further ideas.

Another benefit of Virtual Brainstorming, whether synchronous or asynchronous, is that participants can provide their inputs anonymously if the software needed to support that feature is available. This is particularly useful in an environment where status or hierarchy influences people’s behavior. Anonymity is sometimes necessary to elicit original ideas rather than “politically correct” ideas.

Of course, face-to-face meetings have significant benefits. There is often a downside to online communication as compared with face-to-face
communication. (Guidance for facilitators of Virtual Brainstorming is
available in the article by Nancy Settle-Murphy and Julia Young listed
under “Origins of This Technique.”)

The Method
Virtual Brainstorming is a two-phase process. It usually begins
with the divergent process of creating as many relevant ideas as possible.
The second phase is a process of convergence, when the ideas are sorted
into categories, weeded out, prioritized, or combined and molded into a
conclusion or plan of action. Software is available for performing these
common functions online. The nature of this second step will vary
depending on the specific topic and the goal of the brainstorming session.

Relationship to Other Techniques
See the discussions of Structured Brainstorming and Nominal Group
Technique and compare the relative advantages and possible weaknesses
of each.

Origins of This Technique


Virtual Brainstorming is the application of conventional brainstorming to
the computer age. The discussion here combines information from several
sources: Nancy Settle-Murphy and Julia Young, “Virtual Brainstorming: A
New Approach to Creative Thinking,” Communique (2009),
http://www.facilitate.com/support/facilitator-toolkit/docs/Virtual-
Brainstorming.pdf; Alan R. Dennis and Mike L. Williams, “Electronic
Brainstorming: Theory, Research and Future Directions,” Indiana
University, Information Systems Technical Reports and Working Papers,
TR116–1, April 2002; and Alan R. Dennis and Joseph S. Valacich,
“Computer Brainstorming: More Heads Are Better Than One,” Journal of
Applied Psychology 78 (August 1993): 531–537.

5.3 Nominal Group Technique


Nominal Group Technique (NGT) is a process for generating and
evaluating ideas. It is a form of brainstorming, but NGT has always had its
own identity as a separate technique. The goals of Nominal Group
Technique and Structured Brainstorming are the same—the generation of
good, innovative, and viable ideas. NGT differs from Structured
Brainstorming in several important ways. Most important, ideas are
presented one at a time in round-robin fashion.

When to Use It
NGT prevents the domination of a discussion by a single person. Use it
whenever there is concern that a senior officer or executive or an
outspoken member of the group will control the direction of the meeting
by speaking before anyone else. It is also appropriate to use NGT rather
than Structured Brainstorming if there is concern that some members may
not speak up, the session is likely to be dominated by the presence of one
or two well-known experts in the field, or the issue under discussion is
controversial and may provoke a heated debate. NGT can be used to
coordinate the initial conceptualization of a problem before the research
and writing stages begin. Like brainstorming, NGT is commonly used to
identify ideas (assumptions, hypotheses, drivers, causes, variables,
important players) that can then be incorporated into other methods.

Value Added
NGT can be used both to generate ideas and to provide backup support in a
decision-making process where all participants are asked to rank or
prioritize the ideas that are generated. If it seems desirable, all ideas and
votes can be kept anonymous. Compared with Structured Brainstorming,
which usually seeks to generate the greatest possible number of ideas—no
matter how far out they may be—NGT may focus on a limited list of
carefully considered opinions.

The technique allows participants to focus on each idea as it is presented, rather than having to think simultaneously about preparing their own ideas
and listening to what others are proposing—a situation that often happens
with Structured Brainstorming. NGT encourages piggybacking on ideas
that have already been presented—in other words, combining, modifying,
and expanding others’ ideas.

The Method
An NGT session starts with the facilitator asking an open-ended question
such as, “What factors will influence . . . ?” “How can we learn if . . . ?”
“In what circumstances might . . . happen?” “What should be included or
not included in this research project?” The facilitator answers any
questions about what is expected of participants and then gives participants
five to ten minutes to work privately to jot down on note cards their initial
ideas in response to the focal question. This part of the process is followed
by these steps:

✶ The facilitator calls on one person at a time to present one idea. As
each participant presents his or her idea, the facilitator writes a
summary description on a flip chart or whiteboard. This process
continues in a round-robin fashion until all ideas have been
exhausted. Individuals who run out of ideas pass when called upon,
but they can participate again if they have another idea when their
turn comes up later. The facilitator can also
be an active participant, writing down his or her own ideas. There is
no discussion until all ideas have been presented; however, the
facilitator can clarify ideas to avoid duplication.
✶ When no new ideas are forthcoming, the facilitator initiates a
group discussion to ensure that a common understanding exists of
what each idea means. The facilitator asks about each idea, one at a
time, in the order presented, but no argument for or against any idea
is allowed. At this time, the participants can expand or combine ideas,
but no change can be made to an idea without the approval of the
original presenter of the idea.
✶ Voting to rank or prioritize the ideas as discussed in chapter 4 is
optional, depending upon the purpose of the meeting. When voting is
done, it is usually by secret ballot, although various voting procedures
may be used depending in part on the number of ideas and the
number of participants. It usually works best to employ a ratio of one
vote for every three ideas presented. For example, if the facilitator
lists twelve ideas, each participant is allowed to cast four votes. The
group can also decide to let participants give an idea more than one
vote. In this case, someone could give one idea three votes and
another idea only one vote. An alternative procedure is for each
participant to write what he or she considers the five best ideas on a 3
× 5 card. One might rank the ideas on a scale of 1 to 5, with five
points for the best idea, four points for the next best, down to one
point for the least favored idea. The cards are then passed to the
facilitator for tabulation and announcement of the scores. In such
circumstances, a second round of voting may be needed to rank the
top three or five ideas.
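
The voting arithmetic just described is easy to mechanize. The sketch below is a minimal Python illustration, not part of NGT as published; the ideas and ballots are invented:

```python
def votes_allowed(num_ideas: int) -> int:
    # One vote for every three ideas presented: twelve ideas -> four votes.
    return max(1, num_ideas // 3)

print(votes_allowed(12))  # 4

# 3 x 5 card option: each participant lists five ideas, best first; the
# best idea earns five points, down to one point for the fifth.
cards = [
    ["Idea C", "Idea A", "Idea F", "Idea B", "Idea D"],
    ["Idea A", "Idea C", "Idea B", "Idea G", "Idea F"],
]

scores = {}
for card in cards:
    for rank, idea in enumerate(card):  # rank runs 0..4
        scores[idea] = scores.get(idea, 0) + (5 - rank)

for idea, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(idea, score)
```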

Relationship to Other Techniques


Analysts should consider Structured Brainstorming and Virtual
Brainstorming as well as Nominal Group Technique and determine which
technique is most appropriate for the conditions in which it will be used.

Origins of This Technique

Nominal Group Technique was developed by A. L. Delbecq and A. H.
Van de Ven and first described in “A Group Process Model for Problem
Identification and Program Planning,” Journal of Applied Behavioral
Science VII (July–August, 1971): 466–491. The discussion of NGT here is
a synthesis of several sources: James M. Higgins, 101 Creative Problem
Solving Techniques: The Handbook of New Ideas for Business, rev. ed.
(Winter Park, FL: New Management Publishing Company, 2006);
American Society for Quality, http://www.asq.org/learn-about-
quality/idea-creation-tools/overview/nominal-group.html; “Nominal
Group Technique,” http://syque.com/quality_tools/toolbook/NGT/ngt.htm;
and “Nominal Group Technique,”
www.mycoted.com/Nominal_Group_Technique.

5.4 Starbursting
Starbursting is a form of brainstorming that focuses on generating
questions rather than eliciting ideas or answers. It uses the six questions
commonly asked by journalists: Who? What? How? When? Where? and
Why?

When to Use It
Use Starbursting to help define your research project. After deciding on
the idea, topic, or issue to be analyzed, brainstorm to identify the questions
that need to be answered by the research. Asking the right questions is a
common prerequisite to finding the right answer.

The Method
The term “Starbursting” comes from the image of a six-pointed star. To
create a Starburst diagram, begin by writing one of the following six words
at each point of the star: Who, What, How, When, Where, and Why? Then
start the brainstorming session, using one of these words at a time to
generate questions about the topic. Usually it is best to discuss each
question in the order provided, in part because the order also approximates
how English sentences are constructed. Often only three or four of the
words are directly relevant to the intelligence question. For some words
(often When or Where) the answer may be a given and not require further
exploration.

Do not try to answer the questions as they are identified; just focus on
developing as many questions as possible. After generating questions that
start with each of the six words, ask the group either to prioritize the
questions to be answered or to sort the questions into logical categories.
Figure 5.4 is an example of a Starbursting diagram. It identifies questions
to be asked about a biological attack in a subway.
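
For analysts who want a simple scaffold for recording the questions, the sketch below shows one way to organize a Starburst in Python; the sample questions are invented and merely echo the subway example:

```python
# The six journalists' stems that seed a Starbursting session.
STEMS = ("Who", "What", "How", "When", "Where", "Why")

questions = {stem: [] for stem in STEMS}  # answers are deliberately deferred
questions["Who"].append("Who has the expertise to handle the biological agent?")
questions["Where"].append("Where in the subway system would a release spread farthest?")

for stem in STEMS:
    for q in questions[stem]:
        print(f"[{stem}] {q}")
```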

Relationship to Other Techniques


This Who, What, How, When, Where, and Why approach can be
combined effectively with the Getting Started Checklist and Issue
Redefinition techniques in chapter 4. Ranking, Scoring, and Prioritizing
(chapter 4) can be used to prioritize the questions to be worked on.
Starbursting is also directly related to the analysis of cause and effect as
discussed in chapter 8. It also helps order the analytic process used in
the Indicators Validator™.

Origins of This Technique


Starbursting is one of many techniques developed to stimulate creativity.
The basic idea for Figure 5.4 comes from the MindTools website:
www.mindtools.com/pages/article/newCT_91.htm.

Figure 5.4 Starbursting Diagram of a Lethal Biological Event at a Subway Station

Source: A basic Starbursting diagram can be found at the MindTools
website:
www.mindtools.com/pages/article/worksheets/Starbursting.pdf. This
version was created by the authors.

5.5 Cross-Impact Matrix


The Cross-Impact Matrix helps analysts deal with complex problems when
“everything is related to everything else.” By using this technique, analysts
and decision makers can systematically examine how each factor in a
particular context influences all other factors to which it appears to be
related.

When to Use It
The Cross-Impact Matrix is useful early in a project when a group is still
in a learning mode trying to sort out a complex situation. Whenever a
brainstorming session or other meeting is held to identify all the variables,
drivers, or players that may influence the outcome of a situation, the next
logical step is to use a Cross-Impact Matrix to examine the
interrelationships among each of these variables. A group discussion of
how each pair of variables interacts can be an enlightening learning
experience and a good basis on which to build ongoing collaboration. How
far one goes in actually filling in the matrix and writing up a description of
the impacts associated with each variable may vary depending upon the
nature and significance of the project. At times, just the discussion is
sufficient. Writing up the interactions with the summary for each pair of
variables can be done effectively in a wiki.

Analysis of cross-impacts is also useful when:

✶ A situation is in flux, and analysts need to understand all the factors that might influence the outcome. This also requires
understanding how all the factors relate to one another, and how they
might influence one another.
✶ A situation is stable, and analysts need to identify and monitor all
the factors that could upset that stability. This, too, requires
understanding how the various factors might interact to influence one
another.
✶ A significant event has just occurred, and analysts need to
understand the implications of the event. What other significant
forces are influenced by the event, and what are the implications of
this influence?

Value Added
When analysts are estimating or forecasting future events, they consider
the dominant forces and potential future events that might influence an
outcome. They then weigh the relative influence or likelihood of these
forces or events, often considering them individually without regard to
sometimes significant interactions that might occur. The Cross-Impact
Matrix provides a context for the discussion of these interactions. This
discussion often reveals that variables or issues once believed to be simple
and independent are, in reality, interrelated. The information sharing that
occurs during a small-group discussion of each potential cross-impact can
be an invaluable learning experience. For this reason alone, the Cross-
Impact Matrix is a useful tool that can be applied at some point in almost
any study that seeks to explain current events or forecast future outcomes.

The Cross-Impact Matrix provides a structure for managing the complexity that makes most analysis so difficult. It requires that all
assumptions about the relationships between variables be clearly
articulated. Thus any conclusions reached through this technique can be
defended or critiqued by tracing the analytical argument back through a
path of underlying premises.

The Method
Assemble a group of analysts knowledgeable on various aspects of the
subject. The group brainstorms a list of variables or events that would
likely have some effect on the issue being studied. The project coordinator
then creates a matrix and puts the list of variables or events down the left
side of the matrix and the same variables or events across the top.

The matrix is then used to consider and record the relationship between
each variable or event and every other variable or event. For example, does
the presence of Variable 1 increase or diminish the influence of Variables
2, 3, 4, etc.? Or does the occurrence of Event 1 increase or decrease the
likelihood of Events 2, 3, 4, etc.? If one variable does affect the other, the
positive or negative magnitude of this effect can be recorded in the matrix
by entering a large or small + or a large or small – in the appropriate cell
(or by making no marking at all if there is no significant effect). The
terminology used to describe each relationship between a pair of variables
or events is that it is “enhancing,” “inhibiting,” or “unrelated.”

The matrix shown in Figure 5.5 has six variables, with thirty possible
interactions. Note that the relationship between each pair of variables is
assessed twice, as the relationship may not be symmetric. That is, the
influence of Variable 1 on Variable 2 may not be the same as the impact of
Variable 2 on Variable 1. It is not unusual for a Cross-Impact Matrix to
have substantially more than thirty possible interactions, in which case
careful consideration of each interaction can be time consuming.
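
The bookkeeping behind the matrix is straightforward. A minimal Python sketch follows; the variable names and the two sample judgments are placeholders, not analytic content:

```python
from itertools import permutations

variables = ["V1", "V2", "V3", "V4", "V5", "V6"]  # substitute real drivers

# Ordered pairs, because influence need not be symmetric:
# six variables yield 6 * 5 = 30 cells to consider.
cells = {pair: None for pair in permutations(variables, 2)}
print(len(cells))  # 30

# Record each judgment as "++", "+", "-", "--", or leave None when the
# pair is unrelated (no significant effect).
cells[("V1", "V2")] = "+"   # V1 enhances V2 (hypothetical)
cells[("V2", "V1")] = "--"  # but V2 strongly inhibits V1 (hypothetical)
```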

Analysts should use the Cross-Impact technique to focus on significant interactions between variables or events that may have been overlooked, or combinations of variables that might reinforce one another. Combinations of variables that reinforce one another can lead to surprisingly rapid
changes in a predictable direction. On the other hand, for some problems it
may be sufficient simply to recognize that there is a relationship and the
direction of that relationship.

Figure 5.5 Cross-Impact Matrix

The depth of discussion and the method used for recording the results are
discretionary. Each depends upon how much you are learning from the
discussion, and that will vary from one application of this matrix to
another. If the group discussion of the likelihood of these variables or
events and their relationships to one another is a productive learning
experience, keep it going. If key relationships are identified that are likely
to influence the analytic judgment, fill in all cells in the matrix and take
good notes. If the group does not seem to be learning much, cut the
discussion short.

As a collaborative effort, team members can conduct their discussion online with input recorded in a wiki. Set up a wiki with space to enter
information about each cross-impact. Analysts can then, as time permits,
enter new information or edit previously entered information about the
interaction between each pair of variables. This record will serve as a point
of reference or a memory jogger throughout the project.

Relationship to Other Techniques
Matrices as a generic technique with many types of applications are
discussed in chapter 4. The use of a Cross-Impact Matrix as described here
frequently follows some form of brainstorming at the start of an analytic
project to elicit the further assistance of other knowledgeable analysts in
exploring all the relationships among the relevant factors identified by the
brainstorming. It can be a good idea to build on the discussion of the
Cross-Impact Matrix by developing a visual Mind Map or Concept Map of
all the relationships.

See also the discussion of the Complexity Manager technique in chapter 11. An integral part of the Complexity Manager technique is a form of
Cross-Impact Analysis that takes the analysis a step further toward an
informed conclusion.

Origins of This Technique


The Cross-Impact Matrix technique was developed in the 1960s as one
element of a quantitative futures analysis methodology called Cross-
Impact Analysis. Richards Heuer became familiar with it when the CIA
was testing the Cross-Impact Analysis methodology. He started using it as
an intelligence analysis technique, as described here, more than thirty
years ago.

5.6 Morphological Analysis


Morphological Analysis is a method for systematically structuring and
examining all the possible relationships in a multidimensional, highly
complex, usually nonquantifiable problem space. The basic idea is to
identify a set of variables and then look at all the possible combinations of
these variables.

Morphological Analysis is a generic method used in a variety of disciplines. For intelligence analysis, it helps prevent surprise by
generating a large number of feasible outcomes for any complex situation.
This exercise reduces the chance that events will play out in a way that the
analyst has not previously imagined and considered. Specific applications
of this method are Quadrant Crunching™ (discussed later in this chapter),
Multiple Scenarios Generation (chapter 6), and Quadrant Hypothesis
Generation (chapter 7). Training and practice are required before using this
method, and the availability of a facilitator experienced in morphological
analysis is highly desirable.

When to Use It
Morphological Analysis is most useful for dealing with complex,
nonquantifiable problems for which little information is available and the
chances for surprise are great. It can be used, for example, to identify
possible variations of a threat, possible ways a crisis might occur between
two countries, possible ways a set of driving forces might interact, or the
full range of potential outcomes in any ambiguous situation.
Morphological Analysis is generally used early in an analytic project, as it
is intended to identify all the possibilities, not to drill deeply into any
specific possibility.

Although Morphological Analysis is typically used for looking ahead, it can also be used in an investigative context to identify the full set of
possible explanations for some event.

Value Added
By generating a comprehensive list of possible outcomes, analysts are in a
better position to identify and select those outcomes that seem most
credible or that most deserve attention. This list helps analysts and
decision makers focus on what actions need to be undertaken today to
prepare for events that could occur in the future. They can then take the
actions necessary to prevent or mitigate the effect of bad outcomes and
help foster better outcomes. The technique can also sensitize analysts to
High Impact/Low Probability developments, or “nightmare scenarios,”
which could have significant adverse implications for influencing policy or
allocation of resources.

The product of Morphological Analysis is often a set of potential noteworthy scenarios, with indicators of each, plus the intelligence
collection requirements or research directions for each scenario. Another
benefit is that morphological analysis leaves a clear audit trail about how
the judgments were reached.

The Method
Morphological analysis works through two common principles of
creativity techniques: decomposition and forced association. Start by
defining a set of key parameters or dimensions of the problem, and then
break down each of those dimensions further into relevant forms or states
or values that the dimension can assume—as in the example described
later in this section. Two dimensions can be visualized as a matrix and
three dimensions as a cube. In more complicated cases, multiple linked
matrices or cubes may be needed to break the problem down into all its
parts.

The principle of forced association then requires that every element be paired with and considered in connection with every other element in the
morphological space. How that is done depends upon the complexity of
the case. In a simple case, each combination may be viewed as a potential
scenario or problem solution and examined from the point of view of its
possibility, practicability, effectiveness, or other criteria. In complex cases,
there may be thousands of possible combinations and computer assistance
is required. With or without computer assistance, it is often possible to
quickly eliminate a large proportion of the combinations as not physically
possible, impracticable, or undeserving of attention. This narrowing-down
process allows the analyst to concentrate only on those combinations that
are within the realm of the possible and most worthy of attention.

Example
Analysts are asked to assess how a terrorist attack on the water supply
might unfold. In the absence of direct information about specific terrorist
planning for such an attack, a group of analysts uses Structured
Brainstorming to identify the following key dimensions of the problem:
group, type of attack, target, and intended impact. For each dimension, the
analysts identify as many elements as possible. For example, the group
could be an outsider, an insider, or a visitor to a facility, while the target
could be drinking water, wastewater, or storm sewer runoff.
The analysts then array this data into a matrix, illustrated in Figure 5.6, and
begin to create as many permutations as possible using different
combinations of the matrix boxes. These permutations allow the analysts
to identify and consider multiple combinations for further exploration. One possible scenario, shown in the matrix, is an outsider who carries out
multiple attacks on a treatment plant to cause economic disruption.
Another possible scenario is an insider who carries out a single attack on
drinking water to terrorize the population.
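
The forced-association step can be sketched in a few lines of Python using the dimensions from this example. The screening rule here is purely illustrative; in practice the group's own judgments decide which combinations to discard:

```python
from itertools import product

dimensions = {
    "group":  ["outsider", "insider", "visitor"],
    "attack": ["single", "multiple"],
    "target": ["drinking water", "wastewater", "treatment plant"],
    "impact": ["mass casualties", "economic disruption", "terrorize population"],
}

# Force every value into combination with every other dimension's values.
combos = [dict(zip(dimensions, values)) for values in product(*dimensions.values())]
print(len(combos))  # 3 * 2 * 3 * 3 = 54 candidate combinations

def plausible(combo: dict) -> bool:
    # Hypothetical screen: discard combinations judged not physically
    # possible or not worth attention.
    return not (combo["group"] == "visitor" and combo["attack"] == "multiple")

shortlist = [c for c in combos if plausible(c)]
print(len(shortlist))  # 45 combinations remain for closer review
```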

Analysts who might be interested in using a computerized version of Morphological Analysis are referred to the Swedish Morphology Society
(www.swemorph.com). This website has detailed guidance and examples
of the use of Morphological Analysis for futures research, disaster risk
management, complex sociotechnical problems, policy research, and other
problems comparable to those faced by intelligence analysts.

Origins of This Technique


The current form of Morphological Analysis was developed by astronomer
Fritz Zwicky and described in his book Discovery, Invention, Research
through the Morphological Approach (Toronto: Macmillan, 1969). Basic
information about this method is available from two well-known websites
that provide information on creativity tools: http://creatingminds.org and
www.mindtools.com. For more advanced information, see General
Morphological Analysis: A General Method for Non-Quantified Modeling
(1998); Wicked Problems: Structuring Social Messes with Morphological
Analysis (2008); and Futures Studies Using Morphological Analysis
(2009), all downloadable from the Swedish Morphology Society’s
website: http://www.swemorph.com.

Figure 5.6 Morphological Analysis: Terrorist Attack Options

Source: 2009 Pherson Associates, LLC.

5.7 Quadrant Crunching™


Quadrant Crunching™ is a special application of Morphological Analysis,
a systematic procedure for identifying all the potentially feasible
combinations between several sets of variables. It combines the
methodology of a Key Assumptions Check (chapter 8) with Multiple
Scenarios Generation (chapter 6).

Two versions of the technique have been developed: Classic Quadrant Crunching™ helps analysts avoid surprise, and Foresight Quadrant
Crunching™ can be used to develop a comprehensive set of potential
alternative futures. Both techniques spur analysts to rethink an issue from a
broad range of perspectives and systematically question all the
assumptions that underlie their lead hypothesis.

✶ Classic Quadrant Crunching™ helps analysts avoid surprise by examining multiple possible combinations of selected key variables.
It was initially developed in 2006 by Pherson Associates, LLC, to
help counterterrorism analysts and decision makers identify the many
different ways international terrorists or domestic radical extremists
might mount an attack.
✶ Foresight Quadrant Crunching™ was developed in 2013 by Randy Pherson. It adopts the same initial approach of flipping assumptions
as Classic Quadrant Crunching™ and then applies Multiple Scenarios
Generation to generate a wide range of comprehensive and mutually
exclusive future scenarios or outcomes of any type—many of which
have not previously been contemplated.

When to Use It
Both Quadrant Crunching™ techniques are useful for dealing with highly
complex and ambiguous situations for which few data are available and
the chances for surprise are great. Training and practice are required before
analysts use either technique, and it is highly recommended to engage an
experienced facilitator, especially when this technique is used for the first
time.

Analysts can use Classic Quadrant Crunching™ to identify and systematically challenge assumptions, explore the implications of contrary
assumptions, and discover “unknown unknowns.” By generating multiple
possible alternative outcomes for any situation, Classic Quadrant
Crunching™ reduces the chance that events could play out in a way that
has not previously been imagined or considered. Analysts, for example,
would use Classic Quadrant Crunching™ to identify the many different
ways terrorists might conduct an attack on the homeland or business
competitors might react to a new product launch.

Analysts who use Foresight Quadrant Crunching™ can be more confident that they have considered a broad range of possible ways a situation could
develop and have spotted indicators that signal a specific scenario is
starting to unfold. For example, an analyst could use Foresight Quadrant
Crunching™ to generate multiple scenarios of how the conflict in Syria
might evolve over the next five years and gain a better understanding of
the interplay of key drivers in that region.

Value Added
These techniques reduce the potential for surprise by providing a
structured framework with which the analyst can generate an array of
alternative options or ministories. Classic Quadrant Crunching™ requires
analysts to identify and systematically challenge all their key assumptions about how a terrorist attack might be launched or any other specific
situation might evolve. By critically examining each assumption and how
a contrary assumption might play out, analysts can better assess their level
of confidence in their predictions and the strength of their lead hypothesis.
Foresight Quadrant Crunching™ is a new technique for conducting
scenarios analysis; it is most effective when there is a strong consensus
that only a single future outcome is likely.

Both techniques provide a useful platform for developing indicator lists and for generating collection requirements. They also help decision makers
focus on what actions need to be undertaken today to best prepare for
events that could transpire in the future. By reviewing an extensive list of
potential alternatives, decision makers are in a better position to select
those that deserve the most attention. They can then take the necessary
actions to avoid or mitigate the impact of unwanted or bad alternatives and
help foster more desirable ones. The technique also can be used to
sensitize decision makers to potential “wild cards” (High Impact/Low
Probability developments) or “nightmare scenarios,” both of which could
have significant policy or resource implications.

The Method
Classic Quadrant Crunching™ is sometimes described as a Key Assumptions Check on steroids. It is most
useful when there is a well-established lead hypothesis that can be
articulated clearly. Classic Quadrant Crunching™ calls on the analyst to
break down the lead hypothesis into its component parts, identifying the
key assumptions that underlie the lead hypothesis, or dimensions that
focus on Who, What, How, When, Where, and Why. Once the key
dimensions of the lead hypothesis are articulated, the analyst generates two
or four examples of contrary dimensions. For example, two contrary
dimensions for a single attack would be simultaneous attacks and
cascading attacks. The various contrary dimensions are then arrayed in sets
of 2 × 2 matrices. If four dimensions are identified for a particular topic,
the technique would generate six different 2 × 2 combinations of these four
dimensions (AB, AC, AD, BC, BD, and CD). Each of these pairs would be
presented as a 2 × 2 matrix with four quadrants. Different stories or
alternatives would be generated for each quadrant in each matrix. If two stories are imagined for each quadrant in each of these 2 × 2 matrices, a
total of 48 different ways the situation could evolve will have been
generated. Similarly, if six drivers are identified, the technique will
generate as many as 120 different stories to consider (see Figure 5.7a).
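
The combinatorics in this paragraph are easy to verify. A short Python check, assuming two stories per quadrant as above:

```python
from itertools import combinations

def story_count(num_dimensions: int, stories_per_quadrant: int = 2) -> int:
    # Each pair of dimensions becomes one 2 x 2 matrix with four quadrants.
    num_matrices = len(list(combinations(range(num_dimensions), 2)))
    return num_matrices * 4 * stories_per_quadrant

print(story_count(4))  # 6 matrices * 4 quadrants * 2 stories = 48
print(story_count(6))  # 15 matrices -> 120 stories
```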

Once a rich array of potential alternatives has been generated, the analyst’s
task is to identify which of the various alternative stories are the most
deserving of attention. The last step in the process is to develop lists of
indicators for each story in order to track whether a particular story is
beginning to emerge.

The Classic Quadrant Crunching™ technique can be illustrated by exploring the question, “How might terrorists attack a nation’s water
system?”

Figure 5.7a Classic Quadrant Crunching™: Creating a Set of Stories

Source: 2009 Pherson Associates, LLC.

✶ State the conventional wisdom for the most likely way this
terrorist attack might be launched. For example, “Al-Qaeda or its
affiliates will contaminate the water supply for a large metropolitan
area, causing mass casualties.”
✶ Break down this statement into its component parts or key
assumptions. For example, the statement makes five key assumptions:
(1) a single attack, (2) involving drinking water, (3) conducted by an
outside attacker, (4) against a major metropolitan area, (5) that causes
large numbers of casualties.
✶ Posit a contrary assumption for each key assumption. For example,
what if there are multiple attacks instead of a single attack?
✶ Identify two or four dimensions of that contrary assumption. For
example, what are different ways a multiple attack could be
launched? Two possibilities would be simultaneous attacks (as in the
September 2001 attacks on the World Trade Center and the Pentagon
or the London bombings in 2005) or cascading attacks (as in the
sniper killings in the Washington, D.C., area in October 2002).
✶ Repeat this process for each of the key dimensions. Develop two
or four contrary dimensions for each contrary assumption. (See
Figure 5.7b.)
✶ Array pairs of contrary dimensions into sets of 2 × 2 matrices. In
this case, ten different 2 × 2 matrices would be created. Two of the
ten matrices are shown in Figure 5.7c.
✶ For each cell in each matrix generate one to three examples of how
terrorists might launch an attack. In some cases, such an attack might
already have been imagined. In other quadrants there may be no
credible attack concept. But several of the quadrants will usually
stretch the analysts’ thinking, pushing them to think about the
dynamic in new and different ways.
✶ Review all the attack plans generated; using a preestablished set of
criteria, select those most deserving of attention. In this example,
possible criteria might be those plans that are most likely to:
– Cause the most damage; have the most impact.
– Be the hardest to detect or prevent.
– Pose the greatest challenge for consequence management.
✶ This process is illustrated in Figure 5.7d. In this case, three attack
plans were selected as the most likely. Attack plan 1 became Story A,
attack plans 4 and 7 were combined to form Story B, and attack plan
16 became Story C. It may also be desirable to select one or two
additional attack plans that might be described as “wild cards” or
“nightmare scenarios.” These are attack plans that have a low
probability of being tried but are worthy of attention because their
impact would be substantial if they did occur. The figure shows attack
plan 11 as a nightmare scenario.
✶ Consider what decision makers might do to prevent bad stories
from happening, mitigate their impact, and deal with their
consequences.
✶ Generate a list of key indicators to help assess which, if any, of
these attack plans is beginning to emerge.

Figure 5.7b Terrorist Attacks on Water Systems: Flipping Assumptions

Source: 2009 Pherson Associates, LLC.

Figure 5.7c Terrorist Attacks on Water Systems: Sample Matrices

Source: 2009 Pherson Associates, LLC.

Figure 5.7d Selecting Attack Plans

Source: 2013 Pherson Associates, LLC.

The best way to have a good idea is to have a lot of ideas.

—Louis Pasteur

Foresight Quadrant Crunching™ adopts much the same method as Classic Quadrant Crunching™, with two
major differences. In the first step, state the scenario that most analysts
believe has the greatest probability of emerging. When later developing
the list of alternative dimensions, include the dimensions contained in the lead scenario. By including the lead scenario, the final set of alternative
scenarios or futures should be comprehensive and mutually exclusive. The
specific steps for Foresight Quadrant Crunching™ are:

✶ State what most analysts believe is the most likely future scenario.
✶ Break down this statement into its component parts or key
assumptions.
✶ Posit a contrary assumption for each key assumption.
✶ Identify one or three contrary dimensions of that contrary
assumption.
✶ Repeat this process for each of the contrary assumptions—a
process similar to that shown in Figure 5.7b.
✶ Add the key assumption to the list of contrary dimensions, creating
either one or two pairs.
✶ Repeat this process for each row, creating one or two pairs
including a key assumption and one or three contrary dimensions.
✶ Array these pairs into sets of 2 × 2 matrices, a process similar to
that shown in Figure 5.7c.
✶ For each cell in each matrix generate one to three credible
scenarios. In some cases, such a scenario may already have been
imagined. In other quadrants, there may be no scenario that makes
sense. But several of the quadrants will usually stretch the analysts’
thinking, often generating counterintuitive scenarios.
✶ Review all the scenarios generated—a process similar to that
shown in Figure 5.7d; using a preestablished set of criteria, select
those scenarios most deserving of attention. The difference is that
with Classic Quadrant Crunching™, analysts are seeking to develop a
set of credible alternative attack plans to avoid surprise. In Foresight
Quadrant Crunching™, analysts are engaging in a new version of
multiple scenarios analysis.

Relationship to Other Techniques


Both Quadrant Crunching™ techniques are specific applications of a
generic method called Morphological Analysis (described in this chapter).
They draw on the results of the Key Assumptions Check and can
contribute to Multiple Scenarios Generation. They can also be used to
identify indicators.

Origins of This Technique
Classic Quadrant Crunching™ was developed by Randy Pherson and Alan
Schwartz to meet a specific analytic need in the counterterrorism arena. It
was first published in Randolph H. Pherson, Handbook of Analytic Tools
and Techniques (Reston, VA: Pherson Associates, LLC, 2008). Foresight
Quadrant Crunching™ was developed by Randy Pherson in 2013 as a new
method for conducting multiple scenarios analysis.

1. Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus
and Giroux, 2011), 95–96.

6 Scenarios and Indicators

6.1 Scenarios Analysis
6.2 Indicators
6.3 Indicators Validator™

In the complex, evolving, uncertain situations that analysts and decision makers must handle, the future is not easily predictable. Some events are
intrinsically of low predictability. The best the analyst can do is to identify
the driving forces that may determine future outcomes and monitor those
forces as they interact to become the future. Scenarios are a principal
vehicle for doing this. Scenarios are plausible and provocative stories
about how the future might unfold. When alternative futures have been
clearly outlined, decision makers can mentally rehearse these futures and
ask themselves, “What should I be doing now to prepare for these
futures?”

Scenarios Analysis provides a framework for considering multiple plausible futures. As Peter Schwartz, author of The Art of the Long View,
has argued, “The future is plural.”1 Trying to divine or predict a single
outcome typically is a disservice to senior policy officials, decision
makers, and other clients. Generating several scenarios (for example, those
that are most likely, least likely, and most dangerous) helps focus attention
on the key underlying forces and factors most likely to influence how a
situation develops. Analysts can also use scenarios to examine
assumptions and deliver useful warning messages when high impact/low
probability scenarios are included in the exercise.

A robust Scenarios Analysis exercise can be a powerful instrument for overcoming well-known cognitive biases such as groupthink. First, the
technique requires building a diverse team that is knowledgeable in a wide
variety of disciplines. Second, the process of developing key drivers—and
using them in combinations to generate a wide array of alternative
trajectories—forces analysts to think about the future in ways they never
would have contemplated if they relied only on intuition and their own
expert knowledge.

Identification and monitoring of indicators or signposts can provide early warning of the direction in which the future is heading, but these early
signs are not obvious. Indicators take on meaning only in the context of a
specific scenario with which they have been identified. The prior
identification of a scenario and associated indicators can create an
awareness that prepares the mind to recognize early signs of significant
change. Indicators are particularly useful in helping overcome Hindsight
Bias because they generate objective, preestablished lists that can be used
to capture an analyst’s actual thought process at an earlier stage of the
analysis. Similarly, indicators can mitigate the bias of assuming an outcome is inevitable: when the indicators the analyst expected never materialize, their absence signals that the judgment needs to be revisited.

Change sometimes happens so gradually that analysts do not notice it, or they rationalize it as not being of fundamental importance until it is too
obvious to ignore. Once analysts take a position on an issue, they typically
are slow to change their minds in response to new evidence. By going on
the record in advance to specify what actions or events would be
significant and might change their minds, analysts can avert this type of
rationalization.

Another benefit of scenarios is that they provide an efficient mechanism for communicating complex ideas. A scenario is a set of complex ideas
that can be described with a short label. These labels provide a lexicon for
thinking and communicating with other analysts and decision makers
about how a situation or a country is evolving.

“The central mission of intelligence analysis is to warn U.S. officials about dangers to national security interests and to alert
them to perceived openings to advance U.S. policy objectives.
Thus the bulk of analysts’ written and oral deliverables points
directly or indirectly to the existence, characteristics, and
implications of threats to and opportunities for U.S. national
security.”

—Jack Davis, “Strategic Warning,” Sherman Kent School for Intelligence Analysis, September 2001

Overview of Techniques
Scenarios Analysis identifies the multiple ways in which a situation might evolve. This form
of analysis can help decision makers develop plans to exploit whatever
opportunities might arise or avoid whatever risks the future may hold. Four
different techniques for generating scenarios are described in this chapter,
and a fifth was described previously in chapter 5, along with guidance on
when and how to use each technique. Simple Scenarios is a quick and easy
way for either an individual analyst or a small group of analysts to
generate scenarios. It starts with the current analytic line and then explores
other alternatives. The Cone of Plausibility works well with a small group
of experts who define a set of key drivers, establish a baseline scenario,
and then modify the drivers to create plausible alternative scenarios and
wild cards. Alternative Futures Analysis is a more systematic and
imaginative procedure that uses a group of experts, often including
decision makers and a trained facilitator. With some additional effort,
Multiple Scenarios Generation can handle a much larger number of
scenarios than Alternative Futures Analysis. It also requires a facilitator,
but the use of this technique can greatly reduce the chance that events
could play out in a way that was not at least foreseen as a possibility. A
fifth technique, Foresight Quadrant Crunching™, was developed in 2013
by Randy Pherson as a variation on Multiple Scenarios Generation and the
Key Assumptions Check. It adopts a different approach to generating
scenarios by flipping assumptions, and it is described along with its
companion technique, Classic Quadrant Crunching™, in chapter 5.

Indicators are a classic technique used to provide early warning of some future event
or validate what is being observed. Indicators are often paired with
scenarios to identify which of several possible scenarios is developing.
They are also used to measure change toward an undesirable condition,
such as political instability, or a desirable condition, such as economic
reform. Use Indicators whenever you need to track a specific situation to
monitor, detect, or evaluate change over time. The indicator list becomes
the basis for directing collection efforts and for routing relevant
information to all interested parties. It can also serve as the basis for your filing system to track emerging developments.

Indicators Validation is a process that helps analysts assess the diagnostic power of an indicator.
An indicator is most diagnostic when it clearly points to the likelihood of
only one scenario or hypothesis and suggests that the others are not likely.
Too frequently indicators are of limited value because they may be
consistent with several different outcomes or hypotheses.
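
The underlying test is simple enough to express in a few lines. Below is a minimal Python sketch; the indicators and scenario names are invented:

```python
# Which scenarios is each indicator consistent with? An indicator that
# points to a single scenario is highly diagnostic; one consistent with
# every scenario has little value.
consistency = {
    "troop movements near the border": {"Escalation"},
    "new back-channel talks": {"Negotiated settlement"},
    "currency volatility": {"Escalation", "Muddling through",
                            "Negotiated settlement"},
}

for indicator, scenarios in consistency.items():
    rating = "high" if len(scenarios) == 1 else "low"
    print(f"{indicator}: fits {len(scenarios)} scenario(s) -> {rating} diagnosticity")
```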

6.1 Scenarios Analysis


Identification and analysis of scenarios help to reduce uncertainties and
manage risk. By postulating different scenarios, analysts can identify the
multiple ways in which a situation might evolve. This process can help
decision makers develop plans to exploit whatever opportunities the future
may hold—or, conversely, to avoid risks. Monitoring of indicators keyed
to various scenarios can provide early warnings of the direction in which
the future may be heading.

Several techniques are available for developing and analyzing scenarios.


The first part of this section discusses when and why to use any form of
Scenarios Analysis. The subsections then discuss when and how to use
each of the four Scenarios Analysis techniques: Simple Scenarios, Cone of
Plausibility, Alternative Futures Analysis, and Multiple Scenarios
Generation.

When to Use It
Scenarios Analysis is most useful when a situation is complex or when the
outcomes are too uncertain to trust a single prediction. When decision
makers and analysts first come to grips with a new situation or challenge, a
degree of uncertainty always exists about how events will unfold. At this
point when national policies or long-term corporate strategies are in the
initial stages of formulation, Scenarios Analysis can have a strong impact
on decision makers’ thinking.

Scenarios do not predict the future, but a good set of scenarios bounds the
range of possible futures for which a decision maker may need to be prepared. Scenarios Analysis can also be used as a strategic planning tool
that brings decision makers and stakeholders together with experts to
envisage the alternative futures for which they must plan.2

The amount of time and effort required depends upon the specific
technique used. Simple Scenarios and Cone of Plausibility can be used by
an analyst working alone without any technical or methodological support,
although a group effort is preferred for most structured techniques. The
time required for Alternative Futures Analysis and Multiple Scenarios
Generation varies, but it can require a team of experts spending several
days working together on a project. Engaging a facilitator who is
knowledgeable about scenarios analysis is highly recommended, as this
will definitely save time and produce better results.

Value Added
When analysts are thinking about scenarios, they are rehearsing the future
so that decision makers can be prepared for whatever direction that future
takes. Instead of trying to estimate the most likely outcome and being
wrong more often than not, scenarios provide a framework for considering
multiple plausible futures. Trying to divine or predict a single outcome can
be a disservice to senior policy officials, decision makers, and other valued
clients. Generating several scenarios helps focus attention on the key
underlying forces and factors most likely to influence how a situation will
develop. Scenarios also can be used to examine assumptions and deliver
useful warning messages when high impact/low probability scenarios are
included in the exercise.

Analysts have learned from past experience that involving decision makers in a scenarios exercise is an effective way to communicate the
results of this technique and to sensitize them to important uncertainties.
Most participants find the process of developing scenarios as useful as any
written report or formal briefing. Those involved in the process often
benefit in several ways. Past experience has shown that scenarios can do
the following:

✶ Suggest indicators to monitor for signs that a particular future is becoming more or less likely.
✶ Help analysts and decision makers anticipate what would
otherwise be surprising developments by forcing them to challenge assumptions and consider plausible “wild card” scenarios or
discontinuous events.
✶ Produce an analytic framework for calculating the costs, risks, and
opportunities represented by different outcomes.
✶ Provide a means for weighing multiple unknown or unknowable
factors and presenting a set of plausible outcomes.
✶ Stimulate thinking about opportunities that can be exploited.
✶ Bound a problem by identifying plausible combinations of
uncertain factors.

When decision makers or analysts from different disciplines or organizational cultures are included on the team, new insights invariably
emerge as new relevant information and competing perspectives are
introduced. Analysts from outside the organizational culture of a particular
analytic unit or team are likely to see a problem in different ways. They
are likely to challenge key working assumptions and established mental
models of the analytic unit and avoid the trap of expecting only
incremental change. Involving decision makers, or at least a few
individuals who work in the office of the ultimate client or decision maker,
can also bring invaluable perspective and practical insights to the process.

When analysts are required to look well into the future, they usually find it
extremely difficult to do a simple, straight-line projection given the high
number of variables that need to be considered. By changing the “analytic
lens” through which the future is to be foretold, analysts are also forced to
reevaluate their assumptions about the priority order of key factors driving
the issue. By pairing the key drivers to create sets of mutually exclusive
scenarios, the technique helps analysts think about the situation from
sometimes counterintuitive perspectives, often generating several
unexpected and dramatically different potential future worlds. By
engaging in a multifaceted and systematic examination of an issue,
analysts also create a more comprehensive set of alternative futures. This
enables them to maintain records about each alternative and track the
potential for change, thus gaining greater confidence in their overall
assessment.

Potential Pitfalls
Scenarios Analysis exercises will most likely fail if the group conducting the exercise is not highly diverse; the group should include representatives from a variety of disciplines, organizations, and even cultures to avoid the trap of groupthink. Another frequent pitfall arises when participants in a scenarios
exercise have difficulty thinking outside of their comfort zone, resisting
instructions to look far into the future or to explore or suggest concepts
that do not fall within their area of expertise. Techniques that have worked
well to pry exercise participants out of such analytic ruts are (1) to define a
time period for the estimate (such as five or ten years) that one cannot
easily extrapolate from current events, or (2) to post a list of concepts or
categories (such as social, economic, environmental, political, and
technical) to stimulate thinking about an issue from different perspectives.
The analysts involved in the process should also have a thorough
understanding of the subject matter and possess the conceptual skills
necessary to select the key drivers and assumptions that are likely to
remain valid throughout the period of the assessment.

6.1.1 The Method: Simple Scenarios


Of the four scenario techniques described in this chapter, Simple Scenarios is the easiest one to use. Like the Cone of Plausibility, it can be implemented by an analyst working alone rather than in a group or a team, and it requires no coach or facilitator. On
the other hand, it is less systematic than the others, and the results may be
less than optimal, especially if the work is done by an individual rather
than a group. Here are the steps for using this technique:

✶ Clearly define the focal issue and the specific goals of the futures
exercise.
✶ Make a list of forces, factors, and events that are likely to
influence the future.
✶ Organize the forces, factors, and events that are related to one
another into five to ten affinity groups that are expected to be the
driving forces in how the focal issue will evolve.
✶ Label each of these drivers and write a brief description of each.
For example, one training exercise for this technique is to forecast the
future of the fictional country of Caldonia by identifying and
describing six drivers. Generate a matrix, as shown in Figure 6.1.1,
with a list of drivers down the left side. The columns of the matrix are
used to describe scenarios. Each scenario is assigned a value for each
driver. The values are strong or positive (+), weak or negative (–),
and blank if neutral or no change.
– Government effectiveness: To what extent does the
government exert control over all populated regions of the
country and effectively deliver services?
– Economy: Does the economy sustain a positive growth rate?
– Civil society: Can nongovernmental and local institutions
provide appropriate services and security to the population?
– Insurgency: Does the insurgency pose a viable threat to the
government? Is it able to extend its dominion over greater
portions of the country?
– Drug trade: Is there a robust drug-trafficking economy?
– Foreign influence: Do foreign governments, international
financial organizations, or nongovernmental organizations
provide military or economic assistance to the government?
✶ Generate at least four different scenarios—a best case, a worst
case, a mainline case, and at least one other—by assigning different
values (+, –, or blank) to each driver.
✶ Reconsider the list of drivers and the selected scenarios. Is there a
better way to conceptualize and describe the drivers? Are there
important forces that have not been included? Look across the matrix
to see the extent to which each driver discriminates among the
scenarios. If a driver has the same value across all scenarios, it is not
discriminating and should be deleted. To stimulate thinking about
other possible scenarios, consider the key assumptions that were
made in deciding on the most likely scenario. What if some of these
assumptions turn out to be invalid? If they are invalid, how might that
affect the outcome, and are such outcomes included within the
available set of scenarios?
✶ For each scenario, write a one-page story to describe what that
future looks like and/or how it might come about. The story should
illustrate the interplay of the drivers.
✶ For each scenario, describe the implications for the decision
maker.
✶ Generate and validate a list of indicators, or “observables,” for
each scenario that would help you discover that events are starting to
play out in a way envisioned by that scenario.
✶ Monitor the list of indicators on a regular basis.
✶ Report periodically on which scenario appears to be emerging and
why.

Figure 6.1.1 Simple Scenarios

Source: 2009 Pherson Associates, LLC.
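
A minimal Python sketch of the driver-by-scenario matrix in Figure 6.1.1 follows; the driver values assigned to each scenario are invented for illustration. It also applies the deletion rule above: a driver that takes the same value in every scenario does not discriminate:

```python
drivers = ["Government effectiveness", "Economy", "Civil society",
           "Insurgency", "Drug trade", "Foreign influence"]

# "+" strong or positive, "-" weak or negative, "" neutral or no change.
scenarios = {
    "Best case":  {"Government effectiveness": "+", "Economy": "+", "Insurgency": "-"},
    "Worst case": {"Government effectiveness": "-", "Economy": "-", "Insurgency": "+"},
    "Mainline":   {"Economy": "+", "Drug trade": "+"},
}

for driver in drivers:
    values = {s.get(driver, "") for s in scenarios.values()}
    if len(values) == 1:  # same value everywhere -> not discriminating
        print(f"'{driver}' does not discriminate and should be deleted")
```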

6.1.2 The Method: Cone of Plausibility


The Cone of Plausibility is a structured process using key drivers and
assumptions to generate a range of plausible alternative scenarios that help
analysts and decision makers imagine various futures and their effects.3 It
is valuable in understanding the drivers that are shaping current and future
events and as a tool for strategic warning. It can also be used to explore
how well or how poorly events might unfold and thereby bound the range
of possibilities for the decision maker.

The steps in using the technique are as follows (see Figure 6.1.2):

✶ Convene a small group of experts with some diversity of
background. Define the issue to be examined and set the time frame
of the assessment. A common question to ask is: What will X (e.g., a
country, regime, issue) look like in Y (e.g., two months, five years,
twenty years)?
✶ Identify the drivers that are key factors or forces and thus most
useful in defining the issue and shaping the current environment.
Analysts in various fields have created mnemonics to guide their
analysis of key drivers. One of the most common is PEST, which
signifies political, economic, social, or technological variables. Other
analysts have added legal, military, environmental, psychological, or
demographic factors to form abbreviations such as STEEP,
STEEPLE, STEEPLED, PESTLE, or STEEP + 2.4
✶ Write each driver as a neutral statement that can be expected to
remain generally valid throughout the period of the
assessment. For example, write “the economy,” not “declining
economic growth.” Be sure you are listing true drivers and not just
describing important players or factors relevant to the situation. The
technique works best when four to seven drivers are generated.
✶ Make assumptions about how the drivers are most likely to play
out over the time frame of the assessment. Be as specific as possible;
for example, say “the economy will grow 2 to 4 percent annually over
the next five years,” not simply that “the economy will improve.”
Generate only one assumption per driver.
✶ Generate a baseline scenario based on the list of key drivers and
key assumptions. This is often a projection from the current situation
forward adjusted by the assumptions you are making about future
behavior. The scenario assumes that the drivers and their descriptions
will remain valid throughout the period. Write the scenario as a future
that has come to pass and describe how it came about. Construct one
to three alternative scenarios by changing an assumption or several of
the assumptions that you made in your initial list. Often it is best to
start by looking at those assumptions that appear least likely to
remain true. Consider the impact that change is likely to have on the
baseline scenario and describe this new end point and how it came
about. Also consider what impact changing one assumption would
have on the other assumptions on the list.
✶ We would recommend making at least one of these alternative
scenarios an opportunities scenario, illustrating how a positive
outcome that is significantly better than the current situation could
plausibly be achieved. Often it is also desirable to develop a scenario
that captures the full extent of the downside risk.
✶ Generate a possible wild card scenario by radically changing the
assumption that you judge as the least likely to change. This should
produce a High Impact/Low Probability scenario (see chapter 9) that
might not otherwise have been considered.

Figure 6.1.2 Cone of Plausibility
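As a rough illustration of the bookkeeping behind these steps, the Python sketch below pairs each driver with a single assumption, builds a baseline that keeps every assumption, and derives alternative and wild card scenarios by changing assumptions. Only the economy assumption echoes the example in the text; the other drivers, assumptions, and changes are invented.

```python
# A hedged sketch of Cone of Plausibility bookkeeping: one assumption
# per driver, a baseline scenario that keeps every assumption, and
# alternatives produced by changing selected assumptions. Only the
# economy assumption comes from the text; the rest are illustrative.

baseline_assumptions = {
    "The economy": "grows 2 to 4 percent annually over the next five years",
    "The insurgency": "remains active but contained to rural areas",
    "Foreign influence": "continues at current levels of assistance",
}

def scenario(assumptions, changes=None):
    """Return a scenario: the full assumption set with any changes applied."""
    merged = dict(assumptions)
    merged.update(changes or {})
    return merged

baseline = scenario(baseline_assumptions)

# Alternative scenario: change the assumption least likely to remain true.
alternative = scenario(baseline_assumptions,
                       {"The economy": "contracts sharply after a debt crisis"})

# Wild card: radically change the assumption least likely to change,
# producing a High Impact/Low Probability scenario.
wild_card = scenario(baseline_assumptions,
                     {"Foreign influence": "all external assistance is cut off"})

for name, s in [("Baseline", baseline), ("Alternative", alternative),
                ("Wild card", wild_card)]:
    print(name, "->", s)
```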

6.1.3 The Method: Alternative Futures Analysis
Alternative Futures Analysis and Multiple Scenarios Generation (the next
technique described below) differ from the first two techniques in that they
are usually larger projects that rely on a group of experts, often including
decision makers, academics, and other outside experts. They use a more
systematic process, and usually require the assistance of a knowledgeable
facilitator.

Alternative Futures Analysis differs from Multiple Scenarios Generation
only in the number of scenarios that are analyzed. For reasons noted
below, Alternative Futures Analysis is limited to two driving forces. Each
driving force is a spectrum with two extremes, and these drivers combine
to make four possible scenarios. Multiple Scenarios Generation has no
such limitation other than the practical limitations of time and complexity.

The steps in the Alternative Futures Analysis process are:

✶ Clearly define the focal issue and the specific goals of the futures
exercise.
✶ Brainstorm to identify the key forces, factors, or events that are
most likely to influence how the issue will develop over a specified
time period.
✶ If possible, group these various forces, factors, or events to form
two critical drivers that are expected to determine the future outcome.
In the example on the future of Cuba (Figure 6.1.3), the two key
drivers are Effectiveness of Government and Strength of Civil
Society. If there are more than two critical drivers, do not use this
technique—use the Multiple Scenarios Generation technique, which
can handle a larger number of scenarios.
✶ As in the Cuba example, define the two ends of the spectrum for
each driver.
✶ Draw a 2 × 2 matrix. Label the two ends of the spectrum for each
driver.
✶ Note that the square is now divided into four quadrants. Each
quadrant represents a scenario generated by a combination of the two
drivers. Now give a name to each scenario, and write it in the relevant
quadrant.
✶ Generate a narrative story of how each hypothetical scenario might
come into existence. Include a hypothetical chronology of key dates
and events for each scenario.
✶ Describe the implications of each scenario, should it be what
actually develops.
✶ Generate and validate a list of indicators, or “observables,” for
each scenario that would help determine whether events are starting
to play out in a way envisioned by that scenario.
✶ Monitor the list of indicators on a regular basis.
✶ Report periodically on which scenario appears to be emerging and
why.

Figure 6.1.3 Alternative Futures Analysis: Cuba

Source: 2009 Pherson Associates, LLC.
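The 2 × 2 mechanics can be sketched in a few lines of Python. The two drivers are taken from the Cuba example; the spectrum end points are assumptions added here for illustration.

```python
from itertools import product

# Sketch of the Alternative Futures Analysis mechanics: two drivers,
# each a spectrum with two extremes, combining into four scenarios.
# Driver names follow the Cuba example; the end points are assumed.

drivers = {
    "Effectiveness of Government": ("effective", "ineffective"),
    "Strength of Civil Society":   ("strong", "weak"),
}

# Each combination of one end point per driver defines one quadrant.
for ends in product(*drivers.values()):
    quadrant = dict(zip(drivers, ends))
    print(quadrant)   # four scenarios in all; each would then be named
```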

6.1.4 The Method: Multiple Scenarios Generation


Multiple Scenarios Generation is similar to Alternative Futures Analysis
(described above) except that with this technique, you are not limited to
two critical drivers generating four scenarios. By using multiple 2 × 2
matrices pairing every possible combination of multiple driving forces,
you can create a large number of possible scenarios. This is often desirable
to make sure nothing has been overlooked. Once generated, the scenarios
can be screened quickly without detailed analysis of each one. Once
sensitized to these different scenarios, analysts are more likely to pay
attention to outlying data that would suggest that events are playing out in
a way not previously imagined.

Training and an experienced facilitator are needed to use this technique.


Here are the basic steps:

✶ Clearly define the focal issue and the specific goals of the futures
exercise.
✶ Brainstorm to identify the key forces, factors, or events that are
most likely to influence how the issue will develop over a specified
time period.
✶ Define the two ends of the spectrum for each driver.
✶ Pair the drivers in a series of 2 × 2 matrices.
✶ Develop a story or two for each quadrant of each 2 × 2 matrix.
✶ From all the scenarios generated, select those most deserving of
attention because they illustrate compelling and challenging futures
not yet being considered.
✶ Develop and validate indicators for each scenario that could be
tracked to determine whether or not the scenario is developing.
✶ Report periodically on which scenario appears to be emerging and
why.

The technique can be illustrated by exploring the focal question, “What is
the future of the insurgency in Iraq?” (See Figure 6.1.4a.) Here are the
steps:

✶ Convene a group of experts (including some creative thinkers who
can challenge the group’s mental model) to brainstorm the forces and
factors that are likely to determine the future of the insurgency in
Iraq.
✶ Select from this list those factors or drivers whose outcome is the
hardest to predict or for which analysts cannot confidently assess how
the driver will influence future events. In the Iraq example, three
drivers meet these criteria:
– The role of neighboring states (Iran, Syria)
– The capability of Iraq’s security services (police, military)
– The political environment in Iraq
✶ Define the ends of the spectrum for each driver. For example, the
neighboring state could be stable and supportive at one end and
unstable and disruptive at the other end of the spectrum.
✶ Pair the drivers in a series of 2 × 2 matrices, as shown in Figure
6.1.4a.
✶ Develop a story or a couple of stories describing how events might
unfold for each quadrant of each 2 × 2 matrix. For example, in the 2 ×
2 matrix that is defined by the role of neighboring states and the
capability of Iraq’s security forces, analysts would be tasked with
describing how the insurgency would function in each quadrant on
the basis of the criteria defined at the far end of each spectrum. In the
upper-left quadrant, the criteria would be stable and supportive
neighboring states but ineffective internal security capabilities. (See
Figure 6.1.4b.) In this “world,” one might imagine a regional defense
umbrella that would help to secure the borders. Another possibility is
that the neighboring states would have the Shiites and Kurds under
control, with the only remaining insurgents Sunnis, who continue to
harass the Shia-led central government.
✶ Review all the stories generated, and select those most deserving
of attention. For example, which scenario
– Presents the greatest challenges to Iraqi and U.S. decision
makers?
– Raises particular concerns that have not been anticipated?
– Surfaces new dynamics that should be addressed?
– Suggests new collection needs?
✶ Select a few scenarios that might be described as “wild cards”
(low probability/high impact developments) or “nightmare scenarios”
(see Figure 6.1.4c).
✶ Consider what decision makers might do to prevent bad scenarios
from occurring or enable good scenarios to develop.
✶ Generate and validate a list of key indicators to help monitor
which scenario story best describes how events are beginning to play
out.
✶ Report periodically on which scenario appears to be emerging and
why.

Figure 6.1.4a Multiple Scenarios Generation: Future of the Iraq Insurgency

Figure 6.1.4b Future of the Iraq Insurgency: Using Spectrums to Define
Potential Outcomes

Figure 6.1.4c Selecting Attention-Deserving and Nightmare Scenarios
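For readers who want the pairing mechanics laid out, the Python sketch below enumerates every 2 × 2 matrix and its quadrants for the three Iraq drivers above. Only the neighboring-states end points come from the text; the other spectrum end points are assumptions for illustration.

```python
from itertools import combinations, product

# Sketch of Multiple Scenarios Generation: pair every combination of
# drivers in a 2 x 2 matrix and enumerate the four quadrants of each.
# Only the neighboring-states end points come from the text; the other
# spectrum end points are assumed.

drivers = {
    "Role of neighboring states": ("stable and supportive",
                                   "unstable and disruptive"),
    "Capability of security services": ("effective", "ineffective"),
    "Political environment": ("inclusive", "fragmented"),
}

for d1, d2 in combinations(drivers, 2):             # every 2 x 2 matrix
    print(f"Matrix: {d1} x {d2}")
    for ends in product(drivers[d1], drivers[d2]):  # its four quadrants
        print("  Quadrant:", dict(zip((d1, d2), ends)))

# Three drivers yield three matrices and twelve candidate scenarios,
# which would then be screened for the most attention-deserving stories.
```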

Relationship to Other Techniques


Foresight Quadrant Crunching™ is a variation of Multiple Scenarios
Generation that employs a different method of generating scenarios by
flipping assumptions. It and its companion technique, Classic Quadrant
Crunching™, are described in detail in chapter 5. All three techniques are
specific applications of Morphological Analysis, also described in chapter
5.

Any scenarios analysis might be followed by constructing a Cross-Impact
Matrix to identify and analyze potential interactions or feedback loops
between the various driving forces in each scenario.

Origins of This Technique


Scenarios Analysis is a broad concept that can be implemented in various
ways for a variety of purposes. The four variations of Scenarios Analysis
that seem most useful for intelligence and business analysis were selected
for description in this book. The model of Simple Scenarios was developed
by Pherson Associates, LLC. Cone of Plausibility is a well-established
technique used by intelligence analysts in several countries. Alternative
Futures Analysis and Multiple Scenarios Generation were previously
described in Randolph H. Pherson, Handbook of Analytic Tools and
Techniques (Reston, VA: Pherson Associates, LLC, 2008). Foresight
Quadrant Crunching™ was developed by Randy Pherson in 2013 to meet
a specific analytic need. Its companion technique, Classic Quadrant
Crunching™, was first published in Randolph H. Pherson, Handbook of
Analytic Tools and Techniques (Reston, VA: Pherson Associates, LLC,
2008).

For information on other approaches to Scenarios Analysis, see Andy
Hines, “The Current State of Scenario Development: An Overview of
Techniques,” Foresight 9, no. 1 (March 2007). The Multiple Scenarios
Generation illustrations are drawn from a report prepared by Alan
Schwartz (PolicyFutures, LLC), “Scenarios for the Insurgency in Iraq,”
Special Report 174 (Washington, DC: United States Institute of Peace,
October 2006).

“It is important that we think deeply and creatively about the
future, or else we run the risk of being surprised and unprepared.
At the same time, the future is uncertain, so we must prepare for
multiple plausible futures, not just the one we expect to happen.
Scenarios contain the stories of these multiple futures, from the
expected to the wildcard, in forms that are analytically coherent
and imaginatively engaging. A good scenario grabs us by the collar
and says, ‘Take a good look at this future. This could be your
future. Are you going to be ready?’”

—Andy Hines, “The Current State of Scenario Development,”
Foresight (March 2007)

6.2 Indicators
Indicators are observable phenomena that can be periodically reviewed to
help track events, spot emerging trends, and warn of unanticipated
changes. An indicator list is a preestablished set of observable or
potentially observable actions, conditions, facts, or events whose
simultaneous occurrence would argue strongly that a phenomenon is
present or is highly likely to occur. Indicators can be monitored to obtain
tactical, operational, or strategic warnings of some future development
that, if it were to occur, would have a major impact.

The identification and monitoring of indicators are fundamental tasks of
analysis, as they are the principal means of avoiding surprise. When used
in intelligence analysis, they usually are forward looking and are often
described as predictive indicators. In the law enforcement community,
indicators are more often used to assess whether a target’s activities or
behavior are consistent with an established pattern. These indicators look
backward and are often described as descriptive indicators.

When to Use It
Indicators provide an objective baseline for tracking events, instilling rigor
into the analytic process, and enhancing the credibility of the final product.
The indicator list can become the basis for conducting an investigation or
directing collection efforts and routing relevant information to all
interested parties. It can also serve as the basis for the analyst’s filing
system to track how events are developing.

Maintaining separate indicator lists for alternative scenarios or hypotheses
is useful when making a case that a certain event is likely or unlikely to
happen. This is particularly appropriate when conducting a What If?
Analysis or a High Impact/Low Probability Analysis.

Descriptive indicators are best used to help the analyst assess whether
there are sufficient grounds to believe that a specific action is taking place.
They provide a systematic way to validate a hypothesis or help
substantiate an emerging viewpoint. Figure 6.2a is an example of a list of
descriptive indicators, in this case pointing to a clandestine drug
laboratory.

A classic application of predictive indicators is to seek early warning of
some undesirable event, such as a military attack or a nuclear test by a
foreign country. Today, indicators are often paired with scenarios to
identify which of several possible scenarios is developing. They are also
used to measure change that points toward an undesirable condition, such
as political instability or an economic slowdown, or toward a desirable
condition, such as economic reform or the potential for market growth.
Analysts can use this technique whenever they need to track a specific
situation to monitor, detect, or evaluate change over time. In the private
sector, indicators are used to track whether a new business strategy is
working or whether a low-probability scenario is developing that offers
new commercial opportunities.

Figure 6.2a Descriptive Indicators of a Clandestine Drug Laboratory

Source: Pamphlet from ALERT Unit, New Jersey State Police, 1990;
republished in The Community Model, Counterdrug Intelligence
Coordinating Group, 2003.

Value Added
The human mind sometimes sees what it expects to see and can overlook
the unexpected. Identification of indicators creates an awareness that
prepares the mind to recognize early signs of significant change. Change
often happens so gradually that analysts do not see it, or they rationalize it
as not being of fundamental importance until it is too obvious to ignore.
Once analysts take a position on an issue, they can be reluctant to change
their minds in response to new evidence. By specifying in advance the
threshold for what actions or events would be significant and might cause
them to change their minds, analysts can seek to avoid this type of
rationalization.

Defining explicit criteria for tracking and judging the course of events
makes the analytic process more visible and available for scrutiny by
others, thus enhancing the credibility of analytic judgments. Including an
indicator list in the finished product helps decision makers track future
developments and builds a more concrete case for the analytic conclusions.

Preparation of a detailed indicator list by a group of knowledgeable
analysts is usually a good learning experience for all participants. It can be
a useful medium for an exchange of knowledge between analysts from
different organizations or those with different types of expertise—for
example, analysts who specialize in a particular country and those who are
knowledgeable about a particular field, such as military mobilization,
political instability, or economic development.

When analysts or decision makers are sharply divided over (1) the
interpretation of events (for example, political dynamics in Iran or how the
conflict in Syria is progressing), (2) the guilt or innocence of a “person of
interest,” or (3) the culpability of a counterintelligence suspect, indicators
can help depersonalize the debate by shifting attention away from personal
viewpoints to more objective criteria. Emotions often can be diffused and
substantive disagreements clarified if all parties agree in advance on a set
of criteria that would demonstrate that developments are—or are not—
moving in a particular direction or that a person’s behavior suggests that
he or she is guilty as suspected or is indeed a spy.

Indicators help counteract Hindsight Bias because they provide a written
record that more accurately reflects what the analyst was actually thinking
at the time rather than relying on that person’s memory. The process of
developing indicators also forces the analyst to reflect and explore all that
might be required for a specific event to occur. The process can also
ensure greater objectivity if two sets of indicators are developed: one
showing that the scenario is emerging and another showing that it is
not.

The Method
✶ The first step in using this technique is to create a list of indicators.
Developing the indicator list can range from a simple process to a
sophisticated team effort. For example, with minimum effort you
could jot down a list of things you would expect to see if a particular
situation were to develop as feared or foreseen. Or you could join
with others to define multiple variables that would influence a
situation and then rank the value of each variable based on incoming
information about relevant events, activities, or official statements. In
both cases, some form of brainstorming, hypothesis generation, or
scenario development is often used to identify the indicators.
✶ Review and refine the list, discarding any indicators that are
duplicative and combining those that are similar. (See Figure 6.2b for
a sample indicator list.)
✶ Examine each indicator to determine if it meets the following five
criteria. Discard those that are found wanting. The following criteria
are arranged along a continuum, from essential to highly desirable:
– Observable and collectible: There must be some reasonable
expectation that, if present, the indicator will be observed and
reported by a reliable source. If an indicator is to monitor change
over time, it must be collectible over time.
– Valid: An indicator must be clearly relevant to the end state
the analyst is trying to predict or assess, and it must be
inconsistent with all or at least some of the alternative
explanations or outcomes. It must accurately measure the
concept or phenomenon at issue.
– Reliable: Data collection must be consistent when
comparable methods are used. Those observing and collecting
data must observe the same things. Reliability requires precise
definition of the indicators.
– Stable: An indicator must be useful over time to allow
comparisons and to track events. Ideally, the indicator should be
observable early in the evolution of a development so that
analysts and decision makers have time to react accordingly.
– Unique: An indicator should measure only one thing and, in
combination with other indicators, point only to the phenomenon
being studied. Valuable indicators are those that are not only
consistent with a specified scenario or hypothesis but are also
inconsistent with alternative scenarios or hypotheses. The
Indicators Validator™ tool, described later in this chapter, can
be used to check the diagnosticity of indicators.
✶ Monitor the indicator lists regularly to chart trends and detect
signs of change.

Figure 6.2b Using Indicators to Track Emerging Scenarios in Zambria

Source: 2009 Pherson Associates, LLC.

Any indicator list used to monitor whether something has happened, is
happening, or will happen implies at least one alternative scenario or
hypothesis—that it has not happened, is not happening, or will not happen.
Many indicators that a scenario or hypothesis is happening are just the
opposite of indicators that it is not happening, but some are not. Some are
consistent with two or more scenarios or hypotheses. Therefore, an analyst
should prepare separate lists of indicators for each scenario or hypothesis.
For example, consider indicators of an opponent’s preparations for a
military attack where there may be three hypotheses—no attack, attack,
and feigned intent to attack with the goal of forcing a favorable negotiated
solution. Almost all indicators of an imminent attack are also consistent
with the hypothesis of a feigned attack. The analyst must identify
indicators capable of diagnosing the difference between true intent to
attack and feigned intent to attack. The mobilization of reserves is such a
diagnostic indicator. It is so costly that it is not usually undertaken unless
there is a strong presumption that the reserves will be needed.

After creating the indicator list or lists, the analyst or analytic team should
regularly review incoming reporting and note any changes in the
indicators. To the extent possible, the analyst or the team should decide
well in advance which critical indicators, if observed, will serve as early-
warning decision points. In other words, if a certain indicator or set of
indicators is observed, it will trigger a report advising of some
modification in the analysts’ appraisal of the situation.

Techniques for increasing the sophistication and credibility of an indicator
list include the following:

✶ Establishing a scale for rating each indicator
✶ Providing specific definitions of each indicator
✶ Rating the indicators on a scheduled basis (e.g., monthly,
quarterly, or annually)
✶ Assigning a level of confidence to each rating
✶ Providing a narrative description for each point on the rating scale,
describing what one would expect to observe at that level
✶ Listing the sources of information used in generating the rating

Figure 6.2c is an example of a complex indicators chart that incorporates
the first three techniques listed above.
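A minimal data-model sketch, in Python, of what such a chart records per indicator; the field names, the 1-to-5 scale, and the example values are assumptions for illustration, not the book's format.

```python
from dataclasses import dataclass

# Hedged sketch of the record-keeping behind a sophisticated indicator
# list: a defined rating scale, a precise definition per indicator,
# scheduled ratings with confidence levels, and sources. All field
# names and example values are illustrative assumptions.

@dataclass
class IndicatorRating:
    date: str        # rating period, e.g., "2024-03" for monthly ratings
    level: int       # position on the agreed scale, e.g., 1 (low) to 5 (high)
    confidence: str  # analyst confidence in the rating: low/medium/high
    sources: list    # reporting used in generating the rating

@dataclass
class Indicator:
    name: str
    definition: str        # precise definition supports reliability
    scale_narrative: dict  # what one would expect to observe at each level
    ratings: list          # IndicatorRating entries accumulated over time

example = Indicator(
    name="Street demonstrations",
    definition="Protests of more than 1,000 people in the capital",
    scale_narrative={1: "none observed", 5: "daily, citywide protests"},
    ratings=[IndicatorRating("2024-03", 2, "medium", ["local press"])],
)
```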

Potential Pitfalls
The quality of indicators is critical, as poor indicators lead to analytic
failure. For these reasons, analysts must periodically review the validity
and relevance of an indicator list. Narrowly conceived or outdated
indicators can reinforce analytic bias, encourage analysts to discard new
evidence, and lull consumers of information inappropriately. Indicators
can also prove to be invalid over time, or they may turn out to be poor
“pointers” to what they were supposed to show. By regularly checking the
validity of the indicators, analysts may also discover that their original
assumptions were flawed. Finally, if an opponent learns what indicators
are on your list, the opponent may make operational changes to conceal
what you are looking for or arrange for you to see contrary indicators.

Figure 6.2c Zambria Political Instability Indicators

Relationship to Other Techniques
Indicators are closely related to a number of other techniques. Some form
of brainstorming is commonly used in creating indicators to draw upon
the expertise of multiple analysts with different perspectives and
different specialties. The development of alternative scenarios should always
involve the development and monitoring of indicators that point toward
which scenario is evolving. What If? Analysis and High Impact/Low
Probability Analysis depend upon the development and use of indicators.
Indicators are often entered as items of relevant information in
Te@mACH®, as discussed in chapter 7 on Hypothesis Generation and
Testing.

The Indicators Validator™, which is discussed in the next section, is a tool
used to test the diagnosticity of indicators.

Origins of This Technique


The identification and monitoring of indicators of military attack is one of
the oldest forms of intelligence analysis. The discussion here is based on
Randolph H. Pherson, “Indicators,” in Handbook of Analytic Tools and
Techniques (Reston, VA: Pherson Associates, LLC, 2008); and Pherson,
The Indicators Handbook (Reston, VA: Pherson Associates, LLC, 2008).
Cynthia M. Grabo’s book Anticipating Surprise: Analysis for Strategic
Warning (Lanham, MD: University Press of America, 2004) is a classic
text on the development and use of indicators.

6.3 Indicators Validator™


Indicators Validator™ is a process for assessing the diagnostic power of
an indicator. It was designed specifically to help analysts validate their
indicators in the most efficient way possible.

When to Use It
The Indicators Validator™ is an essential tool to use when developing
indicators for competing hypotheses or alternative scenarios. (See Figure
6.3a.) Once an analyst has developed a set of alternative scenarios or
competing hypotheses, the next step is to generate indicators for each
scenario (or hypothesis) that would appear if that particular world or
hypothesis were beginning to emerge or prove accurate. A critical question
that is not often asked is whether a given indicator would appear only in
the scenario to which it is assigned or also in one or more alternative
scenarios or hypotheses. Indicators that could appear in several scenarios
or hypotheses are not considered diagnostic, suggesting that they may not
be particularly useful in determining whether a specific scenario or a
particular hypothesis is true. The ideal indicator is highly likely or
consistent for the scenario or hypothesis to which it is assigned and highly
unlikely or inconsistent for all other alternatives.

Figure 6.3a Indicators Validator™ Model

Source: 2008 Pherson Associates, LLC.

Value Added
Employing the Indicators Validator™ to identify and dismiss
nondiagnostic indicators can significantly increase the credibility of an
analysis. By applying the tool, analysts can rank order their indicators
from most to least diagnostic and decide how far up the list they want to
draw the line in selecting the indicators that will be used in the analysis. In
some circumstances, analysts might discover that most or all the indicators
for a given scenario have been eliminated because they are also consistent
with other scenarios, forcing them to brainstorm a new and better set of
indicators. If analysts find it difficult to generate independent lists of
diagnostic indicators for two scenarios, it may be that the scenarios are not
sufficiently dissimilar, suggesting that they should be combined.

Indicators Validation can help overcome mindsets by showing analysts
how a set of indicators that point to one scenario may also point to others.
It can also show how some indicators, initially perceived to be useful or
diagnostic, may not be. By placing an indicator in a broader context
against multiple scenarios, the technique helps analysts focus on which
one(s) are actually useful and diagnostic instead of simply supporting a
particular scenario. Indicators Validation also helps guard against
premature closure by requiring an analyst to track whether the indicators
previously developed for a particular scenario are actually emerging.

The Method
The first step is to fill out a matrix similar to that used for Analysis of
Competing Hypotheses. This can be done manually or by using the
Indicators Validator™ software (see Figure 6.3b). The matrix should:

✶ List the alternative scenarios along the top of the matrix (as is
done for hypotheses in Analysis of Competing Hypotheses)
✶ List indicators that have already been generated for all the
scenarios down the left side of the matrix (as is done with relevant
information in Analysis of Competing Hypotheses)
✶ In each cell of the matrix, assess whether the indicator for that
particular scenario is
– Highly Likely to appear
– Likely to appear
– Could appear
– Unlikely to appear
– Highly Unlikely to appear
✶ Indicators developed for a particular scenario, their home
scenario, should be rated either Highly Likely or Likely. If the
software is unavailable, you can do your own scoring. If the indicator
is Highly Likely in the home scenario, then in the other scenarios:
– Highly Likely is 0 points
– Likely is 1 point
– Could is 2 points
– Unlikely is 4 points
– Highly Unlikely is 6 points
✶ If the indicator is Likely in the home scenario, then in the other
scenarios:
– Highly Likely is 0 points
– Likely is 0 points
– Could is 1 point
– Unlikely is 3 points
– Highly Unlikely is 5 points
✶ Tally up the scores across each row.
✶ Once this process is complete, re-sort the indicators so that the
most discriminating indicators are displayed at the top of the matrix
and the least discriminating indicators at the bottom.
– The most discriminating indicator is “Highly Likely” to
emerge in one scenario and “Highly Unlikely” to emerge in all
other scenarios.
– The least discriminating indicator is “Highly Likely” to
appear in all scenarios.
– Most indicators will fall somewhere in between.
✶ The indicators with the most Highly Unlikely and Unlikely ratings
are the most discriminating.
✶ Review where analysts differ in their assessments and decide if
adjustments are needed in their ratings. Often, differences in how
analysts rate a particular indicator can be traced back to different
assumptions they made about the scenario when doing the ratings.
✶ Use your judgment as to whether you should retain or discard
indicators that have no Unlikely or Highly Unlikely ratings. In some
cases, an indicator may be worth keeping if it is useful when viewed
in combination with a cluster of indicators.
✶ Once nonhelpful indicators have been eliminated, regroup the
indicators under their home scenario.
✶ If a large number of diagnostic indicators for a particular scenario
have been eliminated, develop additional—and more diagnostic—
indicators for that scenario.
✶ Recheck the diagnostic value of any new indicators by applying
the Indicators Validator™ to them as well.

Figure 6.3b Indicators Validator™ Process

Source: 2013 Globalytica, LLC.
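The scoring just described is mechanical enough to sketch in Python. The point values are those given above; the indicators, their ratings, and the number of scenarios are invented for illustration (the reserves example echoes the diagnostic-indicator discussion in section 6.2).

```python
# Sketch of the Indicators Validator(TM) scoring above. Abbreviations:
# HL = Highly Likely, L = Likely, C = Could, U = Unlikely,
# HU = Highly Unlikely. Point values are from the text; the sample
# indicators and their ratings are invented.

POINTS = {
    "HL": {"HL": 0, "L": 1, "C": 2, "U": 4, "HU": 6},  # home rating is HL
    "L":  {"HL": 0, "L": 0, "C": 1, "U": 3, "HU": 5},  # home rating is L
}

def row_score(home_rating, other_ratings):
    """Tally one row: points for the indicator's ratings in every
    scenario other than its home scenario."""
    return sum(POINTS[home_rating][r] for r in other_ratings)

# (indicator, rating in home scenario, ratings in the other scenarios)
indicators = [
    ("Reserves mobilized",       "HL", ["HU", "HU"]),
    ("Troops moved to border",   "HL", ["L", "C"]),
    ("Hostile press statements", "L",  ["HL", "HL"]),
]

# Re-sort so the most discriminating indicators rise to the top.
for name, home, others in sorted(indicators,
                                 key=lambda r: row_score(r[1], r[2]),
                                 reverse=True):
    print(f"{row_score(home, others):2d}  {name}")
```

In this toy example the mobilization indicator scores highest because it is Highly Unlikely in every other scenario, while the press-statements indicator scores zero and would be a candidate for discarding or regrouping.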

Potential Pitfalls
Serious thought should be given before discarding indicators that are
determined to be nondiagnostic. For example, an indicator might not have
diagnostic value on its own but be helpful when viewed as part of a cluster
of indicators. An indicator that a terrorist group “purchased guns” would
not be diagnostic in determining which of the following scenarios were
likely to happen: armed attack, hostage taking, or kidnapping; but knowing
that guns had been purchased could be critical in pointing to an intent to
commit an act of violence or even to warn of the imminence of the event.

Another argument for not discarding nondiagnostic indicators is that
maintaining and publishing such a list could prove valuable to collectors.
If analysts initially believed the indicators would be helpful in determining
whether a specific scenario was emerging, then collectors and other
analysts working the issue or a similar issue might come to the same
conclusion. For these reasons, facilitators of the Indicators Validator™
technique believe that the list of nondiagnostic indicators should also be
published to alert other analysts and collectors to the possibility that they
might make the same mistake.

Origins of This Technique


This technique was developed by Randy Pherson, Grace Scarborough,
Alan Schwartz, and Sarah Beebe, Pherson Associates, LLC. It was first
published in Randolph H. Pherson, Handbook of Analytic Tools and
Techniques (Reston, VA: Pherson Associates, LLC, 2008). For
information on other approaches to Scenarios Analysis, see Andy Hines,
“The Current State of Scenario Development: An Overview of
Techniques,” Foresight 9, no. 1 (March 2007).

1. Peter Schwartz, The Art of the Long View: Planning for the Future in an
Uncertain World (New York: Doubleday, 1996).

2. See, for example, Brian Nichiporuk, Alternative Futures and Army
Force Planning: Implications for the Future Force Era (Santa Monica,
CA: RAND, 2005).

3. The description of the Cone of Plausibility is taken from Quick Wins for
Busy Analysts, DI Futures and Analytic Methods (DI FAM), Professional
Head of Defence Intelligence Analysis, UK Ministry of Defence; and
Gudmund Thompson, Aide Memoire on Intelligence Analysis Tradecraft,
Chief of Defence Intelligence, Director General of Intelligence Production,
Canada, Version 4.02. These materials are used with the permission of the
UK and Canadian governments.

4. STEEP stands for Social, Technological, Economic, Environmental, and
Political; STEEP + 2 adds Psychological and Military; STEEPLE adds
Legal and Ethics to the original STEEP list; STEEPLED further adds
Demographics; and PESTLE stands for Political, Economic, Social,
Technological, Legal, and Environmental.

7 Hypothesis Generation and Testing

7.1 Hypothesis Generation
7.2 Diagnostic Reasoning
7.3 Analysis of Competing Hypotheses
7.4 Argument Mapping
7.5 Deception Detection

Analysis conducted by the intelligence, law enforcement, and business
communities will never achieve the accuracy and predictability of a true
science, because the information with which analysts must work is
typically incomplete, ambiguous, and potentially deceptive. The analytic
process can, however, benefit from some of the lessons of science and
adapt some of the elements of scientific reasoning.

The scientific process involves observing, categorizing, formulating
hypotheses, and then testing those hypotheses. Generating and testing
hypotheses is a core function of structured analysis. A possible explanation
of the past or a judgment about the future is a hypothesis that needs to be
tested by collecting and presenting evidence. The first part of this chapter
describes techniques for generating hypotheses. These and other similar
techniques allow analysts to imagine new and alternative explanations for
their subject matter. The process of generating multiple hypotheses can
provide a strong antidote to several cognitive biases: It mitigates
the Anchoring Effect by spurring analysts to generate alternative
explanations, reduces the influence of Confirmation Bias by exposing
analysts to new ideas and multiple permutations, and helps analysts avoid
Premature Closure.

These alternative explanations then need to be tested against the available
evidence. The second part of this chapter describes techniques for testing
hypotheses. These techniques spur the analyst to become more sensitive to
the quality of the data, looking for information that not only confirms but
can disconfirm the hypothesis. By systematically reviewing all the
evidence, the cognitive biases of focusing too much attention on the more
vivid or most familiar scenarios can also be mitigated.

The generation and testing of hypotheses is a skill, and its subtleties do not
come naturally. It is a form of reasoning that people can learn to use for
dealing with high-stakes situations. What does come naturally is drawing
on our existing body of knowledge and experience (mental model) to make
an intuitive judgment.1 In most circumstances in our daily lives, this is an
efficient approach that works most of the time. For intelligence analysis,
however, it is not sufficient, because intelligence issues are generally so
complex, and the risk and cost of error are too great. Also, the situations
are often novel, so the intuitive judgment shaped by past knowledge and
experience may well be wrong.

When one is facing a complex choice of options, the reliance on intuitive
judgment risks following a practice called “satisficing,” a term coined by
Nobel Prize winner Herbert Simon by combining the words satisfy and
suffice.2 It means being satisfied with the first answer that seems adequate,
as distinct from assessing multiple options to find the optimal or best
answer. The “satisficer” who does seek out additional information may
look only for information that supports this initial answer rather than
looking more broadly at all the possibilities.

Good analysis of a complex issue must start with a set of alternative
hypotheses. Another practice that the experienced analyst borrows from
the scientist’s toolkit involves the testing of alternative hypotheses. The
truth of a hypothesis can never be proven beyond doubt by citing only
evidence that is consistent with the hypothesis, because the same evidence
may be and often is consistent with one or more other hypotheses. Science
often proceeds by refuting or disconfirming hypotheses. A hypothesis that
cannot be refuted should be taken just as seriously as a hypothesis that
seems to have a lot of evidence in favor of it. A single item of evidence
that is shown to be inconsistent with a hypothesis can be sufficient grounds
for rejecting that hypothesis. The most tenable hypothesis is often the one
with the least evidence against it.

Analysts often test hypotheses by using a form of reasoning known as
abduction, which differs from the two better known forms of reasoning,
deduction and induction. Abductive reasoning starts with a set of facts.
One then develops hypotheses that, if true, would provide the best
explanation for these facts. The most tenable hypothesis is the one that
best explains the facts. Because of the uncertainties inherent to intelligence
analysis, conclusive proof or refutation of hypotheses is the exception
rather than the rule.

This chapter describes three techniques that are intended to be used
specifically for hypothesis generation. Other chapters include techniques
that can be used to generate hypotheses but also have a variety of other
purposes. These include Venn Analysis (chapter 4); Structured
Brainstorming, Nominal Group Technique, and Quadrant Crunching™
(chapter 5); Scenarios Analysis (chapter 6); the Delphi Method (chapter 9);
and Decision Trees (chapter 11).

This chapter discusses four techniques for testing hypotheses. One of
these, Analysis of Competing Hypotheses (ACH), was developed by
Richards Heuer specifically for use in intelligence analysis. It is the
application to intelligence analysis of Karl Popper’s theory of science.3
Popper was one of the most influential philosophers of science of the
twentieth century. He is known for, among other things, his position that
scientific reasoning should start with multiple hypotheses and proceed by
rejecting or eliminating hypotheses, while tentatively accepting only those
hypotheses that cannot be refuted.

Overview of Techniques
Hypothesis Generation
is a category that includes three specific techniques—Simple Hypotheses,
Multiple Hypotheses Generator™, and Quadrant Hypothesis Generation.
Simple Hypotheses is the easiest to use but not always the best selection.
Use the Multiple Hypotheses Generator™ to identify a large set of all
possible hypotheses. Quadrant Hypothesis Generation is used to identify a
set of hypotheses when the outcome is likely to be determined by just two
driving forces. The latter two techniques are particularly useful in
identifying a set of mutually exclusive and collectively exhaustive
(MECE) hypotheses.

Diagnostic Reasoning
applies hypothesis testing to the evaluation of significant new information.
Such information is evaluated in the context of all plausible explanations
of that information, not just in the context of the analyst’s well-established
mental model. The use of Diagnostic Reasoning reduces the risk of
surprise, as it ensures that an analyst will have given at least some
consideration to alternative conclusions. Diagnostic Reasoning differs
from the Analysis of Competing Hypotheses (ACH) technique in that it is
used to evaluate a single item of evidence, while ACH deals with an entire
issue involving multiple pieces of evidence and a more complex analytic
process.

Analysis of Competing Hypotheses
is the application of Popper’s philosophy of science to the field of
intelligence analysis. The requirement to identify and then refute all
reasonably possible hypotheses forces an analyst to recognize the full
uncertainty inherent in most analytic situations. ACH helps the analyst sort
and manage relevant information to identify paths for reducing that
uncertainty.

Argument Mapping
is a method that can be used to put a single hypothesis to a rigorous logical
test. The structured visual representation of the arguments and evidence
makes it easier to evaluate any analytic judgment. Argument Mapping is a
logical follow-on to an ACH analysis. It is a detailed presentation of the
arguments for and against a single hypothesis, while ACH is a more
general analysis of multiple hypotheses. The successful application of
Argument Mapping to the hypothesis favored by the ACH analysis would
increase confidence in the results of both analyses.

Deception Detection
is discussed in this chapter because the possibility of deception by a
foreign intelligence service, economic competitor, or other adversary
organization is a distinctive type of hypothesis that analysts must
frequently consider. The possibility of deception can be included as a
hypothesis in any ACH analysis. Information identified through the
Deception Detection technique can then be entered as relevant information
in the ACH matrix.

7.1 Hypothesis Generation


In broad terms, a hypothesis is a potential explanation or conclusion that is
to be tested by collecting and presenting evidence. It is a declarative
statement that has not been established as true—an “educated guess” based
on observation that needs to be supported or refuted by more observation
or through experimentation.

A good hypothesis does the following:

✶ It is written as a definite statement, not as a question.
✶ It is based on observations and knowledge.
✶ It is testable and can be proven wrong.
✶ It predicts the anticipated results clearly.
✶ It contains a dependent and an independent variable. The
dependent variable is the phenomenon being explained; the
independent variable does the explaining.

Hypothesis Generation should be an integral part of any rigorous analytic
process because it helps the analyst think broadly and creatively about a
range of possibilities and avoid being surprised when common wisdom
turns out to be wrong. The goal is to develop a list of hypotheses that can
be scrutinized and tested over time against existing relevant information
and new data that may become available in the future. Analysts should
strive to make the hypotheses mutually exclusive and the list as
comprehensive as possible.

Many techniques can be used to generate hypotheses, including several
techniques discussed elsewhere in this book, such as Venn Analysis,
Structured Brainstorming, Scenarios Analysis, Quadrant Crunching™,
Starbursting, the Delphi Method, and Decision Trees. This section
discusses techniques developed specifically for hypothesis generation and
then presents the method for three different techniques—Simple
Hypotheses, Multiple Hypotheses Generator™, and Quadrant Hypothesis
Generation.

When to Use It
Analysts should use some structured procedure to develop multiple
hypotheses at the start of a project when:

✶ The importance of the subject matter is such as to require
systematic analysis of all alternatives.
✶ Many variables are involved in the analysis.
✶ There is uncertainty about the outcome.
✶ Analysts or decision makers hold competing views.

Simple Hypotheses is often used to broaden the spectrum of plausible
hypotheses. It utilizes Structured Brainstorming to create potential
hypotheses based on affinity groups. Quadrant Hypothesis Generation
works best when the problem can be defined by two key drivers; in these
circumstances, a 2 × 2 matrix can be created and different hypotheses
generated for each quadrant. The Multiple Hypothesis Generator™ is
particularly helpful when there is a reigning lead hypothesis.

Value Added
Hypothesis Generation provides a structured way to generate a
comprehensive set of mutually exclusive hypotheses. This can increase
confidence that an important hypothesis has not been overlooked and can
also help to reduce bias. When the techniques are used properly, choosing
a lead hypothesis becomes much less critical than making sure that all the
possible explanations have been considered.

The techniques are particularly useful in helping intelligence analysts
overcome some classic intuitive traps such as:

✶ Imposing lessons learned: When under pressure, analysts can be
tempted to select a hypothesis only because it avoids a previous error
or replicates a past success. A prime example of this was the desire
not to repeat the mistake of underestimating Saddam Hussein’s WMD
capabilities in the run-up to the second U.S. war with Iraq.
✶ Rejecting evidence: Analysts will often hold to an analytic
judgment for months or years even when confronted with a mounting
list of scientific evidence that contradicts the initial conclusion. A
good example would be the persistent belief that human activity is not
contributing to global warming.
✶ Lacking sufficient bins: The failure to remember or factor
something into the analysis because the analyst lacks an appropriate
category or “bin” for that item of information has been the cause of
many U.S. intelligence failures. In the aftermath of the September 11
attacks, we realized that one factor contributing to the U.S.
Intelligence Community’s failure of imagination was that it did not
have “bins” for the use of a commercial airliner as a large
combustible missile or hijacking an airplane with no intent to land it.
✶ Expecting marginal change: Another frequent trap is focusing on
a narrow range of alternatives and projecting only marginal, not
radical, change. An example would be assuming that the conflict in
Syria is a battle between the Assad regime and united opposition and
not considering scenarios of more dramatic change such as partition
into several states, collapse into failed-state status, or a takeover by
radical Islamist forces.

7.1.1 The Method: Simple Hypotheses


To use the Simple Hypotheses method, define the problem and determine
how the hypotheses are expected to be used at the beginning of the project.
Will hypotheses be used in an Analysis of Competing Hypotheses, in some
other hypothesis-testing project, as a basis for developing scenarios, or as a
means to select from a wide range of alternative outcomes those that need
most careful attention? Figure 7.1.1 illustrates the process.

Gather together a diverse group to review the available information and
explanations for the issue, activity, or behavior that you want to evaluate.
In forming this diverse group, consider that you will need different types
of expertise for different aspects of the problem, cultural expertise about
the geographic area involved, different perspectives from various
stakeholders, and different styles of thinking (left brain/right brain,
male/female). Then do the following:

✶ Ask each member of the group to write down on a 3 × 5 card up to
three alternative explanations or hypotheses. Prompt creative thinking
by using the following:
– Situational logic: Take into account all the known facts and
an understanding of the underlying forces at work at that
particular time and place.
– Historical analogies: Consider examples of the same type of
phenomenon.
– Theory: Consider theories based on many examples of how
a particular type of situation generally plays out.
✶ Collect the cards and display the results on a whiteboard.
Consolidate the list to avoid any duplication.
✶ Employ additional group and individual brainstorming techniques
to identify key forces and factors.
✶ Aggregate the hypotheses into affinity groups and label each
group.
✶ Use problem restatement and consideration of the opposite to
develop new ideas.
✶ Update the list of alternative hypotheses. If the hypotheses will be
used in ACH, strive to keep them mutually exclusive—that is, if one
hypothesis is true all others must be false.
✶ Have the group clarify each hypothesis by asking the journalist’s
classic list of questions: Who, What, How, When, Where, and Why?
✶ Select the most promising hypotheses for further exploration.

Figure 7.1.1 Simple Hypotheses

Source: 2013 Globalytica, LLC.

7.1.2 The Method: Multiple Hypotheses Generator™
The Multiple Hypotheses Generator™ is a technique for developing
multiple alternatives for explaining a particular issue, activity, or behavior.
Analysts often can brainstorm a useful set of hypotheses without such a
tool, but the Multiple Hypotheses Generator™ may give greater
confidence than other techniques that a critical alternative or an outlier has
not been overlooked. Analysts should employ the Multiple Hypotheses
Generator™ to ensure that they have considered a broad array of potential
hypotheses. In some cases, they may have considerable data and want to
ensure that they have generated a set of plausible explanations that are
consistent with all the data at hand. Alternatively, they may have been
presented with a hypothesis that seems to explain the phenomenon at hand
and been asked to assess its validity. The software also helps analysts rank
alternative hypotheses from the most to least credible and focus on those at
the top of the list deemed most worthy of attention.

To use this method:

✶ Define the issue, activity, or behavior that is subject to
examination. Often, it is useful to ask the question in the following
ways:
– What variations could be developed to challenge the lead
hypothesis that . . . ?
– What are the possible permutations that would flip the
assumptions contained in the lead hypothesis that . . . ?
✶ Break up the lead hypothesis into its component parts following
the order of Who, What, How, When, Where, and Why? Some of
these questions may not be appropriate, or they may be givens for the
particular issue, activity, or behavior you are examining. Usually,
only three of these components are directly relevant to the issue at
hand; moreover, the method works best when the number of
hypotheses generated remains manageable.
– In a case in which you are seeking to challenge a favored
hypothesis, identify the Who, What, How, When, Where, and
Why for the given hypothesis. Then generate plausible
alternatives for each relevant key component.
✶ Review the lists of alternatives for each of the key components;
strive to keep the alternatives on each list mutually exclusive.
✶ Generate a list of all possible permutations, as shown in Figure
7.1.2.
✶ Discard any permutation that simply makes no sense.
✶ Evaluate the credibility of the remaining permutations by
challenging the key assumptions of each component. Some of these
assumptions may be testable themselves. Assign a "credibility score"
to each permutation using a 1-to-5-point scale where 1 is low
credibility and 5 is high credibility.
✶ Re-sort the remaining permutations, listing them from most
credible to least credible.
✶ Restate the permutations as hypotheses, ensuring that each meets
the criteria of a good hypothesis.
✶ Select from the top of the list those hypotheses most deserving of
attention.

Figure 7.1.2 Multiple Hypotheses Generator™: Generating Permutations

Source: 2013 Globalytica, LLC.
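A small Python sketch of the permutation step follows. The component alternatives are invented for a hypothetical data-theft lead hypothesis, and the credibility scoring remains an analyst judgment rather than anything computed.

```python
from itertools import product

# A hedged sketch of the permutation step above. The lead hypothesis is
# broken into three components (Who, What, How), each with a list of
# mutually exclusive alternatives; all component values here are
# invented for illustration. Credibility scoring (1 = low, 5 = high)
# is a judgment the analysts supply, not something computed.

components = {
    "Who":  ["insider", "competitor", "foreign intelligence service"],
    "What": ["stole the design data", "planted false data"],
    "How":  ["network intrusion", "recruited employee"],
}

permutations = [dict(zip(components, combo))
                for combo in product(*components.values())]
print(len(permutations), "permutations generated")   # 3 x 2 x 2 = 12

# Next steps per the method: discard permutations that make no sense,
# assign a credibility score to each survivor, re-sort from most to
# least credible, and restate the top permutations as hypotheses.
```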

7.1.3 The Method: Quadrant Hypothesis Generation
Use the quadrant technique to identify a basic set of hypotheses when
two key driving forces can easily be identified that are likely to
determine the outcome of an issue. The technique identifies four potential
scenarios that represent the extreme conditions for each of the two major
drivers. It spans the logical possibilities inherent in the relationship and
interaction of the two driving forces, thereby generating options that
analysts otherwise may overlook.

Quadrant Hypothesis Generation is easier and quicker to use than the
Multiple Hypotheses Generator™, but it is limited to cases in which the
outcome of a situation will be determined by two major driving forces—
and it depends on the correct identification of these forces. It is less
effective when there are more than two major drivers or when analysts
differ over which forces constitute the two major drivers.

These are the steps for Quadrant Hypothesis Generation:

✶ Identify the two main drivers by using techniques such as
Structured Brainstorming or by surveying subject-matter experts. A
discussion to identify the two main drivers can be a useful exercise in
itself.
✶ Construct a 2 × 2 matrix using the two drivers.
✶ Think of each driver as a continuum from one extreme to the
other. Write the extremes of each of the drivers at the end of the
vertical and horizontal axes.
✶ Fill in each quadrant with the details of what the end state would
be as shaped by the two drivers.
✶ Develop signposts that show whether events are moving toward
one of the hypotheses. Use the signposts or indicators of change to
develop intelligence collection strategies or research priorities to
determine the direction in which events are moving.

Figure 7.1.3 shows an example of a quadrant chart. In this case analysts
have been tasked with developing a paper on the possible future of Iraq,
focusing on the potential end state of the government. The analysts have
identified and agreed upon the two key drivers in the future of the
government: the level of centralization of the federal government and the
degree of religious control of that government. They develop their
quadrant chart and lay out the four logical hypotheses based on their
decisions.

The four hypotheses derived from the quad chart can be stated as follows:

1. The final state of the Iraq government will be a centralized state and a
secularized society.
2. The final state of the Iraq government will be a centralized state and a
religious society.
3. The final state of the Iraq government will be a decentralized state
and a secularized society.
4. The final state of the Iraq government will be a decentralized state
and a religious society.
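Because the quadrant chart simply spans the logical combinations of the two spectrums, the four hypotheses can be enumerated mechanically, as in this minimal Python sketch using the Iraq drivers above.

```python
from itertools import product

# Sketch of how the quadrant chart spans the four hypotheses. The two
# drivers and their end points come from the Iraq example above.

centralization = ("centralized", "decentralized")
religiosity = ("secularized", "religious")

for state, society in product(centralization, religiosity):
    print(f"The final state of the Iraq government will be a "
          f"{state} state and a {society} society.")
```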

Potential Pitfalls
The value of this technique is limited by the ability of analysts to generate a
robust set of alternative explanations. If group dynamics are flawed, the
outcomes will be flawed. Whether the correct hypothesis or “idea”
emerges from this process and is identified as such by the analyst or
analysts cannot be guaranteed, but the prospect of the correct hypothesis
being included in the set of hypotheses under consideration is greatly
increased.

Figure 7.1.3 Quadrant Hypothesis Generation: Four Hypotheses on the Future of Iraq

Relationship to Other Techniques
The product of any Scenarios Analysis can be thought of as a set of
alternative hypotheses. Quadrant Hypothesis Generation is a specific
application of the generic method called Morphological Analysis,
described in chapter 5. Alternative Futures Analysis uses a similar
quadrant chart approach to define four potential outcomes.

Origins of This Technique
The generation and testing of hypotheses is a key element of scientific
reasoning. The Simple Hypotheses approach was developed by Randy
Pherson. The Multiple Hypotheses Generator™ was also developed by
Randy Pherson and is described in more detail in his Handbook of Analytic
Tools and Techniques (Reston, VA: Pherson Associates, LLC, 2008) and
at www.globalytica.com. The description of Quadrant Hypothesis
Generation is from Defense Intelligence Agency training materials.

7.2 Diagnostic Reasoning

Diagnostic Reasoning is the application of hypothesis testing to a new
development, a single new item of information or intelligence, or the
reliability of a source. It differs from Analysis of Competing Hypotheses
in that it is used to evaluate a single item of relevant information or a
single source, while ACH deals with an entire range of hypotheses and
multiple items of relevant information.

When to Use It
Analysts should use Diagnostic Reasoning if they find themselves making
a snap intuitive judgment while assessing the meaning of a new
development, the significance of a new report, or the reliability of a stream
of reporting from a new source. Often, much of the information used to
support one’s lead hypothesis turns out to be consistent with alternative
hypotheses as well. In such cases, the new information should not—and
cannot—be used as evidence to support the prevailing view or lead
hypothesis.

The technique also helps reduce the chances of being caught by surprise,
as it ensures that the analyst or decision maker will have given at least
some consideration to alternative explanations. It is especially important to
use when an analyst—or decision maker—is looking for evidence to
confirm an existing mental model or policy position. It helps the analyst
assess whether the same information is consistent with other reasonable
conclusions or with alternative hypotheses.

Value Added
The value of Diagnostic Reasoning is that it helps analysts counter their
natural tendency to interpret new information as consistent with their
existing understanding of what is happening—that is, the analyst’s mental
model. The technique prompts analysts to ask themselves whether this
same information is consistent with other reasonable conclusions or
alternative hypotheses. It is a common experience to discover that much of
the information supporting what one believes is the most likely conclusion
is really of limited value in confirming one’s existing view, because that
same information is also consistent with alternative conclusions. One
needs to evaluate new information in the context of all possible
explanations of that information, not just in the context of a
well-established mental model.

The Diagnostic Reasoning technique helps the analyst identify and focus
on the information that is most needed to make a decision and avoid the
mistake of discarding or ignoring information that is inconsistent with
what the analyst expects to see. When evaluating evidence, analysts tend
to assimilate new information into what they currently perceive. Gradual
change is difficult to notice. Sometimes, past experience can provide a
handicap in that the expert has much to unlearn—and a fresh perspective
can be helpful.

Diagnostic Reasoning also helps analysts avoid the classic intuitive traps
of assuming the same dynamic is in play when something seems to accord
with an analyst’s past experiences and continuing to hold to an analytic
judgment when confronted with a mounting list of evidence that
contradicts the initial conclusion. It also helps reduce attention to
information that is not diagnostic by identifying information that is
consistent with all possible alternatives.

The Method
Diagnostic Reasoning is a process by which you try to refute alternative
judgments rather than confirm what you already believe to be true. Here
are the steps to follow:

✶ When you receive a potentially significant item of information,
make a mental note of what it seems to mean (i.e., an explanation of
why something happened or what it portends for the future). Make a
quick, intuitive judgment based on your current mental model.
✶ Define the focal question. For example, Diagnostic Reasoning
brainstorming sessions often begin with questions like:
– Are there alternative explanations for the lead hypothesis
(defined as . . . ) that would also be consistent with the new
information, new development, or new source of reporting?
– Is there a reason other than the lead hypothesis that . . . ?
✶ Brainstorm, either alone or in a small group, the alternative
judgments that another analyst with a different perspective might
reasonably deem to have a chance of being accurate. Make a list of
these alternatives.
✶ For each alternative, ask the following question: If this alternative
were true or accurate, how likely is it that I would see this new
information?
✶ Make a tentative judgment based on consideration of these
alternatives. If the new information is equally likely under each of the
alternatives, the information has no diagnostic value and can be
ignored. If the information is clearly inconsistent with one or more
alternatives, those alternatives might be ruled out.
✶ Following this mode of thinking for each of the alternatives,
decide which alternatives need further attention and which can be
dropped from consideration or put aside until new information
surfaces.
✶ Proceed by seeking additional evidence to refute the remaining
alternatives rather than to confirm them.
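
The refutation logic in these steps can be illustrated with a brief, hypothetical Python sketch. The likelihood values below are invented judgments for the sake of illustration; the technique itself does not prescribe numeric scoring.

```python
# For each alternative, estimate: if this alternative were true, how
# likely is it that I would see this new item of information?
# (Values are illustrative judgments, not data.)
likelihoods = {
    "lead hypothesis": 0.8,
    "alternative A": 0.8,
    "alternative B": 0.1,
}

if len(set(likelihoods.values())) == 1:
    # Equally likely under every alternative: no diagnostic value.
    print("Information is not diagnostic; set it aside.")
else:
    # Alternatives under which the observation would be very unlikely
    # are candidates for being ruled out.
    for hypothesis, p in likelihoods.items():
        status = "weakened" if p < 0.3 else "still in play"
        print(f"{hypothesis}: {status} (p = {p})")
```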

Potential Pitfalls
When new information is received, analysts need to validate that the new
information is accurate and not deceptive or intentionally misleading. It is
also possible that none of the key information turns out to be diagnostic, or
that not all of the relevant information will be identified.

Relationship to Other Techniques
Diagnostic Reasoning is an integral part of two other techniques: Analysis
of Competing Hypotheses and the Indicators Validator™ (chapter 6). It is
presented here as a separate technique to show that its use is not limited to
those two techniques. It is a fundamental form of critical reasoning that
should be widely used in intelligence analysis.

Origins of This Technique
Diagnostic Reasoning has been the principal method for medical problem
solving for many years. For information on the role of Diagnostic
Reasoning in the medical world, see the following publications: Albert S.
Elstein, “Thinking about Diagnostic Thinking: A Thirty-Year
Perspective,” Advances in Health Science Education, published online by
Springer Science+Business Media, August 11, 2009; and Pat Croskerry,
“A Universal Model of Diagnostic Reasoning,” Academic Medicine 84,
no. 8 (August 2009).

7.3 Analysis of Competing Hypotheses

Analysis of Competing Hypotheses (ACH) is an analytic process that
identifies a complete set of alternative hypotheses, systematically
evaluates data that are consistent or inconsistent with each hypothesis, and
proceeds by rejecting hypotheses rather than trying to confirm what
appears to be the most likely hypothesis. Rejecting rather than confirming
hypotheses applies to intelligence analysis the scientific principles
advocated by Karl Popper, one of the most influential philosophers of
science of the twentieth century.4

ACH starts with the identification of a set of mutually exclusive alternative
explanations or outcomes called hypotheses. The analyst assesses the
consistency or inconsistency of each item of relevant information with
each hypothesis, and then selects the hypothesis that best fits the relevant
information. The scientific principle behind this technique is to proceed by
trying to refute as many reasonable hypotheses as possible rather than to
confirm what initially appears to be the most likely hypothesis. The most
likely hypothesis is then the one with the least relevant information against
it, as well as information for it—not the one with the most relevant
information for it.

When to Use It
ACH is appropriate for almost any analysis where there are alternative
explanations for what has happened, is happening, or is likely to happen.
Use it when the judgment or decision is so important that you cannot
afford to be wrong. Use it when your gut feelings are not good enough,
and when you need a systematic approach to prevent being surprised by an
unforeseen outcome. Use it on controversial issues when it is desirable to
identify precise areas of disagreement and to leave an audit trail to show
what relevant information was considered and how different analysts
arrived at their different judgments.

ACH is particularly effective when there is a robust flow of data to absorb
and evaluate. For example, it is well suited for addressing questions about
technical issues in the chemical, biological, radiological, and nuclear
arenas, such as, “For which weapons system is this part most likely being
imported?” or, “Which type of missile system is Country X importing or
developing?” ACH is particularly helpful when an analyst must deal with
the potential for denial and deception, as it was initially developed for that
purpose.

The technique can be used by a single analyst, but it is most effective with
a small team whose members can question one another’s evaluation of the
relevant information. It structures and facilitates the exchange of
information and ideas with colleagues in other offices or agencies. An
ACH analysis requires a modest commitment of time; it may take a day or
more to build the ACH matrix once all the relevant information has been
collected, and it may require another day to work through all the stages of
the analytic process before writing up any conclusions. A facilitator or a
colleague previously schooled in the use of the technique is often needed
to help guide analysts through the process, especially if it is the first time
they have used the methodology.

Value Added
Analysts are commonly required to work with incomplete, ambiguous,
anomalous, and sometimes deceptive data. In addition, strict time
constraints and the need to “make a call” often conspire with natural
human cognitive biases to cause inaccurate or incomplete judgments. If the
analyst is already generally knowledgeable on the topic, a common
procedure is to develop a favored hypothesis and then search for relevant
information to confirm it. This is called a “satisficing” approach—going
with the first answer that seems to be supported by the evidence.

Satisficing is efficient because it saves time, and it works much of the
time. However, the analyst has made no investment in protection against
surprise or what is called Confirmation Bias. This approach provides no
stimulus for the analyst to identify and question fundamental assumptions.
It bypasses the analysis of alternative explanations or outcomes, which
should be fundamental to any complete analysis. As a result, satisficing
fails to distinguish that much relevant information seemingly supportive of
the favored hypothesis is also consistent with one or more alternative
hypotheses. It often fails to recognize the importance of what is missing
(i.e., what should be observable if a given hypothesis is true, but is not
there).

ACH improves the analyst’s chances of overcoming these challenges by
requiring the analyst(s) to identify and then try to refute as many
reasonable hypotheses as possible using the full range of data,
assumptions, and gaps that are pertinent to the problem at hand. The
method for analyzing competing hypotheses takes time and attention in the
initial stages, but it pays big dividends. When analysts are first exposed to
ACH and say they find it useful, it is because the simple focus on
identifying alternative hypotheses and how they might be disproved
prompts the analysts to think seriously about evidence, explanations, or
outcomes in ways that had not previously occurred to them.

The ACH process requires the analyst to assemble the collected
information and organize it in an analytically useful way, so that it can be
readily retrieved for use in the analysis. This is done by creating a matrix
with relevant information down the left side and hypotheses across the top.
Each item of relevant information is then evaluated as to whether it is
consistent or inconsistent with each hypothesis, and this material is used to
assess the support for and against each hypothesis. This can be done
manually, but it is much easier and better to use ACH software designed
for this purpose. Various versions of ACH software can be used to sort and analyze
the data by type of source and date of information, as well as by degree of
support for or against each hypothesis.

ACH also helps analysts produce a better analytic product by:

✶ Maintaining a record of the relevant information and tracking how
that information relates to each hypothesis.
✶ Capturing the analysts’ key assumptions when the analyst is
coding the data and recording what additional information is needed
or what collection requirements should be generated.
✶ Enabling analysts to present conclusions in a way that is better
organized and more transparent as to how these conclusions were
reached than would otherwise be possible.
✶ Providing a foundation for identifying indicators that can then be
monitored and validated to determine the direction in which events
are heading.
✶ Leaving a clear audit trail as to how the analysis was done, the
conclusions reached, and how individual analysts may have differed
in their assumptions or judgments.

ACH Software
ACH started as a manual method at the CIA in the mid-1980s. The first
professionally developed and tested ACH software was created in 2005 by
the Palo Alto Research Center (PARC), with federal government funding
and technical assistance from Richards Heuer and Randy Pherson. Randy
Pherson managed its introduction into the U.S. Intelligence Community. It
was developed with the understanding that PARC would make it available
for public use at no cost, and it may still be downloaded at no cost from
the PARC website: http://www2.parc.com/istl/projects/ach/ach.html.

The PARC version was designed for use by an individual analyst, but in
practice it is commonly used by a collocated team of analysts. Members of
such groups report that:

✶ The technique helps them gain a better understanding of the
differences of opinion with other analysts or between analytic offices.
✶ Review of the ACH matrix provides a systematic basis for
identification and discussion of differences between participating
analysts.
✶ Reference to the matrix helps depersonalize the argumentation
when there are differences of opinion.

Other versions of ACH have been developed by various research centers
and academic institutions. One noteworthy version, called Structured
Analysis of Competing Hypotheses, developed for instruction at
Mercyhurst College, builds on ACH by requiring deeper analysis at some
points.

The most recent advance in ACH technology is called Te@mACH®,
developed under the direction of Randy Pherson for Globalytica, LLC, in
2010. This has almost all of the functions of the PARC ACH tool, but it is
designed as a collaborative tool. It allows analysts in several locations to
work on the same problem simultaneously. They can propose hypotheses
and enter data on the matrix from multiple locations, but they must agree
to work from the same set of hypotheses and the same set of relevant
information. The software allows them to chat electronically about one
another’s assessments and assumptions, to compare their analysis with that
of their colleagues, and to learn what the group consensus was for the
overall problem solution.

Te@mACH® provides a tool for intraoffice or interagency collaboration
that ensures all analysts are working from the same database of evidence,
arguments, and assumptions, and that each member of the team has had an
opportunity to express his or her view on how that information relates to
the likelihood of each hypothesis. Analysts can use the tool both
synchronously and asynchronously. It adds other useful functions such as a
survey method to enter data that protects against bias, the ability to record
key assumptions and collection requirements, and a filtering function that
allows analysts to compare and contrast how each person rated the relevant
information.5

The Method
To retain three or five or seven hypotheses in working memory and note
how each item of information fits into each hypothesis is beyond the
capabilities of most people. It takes far greater mental agility than the
common practice of seeking evidence to support a single hypothesis that is
already believed to be the most likely answer. ACH can be accomplished,
however, with the help of the following nine-step process:

1. Identify the hypotheses to be considered. Hypotheses should be
mutually exclusive; that is, if one hypothesis is true, all others must
be false. The list of hypotheses should include all reasonable
possibilities. Include a deception hypothesis, if that is appropriate. For
each hypothesis, develop a brief scenario or “story” that explains how
it might be true.
2. Make a list of significant information, which for ACH means
everything that is relevant to evaluating the hypotheses—including
evidence, assumptions, and the absence of things one would expect to
see if a hypothesis were true. It is important to include assumptions as
well as factual evidence, because the matrix is intended to be an
accurate reflection of the analyst’s thinking about the topic. If the
analyst’s thinking is driven by assumptions rather than hard facts, this
needs to become apparent so that the assumptions can be challenged.
A classic example of absence of evidence is the Sherlock Holmes
story of the dog that did not bark in the night. The failure of the dog to bark
was persuasive evidence that the guilty party was not an outsider but
an insider who was known to the dog.

3. Create a matrix with all hypotheses across the top and all items of
relevant information down the left side. See Figure 7.3a for an
example. Analyze each input by asking, “Is this input Consistent with
the hypothesis, is it Inconsistent with the hypothesis, or is it Not
Applicable or not relevant?” This can be done by either filling in each
cell of the matrix row-by-row or using the survey method, which
randomly selects a cell in the matrix for the analyst to rate (see Figure
7.3b for an example). If it is Consistent, put a “C” in the appropriate
matrix box; if it is Inconsistent, put an “I”; if it is Not Applicable to
that hypothesis, put an “NA.” If a specific item of evidence,
argument, or assumption is particularly compelling, put two “Cs” in
the box; if it strongly undercuts the hypothesis, put two “Is.”

When you are asking if an input is Consistent or Inconsistent with a
specific hypothesis, a common response is, “It all depends on. . . .”
That means the rating for the hypothesis will be based on an
assumption—whatever assumption the rating “depends on.” You
should record all such assumptions. After completing the matrix, look
for any pattern in those assumptions—such as, the same assumption
being made when ranking multiple items of information. After the
relevant information has been sorted for diagnosticity, note how many
of the highly diagnostic Inconsistency ratings are based on
assumptions. Consider how much confidence you should have in
those assumptions and then adjust the confidence in the ACH
Inconsistency Scores accordingly.
4. Review where analysts differ in their assessments and decide if
adjustments are needed in the ratings (see Figure 7.3c). Often,
differences in how analysts rate a particular item of information can
be traced back to different assumptions about the hypotheses when
doing the ratings.
5. Refine the matrix by reconsidering the hypotheses. Does it make
sense to combine two hypotheses into one, or to add a new hypothesis
that was not considered at the start? If a new hypothesis is added, go
back and evaluate all the relevant information for this hypothesis.
Additional relevant information can be added at any time.

6. Draw tentative conclusions about the relative likelihood of each
hypothesis, basing your conclusions on an analysis regarding the
diagnosticity of each item of relevant information. The software helps
the analyst by adding up the number of Inconsistency ratings for each
hypothesis and posting an Inconsistency Score for each hypothesis (a toy
version of this tally appears in the code sketch after these steps). It
then ranks the hypotheses from those with the least Inconsistent
ratings to those with the most Inconsistent ratings. The hypothesis
with the lowest Inconsistency Score is tentatively the most likely
hypothesis. The one with the most Inconsistencies is usually the least
likely.

The Inconsistency Scores are broad generalizations, not precise
calculations. ACH is intended to help the analyst make an estimative
judgment, but not to actually make the judgment for the analyst. This
process is likely to produce correct estimates more frequently than
less systematic or rigorous approaches, but the scoring system does
not eliminate the need for analysts to use their own good judgment.
The Potential Pitfalls section, below, identifies several occasions
when analysts need to override the Inconsistency Scores.
7. Analyze the sensitivity of your tentative conclusion to a change in the
interpretation of a few critical items of relevant information that the
software automatically moves to the top of the matrix. Consider the
consequences for your analysis if one or more of these critical items
of relevant information were wrong or deceptive or subject to a
different interpretation. If a different interpretation would be
sufficient to change your conclusion, go back and do everything that
is reasonably possible to double-check the accuracy of your
interpretation.
8. Report the conclusions. Consider the relative likelihood of all the
hypotheses, not just the most likely one. State which items of relevant
information were the most diagnostic, and how compelling a case
they make in identifying the most likely hypothesis.
9. Identify indicators or milestones for future observation. Generate two
lists: the first focusing on future events or what might be developed
through additional research that would help prove the validity of your
analytic judgment; the second, a list of indicators that would suggest
that your judgment is less likely to be correct or that the situation has
changed. Validate the indicators and monitor both lists on a regular
basis, remaining alert to whether new information strengthens or
weakens your case.
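
As a toy illustration of the tally described in step 6, the following Python sketch computes Inconsistency Scores from a miniature matrix. The ratings are fabricated, and counting a double "I" twice is an assumption about how the scores are weighted; real ACH software applies its own conventions.

```python
# A toy ACH matrix: rows are items of relevant information, columns are
# hypotheses. Ratings: "C"/"CC" consistent, "I"/"II" inconsistent,
# "NA" not applicable. All entries are fabricated for illustration.
matrix = {
    "E1: part has dual-use tolerances": {"H1": "C", "H2": "C", "H3": "I"},
    "E2: shipment routed covertly": {"H1": "I", "H2": "CC", "H3": "C"},
    "E3: no test activity observed": {"H1": "II", "H2": "NA", "H3": "C"},
}

# Tally an Inconsistency Score per hypothesis; "II" counts double
# because rating.count("I") returns 2 for a double inconsistency.
scores = {}
for ratings in matrix.values():
    for hypothesis, rating in ratings.items():
        scores[hypothesis] = scores.get(hypothesis, 0) + rating.count("I")

# Rank from least to most inconsistent; the lowest score is tentatively
# the most likely hypothesis, a starting point rather than a verdict.
for hypothesis, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(hypothesis, score)
```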

Figure 7.3a Creating an ACH Matrix

Figure 7.3b Coding Relevant Information in ACH

Figure 7.3c Evaluating Levels of Disagreement in ACH

Potential Pitfalls
A word of caution: ACH only works when all the analysts approach an
issue with a relatively open mind. An analyst who is already committed to
a belief in what the right answer is will often find a way to interpret the
relevant information as consistent with that belief. In other words, as an
antidote to Confirmation Bias, ACH is similar to a flu shot. Taking the flu
shot will usually keep you from getting the flu, but it won’t make you well
if you already have the flu.

The Inconsistency Scores generated by the ACH software for each
hypothesis are not the product of a magic formula that tells you which
hypothesis to believe in! The ACH software takes you through a
systematic analytic process, and the computer does the addition, but the
judgment that emerges is only as accurate as your selection and evaluation
of the relevant information to be considered.

Because it is more difficult to refute hypotheses than to find information
that confirms a favored hypothesis, the generation and testing of
alternative hypotheses will often increase rather than reduce the analyst’s
level of uncertainty. Such uncertainty is frustrating, but it is usually an
accurate reflection of the true situation. The ACH procedure has the
offsetting advantage of focusing your attention on the few items of critical
information that cause the uncertainty or, if they were available, would
alleviate it. ACH can guide future collection, research, and analysis to
resolve the uncertainty and produce a more accurate judgment.

Analysts should be aware of five circumstances that can cause a
divergence between an analyst’s own beliefs and the Inconsistency Scores.
In the first two circumstances described in the following list, the
Inconsistency Scores seem to be wrong when they are actually correct. In
the next three circumstances, the Inconsistency Scores may seem correct
when they are actually wrong. Analysts need to recognize these
circumstances, understand the problem, and make adjustments
accordingly.

✶ Assumptions or logical deductions omitted: If the scores in the
matrix do not support what you believe is the most likely hypothesis,
the matrix may be incomplete. Your thinking may be influenced by
assumptions or logical deductions that have not been included in the
list of relevant information or arguments. If so, these should be
included so that the matrix fully reflects everything that influences
your judgment on this issue. It is important for all analysts to
recognize the role that unstated or unquestioned (and sometimes
unrecognized) assumptions play in their analysis. In political or
military analysis, for example, conclusions may be driven by
assumptions about another country’s capabilities or intentions. A
principal goal of the ACH process is to identify those factors that
drive the analyst’s thinking on an issue so that these factors can then
be questioned and, if appropriate, changed.
✶ Insufficient attention to less likely hypotheses: If you think the
scoring gives undue credibility to one or more of the less likely
hypotheses, it may be because you have not assembled the relevant
information needed to refute them. You may have devoted
insufficient attention to obtaining such relevant information, or the
relevant information may simply not be there. If you cannot find
evidence to refute a hypothesis, it may be necessary to adjust your
thinking and recognize that the uncertainty is greater than you had
originally thought.
✶ Definitive relevant information: There are occasions when
intelligence collectors obtain information from a trusted and
well-placed inside source. The ACH analysis can label the information as
having high credibility, but this is probably not enough to reflect the
conclusiveness of such relevant information and the impact it should
have on an analyst’s thinking about the hypotheses. In other words, in
some circumstances one or two highly authoritative reports from a
trusted source in a position to know may support one hypothesis so
strongly that they refute all other hypotheses regardless of what other
less reliable or less definitive relevant information may show.
✶ Unbalanced set of evidence: Evidence and arguments must be
representative of the problem as a whole. If there is considerable
evidence on a related but peripheral issue and comparatively few
items of evidence on the core issue, the Inconsistency Score may be
misleading.
✶ Diminishing returns: As evidence accumulates, each new item of
Inconsistent relevant information or argument has less impact on the
Inconsistency Scores than does the earlier relevant information. For
example, the impact of any single item is less when there are fifty
items than when there are only ten items. To understand this, consider
what happens when you calculate the average of fifty numbers (see the
short numeric sketch after this list). Each
number has equal weight, but adding a fifty-first number will have
less impact on the average than if you start with only ten numbers and
add one more. Stated differently, the accumulation of relevant
information over time slows down the rate at which the Inconsistency
Score changes in response to new relevant information. Therefore,
these numbers may not reflect the actual amount of change in the
situation you are analyzing. When you are evaluating change over
time, it is desirable to delete the older relevant information
periodically, or to partition the relevant information and analyze the
older and newer relevant information separately.
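
The averaging point in the last bullet is easy to verify with a few lines of arithmetic; the Python sketch below is purely illustrative.

```python
# Diminishing returns: one new inconsistent item moves a ten-item
# average far more than it moves a fifty-item average.
def shift_after_adding_one(n_items, old_value=0.0, new_item=1.0):
    """Change in the mean when one new item joins n_items equal items."""
    new_mean = (n_items * old_value + new_item) / (n_items + 1)
    return new_mean - old_value

print(round(shift_after_adding_one(10), 4))  # 0.0909
print(round(shift_after_adding_one(50), 4))  # 0.0196
```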

Some other caveats to watch out for when using ACH include:

✶ The possibility exists that none of the relevant information
identified is diagnostic.
✶ Not all relevant information is identified.
✶ The information identified could be inaccurate, deceptive, or
misleading.
✶ The ratings are subjective and therefore subject to human error.
✶ When the analysis is performed by a group, the outcome can be
contingent on healthy group dynamics.

Relationship to Other Techniques
ACH is often used in conjunction with other techniques. For example,
Structured Brainstorming, Nominal Group Technique, the Multiple
Hypotheses Generator™, or the Delphi Method may be used to identify
hypotheses or relevant information to be included in the ACH analysis or
to evaluate the significance of relevant information. Deception Detection
may identify an opponent’s motive, opportunity, or means to conduct
deception, or past deception practices; information about these factors
should be included in the list of ACH-relevant information. The
Diagnostic Reasoning technique is incorporated within the ACH method.
The final step in the ACH method identifies indicators for monitoring
future developments.

The ACH matrix is intended to reflect all relevant information and
arguments that affect one’s thinking about a designated set of hypotheses.
That means it should also include assumptions identified by a Key
Assumptions Check (chapter 8). Conversely, rating the consistency of an
item of relevant information with a specific hypothesis is often based on an
assumption. When rating the consistency of relevant information in an
ACH matrix, the analyst should ask, “If this hypothesis is true, would I see
this item of relevant information?” A common thought in response to this
question is, “It all depends on. . . .” This means that, however the
consistency of that item of relevant information is rated, that rating will be
based on an assumption—whatever assumption the rating “depends on.”
These assumptions should be recorded in the matrix and then considered in
the context of a Key Assumptions Check.

The Delphi Method (chapter 9) can be used to double-check the
conclusions of an ACH analysis. A number of outside experts are asked
separately to assess the probability of the same set of hypotheses and to
explain the rationale for their conclusions. If the two different groups of
analysts using different methods arrive at the same conclusion, this is
grounds for a significant increase in confidence in the conclusion. If they
disagree, their lack of agreement is also useful, as one can then seek to
understand the rationale for the different judgments.

ACH and Argument Mapping (described in the next section) are both used
on the same types of complex analytic problems. They are both systematic
methods for organizing relevant information, but they work in
fundamentally different ways and are best used at different stages in the
analytic process. ACH is used during an early stage to analyze a range of
hypotheses in order to determine which is most consistent with the broad
body of relevant information. At a later stage, when the focus is on
developing, evaluating, or presenting the case for a specific conclusion,
Argument Mapping is the appropriate method. Each method has strengths
and weaknesses, and the optimal solution is to use both.

Origins of This Technique
Richards Heuer originally developed the ACH technique at the CIA in the
mid-1980s as one part of a methodology for analyzing the presence or
absence of Soviet deception. It was first described publicly in his book
Psychology of Intelligence Analysis6 in 1999; Heuer and Randy Pherson
helped the Palo Alto Research Center gain funding from the federal
government during 2004 and 2005 to produce the first professionally
developed ACH software. Randy Pherson managed its introduction into
the U.S. Intelligence Community. Recently, with Pherson’s assistance,
Globalytica, LLC, developed a more collaborative version of the software
called Te@mACH®.

7.4 Argument Mapping

Argument Mapping is a technique that tests a single hypothesis through
logical reasoning. An Argument Map starts with a single hypothesis or
tentative analytic judgment and then graphically separates the claims and
evidence to help break down complex issues and communicate the
reasoning behind a conclusion. It is a type of tree diagram that starts with
the conclusion or lead hypothesis, and then branches out to reasons,
evidence, and finally assumptions. The process of creating the Argument
Map helps to identify key assumptions and gaps in logic.

An Argument Map makes it easier for both the analysts and the recipients
of the analysis to clarify and organize their thoughts and evaluate the
soundness of any conclusion. It shows the logical relationships between
various thoughts in a systematic way and allows one to assess quickly in a
visual way the strength of the overall argument. The technique also helps
the analysts and recipients of the report to focus on key issues and
arguments rather than focusing too much attention on minor points.

When to Use It
When making an intuitive judgment, use Argument Mapping to test your
own reasoning. Creating a visual map of your reasoning and the evidence
that supports this reasoning helps you better understand the strengths,
weaknesses, and gaps in your argument. It is best to use this technique
before you write your product to ensure the quality of the argument and
refine the line of argument.

Argument Mapping and Analysis of Competing Hypotheses (ACH) are
complementary techniques that work well either separately or together.
Argument Mapping is a detailed presentation of the argument for a single
hypothesis, while ACH is a more general analysis of multiple hypotheses.
The ideal is to use both.

✶ Before you generate an Argument Map, there is considerable benefit to
be gained by using ACH to take a closer look at the viability of alternative
hypotheses. After looking at alternative hypotheses, you can then select the
best one to map.

✶ After you have identified a favored hypothesis through ACH analysis,
there is much to be gained by using Argument Mapping to check and help
present the rationale for this hypothesis.

Value Added
An Argument Map organizes one’s thinking by showing the logical
relationships between the various thoughts, both pro and con. An
Argument Map also helps the analyst recognize assumptions and identify
gaps in the available knowledge. The visualization of these relationships
makes it easier to think about a complex issue and serves as a guide for
clearly presenting to others the rationale for the conclusions. Having this
rationale available in a visual form helps both the analyst and recipients of
the report focus on the key points rather than meandering aimlessly or
going off on nonsubstantive tangents.

When used collaboratively, Argument Mapping helps ensure that a variety
of views are expressed and considered. The visual representation of an
argument also makes it easier to recognize weaknesses in opposing
arguments. It pinpoints the location of any disagreement, and it can also
serve as an objective basis for mediating a disagreement.

An Argument Map is an ideal tool for dealing with issues of cause and
effect—and for avoiding the cognitive trap of assuming that correlation
implies causation. By laying out all the arguments for and against a lead
hypothesis—and all the supporting evidence and logic—for each
argument, it is easy to evaluate the soundness of the overall argument. The
process also mitigates the intuitive trap of ignoring base rate
probabilities by encouraging the analyst to seek out and record all the
relevant facts that support each supposition. Similarly, the focus on
seeking out and recording all data that support or rebut the key points of
the argument makes it difficult for the analyst to make the mistake of
overdrawing conclusions from a small sample of data or continuing to hold
to an analytic judgment when confronted with a mounting list of evidence
that contradicts the initial conclusion.

The Method
An Argument Map starts with a hypothesis—a single-sentence statement,
judgment, or claim about which the analyst can, in subsequent statements,
present general arguments and detailed evidence, both pro and con. Boxes
with arguments are arrayed hierarchically below this statement, and these
boxes are connected with arrows. The arrows signify that a statement in
one box is a reason to believe, or not to believe, the statement in the box to
which the arrow is pointing. Different types of boxes serve different
functions in the reasoning process, and boxes use some combination of
color-coding, icons, shapes, and labels so that one can quickly distinguish
arguments supporting a hypothesis from arguments opposing it. Figure 7.4
is a very simple example of Argument Mapping, showing some of the
arguments bearing on the assessment that North Korea has nuclear
weapons.

Figure 7.4 Argument Mapping: Does North Korea Have Nuclear Weapons?

Source: Diagram produced using the bCisive Argument Mapping
software from Austhink, www.austhink.com. Reproduced with
permission.

The specific steps involved in constructing a generic Argument Map are:

✶ Write down the lead hypothesis—a single-sentence statement,
judgment, or claim at the top of the argument tree.
✶ Draw a set of boxes below this initial box and list the key reasons
why the statement is true and the key objections to the statement.
✶ Use green lines to link the reasons to the primary claim or other
conclusions they support.
✶ Use green lines to connect evidence that supports the key reason to
that reason. (Hint: State the reason and then ask yourself, “Because?”
The answer should be the evidence you are seeking.)
✶ Identify any counterevidence that is inconsistent with the reason.
Use red lines to link the counterevidence to the reasons they
contradict.
✶ Identify any objections or challenges to the primary claim or key
conclusions. Use red lines to connect the objections to the primary
claim or key conclusions.
✶ Identify any counterevidence that supports the objections or
challenges. Use red lines to link the counterevidence to the objections
or challenges it supports.
✶ Specify rebuttals, if any, with orange lines. An objection,
challenge, or counterevidence that does not have an orange-line
rebuttal suggests a flaw in the argument.
✶ Evaluate the argument for clarity and completeness, ensuring that
red-lined opposing claims and evidence have orange-line rebuttals. If
all the reasons can be rebutted, then the argument is without merit.
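
To show how an Argument Map can be checked mechanically for unrebutted objections, here is a minimal, hypothetical Python sketch. The node kinds mirror the steps above; the example claim, reason, and objection are invented.

```python
from dataclasses import dataclass, field

# A minimal Argument Map node. Kinds used here: "claim", "reason",
# "evidence", "objection", "rebuttal". Content is illustrative only.
@dataclass
class Node:
    text: str
    kind: str
    children: list = field(default_factory=list)

claim = Node("Country X is developing a missile system", "claim", [
    Node("Imports match known guidance components", "reason", [
        Node("Shipping manifests list precision gyroscopes", "evidence"),
    ]),
    Node("The imports could be for civilian aviation", "objection"),
])

def unrebutted(node):
    """Collect objections with no rebuttal attached; the text treats an
    objection lacking a rebuttal as a flaw in the argument."""
    flaws = []
    if node.kind == "objection" and not any(
            child.kind == "rebuttal" for child in node.children):
        flaws.append(node.text)
    for child in node.children:
        flaws.extend(unrebutted(child))
    return flaws

print(unrebutted(claim))  # ['The imports could be for civilian aviation']
```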

Potential Pitfalls
Argument Mapping is a challenging skill. Training and practice are
required to use the technique properly and to gain its benefits. Detailed
instructions for effective use of this technique are available at the website
listed below under “Origins of This Technique.” Assistance by someone
experienced in using the technique is necessary for first-time users.
Commercial software and freeware are available for various types of
Argument Mapping. In the absence of software, using a sticky note to
represent each box in an Argument Map drawn on a whiteboard can be
very helpful, as it is easy to move the sticky notes around as the map
evolves and changes.

Origins of This Technique
The history of Argument Mapping goes back to the early nineteenth
century. In the early twentieth century, John Henry Wigmore pioneered its
use for legal argumentation. The availability of computers to create and
modify Argument Maps in the later twentieth century prompted broader
interest in Argument Mapping in Australia for use in a variety of analytic
domains. The short description here is based on material in the Austhink
website: http://www.austhink.com/critical/pages/argument_mapping.html.

7.5 Deception Detection

Deception is an action intended by an adversary to influence the
perceptions, decisions, or actions of another to the advantage of the
deceiver. Deception Detection is a set of checklists that analysts can use to
help them determine when to look for deception, discover whether
deception actually is present, and figure out what to do to avoid being
deceived. “The accurate perception of deception in counterintelligence
analysis is extraordinarily difficult. If deception is done well, the analyst
should not expect to see any evidence of it. If, on the other hand, deception
is expected, the analyst often will find evidence of deception even when it
is not there.”7

When to Use It
Analysts should be concerned about the possibility of deception when:

✶ The potential deceiver has a history of conducting deception.
✶ Key information is received at a critical time—that is, when either
the recipient or the potential deceiver has a great deal to gain or to
lose.
✶ Information is received from a source whose bona fides are
questionable.
✶ Analysis hinges on a single critical piece of information or
reporting.
✶ Accepting new information would require the analyst to alter a key
assumption or key judgment.
✶ Accepting new information would cause the recipient to expend or
divert significant resources.
✶ The potential deceiver may have a feedback channel that
illuminates whether and how the deception information is being
processed and to what effect.

Value Added
Most intelligence analysts know they cannot assume that everything that
arrives in their in-box is valid, but few know how to factor such concerns
effectively into their daily work practices. If an analyst accepts the
possibility that some of the information received may be deliberately
deceptive, this puts a significant cognitive burden on the analyst. All the
evidence is open then to some question, and it becomes difficult to draw
any valid inferences from the reporting. This fundamental dilemma can
paralyze analysis unless practical tools are available to guide the analyst in
determining when it is appropriate to worry about deception, how best to
detect deception in the reporting, and what to do in the future to guard
against being deceived.

The measure of a good deception operation is how well it exploits the
cognitive biases of its target audience. The deceiver’s strategy usually is to
provide some intelligence or information of value to the person being
deceived in the hopes that he or she will conclude the “take” is good
enough and should be disseminated. As additional information is collected,
the satisficing bias is reinforced and the recipient’s confidence in the
information or the source usually grows, further blinding the recipient to
the possibility that he or she is being deceived. The deceiver knows that
the information being provided is highly valued, although over time some
people will begin to question the bona fides of the source. Often, this puts
the person who developed the source or acquired the information on the
defensive, and the natural reaction is to reject any and all criticism. This
cycle is usually broken only by applying structured techniques such as
Deception Detection to force a critical examination of the true quality of
the information and the potential for deception.

It is very hard to deal with deception when you are really just
trying to get a sense of what is going on, and there is so much noise
in the system, so much overload, and so much ambiguity. When
you layer deception schemes on top of that, it erodes your ability to
act.

—Robert Jervis, “Signaling and Perception in the Information
Age,” in The Information Revolution and National Security, ed.
Thomas E. Copeland, U.S. Army War College Strategic Studies
Institute, August 2000

The Method
Analysts should routinely consider the possibility that opponents or
competitors are attempting to mislead them or to hide important
information. The possibility of deception cannot be rejected simply
because there is no evidence of it; if the deception is well done,
one should not expect to see such evidence. Some circumstances in which
deception is most likely to occur are listed in the “When to Use It” section.
When such circumstances occur, the analyst, or preferably a small group
of analysts, should assess the situation using four checklists that are
commonly referred to by their acronyms: MOM, POP, MOSES, and EVE.
(See the box at the end of this section.)

Analysts have also found the following “rules of the road” helpful in
dealing with deception:8

✶ Avoid overreliance on a single source of information.
✶ Seek and heed the opinions of those closest to the reporting.
✶ Be suspicious of human sources or subsources who have not been
met with personally or for whom it is unclear how or from whom they
obtained the information.
✶ Do not rely exclusively on what someone says (verbal
intelligence); always look for material evidence (documents, pictures,
an address or phone number that can be confirmed, or some other
form of concrete, verifiable information).
✶ Look for a pattern where on several occasions a source’s reporting
initially appears correct but later turns out to be wrong and the source
offers a seemingly plausible, albeit weak, explanation for the
discrepancy.
✶ Generate and evaluate a full set of plausible hypotheses—
including a deception hypothesis, if appropriate—at the outset of a
project.
✶ Know the limitations as well as the capabilities of the potential
deceiver.

Relationship to Other Techniques
Analysts can combine Deception Detection with Analysis of Competing
Hypotheses to assess the possibility of deception. The analyst explicitly
includes deception as one of the hypotheses to be analyzed, and
information identified through the MOM, POP, MOSES, and EVE
checklists is then included as evidence in the ACH analysis.

Origins of This Technique
Deception—and efforts to detect it—has always been an integral part of
international relations. An excellent book on this subject is Michael
Bennett and Edward Waltz, Counterdeception Principles and Applications
for National Security (Boston: Artech House, 2007). The description of
Deception Detection in this book was previously published in Randolph H.
Pherson, Handbook of Analytic Tools and Techniques (Reston, VA:
Pherson Associates, LLC, 2008).

DECEPTION DETECTION CHECKLISTS

Motive, Opportunity, and Means (MOM)
✶ Motive: What are the goals and motives of the potential
deceiver?
✶ Channels: What means are available to the potential deceiver
to feed information to us?
✶ Risks: What consequences would the adversary suffer if such a
deception were revealed?
✶ Costs: Would the potential deceiver need to sacrifice sensitive
information to establish the credibility of the deception channel?
✶ Feedback: Does the potential deceiver have a feedback
mechanism to monitor the impact of the deception operation?

Past Opposition Practices (POP)
✶ Does the adversary have a history of engaging in deception?
✶ Does the current circumstance fit the pattern of past deceptions?
✶ If not, are there other historical precedents?
✶ If not, are there changed circumstances that would explain using
this form of deception at this time?

Manipulability of Sources (MOSES)
✶ Is the source vulnerable to control or manipulation by the potential
deceiver?
✶ What is the basis for judging the source to be reliable?
✶ Does the source have direct access or only indirect access to the
information?
✶ How good is the source’s track record of reporting?

Evaluation of Evidence (EVE)
✶ How accurate is the source’s reporting? Has the whole chain of
evidence, including translations, been checked?
✶ Does the critical evidence check out? Remember, the subsource
can be more critical than the source.
✶ Does evidence from one source of reporting (e.g., human
intelligence) conflict with that coming from another source (e.g.,
signals intelligence or open-source reporting)?
✶ Do other sources of information provide corroborating evidence?
✶ Is any evidence one would expect to see noteworthy by its
absence?
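
As a practical aid, the four checklists lend themselves to a simple structured worksheet. The Python sketch below prints abbreviated versions of the questions above; the worksheet format is an illustrative convenience, not part of the published technique.

```python
# Abbreviated Deception Detection checklist questions, keyed by acronym.
CHECKLISTS = {
    "MOM": [
        "What are the goals and motives of the potential deceiver?",
        "What means are available to feed information to us?",
        "Is there a feedback mechanism to monitor the deception?",
    ],
    "POP": [
        "Does the adversary have a history of engaging in deception?",
        "Does the current circumstance fit the pattern of past deceptions?",
    ],
    "MOSES": [
        "Is the source vulnerable to control or manipulation?",
        "Does the source have direct or only indirect access?",
    ],
    "EVE": [
        "Has the whole chain of evidence been checked?",
        "Is any expected evidence noteworthy by its absence?",
    ],
}

# Render a blank worksheet for analysts to fill in during a review.
for acronym, questions in CHECKLISTS.items():
    print(acronym)
    for question in questions:
        print(f"  [ ] {question}")
```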

1. See the discussion in chapter 2 contrasting the characteristics of System
1 or intuitive thinking with System 2 or analytic thinking.

2. Herbert A. Simon, “A Behavioral Model of Rational Choice,” Quarterly
Journal of Economics LXIX (February 1955): 99–101.

3. Karl Popper, The Logic of Scientific Discovery (New York: Basic Books, 1959).

4. See Popper, The Logic of Scientific Discovery.

5. A more detailed description of Te@mACH® can be found on the
Software tab at http://www.globalytica.com. The PARC ACH tool can also
be accessed through this website.

6. Richards J. Heuer Jr., Psychology of Intelligence Analysis (Washington,
DC: CIA Center for the Study of Intelligence, 1999; reprinted by Pherson
Associates, LLC, Reston, VA, 2007).

7. Richards J. Heuer Jr., “Cognitive Factors in Deception and
Counterdeception,” in Strategic Military Deception, ed. Donald C. Daniel
and Katherine L. Herbig (New York: Pergamon Press, 1982).

8. These rules are from Richards J. Heuer Jr., “Cognitive Factors in
Deception and Counterdeception,” in Strategic Military Deception, ed.
Donald C. Daniel and Katherine L. Herbig (New York: Pergamon Press,
1982); and Michael I. Handel, “Strategic and Operational Deception in
Historical Perspective,” in Strategic and Operational Deception in the
Second World War, ed. Michael I. Handel (London: Frank Cass, 1987).

8 Assessment of Cause and Effect

8.1 Key Assumptions Check
8.2 Structured Analogies
8.3 Role Playing
8.4 Red Hat Analysis
8.5 Outside-In Thinking

Attempts to explain the past and forecast the future are based on an
understanding of cause and effect. Such understanding is difficult, because
the kinds of variables and relationships studied by the intelligence analyst
are, in most cases, not amenable to the kinds of empirical analysis and
theory development that are common in academic research. The best the
analyst can do is to make an informed judgment, but such judgments
depend upon the analyst’s subject-matter expertise and reasoning ability,
and are vulnerable to various cognitive pitfalls and fallacies of reasoning.

One of the most common causes of intelligence failures is mirror imaging,
the unconscious assumption that other countries and their leaders will act
as we would in similar circumstances. Another is the tendency to attribute
the behavior of people, organizations, or governments to the nature of the
actor and underestimate the influence of situational factors. Conversely,
people tend to see their own behavior as conditioned almost entirely by the
situation in which they find themselves. This is known as the “fundamental
attribution error.”

There is also a tendency to assume that the results of an opponent’s actions
are what the opponent intended, and we are slow to accept the reality of
simple mistakes, accidents, unintended consequences, coincidences, or
small causes leading to large effects. Analysts often assume that there is a
single cause and stop their search for an explanation when the first
seemingly sufficient cause is found. Perceptions of causality are partly
determined by where one’s attention is directed; as a result, information
that is readily available, salient, or vivid is more likely to be perceived as
causal than information that is not. Cognitive limitations and common
errors in the perception of cause and effect are discussed in greater detail
in Richards Heuer’s Psychology of Intelligence Analysis.

People also make several common errors in logic. When two occurrences
often happen at the same time, or when one follows the other, they are said
to be correlated. A frequent assumption is that one occurrence causes the
other, but correlation should not be confused with causation. It often
happens that a third variable causes both. “A classic example of this is ice
cream sales and the number of drownings in the summer. The fact that ice
cream sales increase as drownings increase does not mean that either one
caused the other. Rather, a third factor, such as hot weather, could have
caused both.”1
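
The ice cream example is easy to reproduce. In the hypothetical Python simulation below, a third variable (temperature) drives both series, producing a strong correlation even though neither series causes the other; all values are simulated.

```python
import random

random.seed(1)

# Hot weather drives both ice cream sales and drownings; neither
# causes the other. All values are simulated for illustration.
temps = [random.uniform(15, 35) for _ in range(200)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]
drownings = [0.5 * t + random.gauss(0, 3) for t in temps]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strong correlation despite the absence of any causal link.
print(round(pearson(ice_cream, drownings), 2))
```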

There is no single, easy technique for mitigating the pitfalls involved in
making causal judgments, because analysts usually lack the information
they need to be certain of a causal relationship. Moreover, the complex
events that are the focus of intelligence analysis often have multiple causes
that interact with one another. Appropriate use of the techniques in this
chapter can, however, help mitigate a variety of common cognitive
limitations and improve the analyst’s odds of getting it right.

Psychology of Intelligence Analysis describes three principal strategies that
intelligence analysts use to make judgments to explain the cause of current
events or forecast what might happen in the future:

✶ Situational logic: Making expert judgments based on the known
facts and an understanding of the underlying forces at work at that
particular time and place. When an analyst is working with
incomplete, ambiguous, and possibly deceptive information, these
expert judgments usually depend upon assumptions about
capabilities, intent, or the normal workings of things in the country of
concern. The Key Assumptions Check, which is one of the most
commonly used structured techniques, is described in this chapter.
✶ Comparison with historical situations: Combining an
understanding of the facts of a specific situation with knowledge of
what happened in similar situations in the past, either in one’s
personal experience or in historical events. This strategy involves the
use of analogies. The Structured Analogies technique described in
this chapter adds rigor and increased accuracy to this process.
✶ Applying theory: Basing judgments on the systematic study of
many examples of the same phenomenon. Theories or models often
based on empirical academic research are used to explain how and
when certain types of events normally occur. Many academic models
are too generalized to be applicable to the unique characteristics of
most intelligence problems. Many others involve quantitative analysis
that is beyond the domain of structured analytic techniques as defined
in this book. However, a conceptual model that simply identifies
relevant variables and the diverse ways they might combine to cause
specific outcomes can be a useful template for guiding collection and
analysis of some common types of problems. Outside-In Thinking
can be used to explain current events or forecast the future in this
way.

I think we ought always to entertain our opinions with some
measure of doubt. I shouldn’t wish people dogmatically to believe
any philosophy, not even mine.

—Bertrand Russell, English philosopher

Overview of Techniques
Key Assumptions Check
is one of the most important and frequently used techniques. Analytic
judgment is always based on a combination of evidence and assumptions
—or preconceptions—that influence how the evidence is interpreted. The
Key Assumptions Check is a systematic effort to make explicit and
question the assumptions (i.e., mental model) that guide an analyst’s
thinking.

Structured Analogies
applies analytic rigor to reasoning by analogy. This technique requires that
the analyst systematically compare the issue of concern with multiple
potential analogies before selecting the one for which the circumstances
are most similar to the issue of concern. It seems natural to use analogies
when making decisions or forecasts as, by definition, they contain
information about what has happened in similar situations in the past.
People often recognize patterns and then consciously take actions that
were successful in a previous experience or avoid actions that previously
were unsuccessful. However, analysts need to avoid the strong tendency to
fasten onto the first analogy that comes to mind and supports their prior
view about an issue.

Role Playing
, as described here, starts with the current situation, perhaps with a real or
hypothetical new development that has just happened and to which the
players must react. This type of Role Playing, unlike much military
gaming, can often be completed in one or two days, with little advance
preparation. Even simple Role Playing exercises will stimulate creative
and systematic thinking about how a current complex situation might play
out. A Role Playing game is also an effective mechanism for bringing
together analysts who, although they work on a common problem, may
have little opportunity to meet and discuss their perspectives on that
problem. When major decisions are needed and time is short, this
technique can be used to bring analysts and decision makers together in the
same room to engage in dynamic problem solving.

Red Hat Analysis
is a useful technique for trying to perceive threats and opportunities as
others see them. Intelligence analysts frequently endeavor to forecast the
behavior of a foreign leader, group, organization, or country. In doing so,
they need to avoid the common error of mirror imaging, the natural
tendency to assume that others think and perceive the world in the same
way we do. Red Hat Analysis is of limited value without significant
cultural understanding of the country and people involved. See chapter 9
for a discussion of how this technique differs from Red Team Analysis.

Outside-In Thinking
broadens an analyst’s thinking about the forces that can influence a
particular issue of concern. This technique requires the analyst to reach
beyond his or her specialty area to consider broader social, organizational,
economic, environmental, technological, political, legal, and global forces
or trends that can affect the topic being analyzed.

8.1 Key Assumptions Check
Analytic judgment is always based on a combination of evidence and
assumptions, or preconceptions, which influences how the evidence is
interpreted.2 The Key Assumptions Check is a systematic effort to make
explicit and question the assumptions (the mental model) that guide an
analyst’s interpretation of evidence and reasoning about any particular
problem. Such assumptions are usually necessary and unavoidable as a
means of filling gaps in the incomplete, ambiguous, and sometimes
deceptive information with which the analyst must work. They are driven
by the analyst’s education, training, and experience, plus the
organizational context in which the analyst works.

The Key Assumptions Check is one of the most commonly used
techniques, because analysts typically need to make assumptions to fill
gaps. In the intelligence world, these are often assumptions about another
country’s intentions or capabilities, the way governmental processes
usually work in that country, the relative strength of political forces, the
trustworthiness or accuracy of key sources, the validity of previous
analyses on the same subject, or the presence or absence of relevant
changes in the context in which the activity is occurring. Assumptions are
often difficult to identify because many sociocultural beliefs are held
unconsciously or so firmly that they are assumed to be true and not
subject to challenge.

When to Use It
Any explanation of current events or estimate of future developments
requires the interpretation of evidence. If the available evidence is
incomplete or ambiguous, this interpretation is influenced by assumptions
about how things normally work in the country of interest. These
assumptions should be made explicit early in the analytic process.

If a Key Assumptions Check is not done at the outset of a project, it can
still prove extremely valuable if done during the coordination process or
before conclusions are presented or delivered. If the Key Assumptions
Check was done early in the process, it is often desirable to review the
assumptions again later in the process—for example, just before or just
after drafting a report. Determine whether the assumptions still hold up or
should be modified.

Value Added
Preparing a written list of one’s working assumptions at the beginning of
any project helps the analyst do the following:

✶ Identify the specific assumptions that underpin the basic analytic
line.
✶ Achieve a better understanding of the fundamental dynamics at
play.
✶ Gain a better perspective and stimulate new thinking about the
issue.
✶ Discover hidden relationships and links among key factors.
✶ Identify any developments that would cause an assumption to be
abandoned.
✶ Avoid surprises should new information render old assumptions
invalid.

A Key Assumptions Check safeguards an analyst against several classic
mental mistakes, including the tendency to overdraw conclusions when
presented with only a small amount of data, giving too much weight to
first impressions, and not addressing the impact the absence of information
might have on the analysis. A sound understanding of the assumptions
underlying an analytic judgment sets the limits for the confidence the
analyst ought to have in making a judgment.

The Method
The process of conducting a Key Assumptions Check is relatively
straightforward in concept but often challenging to put into practice. One
challenge is that participating analysts must be open to the possibility that
they could be wrong. It helps to involve in this process several well-
regarded analysts who are generally familiar with the topic but have no
prior commitment to any set of assumptions about the issue at hand. Keep
in mind that many “key assumptions” turn out to be “key uncertainties.”
Randy Pherson’s extensive experience as a facilitator of analytic projects
indicates that approximately one in every four key assumptions collapses
on careful examination.

Here are the steps in conducting a Key Assumptions Check:

✶ Gather a small group of individuals who are working the issue
along with a few “outsiders.” The primary analytic unit already is
working from an established mental model, so the “outsiders” are
needed to bring other perspectives.
✶ Ideally, participants should be asked to bring their list of
assumptions when they come to the meeting. If this was not done,
start the meeting with a silent brainstorming session. Ask each
participant to write down several assumptions on 3 × 5 cards.
✶ Collect the cards and list the assumptions on a whiteboard for all
to see.
✶ Elicit additional assumptions. Work from the prevailing analytic
line back to the key arguments that support it. Use various devices to
help prod participants’ thinking:
– Ask the standard journalist questions. Who: Are we
assuming that we know who all the key players are? What: Are
we assuming that we know the goals of the key players? When:
Are we assuming that conditions have not changed since our last
report or that they will not change in the foreseeable future?
Where: Are we assuming that we know where the real action is
going to be? Why: Are we assuming that we understand the
motives of the key players? How: Are we assuming that we
know how they are going to do it?
– Use of phrases such as “will always,” “will never,” or
“would have to be” suggests that an idea is not being challenged.
Perhaps it should be.
– Use of phrases such as “based on” or “generally the case”
suggests that a challengeable assumption is being made.
– When the flow of assumptions starts to slow down, ask,
“What else seems so obvious that one would not normally think
about challenging it?” If no one can identify more assumptions,
then there is an assumption that they do not exist, which itself is
an assumption subject to challenge.
✶ After identifying a full set of assumptions, go back and critically
examine each assumption. Ask the following questions:
– Why am I confident that this assumption is correct?
– In what circumstances might this assumption be untrue?
– Could it have been true in the past but no longer be true
today?
– How much confidence do I have that this assumption is
valid?
– If it turns out to be invalid, how much impact would this
have on the analysis?
✶ Place each assumption in one of three categories:
– Basically solid.
– Correct with some caveats.
– Unsupported or questionable—the “key uncertainties.”
✶ Refine the list, deleting those assumptions that do not hold up to
scrutiny and adding new ones that emerge from the discussion. Above
all, emphasize those assumptions that would, if wrong, lead to
changing the analytic conclusions.
✶ Consider whether key uncertainties should be converted into
intelligence collection requirements or research topics.

When concluding the analysis, remember that the probability of your
analytic conclusion being accurate cannot be greater than the weakest link
in your chain of reasoning. Review your assumptions, review the quality
of evidence and reliability of sources, and consider the overall difficulty
and complexity of the issue. Then make a rough estimate of the probability
that your analytic conclusion will turn out to be wrong. Use this number to
calculate the rough probability of your conclusion turning out to be
accurate. For example, a three in four chance (75 percent) of being right
equates to a one in four chance (25 percent) of being wrong. This focus on
how and why we might be wrong is needed to offset the natural human
tendency toward reluctance to admit we might be wrong.
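
The arithmetic behind this advice is easy to make concrete. The following
short Python sketch is our own illustration, not part of the technique as
published: the assumption labels and probabilities are invented, and it
treats the assumptions as independent, which real assumptions rarely are,
so the result should be read as a rough ceiling rather than a precise
estimate.

# Rough ceiling on a conclusion's probability, given confidence in each
# assumption in the chain of reasoning. Purely illustrative values.
assumptions = {
    "Source X reports accurately": 0.90,
    "No major policy change since the last report": 0.80,
    "The leadership intends to honor the agreement": 0.75,
}

ceiling = 1.0
for label, p in assumptions.items():
    ceiling *= p  # assumes independence; a simplification

weakest = min(assumptions, key=assumptions.get)
print(f"Weakest link: {weakest} ({assumptions[weakest]:.0%})")
print(f"Probability ceiling for the conclusion: {ceiling:.0%} "
      f"(roughly a {1 - ceiling:.0%} chance of being wrong)")

With the sample numbers, the ceiling works out to about 54 percent, well
below the 75 percent of the weakest single link, which is why a long chain
of individually plausible assumptions deserves less confidence than any
one of them.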

Figure 8.1 shows apparently flawed assumptions made in the Wen Ho Lee
espionage case during the 1990s and what further investigation showed
about these assumptions. A Key Assumptions Check could have identified
weaknesses in the case against Lee much earlier.

Relationship to Other Techniques
The Key Assumptions Check is frequently paired with other techniques
because assumptions play an important role in all structured analytic
efforts, and it is important to get them right. For example, when an
assumption is critical to an analysis, and questions remain about the
validity of that assumption, it may be desirable to follow the Key
Assumptions Check with a What If? Analysis. Imagine a future (or a
present) in which the assumption is wrong. What could have happened to
make it wrong, how could that have happened, and what are the
consequences?

There is a particularly noteworthy interaction between Key Assumptions
Check and Analysis of Competing Hypotheses (ACH). Key assumptions
need to be included as “evidence” in an ACH matrix to ensure that the
matrix is an accurate reflection of the analyst’s thinking. And analysts
frequently identify assumptions during the course of filling out an ACH
matrix. This happens when an analyst assesses the consistency or
inconsistency of an item of evidence with a hypothesis and concludes that
this judgment is dependent upon something else—usually an assumption.
Te@mACH® is the only version of ACH software that includes a field
analysts can use to keep track of the assumptions they make when
evaluating evidence against the hypotheses.

Figure 8.1 Key Assumptions Check: The Case of Wen Ho Lee

Source: 2009 Pherson Associates, LLC.
Quadrant Crunching™ (chapter 5) and Simple Scenarios (chapter 6) both
use assumptions and their opposites to generate multiple explanations or
outcomes.

Origins of This Technique
Although assumptions have been a topic of analytic concern for a long
time, the idea of developing a specific analytic technique to focus on
assumptions did not occur until the late 1990s. The discussion of Key
Assumptions Check in this book is from Randolph H. Pherson, Handbook
of Analytic Tools and Techniques (Reston, VA: Pherson Associates, LLC,
2008).

An organization really begins to learn when its most cherished
assumptions are challenged by counterassumptions. Assumptions
underpinning existing policies and procedures should therefore be
unearthed, and alternative policies and procedures put forward
based upon counterassumptions.

—Ian I. Mitroff and Richard O. Mason, Creating a Dialectical
Social Science: Concepts, Methods, and Models (1981)

8.2 Structured Analogies
The Structured Analogies technique applies increased rigor to analogical
reasoning by requiring that the issue of concern be compared
systematically with multiple analogies rather than with a single analogy.

When to Use It
It seems natural to use analogies when making judgments or forecasts
because, by definition, they contain information about what has happened
in similar situations in the past. People do this in their daily lives, and
analysts do it in their role as intelligence analysts. People recognize similar
situations or patterns and then consciously take actions that were
successful in a previous experience or avoid actions that previously were
unsuccessful. People often turn to analogical reasoning in unfamiliar or
uncertain situations where the available information is inadequate for any
other approach.

An analogy involves a perception that two things are similar and a
judgment that since they are similar in one way they are likely to be
similar in other analytically relevant ways. Analysts may observe that a
new military aircraft has several features that are similar to an existing
aircraft and conclude that the new aircraft has been designed for similar
missions. Examples of analogies on a larger scale include the successful
occupation of Germany and Japan after World War II as a reason for
believing that military occupation of Iraq will be successful, or the failed
Soviet occupation of Afghanistan as a harbinger of U.S. frustrations
associated with its involvement in the country. History records many
analogies that have led to bad decisions as well as good decisions.

When one is making any analogy, it is important to think about more than
just the similarities. It is also necessary to consider those conditions,
qualities, or circumstances that are dissimilar between the two phenomena.
This should be standard practice in all reasoning by analogy and especially
in those cases when one cannot afford to be wrong.

Many analogies are used loosely and have a broad impact on the thinking
of both decision makers and the public at large. One role for analysis is to
take analogies that are already being used by others, and that are having an
impact, and then to subject these analogies to rigorous examination.

We recommend that analysts considering the use of this technique read
Richard E. Neustadt and Ernest R. May, “Unreasoning from Analogies,”
chapter 4 in Thinking in Time: The Uses of History for Decision Makers
(New York: Free Press, 1986). We also suggest Giovanni Gavetti and Jan
W. Rivkin, “How Strategists Really Think: Tapping the Power of
Analogy,” Harvard Business Review (April 2005).

One of the most widely used tools in intelligence analysis is the
analogy. Analogies serve as the basis for constructing many
predictive models, are the basis for most hypotheses, and rightly or
wrongly, underlie many generalizations about what the other side
will do and how they will go about doing it.

—Jerome K. Clauser and Sandra M. Weir, Intelligence Research
Methodology, Defense Intelligence School (1975)

Value Added
Reasoning by analogy helps achieve understanding by reducing the
unfamiliar to the familiar. In the absence of data required for a full
understanding of the current situation, reasoning by analogy may be the
only alternative. If this approach is taken, however, one should be aware of
the significant potential for error, and the analyst should reduce the
potential for error to the extent possible through the use of the Structured
Analogies technique. Use of the technique helps protect analysts from
making the mental mistakes of overdrawing conclusions from a small set
of data and assuming the same dynamic is in play when, at first glance,
something seems to accord with their past experiences.

The benefit of the Structured Analogies technique is that it avoids the
tendency to fasten quickly on a single analogy and then focus only on
evidence that supports the similarity of that analogy. Analysts should take
into account the time required for this structured approach and may choose
to use it only when the cost of being wrong is high.

Structured Analogies is one technique for which there has been an
empirical study of its effectiveness. A series of experiments compared
Structured Analogies with unaided judgments in predicting the decisions
made in eight conflict situations. These were difficult forecasting
problems, and the 32 percent accuracy of unaided experts was only slightly
better than chance. In contrast, 46 percent of the forecasts made by using
the Structured Analogies process described here were accurate. Among
experts who were independently able to think of two or more analogies
and who had direct experience with their closest analogy, 60 percent of the
forecasts were accurate. (See “Origins of This Technique.”)

When resorting to an analogy, [people] tend to seize upon the first
that comes to mind. They do not research more widely. Nor do
they pause to analyze the case, test its fitness, or even ask in what
ways it might be misleading.

—Ernest R. May, Lessons of the Past: The Use and Misuse of
History in American Foreign Policy (1975)

The Method
Training in this technique is recommended prior to using it. Such a
training course is available at
http://www.academia.edu/1070109/Structured_Analogies_for_Forecasting.

The following is a step-by-step description of this technique:

✶ Describe the issue and the judgment or decision that needs to be
made.
✶ Identify a group of experts who are familiar with the problem and
who also have a broad background that enables them to identify
analogous situations. The more varied the backgrounds, the better.
There should usually be at least five experts.
✶ Ask the group of experts to identify as many analogies as possible
without focusing too strongly on how similar they are to the current
situation. Various universities and international organizations
maintain databases to facilitate this type of research. For example, the
Massachusetts Institute of Technology (MIT) maintains its Cascon
System for Analyzing International Conflict, a database of 85 post–
World War II conflicts that are categorized and coded to facilitate
their comparison with current conflicts of interest. The University of
Maryland maintains the International Crisis Behavior Project
database covering 452 international crises between 1918 and 2006.
Each case is coded for eighty-one descriptive variables.
✶ Review the list of potential analogies and agree on which ones
should be examined further.
✶ Develop a tentative list of categories for comparing the analogies
to determine which analogy is closest to the issue in question. For
example, the MIT conflict database codes each case according to the
following broad categories as well as finer subcategories: previous or
general relations between sides, great power and allied involvement,
external relations generally, military-strategic, international
organization (UN, legal, public opinion), ethnic (refugees,
minorities), economic/resources, internal politics of the sides,
communication and information, and actions in disputed area.
✶ Write up an account of each selected analogy, with equal focus on
those aspects of the analogy that are similar and those that are
different. The task of writing accounts of all the analogies should be
divided up among the experts. Each account can be posted on a wiki
where each member of the group can read and comment on them.
✶ Review the tentative list of categories for comparing the analogous
situations to make sure they are still appropriate. Then ask each
expert to rate the similarity of each analogy to the issue of concern.
The experts should do the rating in private using a scale from 0 to 10,
where 0 = not at all similar, 5 = somewhat similar, and 10 = very
similar.
✶ After combining the ratings to calculate an average rating for each
analogy, discuss the results and make a forecast for the current issue
of concern. This will usually be the same as the outcome of the most
similar analogy (a minimal sketch of this aggregation step follows the
list). Alternatively, identify several possible outcomes, or scenarios,
based on the diverse outcomes of analogous situations. Then use the
analogous cases to identify drivers or policy actions that might
influence the outcome of the current situation.
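
The rating-and-averaging step just described lends itself to a simple
computation. The Python sketch below is a hypothetical illustration of
that step only: the analogies and scores are invented, and the forecast
rule shown is the simple default of adopting the outcome of the most
similar analogy.

# Each expert privately scores each analogy from 0 to 10 for similarity
# to the issue of concern (0 = not at all similar, 10 = very similar).
# All names and numbers here are invented for illustration.
ratings = {
    "Analogy A (post-WWII occupation)":  [7, 6, 8, 5, 7],
    "Analogy B (1980s insurgency)":      [8, 9, 7, 8, 9],
    "Analogy C (negotiated transition)": [4, 5, 3, 6, 4],
}

averages = {analogy: sum(scores) / len(scores)
            for analogy, scores in ratings.items()}

for analogy, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{analogy}: average similarity {avg:.1f}")

closest = max(averages, key=averages.get)
print(f"Default forecast: the outcome of '{closest}'")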

Origins of This Technique
This technique is described in greater detail in Kesten C. Green and J.
Scott Armstrong, “Structured Analogies for Forecasting,” in Principles of
Forecasting: A Handbook for Researchers and Practitioners, ed. J. Scott
Armstrong (New York: Springer Science+Business Media, 2001), and
www.forecastingprinciples.com/paperpdf/Structured_Analogies.pdf.

8.3 Role Playing
In Role Playing, analysts assume the roles of the leaders who are the
subject of their analysis and act out their responses to developments. This
technique is also known as gaming, but we use the name Role Playing here
to distinguish it from the more complex forms of military gaming. This
technique is about simple Role Playing, when the starting scenario is the
current existing situation, perhaps with a real or hypothetical new
development that has just happened and to which the players must react.

When to Use It
Role Playing is often used to improve understanding of what might happen
when two or more people, organizations, or countries interact, especially
in conflict situations or negotiations. It shows how each side might react to
statements or actions from the other side. Many years ago Richards Heuer
participated in several Role Playing exercises, including one with analysts
of the Soviet Union from throughout the Intelligence Community playing
the role of Politburo members deciding on the successor to Soviet leader
Leonid Brezhnev. Randy Pherson has also organized several Role Playing
games on Latin America involving intelligence analysts and senior policy
officials. Role Playing has a desirable by-product that might be part of the
rationale for using this technique. It is a useful mechanism for bringing
together people who, although they work on a common problem, may have
little opportunity to meet and discuss their perspectives on this problem. A
Role Playing game may lead to the long-term benefits that come with
mutual understanding and ongoing collaboration. To maximize this
benefit, the organizer of the game should allow for participants to have
informal time together.

Value Added
Role Playing is a good way to see a problem from another person’s
perspective, to gain insight into how others think, or to gain insight into
how other people might react to U.S. actions. Playing a role gives one
license to think and act in a different manner. Simply trying to imagine
how another leader or country will think and react, which analysts do
frequently, is not Role Playing and is probably less effective than Role
Playing. One must actually act out the role and become, in a sense, the
person whose role is assumed. It is “living” the role that opens one’s
mental model and makes it possible to relate facts and ideas to one another
in ways that differ from habitual patterns.

Role Playing is particularly useful for understanding the potential
outcomes of a conflict situation. Parties to a conflict often act and react
many times, and they can change as a result of their interactions. There is a
body of research showing that experts using unaided judgment perform
little better than chance in predicting the outcome of such conflict.
Performance is improved significantly by the use of “simulated
interaction” (Role Playing) to act out the conflicts.3

Role Playing does not necessarily give a “right” answer, but it typically
enables the players to see some things in a new light. Players become more
conscious that “where you stand depends on where you sit.” By changing
roles, the participants can see the problem in a different context. Most
participants view their experiences as useful in providing new information
and insights.4 Bringing together analysts from various offices and agencies
offers each participant a modest reality test of his or her views.
Participants are forced to confront in a fairly direct fashion the fact that the
assumptions they make about the problem are not inevitably shared by
others. The technique helps them avoid the traps of giving too much
weight to first impressions and seeing patterns where they do not exist.

The Method
Most of the gaming done in the Department of Defense and in the
academic world is rather elaborate, so it requires substantial preparatory
work. It does not have to be that way. The preparatory work (such as
writing scripts) can be avoided by starting the game with the current
situation as already known to analysts and decision makers, rather than
with a notional scenario that participants have to learn. Just one notional
news or intelligence report is sufficient to start the action in the game. In
the authors’ experience, it is possible to have a useful political game in just
one day with only a modest investment in preparatory work.

Whenever possible, a Role Playing game should be conducted off site,
with mobile phones turned off. Being away from the office precludes
interruptions and makes it easier for participants to imagine themselves in
a different environment with a different set of obligations, interests,
ambitions, fears, and historical memories.

Each participant normally plays an individual foreign leader or stakeholder
and, in some cases, may need to prepare by doing research on the role,
interests, and personality of that individual prior to the game. The game
may simulate decision making within the leadership of a single country or
group, or the interaction between leaders in two or more countries or
groups. To keep teams down to an easily manageable size and provide an
active role for all participants, it may be appropriate for a single participant
to play two or more of the less active roles.

The analyst who plans and organizes the game leads a control team. This
team monitors time to keep the game on track, serves as the
communication channel to pass messages between teams, leads the after-
action review, and helps write the after-action report to summarize what
happened and lessons learned. The control team also plays any role that
becomes necessary but was not foreseen—for example, a United Nations
mediator. If necessary to keep the game on track or lead it in a desired
direction, the control team may introduce new events, such as a terrorist
attack that inflames emotions or a new policy statement on the issue by the
U.S. president.

After the game ends or on the following day, it is necessary to conduct an
after-action review. If there is agreement that all participants played their
roles well, there may be a natural tendency to assume that the outcome of
the game is a reasonable forecast of what will eventually happen in real
life. This natural bias in favor of the game outcome needs to be checked
during the after-action review. The control team during the game should
take notes on all statements or actions by any player that seem to have set
the direction in which the game proceeded. The group should then discuss
each of these turning points, including what other reasonable actions were
available to that player and whether other actions would have caused the
game to have a different outcome.

Potential Pitfalls
One limitation of Role Playing is the difficulty of generalizing from the
game to the real world. Just because something happens in a Role Playing
game does not necessarily mean the future will turn out that way. This
observation seems obvious, but it can actually be a problem. Because of
the immediacy of the experience and the personal impression made by the
simulation, the outcome may have a stronger impact on the participants’
thinking than is warranted by the known facts of the case. As we shall
discuss, this response needs to be addressed in the after-action review.

J. Scott Armstrong, a prominent specialist in forecasting methodologies,
has researched the literature on the validity of Role Playing as a method
for predicting decisions. He found that empirical evidence supports the
hypothesis that Role Playing provides greater accuracy than individual
expert opinion for predicting the outcome of conflicts. In five different
studies, Role Playing predicted the correct outcome for 56 percent of 143
predictions, while unaided expert opinion was correct for 16 percent of
172 predictions. (See “Origins of This Technique.”) Outcomes of conflict
situations are difficult to predict because they involve a series of actions
and reactions, with each action depending upon the other party’s previous
reaction to an earlier action, and so forth. People do not have the cognitive
capacity to play out such a complex situation in their heads. Acting out
situations and responses is helpful because it simulates this type of
sequential interaction.

Armstrong’s findings validate the use of Role Playing as a useful tool for
intelligence analysis, but they do not validate treating the outcome as a
valid prediction. Although Armstrong found Role Playing results to be
more accurate than expert opinion, the error rate for Role Playing was still
quite large. Moreover, the conditions in which Role Playing is used in
intelligence analysis are quite different from the studies that Armstrong
reviewed and conducted. When the technique is used for intelligence
analysis, the goal is not an explicit prediction but better understanding of
the situation and the possible outcomes. The method does not end with the
conclusion of the Role Playing. There must be an after-action review of
the key turning points and how the outcome might have been different if
different choices had been made at key points in the game.

Origins of This Technique
Role Playing is a basic technique in many different analytic domains. The
description of it here is based on Richards Heuer and Randy Pherson’s
personal experience, combined with information from the following
sources: Robert Mandel, “Political Gaming and Foreign Policy Making
During Crises,” World Politics 29, no. 4 (July 1977); and J. Scott
Armstrong, “Role Playing: A Method to Forecast Decisions,” in Principles
of Forecasting: A Handbook for Researchers and Practitioners, ed. J.
Scott Armstrong (New York: Springer Science+Business Media, 2001).

8.4 Red Hat Analysis
Intelligence analysts frequently endeavor to forecast the actions of an
adversary or a competitor. In doing so, they need to avoid the common
error of mirror imaging, the natural tendency to assume that others think
and perceive the world in the same way we do. Red Hat Analysis5 is a
useful technique for trying to perceive threats and opportunities as others
see them, but this technique alone is of limited value without significant
cultural understanding of the other country and people involved.

When to Use It
The chances of a Red Hat Analysis being accurate are better when one is
trying to foresee the behavior of a specific person who has the authority to
make decisions. Authoritarian leaders as well as small, cohesive groups,
such as terrorist cells, are obvious candidates for this type of analysis. The
chances of making an accurate forecast about an adversary’s or a
competitor’s decision are significantly lower when the decision is
constrained by a legislature or influenced by conflicting interest groups. In
law enforcement, Red Hat Analysis can be used effectively to simulate the
likely behavior of a criminal or a drug lord.

Value Added
There is a great deal of truth to the maxim that “where you stand depends
on where you sit.” Red Hat Analysis is a reframing technique6 that
requires the analyst to adopt—and make decisions consonant with—the
culture of a foreign leader, cohesive group, criminal, or competitor. This
conscious effort to imagine the situation as the target perceives it helps the
analyst gain a different and usually more accurate perspective on a
problem or issue. Reframing the problem typically changes the analyst’s
perspective from that of an analyst observing and forecasting an
adversary’s behavior to that of a leader who must make a difficult decision
within that operational culture. This reframing process often introduces
new and different stimuli that might not have been factored into a
traditional analysis. For example, in a Red Hat exercise, participants might
ask themselves these questions: “What are my supporters expecting from
me?” “Do I really need to make this decision now?” “What are the
consequences of making a wrong decision?” “How will the United States
respond?”

The Method

✶ Gather a group of experts with in-depth knowledge of the target,
operating environment, and senior decision maker’s personality,
motives, and style of thinking. If at all possible, try to include people
who are well grounded in the adversary’s culture, who speak the same
language, share the same ethnic background, or have lived
extensively in the region.
✶ Present the experts with a situation or a stimulus and ask them
what they would do if confronted by this situation. For example, you
might ask for a response to this situation: “The United States has just
imposed sanctions on your country.” Or: “We are about to launch a
new product. How do you think our competitors will react?” The
reason for first asking the experts how they would react is to establish
a baseline for assessing whether the adversary is likely to react
differently.7
✶ Once the experts have articulated how they would have responded
or acted, ask them to explain why they think they would behave that
way. Ask the experts to list what core values or core assumptions
were motivating their behavior or actions. Again, this step establishes
a baseline for assessing why the adversary is likely to react
differently.
✶ Once they can explain in a convincing way why they chose to act
the way they did, ask the experts to put themselves in the adversary’s
or competitor’s shoes and simulate how their adversary or competitor
would respond. At this point, the experts should ask themselves,
“Does our adversary or competitor share our values or motives or
methods of operation?” If not, then how do those differences lead
them to act in ways we might not have anticipated before engaging in
this exercise? (A schematic way to record this baseline-versus-adversary
comparison is sketched after these steps.)
✶ If trying to foresee the actions of a group or an organization,
consider using the Role Playing technique. To gain cultural expertise
that might otherwise be lacking, consider using the Delphi Method
(chapter 9) to elicit the expertise of geographically distributed
experts.
✶ In presenting the results, describe the alternatives that were
considered and the rationale for selecting the path the person or group
is believed most likely to take. Consider other less conventional
means of presenting the results of your analysis, such as the
following:
– Describing a hypothetical conversation in which the leader
and other players discuss the issue in the first person.
– Drafting a document (set of instructions, military orders,
policy paper, or directives) that the adversary or competitor
would likely generate.
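
One way to keep the baseline step honest is to record, side by side, the
stimulus, how the experts themselves would respond and why, and how the
adversary’s differing values would change that response. The Python sketch
below is purely illustrative: the record structure and every entry in it
are invented, and the analytic content must of course come from the
experts in the room.

from dataclasses import dataclass

@dataclass
class RedHatEntry:
    stimulus: str            # the situation presented to the experts
    own_response: str        # how the experts say they would react
    own_values: str          # values or assumptions driving that reaction
    adversary_values: str    # how the target's values or motives differ
    adversary_response: str  # projected reaction given those differences

entry = RedHatEntry(
    stimulus="New sanctions are imposed on your country",
    own_response="Seek negotiations to limit the economic damage",
    own_values="Economic stability outweighs prestige",
    adversary_values="Regime survival and saving face come first",
    adversary_response="Defy sanctions publicly; quietly seek new partners",
)

# The divergence between the two columns is the point of the exercise.
if entry.own_response != entry.adversary_response:
    print("Divergence found; document why the adversary's values differ:")
    print(f"  Ours:   {entry.own_values}")
    print(f"  Theirs: {entry.adversary_values}")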

Figure 8.4 shows how one might use the Red Hat Technique to catch bank
robbers.

Figure 8.4 Using Red Hat Analysis to Catch Bank Robbers

Source: Eric Hess, Senior Biometric Product Manager, MorphoTrak,
Inc. From an unpublished paper, “Facial Recognition for Criminal
Investigations,” delivered at the International Association of Law
Enforcement Intelligence Analysts, Las Vegas, 2009. Reproduced
with permission.

Potential Pitfalls
Forecasting human decisions or the outcome of a complex organizational
process is difficult in the best of circumstances. For example, how
successful would you expect to be in forecasting the difficult decisions to
be made by the U.S. president or even your local mayor? It is even more
difficult when dealing with a foreign culture and significant gaps in the
available information. Mirror imaging is hard to avoid because, in the
absence of a thorough understanding of the foreign situation and culture,
your own perceptions appear to be the only reasonable way to look at the
problem.

A common error in our perceptions of the behavior of other people,
organizations, or governments of all types is likely to be even more
common when assessing the behavior of foreign leaders or groups. This is
the tendency to attribute the behavior of people, organizations, or
governments to the nature of the actor and to underestimate the influence
of situational factors. This error is especially easy to make when one
assumes that the actor has malevolent intentions but our understanding of
the pressures on that actor is limited. Conversely, people tend to see their
own behavior as conditioned almost entirely by the situation in which they
find themselves. We seldom see ourselves as a bad person, but we often
see malevolent intent in others. This is known to cognitive psychologists
as the “fundamental attribution error.”8

Analysts should always try to see the situation from the other side’s
perspective, but if a sophisticated grounding in the culture and operating
environment of their subject is lacking, they will often be wrong and fall
victim to mirror imaging. Recognition of this uncertainty should prompt
analysts to consider using words such as “possibly” and “could happen”
rather than “likely” or “probably” when reporting the results of Red Hat
Analysis. A key first step in avoiding mirror imaging is to establish how
you would behave and the reasons why. Once this baseline is established,
the analyst then asks if the adversary would act differently and why this is
likely to occur. Is the adversary motivated by different stimuli, or does the
adversary hold different core values? The task of Red Hat Analysis then
becomes illustrating how these differences would result in different
policies or behaviors.

Relationship to Other Techniques
Red Hat Analysis differs from a Red Team Analysis in that it can be done
or organized by any analyst who needs to understand or forecast an
adversary’s behavior and who has, or can gain access to, the required
cultural expertise. A Red Team Analysis is a challenge analysis technique,
described in chapter 9. It is usually conducted by a permanent
organizational unit or a temporary group staffed by individuals who are
well qualified to think like or play the role of an adversary. The goal of
Red Hat Analysis is to exploit the available resources to develop the best
possible analysis of an adversary’s or competitor’s behavior. The goal of
Red Team Analysis is usually to challenge the conventional wisdom or an
opposing team.

Origins of This Technique
Red Hat, Red Cell, and Red Team Analysis became popular during the
Cold War when “red” symbolized the Soviet Union, and it continues to
have broad applicability. This description of Red Hat Analysis is a
modified version of that in Randolph H. Pherson, Handbook of Analytic
Tools and Techniques (Reston, VA: Pherson Associates, 2008).

To see the options faced by foreign leaders as these leaders see
them, one must understand their values and assumptions and even
their misperceptions and misunderstandings. Without such insight,
interpreting foreign leaders’ decisions or forecasting future
decisions is often little more than partially informed speculation.
Too frequently, behavior of foreign leaders appears ‘irrational’ or
‘not in their own best interest.’ Such conclusions often indicate
analysts have projected American values and conceptual
frameworks onto the foreign leaders and societies, rather than
understanding the logic of the situation as it appears to them.

—Richards J. Heuer Jr., Psychology of Intelligence Analysis (1999)

8.5 Outside-In Thinking
Outside-In Thinking identifies the broad range of global, political,
environmental, technological, economic, or social forces and trends that
are outside the analyst’s area of expertise but that may profoundly affect
the issue of concern. Many analysts tend to think from the inside out,
focused on factors in their specific area of responsibility with which they
are most familiar.

When to Use It
This technique is most useful in the early stages of an analytic process
when analysts need to identify all the critical factors that might explain an
event or could influence how a particular situation will develop. It should
be part of the standard process for any project that analyzes potential
future outcomes, for this approach covers the broader environmental
context from which surprises and unintended consequences often come.

Outside-In Thinking also is useful if a large database is being assembled
and needs to be checked to ensure that no important field in the database
architecture has been overlooked. In most cases, important categories of
information (or database fields) are easily identifiable early on in a
research effort, but invariably one or two additional fields emerge after an
analyst or group of analysts is well into a project, forcing them to go back
and review all previous files, recoding for that new entry. Typically, the
overlooked fields are in the broader environment over which the analysts
have little control. By applying Outside-In Thinking, analysts can better
visualize the entire set of data fields early on in the research effort.

Value Added
Most analysts focus on familiar factors within their field of specialty, but
we live in a complex, interrelated world where events in our little niche of
that world are often affected by forces in the broader environment over
which we have no control. The goal of Outside-In Thinking is to help
analysts see the entire picture, not just the part of the picture with which
they are already familiar.

Outside-In Thinking reduces the risk of missing important variables early
in the analytic process, when analysts might otherwise focus on a narrow
range of alternatives representing only marginal change. It encourages analysts to rethink a
problem or an issue while employing a broader conceptual framework.
This technique is illustrated in Figure 8.5. By casting their net broadly at
the beginning, analysts are more likely to see an important dynamic or to
include a relevant alternative hypothesis. The process can provide new
insights and uncover relationships that were not evident from the
intelligence reporting. In doing so, the technique helps analysts think in
terms that extend beyond day-to-day reporting. It stimulates them to
address the absence of information and identify more fundamental forces
and factors that should be considered.

Figure 8.5 Inside-Out Analysis versus Outside-In Approach

Source: 2009 Pherson Associates, LLC.

The Method
✶ Generate a generic description of the problem or phenomenon to
be studied.
✶ Form a group to brainstorm the key forces and factors that could
have an impact on the topic but over which the subject can exert little
or no influence, such as globalization, the emergence of new
technologies, historical precedent, and the growing impact of social
media.
✶ Employ the mnemonic STEEP + 2 to trigger new ideas (Social,
Technical, Economic, Environmental, and Political, plus Military and
Psychological); a small checklist sketch follows these steps.
✶ Move down a level of analysis and list the key factors about which
some expertise is available.
✶ Assess specifically how each of these forces and factors could
have an impact on the problem.

✶ Ascertain whether these forces and factors actually do have an
impact on the issue at hand, basing your conclusion on the available
evidence.
✶ Generate new intelligence collection tasking or research priorities
to fill in information gaps.
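
Because STEEP + 2 is a fixed checklist, it is easy to turn into a reusable
worksheet. The Python sketch below is a minimal, hypothetical example: the
category list comes from the mnemonic above, while the topic and sample
forces are invented. Empty categories are flagged so the group remembers
to revisit them.

# STEEP + 2 categories, taken from the mnemonic above.
STEEP_PLUS_2 = ["Social", "Technical", "Economic", "Environmental",
                "Political", "Military", "Psychological"]

def outside_in_worksheet(topic, forces_by_category):
    # Print a simple Outside-In worksheet, flagging empty categories.
    print(f"Outside-In Thinking worksheet: {topic}")
    for category in STEEP_PLUS_2:
        forces = forces_by_category.get(category, [])
        print(f"{category}:")
        if not forces:
            print("  (no forces identified yet; revisit this category)")
        for force in forces:
            print(f"  - {force}")

# Invented entries for illustration only.
outside_in_worksheet("Stability of Country X", {
    "Economic": ["commodity price collapse", "declining remittance flows"],
    "Social": ["youth unemployment", "rapid urban migration"],
    "Technical": ["spread of encrypted social media"],
})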

Relationship to Other Techniques
Outside-In Thinking is essentially the same as a business analysis
technique that goes by different acronyms, such as STEEP, STEEPLED,
PEST, or PESTLE. For example, PEST is an acronym for Political,
Economic, Social, and Technological, while STEEPLED also includes
Legal, Ethical, and Demographic. All require the analysis of external
factors that may have either a favorable or unfavorable influence on an
organization.

Origins of This Technique
This technique has been used in a planning and management environment
to ensure that outside factors that might affect an outcome have been
identified. The Outside-In Thinking described here is from Randolph H.
Pherson, Handbook of Analytic Tools and Techniques (Reston, VA:
Pherson Associates, LLC, 2008).

1. David Roberts, “Reasoning: Errors in Causation,”
http://writing2.richmond.edu/WRITING/wweb/reason2b.html.

2. Stuart K. Card, “The Science of Analytical Reasoning,” in Illuminating
the Path: The Research and Development Agenda for Visual Analytics, ed.
James J. Thomas and Kristin A. Cook (Richland, WA: National
Visualization and Analytics Center, Pacific Northwest National
Laboratory, 2005), http://nvac.pnl.gov/agenda.stm.

3. Kesten C. Green, “Game Theory, Simulated Interaction, and Unaided
Judgment for Forecasting Decisions in Conflicts,” International Journal of
Forecasting 21 (July–September 2005): 463–472; Kesten C. Green,
“Forecasting Decisions in Conflict Situations: A Comparison of Game
Theory, Role-Playing, and Unaided Judgment,” International Journal of
Forecasting 18 (July–September 2002): 321–344.

4. Ibid.

5. This technique should not be confused with Edward de Bono’s Six
Thinking Hats technique.

6. See the discussion of reframing in the introduction to chapter 9.

7. The description of how to conduct a Red Hat Analysis has been updated
since publication of the first edition to capture insights provided by Todd
Sears, a former DIA analyst, who noted that mirror imaging is unlikely to
be overcome simply by sensitizing analysts to the problem. The value of a
structured technique like Red Hat Analysis is that it requires analysts to
think first about what would motivate them to act before articulating why a
foreign adversary would act differently.

8. Richards J. Heuer Jr., Psychology of Intelligence Analysis (Washington,
DC: CIA Center for the Study of Intelligence, 1999; reprinted by Pherson
Associates, LLC, 2007), 134–138.

9 Challenge Analysis

9.1 Premortem Analysis
9.2 Structured Self-Critique
9.3 What If? Analysis
9.4 High Impact/Low Probability Analysis
9.5 Devil’s Advocacy
9.6 Red Team Analysis
9.7 Delphi Method

Challenge analysis encompasses a set of analytic techniques that have also
been called contrarian analysis, alternative analysis, competitive analysis,
red team analysis, and Devil’s Advocacy. What all of these have in
common is the goal of challenging an established mental model or analytic
consensus in order to broaden the range of possible explanations or
estimates that are seriously considered. The fact that this same activity has
been called by so many different names suggests there has been some
conceptual diversity about how and why these techniques are being used
and what might be accomplished by their use.

There is broad recognition in the U.S. Intelligence Community that failure
to question a consensus judgment, or a long-established mental model, has
been a consistent feature of most significant intelligence failures. The
postmortem analysis of virtually every major U.S. intelligence failure
since Pearl Harbor has identified an analytic mental model (mindset) as a
key factor contributing to the failure. The situation changed, but the
analyst’s mental model, based on past experience, did not keep pace with
that change or did not recognize all the ramifications of the change.

This record of analytic failures has generated discussion about the
“paradox of expertise.”1 The experts can be the last to recognize the reality
and significance of change. For example, few specialists on the Middle
East foresaw the Arab Spring, few experts on the Soviet Union foresaw its
collapse, and the experts on Germany were the last to accept that Germany
was going to be reunified. Going all the way back to the Korean War,
experts on China were saying that China would not enter the war—until it
did.

As we noted in chapter 1, an analyst’s mental model can be regarded as a
distillation of everything the analyst knows about how things normally
work in a certain country or a specific scientific field. It tells the analyst,
sometimes subconsciously, what to look for, what is important, and how to
interpret what he or she sees. A mental model formed through education
and experience serves an essential function: it is what enables the analyst
to provide on a daily basis reasonably good intuitive assessments or
estimates about what is happening or likely to happen.

The problem is that a mental model that has previously provided accurate
assessments and estimates for many years can be slow to change. New
information received incrementally over time is easily assimilated into
one’s existing mental model, so the significance of gradual change over
time is easily missed. It is human nature to see the future as a continuation
of the past. As a general rule, major trends and events evolve slowly, and
the future is often foreseen by skilled intelligence analysts. However, life
does not always work this way. The most significant intelligence failures
have been failures to foresee historical discontinuities, when history pivots
and changes direction. Much the same can be said about economic experts
not anticipating the U.S. financial crisis of 2008.

Such surprising events are not foreseeable unless they are first imagined so
that one can start examining the world from a different perspective. That is
what this chapter is about—techniques that enable the analyst, and
eventually the intelligence consumer or business client, to evaluate events
from a different perspective—in other words, with a different mental
model.

There is also another logical rationale for consistently challenging
conventional wisdom. Former CIA director Michael Hayden has stated
that “our profession deals with subjects that are inherently ambiguous, and
often deliberately hidden. Even when we’re at the top of our game, we can
offer policymakers insight, we can provide context, and we can give them
a clearer picture of the issue at hand, but we cannot claim certainty for our
judgments.” The director went on to suggest that getting it right seven
times out of ten might be a realistic expectation.2

Director Hayden’s estimate of seven times out of ten is supported by a
quick look at verbal expressions of probability used in intelligence reports.
“Probable” seems to be the most common verbal expression of the
likelihood of an assessment or estimate. Unfortunately, there is no
consensus within the Intelligence Community on what “probable” and
other verbal expressions of likelihood mean when they are converted to
numerical percentages. For discussion here, we accept Sherman Kent’s
definition of “probable” as meaning “75% plus or minus 12%.”3 This
means that analytic judgments described as “probable” are expected to be
correct roughly 75 percent of the time—and, therefore, incorrect or off
target about 25 percent of the time.

Logically, therefore, one might expect that one of every four judgments
that intelligence analysts describe as “probable” will turn out to be wrong.
This perspective broadens the scope of what challenge analysis might
accomplish. It should not be limited to questioning the dominant view to
be sure it’s right. Even if the challenge analysis confirms the initial
probability judgment, it should go further to seek a better understanding of
the other 25 percent. In what circumstances might there be a different
assessment or outcome, what would that be, what would constitute
evidence of events moving in that alternative direction, how likely is it,
and what would be the consequences? As we will discuss in the next
chapter, on conflict management, an understanding of these probabilities
should reduce the frequency of unproductive conflict between opposing
views. Analysts who recognize a one in four chance of being wrong should
at least be open to consideration of alternative assessments or estimates to
account for the other 25 percent.
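
The arithmetic in this paragraph is worth making explicit. The short
Python sketch below simply applies Kent’s band for “probable” to a
hypothetical batch of judgments; the batch size is invented for
illustration and does not come from any actual scoring of intelligence
products.

# Sherman Kent's "probable": 75 percent, plus or minus 12 percent.
kent_band = (0.63, 0.75, 0.87)

judgments = 100  # a hypothetical batch of judgments, all labeled "probable"
for p in kent_band:
    expected_wrong = judgments * (1 - p)
    print(f"If 'probable' means {p:.0%}, expect about {expected_wrong:.0f} "
          f"of {judgments} such judgments to be wrong")

At the midpoint of the band, one in four “probable” judgments should be
expected to miss, which is the basis for the one-in-four figure used above.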

This chapter describes three types of challenge analysis techniques: self-
critique, critique of others, and solicitation of critique by others:

✶ Self-critique: Two techniques that help analysts challenge their
own thinking are Premortem Analysis and Structured Self-Critique.
These techniques can counteract the pressures for conformity or
consensus that often suppress the expression of dissenting opinions in
an analytic team or group. We adapted Premortem Analysis from the
business world and applied it to the analytic process more broadly.
✶ Critique of others: Analysts can use What If? Analysis or High
Impact/Low Probability Analysis to tactfully question the
conventional wisdom by making the best case for an alternative
explanation or outcome.
✶ Critique by others: Several techniques are available for seeking
out critique by others. Devil’s Advocacy is a well-known example of
that. The term “Red Team” is used to describe a group that is
assigned to take an adversarial perspective. The Delphi Method is a
structured process for eliciting opinions from a panel of outside
experts.

What gets us into trouble is not what we don’t know, it’s what we
know for sure that just ain’t so.

—Mark Twain, American author and humorist

Reframing Techniques
Three of the techniques in this chapter work by a process called reframing.
A frame is any cognitive structure that guides the perception and
interpretation of what one sees. A mental model of how things normally
work can be thought of as a frame through which an analyst sees and
interprets evidence. An individual or a group of people can change their
frame of reference, and thus challenge their own thinking about a problem,
simply by changing the questions they ask or changing the perspective
from which they ask the questions. Analysts can use this reframing
technique when they need to generate new ideas, when they want to see
old ideas from a new perspective, or any other time when they sense a
need for fresh thinking.4

Reframing helps analysts break out of a mental rut by activating a different
set of synapses in their brain. To understand the power of reframing and
why it works, it is necessary to know a little about how the human brain
works. The brain is now believed to have roughly 100 billion neurons,
each analogous to a computer chip capable of storing information. Each
neuron has octopus-like arms called axons and dendrites. Electrical
impulses flow through these arms and are ferried by neurotransmitting
chemicals across the synaptic gap between neurons. Whenever two
neurons are activated, the connections, or synapses, between them are
strengthened. The more frequently those same neurons are activated, the
stronger the path between them.

Once a person has started thinking about a problem one way, the same
mental circuits or pathways are activated and strengthened each time the
person thinks about it. The benefit of this is that it facilitates the retrieval
of information one wants to remember. The downside is that these
pathways become mental ruts that make it difficult to see the information
from a different perspective. When an analyst reaches a judgment or
decision, this thought process is embedded in the brain. Each time the
analyst thinks about it, the same synapses are triggered, and the analyst’s
thoughts tend to take the same well-worn pathway through the brain.
Getting the same answer each time one thinks about it builds confidence,
and often overconfidence, in that answer.

Another way of understanding this process is to compare these mental ruts
to the route a skier cuts while looking for the best path down a mountain
(see Figure 9.0). After several runs, the skier has identified the ideal path
and most likely will remain stuck in this rut unless other stimuli or barriers
force him or her to break out and explore new opportunities.

Figure 9.0 Mount Brain: Creating Mental Ruts

Fortunately, it is fairly easy to open the mind to think in different ways,
and the techniques described in this chapter are designed to serve that
function. The trick is to restate the question, task, or problem from a
different perspective that activates a different set of synapses in the brain.
Each of the three applications of reframing described in this chapter does
this in a different way. Premortem Analysis asks analysts to imagine
themselves at some future point in time, after having just learned that a
previous analysis turned out to be completely wrong. The task then is to
figure out how and why it might have gone wrong. Structured Self-
Critique asks a team of analysts to reverse its role from advocate to critic
in order to explore potential weaknesses in the previous analysis. This
change in role can empower analysts to express concerns about the
consensus view that might previously have been suppressed. What If?
Analysis asks the analyst to imagine that some unlikely event has
occurred, and then to explain how it could happen and the implications of
the event.

These techniques are generally more effective in a small group than with a
single analyst. Their effectiveness depends in large measure on how fully
and enthusiastically participants in the group embrace the imaginative or
alternative role they are playing. Just going through the motions is of
limited value.

Overview of Techniques
Premortem Analysis
reduces the risk of analytic failure by identifying and analyzing a potential
failure before it occurs. Imagine yourself several years in the future. You
suddenly learn from an unimpeachable source that your analysis or
estimate or strategic plan was wrong. Then imagine what could have
happened to cause it to be wrong. Looking back from the future to explain
something that has happened is much easier than looking into the future to
forecast what will happen, and this exercise helps identify problems one
has not foreseen.

Structured Self-Critique
is a procedure that a small team or group uses to identify weaknesses in its
own analysis. All team or group members don a hypothetical black hat and
become critics rather than supporters of their own analysis. From this
opposite perspective, they respond to a list of questions about sources of
uncertainty, the analytic processes that were used, critical assumptions,
diagnosticity of evidence, anomalous evidence, information gaps, changes
in the broad environment in which events are happening, alternative
decision models, availability of cultural expertise, and indicators of
possible deception. Looking at the responses to these questions, the team
reassesses its overall confidence in its own judgment.

What If? Analysis
is an important technique for alerting decision makers to an event that
could happen, or is already happening, even if it may seem unlikely at the
time. It is a tactful way of suggesting to decision makers the possibility
that they may be wrong. What If? Analysis serves a function similar to that
of Scenarios Analysis—it creates an awareness that prepares the mind to
recognize early signs of a significant change, and it may enable a decision
maker to plan ahead for that contingency. The analyst imagines that an
event has occurred and then considers how the event could have unfolded.

High Impact/Low Probability Analysis
is used to sensitize analysts and decision makers to the possibility that a
low-probability event might actually happen and stimulate them to think
about measures that could be taken to deal with the danger or to exploit the
opportunity if it does occur. The analyst assumes the event has occurred,
and then figures out how it could have happened and what the
consequences might be.

Devil’s Advocacy
is a technique in which a person who has been designated the Devil’s
Advocate, usually by a responsible authority, makes the best possible case
against a proposed analytic judgment, plan, or decision.

Red Team Analysis
as described here is any project initiated by management to marshal the
specialized substantive, cultural, or analytic skills required to challenge
conventional wisdom about how an adversary or competitor thinks about
an issue. See also Red Hat Analysis in chapter 8.

Delphi Method
is a procedure for obtaining ideas, judgments, or forecasts electronically
from a geographically dispersed panel of experts. It is a time-tested,
extremely flexible procedure that can be used on any topic for which
expert judgment can contribute. It is included in this chapter because it can
be used to identify divergent opinions that challenge conventional wisdom.
It can also be used as a double-check on any research finding. If two
analyses from different analysts who are using different techniques arrive
at the same conclusion, this is grounds for a significant increase in
confidence in that conclusion. If the two conclusions disagree, this is also
valuable information that may open a new avenue of research.

9.1 Premortem Analysis


The goal of Premortem Analysis is to reduce the risk of surprise and the
subsequent need for a postmortem investigation of what went wrong. It is
an easy-to-use technique that enables a group of analysts who have been
working together on any type of future-oriented analysis or project to
challenge effectively the accuracy of their own conclusions. It is a specific
application of the reframing method, in which restating the question, task,
or problem from a different perspective enables one to see the situation
differently and come up with different ideas.

When to Use It
Premortem Analysis should be used by analysts who can devote a few
hours to challenging their own analytic conclusions about the future to see
where they might be wrong. It may be used by a single analyst, but, like all
structured analytic techniques, it is most effective when used in a small
group.

A Premortem as an analytic aid was first used in the context of decision
analysis by Gary Klein in his 1998 book, Sources of Power: How People
Make Decisions. He reported using it in training programs to show
decision makers that they typically are overconfident that their decisions
and plans will work. After the trainees formulated a plan of action, they
were asked to imagine that it is several months or years into the future, and
their plan has been implemented but has failed. They were then asked to
describe how it might have failed, despite their original confidence in the
plan. The trainees could easily come up with multiple explanations for the
failure, but none of these reasons were articulated when the plan was first
proposed and developed.

This assignment provided the trainees with evidence of their
overconfidence, and it also demonstrated that the Premortem strategy can
be used to expand the number of interpretations and explanations that
decision makers consider. Klein explains: “We devised an exercise to take
them out of the perspective of defending their plan and shielding
themselves from flaws. We tried to give them a perspective where they
would be actively searching for flaws in their own plan.”5 Klein reported
his trainees showed a “much higher level of candor” when evaluating their
own plans after being exposed to the Premortem exercise, as compared
with other, more passive attempts at getting them to self-critique their own
plans.6

Value Added

It is important to understand what it is about the Premortem Analysis
approach that helps analysts identify potential causes of error that
previously had been overlooked. Briefly, there are two creative processes
at work here. First, the questions are reframed—this exercise typically
elicits responses that are different from the original ones. Asking questions
about the same topic, but from a different perspective, opens new
pathways in the brain, as we noted in the introduction to this chapter.
Second, the Premortem approach legitimizes dissent. For various reasons,
many members of small groups suppress dissenting opinions, leading to
premature consensus. In a Premortem Analysis, all analysts are asked to
make a positive contribution to group goals by identifying weaknesses in
the previous analysis.

Research has documented that an important cause of poor group decisions
is the desire for consensus. This desire can lead to premature closure and
agreement with majority views regardless of whether they are perceived as
right or wrong. Attempts to improve group creativity and decision making
often focus on ensuring that a wider range of information and opinions is
presented to the group and given consideration.7

Group members tend to go along with the group leader, with the first
group member to stake out a position, or with an emerging majority
viewpoint for many reasons. Most benign is the common rule of thumb
that when we have no firm opinion, we take our cues from the opinions of
others. We follow others because we believe (often rightly) they know
what they are doing. Analysts may also be concerned that their views will
be critically evaluated by others, or that dissent will be perceived as
disloyalty or as an obstacle to progress that will just make the meeting last
longer.

In a candid newspaper column written long before he became CIA
Director, Leon Panetta wrote that “an unofficial rule in the bureaucracy
says that to ‘get along, go along.’ In other words, even when it is obvious
that mistakes are being made, there is a hesitancy to report the failings for
fear of retribution or embarrassment. That is true at every level, including
advisers to the president. The result is a ‘don’t make waves’ mentality . . .
that is just another fact of life you tolerate in a big bureaucracy.”8

A significant value of Premortem Analysis is that it legitimizes dissent.
Team members who may have previously suppressed questions or doubts
because they lacked confidence are empowered by the technique to
express previously hidden divergent thoughts. If this change in perspective
is handled well, each team member will know that he or she adds value to the
exercise for being critical of the previous judgment, not for supporting it.

It is not bigotry to be certain we are right; but it is bigotry to be
unable to imagine how we might possibly have gone wrong.

—G. K. Chesterton, English writer

The Method
The best time to conduct a Premortem Analysis is shortly after a group has
reached a conclusion on an action plan, but before any serious drafting of
the report has been done. If the group members are not already familiar
with the Premortem technique, the group leader, another group member, or
a facilitator steps up and makes a statement along the lines of the
following: “Okay, we now think we know the right answer, but we need to
double-check this. To free up our minds to consider other possibilities,
let’s imagine that we have made this judgment, our report has gone
forward and been accepted, and now, x months or years later, we gain
access to a crystal ball. Peering into this ball, we learn that our analysis
was wrong, and things turned out very differently from the way we had
expected. Now, working from that perspective in the future, let’s put our
imaginations to work and brainstorm what could have possibly happened
to cause our analysis to be spectacularly wrong.”

Ideally, a separate meeting should be held for the Premortem discussion so
that participants have time prior to the meeting to think about what might
have happened to cause the analytic judgment to be wrong. They might be
asked to bring to the meeting a list of things that might have gone
differently than expected. To set the tone for the Premortem meeting,
analysts should be advised not to focus only on the hypotheses,
assumptions, and key evidence already discussed during their group
meetings. Rather, they should also look at the situation from the
perspective of their own life experiences. They should think about how
fast the world is changing, how many government programs are
unsuccessful or have unintended consequences, or how difficult it is to see
things from the perspective of a foreign culture. This type of thinking may
bring a different part of analysts’ brains into play as they are mulling over
what could have gone wrong with their analysis. Outside-In Thinking
(chapter 8) can also be helpful for this purpose.

At the Premortem meeting, the group leader or a facilitator writes the ideas
on a whiteboard or flip chart. To ensure that no single person dominates
the presentation of ideas, the Nominal Group version of brainstorming
might be used. With that technique, the facilitator goes around the room in
round-robin fashion, taking one idea from each participant until all have
presented every idea on their lists. (See Nominal Group Technique in
chapter 5.) After all ideas are posted on the board and made visible to all,
the group discusses what it has learned from this exercise, and what action,
if any, the group should take. This generation and initial discussion of
ideas can often be accomplished in a single two-hour meeting, which is a
small investment of time to undertake a systematic challenge to the
group’s thinking.
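
In compact form, the round-robin collection rule can be expressed in a few
lines of code. The sketch below is a minimal illustration in Python; the
analyst names and failure ideas are hypothetical, and the loop simply mirrors
the facilitation rule described above: take one idea from each participant
per pass until every list is exhausted.

    # Round-robin collection of Premortem ideas (Nominal Group style):
    # take one idea from each participant per pass until all lists are empty.
    participants = {
        "Analyst A": ["key source was a fabricator", "assumption X failed"],
        "Analyst B": ["adversary used deception"],
        "Analyst C": ["policy shifted after elections", "data was outdated"],
    }

    board = []  # ideas as posted to the whiteboard, in collection order
    while any(participants.values()):
        for name, ideas in participants.items():
            if ideas:
                board.append((name, ideas.pop(0)))

    for name, idea in board:
        print(f"{name}: {idea}")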

One expected result is an increased appreciation of the uncertainties
inherent in any assessment of the future. Another outcome might be
identification of indicators that, if observed, would provide early warning
that events are not proceeding as expected. Such findings may lead to
modification of the existing analytic framework.

If the Premortem Analysis leads the group to reconsider and revise its
analytic judgment, the questions shown in Figure 9.1 are a good starting
point. For a more thorough set of self-critique questions, see the discussion
of Structured Self-Critique, which involves changing one’s role from
advocate to critic of one’s previous analysis.

Premortem Analysis may identify problems, conditions, or alternatives that
require rethinking the group’s original position. In such a case, the
Premortem has done its job by alerting the group that it has a problem,
but it does not necessarily tell the group exactly what the problem is or
how to fix it. That is beyond the scope of the Premortem, which does not
systematically assess the likelihood of these things happening, evaluate
other possible sources of analytic error, or make a comprehensive
assessment of alternative courses of action.

Relationship to Other Techniques
If the Premortem Analysis identifies a significant problem, the natural
follow-up technique for addressing this problem is Structured Self-
Critique, described in the next section.

Figure 9.1 Structured Self-Critique: Key Questions

Source: 2009 Pherson Associates, LLC.

Origins of This Technique


Premortem Analysis was originally developed by Gary Klein to train
managers to recognize their habitual overconfidence that their plans and
decisions will lead to success. The authors adapted Premortem Analysis
and redefined it as an intelligence analysis technique. For original
references on this subject, see Gary Klein, Sources of Power: How People
Make Decisions (Cambridge, MA: MIT Press, 1998); Klein, Intuition at
Work: Why Developing Your Gut Instinct Will Make You Better at What
You Do (New York: Doubleday, 2002); and Klein, “Performing a Project
PreMortem,” Harvard Business Review (September 2007).

9.2 Structured Self-Critique


Structured Self-Critique is a systematic procedure that a small team or
group can use to identify weaknesses in its own analysis. All team or
group members don a hypothetical black hat and become critics rather than
supporters of their own analysis. From this opposite perspective, they
respond to a list of questions about sources of uncertainty, the analytic
processes that were used, critical assumptions, diagnosticity of evidence,
anomalous evidence, information gaps, changes in the broad environment
in which events are happening, alternative decision models, availability of
cultural expertise, and indicators of possible deception. As it reviews
responses to these questions, the team reassesses its overall confidence in
its own judgment.

When to Use It
You can use Structured Self-Critique productively to look for weaknesses
in any analytic explanation of events or estimate of the future. It is
specifically recommended for use in the following ways:

✶ As the next step if the Premortem Analysis raises unresolved
questions about any estimated future outcome or event.
✶ As a double-check prior to the publication of any major product
such as a National Intelligence Estimate or a corporate strategic plan.
✶ As one approach to resolving conflicting opinions (as discussed in
chapter 10 under Adversarial Collaboration).

The amount of time required to work through the Structured Self-Critique
will vary greatly depending upon how carefully the previous analysis was
done. The questions listed in the method as described later in this section
are actually just a prescription for careful analysis. To the extent that these
same questions have been explored during the initial analysis, the time
required for the Structured Self-Critique is reduced. If these questions are
being asked for the first time, the process will take longer. As analysts gain
experience with Structured Self-Critique, they may have less need for
certain parts of it, as those parts will have been internalized and will have
been done during the initial analysis (as they should have been).

Value Added
When people are asked questions about the same topic but from a different
perspective, they often give different answers than the ones they gave
before. For example, if a team member is asked if he or she supports the
team’s conclusions, the answer will usually be “yes.” However, if all team
members are asked to look for weaknesses in the team’s argument, that
member may give a quite different response.

This change in the frame of reference is intended to change the group
dynamics. The critical perspective should always generate more critical
ideas. Team members who previously may have suppressed questions or
doubts because they lacked confidence or wanted to be good team players
are now empowered to express those divergent thoughts. If the change in
perspective is handled well, all team members will know that they win
points with their colleagues for being critical of the previous judgment, not
for supporting it.

The Method
Start by reemphasizing that all analysts in the group are now wearing a
black hat. They are now critics, not advocates, and they will now be
judged by their ability to find weaknesses in the previous analysis, not on
the basis of their support for the previous analysis. Then work through the
following topics or questions:

✶ Sources of uncertainty: Identify the sources and types of
uncertainty in order to set reasonable expectations for what the team
might expect to achieve. Should one expect to find: (1) a single
correct or most likely answer, (2) a most likely answer together with
one or more alternatives that must also be considered, or (3) a number
of possible explanations or scenarios for future development? To
judge the uncertainty, answer these questions:
– Is the question being analyzed a puzzle or a mystery?
Puzzles have answers, and correct answers can be identified if
enough pieces of the puzzle are found. A mystery has no single
definitive answer; it depends upon the future interaction of many
factors, some known and others unknown. Analysts can frame
the boundaries of a mystery only “by identifying the critical
factors and making an intuitive judgment about how they have
interacted in the past and might interact in the future.”9
– How does the team rate the quality and timeliness of its
evidence?
– Are there a greater than usual number of assumptions
because of insufficient evidence or the complexity of the
situation?
– Is the team dealing with a relatively stable situation or with a
situation that is undergoing, or potentially about to undergo,
significant change?
✶ Analytic process: In the initial analysis, see if the team did the
following. Did it identify alternative hypotheses and seek out
information on these hypotheses? Did it identify key assumptions?
Did it seek a broad range of diverse opinions by including analysts
from other offices, agencies, academia, or the private sector in the
deliberations? If these steps were not taken, the odds of the team
having a faulty or incomplete analysis are increased. Either consider
doing some of these things now or lower the team’s level of
confidence in its judgment.
✶ Critical assumptions: Assuming that the team has already
identified key assumptions, the next step is to identify the one or two
assumptions that would have the greatest impact on the analytic
judgment if they turned out to be wrong. In other words, if the
assumption is wrong, the judgment will be wrong. How recent and
well documented is the evidence that supports each such assumption?
Brainstorm circumstances that could cause each of these assumptions
to be wrong, and assess the impact on the team’s analytic judgment if
the assumption is wrong. Would the reversal of any of these
assumptions support any alternative hypothesis? If the team has not
previously identified key assumptions, it should do a Key
Assumptions Check now.
✶ Diagnostic evidence: Identify alternative hypotheses and the
several most diagnostic items of evidence that enable the team to
reject alternative hypotheses. For each item, brainstorm for any
reasonable alternative interpretation of this evidence that could make
it consistent with an alternative hypothesis. See Diagnostic Reasoning
in chapter 7.
✶ Information gaps: Are there gaps in the available information, or
is some of the information so dated that it may no longer be valid? Is
the absence of information readily explainable? How should absence
of information affect the team’s confidence in its conclusions?
✶ Missing evidence: Is there any evidence that one would expect to
see in the regular flow of intelligence or open-source reporting if the
analytic judgment is correct, but that turns out not to be there?
✶ Anomalous evidence: Is there any anomalous item of evidence
that would have been important if it had been believed or if it could
have been related to the issue of concern, but that was rejected as
unimportant because it was not believed or its significance was not
known? If so, try to imagine how this item might be a key clue to an
emerging alternative hypothesis.
✶ Changes in the broad environment: Driven by technology and
globalization, the world as a whole seems to be experiencing social,
technical, economic, environmental, and political changes at a faster
rate than ever before in history. Might any of these changes play a
role in what is happening or will happen? More broadly, what key
forces, factors, or events could occur independently of the issue that
is the subject of analysis that could have a significant impact on
whether the analysis proves to be right or wrong?
✶ Alternative decision models: If the analysis deals with decision
making by a foreign government or nongovernmental organization
(NGO), was the group’s judgment about foreign behavior based on a
rational actor assumption? If so, consider the potential applicability of
other decision models, specifically that the action was or will be the
result of bargaining between political or bureaucratic forces, the result
of standard organizational processes, or the whim of an authoritarian
leader.10 If information for a more thorough analysis is lacking,
consider the implications of that for confidence in the team’s
judgment.
✶ Cultural expertise: If the topic being analyzed involves a foreign
or otherwise unfamiliar culture or subculture, does the team have or
has it obtained cultural expertise on thought processes in that culture?11
✶ Deception: Does another country, NGO, or commercial
competitor about which the team is making judgments have a motive,
opportunity, or means for engaging in deception to influence U.S.
policy or to change your behavior? Does this country, NGO, or
competitor have a past history of engaging in denial, deception, or
influence operations?

After responding to these questions, the analysts take off the black hats
and reconsider the appropriate level of confidence in the team’s previous
judgment. Should the initial judgment be reaffirmed or modified?
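
Teams that apply this checklist repeatedly may find it useful to capture the
topics in a simple structure so that no question is skipped and flagged
concerns are tallied. The Python sketch below is one illustrative way to do
so; the categories come from the list above, but the flagged areas and the
tallying rule are simplifications of ours, not part of the technique.

    # Illustrative checklist runner for Structured Self-Critique. A
    # category is flagged if the black-hat review surfaced a significant
    # concern in that area.
    categories = [
        "sources of uncertainty", "analytic process", "critical assumptions",
        "diagnostic evidence", "information gaps", "missing evidence",
        "anomalous evidence", "changes in the broad environment",
        "alternative decision models", "cultural expertise", "deception",
    ]

    # Hypothetical results of one team's review:
    flagged = {"critical assumptions", "information gaps"}

    hits = [c for c in categories if c in flagged]
    print(f"{len(hits)} of {len(categories)} areas flagged: {hits}")
    if hits:
        print("Consider lowering confidence or revising the judgment.")
    else:
        print("The initial judgment may be reaffirmed.")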

Potential Pitfalls
The success of this technique depends in large measure on the team
members’ willingness and ability to make the transition from supporters to
critics of their own ideas. Some individuals lack the intellectual flexibility
to do this well. It must be very clear to all members that they are no longer
performing the same function as before. Their new task is to critique an
analytic position taken by some other group (actually themselves, but with
a different hat on).

To emphasize the different role analysts are playing, Structured Self-
Critique meetings should be scheduled exclusively for this purpose. The
meetings should be led by a different person from the usual leader, and,
preferably, held at a different location. It will be helpful if an experienced
facilitator is available to lead the meeting(s). This formal reframing of the
analysts’ role from advocate to critic is an important part of helping
analysts see an issue from a different perspective.

Relationship to Other Techniques


One version of what has been called Devil’s Advocacy is similar to
Structured Self-Critique in that one member of the team is designated to
play the role of Devil’s Advocate. That member takes one of the team’s
critical assumptions, reverses it, and then argues from that perspective
against the team’s conclusions. We believe it is more effective for the
entire team to don the hypothetical black hat and play the role of critic.
When only one team member dons the black hat and tries to persuade the
rest of the team that they are wrong, the team almost always becomes
defensive and resists making any changes. Often, Devil’s Advocates will
also find themselves acting out a role that they may not actually agree
with. See information on Devil’s Advocacy later in this chapter.

Origins of This Technique


Richards Heuer and Randy Pherson developed Structured Self-Critique. A
simpler version of this technique appears in Randolph H. Pherson, “The
Pre-Mortem Assessment,” in Handbook of Analytic Tools and Techniques
(Reston, VA: Pherson Associates, LLC, 2008).

Begin challenging your own assumptions. Your assumptions are
your windows on the world. Scrub them off every once in a while,
or the light won’t come in.

—Alan Alda, American actor

9.3 What If? Analysis


What If? Analysis imagines that an event has occurred with the potential
for a major positive or negative impact and then, with the benefit of
“hindsight,” explains how this event could have come about and what the
consequences might be.

When to Use It
This technique should be in every analyst’s toolkit. It is an important
technique for alerting decision makers to an event that could happen, even
if it may seem unlikely at the present time. What If? Analysis serves a
function similar to Scenarios Analysis—it creates an awareness that
prepares the mind to recognize early signs of a significant change, and it
may enable the decision maker to plan ahead for that contingency. It is
most appropriate when any of the following conditions are present:

✶ A mental model is well ingrained within the analytic or the
customer community that a certain event will not happen.
✶ The issue is highly contentious either within the analytic
community or among decision makers and no one is focusing on what
actions need to be considered to deal with or prevent this event.
✶ There is a perceived need for others to focus on the possibility this
event could actually happen and to consider the consequences if it
does occur.

What If? Analysis is a logical follow-up after any Key Assumptions Check
that identifies an assumption that is critical to an important estimate but
about which there is some doubt. In that case, the What If? Analysis would
imagine that the opposite of this assumption is true. Analysis would then
focus on how this outcome could possibly occur and what the
consequences would be.

When analysts are too cautious in estimative judgments on threats,
they brook blame for failure to warn. When too aggressive in
issuing warnings, they brook criticism for ‘crying wolf.’

—Jack Davis, “Improving CIA Analytic Performance: Strategic
Warning,” Sherman Kent School for Intelligence Analysis,
September 2002

Value Added
Shifting the focus from asking whether an event will occur to imagining
that it has occurred and then explaining how it might have happened opens
the mind to think in different ways. What If? Analysis shifts the discussion
from “How likely is it?” to these questions:

✶ How could it possibly come about?
✶ Could it come about in more than one way?
✶ What would be the impact?
✶ Has the possibility of the event happening increased?

The technique also gives decision makers the following additional
benefits:

✶ A better sense of what they might be able to do today to prevent an
untoward development from occurring, or what they might do today
to leverage an opportunity for advancing their interests.
✶ A list of specific indicators to monitor to help determine if the
chances of a development actually occurring are increasing.

What If? Analysis is a useful tool for exploring unanticipated or unlikely
scenarios that are within the realm of possibility and that would have
significant consequences should they come to pass. Figure 9.3 is an
example of this. It posits a dramatic development—the emergence of India
as a new international hub for finance—and then explores how this
scenario could come about. In this example, the technique spurs the
analyst to challenge traditional analysis and rethink the underlying
dynamics of the situation.

The Method
✶ A What If? Analysis can be done by an individual or as a team
project. The time required is about the same as that for drafting a
short paper. It usually helps to initiate the process with a
brainstorming session and/or to interpose brainstorming sessions at
various stages of the process.
✶ Begin by assuming that what could happen has actually occurred.
Often it is best to pose the issue in the following way: “The New York
Times reported yesterday that. . . .” Be precise in defining both the
event and its impact. Sometimes it is useful to posit the new
contingency as the outcome of a specific triggering event, such as a
natural disaster, an economic crisis, a major political miscalculation,
or an unexpected new opportunity that vividly reveals that a key
analytic assumption is no longer valid.
✶ Develop at least one chain of argumentation—based on both
evidence and logic—to explain how this outcome could have come
about. In developing the scenario or scenarios, focus on what must
actually occur at each stage of the process. Work backwards from the
event to the present day. This is called “backwards thinking.” Try to
envision more than one scenario or chain of argument.
✶ Generate a list of indicators or “observables” for each scenario that
would help to detect whether events are starting to play out in a way
envisioned by that scenario.
✶ Assess the level of damage or disruption that would result from a
negative scenario and estimate how difficult it would be to overcome
or mitigate the damage incurred. For new opportunities, assess how
well developments could turn out and what can be done to ensure that
such a positive scenario might actually come about.
✶ Rank order the scenarios according to how much attention they
merit, taking into consideration both difficulty of implementation and
the potential significance of the impact (a minimal scoring sketch
follows this list).
✶ Develop and validate indicators or observables for the scenarios
and monitor them on a regular or periodic basis.
✶ Report periodically on whether any of the proposed scenarios may
be emerging and why.

Figure 9.3 What If? Scenario: India Makes Surprising Gains from the
Global Financial Crisis

Source: This example was developed by Ray Converse and Elizabeth
Manak, Pherson Associates, LLC.

Relationship to Other Techniques
What If? Analysis is sometimes confused with the High Impact/Low
Probability technique, as each deals with low-probability events. However,
only What If? Analysis uses the reframing technique of assuming that a
future event has happened and then thinking backwards in time to imagine
how it could have happened. High Impact/Low Probability requires new or
anomalous information as a trigger and then projects forward to what
might occur and the consequences if it does occur.

A Cautionary Note
Scenarios developed using both the What If? Analysis and the High
Impact/Low Probability techniques can often contain highly sensitive
data requiring a very limited distribution of the final product. Examples
are the following: How might a terrorist group launch a debilitating
attack on a vital segment of the U.S. infrastructure? How could a coup
be launched successfully against a friendly government? What could be
done to undermine or disrupt global financial networks? Obviously, if
an analyst identifies a major vulnerability that could be exploited by an
adversary, extreme care must be taken to prevent that detailed
description from falling into the hands of the adversary. An additional
concern is that the more “brilliant” and provocative the scenario, the
more likely it will attract attention, be shared with others, and possibly
leak.

Origins of This Technique


The term “What If? Analysis” has been applied to a variety of different
techniques for a long time. The version described here is based on
Randolph H. Pherson, “What If? Analysis,” in Handbook of Analytic Tools
and Techniques (Reston, VA: Pherson Associates, LLC, 2008) and
training materials from the Department of Homeland Security, Office of
Intelligence and Analysis.

9.4 High Impact/Low Probability Analysis


High Impact/Low Probability Analysis provides decision makers with
early warning that a seemingly unlikely event with major policy and
resource repercussions might actually occur.

When to Use It
High Impact/Low Probability Analysis should be used when one wants to
alert decision makers to the possibility that a seemingly long-shot
development that would have a major policy or resource impact may be
more likely than previously anticipated. Events that would have merited
such treatment before they occurred include the reunification of Germany
in 1990, the collapse of the Soviet Union in 1991, and the devastation
caused by Hurricane Katrina to New Orleans in August 2005—the
costliest natural disaster in the history of the United States. A variation of
this technique, High Impact/Uncertain Probability Analysis, might also be
used to address the potential impact of an outbreak of H5N1 (avian
influenza) or applied to a terrorist attack when intent is well established
but there are multiple variations on how it might be carried out.

A High Impact/Low Probability study usually is initiated when some new
and often fragmentary information is received suggesting that a previously
unanticipated event might actually occur. For example, it is conceivable
that such a tip-off could be received suggesting the need to alert decision
makers to the susceptibility of the United States to a major information
warfare attack or a dramatic terrorist attack on a national holiday. The
technique can also be used to sensitize analysts and decision makers to the
possible effects of low-probability events and stimulate them to think early
on about measures that could be taken to deal with the danger or to exploit
the opportunity.

A thoughtful senior policy official has opined that most potentially
devastating threats to U.S. interests start out being evaluated as
unlikely. The key to effective intelligence-policy relations in
strategic warning is for analysts to help policy officials in
determining which seemingly unlikely threats are worthy of serious
consideration.

—Jack Davis, “Improving CIA Analytic Performance: Strategic
Warning,” Sherman Kent School for Intelligence Analysis,
September 2002

Value Added
The High Impact/Low Probability Analysis format allows analysts to
explore the consequences of an event—particularly one not deemed likely
by conventional wisdom—without having to challenge the mainline
judgment or to argue with others about how likely an event is to happen. In
other words, this technique provides a tactful way of communicating a
viewpoint that some recipients might prefer not to hear.

The analytic focus is not on whether something will happen but on taking
it as a given that an event could happen and would have a major and
unanticipated impact. The objective is to explore whether an increasingly
credible case can be made for an unlikely event occurring that could pose a
major danger—or offer great opportunities. The more nuanced and
concrete the analyst’s depiction of the plausible paths to danger, the easier
it is for a decision maker to develop a package of policies to protect or
advance vital U.S. interests.

The Method
An effective High Impact/Low Probability Analysis involves these steps:

✶ Clearly describe the unlikely event.
✶ Define the high-impact consequences if this event occurs.
Consider both the actual event and the secondary impacts of the
event.
✶ Identify any recent information or reporting suggesting that the
likelihood of the unlikely event occurring may be increasing.
✶ Postulate additional triggers that would propel events in this
unlikely direction or factors that would greatly accelerate timetables,
such as a botched government response, the rise of an energetic
challenger, a major terrorist attack, or a surprise electoral outcome
that benefits U.S. interests.
✶ Develop one or more plausible pathways that would explain how
this seemingly unlikely event could unfold. Focus on the specifics of
what must happen at each stage of the process for the train of events
to play out.
✶ Generate and validate a list of indicators that would help analysts
and decision makers recognize that events were beginning to unfold
in this way.
✶ Identify factors that would deflect a bad outcome or encourage a
positive outcome.
✶ Report periodically on whether any of the proposed scenarios may
be emerging and why.

It is very important that the analyst periodically review the list of
indicators. Such periodic reviews help analysts overcome prevailing
mental models that the events being considered are too unlikely to merit
serious attention.
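
Such a review cycle can be supported with very simple tooling. The Python
sketch below is illustrative only: the indicators, the observations, and the
one-half review threshold are hypothetical assumptions, not part of the
technique.

    # Illustrative periodic review of early-warning indicators for a
    # High Impact/Low Probability pathway.
    indicators = [
        "new legislation proposed",
        "troop movements near border",
        "sharp rise in hostile rhetoric",
        "key moderate removed from government",
    ]

    observed = {"troop movements near border", "sharp rise in hostile rhetoric"}

    hits = [i for i in indicators if i in observed]
    print(f"Observed {len(hits)} of {len(indicators)} indicators: {hits}")
    if len(hits) / len(indicators) >= 0.5:  # illustrative threshold
        print("Pathway may be emerging; alert analysts and decision makers.")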

Potential Pitfalls
Analysts need to be careful when communicating the likelihood of
unlikely events. The word “unlikely” can be interpreted as
meaning anywhere from 1 percent to 25 percent probability, while “highly
unlikely” may mean from 1 percent to 10 percent.12 Customers receiving
an intelligence report that uses words of estimative probability such as
“very unlikely” will typically interpret the report as consistent with their
own prior thinking, if at all possible. If the report says a terrorist attack
against a specific U.S. embassy abroad within the next year is very
unlikely, it is quite possible, for example, that the analyst may be thinking
of about a 10 percent possibility, while a decision maker sees that as
consistent with his or her own thinking that the likelihood is less than 1
percent. Such a difference in likelihood can make the difference between a
decision to pay or not to pay for expensive contingency planning or a
proactive preventive countermeasure. When an analyst is describing the
likelihood of an unlikely event, it is desirable to express the likelihood in
numeric terms, either as a range (such as less than 5 percent or 10 to 20
percent) or as bettor’s odds (such as one chance in ten).
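
One way to avoid such misreadings is to attach an explicit numeric range to
each verbal expression. The short Python sketch below shows a hypothetical
mapping; the ranges simply restate the interpretation spread described above
and are not an official standard.

    # Hypothetical mapping of words of estimative probability to numeric
    # ranges, with a bettor's-odds rendering of the midpoint.
    terms = {
        "highly unlikely": (0.01, 0.10),
        "unlikely": (0.01, 0.25),
    }

    for term, (lo, hi) in terms.items():
        mid = (lo + hi) / 2
        print(f"'{term}': {lo:.0%} to {hi:.0%} "
              f"(midpoint ~{mid:.0%}, about 1 chance in {round(1 / mid)})")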

Figure 9.4 shows an example of an unlikely event—the outbreak of
conflict in the Arctic Ocean—that could have major geopolitical
consequences for the United States and other neighboring countries.
Analysts can employ the technique to sensitize decision makers to the
possible effects of the melting of Arctic ice and stimulate them to think
about measures that could be taken to deal with the danger.

Relationship to Other Techniques
High Impact/Low Probability Analysis is sometimes confused with What
If? Analysis. Both deal with low-probability or unlikely events. High
Impact/Low Probability Analysis is primarily a vehicle for warning
decision makers that recent, unanticipated developments suggest that an
event previously deemed highly unlikely might actually occur. Based on
recent evidence or information, it projects forward to discuss what could
occur and the consequences if the event does occur. It challenges the
conventional wisdom. What If? Analysis does not require new or
anomalous information to serve as a trigger. It reframes the question by
assuming that a surprise event has happened. It then looks backwards from
that surprise event to map several ways it could have come about. It also
tries to identify actions that, if taken in a timely manner, might have
prevented it.

Figure 9.4 High Impact/Low Probability Scenario: Conflict in the Arctic

Source: This example was developed by Michael Bannister and Ray
Converse, Pherson Associates, LLC.

Origins of This Technique


The description here is based on Randolph H. Pherson, “High Impact/Low
Probability Analysis,” in Handbook of Analytic Tools and Techniques
(Reston, VA: Pherson Associates, LLC, 2008); and Department of
Homeland Security, Office of Intelligence and Analysis training materials.

9.5 Devil’s Advocacy


Devil’s Advocacy is a process for critiquing a proposed analytic judgment,
plan, or decision, usually by a single analyst not previously involved in the
deliberations that led to the proposed judgment, plan, or decision. The
origins of Devil’s Advocacy “lie in a practice of the Roman Catholic
Church in the early 16th century. When a person was proposed for
beatification or canonization to sainthood, someone was assigned the role
of critically examining the life and miracles attributed to that individual;
his duty was to especially bring forward facts that were unfavorable to the
candidate.”13

When to Use It
Devil’s Advocacy is most effective when initiated by a manager as part of
a strategy to ensure that alternative solutions are thoroughly considered.
The following are examples of well-established uses of Devil’s Advocacy:

✶ Before making a decision, a policymaker or military commander
asks for a Devil’s Advocate analysis of what could go wrong.
✶ An intelligence organization designates a senior manager as a
Devil’s Advocate to oversee the process of reviewing and challenging
selected assessments or creates a special unit—sometimes called a
Red Cell—to perform this function.
✶ A manager commissions a Devil’s Advocacy analysis when he or
she is concerned about seemingly widespread unanimity on a critical
issue, or when the manager suspects that the mental model of analysts
working an issue for a long time has become so deeply ingrained that
they are unable to see the significance of recent changes.

Value Added
The unique attribute and value of Devil’s Advocacy as described here is
that the critique is initiated at the discretion of management to test the
strength of the argument for a proposed analytic judgment, plan, or
decision. It looks for what could go wrong and often focuses attention on
questionable assumptions, evidence, or lines of argument that undercut the
generally accepted conclusion, or on the insufficiency of current evidence
on which to base such a conclusion.

The Method
The Devil’s Advocate is charged with challenging the proposed judgment
by building the strongest possible case against it. There is no prescribed
procedure. Devil’s Advocates may be selected because of their expertise
with a specific technique. Or they may decide to use a different technique
from that used in the original analysis or to use a different set of
assumptions. They may also simply review the evidence and the analytic
procedures looking for weaknesses in the argument. In the latter case, an
ideal approach would be asking the questions about how the analysis was
conducted that are listed in this chapter under Structured Self-Critique.
These questions cover sources of uncertainty, the analytic processes that
were used, critical assumptions, diagnosticity of evidence, anomalous
evidence, information gaps, changes in the broad environment in which
events are happening, alternative decision models, availability of cultural
expertise, and indicators of possible deception.

Potential Pitfalls
Devil’s Advocacy is sometimes used as a form of self-critique. Rather than
being done by an outsider, a member of the analytic team volunteers or is
asked to play the role of Devil’s Advocate. We do not believe this
application of the Devil’s Advocacy technique is effective because:

✶ Calling such a technique Devil’s Advocacy is inconsistent with the
historic concept of Devil’s Advocacy that calls for investigation by an
independent outsider.
✶ Research shows that a person playing the role of a Devil’s
Advocate, without actually believing it, is significantly less effective
than a true believer and may even be counterproductive. Apparently,
more attention and respect is accorded to someone with the courage
to advance his or her own minority view than to someone who is
known to be only playing a role. If group members see the Devil’s
Advocacy as an analytic exercise they have to put up with, rather than
the true belief of one of their members who is courageous enough to
speak out, this exercise may actually enhance the majority’s original
belief—“a smugness that may occur because one assumes one has
considered alternatives though, in fact, there has been little serious
reflection on other possibilities.”14 What the team learns from the
Devil’s Advocate presentation may be only how to better defend the
team’s own entrenched position.

There are other forms of self-critique, especially Premortem Analysis and
Structured Self-Critique as described in this chapter, which may be more
effective in prompting even a cohesive, heterogeneous team to question
their mental model and to analyze alternative perspectives.

Origins of This Technique


As noted, this technique dates back to sixteenth-century practices of the
Roman Catholic Church. Since then, it has been used in different ways and
for different purposes. The discussion of Devil’s Advocacy here is based
on the authors’ analysis of current practices throughout the U.S.
Intelligence Community; Charlan Jeanne Nemeth and Brendan Nemeth-
Brown, “Better Than Individuals? The Potential Benefits of Dissent and
Diversity for Group Creativity,” in Group Creativity: Innovation through
Collaboration, ed. Paul B. Paulus and Bernard A. Nijstad (New York:
Oxford University Press, 2003); and Alexander L. George and Eric K.
Stern, “Harnessing Conflict in Foreign Policy Making: From Devil’s to
Multiple Advocacy,” Presidential Studies Quarterly 32 (September 2002).

9.6 Red Team Analysis


The term “red team” or “red teaming” has several meanings. One
definition is that red teaming is “the practice of viewing a problem from an
adversary or competitor’s perspective.”15 This is how red teaming is
commonly viewed by intelligence analysts.

A broader definition, described by the Defense Science Board’s task force
on red teaming activities in the Department of Defense, is that red teaming
is a strategy for challenging an organization’s plans, programs, and
assumptions at all levels—strategic, operational, and tactical. This
“includes not only ‘playing’ adversaries or competitors, but also serving as
devil’s advocates, offering alternative interpretations (team B) and
otherwise challenging established thinking within an enterprise.”16 This is
red teaming as a management strategy rather than as a specific analytic
technique. It is in this context that red teaming is sometimes used as a
synonym for any form of challenge analysis or alternative analysis.

To accommodate these two different ways that red teaming is used, in this
book we identify two separate techniques called Red Team Analysis and
Red Hat Analysis.

✶ Red Team Analysis, as described in this section, is a challenge
analysis technique. It is usually initiated by a leadership decision to
create a special project or cell or office with analysts whose cultural
and analytical skills qualify them for dealing with this special type of
analysis. A Red Team Analysis is often initiated to challenge the
conventional wisdom as a matter of principle to ensure that other
reasonable alternatives have been carefully considered. It is, in effect,
a modern version of Devil’s Advocacy.
✶ Red Hat Analysis, described in chapter 8, is a culturally sensitive
analysis of an adversary’s or competitor’s behavior and decision
making initiated by a regular line analyst or analytic team. It can and
should be done, to the maximum extent possible, as part of standard
analytic practice.

When to Use It
Management should initiate a Red Team Analysis whenever there is a
perceived need to challenge the conventional wisdom on an important
issue or whenever the responsible line office is perceived as lacking the
level of cultural expertise required to fully understand an adversary’s or
competitor’s point of view.

Value Added
Red Team Analysis can help free analysts from their own well-developed
mental model—their own sense of rationality, cultural norms, and personal
values. When analyzing an adversary, the Red Team approach requires
that an analyst change his or her frame of reference from that of an
“observer” of the adversary or competitor, to that of an “actor” operating
within the adversary’s cultural and political milieu. This reframing or role
playing is particularly helpful when an analyst is trying to replicate the
mental model of authoritarian leaders, terrorist cells, or non-Western
groups that operate under very different codes of behavior or motivations
than those to which most Americans are accustomed.

The Method
The function of Red Team Analysis is to challenge a proposed or existing
judgment by building the strongest possible case against it. If the goal is to
understand the thinking of an adversary or a competitor, the method is
similar to that described in chapter 8 under Red Hat Analysis. The
difference is that additional resources for cultural, substantive expertise
and analytic skills may be available to implement the Red Team Analysis.
The use of Role Playing or the Delphi Method to cross-check or
supplement the Red Team approach is encouraged.

Relationship to Other Techniques


The Red Team technique is closely related to Red Hat Analysis, described
in chapter 8.

Origins of This Technique


The term “Red Team” evolved during the Cold War, with the word “red”
symbolizing any Communist adversary. The following materials were used
in preparing this discussion of Red Team Analysis: Red Team Journal,
http://redteamjournal.com; Defense Science Board Task Force, The Role
and Status of DoD Red Teaming Activities (Washington, DC: Office of the
Under Secretary of Defense for Acquisition, Technology, and Logistics,
September 2003); Michael K. Keehan, “Red Teaming for Law
Enforcement,” Police Chief 74, no. 2 (February 2007); and Department of
Homeland Security training materials.

9.7 Delphi Method


Delphi is a method for eliciting ideas, judgments, or forecasts from a group
of experts who may be geographically dispersed. It is different from a
survey in that there are two or more rounds of questioning. After the first
round of questions, a moderator distributes all the answers and
explanations of the answers to all participants, often anonymously. The
expert participants are then given an opportunity to modify or clarify their
previous responses, if so desired, on the basis of what they have seen in the
responses of the other participants. A second round of questions builds on
the results of the first round, drills down into greater detail, or moves to a
related topic. There is great flexibility in the nature and number of rounds
of questions that might be asked.

When to Use It
The Delphi Method was developed by the RAND Corporation at the
beginning of the Cold War in the 1950s to forecast the impact of new
technology on warfare. It was also used to assess the probability, intensity,
or frequency of future enemy attacks. In the 1960s and 1970s, Delphi
became widely known and used as a method for futures research,
especially forecasting long-range trends in science and technology. Futures
research is similar to intelligence analysis in that the uncertainties and
complexities one must deal with often preclude the use of traditional
statistical methods, so explanations and forecasts must be based on the
experience and informed judgments of experts.

Over the years, Delphi has been used in a wide variety of ways, and for an
equally wide variety of purposes. Although many Delphi projects have
focused on developing a consensus of expert judgment, a variant called
Policy Delphi is based on the premise that the decision maker is not
interested in having a group make a consensus decision, but rather in
having the experts identify alternative policy options and present all the
supporting evidence for and against each option. That is the rationale for
including Delphi in this chapter on challenge analysis. It can be used to
identify divergent opinions that may be worth exploring.

One group of Delphi scholars advises that the Delphi technique “can be
used for nearly any problem involving forecasting, estimation, or decision
making”—as long as the problem is not so complex or so new as to
preclude the use of expert judgment. These Delphi advocates report using
it for diverse purposes that range from “choosing between options for
regional development, to predicting election outcomes, to deciding which
applicants should be hired for academic positions, to predicting how many
meals to order for a conference luncheon.”17

Several software programs have been developed to handle these tasks, and
one of these is hosted for public use
(http://armstrong.wharton.upenn.edu/delphi2). The distributed decision
support systems now publicly available to support virtual teams include
some or all of the functions necessary for Delphi as part of a larger
package of analytic tools.

Value Added
Consultation with relevant experts in academia, business, and
nongovernmental organizations is encouraged by Intelligence Community
Directive No. 205, on Analytic Outreach, dated July 2008. We believe the
development of Delphi panels of experts on areas of critical concern
should be standard procedure for outreach to experts outside the analytic
unit of any organization, and particularly the Intelligence Community
because of its more insular work environment.

As an effective process for eliciting information from outside experts,
Delphi has several advantages:

✶ Outside experts can participate from their home locations, thus
reducing the costs in time and travel commonly associated with the
use of outside consultants.
✶ Delphi can provide analytic judgments on any topic for which
outside experts are available. That means it can be used as an
independent cross-check of conclusions reached in-house. If the same
conclusion is reached in two analyses using different analysts and
different methods, this is grounds for a significant increase in
confidence in that conclusion. If the conclusions disagree, this is also
valuable information that may open a new avenue of research.
✶ Delphi identifies any outliers who hold an unusual position.
Recognizing that the majority is not always correct, researchers can
then focus on gaining a better understanding of the grounds for any
views that diverge significantly from the consensus. In fact,
identification of experts who have an alternative perspective and are
qualified to defend it might be the objective of a Delphi project.
✶ The process by which the expert panel members are provided
feedback from other experts and are given an opportunity to modify
their responses makes it easy for experts to adjust their previous
judgments in response to new evidence.
✶ In many Delphi projects, the experts remain anonymous to other
panel members so that no one can use his or her position of authority,
reputation, or personality to influence others. Anonymity also
facilitates the expression of opinions that go against the conventional
wisdom and may not otherwise be expressed.

The anonymous features of the Delphi Method substantially reduce the
potential for groupthink. They also make it more difficult for any
participant with strong views based on his or her past experiences and
worldview to impose those views on the rest of the group. The requirement that the
group of experts engage in several rounds of information sharing also
helps mitigate the potential for satisficing. The Delphi Method can also
help protect analysts from falling victim to several analytic traps, including
the following:

✶ Giving too much weight to first impressions or initial data,
especially if they attract our attention and seem important at the time.
✶ Continuing to hold to an analytic judgment when confronted with
a mounting list of evidence that contradicts the initial conclusion.

The Method
In a Delphi project, a moderator (analyst) sends a questionnaire to a panel
of experts who may be in different locations. The experts respond to these
questions and usually are asked to provide short explanations for their
responses. The moderator collates the results from this first questionnaire
and sends the collated responses back to all panel members, requesting
them to reconsider their responses based on what they see and learn from
the other experts’ responses and explanations. Panel members may also be
asked to answer another set of questions. This cycle of question, response,
and feedback continues through several rounds using the same or a related
set of questions. It is often desirable for panel members to remain
anonymous so that they are not unduly influenced by the responses of
senior members. This method is illustrated in Figure 9.7.
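
Where the responses are numeric, the moderator’s collate-and-feedback
step can be automated. The following is a minimal, illustrative sketch in
Python; the panelist IDs, estimates, and rationales are all hypothetical,
and a real project would typically rely on dedicated survey or Delphi
software rather than a hand-rolled script.

    # Minimal sketch of one Delphi collate-and-feedback cycle.
    # All identifiers and data below are hypothetical.
    from statistics import quantiles

    def collate_round(responses):
        """responses: dict of anonymous panelist ID -> (estimate, rationale)."""
        estimates = [estimate for estimate, _ in responses.values()]
        q1, q2, q3 = quantiles(estimates, n=4)  # quartiles of the panel
        return {
            "median": q2,
            "interquartile_range": (q1, q3),
            # Rationales are fed back without attribution, preserving anonymity.
            "rationales": [rationale for _, rationale in responses.values()],
        }

    round_one = {
        "panelist_1": (0.70, "Strong incentive to act before the election"),
        "panelist_2": (0.40, "Logistics make early action unlikely"),
        "panelist_3": (0.55, "Mixed signals in recent public statements"),
    }
    # The collated package goes back to all panelists, who may then revise
    # their estimates in the next round.
    print(collate_round(round_one))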

Examples

To show how Delphi can be used for intelligence analysis, we have
developed three illustrative applications:

✶ Evaluation of another country’s policy options: The Delphi
project manager or moderator identifies several policy options that a
foreign country might choose. The moderator then asks a panel of
experts on the country to rate the desirability and feasibility of each
option, from the other country’s point of view, on a five-point scale
ranging from “Very Desirable” or “Feasible” to “Very Undesirable”
or “Definitely Infeasible.” Panel members are also asked to identify
and assess any other policy options that ought to be considered and to
identify the top two or three arguments or items of evidence that
guided their judgments. A collation of all responses is sent back to the
panel with a request for members to do one of the following:
reconsider their position in view of others’ responses, provide further
explanation of their judgments, or reaffirm their previous response. In
a second round of questioning, it may be desirable to list key
arguments and items of evidence and ask the panel to rate them on
their validity and their importance, again from the other country’s
perspective.
✶ Analysis of alternative hypotheses: A panel of outside experts is
asked to estimate the probability of each hypothesis in a set of
mutually exclusive hypotheses where the probabilities must add up to
100 percent. This could be done as a stand-alone project or to double-
check an already completed Analysis of Competing Hypotheses
(ACH) (chapter 7). If two analyses using different analysts and
different methods arrive at the same conclusion, this is grounds for a
significant increase in confidence in the conclusion. If the analyses
disagree, that may also be useful to know, as one can then seek to
understand the rationale for the different judgments.
✶ Warning analysis or monitoring a situation over time: An
analyst asks a panel of experts to estimate the probability of a future
event. This might be either a single event for which the analyst is
monitoring early warning indicators or a set of scenarios for which
the analyst is monitoring milestones to determine the direction in
which events seem to be moving. There are two ways to manage a
Delphi project that monitors change over time. One is to have a new
round of questions and responses at specific intervals to assess the
extent of any change. The other is what is called either Dynamic
Delphi or Real Time Delphi, where participants can modify their
responses at any time as new events occur or as new information is
submitted by one of the participants.18 The probability estimates
provided by the Delphi panel can be aggregated to provide a measure
of the significance of change over time. They can also be used to
identify differences of opinion among the experts that warrant further
examination.
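
To make the last point concrete, here is a minimal sketch in Python of how
a moderator might aggregate a panel’s probability estimates round by round
and track the shift over time. The rounds and numbers are hypothetical.

    # Hypothetical panel estimates of the probability of a warning event,
    # collected in three successive Delphi rounds.
    rounds = {
        "Round 1": [0.20, 0.25, 0.15, 0.30],
        "Round 2": [0.35, 0.40, 0.30, 0.45],
        "Round 3": [0.60, 0.55, 0.65, 0.50],
    }
    previous = None
    for label, estimates in rounds.items():
        average = sum(estimates) / len(estimates)
        shift = 0.0 if previous is None else average - previous
        # A sustained shift in the aggregate is the measure of change over
        # time described above; a wide spread flags differences of opinion
        # among the experts that warrant further examination.
        spread = max(estimates) - min(estimates)
        print(f"{label}: average {average:.2f} "
              f"(shift {shift:+.2f}, spread {spread:.2f})")
        previous = average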

Figure 9.7 Delphi Technique

Potential Pitfalls
A Delphi project involves administrative work to identify the experts,
communicate with panel members, and collate and tabulate their responses
through several rounds of questioning. The Intelligence Community can
pose additional obstacles, such as ensuring that the experts have
appropriate security clearances or requiring them to meet with the analysts
in cleared office spaces. Another potential pitfall is that overenthusiastic
use of the technique can force consensus when it might be better to present
two competing hypotheses and the evidence supporting each position.

Relationship to Other Techniques


Delphi is easily combined with other techniques, such as Virtual
Brainstorming, and techniques for prioritizing, ranking, or scaling lists of
information.

Origins of This Technique


The origin of Delphi as an analytic method was described above under
“When to Use It.” The following references were useful in researching this
topic: Murray Turoff and Starr Roxanne Hiltz, “Computer-Based Delphi
Processes,” 1996, http://web.njit.edu/~turoff/Papers/delphi3.html; and
Harold A. Linstone and Murray Turoff, The Delphi Method: Techniques
and Applications (Reading, MA: Addison-Wesley, 1975). A 2002 digital
version of Linstone and Turoff’s book is available online at
http://is.njit.edu/pubs/delphibook; see in particular the chapter by Turoff
on “The Policy Delphi” (http://is.njit.edu/pubs/delphibook/ch3b1.pdf). For
more current information on validity and optimal techniques for
implementing a Delphi project, see Gene Rowe and George Wright,
“Expert Opinions in Forecasting: The Role of the Delphi Technique,” in
Principles of Forecasting, ed. J. Scott Armstrong (New York: Springer
Science+Business Media, 2001).

1. Rob Johnston, Analytic Culture in the U.S. Intelligence Community
(Washington, DC: CIA Center for the Study of Intelligence, 2005), 64.

2. Paul Bedard, “CIA Chief Claims Progress with Intelligence Reforms,”
U.S. News and World Report, May 16, 2008.

3. Donald P. Steury, ed., Sherman Kent and the Board of National
Estimates: Collected Essays (Washington, DC: CIA Center for the Study
of Intelligence, 1994), 133.

4. Reframing is similar to the Problem Restatement technique Morgan
Jones described in his book The Thinker’s Toolkit (New York: Three
Rivers Press, 1995). Jones observed that “the moment we define a problem
our thinking about it quickly narrows considerably.” We create a frame
through which we view the problem, and that tends to obscure other
interpretations of the problem. A group can change that frame of reference,
and challenge its own thinking, simply by redefining the problem.

5. Gary Klein, Sources of Power: How People Make Decisions
(Cambridge, MA: MIT Press, 1998), 71.

6. Gary Klein, Intuition at Work: Why Developing Your Gut Instinct Will
Make You Better at What You Do (New York: Doubleday, 2002), 91.

7. Charlan J. Nemeth and Brendan Nemeth-Brown, “Better Than
Individuals? The Potential Benefits of Dissent and Diversity for Group
Creativity,” in Group Creativity: Innovation through Collaboration, ed.
Paul B. Paulus and Bernard A. Nijstad (New York: Oxford University
Press, 2003), 63.

8. Leon Panetta, “Government: A Plague of Incompetence,” Monterey
County Herald, March 11, 2007, F1.

9. Gregory F. Treverton, “Risks and Riddles,” Smithsonian Magazine,
June 1, 2007.

10. For information about these three decision-making models, see
Graham T. Allison and Philip Zelikow, Essence of Decision, 2nd ed. (New
York: Longman, 1999).

11. For information on fundamental differences in how people think in
different cultures, see Richard Nisbett, The Geography of Thought: How
Asians and Westerners Think Differently and Why (New York: Free Press,
2003).

12. Richards J. Heuer Jr., Psychology of Intelligence Analysis
(Washington, DC: CIA Center for the Study of Intelligence, 1999;
reprinted by Pherson Associates, LLC, 2007), 155.

13. Nemeth and Nemeth-Brown, “Better Than Individuals?,” 75–76.

14. Nemeth and Nemeth-Brown, “Better Than Individuals?,” 77–78.

15. This definition is from the Red Team Journal,
http://redteamjournal.com.

16. Defense Science Board Task Force, The Role and Status of DoD Red
Teaming Activities (Washington, DC: Office of the Under Secretary of
Defense for Acquisition, Technology, and Logistics, September 2003).

17. Kesten C. Green, J. Scott Armstrong, and Andreas Graefe, “Methods
to Elicit Forecasts from Groups: Delphi and Prediction Markets
Compared,” Foresight: The International Journal of Applied Forecasting
(Fall 2007), www.forecastingprinciples.com/paperpdf/Delphi-WPlatestV.pdf.

18. See Real Time Delphi, www.realtimedelphi.org.

10 Conflict Management

10.1 Adversarial Collaboration
10.2 Structured Debate

As discussed in the previous chapter, challenge analysis frequently leads to
the identification and confrontation of opposing views. That is, after all,
the purpose of challenge analysis. This raises two important questions,
however. First, how can confrontation be managed so that it becomes a
learning experience rather than a battle between determined adversaries?
Second, in an analysis of any topic with a high degree of uncertainty, how
can one decide if one view is wrong or if both views have merit and need
to be discussed in an analytic report? This chapter offers a conceptual
framework and several useful techniques for dealing with analytic
conflicts.

A widely distributed article in the Harvard Business Review stresses that
improved collaboration among organizations or organizational units that
have different interests can be achieved only by accepting and actively
managing the inevitable—and desirable—conflicts between these units:

The disagreements sparked by differences in perspective,
competencies, access to information, and strategic focus . . . actually
generate much of the value that can come from collaboration across
organizational boundaries. Clashes between parties are the crucibles
in which creative solutions are developed. . . . So instead of trying
simply to reduce disagreements, senior executives need to embrace
conflict and, just as important, institutionalize mechanisms for
managing it.1

The most common procedures for dealing with differences of opinion have
been to force a consensus, water down the differences, or—in the U.S.
Intelligence Community—add a dissenting footnote to an estimate. We
believe these practices are suboptimal, at best, and hope they will become
increasingly rare as our various analytic communities embrace greater
collaboration early in the analytic process, rather than endure mandated
coordination at the end of the process after all parties are locked into their
positions. One of the principal benefits of using structured analytic
techniques for intraoffice and interagency collaboration is that these
techniques identify differences of opinion at the start of the analytic
process. This gives time for the differences to be at least understood, if not
resolved, at the working level before management becomes involved.

How one deals with conflicting analytic assessments or estimates depends,
in part, upon one’s expectations about what can be achieved. Mark
Lowenthal has written persuasively of the need to recalibrate expectations
of what intelligence analysis can accomplish.2 More than in any other
discipline, intelligence analysts typically work with incomplete,
ambiguous, and potentially deceptive evidence. Combine this with the fact
that intelligence analysts are seeking to understand human behavior, which
is often difficult to predict even in our own culture, and it should not be
surprising that intelligence analysis sometimes turns out to be “wrong.”
Acceptance of the basic principle that it is okay for intelligence analysts to
be uncertain, because they are dealing with uncertain matters, helps to set
the stage for appropriate management of conflicting views. In some cases,
one position will be refuted and rejected. In other cases, two or more
positions may be reasonable assessments or estimates, usually with one
more likely than the others. In such cases, conflict is mitigated when it is
recognized that each position has some value in covering the full range of
options.

In the previous chapter we noted that an assessment or estimate that is
properly described as “probable” has about a one-in-four chance of being
wrong. This has clear implications for appropriate action when analysts
hold conflicting views. If an analysis meets rigorous standards and
conflicting views still remain, decision makers are best served by an
analytic product that deals directly with the uncertainty rather than
minimizing or suppressing it. The greater the uncertainty, the more
appropriate it is to go forward with a product that discusses the most likely
assessment or estimate and gives one or more alternative possibilities.
Factors to be considered when assessing the amount of uncertainty include
the following:

✶ An estimate of the future generally has more uncertainty than an
assessment of a past or current event.
✶ Mysteries, for which there are no knowable answers, are far more
uncertain than puzzles, for which an answer does exist if one could
only find it.3
✶ The more assumptions that are made, the greater the uncertainty.
Assumptions about intent or capability, and whether or not they have
changed, are especially critical.
✶ Analysis of human behavior or decision making is far more
uncertain than analysis of technical data.
✶ The behavior of a complex dynamic system is more uncertain than
that of a simple system. The more variables and stakeholders
involved in a system, the more difficult it is to foresee what might
happen.

If the decision is to go forward with a discussion of alternative assessments
or estimates, the next step might be to produce any of the following:

✶ A comparative analysis of opposing views in a single report. This
calls for analysts to identify the sources and reasons for the
uncertainty (e.g., assumptions, ambiguities, knowledge gaps),
consider the implications of alternative assessments or estimates,
determine what it would take to resolve the uncertainty, and suggest
indicators for future monitoring that might provide early warning of
which alternative is correct.
✶ An analysis of alternative scenarios, as described in chapter 6.
✶ A What If? Analysis or High Impact/Low Probability Analysis, as
described in chapter 9.
✶ A report that is clearly identified as a “second opinion.”

Overview of Techniques
Adversarial Collaboration
in essence is an agreement between opposing parties on how they will
work together to resolve their differences, gain a better understanding of
why they differ, or collaborate on a joint paper to explain the differences.
Six approaches to implementing Adversarial Collaboration are described.

Structured Debate
is a planned debate of opposing points of view on a specific issue in front
of a “jury of peers,” senior analysts, or managers. As a first step, each side
writes up the best possible argument for its position and passes this
summation to the opposing side. The next step is an oral debate that
focuses on refuting the other side’s arguments rather than further
supporting one’s own arguments. The goal is to elucidate and compare the
arguments against each side’s position. If neither argument can be
refuted, perhaps both merit some consideration in the analytic report.

10.1 Adversarial Collaboration


Adversarial Collaboration is an agreement between opposing parties about
how they will work together to resolve or at least gain a better
understanding of their differences. Adversarial Collaboration is a relatively
new concept championed by Daniel Kahneman, the psychologist who
along with Amos Tversky initiated much of the research on cognitive
biases described in Richards Heuer’s Psychology of Intelligence Analysis.
Kahneman received a Nobel Prize in 2002 for his research on behavioral
economics, and he wrote an intellectual autobiography in connection with
this work in which he commented as follows on Adversarial Collaboration:

One line of work that I hope may become influential is the
development of a procedure of adversarial collaboration which I
have championed as a substitute for the format of critique-reply-
rejoinder in which debates are currently conducted in the social
sciences. Both as a participant and as a reader I have been appalled by
the absurdly adversarial nature of these exchanges, in which hardly
anyone ever admits an error or acknowledges learning anything from
the other. Adversarial collaboration involves a good-faith effort to
conduct debates by carrying out joint research—in some cases there
may be a need for an agreed arbiter to lead the project and collect the
data. Because there is no expectation of the contestants reaching
complete agreement at the end of the exercise, adversarial
collaboration will usually lead to an unusual type of joint publication,
in which disagreements are laid out as part of a jointly authored
paper.4

Kahneman’s approach to Adversarial Collaboration involves agreement on
empirical tests for resolving a dispute and conducting those tests with the
help of an impartial arbiter. A joint report describes the tests, states what
has been learned on which both sides agree, and provides interpretations of
the test results on which they disagree.5

Although differences of opinion on intelligence judgments can seldom be
resolved through empirical research, the Adversarial Collaboration concept
can, nevertheless, be adapted to apply to intelligence analysis. Analysts
can agree to use a variety of techniques to reduce, resolve, more clearly
define, or explain their differences. These are grouped together here under
the overall heading of Adversarial Collaboration.

When to Use It
Adversarial Collaboration should be used only if both sides are open to
discussion of an issue. If one side is fully locked into its position and has
repeatedly rejected the other side’s arguments, this technique is unlikely to
be successful. Structured Debate is more appropriate to use in these
situations because it includes an independent arbiter who listens to both
sides and then makes a decision.

Value Added
Adversarial Collaboration can help opposing analysts see the merit of
another group’s perspective. If successful, it will help both parties gain a
better understanding of what assumptions or evidence are behind their
opposing opinions on an issue and to explore the best way of dealing with
these differences. Can one side be shown to be wrong, or should both
positions be reflected in any report on the subject? Can there be agreement
on indicators to show the direction in which events seem to be moving?

A key advantage of Adversarial Collaboration techniques is that they bring
to the surface critical items of evidence, logic, and assumptions that the
other side had not factored into its own analysis. This is especially true for
any evidence that is inconsistent with or unhelpful in supporting either
side’s lead hypothesis.

The Method
Six approaches to Adversarial Collaboration are described here. What they
all have in common is the forced requirement to understand and address
the other side’s position rather than simply dismiss it. Mutual
understanding of the other side’s position is the bridge to productive
collaboration. These six techniques are not mutually exclusive; in other
words, one might use several of them for any specific project.

Key Assumptions Check:


The first step in understanding what underlies conflicting judgments is a
Key Assumptions Check, as described in chapter 8. Evidence is always
interpreted in the context of a mental model about how events normally
transpire in a given country or situation, and a Key Assumptions Check is
one way to make a mental model explicit. If a Key Assumptions Check has
not already been done, each side can apply this technique and then share
the results with the other side.

Discussion should then focus on the rationale for each assumption and
suggestions for how the assumption might be either confirmed or refuted.
If the discussion focuses on the probability of Assumption A versus
Assumption B, it is often helpful to express probability as a numerical
range—for example, 65 percent to 85 percent for probable. When analysts
go through these steps, they sometimes discover they are not as far apart as
they thought. The discussion should focus on refuting the other side’s
assumptions rather than supporting one’s own.
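
As a simple illustration of why numerical ranges help, the sketch below
(Python, with hypothetical values) checks whether two sides’ ranges
overlap; an overlap suggests the verbal labels overstated the disagreement.

    # Minimal sketch: do two analysts' probability ranges for an
    # assumption actually conflict? All values are hypothetical.
    def ranges_overlap(range_a, range_b):
        """Each range is a (low, high) pair of probabilities in percent."""
        low = max(range_a[0], range_b[0])
        high = min(range_a[1], range_b[1])
        return (low, high) if low <= high else None

    side_1 = (65, 85)   # expressed verbally as "probable"
    side_2 = (55, 70)   # expressed verbally as "better than even"
    shared = ranges_overlap(side_1, side_2)
    if shared:
        # Both judgments are compatible within this band, so the sides
        # may be closer together than the verbal labels suggested.
        print(f"Positions overlap between {shared[0]}% and {shared[1]}%")
    else:
        print("The numeric ranges genuinely conflict")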

Analysis of Competing Hypotheses:


When opposing sides are dealing with a collegial difference of opinion,
with neither side firmly locked into its position, Analysis of Competing
Hypotheses (ACH), described in chapter 7, may be a good structured
format for helping to identify and discuss their differences. One important
benefit of ACH is that it pinpoints the exact sources of disagreement. Both
parties agree on a set of hypotheses and then rate each item of evidence or
relevant information as consistent or inconsistent with each hypothesis.
When analysts disagree on these consistency ratings, the differences are
often quickly resolved. When not resolved, the differences often point to
previously unrecognized assumptions or to some interesting rationale for a
different interpretation of the evidence. One can also use ACH to trace the
significance of each item of relevant information in supporting the overall
conclusion.

The use of ACH may not result in the elimination of all the differences of
opinion, but it can be a big step toward understanding these differences
and determining which might be reconcilable through further intelligence
collection or research. The analysts can then make a judgment about the
potential productivity of further efforts to resolve the differences. ACH
may not be helpful, however, if two sides are already locked into their
positions. It is all too easy in ACH for one side to interpret the evidence
and enter assumptions in a way that deliberately supports its preconceived
position. To challenge a well-established mental model, other challenge or
conflict management techniques may be more appropriate.
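
The bookkeeping behind this use of ACH can be shown in a few lines. The
following is only an illustrative sketch with hypothetical evidence and
ratings, not the full ACH procedure of chapter 7 (which, among other
things, also weighs the diagnosticity of each item of evidence).

    # Bare-bones ACH bookkeeping with hypothetical evidence and ratings.
    # "C" = consistent, "I" = inconsistent, "N" = neutral/not applicable.
    # Following ACH practice, hypotheses are compared by how much evidence
    # is INCONSISTENT with them, not by how much supports them.
    ratings = {
        "E1: intercepted order":   {"H1": "C", "H2": "I"},
        "E2: troop movements":     {"H1": "C", "H2": "C"},
        "E3: public denial":       {"H1": "N", "H2": "C"},
        "E4: logistics shortfall": {"H1": "I", "H2": "C"},
    }
    for hypothesis in ("H1", "H2"):
        inconsistencies = sum(
            1 for row in ratings.values() if row[hypothesis] == "I"
        )
        print(f"{hypothesis}: {inconsistencies} item(s) of inconsistent evidence")
    # When two analysts' matrices differ, the specific cells where their
    # "C"/"I" ratings diverge locate the exact source of disagreement.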

Argument Mapping:
Argument Mapping, which was described in chapter 7, maps the logical
relationship between each element of an argument. Two sides might agree
to work together to create a single Argument Map with the rationale both
for and against a given conclusion. Such an Argument Map will show
where the two sides agree, where they diverge, and why. The visual
representation of the argument makes it easier to recognize weaknesses in
opposing arguments. This technique pinpoints the location of any
disagreement and could serve as an objective basis for mediating a
disagreement.

An alternative approach might be to focus the discussion on two
contrasting Argument Maps.

Mutual Understanding:
When analysts in different offices or agencies disagree, the disagreement
is often exacerbated by the fact that they have a limited understanding of
the other side’s position and logical reasoning. The Mutual Understanding
approach addresses this problem directly.

After an exchange of information on their positions, the two sides meet
together with a facilitator, moderator, or decision maker. Side 1 is required
to explain to Side 2 its understanding of Side 2’s position. Side 1 must do
this in a manner that satisfies Side 2 that its position is appropriately
represented. Then the roles are reversed, and Side 2 explains its
understanding of Side 1’s position. This mutual exchange is often difficult
to do without carefully listening to and understanding the opposing view
and what it is based upon. Once all the analysts accurately understand each
side’s position, they can discuss their differences more rationally and with
less emotion. Experience shows that this technique normally prompts some
movement of the opposing parties toward a common ground.6

There are two ways to measure the health of a debate: the kinds of
questions being asked and the level of listening.

—David A. Garvin and Michael A. Roberto, “What You Don’t
Know about Making Decisions,” Harvard Business Review,
September 2001

Joint Escalation:
When disagreement occurs within an analytic team, the disagreement is
often referred to a higher authority. This escalation often makes matters
worse. What typically happens is that a frustrated analyst takes the
problem up to his or her boss, briefly explaining the conflict in a manner
that is clearly supportive of the analyst’s own position. The analyst then
returns to the group armed with the boss’s support. However, the opposing
analyst(s) have also gone to their bosses and come back with support for
their solution. Each analyst is then locked into what has become “my
manager’s view” of the issue. An already thorny problem has become even
more intractable. If the managers engage each other directly, both will
quickly realize they lack a full understanding of the problem and must also
factor in what their counterparts know before trying to resolve the issue.

This situation can be avoided by an agreement among team members, or
preferably an established organization policy, that requires joint
escalation.7 The analysts should be required to prepare a joint statement
describing the disagreement and to present it jointly to their superiors. This
requires each analyst to understand and address, rather than simply
dismiss, the other side’s position. It also ensures that managers have access
to multiple perspectives on the conflict, its causes, and the various ways it
might be resolved.

Just the need to prepare such a joint statement discourages escalation and
often leads to an agreement. The proponents of this approach report their
experience that “companies that require people to share responsibility for
the escalation of a conflict often see a decrease in the number of problems
that are pushed up the management chain. Joint escalation helps create the
kind of accountability that is lacking when people know they can provide
their side of an issue to their own manager and blame others when things
don’t work out.”8

The Nosenko Approach:


Yuriy Nosenko was a Soviet intelligence officer who defected to the
United States in 1964. Whether he was a true defector or a Soviet plant
was a subject of intense and emotional controversy within the CIA for
more than a decade. In the minds of some, this historic case is still
controversial.

At a critical decision point in 1968, the leadership of the CIA’s Soviet
Bloc Division set up a three-man team to review all the evidence and make
a recommendation for the division’s action in this case. The amount of
evidence is illustrated by the fact that just one single report arguing that
Nosenko was still under Soviet control was 1,000 pages long. The team
consisted of one officer who was of the view that Nosenko was a Soviet
plant, one officer who believed that he was a bona fide defector, and an
experienced officer who had not previously been involved but was inclined
to think Nosenko might be a plant.

The interesting point here is the ground rule that the team was instructed to
follow. After reviewing the evidence, each officer identified those items of
evidence thought to be of critical importance in making a judgment on
Nosenko’s bona fides. Any item that one officer stipulated as critically
important then had to be addressed by the other two members.

It turned out that fourteen items were stipulated by at least one of the team
members and had to be addressed by both of the others. Each officer
prepared his own analysis, but they all had to address the same fourteen
issues. Their report became known as the “Wise Men” report.

The team did not come to a unanimous conclusion. However, it was
significant that the thinking of all three moved in the same direction. When
the important evidence was viewed from the perspective of searching for
the truth, rather than proving Nosenko’s guilt or innocence, the case that
Nosenko was a plant began to unravel. The officer who had always
believed that Nosenko was bona fide felt he could now prove the case. The
officer who was relatively new to the case changed his mind in favor of
Nosenko’s bona fides. The officer who had been one of the principal
analysts and advocates for the position that Nosenko was a plant became
substantially less confident in that conclusion. There were now sufficient
grounds for management to make the decision.

The ground rules used in the Nosenko case can be applied in any effort to
abate a long-standing analytic controversy. The key point that makes these
rules work is the requirement that each side must directly address the
issues that are important to the other side and thereby come to understand
the other’s perspective. This process guards against the common
propensity of analysts to make their own arguments and then simply
dismiss those of the other side as unworthy of consideration.9

Truth springs from argument amongst friends.

—David Hume, Scottish philosopher

10.2 Structured Debate


A Structured Debate is a planned debate between analysts or analytic
teams that hold opposing points of view on a specific issue. The debate is
conducted according to set rules and before an audience, which may be a
“jury of peers” or one or more senior analysts or managers.

When to Use It
Structured Debate is called for when a significant difference of opinion
exists within or between analytic units or within the decision-making
community. It can also be used effectively when Adversarial Collaboration
has been unsuccessful or is impractical, and a choice must be made
between two opposing opinions or a decision to go forward with a
comparative analysis of both. Structured Debate requires a significant
commitment of analytic time and resources. A long-standing policy issue,
a critical decision that has far-reaching implications, or a dispute within
the analytic community that is obstructing effective interagency
collaboration would be grounds for making this type of investment in time
and resources.

Value Added
In the method proposed here, each side presents its case in writing to the
opposing side; then, both cases are combined in a single paper presented to
the audience prior to the debate. The oral debate then focuses on refuting
the other side’s position. Glib and personable speakers can always make
arguments for their own position sound persuasive. Effectively refuting the
other side’s position is a different ball game, however. The requirement to
refute the other side’s position brings to the debate an important feature of
the scientific method: that the most likely hypothesis is actually the one
with the least evidence against it as well as good evidence for it. (The
concept of refuting hypotheses is discussed in chapter 7.)

The goal of the debate is to decide what to tell the customer. If neither side
can effectively refute the other, then arguments for and against both sides
should be included in the report. Customers of intelligence analysis gain
more benefit by weighing well-argued conflicting views than from reading
an assessment that masks substantive differences among analysts or drives
the analysis toward the lowest common denominator. If participants
routinely interrupt one another or pile on rebuttals before digesting the
preceding comment, the teams are engaged in emotional conflict rather
than constructive debate.

He who knows only his own side of the case, knows little of that.
His reasons may be good, and no one may have been able to refute
them. But if he is equally unable to refute the reasons on the
opposite side, if he does not so much as know what they are, he has
no ground for preferring either opinion.

—John Stuart Mill, On Liberty (1859)

The Method
Start by defining the conflict to be debated. If possible, frame the conflict
in terms of competing and mutually exclusive hypotheses. Ensure that all
sides agree with the definition. Then follow these steps:

✶ An individual is selected or a group identified who can develop the
best case that can be made for each hypothesis.
✶ Each side writes up the best case from its point of view. This
written argument must be structured with an explicit presentation of
key assumptions, key pieces of evidence, and careful articulation of
the logic behind the argument.
✶ Each side presents the opposing side with its arguments, and the
two sides are given time to develop counterarguments to refute the
opposing side’s position.

Next, conduct the debate phase in the presence of a jury of peers, senior
analysts, or managers who will provide guidance after listening to the
debate. If desired, an audience of interested observers might also watch the
debate.

✶ The debate starts with each side presenting a brief (maximum five
minutes) summary of the argument for its position. The jury and the
audience are expected to have read each side’s full argument.
✶ Each side then presents to the audience its rebuttal of the other
side’s written position. The purpose here is to proceed in the oral
arguments by systematically refuting alternative hypotheses rather
than by presenting more evidence to support one’s own argument.
This is the best way to evaluate the strengths of the opposing
arguments.
✶ After each side has presented its rebuttal argument, the other side
is given an opportunity to refute the rebuttal.
✶ The jury asks questions to clarify the debaters’ positions or gain
additional insight needed to pass judgment on the debaters’ positions.
✶ The jury discusses the issue and passes judgment. The winner is
the side that makes the best argument refuting the other side’s
position, not the side that makes the best argument supporting its own
position. The jury may also recommend possible next steps for further
research or intelligence collection efforts. If neither side can refute
the other’s arguments, it may be because both sides have a valid
argument; if that is the case, both positions should be represented in
any subsequent analytic report.

Relationship to Other Techniques


Structured Debate is similar to the Team A/Team B technique that has
been taught and practiced in parts of the Intelligence Community.
Structured Debate differs from Team A/Team B in its focus on refuting the
other side’s argument. And it avoids the historical association with the
infamous 1976 Team A/Team B exercise, which is not an appropriate role
model for how analysis should be done today.10

Origins of This Technique


The history of debate goes back to the Socratic dialogues in ancient
Greece, and even earlier. Many different forms of debate have evolved
since then. Richards Heuer formulated the idea of focusing the debate on
refuting the other side’s argument rather than supporting one’s own.

1. Jeff Weiss and Jonathan Hughes, “Want Collaboration? Accept—and
Actively Manage—Conflict,” Harvard Business Review, March 2005.

2. Mark M. Lowenthal, “Towards a Reasonable Standard for Analysis:
How Right, How Often on Which Issues?” Intelligence and National
Security 23 (June 2008).

3. Gregory F. Treverton, “Risks and Riddles,” Smithsonian, June 2007.

4. Daniel Kahneman, Autobiography, 2002, available on the Nobel Prize website:
http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahneman-autobio.html.
For a pioneering example of a report on an Adversarial
Collaboration, see Barbara Mellers, Ralph Hertwig, and Daniel Kahneman,
“Do Frequency Representations Eliminate Conjunction Effects? An
Exercise in Adversarial Collaboration,” Psychological Science 12 (July
2001).

5. Richards Heuer is grateful to Steven Rieber of the Office of the Director
of National Intelligence, Office of Analytic Integrity and Standards, for
referring him to Kahneman’s work on Adversarial Collaboration.

6. Richards Heuer is grateful to Jay Hillmer of the Defense Intelligence
Agency for sharing his experience in using this technique to resolve
coordination disputes.

7. Weiss and Hughes, “Want Collaboration?”

8. Ibid.

9. This discussion is based on Richards J. Heuer Jr., “Nosenko: Five Paths
to Judgment,” in Inside CIA’s Private World: Declassified Articles from
the Agency’s Internal Journal, 1955–1992, ed. H. Bradford Westerfield
(New Haven: Yale University Press, 1995).

10. The term Team A/Team B is taken from a historic analytic experiment
conducted in 1976. A team of CIA Soviet analysts (Team A) and a team of
outside critics (Team B) prepared competing assessments of the Soviet
Union’s strategic military objectives. This exercise was characterized by
entrenched and public warfare between long-term adversaries. In other
words, the historic legacy of Team A/Team B is exactly the type of trench
warfare between opposing sides that we need to avoid. The 1976
experiment did not achieve its goals, and it is not a model that most
analysts who are familiar with it would want to follow. We recognize that
some recent Team A/Team B exercises have been quite fruitful, but we
believe other conflict management techniques described in this chapter are
a better way to proceed.

11 Decision Support

11.1 Decision Trees
11.2 Decision Matrix
11.3 Pros-Cons-Faults-and-Fixes
11.4 Force Field Analysis
11.5 SWOT Analysis
11.6 Impact Matrix
11.7 Complexity Manager

Managers, commanders, planners, and other decision makers all make
choices or trade-offs among competing goals, values, or preferences.
Because of limitations in human short-term memory, we usually cannot
keep all the pros and cons of multiple options in mind at the same time.
That causes us to focus first on one set of problems or opportunities and
then another, a situation that often leads to vacillation or procrastination in
making a firm decision. Some decision-support techniques help overcome
this cognitive limitation by laying out all the options and interrelationships
in graphic form so that analysts can test the results of alternative options
while still keeping the problem as a whole in view. Other techniques help
decision makers untangle the complexity of a situation or define the
opportunities and constraints in the environment in which the choice needs
to be made.

It is usually not the analyst’s job to make the choices or decide on the
trade-offs, but analysts can and should use decision-support techniques to
provide timely support to managers, commanders, planners, and decision
makers who do make these choices. To engage in this type of customer-
support analysis, analysts must understand the operating environment of
the decision maker and anticipate how the decision maker is likely to
approach an issue. They must understand the dynamics of the decision-
making process in order to recognize when and how they can be most
useful. Most of the decision-support techniques described here are used in
both government and industry. By using such techniques, analysts can see
a problem from the decision maker’s perspective. They can use these
techniques without overstepping the limits of their role as analysts because
the technique doesn’t make the decision; it just structures all the relevant
information in a format that makes it easier for the manager, commander,
planner, or other decision maker to make a choice.

The decision aids described in this chapter provide a framework for
analyzing why or how an individual leader, group, organization, or country
has made or is likely to make a decision. If an analyst can describe an
adversary’s or a competitor’s goals and preferences, it may be easier to
foresee their decision. Similarly, when the decisions are known, the
technique makes it easier to infer the competitor’s or adversary’s goals and
preferences. Analysts can use these decision-support techniques to help the
decision maker frame a problem instead of trying to predict a foreign
organization or government’s decision. Often, the best support the analyst
can provide is to describe the forces that are expected to shape a decision,
identify several potential outcomes, and then select indicators or signs to
look for that would provide early warning of the direction in which events
are headed. (See chapter 6 for a discussion of Scenarios and Indicators.)

Caution is in order, however, whenever one thinks of predicting or even
explaining another person’s decision, regardless of whether the person is
of similar background or not. People do not always act rationally in their
own best interests. Their decisions are influenced by emotions and habits,
as well as by what others might think or by values of which others may not
be aware.

The same is true of organizations and governments. One of the most
common analytic errors is to assume that an organization or a government
will act rationally—that is, in its own best interests. All intelligence
analysts seeking to understand the behavior of another country should be
familiar with Graham Allison’s analysis of U.S. and Soviet decision
making during the Cuban missile crisis.1 It documents three different
models for how governments make decisions—bureaucratic bargaining
processes and standard organizational procedures as well as the rational
actor model.

Even if the organization or government is making a rational decision,
analysts may get it wrong because foreign organizations and governments
typically view their own best interests quite differently from the way
people from other cultures, countries, or backgrounds see them. Also,
organizations and governments do not always have a clear understanding
of their own best interests, and they often have a variety of conflicting
interests.

Decision making and decision analysis are large and diverse fields of study
and research. The decision-support techniques described in this chapter are
only a small sample of what is available, but they do meet many of the
basic requirements for intelligence analysis.

By providing structure to the decision-making process, the techniques used
for Decision Support discussed in this chapter help intelligence analysts as
well as decision makers avoid the common cognitive limitations of
satisficing and groupthink. Application of the techniques will often surface
new options or demonstrate that a previously favored option is less optimal
than originally thought. The natural tendency toward mirror imaging is
more likely to be kept in check when using these techniques because they
provide multiple perspectives for viewing a problem and envisioning the
interplay of complex factors.

Decision-support techniques can also help analysts overcome several
practitioner’s heuristics or mental mistakes, including focusing on a
narrow range of alternatives representing marginal, not radical, change;
failing to remember or factor something into the analysis because the
analyst lacks an appropriate category or “bin” for that item of information;
and overestimating the probability of multiple independent events
occurring in order for an event or attack to take place. Techniques such as
Decision Trees and the Decision Matrix use simple math to help analysts
and decision makers calculate the most probable or preferred outcomes.

The role of the analyst in the policymaking process is similar to
that of the scout in relation to the football coach. The job of the
scout is not to predict in advance the final score of the game, but to
assess the strengths and weaknesses of the opponent so that the
coach can devise a winning game plan. Then the scout sits in a
booth with powerful binoculars, to report on specific vulnerabilities
the coach can exploit.

—Douglas MacEachin, CIA Deputy Director for Intelligence,
1993–1995

Overview of Techniques

Decision Trees
are a simple way to chart the range of options available to a decision
maker, estimate the probability of each option, and show possible
outcomes. They provide a useful landscape to organize a discussion and
weigh alternatives but can also oversimplify a problem.

Decision Matrix
is a simple but powerful device for making trade-offs between conflicting
goals or preferences. An analyst lists the decision options or possible
choices, the criteria for judging the options, the weights assigned to each
of these criteria, and an evaluation of the extent to which each option
satisfies each of the criteria. This process will show the best choice—based
on the values the analyst or a decision maker puts into the matrix. By
studying the matrix, one can also analyze how the best choice would
change if the values assigned to the selection criteria were changed or if
the ability of an option to satisfy a specific criterion were changed. It is
almost impossible for an analyst to keep track of these factors effectively
without such a matrix, as one cannot keep all the pros and cons in working
memory at the same time. A Decision Matrix helps the analyst see the
whole picture.

Pros-Cons-Faults-and-Fixes
is a strategy for critiquing new policy ideas. It is intended to offset the
human tendency of analysts and decision makers to jump to conclusions
before conducting a full analysis of a problem, as often happens in group
meetings. The first step is for the analyst or the project team to make lists
of Pros and Cons. If the analyst or team is concerned that people are being
unduly negative about an idea, he or she looks for ways to “Fix” the Cons
—that is, to explain why the Cons are unimportant or even to transform
them into Pros. If concerned that people are jumping on the bandwagon
too quickly, the analyst tries to “Fault” the Pros by exploring how they
could go wrong. Usually, the analyst will either “Fix” the Cons or “Fault”
the Pros, but not do both. Of the various techniques described in this
chapter, this is one of the easiest and quickest to use.

Force Field Analysis
is a technique that analysts can use to help a decision maker decide how to
solve a problem or achieve a goal and determine whether it is possible to
do so. The analyst identifies and assigns weights to the relative importance
of all the factors or forces that are either a help or a hindrance in solving
the problem or achieving the goal. After organizing all these factors in two
lists, pro and con, with a weighted value for each factor, the analyst or
decision maker is in a better position to recommend strategies that would
be most effective in either strengthening the impact of the driving forces or
reducing the impact of the restraining forces.
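
A toy sketch of that weighted bookkeeping follows (Python; the factors
and weights are hypothetical, and the full method is described in
section 11.4).

    # Hypothetical Force Field Analysis: weighted driving (helping) and
    # restraining (hindering) forces bearing on a goal.
    driving = {"leadership support": 4, "available funding": 3,
               "public pressure": 2}
    restraining = {"bureaucratic resistance": 4, "staff shortage": 3}
    net = sum(driving.values()) - sum(restraining.values())
    print(f"Driving total: {sum(driving.values())}")
    print(f"Restraining total: {sum(restraining.values())}")
    # A positive net score suggests conditions favor the goal; the most
    # heavily weighted restraining forces are the ones most worth trying
    # to reduce, per the strategy discussion above.
    print(f"Net: {net:+d}")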

SWOT Analysis
is used to develop a plan or strategy for achieving a specified goal. (SWOT
is an acronym for Strengths, Weaknesses, Opportunities, and Threats.) In
using this technique, the analyst first lists the Strengths and Weaknesses in
the organization’s ability to achieve a goal, and then lists Opportunities
and Threats in the external environment that would either help or hinder
the organization from reaching the goal.

Impact Matrix
can be used by analysts or managers to assess the impact of a decision on
their organization by evaluating what impact it is likely to have on all key
actors or participants in that decision. It also gives the analyst or decision
maker a better sense of how the issue is most likely to play out or be
resolved in the future.

Complexity Manager
is a simplified approach to understanding complex systems—the kind of
systems in which many variables are related to each other and may be
changing over time. Government policy decisions are often aimed at
changing a dynamically complex system. It is because of this dynamic
complexity that many policies fail to meet their goals or have unforeseen
and unintended consequences. Use Complexity Manager to assess the
chances for success or failure of a new or proposed policy, identify
opportunities for influencing the outcome of any situation, determine what
would need to change in order to achieve a specified goal, or recognize the
potential for unintended consequences from the pursuit of a policy goal.

11.1 Decision Trees
Decision Trees establish chains of decisions and/or events that illustrate a
comprehensive range of possible future actions. They paint a landscape for
the decision maker showing the range of options available, the estimated
value or probability of each option, and the likely implications or
outcomes of choosing each option.

When to Use It
Decision Trees can be used to do the following:

✶ Aid decision making by explicitly comparing options.
✶ Create a heuristic model of the decision-making process of the
subject or adversary.
✶ Map multiple competing hypotheses about an array of possible
actions.

A Decision Tree illustrates a comprehensive set of possible decisions and
the possible outcomes of those decisions. It can be used both to help a
decision maker faced with a difficult problem to resolve and to assess what
options an adversary or competitor might choose to implement.

In constructing a Decision Tree, analysts need to have a rich understanding
of the operating environment in which the decision is being made. This
can include knowledge of motives, capabilities, sensitivities to risk, current
doctrine, cultural norms and values, and other relevant factors.

Value Added
Decision Trees are simple to understand and easy to use and interpret. A
Decision Tree can be generated by a single analyst—or, preferably, by a
group using brainstorming techniques discussed in chapter 5. Once the tree
has been built, it can be posted on a wall or in a wiki and adjusted over a
period of time as new information is gathered. When significant new data
are received that add new branches to the tree or substantially alter the
probabilities of the options, these changes can be inserted into the tree and
highlighted with color to show the decision maker what has changed and
how it may have changed the previous line of analysis.

The Method
Using a Decision Tree is a fairly simple process involving two steps: (1)
building the tree, and (2) calculating the value or probability of each
outcome represented on the tree. Follow these steps:

✶ Draw a square on a piece of paper or a whiteboard to represent a
decision point.
✶ Draw lines from the square representing a range of options that
can be taken.
✶ At the end of the line for each option, indicate either that further
options are available (by drawing a square followed by more lines) or
that an outcome is reached (by drawing a circle followed by one or
more lines describing the range of possibilities).
✶ Continue this process along each branch of the tree until all
options and outcomes have been specified.

Once the tree has been constructed, do the following:

✶ Establish a set of percentages (adding to 100) for each set of lines
emanating from each square.
✶ Multiply the percentages shown along each critical path or branch
of the tree and record these percentages at the far right of the tree.
Check to make sure all the percentages in this column add to 100.

The most valuable or most probable outcome will have the highest
percentage assigned to it, and the least valuable or least probable outcome
will have the lowest percentage assigned to it.
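
The arithmetic in this second stage is easy to sanity-check in code. The
sketch below (Python) uses a hypothetical two-level tree; each path lists
the percentages assigned along its branches, expressed as decimals.

    # Hypothetical two-level Decision Tree: each path is the sequence of
    # probabilities assigned along its branches. Multiplying along a path
    # and comparing across paths implements the method described above.
    paths = {
        "Negotiate -> deal holds":     [0.60, 0.70],
        "Negotiate -> deal collapses": [0.60, 0.30],
        "Escalate -> limited strike":  [0.40, 0.80],
        "Escalate -> wider conflict":  [0.40, 0.20],
    }
    results = {}
    for name, branch_probs in paths.items():
        p = 1.0
        for prob in branch_probs:
            p *= prob
        results[name] = p
    # The end-point probabilities across all paths should sum to 1.0 (100%).
    assert abs(sum(results.values()) - 1.0) < 1e-9
    for name, p in sorted(results.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {p:.0%}")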

Potential Pitfalls
A Decision Tree is only as good as the reliability of the data, completeness
of the range of options, and validity of the qualitative probabilities or
values assigned to each option. A detailed Decision Tree can present the
misleading impression that the authors have thought of all possible options
or outcomes. In intelligence analysis, options are often available to the
subjects of the analysis that were not imagined, just as there might be
unintended consequences that the subjects did not anticipate.

Relationship to Other Techniques
A Decision Tree is structurally similar to critical path analysis and to
Program Evaluation and Review Technique (PERT) charts. Both of these
techniques, however, only show the activities and connections that need to
be undertaken to complete a complex task. A timeline analysis as done in
support of a criminal investigation is essentially a Decision Tree drawn
after the fact, showing only the paths actually taken.

Origins of This Technique


This description of Decision Trees is from the Canadian government’s
Structured Analytic Techniques for Senior Analysts course. The
Intelligence Analyst Learning Program developed the course, and the
materials are used here with the permission of the Canadian government.
More detailed discussion of how to build and use Decision Trees is readily
available on the Internet, for example at the MindTools website.

11.2 Decision Matrix


A Decision Matrix helps analysts identify the course of action that best
achieves specified goals or preferences.

When to Use It
The Decision Matrix technique should be used when a decision maker has
multiple options from which to choose, has multiple criteria for judging
the desirability of each option, and/or needs to find the decision that
maximizes a specific set of goals or preferences. For example, it can be
used to help choose among various plans or strategies for improving
intelligence analysis, to select one of several IT systems one is considering
buying, to determine which of several job applicants is the right choice, or
to consider any personal decision, such as what to do after retiring.

A Decision Matrix is not applicable to most intelligence analysis, which
typically deals with evidence and judgments rather than goals and
preferences. It can be used, however, for supporting a decision maker’s
consideration of alternative courses of action. It might also be used to
support a Red Hat Analysis that examines decision options from an
adversary’s or competitor’s perspective.

Value Added
This technique deconstructs a decision problem into its component parts,
listing all the options or possible choices, the criteria for judging the
options, the weights assigned to each of these criteria, and an evaluation of
the extent to which each option satisfies each of these criteria. All these
judgments are apparent to anyone looking at the matrix. Because it is so
explicit, the matrix can play an important role in facilitating
communication between those who are involved in or affected by the
decision process. It can be easy to identify areas of disagreement and to
determine whether such disagreements have any material impact on the
decision. One can also see how sensitive the decision is to changes that
might be made in the values assigned to the selection criteria or to the
ability of an option to satisfy the criteria. If circumstances or preferences
change, it is easy to go back to the matrix, make changes, and calculate the
impact of the changes on the proposed decision.

The Method
Create a Decision Matrix table. To do this, break the decision problem
down into two main components by making two lists—a list of options or
alternatives for making a choice and a list of criteria to be used when
judging the desirability of the options. Then follow these steps:

✶ Create a matrix with one column for each option. Write the name
of each option at the head of one of the columns. Add two more blank
columns on the left side of the table.
✶ Count the number of selection criteria, and then adjust the table so
that it has that many rows plus two more, one at the top to list the
options and one at the bottom to show the scores for each option. In
the first column on the left side, starting with the second row, write in
all the selection criteria down the left side of the table. There is some
value in listing them roughly in order of importance, but doing so is
not critical. Leave the bottom row blank. (Note: Whether you enter
the options across the top row and the criteria down the far-left
column, or vice versa, depends on what fits best on the page. If one of

325
the lists is significantly longer than the other, it usually works best to
put the longer list in the left-side column.)
✶ Assign weights based on the importance of each of the selection
criteria. There are several ways to do this, but the preferred way is to
take 100 percent and divide these percentage points among the
selection criteria. Be sure that the weights for all the selection criteria
combined add to 100 percent. Also be sure that all the criteria are
phrased in such a way that a higher weight is more desirable. (Note: If
this technique is being used by an intelligence analyst to support
decision making, this step should not be done by the analyst. The
assignment of relative weights is up to the decision maker.)
✶ Work across the matrix one row at a time to evaluate the relative
ability of each of the options to satisfy each of the selection criteria.
For example, assign ten points to each row and divide these points
according to an assessment of the degree to which each of the options
satisfies each of the selection criteria. Then multiply this number by
the weight for that criterion. Figure 11.2 is an example of a Decision
Matrix with three options and six criteria.
✶ Add the columns for each of the options. If you accept the
judgments and preferences expressed in the matrix, the option with
the highest number will be the best choice (see the sketch below).
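
The arithmetic behind these steps is simple enough to sketch in a few
lines of code. The following Python fragment is a minimal illustration
only; the options, criteria, weights, and scores are hypothetical and are
not drawn from Figure 11.2.

# Decision Matrix: weighted scoring of options against criteria.
# Criterion weights are percentage points that must sum to 100.
weights = {"cost": 30, "speed": 20, "reliability": 50}  # hypothetical
assert sum(weights.values()) == 100

# For each criterion (each row), ten points are divided among the options.
scores = {
    "Option A": {"cost": 5, "speed": 2, "reliability": 4},
    "Option B": {"cost": 3, "speed": 5, "reliability": 3},
    "Option C": {"cost": 2, "speed": 3, "reliability": 3},
}

# Multiply each score by its criterion weight and sum down each column.
totals = {opt: sum(s[c] * weights[c] for c in weights)
          for opt, s in scores.items()}
for opt, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(opt, total)  # the highest total is the best choice

Because all the judgments sit in plain data structures, the sensitivity
analysis described below amounts to editing a weight or score and
rerunning the tally.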

Figure 11.2 Decision Matrix

When using this technique, many analysts will discover relationships or
opportunities not previously recognized. A sensitivity analysis may find
that plausible changes in some values would lead to a different choice. For
example, the analyst might think of a way to modify an option in a way
that makes it more desirable or might rethink the selection criteria in a way
that changes the preferred outcome. The numbers calculated in the matrix
do not make the decision. The matrix is just an aid to help the analyst and
the decision maker understand the trade-offs between multiple competing
preferences.

Origins of This Technique
This is one of the most commonly used techniques for decision analysis.
Many variations of this basic technique have been called by many different
names, including decision grid, Multiple Attribute Utility Analysis
(MAUA), Multiple Criteria Decision Analysis (MCDA), Multiple Criteria
Decision Making (MCDM), Pugh Matrix, and Utility Matrix. For a
comparison of various approaches to this type of analysis, see Panos M.
Parlos and Evangelos Triantaphyllou, eds., Multi-Criteria Decision
Making Methods: A Comparative Study (Dordrecht, The Netherlands:
Kluwer Academic Publishers, 2000).

11.3 Pros-Cons-Faults-and-Fixes
Pros-Cons-Faults-and-Fixes is a strategy for critiquing new policy ideas. It
is intended to offset the human tendency of a group of analysts and
decision makers to jump to a conclusion before full analysis of the
problem has been completed.

When to Use It
Making lists of pros and cons for any action is a common approach to
decision making. The “Faults” and “Fixes” are what is new in this strategy.
Use this technique to make a quick appraisal of a new idea or a more
systematic analysis of a choice between two options.

One advantage of Pros-Cons-Faults-and-Fixes is its applicability to
virtually all types of decisions. Of the various structured techniques for
decision making, it is one of the easiest and quickest to use. It requires
only a certain procedure for making the lists and discussing them with
others to solicit divergent input.

In the business world, the technique can also be used to discover potential
vulnerabilities in a proposed strategy to introduce a new product or acquire
a new company. By assessing how Pros can be “Faulted,” one can
anticipate how competitors might react to a new corporate initiative; by
assessing how Cons can be “Fixed,” potential vulnerabilities can be
addressed and major mistakes avoided early in the planning process.

Value Added
It is unusual for a new idea to meet instant approval. What often happens
in meetings is that a new idea is brought up, one or two people
immediately explain why they don’t like it or believe it won’t work, and
the idea is then dropped. On the other hand, there are occasions when just
the opposite happens. A new idea is immediately welcomed, and a
commitment to support it is made before the idea is critically evaluated.
The Pros-Cons-Faults-and-Fixes technique helps to offset this human
tendency to jump to conclusions.

The technique first requires a list of Pros and Cons about the new idea or
the choice between two alternatives. If there seems to be excessive
enthusiasm for an idea and a risk of acceptance without critical evaluation,
the next step is to look for “Faults.” A Fault is any argument that a Pro is
unrealistic, won’t work, or will have unacceptable side effects. On the
other hand, if there seems to be a bias toward negativity or a risk of the
idea being dropped too quickly without careful consideration, the next step
is to look for “Fixes.” A Fix is any argument or plan that would neutralize
or minimize a Con, or even change it into a Pro. In some cases, it may be
appropriate to look for both Faults and Fixes before comparing the two
lists and making a decision.

The Pros-Cons-Faults-and-Fixes technique does not tell an analyst whether
the Pros or the Cons have the strongest argument. That answer is still
based on the analyst’s professional judgment. The role of the technique is
to offset any tendency to rush to judgment. It also organizes the elements
of the problem in a logical manner that can help the decision maker make a
carefully considered choice. Writing things down in this manner helps the
analyst and the decision maker see things more clearly and become more
objective and emotionally detached from the decision. (See Figure 11.3.)

The Method
Start by clearly defining the proposed action or choice. Then follow these
steps:

✶ List the Pros in favor of the decision or choice. Think broadly and
creatively, and list as many benefits, advantages, or other positives as
possible.
✶ List the Cons, or arguments against what is proposed. There are
usually more Cons than Pros, as most humans are naturally critical. It
is easier to think of arguments against a new idea than to imagine
how the new idea might work. This is why it is often difficult to get
careful consideration of a new idea.
✶ Review and consolidate the list. If two Pros are similar or
overlapping, consider merging them to eliminate any redundancy. Do
the same for any overlapping Cons.
✶ If the choice is between two clearly defined options, go through
the previous steps for the second option. If there are more than two
options, a technique such as Decision Matrix may be more
appropriate than Pros-Cons-Faults-and-Fixes.
✶ At this point you must make a choice. If the goal is to challenge an
initial judgment that the idea won’t work, take the Cons, one at a
time, and see if they can be “Fixed.” That means trying to figure a
way to neutralize their adverse influence or even to convert them into
Pros. This exercise is intended to counter any unnecessary or biased
negativity about the idea. There are at least four ways an argument
listed as a Con might be Fixed:
– Propose a modification of the Con that would significantly
lower the risk of the Con being a problem.
– Identify a preventive measure that would significantly reduce
the chances of the Con being a problem.
– Do contingency planning that includes a change of course if
certain indicators are observed.
– Identify a need for further research or information gathering
to confirm or refute the assumption that the Con is a problem.
✶ If the goal is to challenge an initial optimistic assumption that the
idea will work and should be pursued, take the Pros, one at a time,
and see if they can be “Faulted.” That means trying to figure out
how the Pro might fail to materialize or have undesirable
consequences. This exercise is intended to counter any wishful
thinking or unjustified optimism about the idea. A Pro might be
Faulted in at least three ways:
– Identify a reason why the Pro would not work or why the
benefit would not be received.
– Identify an undesirable side effect that might accompany the
benefit.
– Identify a need for further research or information gathering
to confirm or refute the assumption that the Pro will work or be
beneficial.
✶ A third option is to combine both approaches, to Fault the Pros and
Fix the Cons.
✶ Compare the Pros, including any Faults, against the Cons,
including the Fixes. Weigh the balance of one against the other, and
make the choice. The choice is based on your professional judgment,
not on any numerical calculation of the number or value of Pros
versus Cons (see the bookkeeping sketch below).
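
No arithmetic is involved, but a little bookkeeping can keep the four
lists straight. The Python sketch below is purely illustrative (every Pro,
Con, Fault, and Fix in it is hypothetical), and, deliberately, nothing is
scored or summed.

# Pros-Cons-Faults-and-Fixes: each Pro carries any Faults found in it,
# and each Con carries any candidate Fixes. The final comparison is a
# matter of professional judgment, not a point count.
pros = {  # hypothetical example: adopting a new analytic software tool
    "Reduces analyst workload": ["Savings may be offset by training time"],
    "Improves the audit trail": [],
}
cons = {
    "High licensing cost": ["Negotiate an enterprise discount",
                            "Phase in one office at a time"],
    "Staff resistance": ["Recruit early adopters to champion the tool"],
}

print("PROS (with Faults):")
for pro, faults in pros.items():
    print("  +", pro)
    for fault in faults:
        print("      Fault:", fault)
print("CONS (with Fixes):")
for con, fixes in cons.items():
    print("  -", con)
    for fix in fixes:
        print("      Fix:", fix)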

Figure 11.3 Pros-Cons-Faults-and-Fixes Analysis

Potential Pitfalls
Often when listing the Pros and Cons, analysts will assign weights to each
Pro and Con on the list and then re-sort the lists, with the Pros or Cons
receiving the most points at the top of the list and those receiving the
fewest points at the bottom. This can be a useful exercise, helping the
analyst weigh the balance of one against the other, but the authors strongly
recommend against adding up the scores on each side and deciding that the
list with the greater number of points is the right choice. Any numerical
calculation can be easily manipulated by simply adding more Pros or more
Cons to either list to increase its overall score. The best protection against
this practice is simply not to add up the points in either column.

Origins of This Technique
Pros-Cons-Faults-and-Fixes is Richards Heuer’s adaptation of the Pros-
Cons-and-Fixes technique described by Morgan D. Jones in The Thinker’s
Toolkit: Fourteen Powerful Techniques for Problem Solving (New York:
Three Rivers Press, 1998), 72–79. Jones assumed that humans are
“compulsively negative,” and that “negative thoughts defeat creative
objective thinking.” Thus his technique focused only on Fixes for the
Cons. The technique that we describe here recognizes that analysts and
decision makers can also be biased by overconfidence, in which case
Faulting the Pros may be more important than Fixing the Cons.

11.4 Force Field Analysis
Force Field Analysis is a simple technique for listing and assessing all the
forces for and against a change, problem, or goal. Kurt Lewin, one of the
fathers of modern social psychology, believed that all organizations are
systems in which the present situation is a dynamic balance between forces
driving for change and restraining forces. In order for any change to occur,
the driving forces must exceed the restraining forces, and the relative
strength of these forces is what this technique measures. This technique is
based on Lewin’s theory.2

When to Use It
Force Field Analysis is useful in the early stages of a project or research
effort, when the analyst is defining the issue, gathering data, or developing
recommendations for action. It requires that the analyst clearly define the
problem in all its aspects. It can aid an analyst in structuring the data and
assessing the relative importance of each of the forces affecting the issue.
The technique can also help the analyst overcome the natural human
tendency to dwell on the aspects of the data that are most comfortable. The
technique can be used by an individual analyst or by a small team.

In the world of business and politics, the technique can also be used to
develop and refine strategies to promote a particular policy or ensure that a
desired outcome actually comes about. In such instances, it is often useful
to define the various forces in terms of key individuals who need to be
persuaded. For example, instead of listing budgetary restrictions as a key
factor, one would write down the name of the person who controls the
budget. Similarly, Force Field Analysis can be used to diagnose what
forces and individuals need to be constrained or marginalized in order to
prevent a policy from being adopted or an outcome from happening.

Value Added
The primary benefit of Force Field Analysis is that it requires an analyst to
consider the forces and factors (and, in some cases, individuals) that
influence a situation. It helps the analyst think through the ways various
forces affect the issue and fosters the recognition that forces can be divided
into two categories: the driving forces and the restraining forces. By
sorting the evidence into driving and restraining forces, the analyst must
delve deeply into the issue and consider the less obvious factors and
issues.

By weighing all the forces for and against an issue, the analyst can better
recommend strategies that would be most effective in reducing the impact
of the restraining forces and strengthening the effect of the driving forces.
Force Field Analysis also offers a powerful way to visualize the key
elements of the problem by providing a simple tally sheet for displaying
the different levels of intensity of the forces individually and as a whole.
With the data sorted into two lists, decision makers can more easily
identify which forces deserve the most attention and develop strategies to
overcome the negative elements while promoting the positive elements.
Figure 11.4 is an example of a Force Field diagram.

An issue is held in balance by the interaction of two opposing sets
of forces—those seeking to promote change (driving forces) and
those attempting to maintain the status quo (restraining forces).

—Kurt Lewin, Resolving Social Conflicts (1948)

The Method
✶ Define the problem, goal, or change clearly and concisely.
✶ Brainstorm to identify the main forces that will influence the issue.
Consider such topics as needs, resources, costs, benefits,
organizations, relationships, attitudes, traditions, interests, social and
cultural trends, rules and regulations, policies, values, popular desires,
and leadership to develop the full range of forces promoting and
restraining the factors involved.
✶ Make one list showing the forces or people “driving” the change
and a second list showing the forces or people “restraining” the
change.
✶ Assign a value (the intensity score) to each driving or restraining
force to indicate its strength. Assign the weakest intensity scores a
value of 1 and the strongest a value of 5. The same intensity score can
be assigned to more than one force if you consider the factors equal in
strength. List the intensity scores in parentheses beside each item.
✶ Examine the two lists to determine if any of the driving forces
balance out the restraining forces.
✶ Devise a manageable course of action to strengthen those forces
that lead to the preferred outcome and weaken the forces that would
hinder the desired outcome.

Figure 11.4 Force Field Analysis: Removing Abandoned Cars from City
Streets

Source: 2007 Pherson Associates, LLC.

You should keep in mind that the preferred outcome may be either
promoting a change or restraining a change. For example, if the problem is
increased drug use or criminal activity, you would focus the analysis on
the factors that would have the most impact on restraining criminal activity
or drug use. On the other hand, if the preferred outcome is improved
border security, you would highlight the driving forces that, if
strengthened, would be most likely to promote border security.

Potential Pitfalls
When assessing the balance between driving and restraining forces, the
authors recommend against adding up the scores on each side and
concluding that the side with the greater number of points will win out. Any
numerical calculation can be easily manipulated by simply adding more
forces or factors to either list to increase its overall score.
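
The tally sheet itself can be mocked up in a few lines of Python. In the
hypothetical sketch below, each force carries an intensity score from 1
(weakest) to 5 (strongest); the totals are printed only as a visual aid,
not as a decision rule, for the reason just given.

# Force Field Analysis: driving vs. restraining forces, each scored for
# intensity on a 1-to-5 scale. Totals are shown for context only; per
# the pitfall noted above, they should not by themselves decide the issue.
driving = {  # hypothetical forces promoting the change
    "Public pressure for action": 4,
    "New funding available": 3,
}
restraining = {  # hypothetical forces maintaining the status quo
    "Budget officer opposition": 5,
    "Staff shortages": 2,
}

def show(label, forces):
    print(label)
    for name, intensity in sorted(forces.items(), key=lambda kv: -kv[1]):
        print(f"  {name} ({intensity}) " + "#" * intensity)
    print("  total intensity:", sum(forces.values()))

show("DRIVING FORCES", driving)
show("RESTRAINING FORCES", restraining)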

Origins of This Technique
Force Field Analysis is widely used in social science and business
research. (A Google search on the term brings up more than seventy-one
million hits.) This version of the technique is largely from Randolph H.
Pherson, Handbook of Analytic Tools and Techniques (Reston, VA:
Pherson Associates, LLC, 2008); and Pherson Associates teaching
materials.

11.5 SWOT Analysis
SWOT is commonly used by all types of organizations to evaluate the
Strengths, Weaknesses, Opportunities, and Threats involved in any project
or plan of action. The Strengths and Weaknesses are internal to the
organization, while the Opportunities and Threats are characteristics of the
external environment.

When to Use It
After setting a goal or objective, use SWOT as a framework for collecting
and organizing information in support of strategic planning and decision
making to achieve the goal or objective. Information is collected to
analyze the plan’s Strengths and Weaknesses and the Opportunities and
Threats present in the external environment that might have an impact on
the ability to achieve the goal.

SWOT is easy to use. It can be used by a single analyst, although it is
usually a group process. It is particularly effective as a cross-functional
team-building exercise at the start of a new project. Businesses and
organizations of all types use SWOT so frequently that a Google search on
“SWOT Analysis” turns up more than one million hits.

Value Added
SWOT can generate useful information with relatively little effort, and it
brings that information together in a framework that provides a good base
for further analysis. It often points to specific actions that can or should be
taken. Because the technique matches an organization’s or plan’s Strengths
and Weaknesses against the Opportunities and Threats in the environment
in which it operates, the plans or action recommendations that develop
from the use of this technique are often quite practical.

The Method
✶ Define the objective.
✶ Fill in the SWOT table by listing Strengths, Weaknesses,
Opportunities, and Threats that are expected to facilitate or hinder
achievement of the objective. (See Figure 11.5.) The significance of
the attributes’ and conditions’ impact on achievement of the objective
is far more important than the length of the list. It is often desirable to
list the items in each quadrant in order of their significance or to
assign them values on a scale of 1 to 5.
✶ Identify possible strategies for achieving the objective. This is
done by asking the following questions (see the sketch after this list):
– How can we use each Strength?
– How can we improve each Weakness?
– How can we exploit each Opportunity?
– How can we mitigate each Threat?
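
As a minimal illustration of this last step (the quadrant entries below
are hypothetical), the four strategy questions can be applied mechanically
to every item in the table:

# SWOT: apply the four strategy questions to each quadrant entry.
swot = {
    "Strengths": ["Experienced analytic staff"],
    "Weaknesses": ["Limited language coverage"],
    "Opportunities": ["New open-source data feeds"],
    "Threats": ["A competitor expanding in the region"],
}
question = {
    "Strengths": "How can we use",
    "Weaknesses": "How can we improve",
    "Opportunities": "How can we exploit",
    "Threats": "How can we mitigate",
}
for quadrant, items in swot.items():
    for item in items:
        print(f'{question[quadrant]} "{item}"?')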

An alternative approach is to apply “matching and converting” techniques.
Matching refers to matching Strengths with Opportunities to make the
Strengths even stronger. Converting refers to matching Opportunities with
Weaknesses in order to convert the Weaknesses into Strengths.

Potential Pitfalls
SWOT is simple, easy, and widely used, but it has limitations. It focuses
on a single goal without weighing the costs and benefits of alternative
means of achieving the same goal. In other words, SWOT is a useful
technique as long as the analyst recognizes that it does not necessarily tell
the full story of what decision should or will be made. There may be other
equally good or better courses of action.

Figure 11.5 SWOT Analysis

Another strategic planning technique, the TOWS Matrix, remedies one of
the limitations of SWOT. The factors listed under Threats, Opportunities,
Weaknesses, and Strengths are combined to identify multiple alternative
strategies that an organization might pursue.3

Relationship to Other Techniques
The factors listed in the Opportunities and Threats quadrants of a SWOT
Analysis are the same as the outside or external factors the analyst seeks to
identify during Outside-In Thinking (chapter 8). In that sense, there is
some overlap between the two techniques.

Origins of This Technique
The SWOT technique was developed in the late 1960s at Stanford
Research Institute as part of a decade-long research project on why
corporate planning fails. It is the first part of a more comprehensive
strategic planning program. It has been so heavily used over such a long
period of time that several versions have evolved. Richards Heuer has
selected the version he believes most appropriate for intelligence analysis.
It comes from multiple Internet sites, including the following:
http://www.businessballs.com/swotanalysisfreetemplate.htm,
http://en.wikipedia.org/wiki/SWOT_analysis, http://www.mindtools.com,
http://www.valuebasedmanagement.net, and http://www.mycoted.com.

11.6 Impact Matrix
The Impact Matrix identifies the key actors involved in a decision, their
level of interest in the issue, and the impact of the decision on them. It is a
framing technique that can give analysts and managers a better sense of
how well or poorly a decision may be received, how it is most likely to
play out, and what would be the most effective strategies to resolve a
problem.

When to Use It
The best time for a manager to use this technique is when a major new
policy initiative is being contemplated or a mandated change is about to be
announced. The technique helps the manager identify where he or she is
most likely to encounter both resistance and support. Intelligence analysts
can also use the technique to assess how the public might react to a new
policy pronouncement by a foreign government or a new doctrine posted
on the Internet by a political movement. Invariably, the technique will
uncover new insights by focusing in a systematic way on all possible
dimensions of the issue.

The template matrix makes the technique fairly easy to use. Most often, an
individual manager will apply the technique to develop a strategy for how
he or she plans to implement a new policy or respond to a newly decreed
mandate from on high. Managers can also use the technique proactively
before they announce a new policy or procedure. The technique can
expose unanticipated pockets of resistance or support, as well as identify
those whom it would be smart to consult before the policy or procedure
becomes public
knowledge. A single intelligence analyst can also use the technique,
although it is usually more effective if done as a group process.

Value Added
The technique provides the user with a comprehensive framework for
assessing whether a new policy or procedure will be met with resistance or
support. A key concern is to identify any actors who will be heavily
impacted in a negative way. They should be engaged early on, ideally
before the policy is announced, in case they have ideas on how to make the
new policy more digestible. At a minimum, they will appreciate that their
views were sought out and considered, whether positive or negative.
Support can be enlisted from those who will be strongly impacted in a
positive way.

The Impact Matrix usually is most effective when used by a manager as he
or she is developing a new policy. The matrix helps the manager identify
who will be most affected, and he or she can consider whether this argues
for either modifying the plan or modifying the strategy for announcing the
plan.

The Method
The Impact Matrix process involves the following steps (a template for
using the Impact Matrix is provided in Figure 11.6):

✶ Identify all the individuals or groups involved in the decision or
issue. The list should include: me, my supervisor, my employees or
subordinates, my customer, my colleagues or counterparts in my
office or agency, and my counterparts in other agencies.
✶ Rate how important this issue is to each actor or how much they
are likely to care about it. Use a three-point scale: Low, Medium, or
High. Their level of interest should reflect how great an impact the
decision would have on such issues as their time, their quality of
work life, and their prospects for success.
✶ Categorize the impact of the decision on each actor as mostly
positive (P), neutral (N), or mostly negative (M), as sketched after
this list. If a decision has the
potential to be negative, mark it as negative. If in some cases the
impact on a person or group is mixed, then either mark it as neutral or
split the group into subgroups if specific subgroups can be identified.
✶ Once the matrix has been completed, assess the likely overall
reaction to the policy.
✶ Develop an action plan.
✶ Identify where the decision is likely to have a major negative
impact and consider the utility of prior consultations.
✶ Identify where the decision is likely to have a major positive
impact and consider enlisting the support of key actors in helping
make the decision or procedure work.
✶ Announce the decision and monitor reactions.
✶ Reassess the action plan based on feedback received on a periodic
basis.
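
A bare-bones version of the matrix is easy to sketch in code. In the
hypothetical Python fragment below, each actor carries a level of interest
and an expected impact, and any high-interest actor facing a negative
impact is flagged for early consultation:

# Impact Matrix: key actors, their level of interest in the decision,
# and the decision's expected impact on them. All entries are hypothetical.
actors = [
    # (actor, interest: Low/Medium/High, impact: positive/neutral/negative)
    ("Me", "High", "positive"),
    ("My supervisor", "Medium", "positive"),
    ("My employees", "High", "negative"),
    ("Counterparts, other agency", "Low", "neutral"),
]

for actor, interest, impact in actors:
    flag = ""
    if interest == "High" and impact == "negative":
        flag = "  <-- consult early, before the decision is announced"
    print(f"{actor:28} interest={interest:7} impact={impact:9}{flag}")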

Figure 11.6 Impact Matrix: Identifying Key Actors, Interests, and Impact

Origins of This Technique
The Impact Matrix was developed by Mary O’Sullivan and Randy
Pherson, Pherson Associates, LLC, and is taught in courses for mid-level
managers in government, law enforcement, and business.

11.7 Complexity Manager
Complexity Manager helps analysts and decision makers understand and
anticipate changes in complex systems. As used here, the word
“complexity” encompasses any distinctive set of interactions that are more
complicated than even experienced analysts can think through solely in
their heads.4

When to Use It
As a policy support tool, Complexity Manager can be used to assess the
chances for success or failure of a new or proposed program or policy, and
opportunities for influencing the outcome of any situation. It also can be
used to identify what would have to change in order to achieve a specified
goal, as well as unintended consequences from the pursuit of a policy goal.

When trying to foresee future events, both the intelligence and business
communities have typically dealt with complexity by the following:

✶ Assuming that the future is unpredictable and generating
alternative future scenarios and indicators that can be tracked to
obtain early warning of which future is emerging.
✶ Developing or contracting for complex computer models and
simulations of how the future might play out. This practice is costly
in time and money and often of limited practical value to the working
analysts.
✶ Making a number of assumptions and relying on the analyst’s
intuition or expert judgment to generate a best guess of how things
will work out.

The use of Complexity Manager is a fourth approach that may be
preferable in some circumstances, especially in cases of what one might
call “manageable complexity.” It can help decision makers ask better
questions and anticipate problems.

Complexity Manager is different from other methods for dealing with
complexity, because it can be used by the average analyst who does not
have advanced quantitative skills. It can be used by analysts who do not
have access to software for programs such as Causal Loop Diagramming
or Block-Flow Diagramming commonly used in System Dynamics
analysis.

Value Added
We all know that we live in a complex world of interdependent political,
economic, social, and technological systems in which each event or change
has multiple impacts. These impacts then have additional impacts on other
elements of the system. Although we understand this, we usually do not
analyze the world in this way, because the multitude of potential
interactions is too difficult for the human brain to track simultaneously. As
a result, analysts often fail to foresee future problems or opportunities that
may be generated by current trends and developments. Or they fail to
foresee the undesirable side effects of well-intentioned policies.5

Complexity Manager can often improve an analyst’s understanding of a
complex situation without the time delay and cost required to build a
computer model and simulation. The steps in the Complexity Manager
technique are the same as the initial steps required to build a computer
model and simulation. These are identification of the relevant variables or
actors, analysis of all the interactions between them, and assignment of
rough weights or other values to each variable or interaction.

Scientists who specialize in the modeling and simulation of complex social
systems report that “the earliest—and sometimes most significant—
insights occur while reducing a problem to its most fundamental players,
interactions, and basic rules of behavior,” and that “the frequency and
importance of additional insights diminishes exponentially as a model is
made increasingly complex.”6 Thus in many cases the Complexity
Manager technique is likely to provide much, although not all, of the
benefit one could gain from computer modeling and simulation, but
without the time lag and contract costs. However, if key variables are
quantifiable with changes that are trackable over time, it would be more
appropriate to use a quantitative modeling technique such as System
Dynamics.

Complexity Manager, like most structured analytic techniques, does not
itself provide analysts with answers. It enables analysts to find a best
possible answer by organizing in a systematic manner the jumble of
information about many relevant variables. It enables analysts to get a grip
on the whole problem, not just one part of the problem at a time. Analysts
can then apply their expertise in making an informed judgment about the
problem. This structuring of the analyst’s thought process also provides
the foundation for a well-organized report that clearly presents the
rationale for each conclusion. This may also lead to some form of visual
presentation, such as a Concept Map or Mind Map, or a causal or influence
diagram.

It takes time to work through the Complexity Manager process, but it may
save time in the long run. This structured approach helps analysts work
efficiently without getting bogged down in the complexity of the problem.
Because it produces a better and more carefully reasoned product, it also
saves time during the editing and coordination processes.

The Method
Complexity Manager requires the analyst to proceed through eight specific
steps:

1. Define the problem: State the problem (plan, goal, outcome) to be
analyzed, including the time period to be covered by the analysis.

2. Identify and list relevant variables: Use one of the brainstorming
techniques described in chapter 5 to identify the significant variables
(factors, conditions, people, etc.) that may affect the situation of interest
during the designated time period. Think broadly to include organizational
or environmental constraints that are beyond anyone’s ability to control. If
the goal is to estimate the status of one or more variables several years in
the future, those variables should be at the top of the list. Group the other
variables in some logical manner with the most important variables at the
top of the list.

3. Create a Cross-Impact Matrix: Create a matrix in which the numbers
of rows and columns each equal the number of variables plus one.
Leaving the cell at the top-left corner of the matrix blank, enter all the
variables in the cells in the row across the top of the matrix and the same
variables in the column down the left side. The matrix then has a cell for
recording the nature of the relationship between all pairs of variables. This
is called a Cross-Impact Matrix—a tool for assessing the two-way
interaction between each pair of variables. Depending on the number of
variables and the length of their names, it may be convenient to use the
variables’ letter designations across the top of the matrix rather than the
full names.

When deciding whether or not to include a variable, or to combine two
variables into one, keep in mind that the number of variables has a
significant impact on the complexity and the time required for an analysis.
If an analytic problem has 5 variables, there are 20 possible two-way
interactions between those variables (n × (n − 1), since each pair is
rated in both directions). That number increases rapidly as the
number of variables increases. With 10 variables, as in Figure 11.7, there
are 90 possible interactions. With 15 variables, there are 210. Complexity
Manager may be impractical with more than 15 variables.

4. Assess the interaction between each pair of variables: Use a diverse
team of experts on the relevant topic to analyze the strength and direction
of the interaction between each pair of variables, and enter the results in
the relevant cells of the matrix. For each pair of variables, ask the
question: Does this variable impact the paired variable in a manner that
will increase or decrease the impact or influence of that variable?

When entering ratings in the matrix, it is best to take one variable at a
time, first going down the column and then working across the row. Note
that the matrix requires each pair of variables to be evaluated twice—for
example, the impact of variable A on variable B and the impact of variable
B on variable A. To record what variables impact variable A, work down
column A and ask yourself whether each variable listed on the left side of
the matrix has a positive or negative influence, or no influence at all, on
variable A. To record the reverse impact of variable A on the other
variables, work across row A to analyze how variable A impacts the
variables listed across the top of the matrix.

Analysts can record the nature and strength of impact that one variable has
on another in two different ways. Figure 11.7 uses plus and minus signs to
show whether the variable being analyzed has a positive or negative
impact on the paired variable. The size of the plus or minus sign signifies
the strength of the impact on a three-point scale. The small plus or minus
sign shows a weak impact, the medium size a medium impact, and the
large size a strong impact. If the variable being analyzed has no impact on
the paired variable, the cell is left empty. If a variable might change in a
way that could reverse the direction of its impact, from positive to negative
or vice versa, this is shown by using both a plus and a minus sign.

The completed matrix shown in Figure 11.7 is the same matrix you will
see in chapter 14, when the Complexity Manager technique is used to
forecast the future of structured analytic techniques. The plus and minus
signs work well for the finished matrix. When first populating the matrix,
however, it may be easier to use letters (P and M for plus and minus) to
show whether each variable has a positive or negative impact on the other
variable with which it is paired. Each P or M is then followed by a number
to show the strength of that impact. A three-point scale is used, with 3
indicating a Strong impact, 2 Medium, and 1 Weak.

Figure 11.7 Variables Affecting the Future Use of Structured Analysis

After rating each pair of variables, and before doing further analysis,
consider pruning the matrix to eliminate variables that are unlikely to have
a significant effect on the outcome. It is possible to measure the relative
significance of each variable by adding up the weighted values in each row
and column. The sum of the weights in each row is a measure of each
variable’s impact on the system as a whole. The sum of the weights in
each column is a measure of how much each variable is affected by all the
other variables. Those variables most impacted by the other variables
should be monitored as potential indicators of the direction in which
events are moving or as potential sources of unintended consequences.
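
This pruning step is easy to mechanize once the ratings are recorded as
signed numbers. The Python sketch below uses hypothetical variables with
weights on the three-point scale described above (positive numbers for
plus signs, negative for minus); it sums magnitudes, on the reading that
a strong negative impact is as significant as a strong positive one.

# Cross-Impact Matrix with signed weights: +/-3 strong, +/-2 medium,
# +/-1 weak, 0 for no impact. m[a][b] is the impact of variable a ON b.
variables = ["A", "B", "C"]
m = {  # hypothetical ratings
    "A": {"A": 0, "B": 2, "C": -1},
    "B": {"A": 3, "B": 0, "C": 1},
    "C": {"A": 0, "B": -2, "C": 0},
}

for v in variables:
    impact_on_system = sum(abs(m[v][x]) for x in variables)    # row sum
    impacted_by_others = sum(abs(m[x][v]) for x in variables)  # column sum
    print(f"{v}: impact on system = {impact_on_system}, "
          f"impacted by others = {impacted_by_others}")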

5. Analyze direct impacts: Write several paragraphs about the impact of
each variable, starting with variable A. For each variable, add a brief
description if further clarification is needed. Identify all the variables that
impact on that variable with a rating of 2 or 3, and briefly explain the
nature, direction, and, if appropriate, the timing of this impact. How strong
is it and how certain is it? When might these impacts be observed? Will
the impacts be felt only in certain conditions? Next, identify and discuss
all variables on which this variable has an impact with a rating of 2 or 3
(strong or medium effect), including the strength of the impact and how
certain it is to occur. Identify and discuss the potentially good or bad side
effects of these impacts.

6. Analyze loops and indirect impacts: The matrix shows only the
direct impact of one variable on another. When you are analyzing the
direct impacts variable by variable, there are several things to look for and
make note of. One is feedback loops. For example, if variable A has a
positive impact on variable B, and variable B also has a positive impact on
variable A, this is a positive feedback loop. Or there may be a three-
variable loop, from A to B to C and back to A. The variables in a loop gain
strength from one another, and this boost may enhance their ability to
influence other variables. Another thing to look for is circumstances where
the causal relationship between variables A and B is necessary but not
sufficient for something to happen. For example, variable A has the
potential to influence variable B, and may even be trying to influence
variable B, but it can do so effectively only if variable C is also present. In
that case, variable C is an enabling variable and takes on greater
significance than it ordinarily would have.
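
Two-variable feedback loops of this kind can be read directly off the
matrix: wherever the ratings in both directions are positive, the pair
reinforces itself. A minimal check, reusing the same hypothetical matrix
as in the earlier sketch:

# Flag mutually reinforcing pairs (two-variable positive feedback loops).
m = {  # hypothetical ratings, as before
    "A": {"A": 0, "B": 2, "C": -1},
    "B": {"A": 3, "B": 0, "C": 1},
    "C": {"A": 0, "B": -2, "C": 0},
}
for a in m:
    for b in m[a]:
        if a < b and m[a][b] > 0 and m[b][a] > 0:  # report each pair once
            print(f"positive feedback loop: {a} <-> {b}")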

All variables are either static or dynamic. Static variables are expected to
remain more or less unchanged during the period covered by the analysis.
Dynamic variables are changing or have the potential to change. The
analysis should focus on the dynamic variables, as these are the sources of
surprise in any complex system. Determining how these dynamic variables
interact with other variables and with each other is critical to any forecast
of future developments. Dynamic variables can be either predictable or
unpredictable. Predictable change includes established trends or
established policies that are in the process of being implemented.
Unpredictable change may be a change in leadership or an unexpected
change in policy or available resources.

7. Draw conclusions: Using data about the individual variables
assembled in steps 5 and 6, draw conclusions about the system as a whole.
What is the most likely outcome, or what changes might be anticipated
during the specified time period? What are the driving forces behind that
outcome? What things could happen to cause a different outcome? What
desirable or undesirable side effects should be anticipated? If you need
help to sort out all the relationships, it may be useful to sketch out by hand
a diagram showing all the causal relationships. A Concept Map (chapter 4)
may be useful for this purpose. If a diagram is helpful during the analysis,
it may also be helpful to the reader or customer to include such a diagram
in the report.

8. Conduct an opportunity analysis: When appropriate, analyze what
actions could be taken to influence this system in a manner favorable to
the primary customer of the analysis.

Relationship to Other Techniques
The same procedures for creating a matrix and coding data can be applied
in using a Cross-Impact Matrix (chapter 5). The difference is that the
Cross-Impact Matrix technique is used only to identify and share
information about the cross-impacts in a group or team exercise. The goal
of Complexity Manager is to build on the Cross-Impact Matrix to analyze
the working of a complex system.

Use a form of Scenarios Analysis rather than Complexity Manager when
the future is highly uncertain and the goal is to identify alternative futures
and indicators that will provide early warning of the direction in which
future events are headed. Use a computerized modeling system such as
System Dynamics rather than Complexity Manager when changes over
time in key variables can be quantified or when there are more than fifteen
variables to be considered.7

Origins of This Technique
Complexity Manager was developed by Richards Heuer to fill an
important gap in structured techniques that are available to the average
analyst. It is a very simplified version of older quantitative modeling
techniques, such as System Dynamics.

1. See Graham T. Allison and Philip Zelikow, Essence of Decision:
Explaining the Cuban Missile Crisis, 2nd ed. (New York: Addison-
Wesley, 1999).

2. Kurt Lewin, Resolving Social Conflicts: Selected Papers on Group
Dynamics (New York: Harper and Row, 1948).

3. Heinz Weihrich, “The TOWS Matrix—A Tool for Situational
Analysis,” Long Range Planning 15, no. 2 (April 1982): 54–66.

4. Seth Lloyd, a specialist in complex systems, has listed thirty-two
definitions of complexity. See Seth Lloyd, Programming the Universe
(New York: Knopf, 2006).

5. Dietrich Dorner, The Logic of Failure (New York: Basic Books, 1996).

6. David S. Dixon and William N. Reynolds, “The BASP Agent-Based
Modeling Framework: Applications, Scenarios, and Lessons Learned,”
Hawaii International Conference on System Sciences, 2003,
www2.computer.org/portal/web/csdl/doi/10.1109/HICSS.2003.1174225.
Also see Donella H. Meadows and J. M. Robinson, The Electronic
Oracle: Computer Models and Social Decisions (New York: Wiley, 1985).

7. John Sterman, Business Dynamics: Systems Thinking and Modeling for
a Complex World (New York: McGraw Hill, 2000).

12 Practitioner’s Guide to Collaboration

12.1 Social Networks and Analytic Teams [ 323 ]
12.2 Dividing the Work [ 327 ]
12.3 Common Pitfalls with Small Groups [ 330 ]
12.4 Benefiting from Diversity [ 331 ]
12.5 Advocacy versus Objective Inquiry [ 333 ]
12.6 Leadership and Training [ 335 ]

Analysis in the intelligence profession—and many comparable disciplines
—is evolving from being predominantly an analytic activity by a single
analyst to becoming a collaborative group activity. The increased use of
structured analytic techniques is one of several factors spurring this
transition to more collaborative work products.

This chapter starts with some practical guidance on how to take advantage
of the collaborative environment while preventing or avoiding the many
well-known problems associated with small-group processes. It then goes
on to describe how structured analytic techniques provide the process by
which collaboration becomes most effective. Many things change when
the internal thought process of analysts is externalized in a transparent
manner so that evidence is shared early and differences of opinion can be
shared, built on, and easily critiqued by others.

The rapid growth of social networks across organizational boundaries and
the increased geographic distribution of their members are changing how
analysis needs to be done—within the intelligence profession and even
more so in business. In this chapter we identify three different types of
groups that engage in analysis—two types of teams and a group described
here as a “social network.”

We recommend that analysis increasingly be done in two phases: an initial,
divergent analysis phase conducted by a geographically distributed social
network, and a convergent analysis phase and final report done by a small
analytic team. We then review a number of problems known to impair the
performance of teams and small groups, and conclude with some practical
measures for limiting the occurrence of such problems.

12.1 Social Networks and Analytic Teams
Teams and groups can be categorized in several ways. When the purpose
of the group is to generate an analytic product, it seems most useful to deal
with three types: the traditional analytic team, the special project team, and
teams supported by social networks. Traditional teams are usually
colocated and focused on a specific task. Special project teams are most
effective when their members are colocated or working in a synchronous
virtual world (similar to Second Life). Analytic teams supported by social
networks can operate effectively in colocated, geographically distributed,
and synchronous as well as asynchronous modes. These three types of
groups differ in the nature of their leadership, frequency of face-to-face
and virtual world meetings, breadth of analytic activity, and amount of
time pressure under which they work.1

✶ Traditional analytic team: This is the typical work team assigned
to perform a specific task. It has a leader appointed by a manager or
chosen by the team, and all members of the team are collectively
accountable for the team’s product. The team may work jointly to
develop the entire product, or each team member may be responsible
for a specific section of the work. Historically, in the U.S.
Intelligence Community many teams were composed of analysts from
a single agency, and involvement of other agencies was through
coordination during the latter part of the process rather than
collaboration from the beginning. That way is now changing as a
consequence of changes in policy and easier access to secure
interagency communications and collaboration software. Figure 12.1a
shows how the traditional analytic team works. The core analytic
team, with participants usually working at the same office, drafts a
paper and sends it to other members of the community for comment
and coordination. Ideally, the core team will alert other stakeholders
in the community of their intent to write on a specific topic; but, too
often, such dialogue occurs much later, when they are coordinating
the draft. In most cases, specific permissions are required, or
established procedures must be followed to tap the expertise of
experts outside the office or outside the government.
✶ Special project team: Such a team is usually formed to provide
decision makers with near–real time analytic support during a crisis
or an ongoing operation. A crisis support task force or field-deployed
interagency intelligence team that supports a military operation
exemplifies this type of team. Members typically are located in the
same physical office space or are connected by video
communications. There is strong team leadership, often with close
personal interaction among team members. Because the team is
created to deal with a specific situation, its work may have a narrower
focus than a social network or regular analytic team, and its duration
may be limited. There is usually intense time pressure, and around-
the-clock operation may be required. Figure 12.1b is a diagram of a
special project team.
✶ Social networks: Experienced analysts have always had their own
network of experts in their field or related fields with whom they
consult from time to time and whom they may recruit to work with
them on a specific analytic project. Social networks are critical to the
analytic business. They do the day-to-day monitoring of events,
produce routine products as needed, and may recommend the
formation of a more formal analytic team to handle a specific project.
The social network is the form of group activity that is now changing
dramatically with the growing ease of cross-agency secure
communications and the availability of collaborative software. Social
networks are expanding exponentially across organization
boundaries. The term “social network,” as used here, includes all
analysts working anywhere in the world on a particular country, such
as Brazil; on an issue, such as the development of chemical weapons;
or on a consulting project in the business world. It can be limited to a
small group with special clearances or comprise a broad array of
government, business, nongovernment organization (NGO), and
academic experts. The network can be located in the same office, in
different buildings in the same metropolitan area, or increasingly at
multiple locations around the globe.

Figure 12.1a Traditional Analytic Team

Source: 2009 Pherson Associates, LLC.

Figure 12.1b Special Project Team

Source: 2009 Pherson Associates, LLC.

The key problem that arises with social networks is the geographic
distribution of their members. Even within the Washington, D.C.
metropolitan area, distance is a factor that limits the frequency of face-to-
face meetings, particularly as traffic congestion becomes a growing
nightmare and air travel an unaffordable expense. From their study of
teams in diverse organizations, which included teams in the U.S.
Intelligence Community, Richard Hackman and Anita Woolley came to
this conclusion:

Distributed teams do relatively well on innovation tasks for which
ideas and solutions need to be generated . . . but generally
underperform face-to-face teams on decision-making tasks. Although
decision-support systems can improve performance slightly,
decisions made from afar still tend to take more time, involve less
exchange of information, make error detection and correction more
difficult, and can result in less participant satisfaction with the
outcome than is the case for face-to-face teams. . . . In sum,
distributed teams are appropriate for many, but not all, team tasks.
Using them well requires careful attention to team structure, a face-
to-face launch when members initially come together, and leadership
support throughout the life of the team to keep members engaged and
aligned with collective purposes.2

Research on effective collaborative practices has shown that
geographically and organizationally distributed teams are most likely to
succeed when they satisfy six key imperatives. Participants must do the
following:

✶ Know and trust one another; this usually requires that they meet
face to face at least once.
✶ Feel a personal need to engage the group in order to perform a
critical task.
✶ Derive mutual benefits from working together.
✶ Connect with one another virtually on demand and easily add new
members.
✶ Perceive incentives for participating in the group, such as saving
time, gaining new insights from interaction with other knowledgeable
analysts, or increasing the impact of their contribution.
✶ Share a common understanding of the problem with agreed lists of
common terms and definitions.3

12.2 Dividing the Work
Managing the geographic distribution of the social network can be
addressed by dividing the analytic task into two parts—first, exploiting the
strengths of the social network for divergent or creative analysis to identify
ideas and gather information; and second, forming a small analytic team
that employs convergent analysis to meld these ideas into an analytic
product. When the draft is completed, it goes back for review to all
members of the social network who contributed during the first phase of
the analysis, and then back to the team to edit and produce the final paper.

Structured analytic techniques and collaborative software facilitate this
two-part approach to analysis nicely. A series of basic techniques used for
divergent analysis early in the analytic process works well for a
geographically distributed social network communicating online, often via
a wiki. This provides a solid foundation for the smaller analytic team to do
the subsequent convergent analysis. In other words, each type of group
performs the type of task for which it is best qualified. This process is
applicable to most analytic projects. Figure 12.2 shows how it can work.

Figure 12.2 Wikis as Collaboration Enablers

Source: 2009 Pherson Associates, LLC.

A project leader informs a social network of an impending project and
provides a tentative project description, target audience, scope, and process
to be followed. The leader also gives the name of the wiki or the
collaborative virtual workspace to be used and invites interested analysts
knowledgeable in that area to participate. Any analyst with access to the
collaborative network also is authorized to add information and ideas to it.
Any or all of the following techniques, as well as others, may come into
play during the divergent analysis phase as specified by the project leader:

✶ Issue Redefinition, as described in chapter 4.
✶ Collaboration in sharing and processing data using other
techniques, such as timelines, sorting, networking, mapping, and
charting, as described in chapter 4.
✶ Some form of brainstorming, as described in chapter 5, to generate
a list of driving forces, variables, players, etc.
✶ Ranking or prioritizing this list, as described in chapter 4.
✶ Putting this list into a Cross-Impact Matrix, as described in chapter
5, and then discussing and recording in the wiki the relationship, if
any, between each pair of driving forces, variables, or players in that
matrix.
✶ Developing a list of alternative explanations or outcomes
(hypotheses) to be considered, as described in chapter 7.
✶ Developing a list of relevant information to be considered when
evaluating these hypotheses, as described in chapter 7.
✶ Doing a Key Assumptions Check, as described in chapter 8. This
actually takes less time using a synchronous collaborative virtual
workspace than when done in a face-to-face meeting, and is highly
recommended to learn the network’s thinking about key assumptions.

Most of these steps involve making lists, which can be done quite
effectively in a virtual environment. Making such input online in a chat
room or on a wiki can be even more productive than a face-to-face
meeting, because analysts have more time to think about and write up their
thoughts. They can look at their contribution over several days and make
additions or changes as new ideas come to them.

The process should be overseen and guided by a project leader. In addition
to providing a sound foundation for further analysis, this process enables
the project leader to identify the best analysts to be included in the smaller
team that conducts the second phase of the project—making analytic
judgments and drafting the report. Team members should be selected to
maximize the following criteria: level of expertise on the subject, level of
interest in the outcome of the analysis, and diversity of opinions and
thinking styles among the group. The action then moves from the social
network to a small, trusted team (preferably no larger than eight analysts)
to complete the project, perhaps using other techniques such as Analysis of
Competing Hypotheses or What If? Analysis. At this stage in the process,
the use of virtual collaborative software is usually more efficient than face-
to-face meetings. Randy Pherson has found that the use of avatars that
meet in a virtual conference room is particularly effective.4 Software used
for exchanging ideas and revising text should allow for privacy of
deliberations and provide an audit trail for all work done.

The draft report is best done by a single person. That person can work
from other team members’ inputs, but the report usually reads better if it is
crafted in one voice. As noted earlier, the working draft should be
reviewed by those members of the social network who participated in the
first phase of the analysis.

12.3 Common Pitfalls with Small Groups
As more analysis is done collaboratively, the quality of intelligence
products is increasingly influenced by the success or failure of small-group
processes. The various problems that afflict small-group processes are well
known and have been the subject of considerable research.5 One might
reasonably be concerned that more collaboration will just mean more
problems, more interagency battles. However, as we explain here, it turns
out that the use of structured analytic techniques frequently helps analysts
avoid many of the common pitfalls of the small-group process.

Some group process problems are obvious to anyone who has participated
in trying to arrive at decisions or judgments in a group meeting. Guidelines
for how to run meetings effectively are widely available, but many group
leaders fail to follow them.6 Key individuals are absent or late, and
participants are unprepared. Meetings are often dominated by senior
members or strong personalities, while some participants are reluctant to
speak up or to express their true beliefs. Discussion can get stuck on
several salient aspects of a problem, rather than covering all aspects of the
subject. Decisions may not be reached, and if they are reached, may wind
up not being implemented. Such problems are often greatly magnified
when the meeting is conducted virtually, over telephones or computers.

Academic studies show that “the order in which people speak has a
profound effect on the course of a discussion. Earlier comments are more
influential, and they tend to provide a framework within which the
discussion occurs.”7 Once that framework is in place, discussion tends to
center on that framework, to the exclusion of other options.

Much research documents that the desire for consensus is an important
cause of poor group decisions. Development of a group consensus is
usually perceived as success, but in reality, it is often indicative of failure.
Premature consensus is one of the more common causes of suboptimal
group performance. It leads to failure to identify or seriously consider
alternatives, failure to examine the negative aspects of the preferred
359
preferred position is wrong.8 This phenomenon is what is commonly
called “groupthink.”

Other problems that are less obvious but no less significant have been
documented extensively by academic researchers. Often, some reasonably
satisfactory solution is proposed on which all members can agree, and the
discussion is ended without further search to see if there may be a better
answer. Such a decision often falls short of the optimum that might be
achieved with further inquiry. Another phenomenon, known as group
“polarization,” leads in certain predictable circumstances to a group
decision that is more extreme than the average group member’s view prior
to the discussion. “Social loafing” is the phenomenon whereby people
working in a group often expend less effort than they would if working
alone to accomplish the same task. In any of these situations, the
result is often an inferior product that suffers from a lack of analytic rigor.

If you had to identify, in one word, the reason that the human race
has not achieved, and never will achieve, its full potential, that
word would be meetings.

Dave Barry, American humorist

12.4 Benefiting from Diversity


Improvement of group performance requires an understanding of these
problems and a conscientious effort to avoid or mitigate them. The
literature on small-group performance is virtually unanimous in
emphasizing that groups make better decisions when their members bring
to the table a diverse set of ideas, opinions, and perspectives. What
premature closure, groupthink, and polarization all have in common is a
failure to recognize assumptions, to work from a common lexicon, and to
adequately identify and consider alternative points of view.

Laboratory experiments have shown that even a single dissenting opinion, all by itself, makes a group’s decisions more nuanced and its decision-making process more rigorous.9 “The research also shows that these benefits from dissenting opinions occur regardless of whether or not the dissenter is correct. The dissent stimulates a reappraisal of the situation and identification of options that otherwise would have gone undetected.”10 To be effective, however, dissent must be genuine—not generated artificially, as in some applications of the Devil’s Advocacy technique.11

Small, distributed asynchronous groups are particularly good at generating and evaluating lists of assumptions, indicators, drivers, potential
explanations of current events, or potential outcomes. They are also good
for making lists of pros and cons on any particular subject. With the aid of
distributed group support software, the group can then categorize items on
such a list or prioritize, score, rank, scale, or vote on them. For such tasks,
a distributed asynchronous meeting may be more productive than a
traditional face-to-face meeting. That is because analysts have more time
to think about their input; they are able to look at their contribution over
several days and make additions or changes as additional ideas come to
them; and if rank or position of some group members is likely to have an
undue influence, arrangements can be made for all group members to
make their input anonymously.
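As a minimal illustration of the kind of aggregation such group support software performs, the following Python sketch combines anonymous rankings from several analysts into a single group ordering using a Borda count, one simple scheme among the many that real platforms offer. The items and rankings are hypothetical; the principle, each member ranks privately and the software aggregates, is the same.

from collections import defaultdict

def borda_count(rankings):
    """Aggregate anonymous ranked lists into one group ordering.

    Each ranking lists items best first. An item receives (n - position)
    points from each list of length n; the group ordering is by total
    points, highest first.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical anonymous inputs: three analysts rank four candidate drivers.
rankings = [
    ["economic stress", "leadership change", "external pressure", "protest activity"],
    ["leadership change", "economic stress", "protest activity", "external pressure"],
    ["economic stress", "protest activity", "leadership change", "external pressure"],
]
for item, score in borda_count(rankings):
    print(f"{score:2d}  {item}")

Because the inputs carry no names, rank and personality carry no weight in the aggregated result.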

Briefly, then, the route to better analysis is to create small groups of analysts who are strongly encouraged by their leader to speak up and
express a wide range of ideas, opinions, and perspectives. The use of
structured analytic techniques generally ensures that this happens. These
techniques guide the dialogue between analysts as they share evidence and
alternative perspectives on the meaning and significance of the evidence.
Each step in the technique prompts relevant discussion within the team,
and such discussion can generate and evaluate substantially more
divergent information and new ideas than can a group that does not use
such a structured process.

With any heterogeneous group, this reduces the risk of premature closure,
groupthink, and polarization. Use of a structured technique also sets a clear
step-by-step agenda for any meeting where that technique is used. This
makes it easier for a group leader to keep a meeting on track to achieve its
goal.12

The same procedures can be used either on classified systems or with outside experts on an unclassified network. Open-source information has
rapidly come to play a larger role in intelligence analysis than in the past. Distributed asynchronous collaboration followed by distributed
synchronous collaboration that uses some of the basic structured
techniques is one of the best ways to tap the expertise of a group of
knowledgeable individuals. The Delphi Method, discussed in chapter 9, is
one well-known method for accomplishing the first phase, and
collaborative, synchronous virtual worlds with avatars are showing great
promise for optimizing work done in phase two.

12.5 Advocacy versus Objective Inquiry


The desired diversity of opinion is, of course, a double-edged sword, as it
can become a source of conflict that degrades group effectiveness.13 It is
not easy to introduce true collaboration and teamwork into a community
with a history of organizational rivalry and mistrust. Analysts must engage
in inquiry, not advocacy, and they must be critical of ideas but not critical
of people.

In a task-oriented team environment, advocacy of a specific position can lead to emotional conflict and reduced team effectiveness. Advocates tend
to examine evidence in a biased manner, accepting at face value
information that seems to confirm their own point of view and subjecting
any contrary evidence to highly critical evaluation. Advocacy is
appropriate in a meeting of stakeholders that one is attending for the
purpose of representing a specific interest. It is also “an effective method
for making decisions in a courtroom when both sides are effectively
represented, or in an election when the decision is made by a vote of the
people.”14 However, it is not an appropriate method of discourse within a
team “when power is unequally distributed among the participants, when
information is unequally distributed, and when there are no clear rules of
engagement—especially about how the final decision will be made.”15 An
effective resolution may be found only through the creative synergy of
alternative perspectives.

Figure 12.5 displays the differences between advocacy and the objective
inquiry expected from a team member or a colleague.16 When advocacy
leads to emotional conflict, it can lower team effectiveness by provoking
hostility, distrust, cynicism, and apathy among team members. On the
other hand, objective inquiry, which often leads to cognitive conflict, can
lead to new and creative solutions to problems, especially when it occurs in an atmosphere of civility, collaboration, and common purpose. Several
effective methods for managing analytic differences are described in
chapter 10.

Figure 12.5 Advocacy versus Inquiry in Small-Group Processes

Source: 2009 Pherson Associates, LLC.

A team or group using structured analytic techniques is believed to be less vulnerable to these group process traps than is a comparable group doing
traditional analysis, because the techniques move analysts away from
advocacy and toward inquiry. This idea has not yet been tested and
demonstrated empirically, but the rationale is clear. As we have stated
repeatedly throughout the eight chapters on structured techniques, these
techniques work best when an analyst is collaborating with a small group
of other analysts. Just as these techniques provide structure to our
individual thought processes, they play an even stronger role in guiding the
interaction of analysts within a small team or group.17

Some techniques, such as the Key Assumptions Check, Analysis of Competing Hypotheses (ACH), and Argument Mapping, help analysts
gain a clear understanding of how and exactly why they disagree. For
example, many CIA and FBI analysts report that their preferred use of
ACH is to gain a better understanding of the differences of opinion
between them and other analysts or between analytic offices. The process
of creating an ACH matrix requires identifying the evidence and arguments being used and determining how these are interpreted as either
consistent or inconsistent with the various hypotheses. Review of this
matrix provides a systematic basis for identification and discussion of
differences between two or more analysts. CIA and FBI analysts also note
that referring to the matrix helps to depersonalize the argumentation when
there are differences of opinion.18 In other words, ACH can help analysts
learn from their differences rather than fight over them, and other
structured techniques do this as well.
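To make those mechanics concrete, here is a minimal Python sketch of an ACH-style consistency matrix. The hypotheses, evidence items, and ratings are invented for illustration; the point is that once each piece of evidence is scored against every hypothesis, two analysts’ matrices can be compared cell by cell to locate exactly where their readings of the evidence diverge.

# Hypothetical ACH-style matrices: rows are evidence items, columns are
# hypotheses, cells are "C" (consistent), "I" (inconsistent), or "NA"
# (not applicable). All names and ratings are illustrative only.
HYPOTHESES = ["H1: deliberate act", "H2: accident", "H3: deception"]

analyst_a = {
    "E1: forensic report":    ["C", "I", "C"],
    "E2: intercepted call":   ["C", "NA", "I"],
    "E3: eyewitness account": ["I", "C", "C"],
}
analyst_b = {
    "E1: forensic report":    ["C", "I", "I"],
    "E2: intercepted call":   ["C", "NA", "I"],
    "E3: eyewitness account": ["C", "C", "C"],
}

def inconsistency_counts(matrix):
    """ACH focuses on refutation: tally 'I' ratings per hypothesis."""
    return [sum(row[j] == "I" for row in matrix.values())
            for j in range(len(HYPOTHESES))]

def disagreements(m1, m2):
    """Return the (evidence, hypothesis) cells where two analysts differ."""
    return [(e, HYPOTHESES[j])
            for e in m1
            for j in range(len(HYPOTHESES))
            if m1[e][j] != m2[e][j]]

print("Analyst A inconsistency counts:", inconsistency_counts(analyst_a))
print("Analyst B inconsistency counts:", inconsistency_counts(analyst_b))
for evidence, hypothesis in disagreements(analyst_a, analyst_b):
    print(f"Discuss: {evidence} vs. {hypothesis}")

Reviewing only the cells where the ratings differ keeps the conversation anchored to the evidence rather than to personalities, which is the depersonalizing effect the analysts describe.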

12.6 Leadership and Training


Considerable research on virtual teaming shows that leadership
effectiveness is a major factor in the success or failure of a virtual team.19
Although leadership usually is provided by a group’s appointed leader, it
can also emerge as a more distributed peer process and is greatly aided by
the use of a trained facilitator (see Figure 12.6). When face-to-face contact
is limited, leaders, facilitators, and team members must compensate by
paying more attention than they might otherwise devote to the following
tasks:

✶ Articulating a clear mission, goals, specific tasks, and procedures for evaluating results.
✶ Defining measurable objectives with milestones and timelines for
achieving them.
✶ Establishing a common lexicon.
✶ Identifying clear and complementary roles and responsibilities.
✶ Building relationships with and among team members and with
stakeholders.
✶ Agreeing on team norms and expected behaviors.
✶ Defining conflict resolution procedures.
✶ Developing specific communication protocols and practices.20

As illustrated in Figure 12.6, the interactions among the various types of team participants—whether analyst, leader, facilitator, or technologist—
are as important as the individual roles played by each. For example,
analysts on a team will be most effective not only when they have subject-
matter expertise or knowledge that lends a new viewpoint, but also when
the rewards for their participation are clearly defined by their manager.
Likewise, a facilitator’s effectiveness is greatly increased when the goals, timeline, and general focus of the project are agreed to with the leader in
advance. When roles and interactions are explicitly defined and
functioning, the group can more easily turn to the more challenging
analytic tasks at hand.

Figure 12.6 Effective Small-Group Roles and Interactions

Source: 2009 Pherson Associates, LLC.

As greater emphasis is placed on intra- and interoffice collaboration, and more work is done through computer-mediated communications, it
becomes increasingly important that analysts be trained in the knowledge,
skills, and abilities required for facilitation and management of both face-
to-face and virtual meetings, with a strong emphasis on conflict
management during such meetings. Training is more effective when it
occurs just before the skills and knowledge must be put to use. Ideally, it
should be fully integrated into the work process, with instructors acting in
the roles of coaches, mentors, and facilitators.

Multi-agency or Intelligence Community–wide training programs of this sort could provide substantial support to interagency collaboration and the
formation of virtual teams. Whenever a new interagency or virtual team or
work group is formed, all members should have received the same training
in understanding the pitfalls of group processes, performance expectations,
standards of conduct, and conflict resolution procedures. Standardization
of this training across multiple organizations or agencies would accelerate
the development of a shared experience and culture and reduce start-up
time for any new interagency group.

1. This chapter was inspired by and draws on the research done by the
Group Brain Project at Harvard University. That project was supported by
the National Science Foundation and the CIA Intelligence Technology
Innovation Center. See in particular J. Richard Hackman and Anita W.
Woolley, “Creating and Leading Analytic Teams,” Technical Report 5
(February 2007), http://groupbrain.wjh.harvard.edu/publications.html.

2. Ibid., 8.

3. Randolph H. Pherson and Joan McIntyre, “The Essence of Collaboration: The IC Experience,” in Scientific Underpinnings of
“Collaboration” in the National Security Arena: Myths and Reality—What
Science and Experience Can Contribute to Its Success, ed. Nancy Chesser
(Washington, DC: Strategic Multi-Layer Assessment Office, Office of the
Secretary of Defense, Director of Defense Research and
Engineering/Rapid Reaction Technology Office, June 2009).

4. Randy Pherson’s company, Globalytica, LLC, has had notable success using its avatar-based collaborative platform, TH!NK LIVE®, to conduct
synchronous training and mentor analysts in the use of structured analytic
techniques. The software is easy to use, quick to load on any computer,
and relatively inexpensive. For more information about the system, contact
[email protected].

5. For example, Paul B. Paulus and Bernard A. Nijstad, Group Creativity: Innovation through Collaboration (New York: Oxford University Press,
2003).

6. J. Scott Armstrong, “How to Make Better Forecasts and Decisions: Avoid Face-to-Face Meetings,” Foresight 5 (Fall 2006).

7. James Surowiecki, The Wisdom of Crowds (New York: Doubleday, 2004), 184.

8. Charlan J. Nemeth and Brendan Nemeth-Brown, “Better Than Individuals? The Potential Benefits of Dissent and Diversity for Group
Creativity,” in Group Creativity, ed. Paulus and Nijstad, 63–64.

9. Surowiecki, The Wisdom of Crowds, 183–184.

10. Nemeth and Nemeth-Brown, “Better Than Individuals?,” 73.

11. Ibid., 76–78.

12. This paragraph and the previous paragraph express the authors’
professional judgment based on personal experience and anecdotal
evidence gained in discussion with other experienced analysts. As
discussed in chapter 13, there is a clear need for systematic research on
this topic and other variables related to the effectiveness of structured
analytic techniques.

13. Frances J. Milliken, Caroline A. Bartel, and Terri R. Kurtzberg, “Diversity and Creativity in Work Groups,” in Group Creativity, ed.
Paulus and Nijstad, 33.

14. Martha Lagace, “Four Questions for David Garvin and Michael
Roberto,” Working Knowledge: A First Look at Faculty Research, Harvard
Business School weekly newsletter, October 15, 2001,
http://hbswk.hbs.edu/item/3568.html.

15. Ibid.

16. The table is from David A. Garvin and Michael A. Roberto, “What
You Don’t Know About Making Decisions,” Working Knowledge: A First
Look at Faculty Research, Harvard Business School weekly newsletter,
October 15, 2001, http://hbswk.hbs.edu/item/2544.html.

17. This paragraph expresses our professional judgment based on personal experience and anecdotal evidence gained in discussion with other
experienced analysts. As we discuss in chapter 13, there is a clear need for
systematic research on this topic and other variables related to the
effectiveness of structured analytic techniques.

18. This information was provided by two senior educators in the Intelligence Community.

19. Jonathan N. Cummings, “Leading Groups from a Distance: How to Mitigate Consequences of Geographic Dispersion,” in Leadership at a
Distance: Research in Technologically-Supported Work, ed. Susan
Weisband (New York: Routledge, 2007).

20. Sage Freechild, “Team Building and Team Performance Management.” Originally online at www.phoenixrisingcoaching.com. This article is no longer available online.

13 Validation of Structured Analytic Techniques

13.1 Limits of Empirical Analysis
13.2 Establishing Face Validity
13.3 A Program for Empirical Validation
13.4 Recommended Research Program

The authors of this book are often asked for proof of the effectiveness of
structured analytic techniques. The testing of these techniques in the U.S.
Intelligence Community has been done largely through the experience of
using them. That experience has certainly been successful, but it has not
been sufficient to persuade skeptics reluctant to change their long-
ingrained habits or many academics accustomed to looking for hard,
empirical evidence. The question to be addressed in this chapter is how to
go about testing or demonstrating the effectiveness of structured analytic
techniques not just in the intelligence profession, but in other disciplines as
well. The same process should also be applicable to the use of structured
techniques in business, medicine, and many other fields that consistently
deal with probabilities and uncertainties rather than hard data.

13.1 Limits of Empirical Analysis


Findings from empirical experiments can be generalized to apply to
intelligence analysis or any other specific field only if the test conditions
match relevant conditions in which the analysis is conducted. Because
there are so many variables that can affect the research results, it is
extremely difficult to control for all or even most of them. These variables
include the purpose for which a technique is used, implementation
procedures, context of the experiment, nature of the analytic task,
differences in analytic experience and skill, and whether the analysis is
done by a single analyst or as a group process. All of these variables affect
the outcome of any experiment that ostensibly tests the utility of an
analytic technique. In a number of available examples of empirical
research on structured analytic techniques, we identified serious questions
about the applicability of the research findings to intelligence analysis.

Two factors raise questions about the practical feasibility of valid
empirical testing of structured analytic techniques as used in intelligence.
First, these techniques are commonly used as a group process. That would
require testing with groups of analysts rather than individual analysts.
Second, intelligence deals with issues of high uncertainty. Former CIA
director Michael Hayden wrote that because of the inherent uncertainties
in intelligence analysis, a record of 70 percent accuracy is a good
performance.1 If this is true, a single experiment in which the use of a
structured technique leads to a wrong answer is meaningless. Multiple
repetitions of the same experiment would be needed to evaluate how often
the analytic judgments were accurate.

Many problems could largely be resolved if experiments were conducted with intelligence analysts using techniques as they are used every day to
analyze typical intelligence issues.2 But even if such conditions were met,
major obstacles to meaningful conclusions would still remain. For
example, many structured analytic techniques can be used for several
different purposes, and research findings on the effectiveness of these
techniques can be generalized and applied to the Intelligence Community
only if the technique is used in the same way and for the same purpose as
actually used in the Intelligence Community.

For example, Philip Tetlock, in his outstanding, pathbreaking book, Expert Political Judgment, describes two experiments that show scenario
development may not be an effective analytic technique. The experiments
compared judgments on a political issue before and after the test subjects
prepared scenarios in an effort to gain a better understanding of the
issues.3 The experiments showed that the predictions by both experts and
nonexperts were more accurate before generating the scenarios; in other
words, the generation of scenarios actually reduced the accuracy of their
predictions. Several experienced analysts have separately cited this finding
as evidence that scenario development may not be a useful method for
intelligence analysis.4

However, Tetlock’s conclusions should not be generalized to apply to intelligence analysis, as those experiments tested scenarios as a predictive
tool. The Intelligence Community does not use scenarios for prediction.
The purpose of scenario development is to describe several outcomes or
futures that a decision maker should consider because intelligence is
unable to predict a single outcome with reasonable certainty. Another major purpose of scenario development is to identify indicators and
milestones for each potential scenario. The indicators and milestones can
then be monitored to gain early warning of the direction in which events
seem to be heading. Tetlock’s experiments did not use scenarios in this
way.

13.2 Establishing Face Validity


We believe the easiest way to assess the value of structured analytic
techniques is to look at the purpose for which a technique is being used.
Once that is established, the next step is to determine whether it actually
achieves that purpose or some better way exists to achieve that same
purpose. This book has eight chapters of techniques. The introduction to
each chapter discusses the rationale for using the type of techniques in that
chapter. How does using this type of technique help the analyst do a better
job? For each individual technique, the book describes not only how to use
that specific technique, but also when to use it and the value added by
using it. In other words, each structured analytic technique has what is
called face validity, because there is reason to expect that it will help to
mitigate or avoid a type of problem that sometimes occurs when one is
doing intelligence analysis.

The following paragraphs provide examples of the face validity of several structured techniques, i.e., explanations for how and why these analytic
techniques are useful. A great deal of research in human cognition during
the past sixty years shows the limits of working memory and suggests that
one can manage a complex problem most effectively by breaking it down
into smaller pieces. That is, in fact, the dictionary definition of “analysis,”5
and that is what all the techniques that involve making a list, tree, matrix,
diagram, map, or model do. It is reasonable to expect, therefore, that an
analyst who uses such tools for organization or visualization of
information will in many cases do a better job than an analyst who does
not.

Similarly, much empirical evidence suggests that the human mind tends to
see what it is looking for and often misses what it is not looking for. This
is why it is useful to develop scenarios and indicators of possible future
events for which intelligence needs to provide early warning. These
techniques can help collectors target needed information; and, for analysts, they prepare the mind to recognize the early signs of significant change.

“Satisficing” is the term Herbert Simon invented to describe the act of selecting the first identified alternative that appears “good enough” rather
than evaluating all the likely alternatives and identifying the best one (see
chapter 7). Satisficing is a common analytic shortcut that people use in
making everyday decisions when there are multiple possible answers. It
saves a lot of time when you are making judgments or decisions of little
consequence, but it is ill-advised when making judgments or decisions
with significant consequence for national security. It seems self-evident
that an analyst who deliberately identifies and analyzes alternative
hypotheses before reaching a conclusion is more likely to find a better
answer than an analyst who does not.

Given the necessary role that assumptions play when making intelligence
judgments based on incomplete and ambiguous information, it seems
likely that an analyst who uses the Key Assumptions Check will, on
average, do a better job than an analyst who makes no effort to identify
and validate assumptions. There is also extensive empirical evidence that
reframing a question helps to unblock the mind and enables one to see
other perspectives.

The empirical research on small-group performance is virtually unanimous in emphasizing that groups make better decisions when their members
bring to the table a diverse set of ideas, experiences, opinions, and
perspectives.6 Looking at these research findings, one may conclude that
the use of any structured technique in a group process is likely to improve
the quality of analysis, as compared with analysis by a single individual
using that technique or by a group that does not use any structured process
for eliciting divergent ideas or opinions.

The experience of U.S. Intelligence Community analysts using the Analysis of Competing Hypotheses (ACH) software and similar computer-
aided analytic tools provides anecdotal evidence to support this
conclusion. One of the goals in using ACH is to gain a better
understanding of the differences of opinion with other analysts or between
analytic offices.7 The creation of an ACH matrix requires step-by-step
discussion of evidence and arguments being used and deliberation about
how these are interpreted as either consistent or inconsistent with each of
the hypotheses. This process takes time, but many analysts believe it is time well spent; they say it saves time in the long run once they have
learned this technique.

These experiences in teaching ACH to intelligence analysts illustrate how structured techniques can elicit significantly more divergent information
when used as a group process. Intelligence and law enforcement analysts
consider this group discussion the most valuable part of the ACH process.
Use of structured techniques does not guarantee a correct judgment, but
this anecdotal evidence suggests that these techniques do make a
significant contribution to better analysis.

13.3 A Program for Empirical Validation


Section 13.1, on the Limits of Empirical Analysis, showed that empirical
analysis of the accuracy of structured techniques is extremely difficult, if
not impossible, to achieve. We now assert that it is also inappropriate even
to try. Structured analytic techniques do not, by themselves, predict the
future; nor do they guarantee the absence of surprise. When an analytic
judgment turns out to be wrong, there is seldom any basis for knowing
whether the error should be attributed to the structured technique, to the
analyst using that technique, or to the inherent uncertainty of the situation.

We believe the most feasible and effective approach for the empirical
evaluation of structured analytic techniques is to look at the purpose for
which a technique is being used, and then test whether or not it actually
achieves that purpose or if the purpose can be achieved in some better
way. This section lays out a program for empirical documentation of the
value gained by the application of structured analytic techniques. This
approach is embedded in the reality of how analysis is actually done in the
U.S. Intelligence Community. We then show how this approach might be
applied to the analysis of three specific techniques.

Step 1 is to agree on the most common causes of analytic error and which
structured techniques are most useful in preventing such errors.

Step 2 is to identify what we know, or think we know, about the benefits from using a particular structured technique. This is the face validity, as
described earlier in this chapter, plus whatever analysts believe they have
learned from frequent use of a technique. For example, we think we know
that ACH provides multiple benefits that help produce a better intelligence product. It does this by:

✶ Driving a much broader search for information than analysts would otherwise pursue. The focus on rejecting hypotheses casts a
broader perspective on what information the analyst seeks and
considers most valuable.
✶ Requiring analysts to start by developing a full set of alternative
hypotheses. This process reduces the risk of what is called
“satisficing”—going with the first answer that comes to mind that
seems to meet the need. It ensures that all reasonable alternatives are
considered before the analyst gets locked into a preferred conclusion.
✶ Prompting analysts to try to refute hypotheses rather than support
a single hypothesis. This helps analysts overcome the tendency to
search for or interpret new information in a way that confirms their
preconceptions and neglect information and interpretations that
contradict prior beliefs.
✶ Breaking the analytic problem down into its component parts by
showing the evidence and hypotheses and how they relate to one
another. This also enables analysts to sort data by type, date, and
diagnosticity of the evidence.
✶ Providing the ability to sort and compare evidence by analytically useful categories, such as evidence from open sources versus evidence from clandestine sources and from human sources versus technical sources, recent evidence versus older evidence, and conclusions based on hard evidence (intelligence reports) versus soft evidence (the analyst’s own assumptions and logical deductions). A minimal sketch of this kind of sorting follows the list.
✶ Spurring analysts to present conclusions in a way that is better
organized and more transparent as to how these conclusions were
reached than would otherwise be possible.
✶ Creating a foundation for identifying key assumptions and
indicators that can be monitored on an ongoing basis to determine the
direction in which events are heading.
✶ Leaving a clear audit trail as to how the analysis was done and
how individual analysts may have differed in their assumptions or
judgments.
✶ Capturing the analysts’ overall level of confidence in the quality of
the data and recording what additional information is needed or what
collection requirements should be generated.
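As promised in the sorting bullet above, the following Python sketch shows one minimal way to filter and order a structured evidence list. The records and field names are hypothetical rather than drawn from any ACH tool; the point is that once evidence is captured in structured form, these comparisons become one-line operations.

from datetime import date

# Hypothetical evidence records; in practice these would be captured by
# an ACH tool rather than typed in by hand.
evidence = [
    {"id": "E1", "source": "open",        "date": date(2014, 3, 2),  "kind": "report"},
    {"id": "E2", "source": "clandestine", "date": date(2014, 5, 18), "kind": "report"},
    {"id": "E3", "source": "open",        "date": date(2013, 11, 7), "kind": "assumption"},
    {"id": "E4", "source": "clandestine", "date": date(2014, 6, 1),  "kind": "assumption"},
]

# Open-source versus clandestine-source evidence.
open_items = [e["id"] for e in evidence if e["source"] == "open"]

# Recent evidence first.
by_recency = [e["id"] for e in sorted(evidence, key=lambda e: e["date"], reverse=True)]

# Hard evidence (reports) versus soft evidence (assumptions and deductions).
hard_items = [e["id"] for e in evidence if e["kind"] == "report"]

print("open:", open_items, "| newest first:", by_recency, "| hard:", hard_items)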

Step 3 is to obtain evidence to test whether or not a technique actually provides the expected benefits. Acquisition of evidence for or against these
benefits is not limited to the results of empirical experiments. It includes
structured interviews of analysts, managers, and customers; observations
of meetings of analysts as they use these techniques; and surveys as well
as experiments.

Step 4 is to obtain evidence of whether or not these benefits actually lead to higher-quality analysis. Quality of analysis is not limited to accuracy.
Other measures of quality include clarity of presentation, transparency in
how the conclusion was reached, and construction of an audit trail for
subsequent review, all of which are benefits that might be gained, for
example, by use of ACH. Evidence of higher quality might come from
independent evaluation of quality standards or interviews of customers
receiving the reports. Cost effectiveness, including cost in analyst time as
well as money, is another criterion of interest. As stated previously in this
book, we claim that the use of a structured technique often saves analysts
time in the long run. That claim should also be subjected to empirical testing. The following pages illustrate this evaluation approach with three specific techniques (see Figure 13.3).

Figure 13.3 Three Approaches to Evaluation

Source: 2009 Pherson Associates, LLC.

Key Assumptions Check


The Key Assumptions Check described in chapter 8 is one of the most
important and most commonly used techniques. Its purpose is to make
explicit and then to evaluate the assumptions that underlie any specific
analytic judgment or analytic approach. The evaluation questions are (1) How successful is the technique in achieving this goal? and (2) How does
that improve the quality and perhaps accuracy of analysis? The following
is a list of studies that might answer these questions:

✶ Survey analysts to determine how often they use a particular technique, their criteria for when to use it, procedures for using it,
what they see as the benefits gained from using it, and what impact it
had on their report.
✶ Compare the quality of a random sample of reports written without
having done a Key Assumptions Check with reports written after such
a check.
✶ Interview customers to determine whether identification of key
assumptions is something they want to see in intelligence reports.
How frequently do they see such information?
✶ Test whether a single analyst can effectively identify his or her own assumptions, or whether this should always be done as a small-group process. The answer seems obvious, but it may be useful to document
the magnitude of the difference with a before-and-after comparison.
Analysts are commonly asked to develop their own set of
assumptions prior to coming to a meeting to do a Key Assumptions
Check. Compare these initial lists with the list developed as a result
of the meeting.
✶ Observe several groups as they conduct a Key Assumptions
Check. Interview all analysts afterward to determine how they
perceive their learning experience during the meeting. Did it affect
their thinking about the most likely hypothesis? Will their experience
make them more likely or less likely to want to use this technique in
the future? Conduct an experiment to explore the impact of different
procedures for implementing this technique.

Cross-Impact Matrix
The Cross-Impact Matrix described in chapter 5 is an old technique that
Richards Heuer tested many years ago, but as far as both of us know it is
not now being used in the U.S. Intelligence Community. Heuer
recommends that it become part of the standard drill early in a project,
when an analyst is still in a learning mode, trying to sort out a complex
situation. It is the next logical step after a brainstorming session is used to
identify all the relevant factors or variables that may influence a situation. All the factors identified by the brainstorming are put into a matrix, which
is then used to guide discussion of how each factor influences all other
factors to which it is judged to be related in the context of a particular
problem. Such a group discussion of how each pair of variables interacts is
an enlightening learning experience and a good basis on which to build
further analysis and ongoing collaboration.
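For readers who want to see the underlying data structure, the sketch below represents a cross-impact matrix in Python as a square array in which cell (i, j) records the judged effect of factor i on factor j, here on an assumed scale from -2 (strongly weakens) to +2 (strongly strengthens). The factors and ratings are entirely hypothetical; the matrix simply generates the pairwise discussion questions the technique calls for.

# Hypothetical factors from a brainstorming session.
factors = ["economic stress", "regime cohesion", "protest activity"]

# impact[i][j] = judged effect of factors[i] on factors[j], from
# -2 (strongly weakens) to +2 (strongly strengthens); 0 = no relation.
# The diagonal is unused. All values here are illustrative.
impact = [
    [0, -2, +2],   # economic stress weakens cohesion, fuels protest
    [0,  0, -1],   # regime cohesion dampens protest
    [+1, -1, 0],   # protest deepens economic stress, erodes cohesion
]

# For each related pair, the matrix prompts a discussion question.
for i, src in enumerate(factors):
    for j, dst in enumerate(factors):
        if i != j and impact[i][j] != 0:
            direction = "strengthens" if impact[i][j] > 0 else "weakens"
            print(f"Discuss: how {src} {direction} {dst} (rating {impact[i][j]:+d})")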

Cross-Impact Matrix is a simple technique with obvious benefits, but like any technique its utility can and should be tested. There are several viable
ways to do that:

✶ Interview or survey analysts after the cross-impact discussion to gain their judgment of its value as a learning experience.
✶ Before the cross-impact discussion, ask each analyst individually
to identify and analyze the key interrelationships among the various
factors. Then compare these individual efforts with the group effort.
This is expected to show the benefits of the group process.
✶ Develop a scenario and conduct an experiment. Compare the
conclusions and quality of a report prepared by one team that was
unaware of the concept of a Cross-Impact Matrix with that of another
team that was instructed to use this technique as a basis for
discussion. Is the analysis prepared by the team that used the Cross-
Impact Matrix more complete and thorough than the analysis by the
other team?

Indicators Validator™
The Indicators Validator™ described in chapter 6 is a relatively new
technique developed by Randy Pherson to test the power of a set of
indicators to provide early warning of future developments, such as which
of several potential scenarios seems to be developing. It uses a matrix
similar to an ACH matrix, with scenarios listed across the top and
indicators down the left side. For each combination of indicator and
scenario, the analyst rates on a five-point scale the likelihood that this
indicator will or will not be seen if that scenario is developing. This rating
measures the diagnostic value of each indicator or its ability to diagnose
which scenario is becoming most likely.

It is often found that indicators have little or no value because they are
consistent with multiple scenarios. The explanation for this phenomenon is that when analysts are identifying indicators, they typically look for
indicators that are consistent with the scenario they are concerned about
identifying. They do not think about the value of an indicator being
diminished if it is also consistent with other hypotheses.
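A minimal sketch of this scoring, with invented scenarios, indicators, and ratings, appears below. It uses a simple highest-minus-lowest spread as a stand-in for diagnostic value; the actual Indicators Validator™ procedure may score diagnosticity differently, but the sketch shows why an indicator rated the same under every scenario cannot discriminate among them.

# Hypothetical indicator-versus-scenario matrix. Each cell is a rating on
# a five-point scale (1 = highly unlikely to be seen if that scenario is
# developing ... 5 = highly likely). Names and ratings are illustrative.
scenarios = ["S1: reform", "S2: stagnation", "S3: collapse"]
ratings = {
    "new opposition parties registered": [5, 2, 1],
    "capital flight accelerates":        [1, 3, 5],
    "state media criticizes leadership": [4, 4, 4],  # same everywhere
}

# One simple proxy for diagnostic value: the spread between an indicator's
# highest and lowest rating across scenarios. An indicator rated the same
# for every scenario (spread 0) cannot help discriminate among them.
for indicator, row in ratings.items():
    spread = max(row) - min(row)
    verdict = "non-diagnostic" if spread == 0 else f"spread {spread}"
    print(f"{indicator}: {verdict}")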

The Indicators Validator™ was developed to meet a perceived need for analysts to better understand the requirements for a good indicator. Ideally,
however, the need for this technique and its effectiveness should be tested
before all analysts working with indicators are encouraged to use it. Such
testing might be done as follows:

✶ Check the need for the new technique. Select a sample of intelligence reports that include an indicators list and apply the
Indicators Validator™ to each indicator on the list. How often does
this test identify indicators that have been put forward despite their
having little or no diagnostic value?
✶ Do a before-and-after comparison. Identify analysts who have
developed a set of indicators during the course of their work. Then
have them apply the Indicators Validator™ to their work and see how
much difference it makes.

13.4 Recommended Research Program


There is an obvious need to establish organizational units within the
analytic community with wide responsibilities for research on analytic
tradecraft. We are skeptical, however, about the benefits to be gained by
proposals for either a national institute8 or a virtual center managed by
consultants.9 The history of research to improve intelligence analysis
would argue that either of those options is likely to serve primarily as a
magnet for expensive contract proposals and to have only a marginal
impact, if any, on how the average analyst does his or her job. And to get
the results of the research implemented, the research needs to tap the in-
house talent of working analysts.

Several years ago, Nancy Dixon conducted a study for the Defense
Intelligence Agency on the “lessons learned problem” across the U.S.
Intelligence Community and within the DIA specifically. This study
confirmed a disconnect between the so-called lessons learned and the
implementation of these lessons. Dixon found that “the most frequent concern expressed across the Intelligence Community is the difficulty of
getting lessons implemented.” She attributes this to two major factors:

First, those responsible for developing and teaching lessons are seldom accountable for the implementation of what has been learned.
. . . Under the best of circumstances this handoff of findings from
studies and reports to ownership by operations is notoriously difficult
to accomplish.

Secondly, most lessons-learned efforts lack the initial identification of a specific target audience to employ these lessons. . . . Without a pre-
designated mutually agreed upon target audience for the lessons,
recommendations necessarily lack context that would otherwise make
them actionable.10

If the research recommended in this chapter is to be done effectively and have a significant effect on how analysis is actually done, we believe it
must be conducted by or in close collaboration with a line component
within the Intelligence Community that has responsibility for
implementing its findings.

1. Paul Bedard, “CIA Chief Claims Progress with Intelligence Reforms,” U.S. News and World Report, May 16, 2008,
www.usnews.com/articles/news/2008/05/16/cia-chief-claims-progress-
with-intelligence-reforms.html.

2. One of the best examples of research that does meet this comparability
standard is Master Sergeant Robert D. Folker Jr., Intelligence Analysis in
Theater Joint Intelligence Centers: An Experiment in Applying Structured
Methods (Washington, DC: Joint Military Intelligence College, 2000).

3. Philip Tetlock, Expert Political Judgment (Princeton, NJ: Princeton University Press, 2005), 190–202.

4. These judgments have been made in public statements and in personal communications to the authors.

5. Merriam-Webster Online, www.m-w.com/dictionary/analysis.

6. Charlan J. Nemeth and Brendan Nemeth-Brown, “Better Than Individuals? The Potential Benefits of Dissent and Diversity for Group
Creativity,” in Group Creativity: Innovation through Collaboration, ed.
Paul B. Paulus and Bernard A. Nijstad (New York: Oxford University
Press, 2003), 63–64.

7. This information was provided by a senior Intelligence Community educator in December 2006 and has been validated subsequently on many
occasions in projects done by government analysts.

8. Steven Rieber and Neil Thomason, “Creation of a National Institute for Analytic Methods,” Studies in Intelligence 49, no. 4 (2005).

9. Gregory F. Treverton and C. Bryan Gabbard, Assessing the Tradecraft of Intelligence Analysis, RAND Corporation Technical Report (Santa
Monica, CA: RAND, 2008).

10. Nancy Dixon, “The Problem and the Fix for the U.S. Intelligence
Agencies’ Lessons Learned,” July 1, 2009,
http://www.nancydixonblog.com/2009/07/the-problem-and-the-fix-for-the-
us-intelligence-agencies-lessons-learned.html.

14 The Future of Structured Analytic Techniques

14.1 Structuring the Data
14.2 Key Drivers
14.3 Imagining the Future: 2020

Intelligence analysts and managers are continuously looking for ways to improve the quality of their analysis. One such path is the increased
use of structured analytic techniques. This book is intended to encourage
and support that effort.

This final chapter employs a new technique called Complexity Manager (chapter 11) to instill some rigor in addressing a complex problem—the
future of structured analytic techniques. Richards Heuer developed the
Complexity Manager as a simplified combination of two long-established
futures analysis methods, Cross-Impact Analysis and System Dynamics. It
is designed for analysts who have not been trained in the use of such
advanced, quantitative techniques.

We apply the technique specifically to address the following questions:

✶ What is the prognosis for the use of structured analytic techniques in 2020? Will the use of structured analytic techniques gain traction
and be used with greater frequency by intelligence agencies, law
enforcement, and the business sector? Or will its use remain at current
levels? Or will it atrophy?
✶ What forces are spurring the increased use of structured analysis,
and what opportunities are available to support these forces?
✶ What obstacles are hindering the increased use of structured
analysis, and how might these obstacles be overcome?

At the end of this chapter, we suppose that it is now the year 2020 and the
use of structured analytic techniques is widespread. We present our vision
of what has happened to make this a reality and how the use of structured
analytic techniques has transformed the way analysis is done—not only in
intelligence but across a broad range of disciplines.

14.1 Structuring the Data
The analysis for the case study starts with a brainstormed list of variables
that will influence, or be impacted by, the use of structured analytic
techniques over the next five years or so. The first variable listed is the
target variable, followed by nine other variables related to it.

1. Increased use of structured analytic techniques
2. Executive support for collaboration and structured analytic techniques
3. Availability of virtual collaborative technology platforms
4. Generational change of analysts
5. Availability of analytic tradecraft support and mentoring
6. Change in budget for analysis
7. Change in customer preferences for collaborative, electronic products
8. Research on effectiveness of structured techniques
9. Analysts’ perception of time pressure
10. Lack of openness to change among senior analysts and mid-level
managers

The next step in Complexity Manager is to put these ten variables into a
Cross-Impact Matrix. This is a tool for the systematic description of the
two-way interaction between each pair of variables. Each pair is assessed
by asking the following question: Does this variable impact the paired
variable in a manner that will contribute to increased or decreased use of
structured analytic techniques in 2020? The completed matrix is shown in
Figure 14.1. This is the same matrix as that shown in chapter 11.

The goal of this analysis is to assess the likelihood of a substantial increase in the use of structured analytic techniques by 2020, while identifying any
side effects that might be associated with such an increase. That is why
increased use of structured analytic techniques is the lead variable,
variable A, which forms the first column and top row of the matrix. The
letters across the top of the matrix are abbreviations of the same variables
listed down the left side.

Figure 14.1 Variables Affecting the Future Use of Structured Analysis

To fill in the matrix, the authors started with column A to assess the impact
of each of the variables listed down the left side of the matrix on the
frequency of use of structured analysis. This exercise provides an
overview of all the variables that will impact positively or negatively on
the use of structured analysis. Next, the authors completed row A across
the top of the matrix. This shows the reverse impact—the impact of
increased use of structured analysis on the other variables listed across the
top of the matrix. Here one identifies the second-tier effects. Does the
growing use of structured analytic techniques impact any of these other
variables in ways that one needs to be aware of?1

The remainder of the matrix was then completed one variable at a time,
while identifying and making notes on potentially significant secondary
effects. A secondary effect occurs when one variable strengthens or
weakens another variable, which in turn impacts on or is impacted by
structured analytic techniques.
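To show how such a matrix can be interrogated once it is filled in, the short Python sketch below sums the strength of each variable's effects on the others, a rough way of surfacing candidate key drivers of the kind discussed in the next section. The variable names and ratings are purely hypothetical and are not the ratings behind Figure 14.1.

# Hypothetical Complexity Manager matrix: impact[i][j] is the judged
# impact of variable i on variable j (-2 strong negative ... +2 strong
# positive, 0 = none). Variable A is the target variable.
variables = ["A: use of techniques", "B: executive support",
             "C: collaboration tech", "D: generational change"]
impact = [
    [ 0, +1, +1,  0],
    [+2,  0, +1,  0],
    [+1,  0,  0, +1],
    [+1, +1, +1,  0],
]

# A variable's total cross-impact: the summed strength of its effects on
# all other variables. High "drives" totals flag candidate key drivers.
for i, name in enumerate(variables):
    outgoing = sum(abs(v) for j, v in enumerate(impact[i]) if j != i)
    incoming = sum(abs(impact[j][i]) for j in range(len(variables)) if j != i)
    print(f"{name}: drives {outgoing}, driven-by {incoming}")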

14.2 Key Drivers


A rigorous analysis of the interaction of all the variables suggests several
conclusions about the future of structured analysis as a whole. The analysis focuses on those variables that (1) are changing or that have the potential
to change and (2) have the greatest impact on other significant variables.

The principal drivers of the system—and, indeed, the variables with the
most cross-impact on other variables as shown in the matrix—are the
extent to which (1) senior executives provide a culture of collaboration, (2)
the work environment supports the development of virtual collaborative
technologies, and (3) customers indicate they are seeking more rigorous
and collaborative analysis delivered on electronic devices such as tablets.
These three variables provide strong support to structured analysis through
their endorsement of and support for collaboration. Structured analysis
reinforces them in turn by providing an optimal process through which
collaboration occurs.

A fourth variable, the new generation of analysts accustomed to social networking, is strongly supportive of information sharing and
collaboration and therefore indirectly supportive of structured analytic
techniques. The impact of the new generation of analysts is important
because it means time is not neutral. In other words, with the new
generation, time is now on the side of change. The interaction of these four
variables, all reinforcing one another and moving in the same direction, is
sufficient to signal that the future of structured techniques is most likely to
be positive.

Other variables play an important role as well. They identify opportunities to facilitate or expedite the change, or they present obstacles that need to
be minimized. The speed and ease of the change will be significantly
influenced by the support—or lack of support—for analytic tradecraft
cells, on-the-job mentoring, and facilitators to assist in team or group
processes using structured techniques.2 A research program on the
effectiveness of structured techniques would also be helpful in optimizing
their use and countering the opposition from those who are uncomfortable
using the techniques.

The odds seem to favor a fundamental change in how analysis is done. However, any change is far from guaranteed, because the outcome
depends on two assumptions, either of which, if it turned out to be wrong,
could preclude the desired outcome. One assumption is that funding for
analysis during the next few years will be sufficient to provide an
environment conducive to the expanded use of structured analytic techniques. This is critical to fund analytic tradecraft and collaboration
support cells, facilitation support, mentoring programs, research on the
effectiveness of structured analytic techniques, and other support for the
learning and use of structured analysis. A second assumption is that senior
executives will have the wisdom to allocate the necessary personnel,
funding, and training to support collaboration and will create the necessary
incentives to foster a broader use of structured techniques across their
organizations.

14.3 Imagining the Future: 2020


Imagine it is now 2020. Our assumptions have turned out to be accurate,
and collaboration in the use of structured analytic techniques is now
widespread. What has happened to make this outcome possible, and how
has it transformed the way analysis is done in 2020? This is our vision of
what could be happening by that date.

The use of synchronous, virtual collaborative platforms has been growing rapidly for the past five years. Younger analysts, working as teams from
several different locations, have embraced avatar-based virtual worlds as a
user-friendly vehicle for the production of joint papers with colleagues
working on related topics in other offices. Analysts in different geographic
locations arrange to meet from time to time, but most of the ongoing
interaction is accomplished using both asynchronous and synchronous
collaborative tools and systems.

Analysts can put on headphones at their desk and with just a click of a
mouse, find themselves sitting as an avatar in a virtual meeting room
conferring with experts from multiple geographic locations who are also
represented by avatars in the same room. They can post their papers for
others in the virtual world to read and edit, call up an Internet site that
merits examination, or project what they see on their own desktop
computer screen so that others, for example, can view how they are using a
particular computer tool. The avatar platform also allows an analyst or a
group of analysts to be mentored “on demand” by a senior analyst on how
to use a particular technique or by an instructor to conduct an analytic
techniques workshop without requiring anyone to leave his or her desk.

Structured analytic techniques are a major part of the process by which information is shared, and analysts in different units often work together toward a common goal. There is a basic set of techniques and critical
thinking skills that collaborating teams or groups commonly use at the
beginning of most projects to establish a shared foundation for their
communication and work together. This includes Virtual Brainstorming to
identify a list of relevant variables to be tracked and considered; Cross-
Impact Matrix as a basis for discussion and learning from one another
about the relationships between key variables; and Key Assumptions
Check to discuss assumptions about how things normally work in the topic
area of common interest. Judgments about the cross-impacts between
variables and the key assumptions are drafted collaboratively and posted
electronically. This process establishes a common base of knowledge and
understanding about a topic of interest. It also identifies at an early stage of
the collaborative process any potential differences of opinion, ascertains
gaps in available information, and determines which analysts are most
knowledgeable about various aspects of the project.

By 2016, all the principal elements of the U.S. Intelligence Community and several foreign intelligence services have created analytic tradecraft or
collaboration support cells in their analytic components. Analysts with
experience in using structured techniques are now helping other analysts
overcome their uncertainty when using a technique for the first time;
helping others decide which techniques are most appropriate for their
particular needs; providing oversight when needed to ensure that a
technique is being used appropriately; and teaching other analysts through
example and on-the-job training how to effectively facilitate team or group
meetings.

The process for coordinating analytic papers and assessments has changed
dramatically. Formal coordination prior to publication is now usually a
formality. Collaboration among interested parties is now taking place as
papers are initially being conceptualized, and all relevant information and
intelligence is shared. AIMS (Audience, Issue, Message, and Storyline),
the Getting Started Checklist, Key Assumptions Check, and other basic
critical thinking and structured analytic techniques are being used
regularly, and differences of opinion are explored early in the preparation
of an analytic product. New analytic techniques, such as Premortem
Analysis, Structured Self-Critique, and Adversarial Collaboration, have
almost become a requirement to bullet-proof analytic products and resolve
disagreements as much as possible prior to the final coordination.

Exploitation of outside knowledge—especially cultural, environmental,
and technical expertise—has increased significantly. The Delphi Method is
used extensively as a flexible procedure for obtaining ideas, judgments, or
forecasts electronically on avatar-based collaborative systems from
geographically dispersed panels of experts.

By 2020, the use of structured analytic techniques has expanded across the
globe. All U.S. intelligence agencies, almost all intelligence services in
Europe, and a dozen other services in other parts of the world have
incorporated structured techniques into their analytic processes. Over
twenty Fortune 50 companies with competitive intelligence units—and a
growing number of hospitals—have incorporated selected structured
techniques, including the Key Assumptions Check, ACH, and Premortem
Analysis, into their analytic processes, concluding that they could no
longer afford multimillion-dollar mistakes that could have been avoided by
engaging in more rigorous analysis as part of their business processes.

One no longer hears the old claim that there is no proof that the use of
structured analytic techniques improves analysis. The widespread use of
structured techniques in 2020 is partially attributable to the debunking of
that claim. Canadian studies involving a sample of reports prepared with
the assistance of several structured techniques and a comparable sample of
reports where structured techniques had not been used showed that the use
of structured techniques had distinct value. Researchers interviewed the
authors of the reports, their managers, and the customers who received
these reports. The study confirmed that reports prepared with the
assistance of the selected structured techniques were more thorough,
provided better accounts of how the conclusions were reached, and
generated greater confidence in the conclusions than did reports for which
such techniques were not used. These findings were replicated by other
intelligence services that used the techniques, and this was sufficient to
quiet most of the doubters.

The collective result of all these developments is an analytic climate in 2020 that produces more rigorous, constructive, and informative analysis
—a development that decision makers have noted and are making use of as
they face increasingly complex and interrelated policy challenges. As a
result, policymakers are increasingly demanding analytic products that, for
example, consider multiple scenarios or challenge the conventional
wisdom. The key conclusions generated by techniques like Quadrant Crunching™ and What If? Analysis are commonly discussed among
analysts and decision makers alike. In some cases, decision makers or their
aides even observe or participate in Role Playing exercises using
structured techniques. These interactions help both customers and analysts
understand the benefits and limitations of using collaborative processes
and tools to produce analysis that informs and augments policy
deliberations.

This vision of a robust and policy-relevant analytic climate in 2020 is achievable. But it is predicated on the willingness and ability of senior
managers in the intelligence, law enforcement, and business communities
to foster a collaborative environment that encourages the use of structured
analytic techniques. Achieving this goal will require a relatively modest
infusion of resources for analytic tradecraft centers, facilitators, mentors,
and methodology development and testing. It will also require patience
and a willingness to tolerate some mistakes as analysts become familiar
with the techniques, collaborative software, and working in virtual worlds.
We believe the outcome will definitely be worth the risk involved in
charting a new analytic frontier.

Structured Analytic Techniques: Families and Linkages
The structured analytic techniques presented in this book can be used
independently or in concert with other techniques. For ease of
presentation, we have sorted the techniques into eight groups, or
domains, based on the predominant focus and purpose of each
technique. See Chapter 3 for guidance on how to select the proper
technique(s).

The graphic on the opposing page illustrates the relationships among the techniques. Mapping the techniques in this manner reveals less
obvious connections and highlights the mutually reinforcing nature of
many of the techniques.

Connections within a domain are shown by a thick blue line. Connections to techniques in other domains are shown by a thin gray line.

“Core” techniques, or “hubs,” are highlighted in blue circles because of the connections they have with at least two other domains. Most of
these core techniques have been cited by analysts in the Intelligence
Community and businesses in the private sector as the tools they are
most likely to use in their analysis.

Structured analytic techniques in six different domains often make use of indicators; these techniques are marked by stars.

Most techniques are also enhanced by brainstorming. The art and science of analysis is dynamic, however, and we expect the list of
techniques to change over time.

1. For a more detailed explanation of how each variable was rated in the
Complexity Analysis matrix, send an e-mail requesting the data to
[email protected].

2. The concept of analytic tradecraft support cells is explored more fully in Randolph H. Pherson, “Transformation Cells: An Innovative Way to
Institutionalize Collaboration,” in Collaboration in the National Security
Arena: Myths and Reality—What Science and Experience Can Contribute
to its Success, June 2009. It is part of a collection published by the Topical Strategic Multilayer Assessment (SMA), Multi-Agency/Multi-Disciplinary
White Papers in Support of Counter-Terrorism and Counter-WMD, Office
of Secretary of Defense/DDR&E/RTTO, accessible at
http://www.hsdl.org/?view&did=712792.
