Capabilities-Based Assessment Guide


Capabilities-Based Assessment (CBA)

User's Guide
Version 2

Force Structure, Resources, and Assessments Directorate


(JCS J-8)
December 2006

Foreword
The Joint Capabilities Integration and Development System (JCIDS) was established in 2003 to
overcome several shortcomings in the existing requirements process. While much has been published
on administering JCIDS, there was a lack of written advice on how to assess the DOD's needs
recommend solutions in terms of operational capabilities.
The initial version of this paper (January 2006) was the first document to offer practical advice on
how to conduct such an assessment. Since that version was released, the instructions and manuals
governing JCIDS have been revised, and the DOD has done many more Capabilities-Based
Assessments (CBAs). This update addresses both the regulatory changes and what we have learned
about doing these analyses.
This document does three things: first, it advises an action officer on how to organize and execute a
CBA; second, it connects the CBA process to both the overarching strategic guidance and the proven
analytical methods available in the DOD; and third, it is readable. As a result, this paper discusses
bureaucratic realities that would not be addressed in an instruction, points out the occasional area
where strategic guidance is immature, inconsistent, or conflicting, and uses an informal style aimed at
engaging the reader.
This guide will provide you with a great deal of advice on how to assemble an assessment that meets
the aims of JCIDS. While the guide is not directive or prescriptive, it captures important lessons from
the CBAs conducted to date, and discusses the techniques and practices that have worked. Doing a
good CBA is difficult, and this guide will not change that. But, a CBA should not be mystifying, and
this paper is aimed at demystifying the inputs, best practices, and desired outcomes of such an
assessment.
An electronic version of this guide is available on the unclassified J-7 Joint Experimentation, Concept
Development, and Training web site (http://www.dtic.mil/futurejointwarfare/), as well as the Joint
Requirements Oversight Council's classified Knowledge Management and Development System
(https://jrockmds1.js.smil.mil/guestjrcz/gbase.guesthome).

Table of Contents
1.

WHAT IS A CAPABILITIES-BASED ASSESSMENT?................................................................ 4

1.1.
1.2.
1.3.

ORIGINS AND INTENT OF JCIDS.................................................................................................... 4
MAJOR ELEMENTS OF A CBA ....................................................................................................... 7
TYPES OF CBAS ............................................................................................................................ 7

2.

INITIAL PREPARATION FOR A CBA.......................................................................................... 9

2.1.
2.2.
2.3.
2.4.
2.5.
2.6.

DO YOU KNOW WHY YOU'RE DOING THIS CBA?........................................................................ 9
THE RELATIONSHIP OF JOINT CONCEPTS ...................................................................................... 9
IDENTIFYING RELEVANT STRATEGIC GUIDANCE ....................................................................... 11
IDENTIFYING STRATEGIC ANALYSIS GUIDANCE: THE DOD ANALYTIC AGENDA...................... 13
COLLECTING RELEVANT ANALYSES ........................................................................................... 14
IDENTIFYING RELEVANT EXPERTISE .......................................................................................... 14

3.

ORGANIZING TO CONDUCT A CBA......................................................................................... 16

3.1.
3.2.
3.3.
3.4.
3.5.

STUDY TEAM COMPOSITION ....................................................................................................... 16
INTERNAL WORK PROCESSES ..................................................................................................... 17
EXTERNAL WORK AND STAFFING PROCESSES............................................................................ 18
INFORMATION EXCHANGE .......................................................................................................... 19
SCHEDULING AND MAJOR DECISION POINTS .............................................................................. 20

4.

THE STUDY PLAN.......................................................................................................................... 22

5.

THE QUICK LOOK......................................................................................................................... 25

5.1.
5.2.
5.3.

THE NEED FOR A PILOT EFFORT ................................................................................................. 25
ESTABLISHING ANALYTICAL BOUNDS ........................................................................................ 25
QUICK LOOK PRODUCTS AND TIMING ........................................................................................ 26

6.

THE FUNCTIONAL AREA ANALYSIS (FAA) ........................................................................... 28

6.1.
6.2.
6.3.
6.4.
6.5.
6.6.
6.7.

DEFINING THE MILITARY PROBLEM AND THE CONCEPT TO BE EXAMINED ............................... 28
SCOPING I: SCENARIO SELECTION .............................................................................................. 29
FROM SCENARIOS TO CAPABILITIES ........................................................................................... 30
COLLECTING AND DOCUMENTING DOCTRINAL APPROACHES ................................................... 31
SCOPING II: TASK STRUCTURE AND FUNCTIONS ........................................................................ 32
USING STRATEGIC GUIDANCE TO SHAPE STANDARDS ............................................................... 33
THE OVERALL FAA PROCESS ..................................................................................................... 35

7.

THE FUNCTIONAL NEEDS ANALYSIS (FNA) ......................................................................... 37

7.1.
7.2.
7.3.
7.4.
7.5.
7.6.
7.7.

OPERATIONAL DEPICTION........................................................................................................... 37
CHOOSING AN ANALYTICAL APPROACH..................................................................................... 40
COLLECTING AND INSPECTING PERFORMANCE DATA ................................................................ 44
EXECUTING THE ANALYTICAL APPROACH ................................................................................. 45
EXTRACTING AND REPORTING NEEDS ........................................................................................ 45
VISION AND REALITY IN STATING NEEDS................................................................................... 48
THE OVERALL FNA PROCESS ..................................................................................................... 48

8.

THE FUNCTIONAL SOLUTIONS ANALYSIS (FSA)................................................................ 50

8.1.
8.2.
8.3.
8.4.
8.5.
8.6.
8.7.

GAPS VERSUS GAME-CHANGING CAPABILITIES ........................................................................ 51
EXAMINING POLICY ALTERNATIVES .......................................................................................... 52
ANALYSIS OF MISSION EFFECTIVENESS...................................................................................... 53
FINDING AFFORDABLE, FEASIBLE, AND RESPONSIVE SOLUTIONS ............................................. 53
DESCRIBING COLLECTIONS OF OPTIONS VIA PORTFOLIOS......................................................... 55
EXCESSES .................................................................................................................................... 58
THE OVERALL FSA PROCESS ...................................................................................................... 59

9.

THE QUICK TURN CBA................................................................................................................ 61

9.1.
9.2.
9.3.
9.4.
9.5.
9.6.

TYPICAL REASONS FOR AN ACCELERATED ASSESSMENT .......................................................... 61
SCOPING, DOWNSCOPING, AND NEGOTIATING THE OBJECTIVE ................................................. 62
FORMING A TEAM ....................................................................................................................... 63
WORKING ARRANGEMENTS ........................................................................................................ 65
DESIGNING TO TIME .................................................................................................................... 66
COMMUNICATING RESULTS AND RISKS ...................................................................................... 69

10.

A TWENTY-QUESTION SUMMARY ...................................................................................... 71

11.

REFERENCES.............................................................................................................................. 73

12.

LIST OF ACRONYMS ................................................................................................................ 75

13.

APPENDIX: THE BIOMETRICS QUICK TURN CBA .......................................................... 77

13.1.
13.2.
13.3.
13.4.
13.5.
13.6.
13.7.

BACKGROUND ......................................................................................................................... 77
FORMING THE CBA TEAM ...................................................................................................... 78
INITIAL PLANNING AND SCHEDULING .................................................................................... 79
TEAM EVOLUTION ................................................................................................................... 80
METHODOLOGY AND EXECUTION ........................................................................................... 81
OBSERVATIONS ....................................................................................................................... 86
REFERENCES............................................................................................................................ 86

Table of Figures
FIGURE 1. MEMO FROM THE SECRETARY OF DEFENSE THAT BEGAN JCIDS................................................ 5
FIGURE 2. JCIDS ANALYSIS PROCESS. .......................................................................................................... 6
FIGURE 3. SIMPLIFIED DIAGRAM OF MAJOR CBA INPUTS, ANALYSES, AND OUTPUTS.................................. 7
FIGURE 4. DOCUMENTS COMPRISING THE JOINT OPERATIONS CONCEPTS. ................................................ 10
FIGURE 5. RELATIONSHIPS OF KEY STRATEGIC DOCUMENTS. ................................................................... 12
FIGURE 6. NOTIONAL ORGANIZATION MATRIX FOR A CBA........................................................................ 16
FIGURE 7. TASK RELATIONSHIPS AND OVERLAP FOR A CBA...................................................................... 20
FIGURE 8. NOTIONAL VALUE FUNCTION FOR EXPECTED PERSONNEL LOSSES. ........................................... 35
FIGURE 9. MAJOR FAA TASKS AND FLOW. ................................................................................................. 36
FIGURE 10. TASK FLOW FOR A RENDITION OPERATION FROM THE GLOBAL STRIKE RAID SCENARIO CBA. ......... 38
FIGURE 11. DERIVATION OF THE COMMON JOINT ENGAGEMENT SEQUENCE FOR THE IAMD CBA. ......... 39
FIGURE 12. TRANSITION DIAGRAM FOR LEADERSHIP TARGETS FROM THE GLOBAL STRIKE RAID SCENARIO CBA. ......... 40
FIGURE 13. CLASSIFICATION OF MODELS BY WARFIGHTING SCOPE. .......................................................... 41
FIGURE 14. A CLASSIFICATION SCHEME FOR ANALYSIS APPROACHES. ...................................................... 41
FIGURE 15. OVERALL FNA TASK FLOW...................................................................................................... 49
FIGURE 16. OVERALL FSA TASK FLOW. ..................................................................................................... 60
FIGURE 17. EXAMPLE QUICK TURN CBA FAA TASK FLOW....................................................................... 67
FIGURE 18. EXAMPLE QUICK TURN CBA FNA TASK FLOW....................................................................... 68
FIGURE 19. EXAMPLE QUICK TURN CBA FSA TASK FLOW. ...................................................................... 69
TABLE A.1. CANDIDATE BIOMETRICS USE CASES, WITH CATEGORIES AND SUBCATEGORIES. ................... 82
TABLE A.2. THE PAIRWISE COMPARISON OF THE REVISED SET OF CAPABILITY GAPS. ............................... 83
TABLE A.3. THE TOP FIVE CAPABILITY GAPS FOR THE SHARE CATEGORY. ................................................ 85

1. What Is a Capabilities-Based Assessment?


On 20 October 2003, the Joint Requirements Oversight Council (JROC) issued a memorandum on a
recently completed study of forcible entry operations. This memorandum directed that
The Director, J-8, Joint Staff, in coordination with the Commander, US Joint Forces
Command, and the Services, develop a Forcible Entry Joint Operating Concept (JOC) by 31
December 2003.
The Director, J-8, Joint Staff, in coordination with the Services, use the JOC-derived tasks to
conduct a capabilities-based assessment by 30 September 2004. [JROCM 199-03, 2003]
Suppose that this memo made its way to your desk, with a handwritten note telling you that you
would lead the assessment. Your first thought might be one of self-satisfaction, since the four-star
generals and admirals charged with determining the needs of the DOD have chosen you to lead an
analysis of a critical mission area.
More likely, however, your first thoughts would be:
1. What's the background?
2. What's the issue? What's the real issue?
3. What's a capabilities-based assessment?
4. How am I going to do this?
Uncovering the answers to the first two questions is necessary for any staff action, and we will
reinforce the importance of knowing the answers to these questions. But, the thrust of this paper is to
help you answer the last two questions.
You may think that Question 3 should be easy, and there is a short, authoritative answer available.
CJCSI 3170.01F, Joint Capabilities Integration and Development System (JCIDS), states that
The CBA is the JCIDS analysis process that includes three phases: the FAA [functional area
analysis], the FNA [functional needs analysis], and the FSA [functional solutions analysis]. The
results of the CBA are used to develop a JCD (based on the FAA and FNA) or ICD (based on
the full analysis). [2006, p. GL-5]
Unfortunately, this definition introduces three types of analyses and two output documents that
you've never heard of, all of which appear to be couched in terms of an unstated set of capabilities.
The short answer isn't really an answer; it just generates 3 + 2 + 1 = 6 more questions.
So yes, there is a compact answer to the question "what is a CBA?" But understanding what a CBA is
requires a bit more discussion.

1.1. Origins and Intent of JCIDS


The first step in comprehending a CBA is learning why we have something called JCIDS. Prior
to 2002, the DOD had a requirements process to determine needs, which was operated by the
Joint Staff and featured the JROC as the highest-level decision body.
But, there was widespread dissatisfaction with this process, as evidenced by the memo issued
by the Secretary of Defense shown in Figure 1.

March 18, 2002 7:17 AM


TO:

Gen. Pace

CC:

Paul Wolfowitz
Gen. Myers
Steve Cambone

FROM:

Donald Rumsfeld

SUBJECT:

Requirements System

As Chairman of the JROC, please think through what we all need to do, individually or
collectively, to get the requirements system fixed.
It is pretty clear it is broken, and it is so powerful and inexorable that it invariably
continues to require things that ought not to be required, and does not require things that
need to be required.
Please screw your head into that, and let's have four or five of us meet and talk about it.
Thanks.

Figure 1. Memo from the Secretary of Defense that began JCIDS.

Predictably, a considerable amount of activity followed (led by the decision to banish the word
"requirement" from the new process). This effort resulted in three principles that form the
foundation of JCIDS:

Describing needs in terms of capabilities, instead of systems or force elements. One


of the major frustrations of the previous requirements processes was that solutions were
introduced to the system without any higher-level rationalization. The intent was to
replace statements such as "we need a more advanced fighter" with "we need the
capability to defeat enemy air defenses." The latter statement provides the rationalization
for needs, and also allows for competition among solutions.

Deriving needs from a joint perspective, from a new set of joint concepts. The JCIDS
architects recognized that a new set of documents would be necessary to link strategic
ends to warfighting means. Furthermore, these documents would have to go beyond
doctrine, which consists of beliefs about the best way to do things with existing resources. The
joint concepts would have to challenge existing approaches and provide impetus for
improvement. Also, these documents would broaden the strategic view and force the
DOD to consider the needs of a variety of military problems, not just one or two
canonical conflicts.

Having a single general or flag officer oversee each DOD functional portfolio. One
problem with the existing requirements process was that no one organization had
responsibility for knowing what DOD was doing in, say, command-and-control systems.
As a result, senior DOD decision makers became involved only after an unacceptably
small set of options was defined. In JCIDS, each Functional Capability Board (FCB) is
directed by a general or flag officer who has that responsibility.

By the summer of 2003, JCIDS was up and operating. The FCBs began functioning, and the
production of joint concept documents began.
We do not claim that this transition has been straightforward or painless. CJCSI 3170.01, the
governing instruction for JCIDS, has been revised six times in its first three years. Also, debate
continues on what exactly a capabilities-based approach is, what task structures should be used,
the role of future planning scenarios and current operations plans, and the exact relationship
between JCIDS and the formal DOD acquisition system.
One early principle that has not survived is the idea of mandating integrated architectures as the
sole basis for capabilities assessments. Early JCIDS work asserted that the emergent DOD
Architecture Framework (DODAF) [DODAF Working Group, 2004] should be the basis for
functional assessments, interoperability assessments, and assessment of mission areas. The
DODAF, which provides a variety of systems engineering tools, was originally oriented
towards command, control, and communication interoperability. It was later expanded to
portray a variety of military functions, and JCIDS originally featured architectures as a central
mechanism.
Architectures are useful (and probably essential) once you have decided what to do, as they
provide a framework to help determine how to do it. JCIDS capability assessments, however,
tend to be concerned more with what to do, and a DOD study concluded in July 2004 that
architecture development was more appropriate after a JCIDS assessment was complete [J-8,
2004]. It is true that architecture production is still a fundamental principle of JCIDS, and many
key JCIDS documents must contain certain DODAF products. Nonetheless, producing
architectures is not a requirement for a CBA.
More recently, JCIDS implemented a lexicon for capabilities. This lexicon, called the Joint
Capabilities Areas (JCAs) [J-8, 2005], divides joint operations into a hierarchy that is not tied
to particular force elements or platforms. JCAs have been given high-level endorsement
[Secretary of Defense, 2005] and are becoming common in JCIDS analyses.
JCIDS is both ambitious and evolving. Consequently, executing most JCIDS processes requires
flexibility and creativity, because the DOD must continue to change to fully implement a
system based on the principles listed above. Regardless, it is important for you to understand
the aims of JCIDS. For further information on the motivations behind JCIDS, an excellent
source is the Joint Defense Capabilities Study Final Report [Aldridge et al., 2003].
Figure 2. JCIDS analysis process. (The original diagram relates DOD strategic guidance, the Joint Operations Concepts, CONOPS, and integrated architectures to the Functional Area Analysis, Functional Needs Analysis, and Functional Solutions Analysis, and shows the JCD, ICD, CDD, CPD, and DCR as output documents, with the FSA weighing ideas for materiel and non-materiel (DOTMLPF) approaches.)

1.2. Major Elements of a CBA


CJCSM 3170.01C, Operation of the Joint Capabilities Integration and Development System
[2006, p. A-3], offers Figure 2 as the portrayal of a JCIDS CBA.
Although it is not obvious, this diagram contains the major elements of a CBA: the Functional
Area Analysis (FAA), the Functional Needs Analysis (FNA), and the Functional Solutions
Analysis (FSA).
Figure 3 reduces Figure 2 to the simplest depiction possible.

Figure 3. Simplified diagram of major CBA inputs, analyses, and outputs. (Existing guidance feeds the FAA, which asks "What are we talking about?"; the FNA then asks "How good are we at doing it?"; and the FSA asks "What should we do about it?")

The FAA synthesizes existing guidance to specify the military problems to be studied. The
FNA then examines those problems, assesses how well the DOD can address them given
its current program, and recommends needs the DOD should address. The FSA takes this
assessment as input, and generates recommendations for solutions to the needs.
Of course, these simplified inputs and outputs decompose into much more complicated sets of
products, and the analyses themselves require much more examination. The point is, however,
that a JCIDS CBA is not really different from any other analysis. It must specify the issues,
estimate our current and projected abilities, and recommend actions.
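
To make that sequence concrete, the sketch below models the three phases as a simple pipeline. It is purely illustrative (the class and field names are ours, not part of any JCIDS document), written in Python only because a small executable example is an easy way to see how each phase's output becomes the next phase's input.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FunctionalAreaAnalysis:
    """FAA: 'What are we talking about?' Synthesizes guidance into military problems."""
    guidance: List[str]
    military_problems: List[str] = field(default_factory=list)

@dataclass
class FunctionalNeedsAnalysis:
    """FNA: 'How good are we at doing it?' Assesses the current program against the problems."""
    faa: FunctionalAreaAnalysis
    identified_needs: List[str] = field(default_factory=list)

@dataclass
class FunctionalSolutionsAnalysis:
    """FSA: 'What should we do about it?' Recommends solutions to the identified needs."""
    fna: FunctionalNeedsAnalysis
    recommended_solutions: List[str] = field(default_factory=list)

# Notional end-to-end flow: existing guidance -> FAA -> FNA -> FSA.
faa = FunctionalAreaAnalysis(guidance=["National Defense Strategy", "relevant JIC"])
faa.military_problems.append("defeat an adversary's integrated air defenses")

fna = FunctionalNeedsAnalysis(faa=faa)
fna.identified_needs.append("suppress long-range surface-to-air systems early in a campaign")

fsa = FunctionalSolutionsAnalysis(fna=fna)
fsa.recommended_solutions.append(
    "compare materiel and non-materiel (DOTMLPF) approaches for cost and effectiveness")

print(len(fsa.fna.faa.military_problems), "problem(s) traced from guidance to solutions")
```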
Note that your CBA may not include an FSA. The current trend in JCIDS for jointly-initiated
assessments is to do an FAA and an FNA, and then produce a Joint Capabilities Document
(JCD), which is sent to the JROC. If the JROC opts to act on the needs identified in the
assessment, they will assign a sponsor (typically a Service), to do one or more FSAs.
This also means that you may do a CBA which consists of nothing but an FSA. In these cases,
you will have to rely on someone else's FAA and FNA, to include repairing any defects and
reacting to subsequent changes in guidance.

1.3. Types of CBAs


To conclude this introductory section, we offer a CBA taxonomy. CBAs commissioned under
JCIDS cover a broad spectrum, and the type of CBA will significantly influence how you
structure and conduct the assessment. This taxonomy is not outlined in any formal JCIDS

documents, but is a synthesis of what has been directed to date, and also reflects our experience
with DOD mission area assessments. The types of CBAs in this taxonomy are:

CBAs based on operational shortcomings we have already experienced;

CBAs based on perceived future needs;

CBAs to provide a unified look at a mission area;

CBAs to provide joint examination of an operational concept proposed by a particular


community;

CBAs to provide a broad examination of a functional area; and

CBAs to provide answers on extremely compressed timelines.

The reason we suggest this taxonomy is that the six different types have different implications
for what the CBA must emphasize. For example, a CBA based on an actual operational failure
will likely spend little (or no) time in the FAA, as the "what" has already been demonstrated.
Conversely, a CBA based on a perceived need, such as a study result, will still require
considerable work in the FAA. The fact that the needs are forecast, and not demonstrated,
indicates that there is still some question about the exact definition of the problem, its scope, or
whether the stated problem really is a problem.
CBAs aimed at unified examinations of mission areas support a primary objective of JCIDS. If
the mission area is not wholly within the province of a particular community (particularly a
Service), then it is likely that either multiple communities are addressing the problems without
much coordination, or no one is addressing them.
CBAs may also examine the utility of a proposed concept or solution. While this seems
contrary to the fundamental principle of having needs come from top-down concepts, the fact is
that good ideas can come from below, and may have much broader application than the
originators thought. Seabasing, which potentially addresses a wide range of military problems,
is the best current example of this type of CBA.
A CBA may be concerned with a broad look at a functional area. Again, this seems contrary;
for example, the Joint Chiefs of Staff (JCS) Tank has commissioned a CBA on joint
distribution, but JCIDS already has a Focused Logistics FCB whose entire mission is assessing
joint logistics. The answer is that the CBA should take a crosscutting look at the function, to
include assessing its affects on a variety of military problems. FAA scoping is very important in
this type of CBA, because attempting to examine the impacts of one functional area on
everything else is unmanageable.
Finally, this paper also discusses something called a Quick Turn CBA (Section 9). The
functional taxonomy of CBAs we list above still applies to these types of CBAs. However, we
discuss them separately because they normally must be executed in 30 to 60 days, and the
tremendous time compression requires modifying the approaches we recommend for less
frantic efforts.

2. Initial Preparation for a CBA


Your chain of command will probably know that a CBA is coming. For example, the JROC or the
Joint Chiefs choose CBAs from a list of proposals gathered annually by JCS/J-7, so your leadership
will have participated in the call for topics. The Combatant Commands and the Services have been
very aggressive about proposing CBA topics, so the candidates will have been under consideration for
several months.
However, a CBA may be commissioned by another source such as a Combatant Command
(COCOM) or a Service. So, you may not get all the warning time you would like and you may be
tempted to skip some of the advice we offer below on initial preparations. You will be under some
pressure, as organizations campaign aggressively to lead JCIDS CBAs, and your management likely
expended considerable political capital to put your organization in charge. So, your bosses will want
you to move out rapidly and get on with the analyses.
Be warned, though: you will do all the things listed below eventually, and doing them later in the
process will be very painful.

2.1. Do You Know Why You're Doing This CBA?


This is a fundamental question, and the answer is not "the 4-stars said to do it." You will not
receive a formal description of what you are supposed to do, or why; the forcible entry example
cited in Section 1 is probably as much as you will get. The typical CBA tasking memo
makes a five-paragraph order look like a textbook.
You will have to discover who wanted this assessment done, what motivated them to be
concerned about it, and why this particular CBA topic prevailed. If you are lucky, your chain of
command will tell you. If not, you will have to find out.
Learning the answers to these questions is not just a Machiavellian journey to collect gossip
about high-level DOD conversations. It is essential that you know as much as possible about
why this CBA is of concern and what the people who commissioned the work are expecting.
Our experience has been that the results of these efforts can differ substantially from what the
decision makers expected to see. Now, there is nothing wrong with that; confirming or denying
notions about military problems is precisely why we conduct studies.
Politics, however, are inescapable. JCIDS CBAs inevitably raise questions that challenge major
programs, major concepts, and even core Service competencies. Questions such as these
generate resistance, and you must be able to deal with this resistance if you expect to do a
decent assessment. We cannot overemphasize the value of knowing who championed your
CBA topic, what caused them to promote it, and why (as well as who opposed its selection, and
why). You will see these people again!
Furthermore, JCIDS CBAs (other than the Quick Turn variety) are time-consuming. None of
the JROC-commissioned CBAs done to date have been able to finish in less than 11 months.
During that time, the major decision makers in the JROC will inevitably change, and the
strategic environment may change as well. At least one of the new players will ask you for the
history, and not being able to provide it will be a failure.

2.2. The Relationship of Joint Concepts


Recall that one of the fundamental principles of JCIDS is the determination of needs from a set
of joint concepts. To support that principle, the Joint Staff formalized the production of these
concept documents at the same time JCIDS was designed. Although some of these documents

are written by FCBs, producing joint concepts is not a JCIDS function; it is a separate process,
managed by JCS/J-7 and US Joint Forces Command/J-9.
Consequently, this paper will not describe joint concepts and their production in detail.
Nonetheless, you will have to be familiar with the applicable joint concepts, particularly one
called the Joint Integrating Concept (JIC). To date, all JROC-directed CBAs have been
accompanied by a JIC, which was tasked at the same time as the CBA (in the case of the
forcible entry example in Section 1, the document at that time was called a JOC). So the JIC
has a fundamental relationship to a CBA.
But what exactly is a JIC?
Recall that doctrine is a statement of beliefs about the best way to do something with the
resources we currently have. Joint concepts, however, are ideas about how something might be
done with resources we may not have yet. The Chairman of the Joint Chiefs of Staff (CJCS)
issued a series of "Joint Vision" documents through the 1990s as a means to drive progress in
the DOD; joint concepts documents have now assumed that role.
So, a JIC is a statement of how something might be done; in particular, it states how we would
like to do that thing in the future. Furthermore, the JIC is the lowest level of a family of concept
documents collectively called the Joint Operations Concepts. These are shown in Figure 4.
So, the integration the JIC performs is to use a set of general operational and functional
concepts to produce a description of how some specific operation or function might be done in
the future.
This may lead you to believe that the JIC that comes with your CBA will contain complete
guidance, and your job will be reduced to executing the quantitative assessment. After all,
CJCSI 3010.02B, Joint Operations Concepts, says:
JICs are narrowly scoped to identify, describe and apply specific capabilities,
decomposing them into the fundamental tasks, conditions, and standards required to
conduct a CBA…. Additionally, a JIC contains an illustrative vignette to facilitate
understanding of the concept. [2005, p. A-3]
Figure 4. Documents comprising the Joint Operations Concepts. (The Capstone Concept for Joint Operations (CCJO) is a broad statement of how to operate in the future; the Joint Operating Concepts (JOCs) broadly describe joint force operations; the Joint Functional Concepts (JFCs) broadly describe enduring joint force functions; and the Joint Integrating Concepts (JICs) describe narrowly focused operations or functions.)

Unfortunately, this has not been the case. The first set of JICs (Global Strike, Joint Logistics
Distribution, Joint Command and Control, Seabasing, Integrated Air and Missile Defense, Joint
Undersea Superiority, and Joint Forcible Entry Operations) range from immensely detailed lists
of necessary tasks to Clausewitzian discussions of military operations, and nearly everything in
between.
We are not disparaging the authors of the current set of JICs. On the contrary, it is
extraordinarily difficult to write something that induces progress without making the document
either fanciful or vacuous. Formal joint concept development was established at the same time
as JCIDS, and has been subject to the same growing pains.
The most honest advice we can give you is that first, you should participate in (or at least
follow) the development of the JIC, and second, you must respect what the JIC says in the
execution of your assessment. But, the JIC will not be a statement of work for your CBA. The
concept development staffing process will ensure that the JIC contains at least the elements
cited above, but you will have to sharpen and augment the JIC to conduct your analysis.
You may find yourself doing a CBA that does not have a JIC. In this case, you will have to
provide what the JIC provides, particularly the statement of the military problem and the
specific operation or function being considered. Since a JIC does not exist, you will likely have
to come up with justification from some strategic guidance document (see Section 2.3) that
describes the operation or function and the need to examine it. You can fill in the other
elements that a JIC would contain in the course of doing your FAA.

2.3. Identifying Relevant Strategic Guidance


The Joint Concept documents are tied to an even broader chain of strategic documents, which
are illustrated in Figure 5.
The National Security Strategy (NSS) is a document required by law after a Presidential
election, and is signed by the President. While the NSS is the foundation document for national
security, it will likely not contain advice directly applicable to your CBA.
The National Defense Strategy, however, is signed by the Secretary of Defense and does
contain information relevant to your CBA. The current Defense Strategy contains substantial
guidance on security challenges, key operational capabilities, and operational priorities, all of
which will influence your analyses. The National Military Strategy (NMS) is signed by the
CJCS and provides operational context to the Defense Strategy, and the joint concepts add
detail to both the Defense Strategy and the NMS.
There are several other Secretary of Defense-level documents that may impact your CBA. The
first two are operational documents called the Unified Command Plan (UCP) and the
Contingency Planning Guidance (CPG). As opposed to the documents listed so far, the CPG
is classified.
The UCP provides basic guidance to the Combatant Commanders. It defines their roles,
missions, geographic responsibilities, and functional responsibilities, and also establishes
command relationships. The reason the UCP is relevant (or even central) to your CBA is that
the mission or function you are assessing will be executed by a COCOM, and the UCP will
provide advice on which combatant commands must be able to execute that mission or
function. The UCP may also implicitly define the mission or function and set standards for its
execution, making it a potentially important source of guidance.
The CPG is signed by the Secretary of Defense and is approved by the President, and
establishes strategic priorities and directs the combatant commanders to prepare certain
contingency plans. It is not a well-known document, and many members of the DOD have
never heard of it. Nonetheless, it is extremely important, because it defines the current
challenges that the DOD must plan for, and gives advice on priorities.

Figure 5. Relationships of Key Strategic Documents. (The original diagram arranges the National Security Strategy, the National Defense Strategy, and the National Military Strategy above the Capstone Concept for Joint Operations and the Joint Operating, Functional, and Integrating Concepts, annotated with themes such as national interests and priorities, persistent and emerging challenges, key operational capabilities, operational priorities, risk assessments, and full-spectrum dominance.)

While the UCP and CPG are purely operational documents, the Strategic Planning Guidance
(SPG) provides the bridge between the Defense Strategy and the planning, programming, and
budgeting world. The SPG is a classified document signed by the Secretary of Defense, and
gives overarching direction on strategic and budget priorities. In particular, the SPG (which is
published biennially) is a central source for guidance on where the DOD should reduce risk or
accept risk.
The SPG leads what is now known as the Enhanced Planning Process, which results in a set of
documents known as the Joint Programming Guidance (JPG). The JPG provides guidance
to the military departments and defense agencies on building their program proposals, and may
contain specific program information that will be useful for your assessment.
Another bridging document is the Transformation Planning Guidance (TPG). The first TPG
was signed by the Secretary of Defense in April 2003, and was designed to reinforce the current
Administrations interest in substantially changing the direction of DOD. As with the joint
concepts documents, it describes what the DOD might be and sets directions for change. In fact,
the 2003 TPG directed the preparation of the family of joint concepts documents.
The final document that you should examine is the most recent published Quadrennial
Defense Review (QDR) report. The QDR is mandated by law and requires the DOD to
undertake a comprehensive examination of its strategy and performance. To date, QDRs have
been conducted in 1997, 2001, and 2005, and each of those reviews has resulted in substantial
strategic and program changes. Many of the ideas that appear in the documents above first
appeared in a QDR report; for example, the notion of a capabilities-based approach (which
ultimately led to JCIDS) was first described in QDR 2001. Also note that a QDR report may
substitute for that year's SPG or TPG.
You still may not be convinced that you need to study these documents for a CBA. If you are
not, here's a short list of very compelling reasons to study them.

To find an organizing framework. The mission or function you are assessing probably
covers an enormous range of potential military operations. The documents above offer a
number of organizing frameworks (particularly the security environment framework in
the Defense Strategy) that will help you make your assessment manageable.

To identify overarching priorities. The SPG in particular has been quite aggressive in
specifying areas where the DOD should improve, and areas where the DOD can take risk.
If these documents offer such advice on areas related to your CBA, you should use them.

To help set performance standards. A central issue you will have to settle in your CBA
is setting the criteria for the assessment of how well DOD does (or should) perform a
mission or task. These documents contain authoritative advice on such criteria, such as
friendly losses and collateral damage.

To secure unchallengeable guidance. You will face a number of serious bureaucratic


challenges when conducting your CBA; that is inevitable. If your position is supported
by a document signed by the Secretary of Defense, you greatly increase your odds of
winning the argument.

2.4. Identifying Strategic Analysis Guidance: the DOD Analytic Agenda


A CBA is a strategic analysis, because it examines the effectiveness and sufficiency of current
and planned forces. It turns out that DOD has a policy for such analyses:
The Department shall institute a comprehensive and systematic process to provide data
for strategic analyses, using approved scenarios and ensuring that data are available,
easily accessible, integrated, [and] pedigreed…. The Department will develop, in a joint,
transparent, and collaborative manner, appropriate, up-to-date, traceable, and integrated
baselines [packages consisting of a scenario, concepts of operation, and integrated data]
suitable for strategic analyses… [DODD 8620.1, 2002, p. 3].
The processes that this directive mandates are collectively known as the DOD Analytic Agenda,
and are overseen by an organization called the Joint Analytic Data Management Steering
Committee (JADMSC). This committee has representatives from all parts of OSD, the Services,
the DIA, and the Joint Staff, and has the job of scenario, baseline, and data production.
Knowledge of available scenarios, baselines, and data is very important to your CBA, so you need
to know what the suite of Analytic Agenda scenarios contains and how to get them. Much of the
information is catalogued by Joint Data Support (JDS), an OSD organization that maintains a
repository accessible via the SIPRNET (https://jds.pae.osd.smil.mil). You should also find out
your organization's contact with the JADMSC, as this will provide you a way to find out the
current state of scenario and data availability. The Analytic Agenda's scenario production
schedule is heavily influenced by guidance in the SPG, so scenario analyses (usually called
analytic baselines) produced by the Analytic Agenda will probably be an important source for
your CBA.

2.5. Collecting Relevant Analyses


This step begins with a literature search. If your area is important enough for senior leaders to
commission a CBA, then it is almost certain that several major studies have been conducted on
the topic. For example, both the Joint Forcible Entry Operations and Joint Undersea Superiority
CBAs were directed as an outcome of prior DOD studies.
If you have done what we suggest in Section 2.1, you will already know which studies, if any,
convinced the JCS to begin your CBA. Those studies, if properly documented, will reference
other studies, and you will soon build up a large library.
One important source is the reports issued by the Defense Science Board (DSB). These reports
are readily available on the Internet (http://www.acq.osd.mil/dsb/), and the DSB will have
likely considered some portions of your topic in the last several years. DSB reports are prepared
by national experts at the very highest levels of their fields, and have considerable influence.
Study the available joint doctrine on your CBA topic. Doctrine is the statement of how we do
things now, and you will have to thoroughly understand our current approaches to assess where
we are.
Another important set of documents to study is the Combatant Commanders' integrated
priority lists (IPLs). The Combatant Commanders use IPLs as their primary means to
communicate their near-term operational needs and priorities to the planning and programming
community, and IPLs result from considerable analysis done by COCOM staffs. You will probably
find several IPLs related to your CBA.
You should also collect op-ed articles written in the defense literature about your topic. You
might think that articles that appear in places such as Defense News, Armed Forces Journal,
and Foreign Affairs aren't relevant to your assessment, but they actually are. For one thing,
they are good indicators of the range of debate about your CBA topic. Do the commentators
think we need more? That we have too much? That our current plans make no sense? Also,
such articles are written and edited by professional authors, and communicate the arguments
much more effectively than a typical DOD study report.
A substantial challenge that you will face in this era of stripped-down PowerPoint presentations
is that many important efforts are not documented properly. As a result, you may uncover only
a very thin, 3-bullets-per-slide decision brief with no accompanying notes. Such briefs are
impossible to interpret unambiguously, so in these cases you must find the original authors and
interview them about what they did.

2.6. Identifying Relevant Expertise


Interviewing these authors will also help you with a necessary step, which is identifying experts
(real experts) who can help.
Doing a CBA well is a challenge. To date, the typical JCIDS CBA has been led by an O-5
action officer with no previous large-scale study experience. In addition, most of the study
leads were on their first tour in a joint, Service, or COCOM staff. Yet, they were expected to
perform a comprehensive analysis of a broad mission or functional area, provide defensible
quantitative results, and function in an extremely contentious bureaucratic environment. So
how did they do it? And how will you do it?
You will have to find expertise of the following types.

Adversary expertise: who can credibly estimate the range of options open to an enemy?

Analytical ability: who has the tools, techniques, and track record that can support my
CBA?

Bureaucratic agility: who knows how to navigate among all the competing interests
safely?

Communications ability: who can communicate the results with brevity, clarity, and
believability to senior decision makers?

Cost estimation: who can forecast the costs of the options of interest?

Doctrinal knowledge: who can describe how we do these things now?

Study design: who can build a study plan that satisfies the tasking, provides appropriate
linkage to the strategy, and is executable in the time allotted?

Study management: who knows how to organize and execute the CBA?

Technical knowledge: who knows what technology options are realizable as CBA
solutions?

Policy knowledge: who knows what policy options are realizable as CBA solutions?

Too often, we believe that to do a successful CBA on, say, integrated air and missile defense,
we just need to unearth a set of experts on air and missile defense doctrine, and the rest will
take care of itself. Unfortunately, history has shown this to be untrue; you will need all ten of
the types of expertise shown above, and you will not find all of them in one person.
Consequently, you need to explore the community and find out who is good at these things. If
they are available, you should note that for the eventual composition of your study team. If not,
you should at least get advice from them on how to execute your assessment.
The difficult part of this job is finding out who is really good, as opposed to those who merely
claim to be good. The answer is not earthshaking; as you would with, say, a home improvement
project, you have to gather and check references.
This is where the literature search can come in very handy. If a study is deemed successful and
induces the DOD to make a substantial move, then many things went right. So find out who
made things go right. You can combine this search with your literature search, and you will end
up with a list of both useful study products and real experts.
This approach also helps you avoid being overwhelmed with people who find out you are
looking for help. Important studies attract many potential providers, but you cannot allow
yourself to be consumed with unsolicited proposals in the preparation phase. You have to lead
the search for expertise.

3. Organizing to Conduct a CBA


Organizing your CBA consists of two tasks: forming your team, and deciding how you will operate.
As with all things in your CBA, you will get plenty of direction on both, but you will still have many
decisions to make.

3.1. Study Team Composition


We suggest that you use the list of expertise areas in Section 2.6 to help determine whether you
have all the help you will need. Figure 6 shows an example of a matrix you might use as you
are building your team.
Figure 6. Notional organization matrix for a CBA. (The rows list the expertise areas from Section 2.6: adversary expertise, the analytic team, a bureaucratic advisor, a communicator, a cost estimator, doctrinal experts, a study designer, a study organizer, policy experts, and technical experts. The columns mark which providers, such as the study director, the director's chain of command, other government organizations, or contractors, cover each area.)
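
If it helps, the same matrix can also be kept as a simple machine-readable checklist. The sketch below is a notional Python rendering (the organization names are invented for illustration, not drawn from any real CBA) that records which providers have committed to each of the ten expertise areas from Section 2.6 and flags the areas still uncovered.

```python
# Notional coverage checklist: expertise area -> providers committed to the CBA.
# The provider names are illustrative placeholders, not real organizations.
expertise_coverage = {
    "adversary expertise":    ["Organization A"],
    "analytical ability":     ["Organization B", "Contractor A"],
    "bureaucratic agility":   ["study director's chain of command"],
    "communications ability": [],
    "cost estimation":        ["Contractor B"],
    "doctrinal knowledge":    ["Organization A"],
    "study design":           ["Contractor A"],
    "study management":       ["study director"],
    "technical knowledge":    ["Contractor B"],
    "policy knowledge":       [],
}

# Flag expertise areas with no committed provider; these are staffing gaps to close
# before the assessment begins.
gaps = [area for area, providers in expertise_coverage.items() if not providers]
print("Uncovered expertise areas:", ", ".join(gaps) if gaps else "none")
```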

In addition to the types of expertise shown above, you'll have to designate someone to function
as your deputy: essentially, someone who stands in when you are unavailable.
Within the expertise chart, there are many choices of providers. You may use some
combination of:

government personnel in your own organization;

government personnel in other organizations;

personnel on contract to your organization;

personnel on contract to other organizations; and

informal advisors who are neither in the government nor are on contract to the
government.

We cannot give you a precise answer on whom to use as providers, because different CBAs
require different mixes of skills. We can, however, offer some considerations.
First of all, government organizations can be, and often are, redirected to other higher-priority tasks.
If you have a commitment from a government organization to provide help for your CBA, then
your chain of command will have to enforce it. And, since CBAs are often viewed as long-term
efforts that can tolerate delays, redirections away from CBA work are common.
Also, recognize that your CBA will largely be an additional duty for anyone helping you in an
external organization. The original JCIDS vision was that the JIC would divide the topic into
functions, those functions would be passed to each owning FCB for assessment, and the results
would be consolidated by the CBA lead. Unfortunately, the CBAs that have attempted this
method have found that it doesn't work very well. First, FCBs have a large routine workload,
and second, these assessments aren't easily partitioned; they really have to be done by an
integrated team.
Of course, you can use contractors. In this case, you have a formal contract, along with formal
avenues for redress if the work is unsatisfactory. But, to use contractors, you will have to get
funding, and allocate time to the competitive bidding process. Also, getting someone on
contract tends to take at least 60 days. More importantly, you must also ensure that any for-profit contractors you use do not have a financial interest in the outcome of the analyses. CBAs
result in findings that are acquisition-sensitive, as they prioritize needs and inform future
budgets and investments.
Another option is to use Federally Funded Research and Development Centers (FFRDCs) or
University Affiliated Research Centers (UARCs). These are not-for-profit organizations that
have a special relationship with DOD, and you can hire them directly. Understand, though, that
public law limits the amount of support FFRDCs can provide to DOD, so FFRDC man-years
are formally allocated. If you have identified an FFRDC as a source of expertise you want to
employ, you will have to get a man-year allocation.
You should try to get your core team together at the start. In several CBAs, certain elements of
the study team were not brought into the CBA until considerable work was done, due to
funding delays or the feeling that they weren't needed early on. This is an enormous mistake. If
you want a team that functions well, they have to be in on the whole effort, from end to end.
As a final note, when we say "study team," we are talking about the small, core team that takes
direction from you, and you alone, on what will be done and when. We are not referring to the
larger working group that you will form to deal with Combatant Commands, Services, Defense
Agencies, and other communities. Representatives from these groups will work with you, but
they report to other chains of command and have a primary task of monitoring your effort for
their organizations. This is not a pejorative distinction; these organizations have a right to know
what you are doing, and working group members are a valuable source of input. But, they are
not a part of your core team.

3.2. Internal Work Processes


Since CBAs are wide-ranging assessments, you, as the study director, will have to deal with a
large group of people. But, you also have to deliver an assessment on time.
Consequently, you should try to organize your team so that the group that is putting out
products is shielded from meetings. Indeed, the best organization is a "front desk" group that
works with external organizations, and a "back shop" team that produces analyses, written
documents, and briefings. If you are constantly dragging your best analytical and doctrinal
experts to what-are-you-guys-doing-and-how-might-that-affect-us sessions, your progress will
suffer. Save yourself some trouble and find someone that can monitor the back shop team and
answer questions in meetings.
Be careful, however, that you do not create a problem by walling off your back shop team from
the outside world. They need to understand the entire landscape of the CBA, because the inputs
and issues that you confront affect the content of the analyses.
The important characteristic of the core team is that you command it; it is not subject to any
guidance other than yours. It should contain all the expertise shown in Figure 6, and it also
must contain a person (other than you) who is the internal team lead. This person is the
executive officer who runs the operation day in and day out while you deal with the outside
world.

This team will accomplish the bulk of the work in the CBA. They will do all the fundamental
analysis and all the integration, and will generate all the supporting information for your
presentations. You will meet with them frequently, but remember that you are commanding, not
controlling. Unless you are a quantitative or policy analyst yourself, it is likely that your core
team is doing things that you are not trained to do. So resist the urge to tinker with them beyond
obtaining what you need to know to coherently present their work. This team is your most
valuable resource, so do not waste their time (e.g., don't make ten people come to your
overcrowded workspaces twice a week when you could go to their offices once a week).
On the other hand, you have to give your team overarching direction. You have to ensure that
they are helping you stay on track, that they are addressing the issues and not becoming too
focused on a narrow set of scenarios or analytical tools, and that their work reflects external
changes that you bring to them. You also have to ensure that they give you an accurate and
usable set of project management options when you have to react to (inevitable) schedule or
scope changes.
Finally, if your CBA is looking at issues at higher classification levels, you have to ensure at
the outset that the critical members of this team either have or can get the appropriate
clearances. Several CBAs have had significant problems with clearances, so much so that they led
to delays of up to one year.

3.3. External Work and Staffing Processes


This raises the question of how to work with the outside world. You may believe that the
collaborative analysis process that so many DOD documents talk about is truly collaborative,
and that anyone who shows up at your meetings has committed to your search for truth.
Unfortunately, most of these collaborative efforts are actually competitions in which the
participants are playing by an unstated set of rules. CBAs ultimately result in advice on the
allocation of resources, and everyone in DOD competes for resources.
You will have to conduct regular meetings with an external working group. This working group
will consist of people who:

are monitoring your CBA and reporting to their organizations if it appears the CBA
supports or refutes any of their organizations' equities (enforcers);

have been directed to slow down your CBA so that it doesn't interfere with initiatives
being promoted by their organizations (saboteurs);

are waging personal campaigns to cure certain areas they believe to be defective in DOD,
and view your CBA as a means to those ends (zealots);

give long philosophical speeches that may or may not make any sense, but prevent your
meetings from accomplishing anything (bloviators);

are attending your meetings because their organization has no idea what else to do with
them (potted plants);

are convinced that your assessment is a cover story for a secret plot to destroy their
interests (conspiracy theorists);

are attending your meetings as a means to generate work for their organizations (war
profiteers);

have been directed to ensure that your CBA doesn't result in additional work for their
organizations (evaders); and

are forthright and competent individuals who want to get you relevant information and
useful advice that will help you succeed (professionals).

In a perfect world, your working group would consist only of professionals. But it will not.
Furthermore, you can't pick your working group; they are ambassadors chosen by their owning
organizations, and they can only be replaced under extraordinary circumstances. So, what can
you do?

First, you and your core team should stay several weeks ahead of the working group. A
CBA cannot be conducted as a journey of discovery in which you and a mixed crew
(which may contain mutineers) simultaneously discover what is around the next bend of
the river. You must plot the course.

Second, you should provide read-aheads to your working group a respectable time prior
to any meeting, set the meeting agenda and duration, and adhere to both with no
exceptions. Furthermore, you should minimize the number of meetings, and work with
individual organizations individually, on individual issues, as much as possible.

Another issue with working groups is that the comments of a working group participant, in
general, do not represent his organization's formal position. So, for certain critical questions
(large data calls, scenario selections, CONOPS solicitations) you will have to staff the
questions formally. JCIDS CBAs are littered with cases in which the study lead thought he had
concurrence, but was overturned later with a formal response. What you have to do is pick the
critical requests for which you want ironclad responses, and ask for a formal organization
position. If you are on the Joint Staff or are working within the FCB structure, the Joint Staff
Action Package (JSAP) is the best mechanism for doing this.
You have to provide a mechanism for integration among the FCBs (or affected functional
organizations if you are working on a Service or COCOM CBA). As we mentioned in Section
3.1, the original notion of partitioning the CBA among FCBs has not worked well. Nonetheless,
you still need to know what is going on in those functional areas; after all, the FCBs are
responsible for knowing the DOD portfolio in their functional area. Furthermore, the FCBs
need a way to examine your analyses of their function as it relates to the CBA.
Probably the best way out of this dilemma is to have people on your core team who have
relationships with the relevant FCBs and can accurately represent and analyze their capabilities.
This doesn't create more work for the other FCBs, and provides the visibility they need.
Finally, you will also have to consider what sort of governance your CBA will have. If you are
in an FCB working on a JROC-directed CBA, you will likely use FCB-JCB-JROC oversight
procedures for staffing. There may, however, be different oversight groups that you must
include, or you may be doing a CBA within a Service or Combatant Command. In that case,
you must decide what your governance structure will look like.
Oversight is a tradeoff between getting and maintaining senior leader support and being overmanaged. Consequently, you do not want to set up a huge structure. Most studies work quite
well with a single working group and a general officer steering group; some add an additional
integration group at the O-6 level. You should try to avoid having more than two levels of
oversight above your external working group.

3.4. Information Exchange


A lot of people will want to subscribe to your CBA's progress. By far the most efficient way to
do this is to maintain a classified website where you post all your briefings and key documents.
All the CBAs currently in work maintain websites.


You may think it's a good idea to limit access to your site via passwords, but this really doesn't
work; it just means that someone who wants your products has to be a bit cleverer about getting
them. Site passwords do not prevent someone else from using the password, nor do they prevent
redistribution. So don't bother. If you don't want something distributed, don't put it on the site.

3.5. Scheduling and Major Decision Points


It's time to start your CBA, so you will have to present a satisfactory plan to your chain of
command to convince them that you're ready to get started. Figure 7 below shows both
precedent relationships (tasks which must be done prior to starting another task) and the degree
of overlap you can tolerate in the early phases of a CBA.
[Figure 7 is a flow diagram showing the following tasks and their overlap: Why This CBA?; JIC Preparation (if a JIC was commissioned); Doctrine Review; Literature Review; Strategic Guidance Review; Expertise Search; Final Team Selection; Working Group Formation; Study Plan Preparation and Approval; Quick Look; and the FAA, FNA, and FSA, each followed by a Present Results step.]

Figure 7. Task relationships and overlap for a CBA.

The precedent relationships in Figure 7 follow what we have discussed so far. Answering the
question of why you are doing your particular CBA is the starting point, and we do not
recommend that you proceed unless you have that answer. The strategic, doctrine, and literature
reviews can all start in parallel with the expertise search, but you should finish the strategic
review prior to filling out your core team. In keeping with the edict to stay ahead of your
working group, you should have the study plan (including organization and working
relationships) drafted prior to the first meeting of the working group.
Note that you will have to get some help early on if you want to try to do the various review
tasks and search tasks in parallel. Hence, the task is to complete selection of the core team prior
to building a study plan and forming a working group.
You may have a CBA that depends on a JIC, but you may not be able to influence the progress
of the JIC. Yet, experience has shown that you don't need to wait until the JIC is complete to
do a substantial amount of work. Figure 7 shows how much can be done prior to final JIC
approval. The other shaded process (the quick look) is a task limited to your core team, and will
be discussed further in Chapter 5. For scheduling purposes, you should try to complete the
quick look prior to the completion of the FAA.
Because of the wide range of CBA types and topics, it is difficult for us to recommend typical
times that it should take to complete the tasks shown above. Also, some tasks may have already
been completed. For example, the JIC writing team may have already summarized all the
relevant doctrinal literature (and this would be another reason to participate in JIC
development).

JCIDS specifies three formal decision points: the FAA, the FNA, and the FSA. So, you will
staff results through JCIDS channels at least three times. In addition, if the JROC directed your
CBA, then your study plan must be approved as well.
The other major decision points are concerned with requests that you want to staff formally,
such as data calls, threat assessments, and so on. You will normally conduct these within an
FAA, FNA, or FSA, so we will cover those in Chapters 6 - 8.
When CBAs were first established, the prevailing opinion was that they should take 90 days (30
days for each of the major analyses). Unfortunately, none of the JROC-directed CBAs done to
date has even come close to finishing in 270 days.
Here are some reasons for this.

JIC delays. The JIC is usually commissioned at the same time as the CBA, so the CBA
can't really start until the JIC has at least been drafted. In reality, several CBAs have
completed the FAA and gone into the FNA before the JIC was finally approved (NOTE:
if this happens, it is possible to execute the CBA and JIC simultaneously, but such an
arrangement would have to be negotiated through the JROC, and your CBA will have to
absorb the overhead of frequent communication with the JIC writers).

False starts. Several CBAs were well down the road when they discovered that they
either had an unmanageable scope, the wrong team, or the wrong methodology.
Backtracking and fixing these problems caused considerable delay.

Staffing results through JCIDS. Suppose that your CBA was JROC-directed. Then the
CBA study plan, FAA, FNA, and FSA must be approved by the JROC. This means that
each must be presented to the lead FCB, then the Joint Capabilities Board (JCB), and
then to the JROC. If each of these takes a week to schedule and execute (including the
inevitable prebriefs and resulting modifications), then you will spend at least 4 x 3 x 7 =
84 calendar days just getting results presented and approved, and this does not include
staffing. And, since each step determines the next step, it's risky to start the next step
without approval of the previous step.

Command redirection. CBAs tend to outlast the four-stars that commissioned them, and
their replacements may direct (and have directed) substantial changes to the scope and
emphasis of the assessment.

Clearance problems. As mentioned in Section 3.2, several CBAs have had significant
delays because of difficulties getting clearances for study team members.

Regrettably, presenting results for approval will take far more time than you ever thought. As a
result, you have to schedule so that your team is still producing while you are bringing forward
results for approval. This is a challenge, because the JCIDS analysis process is entirely
sequential. We recommend a quick look (Chapter 5) as a way to mitigate some of these delays.
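
To see how quickly approval overhead alone consumes a schedule, it can help to script the arithmetic above. The minimal Python sketch below reproduces the 4 x 3 x 7 calculation; the seven-day briefing cycle and the 21-day formal staffing figure are illustrative assumptions, not JCIDS requirements.

    # Rough schedule arithmetic for a JROC-directed CBA (illustrative assumptions only).
    products = ["study plan", "FAA", "FNA", "FSA"]    # items requiring approval
    oversight = ["lead FCB", "JCB", "JROC"]           # bodies each product is briefed to
    days_per_briefing_cycle = 7                       # schedule, prebrief, modify, present
    assumed_staffing_days = 21                        # assumed formal staffing time per product

    approval_days = len(products) * len(oversight) * days_per_briefing_cycle
    staffing_days = len(products) * assumed_staffing_days
    print(f"Presentation and approval floor: {approval_days} calendar days")  # 4 x 3 x 7 = 84
    print(f"With assumed formal staffing:    {approval_days + staffing_days} calendar days")

Even with optimistic assumptions, the floor is measured in months, which is another reason your core team must keep producing while results are being staffed.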
Finally, just because history has shown that these assessments tend to go slowly does not give you
license to execute at a glacial pace. You will have to push the effort along. Otherwise, you will be
in serious danger of delivering answers long after the key decision windows have closed, and
the four-stars that were interested in the topic have already made up their minds without being
informed by anything that you did.


4. The Study Plan


All the work you have done to this point is aimed at determining a way to execute your CBA. The
first document that you will produce that goes to the outside world is the study plan, and it
communicates what you want to do, and how.
JCIDS specifically addresses study plans (also known as terms of reference, or TORs) for JROC-directed CBAs:
When the JROC directs the initiation of a CBA, the CBA study plan will be included as a step
prior to the functional area analysis. The study plan will include specific areas the CBA will
examine. The study plan will scope the CBA, clearly identify the focus of the assessment, identify
which of the four Capability Based Planning challenges (traditional, irregular, disruptive,
catastrophic) it will address, and demonstrate that the assessment will address the tasking
authority's request... This study plan also makes clear what the CBA will not address. The CBA
needs to be thorough yet not subject to mission creep. [JROCM 062-06, 2006]
This direction was due to experiences with the first few CBAs, which, like most pioneering efforts
under a new process, had considerable problems maintaining a clear focus.
Clearly, the JROC's overarching concern is limiting the scope of the assessment to something that
both addresses their intent and can be delivered in a reasonable amount of time. But what, exactly,
defines the scope of a CBA?
The scope of a CBA is specified by the following six elements.

Capabilities desired. A capability, in JCIDS, is the ability to achieve an effect in a military
operation. CBAs such as Joint Forcible Entry Operations and Global Strike Raid Scenario
have differing scopes because the effects that those types of operations are intended to
achieve are different.

Scenarios considered. We cannot say that we actually have a capability unless we test it
against various adversaries and operating conditions. The sample of adversaries and operating
conditions (in other words, the scenarios used) is a component of the scope of an
assessment.

Functions considered. It is difficult to find a military operation that does not employ
virtually all functions of the DOD, from exercising space control to providing physical fitness
facilities. But, not all of the employed functions must (or should) be analyzed in a CBA.

Types of solutions considered. In some cases, the type of solutions allowed by policy,
existing treaties, and so on may narrow the scope (e.g., space-based weapons may be ruled
out at the outset). Also, if you have a solution-oriented CBA such as Seabasing, your
assessment is limited to assessing the alternatives within, and utility of, that concept.

Resource limits. While resource limits have not been imposed on any CBA done to date, it is
entirely possible to scope a CBA by stipulating limits on solutions, such as requiring that the
FSA output must present options that do not require additional manpower or funding. Note
that recent changes to JCIDS require that the FSA consider alternative CONOPS that use
non-materiel solutions [JROCM 062-06, 2006].

Planning horizon. This is the time period that the CBA is considering, for both adversaries
and potential solutions.


Of these six areas, the JIC will directly address desired capabilities, functions, and the planning
horizon. Additionally, the JIC will offer an illustrative scenario. You, however, will have to determine
the rest.
A challenge in specifying these elements in a study plan is that they constitute most of the "what" of
the CBA, which is a major output of the FAA. Yet, this direction says that the study plan must contain
the description of what will and will not be considered. So how can you reconcile this guidance?
The answer is that there is nothing (at present) that says you have to complete the study plan prior to
starting the FAA. As a matter of fact, the current guidance only implies that the study plan must be
approved prior to your presenting FAA results, so you can proceed as illustrated in Figure 7.
Consequently, we will discuss ways to do the scoping in Section 6.
Also, you can define the FAA as the public part of your study, that is, the point at which you form
your working group and solicit input. This allows you to do a lot of research that you need for both
the FAA and the study plan, and permits you to put out a study plan prior to starting the public
portions of the FAA.
The study plan should not be an enormous document. Shorter is better, and you should aim for a plan
that is 15 pages or less. There is no set format, but the following outline is a composite of CBA study
plans done to date.

References. List DOD guidance that directly affects your CBA, plus applicable joint concept
and scenario documents.

Purpose. This contains a single paragraph that states the purpose and contents of the study
plan.

Background and Guidance. Summarize the answer to the "why this CBA?" question and
quote DOD guidance relevant to your CBA.

Objectives. Describe the type of CBA you have and the desired products.

Scope. Discuss the six elements of scoping as they apply to your CBA, and refer back to the
relevant DOD guidance to support your scope. This is the most important part of the study
plan, so you will have to devote some space to proving that your scope is correct.

Methodology. Leave yourself room to adjust in this section. Be specific about how you
intend to do the FAA, but allow for options in the conduct of the FNA and FSA.

Organization and Governance. It is not necessary for you to describe how your core team
will function; this section should instead concentrate on how you will work with external
organizations, to include your web site and coordination procedures. You should also
document the governance structure of your CBA, including all oversight committees and
general officer steering groups.

Projected Schedule. Keep this short, and limit it to major staff actions and milestones (FAA,
FNA, and FSA) that you already know about. Say that an updated schedule will be
maintained on your web site.

Responsibilities. List what you want from external government organizations. For now, you
should be able to specify which organizations should provide representatives to your working
groups. If you are planning on relying on external government organizations for major parts
of your assessment, list them in the study plan and also refer to them in the methodology
section.

Remember that the study plan contains your initial proposals for how you will proceed. It is not an
ironclad contract, because bodies such as the JROC that commission CBAs retain the right to redirect

you. Since it is likely that you will modify the scope of your CBA during its execution, changes are
allowed, and the study plan is really a live document. But, to get approval to start, you have to
demonstrate that you're ready to start. The study plan is the basis for that decision.
Clearly, you can maximize your flexibility by minimizing the number of activities you commit to in
the study plan. Unfortunately, your desire for managerial flexibility is at odds with the leadership's
desire to see evidence that you have an approach that is workable. Consequently, you should present
your initial thoughts on the following in the methodology section of the study plan.

Methodology approaches. You probably have some idea of the analytical tools and
techniques you will use for your assessment. While this is not a primary element of scoping,
the choice of methodology is a direct consequence of the capabilities, scenarios, and
functions you want to evaluate. This is important enough that you should cover at least what
level of analysis you expect to conduct (see Section 7.2).

Measures of effectiveness (MOEs).1 While we show these as an FAA output, you should
offer an initial list in the study plan. If you have a JIC, it will give you some advice on
measures. Otherwise, you can derive some measures from attributes listed in the applicable
JOCs and JFCs.

Technological and policy opportunities. Two central reasons for commissioning a CBA are
first, to examine areas where we need to improve, and second, to examine areas where
improvements are possible due to technological or policy opportunities. If the latter is the
case with your CBA, you should mention that in the study plan, and list the specific
technological or policy opportunities.

The quote that begins this section makes it very clear what the JROC wants from a CBA study plan.
To be accepted, the study plan must communicate that:

you understand what you're supposed to be assessing;

you have the correct scope;

you have an approach that makes sense and is executable;

you are working with the right organizations; and,

you have a plan to finish in an acceptable amount of time.

Format and the order of the sections are not a concern; clarity, brevity, completeness, and believability
are.
Finally, your study plan may not need to address an FAA, FNA, and FSA. If a COCOM commissions
a CBA based on its assigned missions, it may have already accumulated enough information to
constitute an FAA. Also, the JIC may contain sufficient information that no additional FAA work is
necessary. In that case, the study plan can reference that work and concentrate on the FNA and FSA.
Also, there may be no need to do an FSA, because the intent of the assessment may be to develop
information to support a joint experiment, or you may know at the outset that the FSA will be done by
a different organization. The point is that there are several possible ways in (and out) of a CBA, and
the general format we offer supports all of them.

1. In this paper, we define an MOE as a measure of the degree to which we can meet an operational objective, as
distinguished from a Measure of Performance (MOP), which is a measure of how well a system or force
element performs its functions (e.g., survivability or lethality).


5. The Quick Look


The notion of doing a quick look (a quick, abbreviated version of the entire assessment, done at the
start of the process) has been used in several CBAs. In all cases, the quick look proved enormously
useful for scoping the assessment, helping the study team discover the landscape of the problem, and
shaping subsequent work.
JCIDS doesn't require a quick look. The value of doing one is so great, however, that we highly
recommend it.

5.1. The Need for a Pilot Effort


In his seminal book on software engineering, Frederick Brooks comments on pilot efforts:
The management question, therefore, is not whether to build a pilot system and throw it
away. You will do that. The only question is whether to plan in advance to build a
throwaway, or to promise to deliver the throwaway to customers. Seen this way, the
answer is much clearer...
Hence plan to throw one away; you will, anyhow [Brooks, 1996, p. 116]
All pilot efforts are designed to discover and repair shortcomings prior to committing to a
major operation, and the quick look we recommend is the same thing. Although you know why
you are doing the CBA, have collected relevant analyses and doctrine, and have built your core
team, you haven't exercised your machine yet. The quick look provides you a way to have a
training camp for your CBA.
More importantly, the quick look helps you determine what functions should be examined and
what types of solutions are realizable. Both of these are important parts of scoping the
assessment, but you really need to go further than the FAA to uncover them. For example, you
may believe that your CBA doesn't need to examine deployment and employment of command
and control. You should, however, have some justification for making that decision, and an
end-to-end quick look can inform that decision.
The primary purpose of the quick look is to expose areas of uncertainty and highlight likely
findings and recommendations. You may discover in the quick look that the results hinge on certain
functions that have always been assumed to be unimportant. A quick look can expose that possibility, and give you
advice on where to spend scarce analysis time sharpening estimates while there is still time to
resolve those uncertainties. In addition, the quick look should tell you enough about the
dimensionality of the problem and the scenario space to advise you on possible analytic
approaches.
Finally, doing a quick look puts you ahead of your external working group, and gives you a
means to provide a rough estimate of your final results at any point during the CBA. Quick look
results don't have a warranty, and you should present them as rough estimates based on
expertise and aggregated analysis techniques. Regardless, it will be much easier for you and
your management if you have some idea, however imprecise, of the road ahead.

5.2. Establishing Analytical Bounds


As mentioned above, the quick look can help you bound your assessment. In particular, it
should concentrate on the following.

Bounding the effectiveness of current doctrinal CONOPS. How good are we now?
Suppose we currently attack enemy amphibious ships using certain types of platforms,


weapons, and tactics. How good could we become if we updated the platforms and
weapons? And what updates are fiscally and technologically possible?

Bounding options open to the enemy. What sorts of things can the enemy do to prevent
us from achieving the desired effects? We note that current operations in Iraq show just
how adaptive and innovative an enemy can be; no assessment done prior to Operation
IRAQI FREEDOM predicted how much the use of improvised explosive devices would
disrupt our stabilization operations.

Bounding investments in the capability areas. The CBA will be assessing a set of
capabilities introduced in the JIC. How much has DOD typically invested in these areas?
How much more (or less) is it likely to invest? If the DOD decided that it wanted to
maximize capability in this area, what would the maximal rational investment be?

Bounding alternative CONOPS and operating policies. Are there things we don't do
right now that we might do? For example, we obeyed the antiballistic missile treaty
negotiated with the Soviets for many years after the Soviet Union dissolved, but then
withdrew from it and began fielding national missile defense systems. Are there similar
alternatives available that could substantially change how we achieve certain capabilities?

The notion of bounding is very important for your assessment. Many DOD studies spend far
too much time refining baseline CONOPS, performance estimates, investment trends, and
policy limits. Such studies ultimately produce answers with considerable depth but no breadth,
and investigate very few alternatives. Since the quick look is entirely yours, you are free to
search for plausible situations that may result in radically different views of the military
problem being considered in your CBA. You do not have to have an external working group
filtering what you examine, nor do you have to seek concurrence from anyone. It is just you
and your core team.

5.3. Quick Look Products and Timing


A quick look, to be completely useful, should produce something that looks like an FAA
(Section 6), an FNA (Section 7), and an FSA (Section 8), and should offer initial answers to the
questions posed in Figure 3.
More importantly, the quick look should effectively communicate what you are proposing for
analytical bounds. In particular, it should address the four questions discussed in Section 5.2
above, offer some alternative bounds, and record the consequences of those alternatives on your
CBAs depth, breadth, and potential completion date.
The output of the quick look should be a briefing, because you will use it to reinforce some
decisions. In your CBA, you will work with an external working group as well as your
governance apparatus, but your day-to-day labors will be closely followed by your normal
chain of command. You will be bringing any substantive decisions on the CBA to your chain
first; if they approve, then you'll have to persuade the other groups to accept those decisions as
well.
So the quick look is really aimed at your superiors, and gets them into the decision-making
process on your CBA. They may believe that the joint concepts documents and the JCIDS
documentation contain sufficient guidance to settle any issues, and they will likely be surprised
when you bring in a quick look that shows a large variation in possible approaches and
outcomes.
This briefing has another important function. You should use it as a shell for your final CBA
briefing, and by keeping and documenting successive versions of it, you can maintain a sort of


diary of how the assessment proceeded from wide quick look bounds to progressively more
focused recommendations. Again, this is a product for you and your management, and there is
no reason to staff or distribute it outside of your core team.
Ideally, you would finish the quick look prior to starting the FAA, because then you would
have a preliminary analysis in hand to help with scoping. But schedules may forbid that, so the
latest completion date for the quick look should be just prior to the formal staffing of the FAA.
If you delay the quick look longer, you will have to make decisions on CBA directions without
a bounding analysis of the potential outcomes, which is risky.
So, to further explain the precedence relationships in Figure 7, we recommend that you begin a
quick look with your internal team as soon as possible. The quick look can inform the study
plan, but it is not a prerequisite; you can work on the quick look and the study plan
simultaneously, but you should organize the quick look so that it first addresses the scoping
issues that the study plan must address. Finally, you should have both the quick look done and
the study plan approved prior to formal staffing of the FAA.


6. The Functional Area Analysis (FAA)


CJCSM 3170.01C tells us that
An FAA defines the military problem to be assessed, the concepts to be examined, and the scope
of the assessment... a CBA is motivated both by the existence of military objectives to be
achieved and the publication of a concept or a formal CONOPS for achieving them. The FAA
describes the relevant objectives and CONOPS or concepts, and lists the relevant effects to be
achieved. Since a capability is the ability to achieve an effect, the FAA connects capabilities to
the defense strategy via objectives, concepts, and CONOPS. Furthermore, the capabilities
identified in the FAA also scope the assessment and identify which capabilities will be examined.
[2006, p. A-7]
This seems straightforward. But, you might also ask the following.
1) If I have a JIC that contains tasks, conditions, and standards, and I have to present all of the
important scoping information in the study plan, what do I do in the FAA?
2) Is the FAA nothing more than writing down what I've already researched?
In a perfect world, the answers would be 1) nothing, and 2) yes. But the world is not yet perfect, so
plan on doing work in your FAA. What we will present in this section is a set of activities that will
lead you from the inputs to the outputs in a reasonable and defensible manner.

6.1. Defining the Military Problem and the Concept to Be Examined


Any analysis begins with a problem statement, and the FAA must start with the military
problem to be examined. If your CBA has an associated JIC, then the JIC will contain a
description of the military problem, as well as the central idea of the concept, that is, the idea
that states how we would like to operate in the future. For example, the military problem
described in the Seabasing JIC is one of projecting joint military power in situations where
permanent land basing or temporary access are unavailable, and the central idea of the concept
is that seabasing can provide the necessary access [Seabasing JIC, 2005, pp. 16-18].
Conversely, the Global Strike JIC is concerned with responsive joint operations that strike
enemy high value / payoff targets (HVTs/HPTs), as an integral part of joint force operations
conducted to gain and maintain battlespace access, achieve other desired effects and set
conditions for follow-on decisive operations to achieve strategic and operational objectives
[Global Strike JIC, 2005, p. 2-1]. The central idea of this JIC concept focuses on the initial
phases of a major force-on-force campaign. In particular, it envisions the joint force
commander employing joint capabilities anywhere in the world through and in any domain at
the place and time of his choosing [Global Strike JIC, 2005, p. 3-5], thus defining a global
scope. The Global Strike JIC does not offer solution concepts as the Seabasing JIC does; it is
focused on a mission area.
In both of these JICs, the military problem is stated in a straightforward way and can be quoted
directly. If you do not have a JIC, then you will have to describe the military problem in your
FAA. Furthermore, you will have to quote appropriate strategic guidance to prove that your
problem is worthy of a CBA, and you will have to provide your own central idea. It is not
enough to say, "My Service has always done this," as JCIDS was constructed expressly to
avoid such inexorable requirements.
You cannot assume that there are no conflicts between the JIC and other strategic guidance
such as the CPG, UCP, and SPG. Due to publication timing and a host of other issues, there
may be disagreements among those documents, so you will have to navigate among their

potentially different views. Also, remember that joint concepts are intended to drive progress,
so they may present views at odds with current doctrine. Reconciling these positions is formally
a JIC issue, but be aware that conflicts may not be settled when you begin your CBA. In some
CBAs, the question of what military problem was being studied persisted until the end of the
FNA.

6.2. Scoping I: Scenario Selection


The use of scenarios has been a topic of much debate over the last several years. When the
DOD shifted to a capabilities-based approach to analyzing needs in 2001, many interpreted this
to mean that major analyses had to be agnostic with respect to scenarios. Consequently, many
analysts argued that it was illegal to specify enemies, and all assessments had to deal in generic
capabilities. In fact, one of the first CBAs commissioned under JCIDS (Joint Forcible Entry
Operations) spent a great deal of time trying to produce a scenario-agnostic assessment.
But this attempt did not succeed, and the philosophy of scenario agnosticism has been
discarded. You are now free to specify actual threat scenarios, and the DOD Analytical Agenda
process provides a comprehensive set of scenarios that have already been approved,
coordinated, and populated (to varying degrees) with data on both friendly and enemy
intentions and capabilities.
Scenario selection is the most important scoping step in your CBA, for four reasons.

Scenarios provide the means to assess the capabilities associated with the mission
area. We cannot declare whether or not the DOD has a capability without testing it
against real enemies with real objectives, forces, and geography. Otherwise, anyone
could simply assert the presence or absence of a capability without providing any proof.

Scenarios provide a way to connect the assessment topic to the existing strategic
guidance. A few years ago, many analysts were claiming their products were
capabilities-based because they posited imaginary enemies operating in synthesized
environments (e.g., assisting in the defense of Puceland from an invasion by the evil
Mauvians). While such artificiality provided some degree of scenario agnosticism,
imaginary forces, objectives, and geography had to be specified. Worse, since these
warring factions didn't actually exist, there was no way to connect them to very specific
strategic guidance on achieving aims in the real world.

Scenarios provide a way to test the concept against the breadth of the defense
strategy. The original aim of the capabilities-based approach was to broaden our
strategic perspective by considering a wider range of military situations. By choosing a
good scenario sample, you can assess the concept against a wide range of relevant
situations and comment on its overall applicability. Also, your assessment will be insured
against sudden swings in priorities (e.g., the shift to the Global War on Terror after
September 11, 2001).

Scenarios provide the spectrum of conditions for the FAA. Scenarios yield a range of
enemies, environments, and access challenges, all of which constitute conditions.

While it is crucial to choose the range of scenarios wisely, scenario selection is less difficult
than you might think. For example, the current defense strategy divides all future security
challenges into four categories:

traditional challenges are posed by states employing recognized military capabilities and
forces in well-understood forms of military competition and conflict;


irregular challenges come from those employing unconventional methods to counter the
traditional advantages of stronger opponents;

catastrophic challenges involve the acquisition, possession, and use of weapons of mass
destruction (WMD) or methods producing WMD-like effects; and

disruptive challenges may come from adversaries who develop and use
breakthrough technologies to negate current U.S. advantages in key operational domains
[National Defense Strategy, 2005, p. 2].

Furthermore, all the scenarios in the DOD Analytical Agenda have been mapped to one or more
of these categories. So, if you choose this framework, you should pick at least four scenarios,
one for each future security environment.
This framework is not the only one available, however. The strategy also offers four strategic
objectives:

secure the United States from direct attack;

secure strategic access and retain global freedom of action;

strengthen alliances and partnerships; and

establish favorable security conditions [National Defense Strategy, 2005, pp. 6-7].

One framework may suit your CBA better than another. Regardless, you must resist the urge to
pick one scenario and devote all your time to it, under the assertion that "if we can do this, we
can do anything." This sort of Maginot Line reasoning has been proven untrue so often that the
idea should never come up. But this notion, however flawed, appears to be ineradicable.
Instead, force your CBA to inspect a wide range of situations (including enemy options within
those situations), and reduce them to a set that provides a good sample, one that covers the
breadth of the defense strategy.
The idea of sampling is very important in scenario selection. An operation such as forcible
entry could be conducted in a very wide range of environments, and you will not have time to
analyze all the interesting situations. Instead, you'll have to pick a set of criteria and use those
criteria to select a manageable but comprehensive set. There are quantitative methods available
to help choose a sample, so you should have a member of your team who has these skills.
One useful task for the quick look is to have your core team examine all of the Analytical
Agenda scenarios (there are now over 100 of them, most with multiple variations) and suggest
combinations that are comprehensive and analyzable within the time and resources available.
This exercise will provide many insights in and of itself, and can be done in a day or two.
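
If your team wants a transparent starting point for that exercise, even a simple greedy set-cover pass over the challenge-category mappings can structure the discussion. The sketch below is illustrative only: the scenario names and category tags are hypothetical placeholders, and the real mappings come from the Analytical Agenda documentation.

    # Greedy selection of a small scenario sample that covers all four challenge categories.
    # Scenario names and category tags are hypothetical placeholders.
    candidates = {
        "Scenario A": {"traditional"},
        "Scenario B": {"traditional", "catastrophic"},
        "Scenario C": {"irregular"},
        "Scenario D": {"irregular", "disruptive"},
        "Scenario E": {"catastrophic", "disruptive"},
    }
    required = {"traditional", "irregular", "catastrophic", "disruptive"}

    sample, covered = [], set()
    while covered < required:
        # Pick the candidate that adds the most still-uncovered categories.
        best = max(candidates, key=lambda name: len(candidates[name] - covered))
        if not candidates[best] - covered:
            break  # no remaining candidate adds coverage; the candidate list is insufficient
        sample.append(best)
        covered |= candidates[best]

    print("Selected sample:", sample)
    print("Categories covered:", sorted(covered))

A covering sample is only a starting point; you would still weigh analyzability, data availability, and the strategic objectives framework before settling on the final set.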
Scenario selection will likely be the first area of contention in your CBA. One shortcoming of a
capabilities-based approach is that it is very easy to define a situation that requires a particular
capability that is best addressed by a particular solution. Consequently, people promoting these
solutions will try to drive you towards those situations to the exclusion of all others. Your job is
to resist these sorts of hijacking attempts, and instead ensure that the CBA addresses the range
of military operations described in the strategy.
You need a formal position on the choice of scenarios. If you are doing a JROC-directed CBA,
scenario concurrence will come when you staff the study plan in JCIDS.

6.3. From Scenarios to Capabilities


Now that you have a scenario sample, you have to determine the military objectives of each
scenario and extract the objectives that your CBA topic supports. We attain these objectives by

creating effects, and the abilities to achieve those effects are the capabilities that are the basis
of your assessment.
This leads to a straightforward examination of each scenario. For example, you may be
assessing integrated air and missile defense, and you are contemplating a typical regional
conflict. The overarching objective is to win the war, and a subordinate objective would be to
win the ground battle. To win the ground battle, we may choose to deploy ground forces, and
those forces have to be protected from enemy air and missile attack at their ports of
debarkation. Providing that protection is the capability that you are assessing; the scenario
provides the context.
What we have outlined above has long been practiced in the DOD under various names. The
best-known label is strategy to task [for example, see Pirnie and Gardiner, 1996]. We have
already advised you to investigate the higher levels of strategic documents for advice related to
your CBA topic; now we are advising you to connect the capabilities you are assessing to your
scenario sample.
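
One lightweight way to keep that linkage explicit is to record it as a simple nested structure, so that every capability in the assessment can be traced back through effects and objectives to a scenario. The entries below are a notional sketch paraphrasing the air and missile defense example; none of the names are authoritative.

    # Notional strategy-to-task linkage for one scenario (illustrative entries only).
    linkage = {
        "scenario": "Regional conflict (notional)",
        "objectives": [
            {
                "objective": "Win the ground battle",
                "effects": [
                    {
                        "effect": "Deploying ground forces protected from air and missile "
                                  "attack at ports of debarkation",
                        "capability": "Integrated air and missile defense of debarkation ports",
                        "conops_assumption": "U.S. ground forces deploy through fixed ports",
                    }
                ],
            }
        ],
    }

    # A capability stays in scope only if some CONOPS under consideration still requires it.
    for obj in linkage["objectives"]:
        for eff in obj["effects"]:
            print(f'{linkage["scenario"]}: {obj["objective"]} -> {eff["capability"]}')

Carrying the CONOPS assumption alongside each capability makes it obvious when an alternative CONOPS, such as relying on allied ground forces, removes the need for the capability altogether.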
This may seem like a deceptively simple step, but it may prove challenging. For example, an
alternative CONOPS for our example above might involve using allied ground forces to win the
ground fight, and not deploying any of our ground forces at all. Then, protecting deploying
ground elements becomes irrelevant, and providing the capability is no longer necessary. So, it
is important to recognize that capabilities are a function of both scenario and CONOPS.
Of course, we may choose to protect allied ground forces from air and missile attack, or we
may have to protect our deploying air and maritime forces. The point is, by tying the
capabilities to scenario objectives and a set of CONOPS, you eliminate the problem of trying to
assess in terms of capabilities de nusquam2.
Early writing on JCIDS often referred to critical capabilities, implying that there are other
capabilities that are not critical. To save yourself a semantic debate, merely state that in your
CBA, the critical capabilities are those effects that you have opted to assess in your scenarios.

6.4. Collecting and Documenting Doctrinal Approaches


Now that you have chosen your scenarios and associated capabilities, its time to employ
collaboration. You have to determine how we provide these capabilities now and how we
currently plan to provide them in the future. And, the best way to do this is to solicit
approaches from your working group.
You should give your group the set of scenarios and the capabilities youve derived from those
scenarios, and have them tell you how they would achieve those effects. This requires you to
define:

the scenarios, the objectives, and the associated capabilities; and

a standard format for reporting the proposals.

Essentially, you are giving your working group a mission order. You want them to tell you how
they would do the mission, particularly:

what force elements they would use;

how long it would take;

what the sequencing of tasks and dependencies among tasks would be; and

what sort of basing, transport, and allied cooperation would be required.

2. Latin, meaning "from nowhere."

If you have a concept-oriented CBA such as seabasing, you also want your group to give you
proposals on how the solution concept would be employed to provide the capabilities.
If you use the Analytical Agenda scenarios (and we highly recommend you do that), then you
can just refer your group to those documents. If you don't use these scenarios, you'll have to
provide a great deal of information and justification, and it will probably prove to be far more
trouble than it is worth.
Even though you are analyzing needs 10-20 years in the future, you should ask for current
approaches to providing these capabilities. The reason for this is that you can get the Combatant
Commands involved. They likely have an OPLAN or CONPLAN available for scenarios
similar to yours, and they have thought through how to achieve the effects with forces that
actually exist.
Combatant Command staffs, however, do not like to comment on future capabilities.
Consequently, you will have to ask the Services for how they would operate in a future time
period if they execute their program. This will result in several different proposals; if you have
the time, you should conduct a joint war game to try to come up with a set of joint proposals.
You will have to ask for this information in a standard form that allows your working group to
document their proposals in an efficient way. You can use the JCAs and UJTLs (Universal
Joint Task Lists) to describe tasks and component responsibilities, and use a format similar to a
Gantt chart to show timing and task precedent relationships.
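
In practice, the standard format can be as simple as one record per task, tagged with a UJTL or JCA identifier, the performing force elements, a duration, and predecessor tasks; that is enough to regenerate a Gantt-style view and compare proposals mechanically. The field names and sample rows below are a notional sketch under that assumption, not a prescribed format.

    # Notional CONOPS submission format: one record per task with Gantt-style fields.
    conops_tasks = [
        {"id": "T1", "ujtl": "OP 1.1", "task": "Deploy enabling forces",
         "force_elements": ["Lead ground echelon"], "duration_days": 10, "predecessors": []},
        {"id": "T2", "ujtl": "OP 2.2", "task": "Characterize targets",
         "force_elements": ["Joint ISR assets"], "duration_days": 6, "predecessors": []},
        {"id": "T3", "ujtl": "OP 3.2", "task": "Conduct the operation",
         "force_elements": ["Strike package"], "duration_days": 2, "predecessors": ["T1", "T2"]},
    ]

    # Earliest start and finish from the precedence relationships (simple forward pass).
    # Assumes each task appears after all of its predecessors in the list.
    finish = {}
    for t in conops_tasks:
        start = max((finish[p] for p in t["predecessors"]), default=0)
        finish[t["id"]] = start + t["duration_days"]
        print(f'{t["id"]} {t["task"]}: day {start} to day {finish[t["id"]]}')

A structure like this also makes it easy to merge or contrast the different Service proposals before any joint war game.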
Now, you may have sufficient doctrinal expertise on your core team that you feel that you can
describe how we do it (or will do it) with your own resources. This is fine, but you still need
formal concurrence that what you have represents our current doctrinal thinking. The formal
concurrence is crucial. Otherwise, your CBA won't even have an agreed-upon starting point.
Note that you may receive a CONOPS that specifies a substantially different set of capabilities
than what you had in mind. In the air and missile defense example above, for example, one
proposal may call for using nothing but undersea assets, which again would make defending the
land and air domains irrelevant. So what would you do with such a proposal? Well, you would
keep it if there is evidence that it could be done, and assess it in the FNA and FSA. After all,
eliminating the need to provide a capability is just as much a solution as providing the
capability.
The art of collecting current approaches is that you must ask for enough detail to specify the
forces and timing associated with providing the capabilities, but not so much that the workload
and the output obscures the really important issues. Several CBAs have gotten bogged down in
task hierarchies and activity models to the point that they lost sight of the objectives of the
exercise. Generating reams of task tables does not help answer the larger questions.

6.5. Scoping II: Task Structure and Functions


The next step is to take the doctrinal CONOPS you have collected and synthesize them into an
overarching task structure for your CBA. This list should not be overly detailed; the Global
Strike Raid Scenario CBA used a task structure with 10 major tasks (see, for example, Figure
10 in Section 7.1), and the Seabasing CBA had a task structure with 20 tasks.
You want to keep the task discussion at a fairly high level, because you will use these inputs to
help decide what functions and tasks you will assess in your CBA. For example, you may be
assessing undersea superiority and have collected doctrinal approaches that employ


psychological operations. You would probably opt to assume that those operations would
execute as planned, and not treat that particular function in your assessment.
Determining what functions and tasks you will analyze is an important part of scoping your
CBA. In general, you will not address a function or task when:

the function or task does not apply to your concept and your scenarios;

the function or task is being actively studied in another, concurrent CBA; or

there is ample evidence that the function or task will succeed in your scenarios.

Several CBAs have relied on group approaches to determine the sets of critical tasks and
functions, ranging from simple voting to use of multiattribute decision theory. These
approaches have merit; among other things, they allow wide participation and can be executed
very quickly. But be warned that such techniques may prevent you from examining functions
and tasks that should be addressed. Prior to Operation Iraqi Freedom, no group predicted that
providing road security after cessation of major combat operations would be a critical task.
Unfortunately, it has proven to be just that.
The issues associated with group methods and the tendency for such groups to merely assert
existing approaches are exactly the reason you need a bounding analysis in the quick look. In
particular, you have to examine the range of potential enemy responses, and use that to help
decide which functions and tasks to assess. JCIDS also provides a formal means to get such
advice, as CJCSM 3170.01C requires the DIA to produce an Initial Threat Warning
Assessment if you ask for one [2006, p. A-10].
Finally, remember that existing structures such as the JCAs and the UJTLs were built to reflect
the DOD organization as it currently exists. Some CBAs have taken an approach similar to the
mythological Greek innkeeper Procrustes, who ensured his guests fit his beds either by
stretching them on the rack or chopping off their feet. If you have a new concept or a new
CONOPS, the existing task frameworks simply may not fit it very well, and you should use
some other depiction. Do not torture your analyses to fit the framework; otherwise you may be
killed, as Procrustes eventually was by Theseus.

6.6. Using Strategic Guidance to Shape Standards


At this point, you have chosen scenarios, which give you the spectrum of conditions for the
FAA, and you have derived capabilities that are both fundamental to achieving the military objectives
and relevant to your topic. You have collected doctrinal approaches to providing those capabilities,
and derived an overarching task structure. You have also decided which functions to assess, so
you have completed the majority of the scoping tasks. This leaves the question of standards.
The joint concepts development process defines a standard as quantitative or qualitative
measures for specifying the levels of performance of a task [CJCSI 3010.02B, p. GL-4]. While
this is an accurate definition, it does not communicate what you should be producing in an
FAA.
In the simplest terms, the FAA standards describe how you will grade the DOD's abilities in
your assessment. Recall that the FAA is the "what" part of the assessment, and that includes
defining a framework for measuring how good or bad we are. If you come from an operational
environment, you have used training standards; if you are an analyst, you have used measures
of effectiveness; and if you are in acquisition, you have used key performance parameters,
objectives, and thresholds. All of these are used to define what we consider acceptable.


You will not find (much less derive) a simple, defensible set of pass-fail criteria for all the scenarios,
objectives, functions, and tasks that you are assessing. Perhaps the current
doctrinal standard for establishing a certain level of communications connectivity in a deployed
location is 72 hours. Why 72? Do we lose the battle if we are an hour late? Also, attempting to
write down standards for all possible tasks and functions in a microscopic fashion is
complicated, time-consuming, and may very well not add up to anything that will help the FNA
and FSA.
So what should you do?
Recall that JCIDS discusses things called attributes, which are a quantitative or qualitative
characteristic of an element or its actions [CJCSM 3170.01C, p. GL-5]. If you have a JIC, the
JIC will list a set of attributes; if you don't, you can refer to attributes listed in the relevant
JOCs and JFCs.
For example, the Seabasing JIC lists the following attributes:

infrastructure size required for the seabase;

operating capacity of the seabase;

deployment and employment rates;

degree of interoperability;

survivability; and

accessibility in varying environments [Seabasing JIC, pp. 49-51].

These attributes capture the concept's intent for judging the utility of alternative seabasing
concepts, and provide a starting point for you to derive your evaluation criteria.
But, notice that the seabasing list is not comprehensive. For example, there is no attribute that
says "contribution to the warfight." Now, the implication is that a small, large-capacity,
interoperable, survivable, accessible seabase that deploys quickly and employs and sustains
forces at high rates cannot help but improve the warfight. Nonetheless, there may be situations
where having a seabase does not significantly affect the outcome.
This is why you have to augment what comes in the typical JIC. You need to connect the
attributes to the scenarios you've chosen, and come up with appropriate metrics for the
attributes. Some metrics, such as those associated with survivability, are straightforward.
Others, like measuring interoperability, are much more difficult.
Also, the other strategic documents mentioned in Section 2.3 are likely to contain guidance that
will affect your choice of criteria. The SPG in particular tends to contain very specific guidance
on mission areas where we should either decrease or accept risk, and you must try to respect
this guidance when you develop your standards.
Note also that none of these attributes have obvious pass-fail criteria associated with them, and
instead are probably better represented by a continuum of values. Figure 8 shows a notional
value function associated with a survivability attribute; in this case, the metric is expected
personnel lost in a particular scenario, and the payoff values range from 0 to 100.
The representation of payoffs versus metrics is much more useful, because it allows you to
represent how we might value a continuum of outcomes, rather than simply stating the
standard is 72 hours. Note that the function in Figure 8 does not prevent you from asserting a
threshold value; for example, you might establish that alternatives that expect to lose more than
1000 personnel are simply unacceptable and will not be considered.


Also, expressing your evaluation criteria in a common scale (here, we are using an abstract
value scale) will allow you to investigate tradeoffs later on in the CBA. In the seabasing
example, the desire for high capacity would generally lead to a larger seabase, which is at odds
with the desire to minimize infrastructure.
[Figure 8 is a line plot: the vertical axis is Value (0 to 100) and the horizontal axis is Expected Personnel Losses (tick marks at 0, 10, 100, 1000, and 10000, a logarithmic scale), with value decreasing as expected losses increase.]

Figure 8. Notional value function for expected personnel losses.

The methodology associated with the development and use of functions such as the one in
Figure 8 is generally known as multiobjective decision analysis, and there are many good books
available on the subject (see, for example, Kirkwood [1997]). Even if your CBA will eventually
use large-scale combat modeling and simulation to estimate warfighting outcomes, it is worth
spending the time to link attributes, metrics, and value functions. The process will help you
identify the most important measures, and will also demonstrate how some of the standards
may conflict (e.g., minimizing infrastructure while maximizing throughput).
If you cannot avoid the pressure to present single-number criteria, consider using the following
set of thresholds:

the minimum level of the measure required for mission success (go-no go threshold);

the minimum level at which the measure is no longer a critical or pacing part of the
CONOPS (nominal performance threshold);

the level of the measure above which there is no real increase in mission effectiveness
(gold-plating threshold).

If you develop these three numbers, you will probably find that connecting the dots yields a
curve similar to the one in Figure 8. There is a range below which you cannot function at all, a
range that gives benefits as the measures improve, and a range above which increased
investment simply isn't worth it.
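
A minimal sketch of turning those three thresholds into a piecewise-linear value function, using expected personnel losses on a logarithmic scale in the spirit of Figure 8, is shown below; all of the threshold numbers and the value assigned at the nominal threshold are purely notional assumptions.

    import math

    # Notional thresholds for expected personnel losses (lower losses are better).
    GO_NO_GO   = 1000.0   # worse than this: value 0 (mission failure)
    NOMINAL    = 100.0    # at or below this, losses no longer pace the CONOPS
    GOLD_PLATE = 10.0     # improving below this adds no further mission value

    def value(losses: float) -> float:
        """Map expected losses to a 0-100 value score, piecewise linear in log10(losses)."""
        x = math.log10(max(losses, 1.0))
        points = [(math.log10(GOLD_PLATE), 100.0),   # gold-plating threshold
                  (math.log10(NOMINAL),     60.0),   # nominal performance (assumed value)
                  (math.log10(GO_NO_GO),     0.0)]   # go-no go threshold
        if x <= points[0][0]:
            return 100.0
        if x >= points[2][0]:
            return 0.0
        (x0, v0), (x1, v1) = (points[0], points[1]) if x <= points[1][0] else (points[1], points[2])
        return v0 + (v1 - v0) * (x - x0) / (x1 - x0)

    for losses in (5, 50, 500, 5000):
        print(losses, "losses ->", round(value(losses), 1))

Scoring every attribute on the same 0-100 scale in this fashion is what later lets you weight and trade attributes against one another, for example seabase capacity against infrastructure size.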

6.7. The Overall FAA Process


Figure 9 shows the major parts of an FAA and our advice on how to order them. All of the
steps are informed by the JIC, other applicable joint concepts, the strategic guidance, the
literature search, and the quick look (if you have done one).
Defining the military problem begins the FAA. Scenario selection provides the linkage to the
defense strategy, and the military objectives of those scenarios provide advice on desirable


effects. The scenarios and effects result in capabilities, which represent the condition output of
the FAA.
Once the conditions are established, collecting doctrinal approaches allows you to derive an
overarching task structure. This structure, along with your literature search, will help you
decide which functions to analyze explicitly in the FAA, and represents the task output of the
FAA.
Finally, comparing the scenarios, objectives, and task structure to attributes available in the
joint concepts allows you to choose measures. Once you have developed a set of measures, you
can again go back to the strategic guidance and existing doctrine to develop value functions
(and also minimum performance criteria) for the measures.
[Figure 9 is a flow diagram of the major FAA tasks: Research Origins of CBA Tasking; Define Military Problem; Choose Strategic Framework; Examine Candidate Scenarios; Scenario Sample Selection and Coordination; List Military Objectives and Capabilities; Specify Conditions; Collect Doctrinal Approaches; Develop Overarching Task Structure; Choose Functions to Analyze; Derive Tasks; Choose Relevant Attributes; Develop Measures; and Develop Standards.]

Figure 9. Major FAA tasks and flow.

7. The Functional Needs Analysis (FNA)


Returning to the formal guidance, CJCSM 3170.01C says that
The FNA assesses the ability of the current and programmed force to meet the relevant military
objectives of the scenarios chosen in the FAA using doctrinal approaches. Using the standards
and evaluation criteria described in the FAA, the FNA assesses whether or not an inability to
achieve a desired effect (a capability gap) exists [2006, p. A-3].
The manual also adds several more suggestions for an FNA, saying that it must:
...describe the gaps in terms of the scenarios assessed and the effects on achieving the relevant
military objectives... it is essential to link the gaps to their operational context.
...assess the impact of the capability gap in terms of the risk to mission (the ability to achieve the
objectives of the scenario), the risk to force (the potential losses due to the capability gap), and
other important considerations, such as effects on allies and noncombatants.
...characterize whether the gaps are due to proficiency (inability to achieve the relevant effect in
particular conditions), sufficiency (ability to achieve the effect, but inability to bring the needed
force to bear due to force shortages or other commitments), or policy limitations (inability to use
the force as needed due to operational constraints).
...list a set of needs that the DOD should address, or conclude that no pressing needs exist...
...offer a prioritization of the gaps that is directly linked to priorities in the strategic guidance... it
must also completely document the significant driving factors behind the recommended priorities
to give senior leaders the information they need if they choose to make adjustments [CJCSM
3170.01C, 2006, pp. A-9 - A-10]
The manual does not describe these areas further, and does not present any sort of process that
describes how to proceed from FAA inputs to FNA outputs. Our advice on such a process occupies
the rest of this section.

7.1. Operational Depiction


The FAA will result in a set of scenarios, a set of doctrinal CONOPS, and a set of functions and
tasks that you have decided are potentially relevant to the assessment. The FNA uses these
FAA outputs to uncover needs. Since needs generate bills that the DOD must pay, the standard
of proof is high, and you have to demonstrate that we cannot meet some set of military
objectives in some scenario of interest. Consequently, you have to convert your scenarios,
CONOPS, tasks, and functions (which, at this point, are likely just a collection of lists) into a
form that allows you to depict relationships among scenarios, objectives, tasks, and force
elements.
Figure 10 shows an example based on the Global Strike Raid Scenario CBA. This figure is the
generic task structure for a rendition (capture and return) operation against an enemy camp. The
graphic is useful because it captures the high-level tasks, their relationships, and timing. For
example, we see that we can accomplish a fair amount of force movement prior to
understanding the targets, but the actual operation cannot take place until the targets are
characterized and all the battle management and command and control elements are in place.
Furthermore, it is easy to use this structure to write down the possible combinations of force
elements that might perform these tasks, and in what regions and types of terrain.
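
As a purely illustrative sketch of how a task structure like Figure 10 lets you enumerate candidate
force-element combinations, the Python fragment below pairs a few tasks with hypothetical
candidate force elements. The task names are abbreviated from the figure and the candidates are
placeholders, not findings of the Global Strike Raid Scenario CBA.

    # Illustrative sketch: enumerating candidate force-element combinations for a
    # few rendition tasks. Task names and force elements are hypothetical.
    from itertools import product

    candidates = {
        "transit maritime assets to AOR": ["carrier strike group", "amphibious ready group"],
        "attack camp, search, render targets": ["SOF assault force", "airborne infantry"],
        "exfiltrate force and targets": ["rotary-wing lift", "tiltrotor lift"],
    }

    # Every combination of one candidate per task is a possible force package.
    for combo in product(*candidates.values()):
        assignment = dict(zip(candidates.keys(), combo))
        print(assignment)

Even a listing this crude is useful in a working group, because it makes explicit which force
packages the assessment will (and will not) examine.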


You should be careful when formulating these depictions. One of the problems with using the
DODAF is that it was intended for systems engineering applications, where it is critical that all
connections be documented and made to function (or else the machine won't work). You
cannot assess at that level of detail in a CBA, which examines an entire mission area or a broad
concept. You must settle for a more aggregate, higher-level view of the topic.
It may be difficult for you to compose a small set of graphics that depict the operations you are
analyzing. For example, if you were doing a CBA on undersea superiority, you may decide to
analyze an entire regional warfight to determine to what extent our undersea capabilities
determine the outcome of a particular scenario. But, you could still depict the conflict at the
theater level, and show how undersea superiority affected other aspects of the fight.

Figure 10. Task flow for a rendition operation from the Global Strike Raid Scenario CBA (tasks shown: alert; establish support base; move to support base; move to launch base; transit maritime assets to AOR; understand targets; choose course of action; secure tactical area of operations; attack camp, search, and render targets; pursue and render targets; exfiltrate force and targets; return to support base; establish battle management and C2; provide mission support; assess effects).

In addition to setting you up for your analysis, generating an operational depiction can also help
settle misunderstandings over terminology. Figure 11 shows the overarching engagement
sequence derived for the Integrated Air and Missile Defense (IAMD) CBA. This CBA took a
variety of kill chain depictions and derived a sequence to use as a common reference for
analysis. This allowed that CBA to depict existing doctrine using a single structure, which
allowed for comparisons of proposals and solved a considerable communications problem.
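
One way to keep such a common reference straight during the analysis is a simple crosswalk
table from each Service's kill-chain terms to the common sequence. The sketch below is
illustrative only; the mappings are simplified placeholders and do not reproduce the IAMD CBA's
actual crosswalk.

    # Illustrative sketch: mapping Service-specific kill-chain terms onto a common
    # engagement sequence so results can be compared on one structure.
    common_sequence = ["plan", "surveil", "detect", "track/ID", "TEWA & alert", "engage", "assess"]

    # Hypothetical, simplified crosswalks (placeholders only).
    crosswalk = {
        "Air Force": {"find": "detect", "fix": "track/ID", "track": "track/ID",
                      "target": "TEWA & alert", "engage": "engage", "assess": "assess"},
        "Navy":      {"detect": "detect", "track": "track/ID", "decide": "TEWA & alert",
                      "task": "TEWA & alert", "engage": "engage", "attack": "engage"},
    }

    for service, mapping in crosswalk.items():
        covered = sorted(set(mapping.values()), key=common_sequence.index)
        print(service, "covers:", covered)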


Figure 11. Derivation of the common Joint Engagement Sequence for the IAMD CBA. The figure maps Service and joint kill-chain depictions (Army, Air Force, Navy, USMC, JCMD, CO/TP, JFCOM, a notional chain, and others) onto a single joint engagement sequence (plan; surveil; detect; track/ID; TEWA and alert; engage; assess). TEWA means target evaluation and weapons assignment.

Notice that the operational outcomes of concern are obvious in both Figures 10 and 11. In the
case of Figure 10, the question is whether we captured the targets of interest, and what
consequences (losses, collateral damage, and so on) resulted from the raid. In Figure 11, the
outcome of interest is whether or not we successfully defended against the missile attack. In
both cases, the operational depiction allows you to communicate with clarity and economy how
you are portraying the mission.
You should also derive a set of operational depictions for enemy forces. After all, enemies have
objectives and alternatives open to them, and your assessment has to account for those as well.
Consider the diagram in Figure 12, which is based on the Global Strike Raid Scenario CBA.
This figure shows how enemy leadership targets relocate in a conflict on receiving warning.
While they spend most of their time in unhardened government facilities, on warning they
transit to a vehicle, then to either an informal hide site or a formal hardened site. If they get
warning while at one of these sites, they move again. Clearly, if your objective is to interdict
one of these leaders, then you must characterize how they move.
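
Characterizing that movement can be as simple as writing the transition diagram down as a
state-transition table and sampling from it. The sketch below is illustrative; the states paraphrase
Figure 12, and the transition probabilities are hypothetical assumptions, not values from the
Global Strike Raid Scenario CBA.

    # Illustrative sketch: the Figure 12 transition diagram as a state-transition
    # table, sampled over a series of warnings. Probabilities are hypothetical.
    import random

    transitions = {
        "unhardened government facility": [("leadership vehicle", 1.0)],
        "leadership vehicle": [("informal hide site", 0.6), ("formal hardened site", 0.4)],
        "informal hide site": [("leadership vehicle", 1.0)],   # moves again on warning
        "formal hardened site": [("leadership vehicle", 1.0)],
    }

    def next_state(state):
        options, weights = zip(*transitions[state])
        return random.choices(options, weights=weights)[0]

    state = "unhardened government facility"
    for warning in range(3):
        state = next_state(state)
        print(f"after warning {warning + 1}: {state}")

Running many such samples (or solving the chain directly) gives you the fraction of time the
target spends in each class of facility, which is exactly what an interdiction analysis needs.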
Note also that this is the last chance you will have to ensure that you are assessing the right
things. With a good set of operational depictions, you can go back to your chain of command
and your working group (or even the JROC) and ask, "Is this what you wanted us to look at?"


Figure 12. Transition diagram for leadership targets from the Global Strike Raid Scenario CBA (states shown: leadership vehicle, informal facilities, formal soft facilities, and formal hard facilities, with transitions occurring on warning).

7.2. Choosing an Analytical Approach


You will be pressured to write your preferred analytical approach into the study plan. Your
chain of command knows that analysis, modeling, and simulation can be very time consuming
and expensive, and that many huge DOD analyses have produced little or no return.
Consequently, this is an area that they (and you) will worry about throughout the assessment.
It's not possible for us to give a comprehensive treatment of your analytical options in this
paper, but you need to know something beyond "hire someone good and let him take care of
it." You will have to gain a sufficient understanding of the methodology being used to explain
it to senior audiences, because you will have to convince them that your CBA provides
reasonable estimates of warfighting causes and effects.
The analytical approach is not merely choosing a model or a set of models. Models are
abstractions of reality, and are tools. The analytical approach, however, is a plan executed by an
analytical team, and they will employ a collection of tools to transform input data, estimate
warfighting outcomes and MOEs, and present results. A competent analytical team will
examine your problem and recommend options for approaches, and will not just talk about
models.
Nonetheless, the choice of modeling techniques is a central element of an analysis approach,
and also provides a useful way to introduce the types of approaches possible. So, we will use
model classification schemes to illustrate the varying options you'll have.
Probably the first question to consider is the operational level of your assessment. Figure 13
depicts a common DOD model taxonomy, which classifies models in terms of warfighting
scope.
In this taxonomy, the size of the forces in opposition and the time span considered define the
type of model. For example, an engineering level model might only consider a single radar
trying to detect a single ballistic missile, and would probably devote a large amount of detail to
the physics of the systems being considered. An engagement level model would feature the
force elements or platforms employing the radar and the missile, and would contain less detail
about the physics of those systems and more information on the tactics of the engagement. The
raid level model would represent a collection of force elements opposing each other in a single
engagement (or over a limited time period), while the campaign model would provide an
abstract representation of an entire war.


Figure 13. Classification of models by warfighting scope: engineering level models (system vs. system, single engagement), engagement level models (force element vs. force element, single engagement), raid level models (force vs. force, single engagement or limited time period), and campaign level models (force vs. force, multiple engagements over extended time). Forces represented and time span increase toward the campaign level; system, environmental, and tactical detail increase toward the engineering level.

Figure 14 [Washburn, 1998, p. 3] is a less common scheme, which classifies analysis approaches
by technique as opposed to scope. To explain this figure, we will go from right to left. We are
all familiar with exercises, as they employ actual forces in real environments. There is no
ground truth in an exercise, as the actual exercise outcomes are subject to human perception
and judgments.
Figure 14. A classification scheme for analysis approaches, arranged from abstraction to realism: optimization models and game-theoretic models (normative); then mathematical models, Monte Carlo simulations, man-in-the-loop simulations, wargames, and exercises (evaluative). Moving toward realism adds statistical issues, multiple sides, human participants, and, for exercises, the absence of ground truth.

Furthermore, exercises have opposing forces, so they are multi-sided and have humans
involved. Exercises also have statistical issues, in that the outcomes might be more a function
of the ability of the particular players involved than the capabilities possessed by the sides.
Finally, exercises are evaluative, in that they are not designed to produce solutions; instead,
they provide a way to demonstrate and measure proposed solutions.
Going to the left, wargames remove the ground truth issue, and specify the physical
environment and the mechanisms that produce combat outcomes. Man-in-the-loop simulations
are man versus machine methodologies, and do not have multiple sides involving humans
making decisions (if they did, this taxonomy would call them wargames). Monte Carlo
simulations use analytical schemes to do repeated trials of random effects such as weather and
bomb hits, and do not use humans in the loop. Finally, there are mathematical models that do
not simulate random effects, but instead evaluate operations, systems, and human behavior
analytically.


There is another class of models, which are shown on the left-hand side. These normative
models are not based on evaluating inputs such as CONOPS, but instead use sets of rules to
suggest solutions. Optimization models search a large set of alternatives and recommend
solutions based on maximizing or minimizing a set of quantitative objectives. Game-theoretic
models operate similarly, but add back in the multi-sided nature of military conflict.
"Fine," you say. "I'll take the game-theoretic model that represents the entire war at the
engineering level, so I'll get good estimates of all the outcomes and use techniques that
compute the best CONOPS."
Well, there's no such model, and there's no free lunch. The techniques on the left side of Figure
14 have significant limits in what they can represent directly, and you will likely have to give
up the ability to represent certain tasks or functions to get your operational depiction into a
form appropriate for those techniques.
Consequently, the question of choosing an analytical approach involves both warfighting scope
and technique, and there are substantial tradeoffs involved. It is unlikely that your CBA will be
so narrow that you will be analyzing purely in the engineering or engagement realm, so you
will probably be assessing at the raid or campaign level. If you tend towards realism, you must
involve humans, and that means your evaluations must be conducted in real time. If you need to
examine a large number of alternatives, then you will need abstraction (and analytical
expertise).
Suppose you have been given the Joint Forcible Entry Operations CBA. What range of
techniques would you employ to assess our current capabilities? An actual exercise is probably
out of the question, but wargames look attractive. But, can you set up wargames for all the
relevant forcible entry scenarios you identified in the FAA? Also, will one set of players for
each wargame be enough, or will you need to repeat the wargames with multiple teams?
Also, is it sufficient to concentrate on the forcible entry itself (which would be a raid-level
analysis)? Don't you need to consider the impacts of the forcible entry operation on the larger
campaign? If the forcible entry operation cannot be conducted as planned, is there a different
CONOPS that would allow the campaign to achieve its objectives, and is there some normative
model out there that could help produce that CONOPS?
Or, is the whole CBA really aimed at judging the contributions of a few proposed systems? If
that's the case, can you reduce the problem to a set of engagement level analyses, and avoid
dealing with all the forces and systems?
These are all difficult questions, which the actual Joint Forcible Entry Operations CBA team
confronted with varying degrees of success. The sheer difficulty of answering these questions is
all the more reason to do a quick look (Section 5), so you have some idea, prior to the FNA, of
where you should spend your scarce analytical resources. Invariably, you will have to strike a
balance between scope, techniques, and level of detail.
The following is a set of questions you should ask when evaluating analytical approaches.
• Can the approach evaluate the doctrinal approaches collected in the FAA?
• Can the approach estimate the measures of effectiveness tied to the FAA attributes?
• Can the approach represent the scenarios, tasks, and functions identified in the FAA?
• Does the approach represent the correct warfighting scope?
• How large a team does the analytical approach require to execute?
• How much analytical overhead (i.e., estimation of outcomes not relevant to the CBA) must be absorbed in the approach?
• How long will the approach take to execute?
• Does the approach require construction of a set of special-purpose models? If so, how difficult will it be to win community acceptance of these models?
• Is the approach agile enough? Can it quickly assess a large number of alternatives (US and enemy CONOPS, scenarios, and capabilities)?
• What is the backup plan if the approach doesn't work?

Do not let the availability of a particular tool or methodology (or the statement that a model is
"validated") drive the analytic approach. The approach must fit the problem, not vice versa.
An aside: group methods. Several CBAs have employed expert judgment techniques, typified
by a variety of group voting and weighting methods. In the taxonomy of Figure 14, these fall
into the category of mathematical methods, because they are evaluative.
Despite their widespread use, we advise against relying on such techniques as your primary
method of estimating outcomes, causes, and needs. In the early days of JCIDS, many analysts
attempted to construct matrices to map systems to capabilities (or functions), and used groups
to grade the contribution of each system to the capability or the function. These grades, which
were normally presented using the typical "red-amber-green" scale, were supposed to yield
some sense of our adequacy in a mission area.
But, these methods are not very satisfying for estimating the outcomes of interest. Consider the
rendition task flow shown in Figure 10. How does such a method produce estimates of friendly
casualties or collateral damage? How does it assess the likelihood that the camp receives
strategic warning from force movements and scatters before the attack even occurs? And how
can it estimate the likelihood of capturing the targets?
To do this, you need to employ methods that represent the important physics of the situation:
how fast both sides can move, how big their signatures are, what their detection capabilities are,
and how well they can fight in a direct-fire engagement in the terrains of interest.
Note that such an approach does not preclude the use of expert judgment. You can gather a set
of experts and have them estimate the probabilities of interest for the tasks shown in Figure 10,
which would not only be fairly quick but would give you the range of expert opinion on the
feasibility of executing that CONOPS. Furthermore, such an approach provides advice on
where you need to spend your analytical resources, because it helps identify tasks that either
appear critical or have widely differing views on their likelihoods of success.
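
A minimal sketch of that kind of structured expert elicitation follows. The task names are
abbreviated from Figure 10, and the experts and their probability estimates are entirely
hypothetical; the point is only the mechanics of combining and comparing the estimates.

    # Illustrative sketch: combining expert-elicited task success probabilities
    # into an overall estimate and finding where the experts disagree most.
    experts = {
        "expert A": {"move to launch base": 0.95, "secure tactical area": 0.85,
                     "render targets": 0.70, "exfiltrate": 0.90},
        "expert B": {"move to launch base": 0.90, "secure tactical area": 0.75,
                     "render targets": 0.50, "exfiltrate": 0.85},
        "expert C": {"move to launch base": 0.95, "secure tactical area": 0.80,
                     "render targets": 0.60, "exfiltrate": 0.95},
    }

    def mission_success(task_probs):
        # crude serial assumption: every task must succeed for the mission to succeed
        p = 1.0
        for task_p in task_probs.values():
            p *= task_p
        return p

    for name, probs in experts.items():
        print(name, "overall estimate:", round(mission_success(probs), 2))

    # Tasks with the widest disagreement deserve the deepest follow-on analysis.
    for task in experts["expert A"].keys():
        estimates = [experts[e][task] for e in experts]
        print(task, "spread:", round(max(estimates) - min(estimates), 2))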
The popularity of group methods is closely tied to the ideas of scenario agnosticism and
capabilities de nusquam. Since the early theories of capabilities-based analysis argued that we
could not represent actual environments and enemies, there was no way to represent the physics
of a situation. Consequently, well-intentioned individuals trying to do JCIDS assessments
found themselves in large conferences answering questions like, on a scale of 1 to 100, what is
the capability of this torpedo in achieving undersea superiority? Such scores were dutifully
compiled, averaged, and presented to three decimal places. While these numbers captured
prevailing opinion, they certainly did not amount to a serious assessment, and more often than
not just resulted in junk science.
We freely admit that there are many important considerations that do not have physics
associated with them. For example, you may be comparing two doctrinal approaches to an
operation, one of which requires getting basing rights from a single mildly uncooperative
country, while the other requires getting basing rights from four friendly countries. Which
requires us to spend more diplomatic capital? Does the expenditure of this capital even matter?
In these cases, you must resort to techniques based in expert judgment to make an estimate.
Nonetheless, we note that key performance parameters for major weapons systems are
measurable quantities. Consequently, you should assess in those terms if possible. If you are
interested in learning more about quantitative military modeling, several texts are available
[e.g., Loerch and Rainey, 2007].

7.3. Collecting and Inspecting Performance Data


People who try to build a quantitative case against your assessment will either attack your
scenarios, your analysis techniques, or your input data. Hopefully, you settled all the scenario
issues in the FAA, and the quick look helped you determine a solid analytical approach. Now
whats left is to collect performance data on the forces and systems you intend to analyze.
People new to the DOD analysis world are usually astounded at how difficult it is to obtain
performance data on U.S. weapons systems, particularly those in development. Some of this is
understandable, since we cannot expect to have perfect information on something that hasn't
been employed (or even built) yet. But, getting information on even fielded systems can be
contentious.
A central aim of the DOD Analytical Agenda is to solve this problem by making such data
readily available for major studies. Since DODI 8260.2, Implementation of Data Collection,
Development, and Management for Strategic Analyses, was published in 2003, it has become
much easier to get information on both US and enemy capabilities for modeling purposes. As
mentioned in Section 2.4, the Joint Data Support organization maintains a large repository of
such data.
Your general approach to data collection should be to get as much as possible
from current joint studies. This allows you to leverage efforts that have already been through
joint scrutiny, and does not irritate your working group with requests for information that you
could have gotten yourself. Some early CBAs did not take this approach. Instead, they opted to
issue massive data calls that antagonized most of the participants (and provoked outright
rebellion in some cases).
When you do ask external organizations for information, you have to be on your guard for
submissions that have been adjusted to suit the providing organization's interests. You will
have to examine all the submissions you get in some detail, and your core team will have to be
satisfied that the information you are getting represents reasonable estimates of the performance
of the forces and systems you are assessing. You will get some inaccurate data, either by design
or misunderstanding, and it's your job to catch and correct it.
Too many DOD analyses get hung up over establishing precise, coordinated, acceptable-to-all
numbers for such things as the probability of kill for a weapon or the survivability of a
platform. You can either 1) endure endless arguments over what the correct estimate should be,
or 2) document the range of legitimate opinions on the numbers and assess the extremes to see
if the estimate really matters to your overarching measures. Clearly, the latter is the better
approach, as it reveals useful information. You can only do this, though, if you have adopted an
agile analysis approach. If you have opted for a time-consuming, inordinately detailed model
that only allows you to consider a minimal number of baseline cases, you'll succumb to the
endless arguments.
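
The sketch below illustrates the second approach for a single disputed input, a probability of
kill. The simple shot-by-shot engagement model and every number in it are hypothetical; the
point is only that you evaluate the overarching measure at both ends of the disputed range.

    # Illustrative sketch: evaluate the overarching measure at the extremes of a
    # disputed input rather than arguing over one "correct" value. Numbers are
    # hypothetical.
    def expected_leakers(p_kill, shots_per_missile, raid_size):
        """Expected number of attacking missiles that survive the defense."""
        p_leak = (1.0 - p_kill) ** shots_per_missile
        return raid_size * p_leak

    for p_kill in (0.60, 0.80):   # low and high ends of the disputed estimate
        leakers = expected_leakers(p_kill, shots_per_missile=2, raid_size=20)
        print(f"p_kill={p_kill}: expected leakers = {leakers:.1f}")

    # If both extremes support the same conclusion, the argument over the precise
    # value does not matter to the assessment.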


7.4. Executing the Analytical Approach


Once you have gotten your analytical approach in place, you will go into production. You will
have a set of scenarios and doctrinal approaches you will be evaluating, possibly for several
different time frames (e.g., current year, 2015, and 2024). But, you have both a chain of
command and a working group that are clamoring for your output. So how can you
simultaneously produce and present?
First, start out with the least contentious (or most well-understood) case, put your core team to
work on it as soon as possible, and get results from them as soon as possible. When they are
done, inspect the results internally to ensure they make sense, and think through who might
object to the outcomes and why. When you are satisfied that you can bring these results
forward, get your core team started on the next case, and present your results to the working
group.
Bringing results out in the open will always generate questions, if not outright protests. Some
you should be able to handle right away, but others may require you to get additional analysis
from your team. You should build some time into the schedule to rerun some of your analyses,
because your working group will invariably bring forward information that you didn't have at
the start.
Organizations that submitted doctrinal approaches to your scenarios may, upon seeing your
results, want to change their submissions. This is good for you, because JCIDS insists that
alternative CONOPS be considered in CBAs, and getting a revised CONOPS helps you fulfill
that requirement. Now, you will have to be tough with these organizations, and not allow them
to endlessly change their submissions until your machine yields results that they like.
Nonetheless, having alternatives to analyze is preferable, particularly if the approaches were
generated by the Services or combatant commands.
As an aside, we note that there are two cases of alternative CONOPS. The first type consists of
alternative doctrinal approaches that use existing or programmed forces and do not require any
additional resources, including training. The second type consists of approaches that do require
additional resources. Your FNA should only consider the first type; you will consider both
types in the FSA.
Bringing forward results case-by-case allows you to stay ahead of your working group and
gives you some time to collect alternative approaches if offered. More importantly, this
approach builds a story for your FNA in a systematic fashion. As your working group examines
the results of each scenario and each CONOPS considered, they will likely see a set of
pervasive issues appearing. If you have done a quick look, you probably already know what
most of these issues are, so you know what's coming. They don't, however, so it's better for
you to take an incremental approach.
Try to handle major conflicts outside of working group meetings. While jousting between CBA
study leads and spun-up action officers has provided much entertainment for working groups in
previous CBAs, such open conflict is counterproductive. Work with the protesting organization
one-on-one outside of your regular meetings and see if you can settle things.

7.5. Extracting and Reporting Needs


At the beginning of this section, we quoted a number of suggestions from the JCIDS manual on
FNA products, which discuss capabilities, gaps, overlaps, risk areas, tasks, conditions, and even
UJTL linkage. It is very easy to get tangled up in all this language, but there is really no need
for confusion. The FNA output is straightforward. It consists of:
• the scenarios considered;
• the alternative CONOPS considered;
• the estimated results of executing those CONOPS, in terms of the measures developed in the FAA;
• the results which appear to be unacceptable according to current strategic guidance;
• the reasons for the unacceptable results; and
• the functional needs that result from those reasons.

Here is a very straightforward example. Consider what an FNA would look like if the scenario
were Operation EAGLE CLAW, the Iranian hostage rescue attempt conducted in 1980. The
mission would be to rescue a set of hostages held in Tehran, with the following measures and
constraints as dictated by the White House:
• maximize probability of mission success;
• protect the lives of the hostages;
• maximize security in the planning process;
• minimize collateral damage;
• minimize the size of the planning group and the assault force; and
• use only US forces [Ryan, 1985, pp. 10-16].

The CONOPS would be to use RH-53D minesweeping helicopters operating from a carrier and
MC-130 aircraft to move the SOF assault force, which would be supported by AC-130
gunships during the actual assault. An analytic approach would likely try to estimate the
following:
• the likelihood of the enemy receiving strategic warning, either by exposure of the planning process, detection of rehearsals or force movement, or by signals intelligence;
• the likelihood of the assault force reaching the initial staging location at Desert One (the probability of at least 6 of the 8 helicopters reaching Desert One, given the reliability of the RH-53D at that time, was later estimated to be about 0.65; a worked sketch of that kind of calculation follows this list);
• the likelihood of the assault force reaching the U.S. embassy in Tehran;
• the estimated outcome of the assault, in terms of losses and collateral damage; and
• the likelihood of the assault force returning with the hostages.

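As promised in the list above, here is a worked sketch of the reliability arithmetic behind the
Desert One figure: the probability that at least 6 of 8 aircraft remain mission-capable, as a
function of an assumed per-aircraft reliability. The per-aircraft reliabilities shown are
illustrative assumptions; only the 0.65 figure comes from the cited source.

    # Illustrative sketch: probability that at least 6 of 8 helicopters remain
    # mission-capable, for a range of assumed per-aircraft reliabilities.
    from math import comb

    def p_at_least(k, n, p):
        """Binomial probability of at least k successes in n independent trials."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    for reliability in (0.70, 0.75, 0.80, 0.85):
        print(f"per-aircraft reliability {reliability:.2f}: "
              f"P(at least 6 of 8) = {p_at_least(6, 8, reliability):.2f}")
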
If your FNA analysis matched that of the Holloway Commission (which investigated Operation
EAGLE CLAW), you would conclude that the mission was high risk [Holloway, 1980, p. v].
Furthermore, your FNA would conclude that, for this scenario and CONOPS, the lack of
reliable, long-range lift from maritime platforms would be one of the overarching functional
needs. You might also argue that the CONOPS was likely to have difficulties because it
involved force elements from all Services and did not include joint training or a full-scale
rehearsal. Finally, you might point out that an alternative need would be to secure a land base to
avoid the complexities of employing helicopters from a carrier.
The point of the foregoing discussion is that the output of an FNA need not be couched in
strange, abstract language (or even linked to UJTLs, for that matter). The FNA results are
simply an assessment of how well we can do something, and an accounting of the reasons why
we cannot achieve mission success at an acceptable level of risk. If at all possible, you should
state the needs in quantitative terms.
This leaves the question of prioritizing needs (which are also called gaps in many JCIDS
documents). Within a particular scenario, such as the EAGLE CLAW example above,
prioritizing needs is probably straightforward. If the assault force cannot get to Tehran with
high probability, then the assault can't even occur. So securing highly reliable lift is a necessary
(but not sufficient) requirement for mission success.
Prioritization becomes difficult when you are assessing multiple scenarios across the breadth of
the defense strategy, and you end up with a disparate list of needs. This list will usually contain
a few things that are common to all scenarios, so their pervasiveness probably makes them a
high priority. On the other hand, some needs that are backbreakers in one situation (such as the
need to defeat enemy air defenses) may be irrelevant in others (irregular situations where the
enemy has no air defenses). So how do you provide this prioritization?
One reason we have stressed examining the strategic guidance in this document is that the SPG
and CPG in particular contain a great deal of advice on priorities. The best prioritization
scheme is one where you can directly lift the priority information out of these documents and
apply it to your needs, so that you have a clear source for how you have ordered your needs.
At this stage of the assessment, however, prioritization is not that critical. Since you have not
yet investigated solutions and their costs, having a priority of need is less important than
identifying the set of crucial needs that are dragging down the likelihoods of operational
success. Suppose, for example, that one of your needs ends up as the bottom priority. But, you
subsequently discover in the FSA that the need can be filled by a policy change, using existing
capabilities, at little or no cost. Clearly you would recommend that action, because the costs are
inconsequential.
It is critical, however, to provide the linkage from your needs to your estimated operational
outcomes for each scenario, in terms of your MOEs. This allows senior decision makers to
consider both the likelihood of the scenario occurring and the consequences of failure, which
are the major components of risk. It also allows them to perform their own calculus in terms of
tradeoffs among MOEs (e.g., the inevitable tradeoff between confidence of killing a target and
expected collateral damage).
What if there are no problems? There is one other outcome that may occur in an FNA: you
may discover that there are no needs. This could happen due to changes in strategic priorities
(such as the collapse of a particular enemy), a new application of existing CONOPS, or the
simple exposure of operational combinations that had not been considered in a unified
assessment. To give a recent historical example, the reason the DOD cancelled the Crusader
program was that the leadership felt that the need the Crusader was aimed at was already
adequately addressed by other combinations of existing systems.
Concluding that there are no needs will be very controversial. Some important group pushed to
have a CBA done, and they did so with the firm belief that there was some operational problem
that needed to be fixed. They will not react well if you tell them they were wrong, so you will
have to do considerable consultation with your chain of command on how to bring your
assessment forward. Note that if you execute your analysis on the case-by-case basis we
recommend in Section 7.4, the story will grow over time, and your opponents will not be able
to accuse you of sucker-punching them with a completely unexpected outcome. They will still
be upset, but you can honestly respond that they should have seen it coming.


7.6. Vision and Reality in Stating Needs


In 1939, there was nothing stopping the Army Air Forces from stating the need for a 2000-lb
bomb that had 10-meter delivery accuracy in all weather conditions when delivered from a
B-17. But writing down such a need would have been pointless, because no technology existed at
that time that made such a weapon realizable.
Consequently, you need to understand that your statement of needs cannot be a plea for a
miracle, nor can it induce the DOD to produce something made of "unobtainium" (or
"unaffordium"). Your statement of needs has to be tempered by rough feasibility, cost, and
schedule estimates, and you have to have some idea of what the DOD is willing to tolerate for
additional investments in your areas.
Do not take this advice to mean that you should artificially limit your imagination. The DOD
has a substantial research program and a substantial experimentation program, and both are
designed to discover what is really possible. But, you are trying to define a strategy for your
mission area with your CBA, and that brings with it the responsibility to not publish yet another
plea for something like omniscient predictive battlespace awareness.

7.7. The Overall FNA Process


We illustrate our advice on the overall FNA process in Figure 15. Finalizing your analytical
approach and collecting performance data is the preparation phase. Choosing a straightforward
scenario to begin with starts the scenario analysis and analysis reconciliation phases, and the entire
exercise concludes with the derivation and documentation of needs.
To conclude this section, we add some additional points. To give advice on when a shortcoming
will become a need, you will have to examine scenarios in at least two time periods. We
previously recommended considering current-day scenarios and capabilities, and you will also
want to examine some time period either at or past the end of the Future Years Defense Plan (7-10 years in the future).
You also have some choices in what framework to use to portray your needs. The JCIDS manual
emphasizes UJTLs, but you could also couch your needs in terms of the emerging Joint
Capability Areas (JCAs). The point is to try and present the needs in solution-agnostic terms to
the maximum extent possible; rather than saying "we need the Crusader," the more general
statement would be that we need survivable, responsive, precise, high-volume fires for
suppressing enemy activities as well as imposing attrition. Furthermore, you should be able to
justify this by saying something like, "This conclusion was derived from our analysis of Scenario
X in year Y, where we do not have a high enough likelihood of succeeding during the early
counterbattery fight using doctrinal CONOPS A, B, or C."


Figure 15. Overall FNA task flow. Analysis preparation: select and finalize the analytical approach; collect and inspect performance data. Scenario analysis (for each scenario): choose the best-understood scenario; refine the FAA task structure for the scenario and CONOPS; execute operational analysis with doctrinal CONOPS; review and refine results internally. Analysis reconciliation: present to the working group; reconcile working group comments; reanalyze with new data or CONOPS if necessary; identify unacceptable outcomes; document causes from analysis. Needs development: derive needs in terms of the operational depiction; prioritize needs based on trends across scenarios and strategic guidance.


8. The Functional Solutions Analysis (FSA)


Here is some of the formal guidance on the FSA:
…it is a joint assessment of potential approaches to solving, or at least mitigating, the
capability gaps identified in the FNA. The approaches identified should include the broadest
possible range of joint possibilities for addressing the capability gaps...
Solutions proposed by an FSA must meet three criteria:
(1) they are strategically responsive and deliver solutions when and where they are needed;
(2) they are feasible with respect to policy, personnel limitations, and technological risk; and
(3) they are realizable; the DOD could actually resource and implement the solutions. [CJCSM
3170.01C, 2006, p. A-12].
As with the FAA and FNA, we will suggest a process that will help you execute what would seem to
be an overwhelming task. But some other remarks are in order.
A few CBAs done to date have produced FNAs that recommended multiple FSAs. The problem with
this approach is that a CBA is supposed to take a comprehensive look at a mission or functional area,
and ultimately make recommendations on an integrated package. Having a set of disparate
organizations conduct multiple FSAs without any sort of unifying oversight can provide a balanced
set of recommendations only by accident. Also, we noted in the last chapter that prioritization is not
very useful without knowledge of resource considerations. If you split out your FSA, what
mechanism exists to consolidate the answers into something that makes sense as a whole? The JROC
won't do this; if they could, they wouldn't have bothered to ask you to do an assessment in the first
place.
Of course, reality may intrude, and you may not be given the time, resources, or clearances necessary
to do a completely integrated FSA. In that case, you will have to come up with some
recommendations about how much to pursue, and proceed accordingly. As an extreme, you could
simply stop with the FNA, throw the needs out to the community, and hope someone runs with them.
In fact, a JCD, which does not contain an FSA, is an example of this market-based approach to
obtaining solutions. But you should let everyone know that this is not as efficient as conducting a
deliberate solutions analysis.
As with the FAA and FNA, you do not need integrated architectures to perform your FSA. You are
free to use them if they help, but they are not a requirement.
Finally, you might ask what the difference is between an FSA and an AoA, which is commissioned
by the acquisition community. While we note that the interface between JCIDS and the DOD
acquisition process is still evolving, we will say that, at present, the FSA (and CBA in general) have a
much broader scope than an AoA, which tends to evaluate specific materiel alternatives. In particular,
the CBA should yield the most attractive and realizable CONOPS, essentially recommending how to
fight. The AoA should adopt these recommendations as an assumption, and concentrate on the
particulars of what should be provided to fight in that manner.
Also, the CBA will provide useful products to the AoA, particularly the linkage to the strategic
guidance, the scenarios, the measures of effectiveness, and advice on which problems to solve using
materiel versus non-materiel approaches. The CBA will typically examine many more functions than
an AoA, and will recommend which ones require further attention. The USAF Office of Aerospace
Studies (OAS) has published an AoA handbook [OAS, 2004] that also summarizes JCIDS guidance
on CBAs, so it provides a view of what an AoA should receive from an FSA.


8.1. Gaps Versus Game-Changing Capabilities


We offer a distinction that is not in any of the formal guidance, but will help you classify
possible solutions. Recall that our advice on the FNA is to evaluate doctrinal approaches. You
will probably find that the doctrinal approach, using programmed forces and capabilities, is not
adequate for some relevant threat. The natural tendency is to inspect these cases, zero in on the
functions that seem to be the problem, call them gaps, and call for improved versions of
things that currently perform those functions.
It is possible, of course, that getting upgraded versions of the same things would fill the needs.
But, consider as an example the British air defense problem of the late 1930s. Using methods
available at that time, they could only spot attacking air forces from about 10 miles out, which
provided them insufficient warning to interdict the attackers. A British FNA at that time would
probably conclude that to address the gaps, they would need higher-wattage klieg lights, better
barrage balloons, many more spotters, and maybe a companion to their gigantic concrete
acoustic mirror in the Romney Marsh; in other words, better versions of the same things.
Contrast this with what the British actually did, which was to employ radar. Radar did not fill a
gap, in the sense of this paper; instead, it was a completely different way of doing things, and
represented what we call a game-changing capability. With radar, attackers could be detected at
far greater distances in all weather, and the introduction of this technology completely changed
the dynamics of air defense.
Furthermore, finding a game-changing capability also introduces the need to provide an
employment concept that makes sense. Again, bureaucratic issues will arise, because there will
be no textbook answer on the best use of something totally new, and merely plugging the
innovation into an existing CONOPS probably wont work. As an example, many authors have
contended that the true innovation of the British WWII Chain Home radar system was not the
radars (which were primitive HF systems), but the Filter Room established at Bentley Priory.
The British concept was to have all contacts reported to this single facility, which then built a
common air picture, and planned and executed responses.
A potential game-changing capability is a good candidate for experimentation, because it will
take actual fieldwork to confirm the utility of the idea and discover good ways to employ it.
But, your team must have the expertise to even spot the opportunity, as it will not look like a
better version of what we already have.
You may not, in your CBA, uncover any game-changing capabilities. But, the most important
part of your FSA will be for you to look hard for them. These capabilities may, like radar, come
in the form of a revolutionary technology advance. Others, such as the development of
amphibious operations capability on the eve of World War II, may come from a revolutionary
set of operating concepts supported by existing technology.
In any case, introducing a game-changing capability will meet with considerable resistance, as
such capabilities are by their nature at odds with the status quo. For example, the introduction of
mechanized ground forces made horse cavalry irrelevant, but the members of that community
did not give up their existence without a fight. If you uncover a fundamental weakness that a
game-changing capability could solve, you will face reflexive and ferocious opposition from
whatever horse cavalry is threatened by your proposals, as well as (justifiable) skepticism from
everyone else.
So, if you are going to propose a game-changing capability, plan on a fight. Think through who
will be opposed and why, and formulate a good defense. Do not underestimate the ability of the
bureaucracy to make a good idea so painful that even its originator is relieved to see it die.


8.2. Examining Policy Alternatives


CBAs done to date have not done a good job of investigating policy alternatives. Much of this
is due to an entrenched tendency to separate policy from operational challenges, and
operational analysts tend to assume that the policy is immutable.
But to not investigate policy alternatives is to ignore a large set of possibilities for non-materiel
solutions. These options could range from simply avoiding the problem to completely changing
the strategic response. Now, we do not recommend that you offer "do nothing" as an
alternative; in addition to antagonizing your audiences, it would have been pointless to choose
the scenario as part of your sample if the expectation was that the DOD would not respond.
But consider the Iranian hostage rescue situation. Suppose we were analyzing that case in a
CBA, and concluded that we could not build a force that could deliver a high enough
probability of success in the next 10 years. A CBA could suggest altering the strategic response
by, say, inducing the enemy to free the hostages via a maritime blockade, strategic bombing of
economic targets, or holding the leadership at risk with long-range strategic weapons. Such
options would have other policy implications, such as affecting U.S. relations in the region.
You may argue that you cant deal with such open-ended possibilities, and that you would have
to stick with the main objective of conducting a hostage rescue via direct action. But, JCIDS
does not stipulate this, and in fact demands quite the opposite: the imperative is to contemplate
broad alternatives. Recall the hostage rescue goals from Section 7.5, which were to:
• maximize probability of mission success;
• protect the lives of the hostages;
• maximize security in the planning process;
• minimize collateral damage;
• minimize the size of the planning group and the assault force; and
• use only US forces [Ryan, 1985, pp. 10-16].

The third and fifth of these conditions resulted from the direct action CONOPS, but you can
envision other approaches that obey the other four edicts.
The response to any scenario has alternatives rooted in policy changes. We have already
advised you to include policy expertise on your core team, and you should employ that
expertise in the FSA.
A policy change will almost always imply a CONOPS that is different than the baseline
CONOPS written into an Analytical Agenda scenario. This will create a substantial
bureaucratic challenge for you, because the baseline CONOPS was developed by a large
number of people and went through lengthy coordination. As a result, deviating from this
baseline will certainly spark protests. JCIDS, however, demands that you analyze alternative
CONOPS, so you can use that edict in the instruction to solicit proposals from your working
group.
Also, you do not need to produce a consensus CONOPS as you would if you were constructing
an Analytical Agenda scenario baseline. Since you are looking for solutions, you can collect
multiple CONOPS and evaluate them, with an eye towards identifying under what conditions
one CONOPS would work better than another.


8.3. Analysis of Mission Effectiveness


In the course of doing the FSA, you will have to revisit the scenario analyses you did in the
FNA to analyze new solutions that you are considering. If you have adopted an agile modeling
process as we recommend in Section 7.2, you should be able to accommodate new systems,
forces, or CONOPS and evaluate them using the measures you developed for the FAA.
One issue is that you will probably be looking at options whose performance is not well
understood, since they probably do not exist yet. In these cases, you should develop bounds on
what these (typically nonexistent) systems could do and analyze the extremes to discover where
they would compete favorably. Also, these options may require CONOPS that are radically
different than those you assessed in the FNA, and tasks that are radically different than those
you developed in the FAA. These are probably the most interesting alternatives, and there is no
reason to reject them because they do not fit within existing task structures.
Evaluating alternative CONOPS will require you to go back to what you did in the FNA, when
you evaluated doctrinal CONOPS. You will have to develop a relevant operational depiction as
discussed in Section 7.1, and evaluate it using the analytical approach you set up for the FNA.
If the proposed CONOPS is so different that your existing analytical approach can't represent
it, you will have to either augment your analytical approach or recommend the CONOPS for
experimentation. Otherwise, you are just revisiting the FNA with a modified CONOPS.

8.4. Finding Affordable, Feasible, and Responsive Solutions


Recall that JCIDS presents three criteria for FSA solutions:
• they are affordable;
• they are feasible, both from the standpoint of policy and technology; and
• they are strategically responsive; that is, they deliver solutions when they are needed.

JCIDS is not a mature process, and one of the most immature parts of JCIDS is how it should
treat affordability. Historically, the requirements processes operated by the JROC have
restricted their advice to operational needs and solutions, and have left the question of whether
a particular solution is worth the investment to the larger programming and acquisition
processes. But, you can no longer avoid the issue in a CBA.
For existing programs, gathering cost estimates is a matter of contacting the program office or
OSD/PA&E's Cost Analysis Improvement Group (CAIG). The program office estimate and the
CAIG's estimate will generally differ, so you should inspect both.
Also get manpower estimates. Many current programs have justified increased materiel expense
via reductions in manning requirements. You will have to determine whether or not these
manpower reductions are captured in your cost data (as they would be in life-cycle costs), or
whether they are not accounted for in developmental and per-unit acquisition costs.
Demonstration programs or proposed programs are much more difficult. In these cases, you
may have to commission a separate cost analysis to estimate what it would take to get the
capability. If that gets to be too difficult or contentious, an alternative approach would be to get
the high and low estimates and evaluate the worth of the capability for both cases.
JCIDS does not insist that you produce a very detailed independent cost estimate such as those
required by the acquisition community. What you should be able to do, though, is characterize
the 20-year life-cycle costs (or savings) of the things you are proposing, in terms of:
• developmental costs;
• facility or infrastructure costs;
• per-unit and rough force-level acquisition costs; and
• recurring operating costs.

These are the four critical cost components in any solution. We have to pay to develop a
capability, provide a place to house it in peacetime, procure it, and operate, maintain, and staff
it. If you can portray your CBA solutions in those terms (with rough, but reasonable estimates)
and contrast them with the program of record, you will have provided a complete economic
picture to the leadership. It is one thing to prioritize needs and potential solutions, but quite
another to choose what solutions you really procure. Everyone has a vision of the car they
always wanted; few can actually afford it.
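
A rough rollup of those four components might look like the sketch below. Every dollar figure
is a hypothetical placeholder; the point is only that each option and the program of record get
characterized in the same four terms over the same 20-year horizon.

    # Illustrative sketch: rolling up the four cost components into a rough
    # 20-year life-cycle figure. All dollar figures are hypothetical (in $M).
    def life_cycle_cost(development, infrastructure, per_unit, units, annual_operating, years=20):
        return development + infrastructure + per_unit * units + annual_operating * years

    program_of_record = life_cycle_cost(development=0, infrastructure=50, per_unit=30,
                                        units=40, annual_operating=120)
    proposed_option = life_cycle_cost(development=800, infrastructure=150, per_unit=45,
                                      units=40, annual_operating=80)

    print(f"program of record: {program_of_record:,} $M over 20 years")
    print(f"proposed option  : {proposed_option:,} $M over 20 years")

A comparison this coarse will not survive an acquisition milestone review, but it is enough for
the leadership to see whether an option buys its improvement at a plausible price.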
Next, you must develop rough estimates of the technical feasibility of your proposed solutions.
You cannot work at the engineering level, because you will be considering a broad range of
possibilities. You can, however, use a framework like the following to classify the technical
risk of your alternatives.
• No risk. This is a new use of existing systems, such as employing strategic bombers for close air support.
• Very low risk. This is a new use of systems that are due to be fielded in the near term.
• Low risk. This is a new combination of existing or programmed subsystems that will require new integration, such as equipping the existing Predator UAV with the existing Hellfire missile.
• Medium risk. These options require development of new equipment and systems, but they do not require technological or industrial advances.
• High risk. These options require significant scientific, technological, or industrial advances.

We note that JCIDS has begun using the more formal framework described in Defense
Acquisition Guidebook [Section 10.5.2, 2006], which describes nine Technology Readiness
Levels (TRLs). This is a framework similar to the one above, but contains more precise
definitions of maturity of the technology. Clearly, you will need legitimate technology experts
to make these kinds of assessments.
This leaves the question of strategic responsiveness. If you have done your job correctly in the
FAA, you have characterized a range of security challenges that are supported by the area you
are assessing. Are these problems that need solutions now? If not, when do we think that they
will really become a problem, and what's the likelihood that they never become a problem?
JCIDS is oriented at looking 7-14 years into the future, but many of the issues being considered
in CBAs exist in the current day, and embarking on a leisurely 15-year development program
may not be the best option.
Since you are dealing with uncertain futures, the question of strategic responsiveness contains
considerable uncertainty. As a result, you may choose to frame options in terms of when they
can be realized. As an example, the DOD realized in the early 1980s that it had a significant
airlift shortfall. Rather than wait for the full execution of the C-17 program, the DOD opted to
restart the C-5 production line and also procure a number of KC-10 tankers that had significant
cargo-carrying capability. This solution had three components: a short-term fix with the KC-10;
a mid-term fix with the C-5B; and a long-term fix with the C-17.


If a future threat has considerable uncertainty, then a longer-term hedge approach may be the
better choice. People often decry the two-decade gestation period of the F-22 fighter, which
was originally developed to prosecute the air war against much larger Soviet forces. When the
Soviet Union dissolved, the F-22 program was delayed but was still retained as a hedge in case
a similar threat was resurrected. The Soviets did not reappear, and the F-22 is now being
procured in much smaller numbers than originally planned. Now, we will not debate whether
this is by design or by accident, but the point is that recommending that the entire force be
remade at great expense to address a challenge that may appear in 20 years is not particularly
prudent when a hedged approach is available.
If you are proposing materiel solutions, you probably need to consult with the acquisition
community on when those solutions could be fielded, assuming prompt budgetary action. Many
things have to happen in any acquisition, and all of them take time that you need to estimate.
Too many DOD analyses present recommendations that are simply Christmas lists. They throw
every solution imaginable at a problem, propose gold-plated systems, ask for ill-defined, magic
technologies, recommend concepts that only apply in a narrow (or even fanciful) set of
operational conditions, or outline solutions that are only realizable after three decades of
development. We assume that, if for no other reason than professional pride, you do not want
your CBA to become one of these analyses. The only way to ensure that is to seriously assess
affordability, feasibility, and strategic responsiveness.

8.5. Describing Collections of Options Via Portfolios


You may feel that, if you follow our advice, your CBA will accumulate what appears to be
an unmanageable set of disparate options. The various alternatives will include a dog's
breakfast of materiel approaches, policy alternatives, and perhaps even one or two game-changing
capabilities, all with differing resource demands, future availabilities, technical risks,
and contributions to your MOEs. So how do you integrate all the information you have
accumulated and produce a coherent set of options?
The answer is that you need to come up with yet another organizing framework for your CBA
that allows you to group sets of options coherently. We call these sets of options portfolios, and
they are mutually supporting sets of recommendations that are related by a common theme.
One obvious portfolio framework that you should examine in all cases is one based on total
solution cost. This framework would contain portfolios that consider three cases:

best obtainable solution if costs are unconstrained;

best solution that neither increases nor decreases total costs; and

best solution that achieves some specified decrease in costs.

Now, the situation may be such that your CBA does not have to consider solutions that
decrease costs. After all, the leadership wanted your mission area examined, and they likely
wanted it examined because they jointly committed to the need to improve it. But the first two
portfolio options should be an output of your FSA regardless. You have to give good advice on
the upper bound of realizable solutions, which is the cost-unconstrained case. You should also
give good advice on the best cost-neutral solution, as this, coupled with the cost-unconstrained
case, gives the leadership an estimate of the range of payoffs possible with additional
investments. This is not the oft-criticized budget-driven approach to analysis, where the
objective is to pay some bill by cutting capability in a mission area. Instead the idea is to
characterize the spectrum of investment options and operational payoffs.


In addition, creating and analyzing the cost-decreasing case has the benefit of characterizing
where the bulk of the costs lie in the legacy force, and whether those costs are commensurate
with their contributions to your mission area. As an example, the DOD established continual
fighter orbits over most major US cities after the attacks of September 11, 2001 to allow for
rapid intercepts of any additional hijacked airliners. Clearly, a modern fighter such as an F-15C
is grossly overdesigned for shooting down an airliner, and a CBA on this operational need
would likely recommend a completely different portfolio of approaches if time were available
to change procedures (such as passenger screening), modify existing systems (such as putting
armored doors on crew compartments of airliners), or even procure inexpensive air-to-air or
surface-to-air intercept capabilities. The point is that trying to employ legacy forces on the
cheap to accomplish certain operations may reveal a substantial mismatch between force capability, operating cost, and need that you wouldn't have detected otherwise.
Another useful organizing framework addresses the uncertainty of having critical capabilities
that are outside the scope of your CBA. For example, the DOD has committed to fielding the
Global Information Grid (GIG) as a way to share information. Unfortunately, we don't know
when (and perhaps if) the GIG will be realized, and your options are likely very different
depending on the GIGs availability. This could lead to three portfolios:

GIG assumed available;

GIG assumed available, but solutions hedged against GIG not being available; and

GIG assumed unavailable.

Other frameworks could revolve around strategic risk guidance across future security
challenges (accept risk in one area to improve performance in another), choice of employment
domain (ground, sea, air, space, or cyberspace), or even force basing posture (use CONUS-based or overseas-based forces). Our point is that choosing a few of these frameworks makes it
much easier to assemble sets of options that are linked to overarching themes.
This leaves the question of how to assemble a portfolio option given a particular framework.
This has to be a part of your FSA analysis plan, because you will quickly discover why Wall
Street investment managers are paid so much money to assemble mutual fund portfolios. It is
not an easy job, particularly if you are trying to find the best mix of options across multiple
MOEs and affordability, risk, and responsiveness criteria.
As a result, you should seek a methodology that looks at lots of options. Too often, large DOD
studies devolve to a slide that recommends three possible courses of action, one of which is
obviously preferred, one of which is an obvious throwaway, and the last is included to satisfy
the preferences of some particular senior leader or influential group. While senior leaders and
influential groups can only be ignored at your own peril, their views should not artificially limit
your ability to consider a large number of combinations. Analysts in the optimization
community routinely solve problems with tens of thousands of variables and thousands of
constraints, so computational capability is not the issue.
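To make the point concrete, here is a minimal sketch of what such a search looks like. It is written in Python with entirely hypothetical option names, costs, and MOE contributions; a real FSA would draw these numbers from your affordability and effectiveness work, and would likely hand the problem to a proper integer-programming solver rather than use brute force. The sketch simply enumerates every combination of candidate options that fits under a notional cost cap and keeps the highest-scoring portfolios.

from itertools import combinations

# Hypothetical candidate options: (name, cost in $B, notional MOE contribution).
# These values are illustrative only; a real CBA would use FSA cost and effectiveness estimates.
OPTIONS = [
    ("restart-legacy-airlifter", 4.0, 30),
    ("new-airlifter-program", 9.0, 55),
    ("convert-tankers", 2.5, 18),
    ("charter-sealift", 1.0, 8),
    ("pre-position-stocks", 1.5, 12),
    ("conops-change-only", 0.2, 6),
]

def best_portfolios(options, cost_cap, keep=3):
    """Score every subset of options that fits under the cost cap; return the top few."""
    scored = []
    for r in range(1, len(options) + 1):
        for subset in combinations(options, r):
            cost = sum(item[1] for item in subset)
            if cost <= cost_cap:
                moe = sum(item[2] for item in subset)
                scored.append((moe, cost, [item[0] for item in subset]))
    scored.sort(key=lambda entry: (-entry[0], entry[1]))  # best MOE first; cheaper portfolios break ties
    return scored[:keep]

for moe, cost, names in best_portfolios(OPTIONS, cost_cap=10.0):
    print(f"MOE {moe:3d}  cost ${cost:4.1f}B  {names}")

Even with a few dozen options this kind of exhaustive search, or an equivalent formulation handed to an off-the-shelf optimization package, runs in seconds; the hard part remains defining credible costs, MOE contributions, and constraints, not the computation.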
Using such approaches, however, puts you firmly into the realm of abstract tools as shown in
Figure 14. So, you will have to invest some of your time into understanding how the contents of
your various portfolios were generated if you choose one of these approaches.
Here's some advice on inspecting the contents of solution portfolios.

How is the portfolio divided among special-purpose and general-purpose solutions?


An infantry battalion can accomplish a large variety of missions. It is not, however, the
best choice for disarming a captured nuclear weapon, as that is the specialty of a very
small number of highly trained teams. If your portfolio contains all special-purpose
investments, you may be in danger of producing a set of solutions that are optimized only
for particular situations, and are not useful otherwise. Conversely, if your portfolio
contains nothing but general-purpose solutions, you may be in danger of producing a
team full of decathletes: competent, but likely to be beaten by a team that contains some
number of specialists.

How much is the portfolio at odds with current investment trends? If your portfolio
calls for, say, a doubling or tripling of funding in a mission area that has not yet resulted
in an actual operational disaster, you will have a difficult time making your case. Even
when risks are understood (such as what would happen if the levees protecting the city of
New Orleans failed in a hurricane, as they did in 2005), it is very difficult to overcome a
long history of no disaster.

Do you have a portfolio that largely recommends realizable non-materiel solutions?


You should produce at least one portfolio that does not recommend a new acquisition
program. This will bound the amount of improvement we can realize without new
materiel, and also satisfies the JCIDS requirements to analyze alternative CONOPS in the
FSA.

One type of CBA that may present a challenge for the portfolio approach is one that proposes
an operational concept, such as seabasing. But, even these types of CBAs contain different
options. For example, a possible theme for alternative seabasing portfolios could be organized
around the question of what type of force to seabase (SOF, ISR, fixed wing aviation, or a full
Marine Expeditionary Force). This framework would result in multiple options, and would
work well in bounding the available seabasing alternatives.
The final challenge with assembling a portfolio is that you are selecting a set of options that
presumably optimize something. "That's easy," you say; "I'm trying to optimize the likelihood of
mission success." But, recall that way back in the FAA you developed a set of measures to judge
the value of a particular CONOPS. Those measures are what you should be using to evaluate
the mission effectiveness of your portfolios.
Some of your measures will be at odds with each other. For example, the option that minimizes
expected collateral damage may have a low lethality. The existence of such conflicting aims is
why analysts do so-called trade studies; they use these studies to find out how various
operational goals trade off against each other.
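As a notional illustration of such a trade study, the short sketch below (Python, with made-up option names and scores) filters a set of candidate options down to the non-dominated ones when two measures pull in opposite directions, in this case expected collateral damage (lower is better) and lethality (higher is better). Any option that some other option beats on both measures drops out; what remains is the trade space you would actually present.

# Hypothetical options scored on two conflicting measures:
# (name, expected collateral damage [lower is better], lethality [higher is better]).
CANDIDATES = [
    ("precision-standoff", 2, 60),
    ("massed-fires", 9, 95),
    ("armed-overwatch", 4, 70),
    ("nonlethal-systems", 1, 20),
    ("mixed-package", 5, 90),
    ("legacy-dumb-bombs", 8, 75),
    ("unguided-rockets", 7, 50),
]

def dominated(a, b):
    """True if option b is at least as good as option a on both measures and strictly better on one."""
    no_worse = b[1] <= a[1] and b[2] >= a[2]
    strictly_better = b[1] < a[1] or b[2] > a[2]
    return no_worse and strictly_better

pareto = [a for a in CANDIDATES if not any(dominated(a, b) for b in CANDIDATES)]
for name, damage, lethality in sorted(pareto, key=lambda c: c[1]):
    print(f"{name:20s} collateral damage {damage}   lethality {lethality}")

The point of presenting the frontier rather than a single "best" option is exactly the one made above: it shows how the operational goals trade off against each other, without you having quietly imposed a priority on the decision makers.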
Your natural response to this may be to interrogate the relevant decision makers on their
priorities. This almost never works, because:

you can't get enough time with the right decision makers to unambiguously determine
their (multidimensional) priorities;

the decision makers will disagree on what the priorities should be;

the decision makers don't have well-formed priorities (because if they did, they wouldn't
have asked you to do the CBA!); and

there is no guarantee that they won't reject your recommendations, even if you used their
priorities.

A better approach is to examine your measures and try to discover which sets of priorities cause
the recommended portfolio to change. Returning to the Iranian hostage rescue: prior to Desert One,
the emphasis was on planning secrecy and minimal force size. In the subsequent planning for
another attempt after Desert One, these imperatives were much less important, and a CBA
aimed at such a scenario would likely recommend a much different approach.
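One lightweight way to discover those break points is to sweep the priority weights across their range and note where the preferred option flips. The sketch below does this in Python for three hypothetical portfolios scored on two notional measures (secrecy and combat power); in a real CBA the portfolios and scores would come from your FSA, and you would report the weight ranges over which each recommendation holds rather than a single answer.

# Hypothetical portfolios scored 0-100 on two measures (higher is better on both).
PORTFOLIOS = {
    "small-covert-raid": {"secrecy": 95, "combat_power": 30},
    "reinforced-task-force": {"secrecy": 55, "combat_power": 75},
    "large-joint-operation": {"secrecy": 20, "combat_power": 95},
}

def preferred(weight_on_secrecy):
    """Return the portfolio with the best weighted score for a given priority split."""
    def score(p):
        return weight_on_secrecy * p["secrecy"] + (1 - weight_on_secrecy) * p["combat_power"]
    return max(PORTFOLIOS, key=lambda name: score(PORTFOLIOS[name]))

# Sweep the weight on secrecy from 0 to 1 and report each point where the recommendation changes.
previous = None
for step in range(0, 101, 5):
    weight = step / 100
    choice = preferred(weight)
    if choice != previous:
        print(f"secrecy weight {weight:.2f} and above: prefer {choice}")
        previous = choice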


Note that your measures in and of themselves can provide a framework for portfolio options.
You could have, say, a minimal collateral damage portfolio, a maximum lethality portfolio, and
a minimum force size portfolio. This approach also directly addresses the issue of conflicting
operational goals, because you (and your target audiences) can see how the solution choices
change in the portfolios as the measures change.
The construction of a portfolio requires weighing the major components of the possible
solutions: their mission effectiveness, their affordability, their technical risk, and their strategic
responsiveness. The frameworks you choose will dictate how you use these components in
constructing portfolios, and you should be able to formulate several interesting portfolios that
contain a mix of approaches, such as the airlift example we discussed above.

8.6. Excesses
At various places in the formal JCIDS documentation, you will find references that require you
to identify things called redundancies and overlaps. We advise that the appropriate place to
propose capabilities that are excess to needs is in the FSA. It is not until you know the needs,
the spectrum of realizable options, and the resource implications of those options that you can
judge something as being unnecessary.
Admittedly, this is very dangerous territory. Previous Joint Staff requirements processes only
commented on needs, and did not offer up offsets (a polite term for budget sacrifices). JCIDS,
however, mandates that the issue of excesses be examined, so you will have to consider them.
The advice above on portfolio frameworks also gives you a tenable way to bring forward
candidates for excess. If you build a cost-neutral or a cost-saving portfolio, something in the
programmed force will likely become an offset to make room for a more efficient capability. So,
that framework will automatically identify excess candidates.
Using risk guidance from the strategic documents is another way to present a framework for
excesses. For example, DOD funding for irregular warfare capabilities has increased
dramatically since 2001, while several traditional warfighting systems have been cancelled or
curtailed. The implication is that strategic priorities have changed what we view as excesses in
general-purpose forces.
Your challenge is that you have to stay within the scope of your CBA. The current state of
JCIDS is such that you are not allowed to declare excesses in other mission areas beyond the
scope you defined in the FAA; you are not allowed to propose gutting the Defense Commissary
Agency to pay for new capabilities that your CBA needs. Nonetheless, you will be looking at a
broad range of force elements and systems, and if you defined your scope correctly, you will
have a large trade space. Some of these forces and systems are only applicable to your mission
area, so you can comment authoritatively on whether they are redundant.
The more difficult problem is judging general-purpose capabilities. Some time ago, the DOD
opted to remove nuclear weapons capability from the B-1B bomber, as it was deemed to be
excess to that particular mission. This decision did not, however, make the B-1B excess in
general, as it has considerable conventional capability. To declare it as excess in general would
require examining its conventional capabilities.
This unfortunately leaves the defender of any capability a trump card. As long as program
defenders can demonstrate that their program has utility outside of your CBA's mission area, they
can claim you can't brand it as excess without further study. Your best option in this case is to
identify it as excess to your mission area, and let the JROC decide whether to commission the
additional study.


Since it is easy to construct a situation where the thing you believe is excess is exactly the
thing we had to have, we have stressed assessing a broad range of operational situations. If you
have an adequate scenario sample, and the resource in question doesn't compete favorably in
any of them, then you will have a strong case. Similarly, if the resource only provides minimal
improvements in cases where the strategic guidance says we can take risk, it may also be
excess, particularly if it is expensive.
It is unlikely that you will get concurrence on excesses from your working group. Everything in
the DOD is supported by someone, and they will execute their counterfire plan as soon as they
hear you may threaten their program. As a result, you will have to do excess determination
within your core team, bring your recommendations forward to your leadership, and do
considerable planning on how to introduce your results so they do not suffer crib death.
In these cases, disagreement is unavoidable, so don't waste time trying to avoid it. Instead,
ensure you have solid, defendable arguments for excesses, and ensure that your arguments
make it to the senior leaders.

8.7. The Overall FSA Process


Figure 16 outlines the FSA analytic process we have discussed. The six major tasks are generating
alternatives, assessing the effectiveness of alternatives, assessing the feasibility of alternatives
(affordability, technical risk, and strategic responsiveness), portfolio planning and generation,
identifying game-changing capabilities, and excess analysis.
Note that a game-changing capability is only included in a portfolio if you can
come up with a reasonable CONOPS for it; otherwise, the recommendation would be for
experimentation.
You could do the portfolio planning tasks earlier in the CBA if you have the time available.
However, leaving those tasks until the FSA gives you the maximum amount of information
about alternatives, and doesn't prematurely commit you to a set of portfolio frameworks.


Figure 16. Overall FSA task flow. [Flowchart: FAA measures and FNA needs feed Alternative Generation (policy, materiel, and other non-materiel alternatives); Alternative Feasibility (affordability, technical risk, strategic responsiveness); Portfolio Planning and Generation (choose portfolio frameworks, formulate and finalize the construction approach, generate portfolios for each framework); Game-Changing Capabilities (identify candidates, investigate CONOPS, scenario analysis of new alternatives, recommend experimentation and research); and Excess Analysis (identify excess candidates internally, formulate the bureaucratic plan for excesses), ending with staffing of portfolios and excesses.]


9. The Quick Turn CBA


In some cases, you may be tasked with what has become known as a Quick Turn CBA. As opposed
to the quick look we recommend for a more drawn-out effort, this is a CBA that must be done on a
very tight timeline, normally 30 to 60 days. Clearly, you will have to make substantial adjustments
to produce an assessment this quickly, so we will offer some advice on what can be done in these
cases.
In addition to this section, we include an appendix that describes a Quick Turn CBA done on
biometrics in mid-2006. This provides a useful case study for such an assessment.

9.1. Typical Reasons for an Accelerated Assessment


So why would anyone demand that a CBA be done in 30 days? Since this is a central part of the
"why are you doing this CBA?" question described in Section 2.1, you will again have to devote
some time to understanding why the deadlines are so tight. Some common reasons are as
follows.

To address an imminent budget or programming action. A common reason for a Quick
Turn CBA is that a funding decision of some kind is looming, and those making the
decision want one last unified look at the issue. Such cases will normally have been
simmering for a long time, and there will be lots of supporting information available. Your
challenge will be to find and exploit the best of this information.

To break a bureaucratic logjam. Large bureaucracies like the DOD tend to stall new
actions, so frustrated senior officials occasionally sweep aside normal procedures and
commission a special effort to get something assessed when the organizations that normally
do the work cannot do so. In such cases, you should find out why those organizations could
not deliver, and keep those reasons in mind as you execute your assessment. You will
probably also have to rely on information those organizations have developed.

To react to an unexpected budget or program event. The collapse of an acquisition
program or some other radical change in the plan will tend to paralyze the larger process
that produced the plan in the first place. Under these circumstances, someone has put you in
charge of recommending the appropriate triage. This will be a very challenging assessment,
because the range of options will be broad and you will also have to consider the ripple
effects of the unexpected event.

To address an emerging need. While the DOD has a separate Joint Urgent Operational
Need process for current warfighting issues, senior officials may decide that immediate
examination is required to move the DOD towards finding enduring solutions. The CBA
described in Appendix A is such a case. Such assessments normally do not require much
FAA or FNA work, because the shortcoming has already been demonstrated. However, it
may be very challenging to come up with enduring solutions on a short timeline.

To settle a disagreement. The DOD contains many large organizations which periodically
find themselves at odds with each other. In this case, the Quick Turn CBA is a form of
arbitration. Of course, the challenge here is that you are in the middle, and you don't want
to be crushed when these large bodies collide.

To pull together a set of disparate examinations. The division of labor in the DOD
sometimes makes it impossible to conduct an integrated examination of an issue. Consider,
for example, the issue of distinguishing between friends and foes in combat. While this is
everyone's problem, it does not belong to any particular Service, and has only recently
been examined in any sort of integrated fashion. The challenge in this type of CBA is
finding all the pieces and then finding a way to assemble them.

9.2. Scoping, Downscoping, and Negotiating the Objective


One thing you should do for a Quick Turn CBA is to negotiate the scope and objectives of the
effort at the start. You do not have time to start over, so if you begin facing the wrong
direction, you'll never get back.
The only foolproof way to do this is to write some sort of memo in English, not PowerPoint,
that describes your understanding of the CBA, what you plan to deliver, and how, and to send it
to whoever tasked you. This will often be difficult, because short-fuse tasks that come from on high are
inevitably spun through multiple levels of management before they get to you. You may not be
able to get an audience with the originator, so your only real option is to write something down
and pass it back up.
So what do you write?
This is not a study plan; in fact, what you want is more akin to a five-paragraph order. In that
vein, here's a possible outline:
BACKGROUND (i.e., Situation).
Circumstances leading to the tasking. Try to use the taxonomy in Section 9.1 to describe
why this is being done.
Tasking events. Write a sentence or two that describes how the job landed on you. This
will help expose any inaccuracies in message transmission.
Sources of uncertainty. Describe what is currently not known (or not sufficiently proven
or disproved) that prevents a decision.
TASKING (i.e., Mission).
Questions to be answered. This is also known as essential elements of analysis in some
circles. List the major questions.
Decisions being informed. Write what decision you believe you are informing. Will this
assessment inform an imminent budget action? Provide language for upcoming strategic
guidance? Save or kill a program?
Timeline. Write a sentence or two on when the deliverables are due, and to whom.
ASSESSMENT PLAN (i.e., Execution).
Scenarios and functions considered. List the scenarios you will use and the functions you
will examine. This section specifies the operational scope of the assessment.
Alternatives considered. Describe how you will generate alternatives, or list the
alternatives you believe you have been given. This will frame the solution scope.
Operational evaluation methodology. Describe how you will do this (or if it is even
required). Will you use expert judgment, tabletop wargames, or ?
Technical risk methodology. Say whether this is required, and if so how you plan to do it.
Costing methodology. Say whether this is required, and if so how you plan to do it.
Portfolio methodology. Say whether this is required, and how you plan to address it if that
is the case.


Schedule. List the major phase points only; there will not be more than two or three of
them in a 30- to 60-day effort.
RESOURCES (i.e., Service Support).
Organizations supporting the working group. List the people you'll use and their
providing organizations, if known.
External resources. Estimate the funding necessary, or describe what tasks you are
diverting resources from to support the CBA.
Classification. State the classification level of the CBA, and whether obtaining higher-level accesses (or people who already have those accesses) is a limiting factor to meeting
the deadline.
OVERSIGHT (i.e., Command and Signal).
Governance. List the groups (hopefully not more than two) that will oversee your
assessment.
Communications. List the final products to be delivered (briefing, report).
You should write this immediately based on what you know, and you should be able to fit the
initial versions into two or three pages. If you have some part of your team assembled, work it
over with them. Then, walk it back up the tasking chain as far as time and bureaucratic
constraints will allow.
Typically, you will have an initial meeting where you find out you're running a Quick Turn
CBA, with an invitation to come to the next meeting to finalize the tasking. If you're really
agile, you'll show up at the follow-on session with a document like the one outlined above.
Working over the words will prevent a great deal of misunderstanding, which you cannot afford
in an accelerated effort.
Also, you can maintain this document as a management tool for your working group.

9.3. Forming a Team


With such a short timeline, you will have to build a team while you are negotiating the
assessment. Furthermore, you will have to be economical about who you bring in, and people
who can cover more than one area of expertise will be very valuable.
Referring to the list of expertise areas in Section 2.6, you will probably not have to dedicate
people to providing bureaucratic agility and study management, as the person who
commissioned your work is probably far up in the DOD hierarchy. When the Vice Chairman or
an Under Secretary of Defense tells you to do something in 30 days, you can simply ignore
many protests you might otherwise have to address.
This still leaves a number of expertise areas that have to be covered. Suppose you can get rid of
the need for a dedicated bureaucratic navigator and a study manager, and you (the study lead)
will be the communicator. Doctrinal knowledge will be essential, as you have to have access to
real expertise on how things are currently done. Also, the right doctrinal expert will always be
able to estimate the operational performance of alternatives from his experience, so you will at
least have that.
You still may need analytical ability, cost estimation, technical knowledge, adversary
knowledge, and policy knowledge. Of these, the need for technical and cost expertise will
depend on how much uncertainty is associated with the alternatives (particularly materiel
alternatives). If the reason you are compressing the CBA is to settle a disagreement or to break
a logjam, there are likely competing views on the availability and costs of the alternatives. You
may be able to do the assessment using these views as bounds (e.g., what should we do if the
thing actually costs X), but if these parameters are truly unknown, you will have to devote some
time to estimating them.
The issue is similar for adversary and policy expertise. If your CBA is aimed at a particular
scenario and a specified enemy with well-understood policy and force employment boundaries,
you may not need dedicated experts. We warn you, however, that the choice of opponent and
operational situation drives the conclusions, so be careful about dismissing these needs too
quickly.
The question of streamlining your organization really boils down to this: where is the
uncertainty? What is it about this decision that we need to investigate? The answers to these
questions really define what you need on your team and how much you can accomplish in a
short timeline.
Furthermore, the type of analytic work that can be done is dictated by the area(s) of uncertainty
and the timeline. If you attempt to do analytics that involve operational evaluation or portfolio
construction, then you will need a lead analyst who is very creative. In particular, you will not
have time to execute the normal way of estimating operational outcomes (unless you are in the
rare position of being able to find and exploit work that has already been done). Mass won't
help, either; adding more analysts will slow you down. Instead, insist on getting someone who
has shown he can do the job under these conditions.
It is highly unlikely that the decision makers who task you with the assessment will give you a
team. It is also unlikely that the right team is in place and available (or even known) to you. So,
you will have to conduct some sort of draft. Now, here is where you can exploit your chain of
command, as they have been around longer than you and generally have a broader range of
contacts. What you should do, once you have some understanding of the task, is write down the
types of people you need as part of your five-paragraph order. Then, take that list back up the
chain and see if you can get help in getting those types of people. Even if you are very
confident that you know who you need, you should try to exploit your seniors' knowledge of
their organizations to get the right expertise.
You will also have to control the number of people who want to subscribe to your study. Most
short-fuse efforts have high priority, and will attract a large number of rubberneckers. In this
case, you will have to be brutal and combine the working group and study group into one team,
and simply banish spectators that are neither contributors nor decision makers. The process we
recommend in Section 3 of having a study group produce and a working group review will be
problematic; if you cannot avoid such an arrangement, at least force the working group to be as
small as possible, and do not produce extra materials for them beyond what you are producing
in the course of the effort. Unless your needs for functional skills dictate otherwise, you should
not allow more than one representative on your team from any external organization.
An aside: directed telescopes and theater critics. Historian Martin van Creveld coined the
term "directed telescope" to describe a commander's use of a special, trusted officer or agent to
bring him information directly (see Griffin [1991] for a complete discussion). If you are
assessing a hot issue on a short timeline, you may end up with such a person on your team. If
that person is serving the decision maker who commissioned the CBA, then you may have to
deal with some difficult issues.
First, recognize that the best arrangement is for you to be in regular contact with the decision
maker, rather than someone who ostensibly is working for you but instead is someone else's
agent. If that arrangement is impossible, all is not lost. After all, it is likely that the directed


telescope will give you more direct access and faster feedback than you could get otherwise.
So, see if you can make the situation better support your assessment.
Second, you need to make sure that you are in the lead, and that the directed telescope does not
take over. Decision makers will usually appoint someone who does not attempt this sort of
thing, but occasionally you will encounter a liaison who feels compelled to wave his patron's
gun in your face. In these cases, you will have to fall back on your experience to reassert your
authority.
The notion of a "theater critic" is less well-documented, but is nonetheless a substantive issue.
This situation arises when some organization refuses to provide representation for your Quick
Turn CBA. Ordinarily, this would be fine. But, if that organization also has veto authority over
your results, you have a theater critic: a person, group, or organization that does not participate
in the production, but will judge, and possibly kill off, the finished product.
Almost any organization can opt to play theater critic, and you will not have the time to
maneuver, cajole, or shame them into participation. What you can do, however, is to offer them
one or two progress briefings during the course of your assessment. This will eat into your
already-tight schedule, but will allow you to expose issues that you might not see coming until
the end game (when it is too late).

9.4. Working Arrangements


You will naturally have to find dedicated work space for your team, along with computer
support, phones, white boards, ready sources of caffeine, and all the other things you require
when crashing on a project. But, a larger issue is how you will operate your team.
We cannot give you much scientific advice, because the best way to make your team work is a
complex function of the topic and the participants. In general, though, you have an early choice
as to whether to attempt some reasonable division of labor or to operate your CBA team as a
committee of the whole. Most of us would immediately opt for the former, as we know that
difficult jobs generally must be divided up.
Unfortunately, you probably don't know much about the people you've gotten, so you won't
have a very good idea of who can really do what. Under those circumstances, wasting a day or
two in discussions that don't seem to result in much isn't really a waste. Instead, view those
sessions as a way to find out the abilities and beliefs in the group, as well as a way to get people
used to each other. Now, we are not suggesting you begin with some sort of team-building
exercise; while that may be very effective, it may not suit your style (much less the styles in
your team). But, you will have to find some way to begin functioning, and planning for a day or
two of loosely-structured debate isn't unreasonable. You will probably find it much easier to
divide the tasks among the group after such an exercise.
There is, however, one arrangement you should set up immediately, and that is having someone
function as a recorder for all the meetings. You will be moving very quickly, and it will be very
difficult to remember at the end how something was decided. Save yourself some trouble and
assign someone to man a keyboard for every session to capture the essentials of the discussions.
The usual summary briefing slides will NOT capture this information, and you will invariably
need it as a reference when you try to build your final story. You don't want a stenographer;
you need someone who can summarize the discussions in English, and who can detect and
document when important conclusions and decisions are made.
Having such a diary also helps your team. As much as you would like to have total control of
your people, the reality is that you will have some important members of your group that


simply cannot work full-time on your effort. If you have summaries, they can scan them and
catch up on what happened while they were working elsewhere.

9.5. Designing to Time


Figures 9, 15, and 16 suggest tasks and flows for a normal CBA. But, since a Quick Turn CBA
is highly compressed, the natural question is what can be skipped or accelerated.
Before you panic, go all the way back to Figure 3, which reduces a CBA to the most pedestrian
representation possible. Can you skip any of these questions? If so, then you can concentrate on
the questions that you do have to address. In what follows, we discuss how each of the major
parts of a CBA may be compressed, under the assumption that your Quick Turn CBA may have
to consider any or all of them. But, start with Figure 3 before getting into too much detail.
It is likely that whoever tasked your Quick Turn CBA felt there was enough information on
tasks, conditions, and standards that you wont need to do the activities in an FAA. Regardless
of the truth of this belief, you must get concurrence at the start on two things:

scenarios to be considered (specify conditions); and

functions to be addressed (derive tasks).

You will still need a task structure of some kind and a set of measures, but you don't need to
have those perfected at the start.
Figure 17 below shows a possible adjustment of the FAA process. It makes several steps
parallel; more importantly, it assumes that the scenarios and functions are settled in the initial
negotiations over the assessment, along with some guidance on relevant standards.
Consequently, you will specify, rather than coordinate, what operational cases and functions
will be assessed, and you will be drafting task structures and final measures.
Note that gathering doctrinal experts is an important first step. Getting them will allow you to
draft a five-paragraph order that is coherent enough to discuss with your management.
Figure 18 shows how an FNA might be compressed. You still will have one or more
operational situations that you are considering, but you will not have to go through a lengthy
reconciliation step with a number of outside organizations. In a Quick Turn CBA, your working
group will do the work and then move on to the next situation.
Remember that the FNA is designed to evaluate doctrinal approaches using programmed
forces. If this evaluation has occurred in a prior study or in an actual operation, your FNA
evaluation just consists of citing that work and justifying that the work is valid and applies to
your assessment. Furthermore, if the needs are specified as part of the tasking, you may not
need to do an FNA at all.
Figure 19 shows how you might collapse an FSA to accommodate Quick Turn timelines. You
will note that it doesn't appear to be compressed much; in fact, the only task that drops out is
the need to staff excess candidates.
But, several major tasks may not apply to your Quick Turn CBA. For example, there may be no
need to do excess analysis. In addition, the issue of portfolios may collapse to recommending
one alternative from several choices, so the entire need for portfolio generation under different
frameworks disappears. Also, you may not uncover any game-changing capabilities, or you
may decide that the ones you have found can be executed adequately with existing CONOPS.


Figure 17. Example Quick Turn CBA FAA task flow. [Flowchart: receive tasking; recruit doctrinal experts; draft and revise the five-paragraph order; specify scenarios, functions to analyze, and conditions; derive related military objectives, capabilities, and tasks; draft the overarching task structure, relevant attributes, and measures; develop standards.]

The important point in this entire discussion is that the Quick Turn CBA is not event-driven; it is time-driven. Your challenge is, first, to decide which tasks need to be done and,
second, to divide your available manpower and calendar time among those tasks. As a starting tactic,
you may take all these tasks and group them into three categories:

tasks that do not need to be done in the Quick Turn CBA;

tasks that have already been answered (either by management direction or
previous study); and

tasks that the Quick Turn CBA must address.

Another aside on group methods. Recall that in Section 7.2 we warned against using group
methods as the primary means of estimating outcomes, causes, and needs. Unfortunately, the
time-driven nature of the Quick Turn CBA may drive you to do exactly what we warn against,
because your schedule won't allow you to do anything else. So, how do you reconcile this
conflict?
First of all, if you have clear, logical, qualitative arguments for your causes, needs, and
recommendations, you should use them. In this case, you have to make the case using short
papers and not briefing slides, because slides simply do not allow you to transmit enough
information to make a logical argument in a short document (for more on this, see Tufte
[2003]). Also, assigning scores to some sort of qualitative argument just to make the analysis
appear quantitative usually obscures the argument. Worse, if your target audience detects that
you did this, they will more often than not conclude that you are trying to deceive them. They
understand that you are operating under tight deadlines, so there is no need to add unnecessary
numerological veneer. The Gettysburg Address worked just fine without stoplight charts or
weighting schemes.
If you do rely on group methods such as value-focused thinking or the analytic hierarchy
process, be very clear about what you used those methods for. Were you using them to estimate
combat outcomes? Individual unit or system performance? Importance weights on scenarios or
mission areas? There is nothing more frustrating for the DOD leadership than seeing a red-amber-green chart or a priority list and not being able to understand how the results were
constructed, much less how they support recommendations for action.
In general, you will have to be able to defend your conclusions on the following.

Performance drivers. What functions or situations are causing us problems?

Consequences. What situations lead to unacceptable outcomes?

Possible solutions. Which alternatives appear attractive from the standpoint of
performance, cost, and availability?

Final recommendations. Which collections of alternatives are worth recommending,
and what was the organizing principle for each collection?

Merely citing techniques will not suffice here. In any CBA, but particularly a Quick Turn CBA,
you will have to communicate and defend what you believe about the four points above.

Figure 18. Example Quick Turn CBA FNA task flow. [Flowchart: analysis preparation (collect and inspect performance data, select and finalize the analytical approach, choose the best-understood scenario); scenario analysis for each scenario (refine the FAA task structure for the scenario and CONOPS, execute operational analysis with doctrinal CONOPS, identify unacceptable outcomes, document causes from analysis, review and refine results); and needs development (derive needs in terms of the operational depiction, prioritize needs based on trends across scenarios and strategic guidance).]

We have some final advice on designing to time. First, even if you are given a 30-day deadline,
recognize that it will take at least one week to refine your final presentation with your
management and brief your oversight groups. For quick-turn issues going to the JROC, one
week appears, unfortunately, to be the minimum.
Second, if you plan on issuing some sort of data call to the Services or Combatant Commands,
this will also take at least a week. Now, if you are clever, you can be doing other work with
your study team during that week, so you can do some tasks in parallel. But the fact is that if
you must go out to the Combatant Commands, it will take them a few days to understand what
you want and give you coherent responses. Clearly, you can move faster and retain more
schedule control if you gather the information yourself. But, if you gather input by a staff
action, it will not happen overnight.
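If it helps, you can make this arithmetic explicit before you commit to a schedule. The back-of-the-envelope sketch below (Python, with hypothetical task names and durations) subtracts the presentation week and a data-call week from a 30-day tasking and checks whether the remaining "must do" tasks actually fit; the numbers are placeholders for whatever your own triage produces.

CALENDAR_DAYS = 30
PRESENTATION_DAYS = 7      # refining the final story and briefing oversight groups
DATA_CALL_DAYS = 7         # only if you must issue one; it can partly overlap other work

# Hypothetical triage of tasks for a notional Quick Turn CBA, with rough durations in days.
MUST_DO = {
    "alternative generation": 4,
    "operational evaluation": 8,
    "affordability estimate": 3,
    "portfolio construction": 4,
}

analysis_days = CALENDAR_DAYS - PRESENTATION_DAYS - DATA_CALL_DAYS
required_days = sum(MUST_DO.values())
print(f"Days left for analysis: {analysis_days}, days required: {required_days}")
if required_days > analysis_days:
    print("Over-committed: rescope, overlap the data call with other tasks, or drop something.")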

9.6. Communicating Results and Risks


Section 5.1 describes the Quick Look as a throwaway; that is, a pilot effort to help shape the
actual assessment. The Quick Turn CBA we are describing here may seem to you like a rushed
product that sacrifices quality for timeliness, which is exactly the sort of thing that Frederick Brooks
said you shouldn't give to a customer.
Nonetheless, the DOD's leadership will continue to ask difficult questions on tight timelines,
and make substantial decisions based on quick examinations. Being a loyal subordinate, you
will deliver a product on time. But, you will probably be uneasy about it. Consequently, it is
important in a Quick Turn CBA to communicate the risk of the assessment; that is, where it
might be wrong and what the consequences might be.
We are not recommending that you try to find some quantitative way to absolve yourself of
subsequent blame, or that you simply leave your findings on the decision maker's doorstep and
run away before someone discovers your work led to unintended detonations. Instead, we are
saying that you have to communicate which parts of your assessment are solid, which are
judgment calls (conclusions with some analysis tempered by experience), and which are highly
uncertain.

Figure 19. Example Quick Turn CBA FSA task flow. [Flowchart: the same structure as Figure 16 (alternative generation, alternative feasibility, portfolio planning and generation, game-changing capabilities, and excess analysis), with the step for staffing portfolios and excesses omitted.]
Since the Quick Turn CBA does not allow for a lengthy oversight and staffing process, you will
not have multiple independent reviews of your work and your conclusions. In most cases that is
exactly what the leadership wants, because they do not want another study that regresses to the
status quo. But, you will have to justify, either by the expertise of your group or the existing
work that you cite, any recommendations that substantively change the current program. The
only case where this will not apply is when the status quo is unexecutable, such as an
assessment done in the wake of a major program cancellation.
Finally, do not forget that you can recommend experimentation. We experiment to test theories
and avoid risky commitments, and it is perfectly legitimate for you to point out the cases where
this would be a good approach. Now, if your marching orders expressly forbid recommending
more study or experimentation, don't violate them. However, ensure you communicate that you
have not been able to reduce the uncertainty of an option sufficiently to make a clear-cut
recommendation, and ensure that your results contain sufficient information so that the decision
makers can decide whether to take the risk.
To conclude, remember that the circumstances that generate Quick Turn CBAs mean that a
decision is imminent, and will be made regardless of what you deliver (or don't deliver).
Consequently, a Quick Turn CBA is a large opportunity to influence the direction of the DOD.
As such, it is critical that you scope and negotiate the tasking quickly, recognize and plan
around the time-driven nature of the assessment, and communicate the strengths (and
weaknesses) of your results.


10. A Twenty-Question Summary

In this paper, we have put ourselves in your position, that of someone trying to execute a CBA. We
have covered what JCIDS is trying to do and how it connects to the overarching Defense Strategy and
the Joint Operations Concepts. We have translated what it asks for into an analytical framework that
should be directly applicable to your assessment. We have advised you on what talent you have to
procure, how to organize, how to execute, and where and when to expect resistance.
But, it has taken us quite a few pages to explain all those things clearly. So, as a summary, we offer
something common in the military: a checklist. What follows are the most important things you have
to do to conduct an effective CBA.
So, ask yourself the following questions as you fight your CBA campaign.
1. Do I really know why I'm doing this CBA?
2. Do I really understand the relevant strategic guidance, including the concepts?
3. Do I have the right people for my core team?
4. Do I know how I'm going to lead my core team?
5. Do I know how I'm going to function with an external working group?
6. Is my set of scenarios sufficient to cover the breadth of the strategy, and are they tied to
a relevant strategic framework?
7. Have I scoped my assessment in such a way that it both answers the questions and is
doable in a reasonable amount of time?
8. Do my operational depictions, task structures and measures flow directly from the
scenarios and CONOPS?
9. Does my quick look assessment provide an adequate view of the road ahead and bound
what I expect to conclude?
10. Do I have an analysis approach that is agile enough to consider a broad set of
alternatives, and does it account for the enemy's operational alternatives?
11. Does my analysis approach represent the contributions of the alternatives of interest
and estimate the measures of interest?
12. Have I collected a solid, defendable set of doctrinal approaches using the programmed
force?
13. Do I have solid, defendable estimates of the mission effectiveness of those approaches?
14. Have I correctly identified the causes and resulting needs from my estimated
operational outcomes?
15. Have I developed promising policy, materiel, and CONOPS alternatives?
16. Have I found any game-changing capabilities, and have I been able to describe feasible
CONOPS for them?
17. Do I have reasonable estimates of the affordability, technical feasibility, and strategic
responsiveness of my materiel alternatives?
18. Do I have a good set of alternative portfolio frameworks?


19. Have I generated a compelling set of portfolios for each framework that gives my
decision makers a real set of options?
20. Have I identified excess capabilities, and do I have a bureaucratic plan for bringing
them forward?
If the answers to all of the above are yes, you probably won't have to ask yourself the following
question:
In the future, do I want to tell people that I ran this CBA, or do I want to deny any
involvement?
We hope you find this checklist useful, if for no other reason than that your leadership will probably use
it. JCIDS asks for a great deal out of a CBA, but if you succeed, you will move the DOD forward in a
significant way.


11. References

Aldridge, Pete, Joint Defense Capabilities Study Final Report, Joint Defense Capabilities Study
Team, December 2003.
Brooks, Frederick P. Jr., The Mythical Man-Month: Essays on Software Engineering, Addison-Wesley, 1995.
Chairman of the Joint Chiefs of Staff Instruction 3170.01F, Joint Capabilities Integration and
Development System, December 2006.
Chairman of the Joint Chiefs of Staff Manual 3010.02B, Joint Operations Concepts, 1 December
2005.
Chairman of the Joint Chiefs of Staff Manual 3170.01C, Operation of the Joint Capabilities
Integration and Development System, December 2006.
Crissman, LTC Doug, The Joint Force Capability Assessment (JFCA) Study and the Development
of Joint Capability Areas, briefing, 7 March 2005.
Department of Defense Architecture Framework Working Group, DOD Architecture Framework,
Vol. 1: Definitions and Guidelines, 30 August 2003.
Department of Defense, Defense Acquisition Guidebook, http://akss.dau.mil/dag/, 24 July 2006.
Department of Defense Directive 8620.1, Data Collection, Development, and Management in Support
of Strategic Analysis, 6 December 2002.
Department of Defense Instruction 8620.2, Implementation of Data Collection, Development, and
Management for Strategic Analyses, 21 January 2003.
Department of Defense, Global Strike Joint Integrating Concept, 10 January 2005.
Department of Defense, Rescue Mission Report [Holloway Report], August 1980.
Department of Defense, Seabasing Joint Integrating Concept, 1 August 2005.
Griffin, Gary B., The Directed Telescope: A Traditional Element of Effective Command, Combat
Studies Institute, U.S. Army Command and General Staff College, July 1991.
JCS J-8, SPG-Directed Planning Task: Integrated Architectures, briefing, July 2004.
Joint Requirements Oversight Council, JROCM 062-06, Modifications to the Operation of the Joint
Capabilities Integration and Development System, 17 April 2006.
Joint Requirements Oversight Council, JROCM 199-03, Joint Forcible Entry Operations Study
(PDM II), 20 October 2003.
Kirkwood, Craig W., Strategic Decision Making: Multiobjective Decision Analysis with
Spreadsheets, Duxbury Press, 1997.
Loerch, Andrew, and Larry Rainey (ed), Methods for Conducting Military Operational Analysis,
Military Operations Research Society, to appear April 2007.
Office of Aerospace Studies (OAS/DR), Analysis Handbook: A Guide for Performing Analysis
Studies for Analyses of Alternatives or Functional Solution Analyses, Air Force Materiel Command,
July 2004.
Pirnie, Bruce, and Sam Gardiner, An Objectives-Based Approach to Military Campaign Analysis,
RAND National Defense Research Institute, 1996.


Ryan, Paul B., The Iranian Rescue Mission: Why It Failed, Naval Institute Press, Annapolis, 1985.
Secretary of Defense, Operational Availability (OA)-05 / Joint Capability Areas, memo, 6 May
2005.
Secretary of Defense, Requirements System, memo, 18 March 2002.
Secretary of Defense, National Defense Strategy of the United States of America, March 2005.
Tufte, Edward R, The Cognitive Style of PowerPoint, Graphics Press LLC, Cheshire, 2003.
Washburn, Alan R., Bits, Bangs, or Bucks? The Coming Information Crisis, PHALANX, Vol. 34,
No. 3, September 2001.


12. List of Acronyms

AoA ................................................................................................................... Analysis of Alternatives


CAIG ................................................................................................ Cost Analysis Improvement Group
CBA........................................................................................................ Capabilities-Based Assessment
CCJO .......................................................................................... Capstone Concept for Joint Operations
CJCS.............................................................................................. Chairman of the Joint Chiefs of Staff
CJCSI ..........................................................................Chairman of the Joint Chiefs of Staff Instruction
CJCSM ...................................................................Chairman of the Joint Chiefs of Staff Manual
COCOM ............................................................................................................. Combatant Commander
CONOPS .............................................................................................................. Concept of Operations
CONPLAN ................................................................................................... Concept of Operations Plan
CPG ...................................................................................................... Contingency Planning Guidance
DIA.............................................................................................................Defense Intelligence Agency
DOD .................................................................................................................... Department of Defense
DODAF .......................................................................Department of Defense Architecture Framework
DODD ................................................................................................. Department of Defense Directive
DODI .................................................................................................Department of Defense Instruction
DSB ..................................................................................................................... Defense Science Board
FAA .............................................................................................................Functional Area Assessment
FCB .............................................................................................................Functional Capability Board
FFRDC .................................................................Federally Funded Research and Development Center
FNA .......................................................................................................... Functional Needs Assessment
FSA...................................................................................................... Functional Solutions Assessment
FYDP............................................................................................................. Future Years Defense Program
GIG.................................................................................................................... Global Information Grid
HPT ........................................................................................................................... High Payoff Target
HVT............................................................................................................................ High Value Target
IAMD ............................................................................................... Integrated Air and Missile Defense
ICD ............................................................................................................ Initial Capabilities Document
IPL ........................................................................................................................Integrated Priority List
ISR................................................................................. Intelligence, Surveillance, and Reconnaissance
JADMSC ............................................................ Joint Analytic Data Management Steering Committee
JCA......................................................................................................................... Joint Capability Area
JCB .....................................................................................................................Joint Capabilities Board
JCD..............................................................................................................Joint Capabilities Document
JCIDS ............................................................... Joint Capabilities Integration and Development System
JCS ........................................................................................................................... Joint Chiefs of Staff
JDS ............................................................................................................................. Joint Data Support
JFC ................................................................................................................... Joint Functional Concept
JFCA.................................................................................................. Joint Force Capability Assessment
JFCOM ................................................................................................................ Joint Forces Command
JIC ....................................................................................................................Joint Integrating Concept
JOC.................................................................................................................... Joint Operating Concept
JPG ............................................................................................................ Joint Programming Guidance
JROC ........................................................................................... Joint Requirements Oversight Council
JROCM................................................................ Joint Requirements Oversight Council Memorandum
JSAP ...............................................................................................................Joint Staff Action Package
KMDS .................................................................... Knowledge Management and Development System
MOE ............................................................................................................ Measure(s) of Effectiveness
MOP ..............................................................................................................Measure(s) of Performance


NDS ................................................................................................................ National Defense Strategy
NMS ............................................................................................................... National Military Strategy
NSS.................................................................................................................National Security Strategy
OA ..................................................................................................................... Operational Availability
OPLAN............................................................................................................................Operations Plan
OSD ................................................................................................... Office of the Secretary of Defense
OSD/PA&E .................................................................................. OSD Program Analysis & Evaluation
PDM .....................................................................................................Program Decision Memorandum
QDR .......................................................................................................... Quadrennial Defense Review
SIPRNET..................................................................................Secret Internet Protocol Router Network
SOF................................................................................................................. Special Operations Forces
SPG..............................................................................................................Strategic Planning Guidance
TEWA ................................................................................ Target Evaluation and Weapon Assignment
TOR ...........................................................................................................................Terms of Reference
TPG .................................................................................................. Transformation Planning Guidance
TRL ............................................................................................................Technology Readiness Level
UARC...........................................................................................University Affiliated Research Center
UCP .................................................................................................................... Unified Command Plan
UJTL............................................................................................. Universal Joint Task List
USAF OAS............................................................................US Air Force Office of Aerospace Studies
WMD........................................................................................................ Weapons of Mass Destruction


13. Appendix: The Biometrics Quick Turn CBA

From 28 August to 28 September 2006, a small team conducted a Quick Turn CBA on DOD
biometrics (the measurable physical and behavioral characteristics that allow an individual to be
identified). We offer a brief description of this CBA as an example of how a Quick Look effort was
conducted, particularly with respect to the need to design to time.

13.1. Background
The DOD has long had some biometrics capabilities, but the events of September 2001 greatly
increased the need to be able to accurately identify individuals. At that time, the Secretary of
the Army was designated to lead, consolidate, and coordinate all biometric information
assurance programs in the DOD. In addition, ASD(NII), the Assistant Secretary of Defense for
Networks and Information Integration, was given significant responsibility, since biometrics
was viewed as part of the overall information assurance program.
However, the need for biometrics capabilities was further accelerated by Operation IRAQI
FREEDOM. While the Army had stood up a Biometrics Task Force (BTF) to address these
issues, many senior leaders believed that something more had to be done. US Central
Command had submitted two Joint Urgent Operational Needs requests for biometrics
capabilities (one for base access biometrics and another for better distribution of biometrics
information) in the summer of 2005, but progress had been slow.
In March 2006, the Army presented a briefing on the biometrics program to a senior group
including the Vice Chairman of the Joint Chiefs of Staff (VCJCS). As a result, the VCJCS
directed that a team be formed to write a DOD biometrics CONOPS, and that a separate Tiger
Team review the architecture of biometrics data flow in the CENTCOM theater [DAMO-ZA,
2006]. The Tiger Team visited a number of sites in Iraq in March-April 2006 and reported the
following recurring themes.

Who is in charge of biometrics?

Units see the inherent value of biometrics and have adopted biometrics at all echelons.

Lack of user feedback on the biometric information collected, otherwise known as the
"so what." The user requires acknowledgement that the data collected was received and
processed, and that a report was sent back to the collecting organization. The user also
requires a rapid-response capability at the point of collection to enable missions on the battlefield.

No theater-wide biometric operational architecture or communications plan.

Limited logistics and training.

Information sharing is restricted between systems and sites.

Multiple systems are performing similar operational activities. [DOD, 2006]

In addition, the Under Secretary of Defense for Acquisition, Technology, and Logistics
commissioned a Defense Science Board Task Force on the subject in April 2006. His
memorandum noted that
The Department of Defense (DoD) created a biometrics management approach defined
by pre-9-11 documentation. As a result, all activities in the post-9-11 period are reactive
with ad hoc resources and management teams responding to warfighter applications that
attempt to leverage emerging developments in biometric technologies. Today, DoD must
develop a cogent plan of action to institutionalize biometrics as a vital element of the
Department's identity management capability [Krieg, 2006].
The Task Force was directed to report interim results in May 2006, and deliver a final report in
November 2006.
The Task Force reported their initial findings on 1 June 2006, and the VCJCS subsequently met
with his counterparts from the Army, Marine Corps, and several OSD organizations to develop
a coordinated action plan. The VCJCS had a package drafted by mid-June and sent it to the
DEPSECDEF for signature. Subsequently, the OSD Director of Administration and
Management (OSD DA&M) advised on 19 July that the premises of the action plan were
correct, but that the timelines for organizational changes were not practical. On 20 July the
DEPSECDEF, after discussion with the VCJCS, agreed and opted not to endorse the package.
Instead, he asked OSD DA&M to lead an effort to determine how DOD should organize to
provide biometrics capabilities [DA&M, 2006].
He and the VCJCS agreed, however, that the DOD needed to determine the current state of
biometrics capabilities and funding, and also needed to determine whether or not to make any
changes in the Fiscal Year 2008 budget. After some subsequent discussion, the VCJCS directed
a Quick Turn CBA for biometrics on 18 August. His guidance was to:

capture current biometrics capabilities;

determine the gaps between near-term capabilities and needs;

develop options for the Fall 2006 program and budget review; and

provide a foundation for a more detailed assessment and formal JCIDS needs
documents.

He also established that this team would report to the JROC on 28 September, giving them
approximately 30 days to complete the assessment and present results to any lower-level
decision bodies.

13.2. Forming the CBA Team


Prior to the release of the formal memo on 18 August, various action officers representing the
VCJCS, the J-8, the Joint Staff Director of Operations (J-3), and the Director of the Joint Staff
had been discussing responsibilities for the assessment and how it might be done. This group
recommended that J-3 have overall responsibility for the assessment, and that J-8 would
provide expertise on CBAs and JCIDS requirements. Within J-3, action officers from the
Deputy Directorate for Antiterrorism, Force Protection, and Homeland Defense (JCS J34)
would provide primary staff support for the study.
Although J34 ultimately provided 3 officers to support the Quick Look, they did not actually
lead the study. Instead, the VCJCS wanted US Joint Forces Command to lead the assessment to
ensure that the combatant commands' needs were articulated directly. Consequently, an O-6
from that organization was named to lead the study. This officer, who had had recent
experience in Iraq, traveled to the Pentagon and remained there for the duration of the effort,
and was totally dedicated to the study.
At this point, the team consisted of the study lead, the J34 officers, the representative from J-8,
and a representative from the VCJCS staff. This group wrote a Joint Staff Action Package
(JSAP) asking for OSD, Joint Staff, Service and Combatant Command representatives for the
Quick Look, and established a first meeting date of 28 August.


The team eventually evolved into a group of 18 people, about half of whom worked on the
assessment 90% of the time. The rest attended the sessions 15%-60% of the time; the team also
spent time with representatives from other organizations, such as the FBI, the US Coast Guard, and the
Department of Homeland Security. The study lead made an early decision to limit participation
on the team to at most one person per organization (not including J34) to ensure that debates
were not unbalanced by force of numbers. Overall, each session averaged around 10 people,
which the team found to be workable.

13.3. Initial Planning and Scheduling


In the first week, the team developed several products: an overall methodology, a proposed
schedule, terms of reference, and supporting materials for a data call on desired biometrics
capabilities.
The overall methodology the team decided on was:
1. identify and prioritize operational use cases (scenarios), and decide on a manageable,
representative set of cases;
2. determine the necessary biometrics capabilities for each use case;
3. identify existing biometric capabilities, relevant policies, and legal constraints;
4. determine capability gaps and possible alternatives;
5. estimate costs and risks of alternatives; and
6. develop several possible collections of alternatives for near-term funding.
Initially, the team proposed a schedule that had the following milestones:
1. build requirements matrix for data call (31 August);
2. receive input on requirements from the Combatant Commands (7 September);
3. determine gaps (15 September);
4. draft initial results (22 September);
5. present results to the Director, JCS J34 (26 September);
6. present results to the Director, JCS/J3 and Director, JCS/J8 (27 September);
7. present results to the VCJCS (28 September);
8. present results to the JROC (29 September); and
9. begin socializing an issue paper for FY08 program and budget review (15 September).
As it turned out, the team was able to stay within a few days of this schedule. An important
addition, however, resulted from the decision to route the final briefing through the normal
JCIDS chain. This meant that the results would have to be presented in turn to the Force
Protection Working Group (chaired by an O-6), the Force Protection Functional Capabilities
Board (chaired by a 2-star general), the Joint Capabilities Board (chaired by a 3-star admiral),
and the JROC (chaired by the VCJCS). As a result, at least one week would be spent in
presentations.
Having established how the final product would flow, the team then turned to the question of
determining use cases. An important part of the assessment was soliciting for needed
capabilities, and the team recognized that it would have to develop products that allowed for
useful and economical input.


We will discuss the details of how the team determined the use cases and the structure for
soliciting needs in Section 13.5. It is worth noting, however, that J-8 had solicited
information on existing biometrics initiatives in mid-June [Chanik, 2006], but the team did not
find this information to be useful, and ended up recollecting most of it.
So, by the end of the first week, the team had a methodology, a presentation plan, a tentative
schedule, and the products in place for a data call; in addition, they had collected most of the
available reports on the topic. The formal request for information went out on 1 September,
with a deadline of 7 September [J-3, 2006].

13.4. Team Evolution


In a 1965 article, Bruce Tuckman published a theory of team development which he called the
"forming, storming, norming, performing" model [Tuckman, 1965]. Several members of the
Biometrics Quick Turn CBA team suggested that this model is exactly what they experienced
during the assessment.

Forming. The team did not know each other when they first met, and had to rely on the
CBA lead for background, objectives, and methods. Several of the members, who came
from organizations with substantive stakes in the results, challenged the initial
objectives, scope, and operating rules.

Storming. This stage manifested itself as the team began determining the use cases.
While the team had at this point agreed on its purpose and objectives, there was
considerable debate and a number of power struggles within the group. The team lead
had to intervene often to move the group forward.

Norming. By the end of the week, the team had become functional enough to produce a
data call that was coherent and could be distributed to external organizations, and had
achieved a degree of unity with respect to what was going to be done and how.

Performing. In Tuckman's performing stage, the team reaches the point where tasks
can be divided among subgroups and disagreements resolved without team lead
intervention. It does not appear that this group really reached this stage in the first
week, as they continued to operate as a committee of the whole until they began to cost
alternatives.

J34 did provide one officer to function as sort of a scribe for the assessment. This person was
largely responsible for capturing the discussions and developing briefings. Eventually, the J34
representatives on the team adopted the habit of spending some time summarizing what had
gone on after the rest of the team left for the day.
Also, the VCJCS's high interest in the assessment led him to put a member of his own staff
full-time in the working group, so he had regular information on the group's progress. This
arrangement had both strengths and weaknesses; while there were some collisions between the
VCJCS representative and the formal study lead, the representative facilitated very quick
resolution of things such as data requests, and was able to get immediate feedback from very
high levels.
Another issue the team had to confront was that the OSD Program Analysis and Evaluation
(PA&E) was getting ready for the fall program review and felt they could not provide a
representative to the CBA. Unfortunately, PA&E was a critical organization, because they
execute the program review and would ultimately manage the recommendations of the issue
paper the CBA was supposed to produce. Consequently, the study lead opted to brief PA&E on
a weekly basis (normally during the lunch hour) to keep them informed of the progress of the
effort. By all accounts, this approach worked well.
This is not to say that the assessment was not contentious. The study team lead had to simply
shut off debate and force decisions many times, and various team members had to walk out of
heated discussions occasionally to gather themselves.

13.5. Methodology and Execution


As mentioned above, the team first had to decide on a set of scenarios to provide operational
context for the assessment. The Army's Biometrics Task Force had drafted a capstone concept
of operations for biometrics in response to the VCJCS's March guidance, and had sent the
document out for staffing on 18 July [DAMO-ZA, 2006]. This document contained a list of 11
operational biometrics tasks (e.g., identify friendly force individuals) as well as a set of 12
vignettes (e.g., United States law enforcement support) that each required some set of the
biometrics tasks. The Army had also drafted a JCD on biometrics that contained considerable
information.
Unfortunately, these documents were not approved at that time, so there was considerable
debate whether to use the draft frameworks. Since the direction from the VCJCS had been to
concentrate on near-term alternatives and the Global War on Terror, the team opted to
synthesize a set of eight use cases from the various documents, and further divided the use
cases into vignettes as shown in Table A.1.
The team then ranked these use cases from top to bottom with respect to the following factors:
direct effect on the warfighter, tactical application, operational application, strategic application,
documented DOD responsibility, and near-term likelihood. The ranking was not designed to be
used in subsequent analyses; instead, it was used to cut the scope of the assessment down to
something manageable.
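The text does not prescribe how such a ranking should be mechanized. As a minimal sketch, assuming simple ordinal scores against each factor (the factor names come from the list above; the scores, weights, and use-case subset are purely illustrative), a screening of this kind might look like the following Python fragment.

```python
# Minimal sketch of a multi-factor screening of candidate use cases.
# The factor names come from the text; the 1-3 scores and equal weights
# are purely illustrative assumptions.

FACTORS = [
    "direct effect on the warfighter",
    "tactical application",
    "operational application",
    "strategic application",
    "documented DOD responsibility",
    "near-term likelihood",
]

# Hypothetical scores (1 = low, 3 = high) for three of the eight use cases.
use_cases = {
    "Locate, ID, and track persons of interest": [3, 3, 3, 2, 2, 3],
    "Control physical access":                   [3, 3, 2, 1, 3, 3],
    "Disaster relief/humanitarian assistance":   [1, 1, 2, 2, 1, 1],
}

def screen(cases, weights=None):
    """Rank use cases by a weighted sum of factor scores, highest first."""
    weights = weights or [1.0] * len(FACTORS)
    totals = {name: sum(w * s for w, s in zip(weights, scores))
              for name, scores in cases.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, total in screen(use_cases):
    print(f"{total:5.1f}  {name}")
```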
After this exercise, the team decided to concentrate on the following cases:

locate, ID, and track persons of interest (raids and high-value targets);

control physical access (in particular, forward operating bases); and

manage local populations (internment, resettlement, vetting for positions and benefits,
and border and checkpoint security).

The team next had to assemble a format to solicit desired capabilities in the data call. For this,
the team used the overarching functions in the draft CONOPS, which were:

collect biometric samples;

match collected samples to standardized databases to establish identity;

store collected biometric information;

share collected biometric information and results (e.g., where and when matches
occurred); and

analyze collected biometric information and fuse it with other information.


Use Case: Locate, ID, Track Persons of Interest (during tactical ops)
   Categories: MIO/EMIO; Raid; Global tracking of HVT; Local tracking of HVT; Rescue/recovery ops

Use Case: Control Physical Access
   Categories: CONUS Base Access; OCONUS (non-FOB) Base Access; FOB Base Access; Facilities/Area Access

Use Case: Manage Local Populations
   Categories: Refugee Management; Identify Friendly Personnel; Vetting Grey Personnel; Vetting Blue Personnel; Detainee Ops; Source Management; Check Points; Verify identity of local population for pay and benefits
   Subcategories: Green, Gray, and Blue personnel; vet personnel for law enforcement (CPATT/CMATT); verify identity of intel sources; verify identity of friendlies

Use Case: Counter IED Forensics

Use Case: Interagency Operations in a Foreign Country

Use Case: Personnel Recovery

Use Case: Law Enforcement Support
   Categories: Support US Law Enforcement; Support Local Law Enforcement

Use Case: Disaster Relief/Humanitarian Assistance
   Categories: Med/Dent Cap; Non-US personnel access to services; Identify personnel eligible to receive aid; Support to First Responder

Table A.1. Candidate biometrics use cases, with categories and subcategories.

The team also subdivided these functions to allow for more detailed input; for example, the
collect function was subdivided into the proportion of time the collection could fail, collection
modality (e.g., face, iris, fingerprints), mobility of the collection system required, and time to
collect.
Once the data call went out on 1 September, the team turned to the identification of existing
capabilities, policies, and legal constraints (step 3). They spent about three days on this step,
and used the same spreadsheets they had sent out in the data call to record all the fielded
capabilities. Since all of the DOD biometrics systems were off-the-shelf systems and there
was no integrated program of record, the team had to rely on the information built up by other
groups such as the Tiger Team, the Army's BTF organization, and other draft JCIDS
documents. The team also met with various interagency organizations, such as the FBI, to
document their current capabilities in the biometrics area.
After collecting and documenting the current capabilities, the team began turning the input
from the data call into a set of capability gaps, and also began compiling alternatives (step 4).


This activity, which took about seven days, was the most intense part of the assessment, and
probably completed the norming of the CBA team.
To screen the large amount of input that was arriving from the data call, the team used the
familiar red-amber-green system of classifying shortcomings. As a result, the team opted to
minimize work on the collect function, because the largest and most glaring performance gaps
were in the other areas. Also, a seemingly minor language problem caused some rework in the
data call. Under the store function, one of the subdivisions was called "reliability," which,
unfortunately, was interpreted in several different ways. After some struggling, the team
renamed this subdivision "data confidence," which better reflected the metric of interest.
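We do not know exactly how the team scored the data-call responses; as a minimal sketch, assuming each response carries a normalized gap score between the needed and the fielded capability (the scores and thresholds below are illustrative assumptions), a red-amber-green screen can be as simple as this.

```python
# Minimal sketch of red-amber-green screening of data-call responses.
# The gap scores and thresholds are illustrative assumptions; the actual
# assessment relied on the team's qualitative judgment.

def rag_bin(gap_score: float) -> str:
    """Map a normalized gap score (0 = need fully met, 1 = no capability) to a rating."""
    if gap_score >= 0.6:
        return "red"
    if gap_score >= 0.3:
        return "amber"
    return "green"

# Hypothetical responses: (use case, function, gap score).
responses = [
    ("Control physical access",  "collect", 0.2),
    ("Control physical access",  "share",   0.7),
    ("Manage local populations", "match",   0.5),
]

for use_case, function, score in responses:
    print(f"{use_case:26s} {function:8s} {rag_bin(score)}")
```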
Unfortunately, the team discovered that the structure they had opted for resulted in 18
capability gap areas. In addition to being unwieldy, this structure did not allow for a
straightforward matching of solution alternatives to gaps, since multiple gaps could be
addressed by an alternative (and vice versa). Finally, while the red-amber-green mechanism
provided quick visual evidence of where the bulk of the issues were within a use case and a
function, the team did not have any sense of how important the problems were among use cases
and functions.
After some debate, the team settled on a compact list of capability gaps, and decided to use a
modified version of the Analytic Hierarchy Process (which one of the team members had
implemented in a spreadsheet) to order the gaps and get some sense of where to focus their
efforts. This effort led to the interim results shown in Table A.2.

Table A.2. The pairwise comparison of the revised set of capability gaps.

The team employed this more as a clustering device than a weighting method. By limiting the
row and column comparison to only six ratios, the team used this as a means to divide the gaps
into groups. Fortunately, there was little variation among use cases; although the team specified
three different sets of situations, the shortcomings had similar scores among the cases.
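The team's spreadsheet is not reproduced here; as a minimal sketch of the standard calculation such a tool approximates, the fragment below derives priority weights from a reciprocal pairwise-comparison matrix using the row geometric mean. The gap labels and the ratios in the matrix are hypothetical placeholders, not the team's actual values.

```python
# Minimal sketch of an AHP-style prioritization of capability gaps.
# The gap labels and pairwise ratios are hypothetical; the team used a
# modified spreadsheet version mainly to cluster gaps, not to weight them.
import math

gaps = [
    "match information",
    "match scope",
    "match time",
    "share",
    "database confidence",
]

# comparison[i][j] = how much more important gap i is judged than gap j;
# the matrix is reciprocal (comparison[j][i] = 1 / comparison[i][j]).
comparison = [
    [1,   3,   1/2, 1,   1/3],
    [1/3, 1,   1/5, 1/3, 1/7],
    [2,   5,   1,   2,   1/2],
    [1,   3,   1/2, 1,   1/3],
    [3,   7,   2,   3,   1],
]

def ahp_weights(matrix):
    """Approximate AHP priorities with the row geometric mean method."""
    geo_means = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

ranked = sorted(zip(gaps, ahp_weights(comparison)), key=lambda kv: kv[1], reverse=True)
for name, weight in ranked:
    print(f"{weight:5.2f}  {name}")
```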
The team had also begun collecting solution alternatives, and found it easier to group them by
the following capability categories:

match information (ability to match the particular biometric data);

match scope (the range of databases searched);

match time (time to achieve a match);

share (ability to easily share data and results); and

database confidence.

The team further decomposed the alternatives by whether they addressed doctrine,
organization, training, materiel, leadership and education, personnel, or facilities (DOTMLPF), since that
decomposition would be needed to determine what implementation steps would be required
if the alternative was adopted.
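That decomposition is easy to keep track of as a tagged list of implementation actions per alternative; the sketch below is a hypothetical illustration, and the alternative and its DOTMLPF entries are not drawn from the actual assessment.

```python
# Minimal sketch of tagging a solution alternative by the DOTMLPF elements it
# touches. The alternative and its implementation entries are hypothetical.

DOTMLPF = ("doctrine", "organization", "training", "materiel",
           "leadership", "personnel", "facilities")

alternative = {
    "name": "Authoritative biometrics database",
    "implementation": {
        "materiel": ["stand up a program of record for the database"],
        "doctrine": ["publish a sharing policy for non-DOD repositories"],
        "training": ["train collectors on enrollment quality standards"],
    },
}

# Report which DOTMLPF elements this alternative would require action on.
touched = [element for element in DOTMLPF if alternative["implementation"].get(element)]
print(f"{alternative['name']} touches: {', '.join(touched)}")
```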
Interestingly enough, there was not a great deal of competition among solution alternatives.
Due to the time compression, organizations that could have offered alternatives did not offer
irrelevant options. Also, the fact was that biometrics was really not a part of the core culture of
any part of the DOD, so there were not that many factions that had any solutions to offer.
At this point, the team was ready to move to identifying costs and risks of alternatives (step 5),
but the move to a new solution taxonomy forced them to regroup the gaps into that taxonomy
and reassess priorities among the newly-regrouped gaps. The team employed a different
spreadsheet tool to aid in this assessment; this one exploited some of the metrics and tasks that
had come back from the data call, and allowed the team to have more operationally-focused
discussions on what really needed to be done. This resulted in the following priority list:
1. match time;
2. match information;
3. share; and
4. match scope.
At this point, the team had to disentangle the confusion caused by the "reliability" label in the
data call. In various sessions, the Combatant Commands pressed the view that database
confidence was absolutely essential; without decent information to match against, the fastest,
most complete, totally accessible, and widest-range biometrics solution was useless.
Essentially, database confidence became not just the top priority, but a prerequisite.
Once again, the team discovered that the labels they were using were not really helping to
convey either the shortcomings or the alternatives. Consequently, their briefings were modified
to present gaps, alternatives, and courses of action in terms of the following set of prioritized
descriptors:
1. match fast;
2. match accurate;
3. complete intel analysis; and
4. share.
Database confidence was dropped as a separate category (since as a prerequisite it had to be
addressed), and solutions binned to this category were spread among the match fast and match
accurate categories.
With the taxonomy finally settled, the team undertook one more ranking exercise, which was
ordering the specific gaps under each of the categories so they could begin evaluating solutions,
costs, and solution portfolios. This was done via a simple ranking process; an example for the
Share category is shown in Table A.3.


Share Priority   Time Frame    DOTMLPF    Capability Gap
1                Near          Materiel   No program of record for an authoritative database
2                Near & Long   Materiel   No ability to link to databases outside of DOD, and no metadata tagging
3                Near & Long   Doctrine   Insufficient sharing strategy and policies for biometric data with repositories, agencies, or governments
4                Long          Materiel   No automated multi-level security capability when sharing outside of DOD
5                Near          Doctrine   Joint doctrine does not address biometric-enabled capabilities

Table A.3. The top five capability gaps for the Share category.

Having spent four days on this effort, the team moved to the final step of their methodology,
which was to recommend courses of action (COAs; in this document, we call these portfolios).
The team had been able to agree on the best alternative for each particular category and gap, but
there was considerable debate on how to structure alternatives. One natural scheme considered
was to simply recommend funding in order of priority, i.e., COA 1 would be to fund all of the
match fast solutions, COA 2 would be to add funding for match accurate, and so on. This
method, however, would result in unbalanced COAs unless everything was funded.
Consequently, the team decided to organize their recommendations incrementally as shown
below.

Increment 1: improve biometric collection, matching, and recommendation to the end user.
This contained the solutions to the top five gaps from Match Fast, all from Share, and the top
two from Match Accurate, and represented investments of approximately $230M.

Increment 2: improve data integrity and confidence, and increase capability to exploit latent
fingerprints. This contained the remaining Match Fast solutions, and the next four Match
Accurate solutions, and represented an additional $42M investment.

Increment 3: provide the end user with in-depth analysis enabled by biometrics. This
increment contained the remaining solutions, and would require an additional $28M.
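Mechanically, assembling such increments is a bookkeeping exercise over the prioritized solutions. As a minimal sketch, the fragment below rolls up hypothetical solution entries into the three increments; the individual solution names and cost breakdowns are invented placeholders, and only the increment structure and approximate totals echo the assessment.

```python
# Minimal sketch of costing incremental solution portfolios.
# The solution entries and their costs (in $M) are hypothetical placeholders;
# only the three-increment structure and approximate totals echo the assessment.

increments = {
    "Increment 1: collection, matching, and feedback to the end user": [
        ("Authoritative database program of record", 120),
        ("Theater-wide sharing architecture", 80),
        ("Rapid matching at the point of collection", 30),
    ],
    "Increment 2: data integrity and latent fingerprint exploitation": [
        ("Data confidence / quality review process", 22),
        ("Latent fingerprint exploitation capability", 20),
    ],
    "Increment 3: in-depth analysis for the end user": [
        ("Biometric-enabled intelligence analysis", 28),
    ],
}

running_total = 0
for name, solutions in increments.items():
    cost = sum(c for _, c in solutions)
    running_total += cost
    print(f"{name}: ${cost}M (cumulative ${running_total}M)")
```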

The team spent roughly three days on this step, and then embarked on presentation refinement
and socializing the program review issue paper.
During the presentations, the team received two reviews. The first was provided by the Army's
G-8 staff, which went over the cost estimates for the alternatives and provided a solid external
review. The second review was a formal Red Team effort conducted by two general officers
from J-8 and several O-6s. Although the study lead had not requested such a review, he
accepted the offer and felt it greatly improved the final product. In particular, the questions
posed by the Red Team better exposed the rationale for the team's conclusions and
strengthened the case for the recommended alternatives.
The team gave its first briefing on 19 September, and presented their results and
recommendations to the JROC on 28 September as required by the original tasking. The JROC
endorsed their findings and recommended funding all three increments in 2007 and 2008.


13.6. Observations
This assessment followed many of the ideas in this paper. The study team's methodology
specified operational cases, used a functional taxonomy, estimated where the crucial gaps were
among the functions, considered the full spectrum of materiel and non-materiel alternatives,
considered costs, and presented alternative solution portfolios. When you consider that the
group went from its introductory meeting to briefing near-final results recommending over
$300M in initiatives in 21 days, it is clear that this effort was far from easy or routine and that
very few ad hoc groups could have done it.
The Biometrics Quick Turn CBA also typified many reasons for an accelerated assessment: the
need to take imminent budget action, the need to address an emerging need, and the need to
pull together a set of disparate examinations. As noted above, the DOD was simply not treating
biometrics as a core competency, so there was also a need to break a bureaucratic logjam.
One thing the CBA team benefited from was the fact that the VCJCS provided a realizable
scope that did not require extensive renegotiation. Indeed, the stipulation that the JROC would
be briefed 30 days after the assessment started probably swept aside the majority of the
bureaucratic hurdles that would normally plague a CBA.
The team appears to have come together quickly and operated effectively, particularly at the
end. Although it appears that the team was assembled more to provide organizational
representation than functional coverage, it had the essential doctrinal knowledge and enough
process and quantitative skills to do the assessment. While there were no quantitative methods
used beyond rudimentary decision analysis, the methods that were used lent structure to the
assessment, helped the team focus its efforts, and also provided a means to settle debates.
This CBA was also a perfect example of a design-to-time exercise. Everything that was done
was done to meet a schedule, and the various ranking exercises were designed to remove
less-important considerations and focus on a handful of critical shortcomings. Also, the recognition
that 25-30% of the available time would have to be dedicated to briefings and repackaging was
an important concession to an unpleasant, but unavoidable, reality.
One thing that may strike the reader as inefficient was the repeated restructuring of the
capability gaps and solutions. While this probably could have been done better, we must again
point out that this assessment was done for a mission area with no approved doctrine. If the
draft CONOPS had not been available, the study team could have easily spent the entire 30
days trying to agree on a workable functional structure for the assessment.
Certainly, the Biometrics Quick Turn CBA did not contain all the analysis that this paper
recommends, and no one would offer it as an exemplar of a comprehensive quantitative study.
But, that is not what the VCJCS asked for. Instead, he asked for a short-term assessment to
generate solution options for an emerging mission area with well-documented shortfalls. In
fact, it is likely (at the time this was written) that the DOD will commission a much more
comprehensive CBA on biometrics, one that would do all the things described in this guide.
Nonetheless, this CBA presented alternatives linked to operational situations, used a coherent
functional structure, identified the most important gaps, and suggested multiple portfolios that
contained a spectrum of materiel and non-materiel solutions, which is precisely what this guide
recommends.

13.7. References
DAMO-ZA, "Coordination of Capstone Concept of Operations (CONOPS) for Department of
Defense (DOD) Biometrics in Support of Identity Superiority," memorandum, 17 July 2006.
Krieg, Kenneth, "Terms of Reference - Defense Science Board Task Force on Defense
Biometrics Program," memorandum, 13 April 2006.
Chanik, Evan M., "Identification of Service Biometric Activities," memorandum, 8 June 2006.
JCS J34, "Quick Look Capability Based Assessment Call to COCOMS/Services Input," Joint
Staff Action Package J-3A 01333-06, 1 September 2006.
Tuckman, Bruce, "Developmental Sequence in Small Groups," Psychological Bulletin, Vol. 63,
pp. 384-399, 1965.
Biometrics Tiger Team, "Biometrics Tiger Team Trip Report: 23 April - 5 May 2006,"
Department of Defense, 28 June 2006.
OSD DA&M email, 20 July 2006, cited in Joint Staff Action Package J-3A 00902-06,
"DepSecDef Biometrics Tasking Memorandum," Joint Staff.
